Sample records for integration time step

  1. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    NASA Technical Reports Server (NTRS)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.

  2. A high-order relaxation method with projective integration for solving nonlinear systems of hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Lafitte, Pauline; Melis, Ward; Samaey, Giovanni

    2017-07-01

    We present a general, high-order, fully explicit relaxation scheme which can be applied to any system of nonlinear hyperbolic conservation laws in multiple dimensions. The scheme consists of two steps. In a first (relaxation) step, the nonlinear hyperbolic conservation law is approximated by a kinetic equation with stiff BGK source term. Then, this kinetic equation is integrated in time using a projective integration method. After taking a few small (inner) steps with a simple, explicit method (such as direct forward Euler) to damp out the stiff components of the solution, the time derivative is estimated and used in an (outer) Runge-Kutta method of arbitrary order. We show that, with an appropriate choice of inner step size, the time step restriction on the outer time step is similar to the CFL condition for the hyperbolic conservation law. Moreover, the number of inner time steps is also independent of the stiffness of the BGK source term. We discuss stability and consistency, and illustrate with numerical results (linear advection, Burgers' equation and the shallow water and Euler equations) in one and two spatial dimensions.
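The inner/outer structure described above can be sketched for a stiff scalar problem. The function below is an illustrative first-order variant (projective forward Euler), not the authors' high-order relaxation scheme; the name and call signature are ours.

```python
import numpy as np

def projective_forward_euler(f, y0, t_end, dt_outer, dt_inner, n_inner):
    """Projective integration sketch: a few small inner forward-Euler steps
    damp the stiff components, then the estimated time derivative is used
    for a large outer extrapolation step."""
    y = np.atleast_1d(np.asarray(y0, dtype=float))
    t = 0.0
    while t < t_end - 1e-12:
        for _ in range(n_inner):            # inner damping steps (forward Euler)
            y_prev = y
            y = y + dt_inner * f(y)
        dydt = (y - y_prev) / dt_inner      # derivative estimate after damping
        y = y + (dt_outer - n_inner * dt_inner) * dydt   # outer projective step
        t += dt_outer
    return y
```

On a stiff relaxation problem such as y' = -100(y - 1), the outer step can be an order of magnitude larger than the inner step while the scheme remains stable, which is the point made in the abstract about the outer step obeying a CFL-like rather than a stiffness-dictated restriction.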

  3. Evaluation of atomic pressure in the multiple time-step integration algorithm.

    PubMed

    Andoh, Yoshimichi; Yoshii, Noriyuki; Yamada, Atsushi; Okazaki, Susumu

    2017-04-15

    In molecular dynamics (MD) calculations, reduction in calculation time per MD loop is essential. A multiple time-step (MTS) integration algorithm, RESPA (Tuckerman and Berne, J. Chem. Phys. 1992, 97, 1990-2001), enables reductions in calculation time by decreasing the frequency of time-consuming long-range interaction calculations. However, the RESPA MTS algorithm involves uncertainties in evaluating the atomic interaction-based pressure (i.e., atomic pressure) of systems with and without holonomic constraints. It is not clear which intermediate forces and constraint forces in the MTS integration procedure should be used to calculate the atomic pressure. In this article, we propose a series of equations to evaluate the atomic pressure in the RESPA MTS integration procedure on the basis of its equivalence to the Velocity-Verlet integration procedure with a single time step (STS). The equations guarantee time-reversibility even for systems with holonomic constraints. Furthermore, we generalize the equations to both (i) an arbitrary number of inner time steps and (ii) an arbitrary number of force components (RESPA levels). The atomic pressure calculated by our equations with the MTS integration shows excellent agreement with the reference value computed with the STS, whereas pressures calculated using the conventional ad hoc equations deviate from it. Our equations can be extended straightforwardly to the MTS integration algorithm for the isothermal NVT and isothermal-isobaric NPT ensembles. © 2017 Wiley Periodicals, Inc.
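The RESPA splitting that the pressure equations build on can be sketched as a two-level velocity-Verlet loop: slow forces kick at the outer step, fast forces are integrated at the inner step. This is a minimal illustration with illustrative names; the paper's atomic-pressure equations themselves are not reproduced.

```python
def respa_step(x, v, f_fast, f_slow, dt_outer, n_inner, m=1.0):
    """One r-RESPA step (sketch): outer half-kick with the slow force,
    an inner velocity-Verlet loop with the fast force, outer half-kick."""
    dt_inner = dt_outer / n_inner
    v = v + 0.5 * dt_outer * f_slow(x) / m     # outer half-kick (slow forces)
    for _ in range(n_inner):                   # inner velocity-Verlet loop
        v = v + 0.5 * dt_inner * f_fast(x) / m
        x = x + dt_inner * v
        v = v + 0.5 * dt_inner * f_fast(x) / m
    v = v + 0.5 * dt_outer * f_slow(x) / m     # outer half-kick (slow forces)
    return x, v
```

Because the composition is symplectic and time-reversible, total energy stays bounded for a split harmonic oscillator even though the slow force is only evaluated once per outer step.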

  4. Asynchronous collision integrators: Explicit treatment of unilateral contact with friction and nodal restraints

    PubMed Central

    Wolff, Sebastian; Bucher, Christian

    2013-01-01

    This article presents asynchronous collision integrators and a simple asynchronous method for treating nodal restraints. Asynchronous discretizations allow individual time step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces that is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response, in which velocities are modified to avoid penetrations. Although decomposition contact response solves a large system of linear equations (critical for the numerical efficiency of explicit time stepping schemes) and needs special treatment of overconstraint and linear dependency of the contact constraints (for example, from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robustly. Only a single constraint involving a very small number of degrees of freedom is considered at once, leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. The contact of nodes that are subject to restraints needs special care. Together with the aforementioned projection for restraints, a novel efficient solution scheme is presented. The collision integrator does not influence the critical time step, so the time step can be chosen independently of the underlying time-stepping scheme and may be fixed or time-adaptive. New demands on global collision detection are discussed, exemplified by position codes and node-to-segment integration. Numerical examples illustrate the convergence and efficiency of the new contact algorithm. Copyright © 2013 The Authors. International Journal for Numerical Methods in Engineering published by John Wiley & Sons, Ltd. PMID:23970806

  5. Implicit time accurate simulation of unsteady flow

    NASA Astrophysics Data System (ADS)

    van Buuren, René; Kuerten, Hans; Geurts, Bernard J.

    2001-03-01

    Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. A reference solution was computed with an explicit second-order Runge-Kutta scheme for comparison with the implicit second-order Crank-Nicolson scheme. The time step in the explicit scheme is restricted by both temporal accuracy and stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems may occur that are closely related to a highly complex structure of the basins of attraction of the iterative method.
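The pseudo-time device described in this record can be sketched for a scalar Crank-Nicolson step. The simple relaxation below stands in for the paper's quasi-Newton/Gauss-Seidel machinery, and all names and parameter values are illustrative.

```python
def cn_step_pseudo_time(f, y, dt, dtau, n_pseudo):
    """One Crank-Nicolson step whose nonlinear implicit equation is relaxed
    by adding a pseudo-time derivative (sketch of the dual-time idea)."""
    z = y + dt * f(y)                              # explicit predictor
    for _ in range(n_pseudo):
        # residual of the Crank-Nicolson equation at the current iterate
        residual = (z - y) / dt - 0.5 * (f(z) + f(y))
        z -= dtau * residual                       # march in pseudo-time until R -> 0
    return z
```

For the linear test problem y' = -y, the iteration converges to the exact Crank-Nicolson update y(1 - dt/2)/(1 + dt/2) provided the pseudo-time step is small enough for the relaxation to contract.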

  6. Time-symmetric integration in astrophysics

    NASA Astrophysics Data System (ADS)

    Hernandez, David M.; Bertschinger, Edmund

    2018-04-01

    Calculating the long-term solution of ordinary differential equations, such as those of the N-body problem, is central to understanding a wide range of dynamics in astrophysics, from galaxy formation to planetary chaos. Because generally no analytic solution exists to these equations, researchers rely on numerical methods that are prone to various errors. In an effort to mitigate these errors, powerful symplectic integrators have been employed. But symplectic integrators can be severely limited because they are not compatible with adaptive stepping and thus they have difficulty in accommodating changing time and length scales. A promising alternative is time-reversible integration, which can handle adaptive time-stepping, but the errors due to time-reversible integration in astrophysics are less understood. The goal of this work is to study analytically and numerically the errors caused by time-reversible integration, with and without adaptive stepping. We derive the modified differential equations of these integrators to perform the error analysis. As an example, we consider the trapezoidal rule, a reversible non-symplectic integrator, and show that it gives secular energy error increase for a pendulum problem and for a Hénon-Heiles orbit. We conclude that using reversible integration does not guarantee good energy conservation and that, when possible, use of symplectic integrators is favoured. We also show that time-symmetry and time-reversibility are properties that are distinct for an integrator.
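As a concrete instance of the reversible, non-symplectic trapezoidal rule discussed above, here is a minimal sketch for the pendulum theta'' = -sin(theta), with the implicit equation solved by fixed-point iteration. Names and tolerances are ours.

```python
import numpy as np

def trapezoidal_pendulum(theta0, omega0, dt, n_steps, n_iter=50, tol=1e-12):
    """Trapezoidal rule for the pendulum (sketch). The implicit update
    y_{n+1} = y_n + dt/2 * (f(y_n) + f(y_{n+1})) is solved by iteration."""
    def f(y):
        theta, omega = y
        return np.array([omega, -np.sin(theta)])
    y = np.array([theta0, omega0], dtype=float)
    for _ in range(n_steps):
        fy = f(y)
        y_new = y + dt * fy                   # explicit Euler predictor
        for _ in range(n_iter):
            y_next = y + 0.5 * dt * (fy + f(y_new))
            if np.max(np.abs(y_next - y_new)) < tol:
                y_new = y_next
                break
            y_new = y_next
        y = y_new
    return y
```

For a small-amplitude swing the scheme tracks one period accurately; the secular energy behavior the abstract analyzes only becomes visible over much longer integrations.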

  7. Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks.

    PubMed

    Shelley, M J; Tao, L

    2001-01-01

    To avoid the numerical errors associated with resetting the potential following a spike in simulations of integrate-and-fire neuronal networks, Hansel et al. and Shelley independently developed a modified time-stepping method. Their particular scheme consists of second-order Runge-Kutta time-stepping, a linear interpolant to find spike times, and a recalibration of postspike potential using the spike times. Here we show analytically that such a scheme is second order, discuss the conditions under which efficient, higher-order algorithms can be constructed to treat resets, and develop a modified fourth-order scheme. To support our analysis, we simulate a system of integrate-and-fire conductance-based point neurons with all-to-all coupling. For six-digit accuracy, our modified Runge-Kutta fourth-order scheme needs a time step of Delta t = 0.5 x 10^-3 seconds, whereas to achieve comparable accuracy using a recalibrated second-order or a first-order algorithm requires time steps of 10^-5 seconds or 10^-9 seconds, respectively. Furthermore, since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, we can attain fourth-order accuracy with computational costs normally associated with second-order schemes.
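The modified time-stepping idea (an RK2 step, a linear interpolant for the spike time, and recalibration of the post-spike potential) can be sketched for a single leaky integrate-and-fire neuron with constant drive. Parameter values and names below are illustrative, not from the paper.

```python
def lif_rk2(v0, t_end, dt, tau=0.02, v_rest=0.0, v_th=1.0, v_reset=0.0, I=100.0):
    """Heun (RK2) stepping for a leaky integrate-and-fire neuron with a
    linear interpolant for spike times and post-spike recalibration (sketch)."""
    def f(v):
        return (v_rest - v) / tau + I
    v, spikes = v0, []
    n_steps = int(round(t_end / dt))
    for step in range(n_steps):
        t = step * dt
        k1 = f(v)
        k2 = f(v + dt * k1)
        v_new = v + 0.5 * dt * (k1 + k2)           # RK2 (Heun) step
        if v_new >= v_th:
            # linear interpolant for the spike time inside the step
            t_spike = t + dt * (v_th - v) / (v_new - v)
            spikes.append(t_spike)
            # recalibrate: reset, then integrate the remainder of the step
            dt_rem = t + dt - t_spike
            k1 = f(v_reset)
            k2 = f(v_reset + dt_rem * k1)
            v_new = v_reset + 0.5 * dt_rem * (k1 + k2)
        v = v_new
    return v, spikes
```

For these parameters the exact interspike interval is tau * ln((v_inf - v_reset)/(v_inf - v_th)) with v_inf = v_rest + I*tau, so the interpolated spike times can be checked against the closed form.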

  8. A transition from using multi-step procedures to a fully integrated system for performing extracorporeal photopheresis: A comparison of costs and efficiencies.

    PubMed

    Azar, Nabih; Leblond, Veronique; Ouzegdouh, Maya; Button, Paul

    2017-12-01

    The Pitié Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos® CELLEX® fully integrated system in 2012. This report summarizes our single-center experience of transitioning from the use of multi-step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. The total number of ECP procedures performed 2011-2015 was derived from department records. The time taken to complete a single ECP treatment using a multi-step technique and the fully integrated system at our department was assessed. Resource costs (2014€) were obtained for materials and calculated for personnel time required. Time-driven activity-based costing methods were applied to provide a cost comparison. The number of ECP treatments per year increased from 225 (2012) to 727 (2015). The single multi-step procedure took 270 min compared to 120 min for the fully integrated system. The total calculated per-session cost of performing ECP using the multi-step procedure was greater than with the CELLEX® system (€1,429.37 and €1,264.70 per treatment, respectively). For hospitals considering a transition from multi-step procedures to fully integrated methods for ECP where cost may be a barrier, time-driven activity-based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed that there were not just cost and time savings, but that the time efficiencies gained with CELLEX® allow for more patient treatments per year. © 2017 The Authors Journal of Clinical Apheresis Published by Wiley Periodicals, Inc.

  9. Implicit integration methods for dislocation dynamics

    DOE PAGES

    Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...

    2015-01-20

    In dislocation dynamics, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. Here, this paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
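The nonlinear solve inside each implicit step can be sketched with the simplest case: backward Euler plus a scalar Newton iteration. This illustrates the solver ingredient discussed in the record, not the paper's third-order integrators or accelerated fixed-point variants; the names are ours.

```python
def backward_euler_newton(f, dfdy, y0, dt, n_steps, tol=1e-12, max_iter=20):
    """Backward Euler with a scalar Newton solve per step (sketch).
    Each step solves z - y - dt*f(z) = 0 for the new value z."""
    y = float(y0)
    for _ in range(n_steps):
        z = y                         # initial guess: previous value
        for _ in range(max_iter):
            g = z - y - dt * f(z)     # residual of the implicit equation
            dg = 1.0 - dt * dfdy(z)   # its derivative w.r.t. z
            z_new = z - g / dg        # Newton update
            if abs(z_new - z) < tol:
                z = z_new
                break
            z = z_new
        y = z
    return y
```

On the stiff test problem y' = -1000y with dt = 0.01 (where explicit Euler and plain fixed-point iteration both diverge), Newton converges in one iteration per step and reproduces the exact backward-Euler sequence y_n = y_0 / 11^n.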

  10. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  11. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE PAGES

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  12. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    DOE PAGES

    Finn, John M.

    2015-03-01

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.
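A minimal sketch of the implicit midpoint (IM) scheme for tracing a field line x' = B(x), with the implicit equation solved by fixed-point iteration; names and tolerances are ours. For a linear, divergence-free rotation field, IM preserves the quadratic invariant x^2 + y^2 (up to solver tolerance), which is one reason it does well at preserving tori.

```python
import numpy as np

def implicit_midpoint(B, x0, dt, n_steps, tol=1e-13, max_iter=100):
    """Implicit midpoint rule for x' = B(x) (sketch): each step solves
    z = x + dt * B((x + z)/2) by fixed-point iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x_new = x + dt * B(x)                         # explicit predictor
        for _ in range(max_iter):
            x_next = x + dt * B(0.5 * (x + x_new))    # midpoint evaluation
            if np.max(np.abs(x_next - x_new)) < tol:
                x_new = x_next
                break
            x_new = x_next
        x = x_new
    return x
```

With the helical field B = (-y, x, 0.1), the trajectory spirals along z while the radius in the xy-plane is conserved.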

  13. Optimal Runge-Kutta Schemes for High-order Spatial and Temporal Discretizations

    DTIC Science & Technology

    2015-06-01

    using larger time steps versus lower-order time integration with smaller time steps. In the present work, an attempt is made to generalize these... generality and because of interest in multi-speed and high Reynolds number, wall-bounded flow regimes, a dual-time framework is adopted in the present work...errors of general combinations of high-order spatial and temporal discretizations. Different Runge-Kutta time integrators are applied to central

  14. Parallel Multi-Step/Multi-Rate Integration of Two-Time Scale Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Chang, Johnny T.; Ploen, Scott R.; Sohl, Garett A.; Martin, Bryan J.

    2004-01-01

    Increasing demands on the fidelity of real-time and high-fidelity simulations are stressing the capacity of modern processors. New integration techniques are required that provide maximum efficiency for parallelizable systems. However, many current techniques make assumptions that are at odds with non-cascadable systems. A new serial multi-step/multi-rate integration algorithm for dual-timescale continuous state systems is presented which applies to these systems, and it is extended to a parallel multi-step/multi-rate algorithm. The superior performance of both algorithms is demonstrated through a representative example.

  15. Driven Langevin systems: fluctuation theorems and faithful dynamics

    NASA Astrophysics Data System (ADS)

    Sivak, David; Chodera, John; Crooks, Gavin

    2014-03-01

    Stochastic differential equations of motion (e.g., Langevin dynamics) provide a popular framework for simulating molecular systems. Any computational algorithm must discretize these equations, yet the resulting finite time step integration schemes suffer from several practical shortcomings. We show how any finite time step Langevin integrator can be thought of as a driven, nonequilibrium physical process. Amended by an appropriate work-like quantity (the shadow work), nonequilibrium fluctuation theorems can characterize or correct for the errors introduced by the use of finite time steps. We also quantify, for the first time, the magnitude of deviations between the sampled stationary distribution and the desired equilibrium distribution for equilibrium Langevin simulations of solvated systems of varying size. We further show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
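One widely used finite-time-step Langevin splitting can be sketched as below. This BAOAB-style update is shown purely as an illustration of a splitting scheme; it is not claimed to be the particular splitting or the time step rescaling the abstract identifies, and all names are ours.

```python
import numpy as np

def baoab_step(x, v, force, dt, mass=1.0, gamma=1.0, kT=1.0, rng=None):
    """One BAOAB-style Langevin step (sketch): half kick (B), half drift (A),
    exact Ornstein-Uhlenbeck update (O), half drift (A), half kick (B)."""
    rng = rng or np.random.default_rng()
    v = v + 0.5 * dt * force(x) / mass                  # B: half kick
    x = x + 0.5 * dt * v                                # A: half drift
    c = np.exp(-gamma * dt)                             # O: exact OU damping
    v = c * v + np.sqrt((1.0 - c**2) * kT / mass) * rng.standard_normal()
    x = x + 0.5 * dt * v                                # A: half drift
    v = v + 0.5 * dt * force(x) / mass                  # B: half kick
    return x, v
```

For a harmonic force f = -x at kT = 1, the sampled configurational variance should approach the equilibrium value of 1, with a small finite-time-step bias of the kind the abstract's fluctuation-theorem analysis characterizes.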

  16. A transition from using multi‐step procedures to a fully integrated system for performing extracorporeal photopheresis: A comparison of costs and efficiencies

    PubMed Central

    Leblond, Veronique; Ouzegdouh, Maya; Button, Paul

    2017-01-01

    Abstract Introduction The Pitié Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos® CELLEX® fully integrated system in 2012. This report summarizes our single‐center experience of transitioning from the use of multi‐step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. Materials and Methods The total number of ECP procedures performed 2011–2015 was derived from department records. The time taken to complete a single ECP treatment using a multi‐step technique and the fully integrated system at our department was assessed. Resource costs (2014€) were obtained for materials and calculated for personnel time required. Time‐driven activity‐based costing methods were applied to provide a cost comparison. Results The number of ECP treatments per year increased from 225 (2012) to 727 (2015). The single multi‐step procedure took 270 min compared to 120 min for the fully integrated system. The total calculated per‐session cost of performing ECP using the multi‐step procedure was greater than with the CELLEX® system (€1,429.37 and €1,264.70 per treatment, respectively). Conclusions For hospitals considering a transition from multi‐step procedures to fully integrated methods for ECP where cost may be a barrier, time‐driven activity‐based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed that there were not just cost and time savings, but that the time efficiencies gained with CELLEX® allow for more patient treatments per year. PMID:28419561

  17. On the performance of voltage stepping for the simulation of adaptive, nonlinear integrate-and-fire neuronal networks.

    PubMed

    Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique

    2011-05-01

    In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
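The voltage-stepping idea (event-driven in voltage rather than in time) can be sketched at first order: the membrane potential advances by fixed increments, and each increment implies a local time step dt = dv / f(v). The published method is higher order and handles adaptation and synaptic events, so the function below is only an illustration of the stepping idea, with names of our own choosing.

```python
def voltage_step_time(f, v0, v_end, dv):
    """First-order voltage-stepping sketch for dv/dt = f(v): accumulate the
    time taken to climb from v0 to v_end in fixed voltage increments dv."""
    t, v = 0.0, v0
    while v < v_end - 1e-12:
        t += dv / f(v)      # local time step implied by the voltage step
        v += dv
    return t
```

For the quadratic integrate-and-fire drift f(v) = 1 + v^2, the time to climb from v = 0 to v = 1 is arctan(1) = pi/4, so the accumulated time can be checked against the closed form.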

  18. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finn, John M., E-mail: finn@lanl.gov

    2015-03-15

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012)], appears to work very well.

  19. Analysis of real-time numerical integration methods applied to dynamic clamp experiments.

    PubMed

    Butera, Robert J; McCarthy, Maeve L

    2004-12-01

    Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
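The two updates compared in this record can be written down directly for a gating variable obeying dm/dt = (m_inf - m)/tau. Exponential Euler is exact when m_inf and tau are frozen over the step, while forward Euler is not; the function names below are ours.

```python
import math

def gate_euler(m, dt, m_inf, tau):
    """Forward Euler update for a gating variable dm/dt = (m_inf - m)/tau."""
    return m + dt * (m_inf - m) / tau

def gate_exp_euler(m, dt, m_inf, tau):
    """Exponential Euler update: exact when m_inf and tau are frozen over
    the step, since the frozen equation is linear in m."""
    return m_inf + (m - m_inf) * math.exp(-dt / tau)
```

For constant m_inf and tau the exponential Euler update reproduces the analytic relaxation m_inf + (m0 - m_inf)exp(-dt/tau) exactly, while forward Euler carries a first-order error; the abstract's point is that this ranking can reverse at larger dt/tau ratios once measurement error is taken into account.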

  20. Adaptive Implicit Non-Equilibrium Radiation Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philip, Bobby; Wang, Zhen; Berrill, Mark A

    2013-01-01

    We describe methods for accurate and efficient long term time integration of non-equilibrium radiation diffusion systems: implicit time integration for efficient long term time integration of stiff multiphysics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.

  1. Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Martini, Michael C.

    2011-01-01

    A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge-Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include the J2 spherical harmonic for central body gravitation. The algorithm includes a step size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time.
The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables and sets up initial conditions and integrates; (2) a routine that calculates system reduced derivatives using recurrence relations for quotients and products; and (3) a routine that determines the step size and sums the series. The order of accuracy used in a trajectory calculation is arbitrary and can be set by the user. The algorithm directly calculates the motion of other planetary bodies and does not require ephemeris files (except to start the calculation). The code also runs with Taylor series and Runge-Kutta used interchangeably for different phases of a mission.
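    The recurrence-based coefficient generation and direct step-size selection described above can be sketched in a few lines. The model problem, function names, and tolerance below are our illustrative assumptions, not the SNAP implementation: we integrate y' = y², y(0) = 1 (exact solution 1/(1 - t)), whose Taylor coefficients follow from a Cauchy-product recurrence.

```python
# Hypothetical sketch of Taylor series integration with direct step-size
# selection -- not the SNAP implementation. The Cauchy-product recurrence
# (n+1)*a[n+1] = sum_k a[k]*a[n-k] plays the role of the "differentiation
# arithmetic" described in the abstract.

def taylor_coeffs(y0, order):
    a = [y0]
    for n in range(order):
        a.append(sum(a[k] * a[n - k] for k in range(n + 1)) / (n + 1))
    return a

def taylor_step(y0, dt, order=30, tol=1e-12):
    a = taylor_coeffs(y0, order)
    # direct step-size selection: shrink dt until the last retained term
    # is below tolerance, so a step never needs to be repeated
    while abs(a[-1]) * dt**order > tol:
        dt *= 0.5
    return sum(c * dt**n for n, c in enumerate(a)), dt

y, dt = taylor_step(1.0, 0.5)  # exact answer at the chosen dt is 1/(1 - dt)
```

With these parameters the step shrinks from 0.5 to 0.25, and the 30th-order step then reproduces the exact solution to near machine precision, illustrating why very large steps come at minimal cost.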

  2. Analysis of 3D poroelastodynamics using BEM based on modified time-step scheme

    NASA Astrophysics Data System (ADS)

    Igumnov, L. A.; Petrov, A. N.; Vorobtsov, I. V.

    2017-10-01

    This paper presents the development of 3D boundary-element modeling of dynamic, partially saturated poroelastic media using a stepping scheme. The Boundary Element Method (BEM) in the Laplace domain, combined with a time-stepping scheme for numerical inversion of the Laplace transform, is used to solve the boundary value problem. A modified stepping scheme with a variable integration step is applied to compute the quadrature coefficients, exploiting the symmetry of the integrand and integral formulas for strongly oscillating functions. The developed method was used to solve the problem of a force acting on the end of a prismatic poroelastic cantilever. A comparison of the results obtained with the traditional stepping scheme against those of the modified scheme shows that the combined formulas improve computational efficiency.

  3. Asynchronous variational integration using continuous assumed gradient elements.

    PubMed

    Wolff, Sebastian; Bucher, Christian

    2013-03-01

    Asynchronous variational integration (AVI) is a tool which improves the numerical efficiency of explicit time stepping schemes when applied to finite element meshes with local spatial refinement. This is achieved by associating an individual time step length to each spatial domain. Furthermore, long-term stability is ensured by its variational structure. This article presents AVI in the context of finite elements based on a weakened weak form (W2) Liu (2009) [1], exemplified by continuous assumed gradient elements Wolff and Bucher (2011) [2]. The article presents the main ideas of the modified AVI, gives implementation notes and a recipe for estimating the critical time step.

  4. Self-consistent predictor/corrector algorithms for stable and efficient integration of the time-dependent Kohn-Sham equation

    NASA Astrophysics Data System (ADS)

    Zhu, Ying; Herbert, John M.

    2018-01-01

    The "real time" formulation of time-dependent density functional theory (TDDFT) involves integration of the time-dependent Kohn-Sham (TDKS) equation in order to describe the time evolution of the electron density following a perturbation. This approach, which is complementary to the more traditional linear-response formulation of TDDFT, is more efficient for computation of broad-band spectra (including core-excited states) and for systems where the density of states is large. Integration of the TDKS equation is complicated by the time-dependent nature of the effective Hamiltonian, and we introduce several predictor/corrector algorithms to propagate the density matrix, one of which can be viewed as a self-consistent extension of the widely used modified-midpoint algorithm. The predictor/corrector algorithms facilitate larger time steps and are shown to be more efficient despite requiring more than one Fock build per time step, and furthermore can be used to detect a divergent simulation on-the-fly, which can then be halted or else the time step modified.
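    The self-consistent midpoint idea can be sketched on a toy model. The 2x2 density-dependent Hamiltonian H(P) = H0 + g·diag(diag(P)), the parameter values, and the convergence criterion below are our illustrative assumptions, not the authors' TDKS implementation:

```python
# Toy self-consistent modified-midpoint propagation of a density matrix
# under a density-dependent Hamiltonian (our sketch, not the paper's code).
import numpy as np

H0 = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
g = 0.5

def hamiltonian(P):
    # hypothetical density dependence standing in for the Fock build
    return H0 + g * np.diag(np.diag(P.real))

def expm_unitary(H, dt):
    # exact exp(-i H dt) for a small Hermitian H via eigendecomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

def scf_midpoint_step(P, dt, tol=1e-12, max_iter=50):
    P_mid = P.copy()                      # predictor: start from P(t)
    for _ in range(max_iter):
        U = expm_unitary(hamiltonian(P_mid), dt)
        P_new = U @ P @ U.conj().T        # propagate with the midpoint Hamiltonian
        P_mid_next = 0.5 * (P + P_new)    # corrector: refresh the midpoint density
        if np.linalg.norm(P_mid_next - P_mid) < tol:
            break
        P_mid = P_mid_next
    return P_new

P = np.diag([1.0, 0.0]).astype(complex)   # start in a pure basis state
for _ in range(100):
    P = scf_midpoint_step(P, dt=0.1)
# each step is exactly unitary, so trace, Hermiticity, and idempotency
# of the pure-state density matrix are preserved to round-off
```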

  5. A point implicit time integration technique for slow transient flow problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.

    2015-05-01

    We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly, while the rest of the information, related to the same or other variables, is handled explicitly. The method does not require implicit iteration; instead it advances the solution in time in a similar spirit to explicit methods, except that it involves a few additional function evaluations. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration, wherein one would like to perform the time integration with very large time steps. Because the method can be time-inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems governed by scalar or systems of flow equations. Our findings indicate that the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
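    The "point" solve can be illustrated on a scalar stiff relaxation problem (our toy model and parameter choices, not the authors' flow equations): taking the solution variable implicitly and the forcing explicitly gives a closed-form update with no implicit iteration, stable far beyond the explicit limit.

```python
# Point implicit sketch for u' = -k*(u - s(t)): u is implicit, s explicit,
# so each update is an algebraic "point" solve (our illustration only).
import math

def point_implicit(u0, k, dt, n_steps, s=lambda t: math.sin(t)):
    u, t = u0, 0.0
    for _ in range(n_steps):
        # (u_new - u)/dt = -k*(u_new - s(t))  =>  closed-form u_new
        u = (u + dt * k * s(t)) / (1.0 + dt * k)
        t += dt
    return u

def explicit_euler(u0, k, dt, n_steps, s=lambda t: math.sin(t)):
    u, t = u0, 0.0
    for _ in range(n_steps):
        u = u + dt * (-k * (u - s(t)))
        t += dt
    return u

k, dt = 1000.0, 0.1     # dt is far beyond the explicit stability limit 2/k
u_pi = point_implicit(1.0, k, dt, 100)   # tracks the slow forcing s(t)
u_ee = explicit_euler(1.0, k, dt, 100)   # diverges at this step size
```

As in the abstract, the point implicit update remains stable and accurate for the slow dynamics at a step size where the explicit scheme blows up.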

  6. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkimaki, Konsta; Hirvijoki, E.; Terava, J.

    Here, we report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell–Jüttner statistics. The implementation is based on the Beliaev–Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space.
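    The adaptive-stepping idea can be sketched on a scalar stochastic toy model (our stand-in for the Beliaev-Budker operator; the step-size rule limiting ν·Δt and all parameter values are illustrative assumptions):

```python
# Adaptive Euler-Maruyama sketch on dX = -nu*X dt + sqrt(2*D) dW,
# a scalar Ornstein-Uhlenbeck toy model (not the paper's operator).
import numpy as np

rng = np.random.default_rng(0)

def adaptive_ou(x0, t_end, nu, D, dt0=1.0, limit=0.1):
    t, x, dt = 0.0, x0, dt0
    while t < t_end:
        # halve the step until the drift is resolved (nu*dt <= limit);
        # in a real collision operator nu varies along the orbit, which
        # is where adaptive stepping pays off
        while nu * dt > limit:
            dt *= 0.5
        dt = min(dt, t_end - t)
        x = x - nu * x * dt + np.sqrt(2.0 * D * dt) * rng.normal()
        t += dt
    return x

# deterministic limit (D = 0): pure slowing-down, easy to check by hand
x_end = adaptive_ou(5.0, 5.0, nu=1.0, D=0.0)
```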

  7. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    DOE PAGES

    Sarkimaki, Konsta; Hirvijoki, E.; Terava, J.

    2017-10-12

    Here, we report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell–Jüttner statistics. The implementation is based on the Beliaev–Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space.

  8. hp-Adaptive time integration based on the BDF for viscous flows

    NASA Astrophysics Data System (ADS)

    Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.

    2015-06-01

    This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selection to control, respectively, the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user-prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low-order time integrators while accurate solutions require high-order time integrators to keep the computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the method of integration in use operates inside its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute the solution at off-step points.
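    A minimal sketch of the h-adaptive half of this idea, assuming BDF1 (backward Euler) with a step-doubling local error estimate and an elementary controller; the paper's method additionally selects the order among BDF1-5 and uses a more sophisticated controller:

```python
# h-adaptive BDF1 sketch on the linear test problem y' = lam*y
# (our illustration of stepsize selection, not the paper's algorithm).

def be_step(y, dt, lam):
    # backward Euler solve: (1 - lam*dt) * y_new = y
    return y / (1.0 - lam * dt)

def adaptive_bdf1(y0, lam, t_end, dt=0.5, tol=1e-6):
    t, y, steps = 0.0, y0, 0
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        coarse = be_step(y, dt, lam)
        fine = be_step(be_step(y, dt / 2.0, lam), dt / 2.0, lam)
        err = abs(fine - coarse)          # step-doubling error estimate
        if err <= tol:                    # accept the step
            t += dt
            y = fine
            steps += 1
        # elementary controller; exponent 1/2 because BDF1 is order 1,
        # applied after both accepted and rejected steps
        dt *= 0.9 * (tol / max(err, 1e-16)) ** 0.5
    return y, steps

y, steps = adaptive_bdf1(1.0, -2.0, 1.0)  # exact answer is exp(-2)
```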

  9. Calculating Time-Integral Quantities in Depletion Calculations

    DOE PAGES

    Isotalo, Aarno

    2016-06-02

    A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as a part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletion algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation without a need for a reference solution.
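    The tally-nuclide idea can be sketched by augmenting a small depletion matrix with one extra row whose rate is the weighted density sum, so a single matrix-exponential solve yields both the end-of-step densities and the time integral. The two-nuclide decay chain, the weights, and the plain Taylor matrix exponential below are our illustrative assumptions (the paper works with CRAM and general burnup matrices):

```python
# "Tally nuclide" sketch: augment n' = A n with z' = w . n, then one
# exponential of the augmented matrix gives densities and the integral.
import numpy as np

def expm(M, terms=30, squarings=16):
    # plain scaling-and-squaring Taylor exponential (fine for tiny matrices)
    A = M / 2.0**squarings
    E, P = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        P = P @ A / k
        E = E + P
    for _ in range(squarings):
        E = E @ E
    return E

lam = 0.5
A = np.array([[-lam, 0.0],
              [ lam, 0.0]])      # simple parent -> daughter decay chain
w = np.array([1.0, 0.0])        # weights: tally the integral of the parent

M = np.zeros((3, 3))            # augmented depletion matrix
M[:2, :2] = A
M[2, :2] = w

n0 = np.array([1.0, 0.0, 0.0])  # the tally starts at zero
t = 4.0
n = expm(M * t) @ n0            # one solve: densities AND the integral
integral_parent = n[2]          # analytically (1 - exp(-lam*t))/lam
```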

  10. Velocity and stress autocorrelation decay in isothermal dissipative particle dynamics

    NASA Astrophysics Data System (ADS)

    Chaudhri, Anuj; Lukes, Jennifer R.

    2010-02-01

    The velocity and stress autocorrelation decay in a dissipative particle dynamics ideal fluid model is analyzed in this paper. The autocorrelation functions are calculated at three different friction parameters and three different time steps using the well-known Groot/Warren algorithm and newer algorithms including self-consistent leap-frog, self-consistent velocity Verlet, and Shardlow first- and second-order integrators. At low friction values, the velocity autocorrelation function decays exponentially at short times, shows slower-than-exponential decay at intermediate times, and approaches zero at long times for all five integrators. As the friction value increases, the deviation from exponential behavior occurs earlier and is more pronounced. At small time steps, all the integrators give identical decay profiles. As the time step increases, there are qualitative and quantitative differences between the integrators. The stress correlation behavior is markedly different for the algorithms. The self-consistent velocity Verlet and the Shardlow algorithms show very similar stress autocorrelation decay with change in friction parameter, whereas the Groot/Warren and leap-frog schemes show variations at higher friction factors. Diffusion coefficients and shear viscosities are calculated using Green-Kubo integration of the velocity and stress autocorrelation functions. The diffusion coefficients match well-known theoretical results in the low-friction limit. Although the stress autocorrelation function is different for each integrator, fluctuates rapidly, and gives poor statistics in most cases, the calculated shear viscosities still fall within the range of theoretical predictions and nonequilibrium studies.
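    The Green-Kubo step can be sketched with a synthetic velocity trace (our stand-in for DPD output, with all parameters chosen for illustration): an Ornstein-Uhlenbeck process with unit stationary variance has VACF e^(-γt), so integrating the estimated VACF should recover D = 1/γ = 1 here.

```python
# Green-Kubo sketch: D = integral of <v(0) v(t)> dt, estimated from a
# synthetic Ornstein-Uhlenbeck velocity trace (not actual DPD data).
import numpy as np

rng = np.random.default_rng(1)
gamma, dt, n = 1.0, 0.1, 100_000
c = np.exp(-gamma * dt)
xi = rng.normal(size=n)
v = np.empty(n)
v[0] = rng.normal()
for i in range(1, n):
    # exact OU update with unit stationary variance
    v[i] = c * v[i - 1] + np.sqrt(1.0 - c * c) * xi[i]

def autocorr(x):
    m = len(x)
    f = np.fft.rfft(x, 2 * m)            # zero-padded FFT autocorrelation
    acf = np.fft.irfft(f * f.conj())[:m]
    return acf / np.arange(m, 0, -1)     # unbiased normalisation

vacf = autocorr(v)
cut = int(8.0 / (gamma * dt))            # integrate to ~8 relaxation times
# trapezoidal Green-Kubo integral; should be close to 1/gamma = 1
D = dt * (vacf[:cut].sum() - 0.5 * (vacf[0] + vacf[cut - 1]))
```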

  11. Spatial Data Integration Using Ontology-Based Approach

    NASA Astrophysics Data System (ADS)

    Hasani, S.; Sadeghi-Niaraki, A.; Jelokhani-Niaraki, M.

    2015-12-01

    In today's world, the need for spatial data has become so crucial that many organizations have begun to produce it themselves. In some circumstances, obtaining integrated data in real time requires a sustainable mechanism for real-time integration; a case in point is disaster management, which requires obtaining real-time data from various sources of information. One of the main challenges in such situations is the high degree of heterogeneity among the data of different organizations. To address this issue, we introduce an ontology-based method that provides sharing and integration capabilities for the existing databases. In addition to resolving semantic heterogeneity, our proposed method also provides better access to information. The approach consists of three steps. In the first step, the objects in a relational database are identified, the semantic relationships between them are modelled, and the ontology of each database is created. In the second step, the ontology is inserted into the database, and the relationships of each ontology class are stored in newly created columns in the database tables. The last step provides a platform based on service-oriented architecture that allows integration of the data through ontology mapping. The proposed approach, in addition to being fast and low cost, makes the process of data integration easy while leaving the data unchanged, thus preserving the existing legacy applications.

  12. A comparison of artificial compressibility and fractional step methods for incompressible flow computations

    NASA Technical Reports Server (NTRS)

    Chan, Daniel C.; Darian, Armen; Sindir, Munir

    1992-01-01

    We have applied and compared the efficiency and accuracy of two commonly used numerical methods for the solution of the Navier-Stokes equations. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one has the luxury of taking a large temporal integration step, at the expense of higher memory requirements and a larger operation count per step. Meanwhile, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. Its memory requirement and operation count per time step are low; however, the restriction on the size of the time-marching step is more severe. To explore the strengths and weaknesses of the two methods, we used them to compute two-dimensional driven cavity flow at Reynolds numbers of 100 and 1000. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161, were used. The computations were considered converged once the L2-norm of the change in the dependent variables between two consecutive time steps had fallen below 10^-5.

  13. Evaluating Computer Integration in the Elementary School: A Step-by-Step Guide.

    ERIC Educational Resources Information Center

    Mowe, Richard

    This handbook was written to enable elementary school educators to conduct formative evaluations of their computer integrated instruction (CII) programs in minimum time. CII is defined as the use of computer software, such as word processing, database, and graphics programs, to help students solve problems or work more productively. The first…

  14. Kinetic energy definition in velocity Verlet integration for accurate pressure evaluation

    NASA Astrophysics Data System (ADS)

    Jung, Jaewoon; Kobayashi, Chigusa; Sugita, Yuji

    2018-04-01

    In molecular dynamics (MD) simulations, a proper definition of kinetic energy is essential for controlling pressure as well as temperature in the isothermal-isobaric condition. The virial theorem provides an equation that connects the average kinetic energy with the product of particle coordinate and force. In this paper, we show that the theorem is satisfied in MD simulations with a larger time step and holonomic bond constraints only when a proper definition of kinetic energy is used. We provide a novel definition of kinetic energy, which is calculated from velocities at the half-time steps (t - Δt/2 and t + Δt/2) in the velocity Verlet integration method. MD simulations of a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) lipid bilayer and a water box using this kinetic energy definition reproduce the physical properties in the isothermal-isobaric condition properly. We also develop a multiple time step (MTS) integration scheme with the kinetic energy definition. MD simulations with the MTS integration for the DPPC and water box systems provided the same quantities as the velocity Verlet integration method, even when the thermostat and barostat are updated less frequently.
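    The effect of the kinetic-energy definition can be sketched on a 1D harmonic oscillator under velocity Verlet (our toy, not the paper's MD systems; here the "half-step KE" simply averages (m/2)v(t+Δt/2)² over the run, which may differ in detail from the paper's definition). For this model the half-step average satisfies the virial identity exactly at any stable Δt, while the on-step definition underestimates it by the factor 1 - (ωΔt/2)²:

```python
# Harmonic-oscillator illustration (our toy) of how the kinetic-energy
# definition affects the virial balance under velocity Verlet.
m = k = 1.0
dt = 0.5                       # deliberately large time step
x, v = 1.0, 0.0                # v is the on-step velocity
n_steps = 100_000

ke_full = ke_half = virial = 0.0
for _ in range(n_steps):
    v_half = v + 0.5 * dt * (-k * x) / m    # kick to the half step
    x = x + dt * v_half                     # drift
    v = v_half + 0.5 * dt * (-k * x) / m    # kick to the full step
    ke_full += 0.5 * m * v * v              # on-step definition
    ke_half += 0.5 * m * v_half * v_half    # half-step definition
    virial += 0.5 * k * x * x               # virial target: -(1/2)<x.F>

ke_full /= n_steps
ke_half /= n_steps
virial /= n_steps
# the half-step average matches the virial; the on-step average is low by
# exactly 1 - (omega*dt/2)**2 = 0.9375 for this model
```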

  15. Kinetic energy definition in velocity Verlet integration for accurate pressure evaluation.

    PubMed

    Jung, Jaewoon; Kobayashi, Chigusa; Sugita, Yuji

    2018-04-28

    In molecular dynamics (MD) simulations, a proper definition of kinetic energy is essential for controlling pressure as well as temperature in the isothermal-isobaric condition. The virial theorem provides an equation that connects the average kinetic energy with the product of particle coordinate and force. In this paper, we show that the theorem is satisfied in MD simulations with a larger time step and holonomic constraints of bonds, only when a proper definition of kinetic energy is used. We provide a novel definition of kinetic energy, which is calculated from velocities at the half-time steps (t - Δt/2 and t + Δt/2) in the velocity Verlet integration method. MD simulations of a 1,2-dispalmitoyl-sn-phosphatidylcholine (DPPC) lipid bilayer and a water box using the kinetic energy definition could reproduce the physical properties in the isothermal-isobaric condition properly. We also develop a multiple time step (MTS) integration scheme with the kinetic energy definition. MD simulations with the MTS integration for the DPPC and water box systems provided the same quantities as the velocity Verlet integration method, even when the thermostat and barostat are updated less frequently.

  16. Alternative Procedure of Heat Integration Technique Selection between Two Unit Processes to Improve Energy Saving

    NASA Astrophysics Data System (ADS)

    Santi, S. S.; Renanto; Altway, A.

    2018-01-01

    The energy use system in a production process, in this case the heat exchanger network (HEN), plays a role in the smoothness and sustainability of the industry itself. Optimizing heat exchanger networks built from process streams can have a major effect on the economic value of an industry as a whole, so solving design problems with heat integration is an important requirement. In a plant, heat integration can be carried out internally within a unit or in combination between process units. However, determining a suitable heat integration technique conventionally requires long calculations. In this paper, we propose an alternative procedure for selecting the heat integration technique, investigating six hypothetical units using a Pinch Analysis approach whose objective functions are the energy target and the total annual cost target. The six hypothetical units, A through F, differ in the location of their process streams relative to the pinch temperature. The result is a potential heat integration (ΔH') formula that trims the conventional procedure from seven steps to three. The preferred heat integration technique is then identified by calculating the potential heat integration (ΔH') between the hypothetical process units. The calculations were implemented in MATLAB.

  17. Multigrid methods with space–time concurrency

    DOE PAGES

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.; ...

    2017-10-06

    Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.

  18. Multigrid methods with space–time concurrency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.

    Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.

  19. Time step rescaling recovers continuous-time dynamical properties for discrete-time Langevin integration of nonequilibrium systems.

    PubMed

    Sivak, David A; Chodera, John D; Crooks, Gavin E

    2014-06-19

    When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
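    One member of the splitting family under discussion can be sketched as a BAOAB-ordered integrator (kick/drift/Ornstein-Uhlenbeck thermostat/drift/kick). This is our illustration only; the time-step rescaling of the deterministic substeps proposed in the paper is omitted, and all parameters are assumptions. We check the scheme against the exact harmonic-oscillator variance ⟨x²⟩ = kT/k:

```python
# BAOAB-like Langevin splitting sketch (our illustration, without the
# paper's time-step rescaling), checked on a harmonic oscillator.
import numpy as np

rng = np.random.default_rng(2)
kT, m, k, gamma, dt = 1.0, 1.0, 1.0, 1.0, 0.2
c1 = np.exp(-gamma * dt)                 # exact OU damping over dt
c2 = np.sqrt(kT / m * (1.0 - c1 * c1))   # matching noise amplitude

x, v = 0.0, 0.0
xs = np.empty(200_000)
for i in range(len(xs)):
    v += 0.5 * dt * (-k * x) / m      # B: half kick
    x += 0.5 * dt * v                 # A: half drift
    v = c1 * v + c2 * rng.normal()    # O: Ornstein-Uhlenbeck thermostat
    x += 0.5 * dt * v                 # A: half drift
    v += 0.5 * dt * (-k * x) / m      # B: half kick
    xs[i] = x

var_x = xs[50_000:].var()             # discard equilibration; target kT/k = 1
```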

  20. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    NASA Astrophysics Data System (ADS)

    Särkimäki, K.; Hirvijoki, E.; Terävä, J.

    2018-01-01

    We report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell-Jüttner statistics. The implementation is based on the Beliaev-Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space. A detailed description is provided of both the physics and the implementation of the operator. The focus is on adaptive integration of stochastic differential equations, which is an overlooked aspect among existing Monte Carlo implementations of Coulomb collision operators. We verify that our operator converges to known analytical results and demonstrate that careless implementation of the adaptive time step can lead to severely erroneous results. The operator is provided as a self-contained Fortran 95 module and can be included into existing orbit-following tools that trace either the full Larmor motion or the guiding center dynamics. The adaptive time-stepping algorithm is expected to be useful in situations where the collision frequencies vary greatly over the course of a simulation. Examples include the slowing-down of fusion products or other fast ions, and the Dreicer generation of runaway electrons, as well as the generation of fast ions or electrons with ion or electron cyclotron resonance heating.

  1. How to Start, Build and Sustain a Multi-Credit Integrated Curriculum Package

    ERIC Educational Resources Information Center

    Wilson, Andrew Kerr

    2011-01-01

    Over the last 17 years, the author has been asked many times for help starting Integrated Curriculum Programs (ICPs). Visiting teachers are instantly engaged and enthused by the possibilities they see in his program. In this article, he provides a step-by-step road map to a full ICP continuum. By distilling the lessons of almost 20 years of…

  2. Surfing on the edge: chaos versus near-integrability in the system of Jovian planets

    NASA Astrophysics Data System (ADS)

    Hayes, Wayne B.

    2008-05-01

    We demonstrate that the system of the Sun and the Jovian planets, integrated for 200 Myr as an isolated five-body system using many sets of initial conditions all within the uncertainty bounds of their currently known positions, can display both chaos and near-integrability. The conclusion is consistent across four different integrators, including several comparisons against integrations utilizing quadruple precision. We demonstrate that the Wisdom-Holman symplectic map using simple symplectic correctors as implemented in MERCURY 6.2 gives a reliable characterization of the existence of chaos for a particular initial condition only with time steps shorter than about 10 days, corresponding to about 400 steps per orbit. We also integrate the canonical DE405 initial condition out to 5 Gyr, and show that it has a Lyapunov time of 200-400 Myr, opening the remote possibility of accurate prediction of the Jovian planetary positions for 5 Gyr.
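    The divergence-of-nearby-trajectories diagnostic behind a Lyapunov-time estimate can be sketched with a Benettin-style renormalized pair on a toy one-dimensional map (the logistic map at r = 4, whose Lyapunov exponent is exactly ln 2), standing in here for an N-body integration:

```python
# Benettin-style Lyapunov estimate on a toy chaotic map (our illustration,
# not the paper's planetary integrations).
import math

def lyapunov_exponent(x0=0.3, d0=1e-9, n=100_000):
    f = lambda x: 4.0 * x * (1.0 - x)   # logistic map, exponent = ln 2
    x, y = x0, x0 + d0
    total = 0.0
    for _ in range(n):
        x, y = f(x), f(y)
        d = abs(y - x)
        total += math.log(d / d0)       # accumulate the stretching rate
        y = x + d0 * (y - x) / d        # renormalise the separation
    return total / n

lam = lyapunov_exponent()
lyapunov_time = 1.0 / lam               # e-folding time of nearby orbits
```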

  3. Symplectic molecular dynamics simulations on specially designed parallel computers.

    PubMed

    Borstnik, Urban; Janezic, Dusanka

    2005-01-01

    We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches, requiring less time per step and fewer steps overall, enables fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.

  4. Numerical calculations of velocity and pressure distribution around oscillating airfoils

    NASA Technical Reports Server (NTRS)

    Bratanow, T.; Ecer, A.; Kobiske, M.

    1974-01-01

    An analytical procedure based on the Navier-Stokes equations was developed for analyzing and representing properties of unsteady viscous flow around oscillating obstacles. A variational formulation of the vorticity transport equation was discretized in finite element form and integrated numerically. At each time step of the numerical integration, the velocity field around the obstacle was determined for the instantaneous vorticity distribution from the finite element solution of Poisson's equation. The time-dependent boundary conditions around the oscillating obstacle were introduced as external constraints, using the Lagrangian Multiplier Technique, at each time step of the numerical integration. The procedure was then applied for determining pressures around obstacles oscillating in unsteady flow. The obtained results for a cylinder and an airfoil were illustrated in the form of streamlines and vorticity and pressure distributions.

  5. Algorithms and software for nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.

    1989-01-01

    The objective of this research is to develop efficient methods for explicit time integration in nonlinear structural dynamics for computers which utilize both concurrency and vectorization. As a framework for these studies, the program WHAMS, which is described in Explicit Algorithms for the Nonlinear Dynamics of Shells (T. Belytschko, J. I. Lin, and C.-S. Tsay, Computer Methods in Applied Mechanics and Engineering, Vol. 42, 1984, pp 225 to 251), is used. Two factors make the development of efficient concurrent explicit time integration a challenge in a structural dynamics program: (1) the need for a variety of element types, which complicates the scheduling-allocation problem; and (2) the need for different time steps in different parts of the mesh, which is here called mixed delta t integration, so that a few stiff elements do not reduce the time steps throughout the mesh.

  6. Variable Step Integration Coupled with the Method of Characteristics Solution for Water-Hammer Analysis, A Case Study

    NASA Technical Reports Server (NTRS)

    Turpin, Jason B.

    2004-01-01

    One-dimensional water-hammer modeling involves the solution of two coupled non-linear hyperbolic partial differential equations (PDEs). These equations result from applying the principles of conservation of mass and momentum to flow through a pipe, usually with the assumption that the speed at which pressure waves propagate through the pipe is constant. In order to solve these equations for the quantities of interest (i.e. pressures and flow rates), they must first be converted to a system of ordinary differential equations (ODEs), either by approximating the spatial derivative terms with numerical techniques or by using the Method of Characteristics (MOC). The MOC approach is ideal in that no numerical approximation errors are introduced in converting the original system of PDEs into an equivalent system of ODEs. Unfortunately, this resulting system of ODEs is bound by a time step constraint, so that when integrating the equations the solution can only be obtained at fixed time intervals. If the fluid system to be modeled also contains dynamic components (i.e. components that are best modeled by a system of ODEs), it may be necessary to take extremely small time steps during certain points of the model simulation in order to achieve stability and/or accuracy in the solution. Coupled together, the fixed time step constraint invoked by the MOC and the occasional need for extremely small time steps can greatly increase simulation run times. As one solution to this problem, a method for combining variable step integration (VSI) algorithms with the MOC was developed for modeling water-hammer in systems with highly dynamic components. A case study is presented in which reverse flow through a dual-flapper check valve introduces a water-hammer event. The predicted pressure responses upstream of the check valve are compared with test data.

  7. Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion

    NASA Astrophysics Data System (ADS)

    Philip, B.; Wang, Z.; Berrill, M. A.; Birke, M.; Pernice, M.

    2014-04-01

    The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.

  8. From h to p efficiently: optimal implementation strategies for explicit time-dependent problems using the spectral/hp element method

    PubMed Central

    Bolis, A; Cantwell, C D; Kirby, R M; Sherwin, S J

    2014-01-01

We investigate the relative performance of a second-order Adams–Bashforth scheme and second-order and fourth-order Runge–Kutta schemes when time stepping a 2D linear advection problem discretised using a spectral/hp element technique for a range of different mesh sizes and polynomial orders. Numerical experiments explore the effects of short (two wavelengths) and long (32 wavelengths) time integration for sets of uniform and non-uniform meshes. The choice of time-integration scheme and discretisation together fixes a CFL limit that imposes a restriction on the maximum time step that can be taken to ensure numerical stability. The number of steps, together with the order of the scheme, affects not only the runtime but also the accuracy of the solution. Through numerical experiments, we systematically highlight the relative effects of spatial resolution and choice of time integration on performance and provide general guidelines on how best to achieve the minimal execution time for a prescribed solution accuracy. The significant role played by higher polynomial orders in reducing CPU time while preserving accuracy becomes especially evident for uniform meshes, compared with what has typically been considered when studying this type of problem. © 2014 The Authors. International Journal for Numerical Methods in Fluids published by John Wiley & Sons, Ltd. PMID: 25892840
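The CFL-limited explicit time stepping compared in this record can be sketched in one dimension. The example below uses first-order upwind differencing in space purely for brevity (the paper's spatial discretisation is spectral/hp, not upwind) with the second-order Adams–Bashforth scheme in time:

```python
import math

def advect_ab2(u0, c, dx, dt, n_steps):
    """Advance u_t + c*u_x = 0 on a periodic grid with first-order upwind
    in space and second-order Adams-Bashforth (AB2) in time. AB2 needs one
    starting value, supplied here by a forward-Euler bootstrap step."""
    assert c * dt / dx <= 1.0, "CFL limit exceeded"

    def rhs(u):
        # periodic upwind derivative: u[-1] wraps to the last cell
        return [-c * (u[i] - u[i - 1]) / dx for i in range(len(u))]

    u = list(u0)
    f_prev = rhs(u)
    u = [ui + dt * fi for ui, fi in zip(u, f_prev)]  # forward-Euler bootstrap
    for _ in range(n_steps - 1):
        f = rhs(u)
        u = [ui + dt * (1.5 * fi - 0.5 * gi) for ui, fi, gi in zip(u, f, f_prev)]
        f_prev = f
    return u

# one sine wavelength on a 64-cell periodic grid, CFL number 0.2
n = 64
u = advect_ab2([math.sin(2 * math.pi * i / n) for i in range(n)],
               c=1.0, dx=1.0 / n, dt=0.2 / n, n_steps=200)
```

The telescoping upwind stencil conserves the discrete total of u exactly on a periodic grid, which gives a cheap sanity check independent of the time-stepping error.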

  9. Time Evolution of Integrated Precipitable Water over French Polynesia from 1974 to 2017: Metrological Analysis and Correlation with Climate Evolution

    NASA Astrophysics Data System (ADS)

    Zhang, F.; Barriot, J. P.; Maamaatuaiahutapu, K.; Sichoix, L.; Xu, G., Sr.

    2017-12-01

In order to better understand and predict the complex meteorological context of French Polynesia, we focus on the time evolution of Integrated Precipitable Water (PW) using radiosounding (RS) data from 1974 to 2017. In a first step, we compare, over selected months, the PW estimate reconstructed from the raw two-second acquisitions with the PW estimate reconstructed from the highly compressed and undersampled Integrated Global Radiosonde Archive (IGRA). In a second step, we compare with other techniques of PW acquisition (radio delays, sky temperature, infrared band absorption) in order to assess the intrinsic biases of RS acquisition. In a last step, we analyze the PW time series in our area, validated in light of the first two steps, with respect to seasonality (dry season and wet season) and spatial location. During the wet season (November to April), the PW values are higher than the corresponding values observed during the dry season (May to October). PW values decrease with increasing latitude, but values are higher in Tahiti than in the other islands because of the presence of the South Pacific Convergence Zone (SPCZ) around Tahiti. All the PW time series show the same uptrend in French Polynesia in recent years. This study provides further evidence that PW time series derived from RS can be assimilated in weather forecasting and climate warming models.

  10. A new heterogeneous asynchronous explicit-implicit time integrator for nonsmooth dynamics

    NASA Astrophysics Data System (ADS)

    Fekak, Fatima-Ezzahra; Brun, Michael; Gravouil, Anthony; Depale, Bruno

    2017-07-01

In computational structural dynamics, particularly in the presence of nonsmooth behavior, the choice of the time step and the time integrator has a critical impact on the feasibility of the simulation. Furthermore, in some cases, as for a bridge crane under seismic loading, multiple time scales coexist in the same problem, making multi-time-scale methods suitable. Here, we propose a new explicit-implicit heterogeneous asynchronous time integrator (HATI) for nonsmooth transient dynamics with frictionless unilateral contacts and impacts. Furthermore, we present a new explicit time integrator for contact/impact problems where the contact constraints are enforced using a Lagrange multiplier method. In other words, the aim of this paper is to use an explicit time integrator with a fine time scale in the contact area to reproduce high-frequency phenomena, while an implicit time integrator is adopted in the other parts to reproduce low-frequency phenomena and to optimize the CPU time. In a first step, the explicit time integrator is tested on a one-dimensional example and compared to Moreau-Jean's event-capturing schemes. The explicit algorithm is found to be very accurate: the scheme generally has a higher order of convergence than Moreau-Jean's schemes and also exhibits excellent energy behavior. Then, the two-time-scale explicit-implicit HATI is applied to the numerical example of a bridge crane under seismic loading. The results are validated against a fine-scale, fully explicit computation. The energy dissipated at the implicit-explicit interface is well controlled, and the computational time is lower than that of a fully explicit simulation.

  11. Multi-off-grid methods in multi-step integration of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Beaudet, P. R.

    1974-01-01

    Description of methods of solving first- and second-order systems of differential equations in which all derivatives are evaluated at off-grid locations in order to circumvent the Dahlquist stability limitation on the order of on-grid methods. The proposed multi-off-grid methods require off-grid state predictors for the evaluation of the n derivatives at each step. Progressing forward in time, the off-grid states are predicted using a linear combination of back on-grid state values and off-grid derivative evaluations. A comparison is made between the proposed multi-off-grid methods and the corresponding Adams and Cowell on-grid integration techniques in integrating systems of ordinary differential equations, showing a significant reduction in the error at larger step sizes in the case of the multi-off-grid integrator.
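The on-grid Adams-family integrators this record compares against evaluate all derivatives at grid points. A minimal sketch of one such on-grid pair, a 2-step Adams-Bashforth predictor with an Adams-Moulton (trapezoidal) corrector (a generic textbook scheme, not the paper's multi-off-grid method):

```python
import math

def integrate_abm2(f, t0, y0, dt, n_steps):
    """Second-order Adams-Bashforth predictor / Adams-Moulton corrector
    for y' = f(t, y), bootstrapped with one Heun (RK2) step because the
    2-step formula needs two starting derivative values."""
    t, y = t0, y0
    f_prev = f(t, y)
    y = y + dt * 0.5 * (f_prev + f(t + dt, y + dt * f_prev))  # Heun bootstrap
    t += dt
    for _ in range(n_steps - 1):
        f_n = f(t, y)
        y_pred = y + dt * (1.5 * f_n - 0.5 * f_prev)           # AB2 predictor
        y = y + dt * 0.5 * (f_n + f(t + dt, y_pred))           # AM2 corrector
        f_prev = f_n
        t += dt
    return y

# y' = -y, y(0) = 1 integrated to t = 1; approximates e**-1
approx = integrate_abm2(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
coarse = integrate_abm2(lambda t, y: -y, 0.0, 1.0, 0.1, 10)
```

Halving the step should cut the error by roughly a factor of four, the second-order behavior the Dahlquist barrier discussion takes as the baseline.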

  12. Microphysical Timescales in Clouds and their Application in Cloud-Resolving Modeling

    NASA Technical Reports Server (NTRS)

    Zeng, Xiping; Tao, Wei-Kuo; Simpson, Joanne

    2007-01-01

Independent prognostic variables in cloud-resolving modeling are chosen on the basis of an analysis of microphysical timescales in clouds versus the time step for numerical integration. Two of them are the moist entropy and the total mixing ratio of airborne water with no contributions from precipitating particles. As a result, temperature can be diagnosed easily from those prognostic variables, and cloud microphysics can be separated (or modularized) from moist thermodynamics. Numerical comparison experiments show that those prognostic variables work well even when a large time step (e.g., 10 s) is used for numerical integration.

  13. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    DOE PAGES

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.

  14. The precision of locomotor odometry in humans.

    PubMed

    Durgin, Frank H; Akagi, Mikio; Gallistel, Charles R; Haiken, Woody

    2009-03-01

Two experiments measured the human ability to reproduce locomotor distances of 4.6-100 m without visual feedback and compared distance production with time production. Subjects were not permitted to count steps. It was found that the precision of human odometry follows Weber's law: variability is proportional to distance. The coefficients of variation for distance production were much lower than those measured for time production of similar durations. Gait parameters recorded during the task (average step length and step frequency) were found to be even less variable, suggesting that step integration could be the basis of non-visual human odometry.
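The Weber's-law finding amounts to a constant coefficient of variation across target distances. A small sketch of that statistic (the sample values below are made up for illustration, not the paper's data):

```python
import statistics

def coefficient_of_variation(samples):
    """CV = sample standard deviation / mean. Under Weber's law, the CV of
    produced distances stays roughly constant as the target distance grows,
    i.e. variability scales in proportion to distance."""
    return statistics.stdev(samples) / statistics.mean(samples)

# hypothetical productions of a 10 m and a 100 m target: same relative spread
cv_short = coefficient_of_variation([9.0, 10.0, 11.0])
cv_long = coefficient_of_variation([90.0, 100.0, 110.0])
```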

  15. About My Father's Work: A Vehicle for the Integration of Catholic Values.

    ERIC Educational Resources Information Center

    Krebbs, Mary Jane

    2001-01-01

    Asserts that it is time to formalize the practices involved in integrating values into Catholic education. Presents the six-step Educational Community Opportunity for Stewardship (ECOS) system and the Catholic Education Community manual as vital components in preparing teachers for values integration. Presents conceptual and organizational…

  16. Solution of the Average-Passage Equations for the Incompressible Flow through Multiple-Blade-Row Turbomachinery

    DTIC Science & Technology

    1994-02-01

An explicit numerical procedure based on Runge-Kutta time stepping for cell-centered, hexahedral finite volumes is outlined for the approximate… [Table-of-contents fragments: 3 Discretization; 3.1 Cell-Centered Finite-Volume Discretization in Space; 3.2 Artificial Dissipation; 3.3 Time Integration; 3.4 Convergence]

  17. Integrals for IBS and beam cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Burov, A. (Fermilab)

    Simulation of beam cooling usually requires performing certain integral transformations every time step or so, which is a significant burden on the CPU. Examples are the dispersion integrals (Hilbert transforms) in the stochastic cooling, wake fields and IBS integrals. An original method is suggested for fast and sufficiently accurate computation of the integrals. This method is applied for the dispersion integral. Some methodical aspects of the IBS analysis are discussed.


  19. Progress in development of HEDP capabilities in FLASH's Unsplit Staggered Mesh MHD solver

    NASA Astrophysics Data System (ADS)

    Lee, D.; Xia, G.; Daley, C.; Dubey, A.; Gopal, S.; Graziani, C.; Lamb, D.; Weide, K.

    2011-11-01

    FLASH is a publicly available astrophysical community code designed to solve highly compressible multi-physics reactive flows. We are adding capabilities to FLASH that will make it an open science code for the academic HEDP community. Among many important numerical requirements, we consider the following features to be important components necessary to meet our goals for FLASH as an HEDP open toolset. First, we are developing computationally efficient time-stepping integration methods that overcome the stiffness that arises in the equations describing a physical problem when there are disparate time scales. To this end, we are adding two different time-stepping schemes to FLASH that relax the time step limit when diffusive effects are present: an explicit super-time-stepping algorithm (Alexiades et al. in Com. Num. Mech. Eng. 12:31-42, 1996) and a Jacobian-Free Newton-Krylov implicit formulation. These two methods will be integrated into a robust, efficient, and high-order accurate Unsplit Staggered Mesh MHD (USM) solver (Lee and Deane in J. Comput. Phys. 227, 2009). Second, we have implemented an anisotropic Spitzer-Braginskii conductivity model to treat thermal heat conduction along magnetic field lines. Finally, we are implementing the Biermann Battery term to account for spontaneous generation of magnetic fields in the presence of non-parallel temperature and density gradients.

  20. Stability of numerical integration techniques for transient rotor dynamics

    NASA Technical Reports Server (NTRS)

    Kascak, A. F.

    1977-01-01

    A finite element model of a rotor bearing system was analyzed to determine the stability limits of the forward, backward, and centered Euler; Runge-Kutta; Milne; and Adams numerical integration techniques. The analysis concludes that the highest frequency mode determines the maximum time step for a stable solution. Thus, the number of mass elements should be minimized. Increasing the damping can sometimes cause numerical instability. For a uniform shaft, with 10 mass elements, operating at approximately the first critical speed, the maximum time step for the Runge-Kutta, Milne, and Adams methods is that which corresponds to approximately 1 degree of shaft movement. This is independent of rotor dimensions.
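The paper's conclusion that the highest-frequency mode sets the maximum stable time step can be illustrated with the classical RK4 stability function on the test equation y' = λy (a standard textbook analysis, not taken from the paper):

```python
def rk4_amplification(z):
    """Growth factor of classical fourth-order Runge-Kutta applied to the
    scalar test equation y' = lam*y, with z = lam*dt."""
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

def max_stable_dt_rk4(omega_max):
    """For an undamped mode y' = i*omega*y, RK4 remains stable while
    |omega*dt| stays below about 2.828 (2*sqrt(2)); the highest-frequency
    mode omega_max of the model therefore dictates the time step."""
    return 2.828 / omega_max
```

This is why the abstract recommends minimizing the number of mass elements: fewer elements means a lower maximum modal frequency and hence a larger stable step.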

  1. Trajectory errors of different numerical integration schemes diagnosed with the MPTRAC advection module driven by ECMWF operational analyses

    NASA Astrophysics Data System (ADS)

    Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars

    2018-02-01

    The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. 
The selection of the integration scheme and the appropriate time step should take into account the typical altitude ranges as well as the total length of the simulations to achieve the most efficient simulations. In summary, we recommend the third-order Runge-Kutta method with a time step of 170 s or the midpoint scheme with a time step of 100 s for efficient simulations of up to 10 days for the specific ECMWF high-resolution data set considered in this study. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.
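The midpoint scheme recommended above (with a time step of roughly 100 s for tropospheric runs) is the explicit second-order Runge-Kutta method. A minimal sketch for a generic trajectory equation dx/dt = v(x, t), with a scalar position for brevity:

```python
def midpoint_step(v, t, x, dt):
    """One explicit midpoint (second-order Runge-Kutta) step for the
    trajectory equation dx/dt = v(x, t): evaluate the velocity at a
    half-step predictor position, then take the full step with it."""
    x_mid = x + 0.5 * dt * v(x, t)
    return x + dt * v(x_mid, t + 0.5 * dt)

# for the linear field v(x, t) = x, one step gives x * (1 + dt + dt**2 / 2)
x1 = midpoint_step(lambda x, t: x, 0.0, 1.0, 0.1)
```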

  2. USGS perspectives on an integrated approach to watershed and coastal management

    USGS Publications Warehouse

    Larsen, Matthew C.; Hamilton, Pixie A.; Haines, John W.; Mason, Jr., Robert R.

    2010-01-01

The writers discuss three critically important steps necessary for achieving the goal of improved integrated approaches to watershed and coastal protection and management. These steps involve modernization of monitoring networks, creation of common data and web-services infrastructures, and development of modeling, assessment, and research tools. Long-term monitoring is needed for tracking the effectiveness of approaches for controlling land-based sources of nutrients, contaminants, and invasive species. The integration of mapping and monitoring with conceptual and mathematical models and multidisciplinary assessments is important in making well-informed decisions. Moreover, a better integrated data network is essential for mapping, statistical, and modeling applications, and for timely dissemination of data and information products to a broad community of users.

  3. A Lyapunov and Sacker–Sell spectral stability theory for one-step methods

    DOE PAGES

    Steyer, Andrew J.; Van Vleck, Erik S.

    2018-04-13

Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.

  4. A Lyapunov and Sacker–Sell spectral stability theory for one-step methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steyer, Andrew J.; Van Vleck, Erik S.

Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.

  5. Higher Order Time Integration Schemes for the Unsteady Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

The efficiency gains obtained using higher-order implicit Runge-Kutta schemes as compared with the second-order accurate backward difference schemes for the unsteady Navier-Stokes equations are investigated. Three different algorithms for solving the nonlinear system of equations arising at each time step are presented. The first algorithm (NMG) is a pseudo-time-stepping scheme which employs a nonlinear full approximation storage (FAS) agglomeration multigrid method to accelerate convergence. The other two algorithms are based on inexact Newton's methods. The linear system arising at each Newton step is solved using iterative/Krylov techniques, and left preconditioning is used to accelerate convergence of the linear solvers. One of the methods (LMG) uses Richardson's iterative scheme for solving the linear system at each Newton step, while the other (PGMRES) uses the Generalized Minimal Residual method. Results demonstrating the relative superiority of these Newton-based schemes are presented. Efficiency gains as high as a factor of 10 are obtained by combining the higher-order time integration schemes with the more efficient nonlinear solvers.

  6. MTS-MD of Biomolecules Steered with 3D-RISM-KH Mean Solvation Forces Accelerated with Generalized Solvation Force Extrapolation.

    PubMed

    Omelyan, Igor; Kovalenko, Andriy

    2015-04-14

    We developed a generalized solvation force extrapolation (GSFE) approach to speed up multiple time step molecular dynamics (MTS-MD) of biomolecules steered with mean solvation forces obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model with the Kovalenko-Hirata closure). GSFE is based on a set of techniques including the non-Eckart-like transformation of coordinate space separately for each solute atom, extension of the force-coordinate pair basis set followed by selection of the best subset, balancing the normal equations by modified least-squares minimization of deviations, and incremental increase of outer time step in motion integration. Mean solvation forces acting on the biomolecule atoms in conformations at successive inner time steps are extrapolated using a relatively small number of best (closest) solute atomic coordinates and corresponding mean solvation forces obtained at previous outer time steps by converging the 3D-RISM-KH integral equations. The MTS-MD evolution steered with GSFE of 3D-RISM-KH mean solvation forces is efficiently stabilized with our optimized isokinetic Nosé-Hoover chain (OIN) thermostat. We validated the hybrid MTS-MD/OIN/GSFE/3D-RISM-KH integrator on solvated organic and biomolecules of different stiffness and complexity: asphaltene dimer in toluene solvent, hydrated alanine dipeptide, miniprotein 1L2Y, and protein G. The GSFE accuracy and the OIN efficiency allowed us to enlarge outer time steps up to huge values of 1-4 ps while accurately reproducing conformational properties. Quasidynamics steered with 3D-RISM-KH mean solvation forces achieves time scale compression of conformational changes coupled with solvent exchange, resulting in further significant acceleration of protein conformational sampling with respect to real time dynamics. 
Overall, this provided a 50- to 1000-fold effective speedup of conformational sampling for these systems, compared to conventional MD with explicit solvent. We have been able to fold the miniprotein from a fully denatured, extended state in about 60 ns of quasidynamics steered with 3D-RISM-KH mean solvation forces, compared to the average physical folding time of 4-9 μs observed in experiment.

  7. Efficient variable time-stepping scheme for intense field-atom interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerjan, C.; Kosloff, R.

    1993-03-01

The recently developed Residuum method [Tal-Ezer, Kosloff, and Cerjan, J. Comput. Phys. 100, 179 (1992)], a Krylov subspace technique with variable time-step integration for the solution of the time-dependent Schroedinger equation, is applied to the frequently used soft Coulomb potential in an intense laser field. This one-dimensional potential has asymptotic Coulomb dependence with a "softened" singularity at the origin; thus it models more realistic phenomena. Two of the more important quantities usually calculated in this idealized system are the photoelectron and harmonic photon generation spectra. These quantities are shown to be sensitive to the choice of numerical integration scheme: some spectral features are incorrectly calculated or missing altogether. Furthermore, the Residuum method allows much larger grid spacings for equivalent or higher accuracy, in addition to the advantages of variable time stepping. Finally, it is demonstrated that enhanced high-order harmonic generation accompanies intense-field stabilization and that preparation of the atom in an intermediate Rydberg state leads to stabilization at much lower laser intensity.
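Variable time-step integration in general rests on a local error estimate that accepts, rejects, and rescales steps. The sketch below uses generic step doubling with an explicit midpoint step, an assumption chosen for brevity; it is not the Residuum method's Krylov-based estimator:

```python
def adaptive_rk_step(f, t, y, dt, tol):
    """Take one accepted step of y' = f(t, y) with step-doubling error
    control: compare one midpoint-RK2 step of size dt against two of size
    dt/2, halve dt on rejection, and propose the next dt from the error.
    Returns (new_t, new_y, suggested_next_dt)."""
    def rk2(t0, y0, h):
        return y0 + h * f(t0 + 0.5 * h, y0 + 0.5 * h * f(t0, y0))

    big = rk2(t, y, dt)
    half = rk2(t + 0.5 * dt, rk2(t, y, 0.5 * dt), 0.5 * dt)
    err = abs(half - big)
    if err > tol:                              # reject: retry with a smaller step
        return adaptive_rk_step(f, t, y, 0.5 * dt, tol)
    grow = 0.9 * (tol / max(err, 1e-300)) ** (1.0 / 3.0)
    return t + dt, half, dt * min(2.0, max(0.5, grow))

# integrate y' = -y from t = 0 to 1 with automatic step-size selection
t, y, dt = 0.0, 1.0, 0.1
while t < 1.0:
    t, y, dt = adaptive_rk_step(lambda s, u: -u, t, y, min(dt, 1.0 - t), 1e-6)
```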

  8. Molecular dynamics at low time resolution.

    PubMed

    Faccioli, P

    2010-10-28

The internal dynamics of macromolecular systems is characterized by widely separated time scales, ranging from fractions of picoseconds to nanoseconds. In ordinary molecular dynamics simulations, the elementary time step Δt used to integrate the equations of motion needs to be chosen much smaller than the shortest time scale in order not to cut off physical effects. We show that in systems obeying the overdamped Langevin equation, it is possible to systematically correct for such discretization errors. This is done by analytically averaging out the fast molecular dynamics which occurs at time scales smaller than Δt, using a renormalization-group-based technique. Such a procedure gives rise to a time-dependent, calculable correction to the diffusion coefficient. The resulting effective Langevin equation describes by construction the same long-time dynamics, but has a lower time-resolution power, hence it can be integrated using larger time steps Δt. We illustrate and validate this method by studying the diffusion of a point particle in a one-dimensional toy model and the denaturation of a protein.
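The overdamped Langevin dynamics underlying this approach is usually discretised with the Euler-Maruyama scheme. A generic one-dimensional sketch (with the mobility and k_B T absorbed into a single diffusion constant D; the paper's contribution is the Δt-dependent correction to D, which is not implemented here):

```python
import math
import random

def overdamped_langevin(force, x0, dt, n_steps, diffusion, seed=0):
    """Euler-Maruyama discretisation of the overdamped Langevin equation
    dx = D*F(x)*dt + sqrt(2*D*dt)*dW, with D the diffusion coefficient
    and dW a standard Wiener increment."""
    rng = random.Random(seed)
    x = x0
    for _ in range(n_steps):
        x += diffusion * force(x) * dt \
             + math.sqrt(2.0 * diffusion * dt) * rng.gauss(0.0, 1.0)
    return x

# free diffusion (F = 0): after time T = n*dt the displacement variance is 2*D*T
endpoints = [overdamped_langevin(lambda x: 0.0, 0.0, 0.01, 100, 1.0, seed=s)
             for s in range(2000)]
```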

  9. Scaled Runge-Kutta algorithms for handling dense output

    NASA Technical Reports Server (NTRS)

    Horn, M. K.

    1981-01-01

    Low order Runge-Kutta algorithms are developed which determine the solution of a system of ordinary differential equations at any point within a given integration step, as well as at the end of each step. The scaled Runge-Kutta methods are designed to be used with existing Runge-Kutta formulas, using the derivative evaluations of these defining algorithms as the core of the system. For a slight increase in computing time, the solution may be generated within the integration step, improving the efficiency of the Runge-Kutta algorithms, since the step length need no longer be severely reduced to coincide with the desired output point. Scaled Runge-Kutta algorithms are presented for orders 3 through 5, along with accuracy comparisons between the defining algorithms and their scaled versions for a test problem.
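A common alternative device for dense output inside a step is cubic Hermite interpolation between the step endpoints; note this is a generic sketch for contrast, while the paper's scaled Runge-Kutta formulas instead reuse the stage derivative evaluations of the defining method:

```python
def hermite_dense_output(t0, y0, f0, t1, y1, f1, t):
    """Evaluate the cubic Hermite interpolant through the step endpoints
    (t0, y0) and (t1, y1) with slopes f0 and f1, giving an approximate
    solution anywhere inside the integration step without shrinking the
    step to hit the requested output point."""
    h = t1 - t0
    s = (t - t0) / h
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * y0 + h10 * h * f0 + h01 * y1 + h11 * h * f1

# the interpolant is exact for cubics: y = t**3 on [0, 1], sampled at t = 0.5
mid = hermite_dense_output(0.0, 0.0, 0.0, 1.0, 1.0, 3.0, 0.5)
```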

  10. An arbitrary-order staggered time integrator for the linear acoustic wave equation

    NASA Astrophysics Data System (ADS)

    Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo

    2018-02-01

We suggest a staggered time integrator whose order of accuracy can be arbitrarily extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed, based on an error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost several-fold, but also allows us to flexibly set the modelling parameters such as the time step length, grid interval and P-wave speed. It is demonstrated that the proposed method can almost eliminate temporal dispersive errors during long-term simulations regardless of the heterogeneity of the media and the time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in practical use for imaging algorithms or inverse problems.
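The baseline that such arbitrary-order staggered integrators extend is the second-order leapfrog scheme, in which displacement and velocity live half a time step apart. A minimal 1D sketch with fixed ends (this is the standard second-order scheme, not the paper's arbitrary-order method):

```python
import math

def leapfrog_wave_1d(u, v, c, dx, dt, n_steps):
    """Second-order staggered (leapfrog) update for the 1D acoustic wave
    equation u_tt = c**2 * u_xx with fixed (Dirichlet) ends. The array v
    holds the velocity u_t, staggered half a time step from u: each step
    kicks v with the discrete Laplacian of u, then drifts u with v."""
    assert c * dt / dx <= 1.0, "CFL limit exceeded"
    n = len(u)
    for _ in range(n_steps):
        for i in range(1, n - 1):
            v[i] += dt * c**2 * (u[i - 1] - 2.0 * u[i] + u[i + 1]) / dx**2
        for i in range(1, n - 1):
            u[i] += dt * v[i]
    return u, v

# lowest standing-wave mode on [0, 1]; amplitude stays bounded at CFL number 0.5
n = 65
u0 = [math.sin(math.pi * i / (n - 1)) for i in range(n)]
u, v = leapfrog_wave_1d(u0, [0.0] * n, c=1.0, dx=1.0 / (n - 1),
                        dt=0.5 / (n - 1), n_steps=400)
```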

  11. Molecular dynamics based enhanced sampling of collective variables with very large time steps.

    PubMed

    Chen, Pei-Yang; Tuckerman, Mark E

    2018-01-14

Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to use similarly large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
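The "standard multiple time-stepping" baseline the paper improves on is the RESPA splitting: cheap fast forces subcycled inside each half-kick of the expensive slow forces. A minimal one-particle sketch (without the isokinetic constraints the paper adds to defeat resonance):

```python
def mts_verlet(x, p, fast_force, slow_force, dt_outer, n_inner, n_outer, mass=1.0):
    """RESPA-style multiple-time-step velocity Verlet: the stiff fast_force
    is integrated with n_inner velocity-Verlet sub-steps sandwiched between
    half-kicks of the slowly varying (expensive) slow_force."""
    dt_inner = dt_outer / n_inner
    for _ in range(n_outer):
        p += 0.5 * dt_outer * slow_force(x)       # outer half-kick
        for _ in range(n_inner):                  # inner velocity Verlet
            p += 0.5 * dt_inner * fast_force(x)
            x += dt_inner * p / mass
            p += 0.5 * dt_inner * fast_force(x)
        p += 0.5 * dt_outer * slow_force(x)       # outer half-kick
    return x, p

# split harmonic oscillator: stiff spring k=100 (fast) plus soft spring k=1 (slow)
x, p = mts_verlet(1.0, 0.0, lambda x: -100.0 * x, lambda x: -1.0 * x,
                  dt_outer=0.05, n_inner=10, n_outer=100)
energy = 0.5 * p**2 + 0.5 * 101.0 * x**2
```

Because the scheme is symplectic, the total energy (50.5 initially here) oscillates but does not drift; resonance problems appear only when the outer step approaches half the fast period, which is what limits plain RESPA.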

  12. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    NASA Astrophysics Data System (ADS)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations that employ these methods to take similarly large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  13. On computational methods for crashworthiness

    NASA Technical Reports Server (NTRS)

    Belytschko, T.

    1992-01-01

    The evolution of computational methods for crashworthiness and related fields is described and linked with the decreasing cost of computational resources and with improvements in computation methodologies. The latter includes more effective time integration procedures and more efficient elements. Some recent developments in methodologies and future trends are also summarized. These include multi-time step integration (or subcycling), further improvements in elements, adaptive meshes, and the exploitation of parallel computers.

  14. Exploring inductive linearization for pharmacokinetic-pharmacodynamic systems of nonlinear ordinary differential equations.

    PubMed

    Hasegawa, Chihiro; Duffull, Stephen B

    2018-02-01

    Pharmacokinetic-pharmacodynamic systems are often expressed with nonlinear ordinary differential equations (ODEs). While there are numerous methods to solve such ODEs, these methods generally rely on time-stepping solutions (e.g., Runge-Kutta) that need to be matched to the characteristics of the problem at hand. The primary aim of this study was to explore the performance of an inductive approximation which iteratively converts nonlinear ODEs to linear time-varying systems that can then be solved algebraically or numerically. The inductive approximation is applied to three examples: a simple nonlinear pharmacokinetic model with Michaelis-Menten elimination (E1), an integrated glucose-insulin model, and an HIV viral load model with recursive feedback systems (E2 and E3, respectively). The secondary aim of this study was to explore the potential advantages of analytically solving linearized ODEs with two examples: again E3, with stiff differential equations, and a turnover model of luteinizing hormone with a surge function (E4). The inductive linearization coupled with a matrix exponential solution provided accurate predictions for all examples, with solution times comparable to the matched time-stepping solutions for the nonlinear ODEs. The time-stepping solutions, however, did not perform well for E4, particularly when the surge was approximated by a square wave. In circumstances where either a linear ODE is particularly desirable or the uncertainty in matching the integrator to the ODE system poses a potential risk, the inductive approximation method coupled with an analytical integration method would be an appropriate alternative.
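
    A minimal sketch of the inductive linearization idea, assuming a Michaelis-Menten elimination model (E1-like) and a scalar exact-exponential solve for each frozen-rate step; all parameter values below are illustrative, not from the study:

```python
import numpy as np

def mm_inductive(c0=10.0, vmax=1.0, km=2.5, t_end=5.0, n=200, iters=20):
    """Inductive linearization of Michaelis-Menten elimination
    dC/dt = -Vmax*C/(Km+C): freeze C in the denominator at the previous
    iterate, giving a linear time-varying ODE whose scalar 'matrix
    exponential' solution is an exact exponential update per step."""
    t = np.linspace(0.0, t_end, n)
    dt = t[1] - t[0]
    c = np.full(n, c0)                       # initial guess: constant profile
    for _ in range(iters):
        lam = -vmax / (km + c)               # frozen time-varying rate
        c_new = np.empty(n)
        c_new[0] = c0
        for i in range(1, n):
            # trapezoidal average of the frozen rate over the step
            c_new[i] = c_new[i - 1] * np.exp(0.5 * (lam[i - 1] + lam[i]) * dt)
        c = c_new
    return t, c

t, c = mm_inductive()
```

    Each sweep solves a *linear* problem exactly; iterating the sweeps recovers the nonlinear solution, which is the trade the abstract describes.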

  15. The Marginal Teacher: A Step-by-Step Guide to Fair Procedures for Identification and Dismissal

    ERIC Educational Resources Information Center

    Lawrence, C. Edward

    2005-01-01

    This third edition offers timely solutions for successfully dealing with marginal teachers. Lawrence illustrates the proper actions that principals should integrate into the evaluation processes to successfully gather documentation to help improve or terminate an ineffective teacher. Complete with tools and resources to streamline the evaluation…

  16. An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.; Barnes, D. C.

    2011-08-01

    This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom with regard to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme.
In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.
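
    The nonlinearly converged, time-centered update at the core of such schemes can be illustrated on a single particle in a prescribed field (implicit midpoint iterated to a tight tolerance; this is a one-particle sketch of the energy-conserving idea, not the Vlasov-Ampère field solver):

```python
def implicit_push(x, v, dt, efield, tol=1e-13, max_iter=200):
    """Nonlinearly converged implicit-midpoint (Crank-Nicolson) particle
    push: the field (charge/mass absorbed) is evaluated at the
    time-centered position and the update is iterated to tolerance."""
    x_new, v_new = x, v
    for _ in range(max_iter):
        a = efield(0.5 * (x + x_new))       # time-centered field
        v_next = v + dt * a
        x_next = x + 0.5 * dt * (v + v_next)
        if abs(x_next - x_new) < tol and abs(v_next - v_new) < tol:
            return x_next, v_next
        x_new, v_new = x_next, v_next
    return x_new, v_new

# harmonic field E(x) = -x: the energy x**2 + v**2 is preserved to the
# nonlinear tolerance, with the time step limited only by accuracy
x, v = 1.0, 0.0
for _ in range(500):
    x, v = implicit_push(x, v, 0.5, lambda y: -y)
```

    Tight nonlinear convergence is what buys the exact (round-off-level) energy balance; a loosely converged iteration would leak energy step by step.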

  17. Exact charge and energy conservation in implicit PIC with mapped computational meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Barnes, D. C.

    This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom with regard to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. 
In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.

  18. Semi-implicit time integration of atmospheric flows with characteristic-based flux partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Debojyoti; Constantinescu, Emil M.

    2016-06-23

    Here, this paper presents a characteristic-based flux partitioning for the semi-implicit time integration of atmospheric flows. Nonhydrostatic models require the solution of the compressible Euler equations. The acoustic time scale is significantly faster than the advective scale, yet it is typically not relevant to atmospheric and weather phenomena. The acoustic and advective components of the hyperbolic flux are separated in the characteristic space. High-order, conservative additive Runge-Kutta methods are applied to the partitioned equations so that the acoustic component is integrated in time implicitly with an unconditionally stable method, while the advective component is integrated explicitly. The time step of the overall algorithm is thus determined by the advective scale. Benchmark flow problems are used to demonstrate the accuracy, stability, and convergence of the proposed algorithm. The computational cost of the partitioned semi-implicit approach is compared with that of explicit time integration.
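
    For a scalar model equation, the implicit-explicit partitioning reduces to a first-order IMEX Euler step (a sketch of the splitting idea only; the paper uses high-order additive Runge-Kutta methods and a characteristic-space flux split):

```python
import numpy as np

def imex_euler(y0, t_end, dt, f_explicit, stiff_lam):
    """First-order IMEX (implicit-explicit) Euler for
    y' = stiff_lam*y + f_explicit(y): the stiff linear (acoustic-like)
    term is taken implicitly, the slow (advective-like) term explicitly,
    so dt is set by the slow scale only."""
    y, t = y0, 0.0
    while t < t_end - 1e-12:
        # implicit solve for the linear stiff term is a scalar division
        y = (y + dt * f_explicit(y)) / (1.0 - dt * stiff_lam)
        t += dt
    return y

# lam = -1000 is far outside the explicit stability region for dt = 0.05,
# yet the IMEX update remains stable
y = imex_euler(1.0, 1.0, 0.05, np.sin, -1000.0)
```

    In the PDE setting the scalar division becomes a linear solve for the implicitly treated acoustic component, which is exactly where the cost comparison in the abstract comes from.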

  19. Enhancing systematic implementation of skills training modules for persons with schizophrenia: three steps forward and two steps back?

    PubMed

    van Erp, Nicole H J; van Vugt, Maaike; Verhoeven, Dorien; Kroon, Hans

    2009-01-01

    This brief report addresses the systematic implementation of skills training modules for persons with schizophrenia or related disorders in three Dutch mental health agencies. Information on barriers, strategies and integration into routine daily practice was gathered at 0, 12 and 24 months through interviews with managers, program leaders, trainers, practitioners and clients. Overall implementation of the skills training modules for 74% of the persons with schizophrenia or related disorders was not feasible. Implementation was impeded by an incapable program leader, organizational changes, disappointing referrals and loss of trainers. The agencies made important steps forward to integrate the modules into routine daily practice. A reach percentage of 74% in two years' time is too ambitious and needs to be adjusted. Systematic integration of the modules into routine daily practice is feasible, but requires solid program management and continuous effort to involve clients and practitioners.

  20. Optimization of Time-Dependent Particle Tracing Using Tetrahedral Decomposition

    NASA Technical Reports Server (NTRS)

    Kenwright, David; Lane, David

    1995-01-01

    An efficient algorithm is presented for computing particle paths, streak lines and time lines in time-dependent flows with moving curvilinear grids. The integration, velocity interpolation and step-size control are all performed in physical space which avoids the need to transform the velocity field into computational space. This leads to higher accuracy because there are no Jacobian matrix approximations or expensive matrix inversions. Integration accuracy is maintained using an adaptive step-size control scheme which is regulated by the path line curvature. The problem of cell-searching, point location and interpolation in physical space is simplified by decomposing hexahedral cells into tetrahedral cells. This enables the point location to be done analytically and substantially faster than with a Newton-Raphson iterative method. Results presented show this algorithm is up to six times faster than particle tracers which operate on hexahedral cells yet produces almost identical particle trajectories.
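
    The analytic point location enabled by tetrahedral decomposition can be sketched with barycentric coordinates (an illustrative helper, not the paper's implementation):

```python
import numpy as np

def barycentric_coords(p, tet):
    """Analytic point location in a tetrahedron: solve a 3x3 linear
    system for the barycentric coordinates; p lies inside iff all four
    are non-negative. This closed-form test is what replaces iterative
    Newton-Raphson searching in hexahedral cells."""
    a, b, c, d = (np.asarray(v, dtype=float) for v in tet)
    T = np.column_stack((b - a, c - a, d - a))   # edge-vector matrix
    l1, l2, l3 = np.linalg.solve(T, np.asarray(p, dtype=float) - a)
    return np.array([1.0 - l1 - l2 - l3, l1, l2, l3])

tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
inside = barycentric_coords((0.25, 0.25, 0.25), tet)
outside = barycentric_coords((2.0, 2.0, 2.0), tet)
```

    The same coordinates double as interpolation weights for the velocity at p, which is why the decomposition also simplifies the interpolation step.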

  1. Finite element computation of a viscous compressible free shear flow governed by the time dependent Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Cooke, C. H.; Blanchard, D. K.

    1975-01-01

    A finite element algorithm for solution of fluid flow problems characterized by the two-dimensional compressible Navier-Stokes equations was developed. The program is intended for viscous compressible high speed flow; hence, primitive variables are utilized. The physical solution was approximated by trial functions which at a fixed time are piecewise cubic on triangular elements. The Galerkin technique was employed to determine the finite-element model equations. A leapfrog time integration is used for marching asymptotically from initial to steady state, with iterated integrals evaluated by numerical quadratures. The nonsymmetric linear systems of equations governing time transition from step to step are solved using a rather economical block iterative triangular decomposition scheme. The concept was applied to the numerical computation of a free shear flow. Numerical results of the finite-element method are in excellent agreement with those obtained from a finite difference solution of the same problem.

  2. Development, validation and operating room-transfer of a six-step laparoscopic training program for the vesicourethral anastomosis.

    PubMed

    Klein, Jan; Teber, Dogu; Frede, Tom; Stock, Christian; Hruza, Marcel; Gözen, Ali; Seemann, Othmar; Schulze, Michael; Rassweiler, Jens

    2013-03-01

    Development and full validation of a laparoscopic training program for stepwise learning of a reproducible application of a standardized laparoscopic anastomosis technique, and its integration into the clinical course. The training of the vesicourethral anastomosis (VUA) was divided into six simple standardized steps. To fix the objective criteria, four experienced surgeons performed the stepwise training protocol. The training performance of thirty-eight participants with no previous laparoscopic experience was investigated. The times needed to manage each training step and the total training time were recorded. The integration into the clinical course was investigated, and the training results and the corresponding steps during laparoscopic radical prostatectomy (LRP) were analyzed. Data analysis of corresponding operating room (OR) sections of 793 LRPs was performed, and validity criteria were determined from these data. In the laboratory section, a significant reduction of OR time for every step was seen in all participants: coordination, 62%; longitudinal incision, 52%; inverted U-shape incision, 43%; plexus, 47%; anastomosis catheter model, 38%; VUA, 38%. The laboratory section required a total time of 29 hours (minimum: 16 hours; maximum: 42 hours). All participants had shorter execution times in the laboratory than under real conditions; the best match was found with the VUA model, and performing an anastomosis under real conditions required 25% more time. By using the training protocol, the performance of the VUA is comparable to that of a surgeon with experience of about 50 laparoscopic VUAs. Data analysis proved content, construct, and prognostic validity. The use of stepwise training approaches enables a surgeon to learn and reproduce complex reconstructive surgical tasks, e.g., the VUA, in a safe environment. 
The validity of the designed system was demonstrated at all levels, and the system should be used as a standard in clinical surgical training in laparoscopic reconstructive urology.

  3. Method and apparatus for in-system redundant array repair on integrated circuits

    DOEpatents

    Bright, Arthur A [Croton-on-Hudson, NY; Crumley, Paul G [Yorktown Heights, NY; Dombrowa, Marc B [Bronx, NY; Douskey, Steven M [Rochester, MN; Haring, Rudolf A [Cortlandt Manor, NY; Oakland, Steven F [Colchester, VT; Ouellette, Michael R [Westford, VT; Strissel, Scott A [Byron, MN

    2008-07-29

    Disclosed is a method of repairing an integrated circuit of the type comprising a multitude of memory arrays and a fuse box holding control data for controlling redundancy logic of the arrays. The method comprises the steps of providing the integrated circuit with a control data selector for passing the control data from the fuse box to the memory arrays; providing a source of alternate control data, external of the integrated circuit; and connecting the source of alternate control data to the control data selector. The method comprises the further step of, at a given time, passing the alternate control data from the source thereof, through the control data selector and to the memory arrays to control the redundancy logic of the memory arrays.

  4. Method and apparatus for in-system redundant array repair on integrated circuits

    DOEpatents

    Bright, Arthur A [Croton-on-Hudson, NY; Crumley, Paul G [Yorktown Heights, NY; Dombrowa, Marc B [Bronx, NY; Douskey, Steven M [Rochester, MN; Haring, Rudolf A [Cortlandt Manor, NY; Oakland, Steven F [Colchester, VT; Ouellette, Michael R [Westford, VT; Strissel, Scott A [Byron, MN

    2008-07-08

    Disclosed is a method of repairing an integrated circuit of the type comprising a multitude of memory arrays and a fuse box holding control data for controlling redundancy logic of the arrays. The method comprises the steps of providing the integrated circuit with a control data selector for passing the control data from the fuse box to the memory arrays; providing a source of alternate control data, external of the integrated circuit; and connecting the source of alternate control data to the control data selector. The method comprises the further step of, at a given time, passing the alternate control data from the source thereof, through the control data selector and to the memory arrays to control the redundancy logic of the memory arrays.

  5. Method and apparatus for in-system redundant array repair on integrated circuits

    DOEpatents

    Bright, Arthur A.; Crumley, Paul G.; Dombrowa, Marc B.; Douskey, Steven M.; Haring, Rudolf A.; Oakland, Steven F.; Ouellette, Michael R.; Strissel, Scott A.

    2007-12-18

    Disclosed is a method of repairing an integrated circuit of the type comprising a multitude of memory arrays and a fuse box holding control data for controlling redundancy logic of the arrays. The method comprises the steps of providing the integrated circuit with a control data selector for passing the control data from the fuse box to the memory arrays; providing a source of alternate control data, external of the integrated circuit; and connecting the source of alternate control data to the control data selector. The method comprises the further step of, at a given time, passing the alternate control data from the source thereof, through the control data selector and to the memory arrays to control the redundancy logic of the memory arrays.

  6. Comparison of IMRT planning with two-step and one-step optimization: a strategy for improving therapeutic gain and reducing the integral dose

    NASA Astrophysics Data System (ADS)

    Abate, A.; Pressello, M. C.; Benassi, M.; Strigari, L.

    2009-12-01

    The aim of this study was to evaluate the effectiveness and efficiency in inverse IMRT planning of one-step optimization with the step-and-shoot (SS) technique as compared to traditional two-step optimization using the sliding windows (SW) technique. The Pinnacle IMRT TPS allows both one-step and two-step approaches. The same beam setup for five head-and-neck tumor patients and dose-volume constraints were applied for all optimization methods. Two-step plans were produced converting the ideal fluence with or without a smoothing filter into the SW sequence. One-step plans, based on direct machine parameter optimization (DMPO), had the maximum number of segments per beam set at 8, 10, 12, producing a directly deliverable sequence. Moreover, the plans were generated whether a split-beam was used or not. Total monitor units (MUs), overall treatment time, cost function and dose-volume histograms (DVHs) were estimated for each plan. PTV conformality and homogeneity indexes and normal tissue complication probability (NTCP) that are the basis for improving therapeutic gain, as well as non-tumor integral dose (NTID), were evaluated. A two-sided t-test was used to compare quantitative variables. All plans showed similar target coverage. Compared to two-step SW optimization, the DMPO-SS plans resulted in lower MUs (20%), NTID (4%) as well as NTCP values. Differences of about 15-20% in the treatment delivery time were registered. DMPO generates less complex plans with identical PTV coverage, providing lower NTCP and NTID, which is expected to reduce the risk of secondary cancer. It is an effective and efficient method and, if available, it should be favored over the two-step IMRT planning.

  7. Stability of mixed time integration schemes for transient thermal analysis

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Lin, J. I.

    1982-01-01

    A current research topic in coupled-field problems is the development of effective transient algorithms that permit different time integration methods with different time steps to be used simultaneously in various regions of the problems. The implicit-explicit approach seems to be very successful in structural, fluid, and fluid-structure problems. This paper summarizes this research direction. A family of mixed time integration schemes, with the capabilities mentioned above, is also introduced for transient thermal analysis. A stability analysis and the computer implementation of this technique are also presented. In particular, it is shown that the mixed time implicit-explicit methods provide a natural framework for the further development of efficient, clean, modularized computer codes.
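
    The node-partitioned implicit-explicit idea can be sketched for 1D heat conduction (backward Euler on flagged nodes, forward Euler elsewhere; the mask, grid and step sizes below are illustrative, and this node-wise sketch stands in for the element-wise partition discussed in the paper):

```python
import numpy as np

def mixed_heat_step(u, dt, dx, alpha, implicit_mask):
    """One step of a mixed-time-integration update for 1D heat
    conduction: nodes flagged in implicit_mask advance by backward
    Euler, the rest by forward Euler; boundary values are held fixed."""
    n = len(u)
    r = alpha * dt / dx ** 2
    A = np.eye(n)
    b = u.copy()
    for i in range(1, n - 1):
        if implicit_mask[i]:                 # backward-Euler row
            A[i, i - 1] -= r
            A[i, i] += 2.0 * r
            A[i, i + 1] -= r
        else:                                # forward-Euler row
            b[i] = u[i] + r * (u[i - 1] - 2.0 * u[i] + u[i + 1])
    return np.linalg.solve(A, b)

# explicit on the left half of the rod, implicit on the right half
n = 51
x = np.linspace(0.0, 1.0, n)
u = np.sin(np.pi * x)
dx = x[1] - x[0]
dt = 0.4 * dx ** 2                           # stable for the explicit part
mask = x > 0.5
for _ in range(100):
    u = mixed_heat_step(u, dt, dx, 1.0, mask)
```

    The time step is limited only by the explicit region; regions needing stiffness protection pay the implicit solve cost, which is the trade-off the mixed schemes exploit.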

  8. Implicit-Explicit Time Integration Methods for Non-hydrostatic Atmospheric Models

    NASA Astrophysics Data System (ADS)

    Gardner, D. J.; Guerra, J. E.; Hamon, F. P.; Reynolds, D. R.; Ullrich, P. A.; Woodward, C. S.

    2016-12-01

    The Accelerated Climate Modeling for Energy (ACME) project is developing a non-hydrostatic atmospheric dynamical core for high-resolution coupled climate simulations on Department of Energy leadership class supercomputers. An important factor in computational efficiency is avoiding the overly restrictive time step size limitations of fully explicit time integration methods due to the stiffest modes present in the model (acoustic waves). In this work we compare the accuracy and performance of different Implicit-Explicit (IMEX) splittings of the non-hydrostatic equations and various Additive Runge-Kutta (ARK) time integration methods. Results utilizing the Tempest non-hydrostatic atmospheric model and the ARKode package show that the choice of IMEX splitting and ARK scheme has a significant impact on the maximum stable time step size as well as solution quality. Horizontally Explicit Vertically Implicit (HEVI) approaches paired with certain ARK methods lead to greatly improved runtimes. With effective preconditioning IMEX splittings that incorporate some implicit horizontal dynamics can be competitive with HEVI results. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-699187

  9. Application of an Integrated Methodology for Propulsion and Airframe Control Design to a STOVL Aircraft

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Mattern, Duane

    1994-01-01

    An advanced methodology for integrated flight propulsion control (IFPC) design for future aircraft, which will use propulsion system generated forces and moments for enhanced maneuver capabilities, is briefly described. This methodology has the potential to address in a systematic manner the coupling between the airframe and the propulsion subsystems typical of such enhanced maneuverability aircraft. Application of the methodology to a short take-off vertical landing (STOVL) aircraft in the landing approach to hover transition flight phase is presented with brief description of the various steps in the IFPC design methodology. The details of the individual steps have been described in previous publications and the objective of this paper is to focus on how the components of the control system designed at each step integrate into the overall IFPC system. The full nonlinear IFPC system was evaluated extensively in nonreal-time simulations as well as piloted simulations. Results from the nonreal-time evaluations are presented in this paper. Lessons learned from this application study are summarized in terms of areas of potential improvements in the STOVL IFPC design as well as identification of technology development areas to enhance the applicability of the proposed design methodology.

  10. Preconditioned conjugate-gradient methods for low-speed flow calculations

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Ng, Wing-Fai; Liou, Meng-Sing

    1993-01-01

    An investigation is conducted into the viability of using a generalized Conjugate Gradient-like method as an iterative solver to obtain steady-state solutions of very low-speed fluid flow problems. Low-speed flow at Mach 0.1 over a backward-facing step is chosen as a representative test problem. The unsteady form of the two dimensional, compressible Navier-Stokes equations is integrated in time using discrete time-steps. The Navier-Stokes equations are cast in an implicit, upwind finite-volume, flux split formulation. The new iterative solver is used to solve a linear system of equations at each step of the time-integration. Preconditioning techniques are used with the new solver to enhance the stability and convergence rate of the solver and are found to be critical to the overall success of the solver. A study of various preconditioners reveals that a preconditioner based on the Lower-Upper Successive Symmetric Over-Relaxation iterative scheme is more efficient than a preconditioner based on Incomplete L-U factorizations of the iteration matrix. The performance of the new preconditioned solver is compared with a conventional Line Gauss-Seidel Relaxation (LGSR) solver. Overall speed-up factors of 28 (in terms of global time-steps required to converge to a steady-state solution) and 20 (in terms of total CPU time on one processor of a CRAY-YMP) are found in favor of the new preconditioned solver, when compared with the LGSR solver.
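
    A minimal preconditioned Krylov solver in the spirit of the abstract (a hand-rolled conjugate gradient with a Jacobi preconditioner on a symmetric positive-definite model problem; the paper's solver targets a nonsymmetric system with LU-SSOR or ILU preconditioning, which this sketch does not reproduce):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient with a user-supplied cheap
    approximate inverse M_inv applied to the residual each iteration."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M_inv(r)                         # precondition the new residual
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# tridiagonal SPD model of the linear system at one implicit time step
n = 100
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
diag = np.diag(A).copy()
x, iters = pcg(A, b, lambda r: r / diag)
```

    As the abstract notes, the choice of M_inv is critical: a preconditioner that captures more of A's structure (e.g., an approximate factorization rather than the diagonal) cuts the iteration count further.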

  11. Preconditioned Conjugate Gradient methods for low speed flow calculations

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Ng, Wing-Fai; Liou, Meng-Sing

    1993-01-01

    An investigation is conducted into the viability of using a generalized Conjugate Gradient-like method as an iterative solver to obtain steady-state solutions of very low-speed fluid flow problems. Low-speed flow at Mach 0.1 over a backward-facing step is chosen as a representative test problem. The unsteady form of the two dimensional, compressible Navier-Stokes equations are integrated in time using discrete time-steps. The Navier-Stokes equations are cast in an implicit, upwind finite-volume, flux split formulation. The new iterative solver is used to solve a linear system of equations at each step of the time-integration. Preconditioning techniques are used with the new solver to enhance the stability and the convergence rate of the solver and are found to be critical to the overall success of the solver. A study of various preconditioners reveals that a preconditioner based on the lower-upper (L-U)-successive symmetric over-relaxation iterative scheme is more efficient than a preconditioner based on incomplete L-U factorizations of the iteration matrix. The performance of the new preconditioned solver is compared with a conventional line Gauss-Seidel relaxation (LGSR) solver. Overall speed-up factors of 28 (in terms of global time-steps required to converge to a steady-state solution) and 20 (in terms of total CPU time on one processor of a CRAY-YMP) are found in favor of the new preconditioned solver, when compared with the LGSR solver.

  12. Exponential integration algorithms applied to viscoplasticity

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.; Walker, Kevin P.

    1991-01-01

    Four linear exponential integration algorithms (two implicit, one explicit, and one predictor/corrector) are applied to a viscoplastic model to assess their capabilities. Viscoplasticity comprises a system of coupled, nonlinear, stiff, first-order, ordinary differential equations which are a challenge to integrate by any means. Two of the algorithms (the predictor/corrector and one of the implicits) give outstanding results, even for very large time steps.
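
    The simplest member of this algorithm family, exponential Euler, can be sketched for a stiff scalar model (illustrative parameters, not the paper's viscoplastic model):

```python
import numpy as np

def exp_euler(y0, lam, nonlin, dt, n_steps):
    """Exponential (exponential-Euler) integration for y' = lam*y + N(y):
    the stiff linear part is propagated exactly by exp(lam*dt) and only
    the nonlinearity N is frozen over the step."""
    e = np.exp(lam * dt)
    phi1 = (e - 1.0) / (lam * dt)            # phi_1(lam*dt) weight
    y = y0
    for _ in range(n_steps):
        y = e * y + dt * phi1 * nonlin(y)
    return y

# stiff relaxation with lam*dt = -50: an ordinary explicit scheme at this
# step size would blow up; the exponential step lands on the slow manifold
y = exp_euler(1.0, -1000.0, np.cos, 0.05, 200)
```

    Treating the stiff linear part exactly is what lets these schemes take the "very large time steps" the abstract reports for the best-performing variants.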

  13. Numerical solution methods for viscoelastic orthotropic materials

    NASA Technical Reports Server (NTRS)

    Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.

    1988-01-01

    Numerical solution methods for viscoelastic orthotropic materials, specifically fiber-reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direct solution of the Volterra integral, Zienkiewicz's linear Prony series method, and a new method called the Nonlinear Differential Equation Method (NDEM), which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time step size stability, computer solution time, and computer memory storage. The Volterra integral approach allowed the implementation of higher-order solution techniques but had difficulties solving singular and weakly singular compliance functions. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials. The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.
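
    The recursive Prony-series update that avoids storing the strain history can be sketched as follows (a linear illustrative form with made-up moduli; the paper's NDEM uses a modified, nonlinear Prony series that this sketch does not include):

```python
import numpy as np

def prony_stress(strain, dt, e_inf, e_i, tau_i):
    """Step-by-step evaluation of the linear viscoelastic hereditary
    integral with Prony-series modulus E(t) = e_inf + sum e_i*exp(-t/tau_i).
    One internal variable per Prony term is updated recursively, so the
    full strain history never needs to be stored."""
    e_i = np.asarray(e_i, dtype=float)
    tau_i = np.asarray(tau_i, dtype=float)
    a = np.exp(-dt / tau_i)                  # per-term decay over one step
    q = np.zeros_like(e_i)                   # internal history variables
    stress = np.zeros(len(strain))
    stress[0] = e_inf * strain[0]
    for n in range(1, len(strain)):
        d_eps = strain[n] - strain[n - 1]
        q = a * q + e_i * (tau_i / dt) * (1.0 - a) * d_eps
        stress[n] = e_inf * strain[n] + q.sum()
    return stress

# relaxation test: unit strain applied after the first step and held
t = np.arange(0.0, 10.0, 0.01)
eps = np.ones_like(t)
eps[0] = 0.0
s = prony_stress(eps, 0.01, e_inf=1.0, e_i=[0.5, 0.3], tau_i=[0.2, 2.0])
```

    This per-step recursion is the memory advantage the comparison criteria in the abstract are weighing against direct evaluation of the Volterra integral.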

  14. The Crank Nicolson Time Integrator for EMPHASIS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGregor, Duncan Alisdair Odum; Love, Edward; Kramer, Richard Michael Jack

    2018-03-01

    We investigate the use of implicit time integrators for finite element time domain approximations of Maxwell's equations in vacuum. We discretize Maxwell's equations in time using Crank-Nicolson and in 3D space using compatible finite elements. We solve the system by taking a single step of Newton's method and inverting the Eddy-Current Schur complement, allowing for the use of standard preconditioning techniques. This approach also generalizes to more complex material models that can include the unsplit PML. We present verification results and demonstrate performance at CFL numbers up to 1000.
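
    For a scalar test equation the Crank-Nicolson update reduces to a single amplification factor, which makes its unconditional stability easy to see (a sketch of the time discretization only; in the report the analogous linear solve is the Eddy-Current Schur-complement inversion):

```python
import numpy as np

def crank_nicolson(y0, lam, dt, n_steps):
    """Crank-Nicolson on the scalar test equation y' = lam*y: averaging
    the right-hand side between time levels gives a second-order,
    A-stable update with |amplification| < 1 whenever Re(lam) < 0."""
    amp = (1.0 + 0.5 * dt * lam) / (1.0 - 0.5 * dt * lam)
    y = y0
    for _ in range(n_steps):
        y *= amp                             # one implicit step per multiply
    return y

# stable at lam*dt = -100, far beyond the explicit-Euler limit of -2;
# this is the scalar analogue of running at CFL numbers in the hundreds
y = crank_nicolson(1.0, -1000.0, 0.1, 50)
```

    For a vector system the division by (1 - dt*lam/2) becomes a linear solve each step, which is where the preconditioning discussed in the abstract enters.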

  15. Microchip integrating magnetic nanoparticles for allergy diagnosis.

    PubMed

    Teste, Bruno; Malloggi, Florent; Siaugue, Jean-Michel; Varenne, Anne; Kanoufi, Frederic; Descroix, Stéphanie

    2011-12-21

    We report on the development of a simple and easy-to-use microchip dedicated to allergy diagnosis. This microchip combines the advantages of both homogeneous immunoassays (species diffusion) and heterogeneous immunoassays (easy separation and preconcentration steps). In vitro allergy diagnosis is based on specific immunoglobulin E (IgE) quantitation; accordingly, we have developed and integrated magnetic core-shell nanoparticles (MCSNPs) as an IgE capture nanoplatform in a microdevice, taking benefit from both their magnetic and colloidal properties. Integrating such an immunosupport allows the target analyte (IgE) to be captured in the colloidal phase, increasing the capture kinetics since both immunological partners diffuse during the immune reaction. This colloidal approach improves the analyte capture kinetics 1000-fold compared to conventional methods. Moreover, based on the MCSNPs' magnetic properties and on a magnetic chamber we have previously developed, the MCSNPs, and therefore the target, can be confined and preconcentrated within the microdevice prior to the detection step. The MCSNP preconcentration factor achieved was about 35,000, which allows high sensitivity to be reached without catalytic amplification during the detection step. The developed microchip offers many advantages: the analytical procedure is fully integrated on-chip, analyses are performed in a short assay time (20 min), and sample and reagent consumption is reduced to a few microlitres (5 μL), while a low limit of detection (about 1 ng mL(-1)) can be achieved.

  16. Impact of digital radiography on clinical workflow.

    PubMed

    May, G A; Deer, D D; Dackiewicz, D

    2000-05-01

    It is commonly accepted that digital radiography (DR) improves workflow and patient throughput compared with traditional film radiography or computed radiography (CR). DR eliminates the film development step and the time to acquire the image from a CR reader. In addition, the wide dynamic range of DR is such that the technologist can perform the quality-control (QC) step directly at the modality in a few seconds, rather than having to transport the newly acquired image to a centralized QC station for review. Furthermore, additional workflow efficiencies can be achieved with DR by employing tight radiology information system (RIS) integration. In the DR imaging environment, this provides for patient demographic information to be automatically downloaded from the RIS to populate the DR Digital Imaging and Communications in Medicine (DICOM) image header. To learn more about this workflow efficiency improvement, we performed a comparative study of workflow steps under three different conditions: traditional film/screen x-ray, DR without RIS integration (ie, manual entry of patient demographics), and DR with RIS integration. This study was performed at the Cleveland Clinic Foundation (Cleveland, OH) using a newly acquired amorphous silicon flat-panel DR system from Canon Medical Systems (Irvine, CA). Our data show that DR without RIS results in substantial workflow savings over traditional film/screen practice. There is an additional 30% reduction in total examination time using DR with RIS integration.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isotalo, Aarno

    A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletion algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation, without a need for a reference solution.
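
    The core idea of obtaining step integrals as part of the depletion solve can be sketched by augmenting the Bateman matrix with an extra row that accumulates the weighted integral; a matrix exponential of the augmented system then yields end-of-step densities and the integral together. A toy two-nuclide decay chain with hypothetical weights (not the paper's CRAM implementation):

```python
import numpy as np
from scipy.linalg import expm

# Decay chain 1 -> 2 -> (stable): dn/dt = A n  (Bateman matrix)
l1, l2 = 0.5, 0.1
A = np.array([[-l1, 0.0],
              [ l1, -l2]])
w = np.array([1.0, 2.0])        # hypothetical constant weights (e.g. per-nuclide energy)

# Augmented system: d/dt [n, s] = [[A, 0], [w, 0]] [n, s], so s(T) = int_0^T w.n dt
M = np.zeros((3, 3))
M[:2, :2] = A
M[2, :2] = w

n0 = np.array([1.0, 0.0])
T = 4.0
y = expm(M * T) @ np.append(n0, 0.0)
n_end, integral = y[:2], y[2]   # end-of-step densities and the step integral, together
```

Dividing the accumulated integral by the step length gives the step-average quantity, with the same accuracy as the underlying matrix-exponential depletion solve.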

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.; Harrison, D. E. Jr.

    A variable time step integration algorithm for carrying out molecular dynamics simulations of atomic collision cascades is proposed which evaluates the interaction forces only once per time step. The algorithm is tested on some model problems which have exact solutions and is compared against other common methods. These comparisons show that the method has good stability and accuracy. Applications to Ar+ bombardment of Cu and Si show good accuracy and improved speed compared to the original method (D. E. Harrison, W. L. Gay, and H. M. Effron, J. Math. Phys. 10, 1179 (1969)).
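
    A variable time step scheme with a single force evaluation per step can be sketched with a velocity-Verlet update whose step size is capped so that no particle moves more than a prescribed distance per step. This is an illustrative sketch only; the step-size heuristic and names are assumptions, not the authors' algorithm:

```python
import numpy as np

def simulate(x0, v0, force, dt_max=0.05, dx_max=0.02, steps=1500):
    """Velocity-Verlet with a variable step and one force evaluation per step.
    dt is shrunk so the particle moves at most ~dx_max (a cascade-style heuristic)."""
    x, v = float(x0), float(v0)
    F = force(x)                         # force carried over from the previous step
    t = 0.0
    for _ in range(steps):
        dt = min(dt_max, dx_max / (abs(v) + 1e-12))
        x_new = x + v * dt + 0.5 * F * dt * dt
        F_new = force(x_new)             # the only force evaluation this step
        v += 0.5 * (F + F_new) * dt
        x, F, t = x_new, F_new, t + dt
    return x, v, t

# Unit-mass harmonic oscillator as an exactly solvable test problem
x, v, t = simulate(1.0, 0.0, lambda x: -x)
energy = 0.5 * v * v + 0.5 * x * x       # should stay near the initial 0.5
```

Reusing the end-of-step force as the start-of-step force for the next step is what keeps the cost at one force evaluation per step, the property highlighted in the abstract.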

  19. Assembly and Multiplex Genome Integration of Metabolic Pathways in Yeast Using CasEMBLR.

    PubMed

    Jakočiūnas, Tadas; Jensen, Emil D; Jensen, Michael K; Keasling, Jay D

    2018-01-01

    Genome integration is a vital step for implementing large biochemical pathways to build a stable microbial cell factory. Although traditional strain construction strategies are well established for the model organism Saccharomyces cerevisiae, recent advances in CRISPR/Cas9-mediated genome engineering allow much higher throughput and robustness in strain construction. In this chapter, we describe CasEMBLR, a highly efficient and marker-free genome engineering method for one-step integration of in vivo assembled expression cassettes into multiple genomic sites simultaneously. CasEMBLR capitalizes on CRISPR/Cas9 technology to generate double-strand breaks in genomic loci, prompting the native homologous recombination (HR) machinery to integrate exogenously derived homology templates. As a proof of principle for microbial cell factory development, CasEMBLR was used for one-step assembly and marker-free integration of the carotenoid pathway from 15 exogenously supplied DNA parts into three targeted genomic loci. As a second proof of principle, a total of ten DNA parts were assembled and integrated into two genomic loci to construct a tyrosine production strain, while at the same time knocking out two genes. This new method complements and improves the field of genome engineering in S. cerevisiae by providing a more flexible platform for rapid and precise strain building.

  20. A Computational Approach to Increase Time Scales in Brownian Dynamics–Based Reaction-Diffusion Modeling

    PubMed Central

    Frazier, Zachary

    2012-01-01

    Particle-based Brownian dynamics simulations offer the opportunity not only to simulate diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle-based reaction-diffusion methods are often hampered by the relatively small time step needed for an accurate description of the reaction-diffusion framework. Such small time steps often prevent simulation times that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm which detects potential particle collisions prior to a BD-based particle displacement and at the same time rigorously obeys the detailed balance rule of equilibrium reactions. We show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction-diffusion modeling. PMID:22697237

  1. Women Faculty: Frozen in Time.

    ERIC Educational Resources Information Center

    West, Martha S.

    1995-01-01

    A discussion of the status of women college faculty looks at the slow rate of gender integration in academe, patterns of full-time women faculty in different institution types, strategies for changing the gender imbalance, and further steps for overall diversification of the professoriate. (MSE)

  2. A Time Integration Algorithm Based on the State Transition Matrix for Structures with Time Varying and Nonlinear Properties

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2003-01-01

    A variable order method of integrating the structural dynamics equations, based on the state transition matrix, has been developed. The method has been evaluated for linear time-variant and nonlinear systems of equations. When the time variation of the system can be modeled exactly by a polynomial, it produces nearly exact solutions for a wide range of time step sizes. Solutions of a model nonlinear dynamic response exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with solutions obtained by established methods.
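
    For a linear time-invariant system, the state transition matrix Φ = exp(AΔt) propagates the state exactly for any step size, which is the property such methods build on. A minimal sketch for an undamped oscillator (the paper's variable-order treatment of time-varying and nonlinear terms is not reproduced):

```python
import numpy as np
from scipy.linalg import expm

# Undamped oscillator x'' + w^2 x = 0 as the first-order system z' = A z, z = [x, x']
w = 3.0
A = np.array([[0.0, 1.0],
              [-w * w, 0.0]])

dt = 0.7                        # a step size far beyond explicit accuracy limits
Phi = expm(A * dt)              # state transition matrix: exact for LTI systems
z = np.array([1.0, 0.0])
for _ in range(10):
    z = Phi @ z                 # exact propagation regardless of step size
t = 10 * dt
```

Because Φ is computed once and reused, the per-step cost is a single matrix-vector product, independent of how large the step is.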

  3. Audiovisual integration increases the intentional step synchronization of side-by-side walkers.

    PubMed

    Noy, Dominic; Mouta, Sandra; Lamas, Joao; Basso, Daniel; Silva, Carlos; Santos, Jorge A

    2017-12-01

    When people walk side-by-side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner with kinesthetic, cutaneous, visual and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and asynchronously, the challenge for the CNS is to derive the best estimate from this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1, seven participants were instructed to synchronize with human-sized point-light walkers and/or footstep sounds. Results revealed the highest synchronization performance with auditory and audiovisual cues, quantified by the time to achieve synchronization and by synchronization variability. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2, human-sized virtual mannequins were implemented, and the audiovisual stimuli were rendered in real time and thus were synchronous and co-localized. All four participants synchronized best with audiovisual cues. For three of the four participants, the results point toward optimal integration consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking. Copyright © 2017 Elsevier B.V. All rights reserved.
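
    The MLE cue-combination rule tested here has a simple closed form: each cue is weighted by its inverse variance (its reliability), and the combined variance is never worse than that of the best single cue. A small illustrative sketch with hypothetical step-timing numbers (seconds):

```python
# Inverse-variance weighting: the MLE-optimal fusion of two noisy cues.
def mle_combine(mu_a, var_a, mu_v, var_v):
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)   # auditory weight
    mu = w_a * mu_a + (1.0 - w_a) * mu_v                # combined estimate
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)             # <= min(var_a, var_v)
    return mu, var

# Hypothetical auditory and visual estimates of the partner's footfall time
mu, var = mle_combine(0.50, 0.010, 0.56, 0.030)
```

Experimentally, MLE-consistent behavior is diagnosed by checking that bimodal variability matches this prediction computed from the unimodal variabilities.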

  4. Optimal subinterval selection approach for power system transient stability simulation

    DOE PAGES

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined, because analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.
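
    The subinterval idea can be illustrated on a two-timescale linear test system: the outer step is sized for the slow mode, while the fast mode is advanced in m sub-steps so that explicit integration remains stable. A toy sketch (not the paper's modal-analysis selection procedure):

```python
import math

# Two-timescale test system: fast mode (lam_f) and slow mode (lam_s).
lam_f, lam_s = -500.0, -1.0
dt = 0.01          # outer step: stable for the slow mode only (dt*|lam_f| = 5)
m = 10             # subintervals: dt/m puts the fast mode back inside the limit

yf, ys, y_bad = 1.0, 1.0, 1.0
for _ in range(100):
    ys += dt * lam_s * ys                 # slow mode, full step: fine
    y_bad += dt * lam_f * y_bad           # fast mode, full step: blows up
    for _ in range(m):
        yf += (dt / m) * lam_f * yf       # fast mode, sub-stepped: stable
```

Only the fast states pay the sub-stepping cost, which is why picking the smallest stable subinterval (rather than shrinking the global step) reduces the overall computational burden.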

  5. Effects of dual task on turning ability in stroke survivors and older adults.

    PubMed

    Hollands, K L; Agnihotri, D; Tyson, S F

    2014-09-01

    Turning is an integral component of independent mobility during which stroke survivors frequently fall. This study sought to measure the effects of competing cognitive demands on the stepping patterns of stroke survivors, compared to healthy age-matched adults, during turning, as a putative mechanism for falls. Walking and turning (90°) were assessed under single-task (walking and turning alone) and dual-task (subtracting serial 3s while walking and turning) conditions using an electronic, pressure-sensitive walkway. Dependent measures were time to turn, variability in time to turn, step length, step width and single support time during three steps of the turn. Turning ability in single- and dual-task conditions was compared between stroke survivors (n=17, mean ± SD: 59 ± 113 months post-stroke, 64 ± 10 years of age) and age-matched healthy counterparts (n=15). Both groups took longer, were more variable, tended to widen the second step and, crucially, increased single support time on the inside leg of the turn when turning while distracted. Increased single support time during turning may represent a biomechanical mechanism, within the stepping patterns of turning under distraction, for the increased risk of falls in both stroke survivors and older adults. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.

  6. Integrating Security in Real-Time Embedded Systems

    DTIC Science & Technology

    2017-04-26

    (b) detect any intrusions/attacks once they occur and (c) keep the overall system safe in the event of an attack. 4. Analysis and evaluation of...beyond), we expanded our work in both security integration and attack mechanisms, and worked on demonstrations and evaluations in hardware. Year I... scheduling for each busy interval with the calculated arrival time window. Step 1 focuses on the problem of finding the quantity of each task

  7. Solving modal equations of motion with initial conditions using MSC/NASTRAN DMAP. Part 1: Implementing exact mode superposition

    NASA Technical Reports Server (NTRS)

    Abdallah, Ayman A.; Barnett, Alan R.; Ibrahim, Omar M.; Manella, Richard T.

    1993-01-01

    Within the MSC/NASTRAN DMAP (Direct Matrix Abstraction Program) module TRD1, solving physical (coupled) or modal (uncoupled) transient equations of motion is performed using the Newmark-Beta or mode superposition algorithms, respectively. For equations of motion with initial conditions, only the Newmark-Beta integration routine has been available in MSC/NASTRAN solution sequences for solving physical systems and in custom DMAP sequences or alters for solving modal systems. In some cases, one difficulty with using the Newmark-Beta method is that the process of selecting suitable integration time steps for obtaining acceptable results is lengthy. In addition, when very small step sizes are required, a large amount of time can be spent integrating the equations of motion. For certain aerospace applications, a significant time savings can be realized when the equations of motion are solved using an exact integration routine instead of the Newmark-Beta numerical algorithm. In order to solve modal equations of motion with initial conditions and take advantage of efficiencies gained when using uncoupled solution algorithms (like that within TRD1), an exact mode superposition method using MSC/NASTRAN DMAP has been developed and successfully implemented as an enhancement to an existing coupled loads methodology at the NASA Lewis Research Center.
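
    For a modal (uncoupled) single-degree-of-freedom equation with forcing held constant over each step, the exact update is the state transition matrix applied about the static particular solution; this is the kind of exact recurrence an exact mode superposition exploits instead of Newmark-Beta time-stepping. A minimal sketch (not the MSC/NASTRAN DMAP implementation):

```python
import numpy as np
from scipy.linalg import expm

def modal_response(f, dt, w, zeta):
    """Exact update for q'' + 2*zeta*w*q' + w^2*q = f(t), with f held constant
    over each step: z_{n+1} = Phi (z_n - z_p) + z_p, z_p the static solution."""
    A = np.array([[0.0, 1.0],
                  [-w * w, -2.0 * zeta * w]])
    Phi = expm(A * dt)                    # state transition matrix for one step
    z = np.zeros(2)                       # zero initial conditions [q, q']
    out = [z[0]]
    for fn in f:
        zp = np.array([fn / (w * w), 0.0])
        z = Phi @ (z - zp) + zp
        out.append(z[0])
    return np.array(out)

# Step load: response overshoots, then settles at the static value f/w^2
q = modal_response(np.full(400, 2.0), dt=0.05, w=4.0, zeta=0.05)
```

Because the update is exact for the assumed forcing interpolation, the step size is set by how finely the forcing must be sampled, not by a stability or accuracy limit of the integrator.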

  8. An FMS Dynamic Production Scheduling Algorithm Considering Cutting Tool Failure and Cutting Tool Life

    NASA Astrophysics Data System (ADS)

    Setiawan, A.; Wangsaputra, R.; Martawirya, Y. Y.; Halim, A. H.

    2016-02-01

    This paper deals with Flexible Manufacturing System (FMS) production rescheduling due to unavailability of cutting tools, caused either by cutting tool failure or by reaching the cutting tool life limit. The FMS consists of parallel identical machines integrated with an automatic material handling system, and it runs fully automatically. Each machine has the same cutting tool configuration, consisting of different geometrical cutting tool types in each tool magazine. A job usually takes two stages. Each stage has sequential operations allocated to machines considering the cutting tool life. In a real situation, a cutting tool can fail before its life limit is reached. The objective of this paper is to develop a dynamic scheduling algorithm for when a cutting tool breaks during unmanned operation and rescheduling is needed. The algorithm consists of four steps: the first step generates the initial schedule; the second determines the cutting tool failure time; the third determines the system status at the cutting tool failure time; and the fourth reschedules the unfinished jobs. The approaches used to solve the problem are complete-reactive scheduling and robust-proactive scheduling. The new schedules differ from the initial schedule in the starting and completion times of each operation.

  9. On time discretizations for spectral methods. [numerical integration of Fourier and Chebyshev methods for dynamic partial differential equations

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Turkel, E.

    1980-01-01

    New methods are introduced for the time integration of the Fourier and Chebyshev methods of solution for dynamic differential equations. These methods are unconditionally stable, even though no matrix inversions are required. Time steps are chosen by accuracy requirements alone. For the Fourier method both leapfrog and Runge-Kutta methods are considered. For the Chebyshev method only Runge-Kutta schemes are tested. Numerical calculations are presented to verify the analytic results. Applications to the shallow water equations are presented.
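
    A Fourier spectral method paired with an explicit Runge-Kutta time discretization can be sketched for the model advection equation u_t + c u_x = 0: the spatial derivative is computed in wavenumber space and classic RK4 advances in time. This is illustrative only; the paper's unconditionally stable schemes are not reproduced here:

```python
import numpy as np

# Fourier pseudospectral advection u_t + c u_x = 0, classic RK4 in time.
N, c = 64, 1.0
x = 2.0 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)             # integer wavenumbers

def rhs(u):                                  # -c u_x evaluated via the FFT
    return -c * np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

u = np.sin(x)
dt, steps = 0.01, 628                        # advance to t ~ 2*pi
for _ in range(steps):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
t = steps * dt
```

For smooth periodic data the spatial error is spectrally small, so the observed error is dominated by the O(dt^4) time discretization, illustrating why the time integrator, not the spatial operator, sets the accuracy-limited step.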

  10. Advancing parabolic operators in thermodynamic MHD models: Explicit super time-stepping versus implicit schemes with Krylov solvers

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.; Mikić, Z.; Linker, J. A.; Lionello, R.

    2017-05-01

    We explore the performance and advantages/disadvantages of using unconditionally stable explicit super time-stepping (STS) algorithms versus implicit schemes with Krylov solvers for integrating parabolic operators in thermodynamic MHD models of the solar corona. Specifically, we compare the second-order Runge-Kutta Legendre (RKL2) STS method with the implicit backward Euler scheme computed using the preconditioned conjugate gradient (PCG) solver with both a point-Jacobi and a non-overlapping domain decomposition ILU0 preconditioner. The algorithms are used to integrate anisotropic Spitzer thermal conduction and artificial kinematic viscosity at time-steps much larger than classic explicit stability criteria allow. A key component of the comparison is the use of an established MHD model (MAS) to compute a real-world simulation on a large HPC cluster. Special attention is placed on the parallel scaling of the algorithms. It is shown that, for a specific problem and model, the RKL2 method is comparable or surpasses the implicit method with PCG solvers in performance and scaling, but suffers from some accuracy limitations. These limitations, and the applicability of RKL methods are briefly discussed.
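
    The implicit branch of such a comparison can be sketched with backward Euler for the 1D heat equation, where each step requires a Krylov (conjugate gradient) solve but the time step may exceed the explicit stability limit by orders of magnitude. A minimal sketch, without preconditioning and without the RKL2 scheme:

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import cg

# 1D heat equation u_t = u_xx, backward Euler at 1000x the explicit CFL limit;
# every step is one SPD linear solve, done here with (unpreconditioned) CG.
N = 200
dx = 1.0 / (N + 1)
L = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / dx**2
dt = 1000.0 * dx**2 / 2.0                     # explicit limit is ~dx^2/2
A = (identity(N) - dt * L).tocsr()

xg = np.linspace(dx, 1.0 - dx, N)
u = np.sin(np.pi * xg)                        # decaying diffusion mode
for _ in range(20):
    u, info = cg(A, u)                        # Krylov solve; info == 0 on success
    assert info == 0
```

The per-step cost of the Krylov iteration (and its parallel scaling) versus the many cheap sub-stages of an STS scheme is exactly the trade-off the paper quantifies.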

  11. Multi-Spatiotemporal Patterns of Residential Burglary Crimes in Chicago: 2006-2016

    NASA Astrophysics Data System (ADS)

    Luo, J.

    2017-10-01

    This research explores the patterns of burglary crimes at multiple spatiotemporal scales in Chicago between 2006 and 2016. Two spatial scales are investigated: census block and police beat area. At each spatial scale, three temporal scales are integrated to make spatiotemporal slices: an hourly scale with a two-hour time step from 12:00 am to the end of the day; a daily scale with a one-day step from Sunday to Saturday within a week; and a monthly scale with a one-month step from January to December. A total of six types of spatiotemporal slices are created as the base for the analysis. Burglary crimes are spatiotemporally aggregated to the slices based on where and when they occurred. For each type of slice with burglary occurrences integrated, a spatiotemporal neighborhood is defined and managed in a spatiotemporal matrix. Hot-spot analysis identifies spatiotemporal clusters for each type of slice, and spatiotemporal trend analysis indicates how the clusters shift in space and time. The results provide helpful information for better-targeted policing and crime prevention policy, such as scheduling police patrols with respect to the times and places covered.
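
    Aggregating events into spatiotemporal slices of this kind reduces to keying each event by a (spatial unit, temporal bin) pair and counting. A minimal sketch of the two-hour-of-day slicing, with hypothetical beat identifiers and timestamps:

```python
from collections import Counter
from datetime import datetime

# Hypothetical sample events: (police beat id, timestamp of burglary)
events = [
    ("beat_11", datetime(2016, 3, 4, 1, 15)),
    ("beat_11", datetime(2016, 3, 5, 0, 40)),
    ("beat_11", datetime(2016, 7, 9, 23, 5)),
    ("beat_23", datetime(2016, 3, 4, 13, 30)),
]

def slice_key(beat, ts, hours=2):
    """Spatiotemporal slice key: (spatial unit, 2-hour-of-day bin in 0..11)."""
    return (beat, ts.hour // hours)

counts = Counter(slice_key(b, t) for b, t in events)
```

Swapping the temporal component of the key for the weekday or the month yields the other slice types; the resulting counts feed the hot-spot and trend analyses.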

  12. An asymptotic-preserving Lagrangian algorithm for the time-dependent anisotropic heat transport equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, Luis; del-Castillo-Negrete, Diego; Hauck, Cory D.

    2014-09-01

    We propose a Lagrangian numerical algorithm for a time-dependent, anisotropic temperature transport equation in magnetized plasmas in the large guide field regime. The approach is based on an analytical integral formal solution of the parallel (i.e., along the magnetic field) transport equation with sources, and it is able to accommodate both local and non-local parallel heat flux closures. The numerical implementation is based on an operator-split formulation, with two straightforward steps: a perpendicular transport step (including sources), and a Lagrangian (field-line integral) parallel transport step. Algorithmically, the first step is amenable to the use of modern iterative methods, while the second step has a fixed cost per degree of freedom (and is therefore scalable). Accuracy-wise, the approach is free from the numerical pollution introduced by the discrete parallel transport term when the perpendicular-to-parallel transport coefficient ratio χ⊥/χ∥ becomes arbitrarily small, and is shown to capture the correct limiting solution when ε = χ⊥L∥²/(χ∥L⊥²) → 0 (with L∥ and L⊥ the parallel and perpendicular diffusion length scales, respectively). Therefore, the approach is asymptotic-preserving. We demonstrate the capabilities of the scheme with several numerical experiments with varying magnetic field complexity in two dimensions, including the case of transport across a magnetic island.
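
    The operator-split skeleton (a perpendicular step followed by a parallel step) can be illustrated on a small linear system, where first-order Lie splitting converges to the unsplit solution as the step size shrinks. This sketch uses generic stand-in matrices, not the paper's Lagrangian field-line integral step:

```python
import numpy as np
from scipy.linalg import expm

# z' = (A_perp + A_par) z, advanced by first-order Lie splitting:
# one "perpendicular" sub-step, then one "parallel" sub-step.
rng = np.random.default_rng(1)
A_perp = -np.diag(rng.random(4))              # stand-in perpendicular operator
B = rng.standard_normal((4, 4))
A_par = -(B @ B.T) / 4.0                      # stand-in parallel operator (diffusive)

def lie_step(z, dt):
    return expm(A_par * dt) @ (expm(A_perp * dt) @ z)

z0 = rng.standard_normal(4)
T = 1.0
z_exact = expm((A_perp + A_par) * T) @ z0     # unsplit reference solution
errs = []
for n in (10, 20, 40):
    dt = T / n
    z = z0.copy()
    for _ in range(n):
        z = lie_step(z, dt)
    errs.append(np.linalg.norm(z - z_exact))  # splitting error: O(dt)
```

The splitting error comes from the commutator of the two operators; the point of the paper's construction is that each sub-step can then use the solver best suited to it (iterative for perpendicular, fixed-cost Lagrangian for parallel).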

  13. Bidirectional Retroviral Integration Site PCR Methodology and Quantitative Data Analysis Workflow.

    PubMed

    Suryawanshi, Gajendra W; Xu, Song; Xie, Yiming; Chou, Tom; Kim, Namshin; Chen, Irvin S Y; Kim, Sanggu

    2017-06-14

    Integration Site (IS) assays are a critical component of the study of retroviral integration sites and their biological significance. In recent retroviral gene therapy studies, IS assays, in combination with next-generation sequencing, have been used as a cell-tracking tool to characterize clonal stem cell populations sharing the same IS. For the accurate comparison of repopulating stem cell clones within and across different samples, the detection sensitivity, data reproducibility, and high-throughput capacity of the assay are among the most important assay qualities. This work provides a detailed protocol and data analysis workflow for bidirectional IS analysis. The bidirectional assay can simultaneously sequence both upstream and downstream vector-host junctions. Compared to conventional unidirectional IS sequencing approaches, the bidirectional approach significantly improves IS detection rates and the characterization of integration events at both ends of the target DNA. The data analysis pipeline described here accurately identifies and enumerates identical IS sequences through multiple steps of comparison that map IS sequences onto the reference genome and determine sequencing errors. Using an optimized assay procedure, we have recently published the detailed repopulation patterns of thousands of Hematopoietic Stem Cell (HSC) clones following transplant in rhesus macaques, demonstrating for the first time the precise time point of HSC repopulation and the functional heterogeneity of HSCs in the primate system. The following protocol describes the step-by-step experimental procedure and data analysis workflow that accurately identifies and quantifies identical IS sequences.

  14. From proteomics to systems biology: MAPA, MASS WESTERN, PROMEX, and COVAIN as a user-oriented platform.

    PubMed

    Weckwerth, Wolfram; Wienkoop, Stefanie; Hoehenwarter, Wolfgang; Egelhofer, Volker; Sun, Xiaoliang

    2014-01-01

    Genome sequencing and systems biology are revolutionizing life sciences. Proteomics emerged as a fundamental technique of this novel research area as it is the basis for gene function analysis and modeling of dynamic protein networks. Here a complete proteomics platform suited for functional genomics and systems biology is presented. The strategy includes MAPA (mass accuracy precursor alignment; http://www.univie.ac.at/mosys/software.html ) as a rapid exploratory analysis step; MASS WESTERN for targeted proteomics; COVAIN ( http://www.univie.ac.at/mosys/software.html ) for multivariate statistical analysis, data integration, and data mining; and PROMEX ( http://www.univie.ac.at/mosys/databases.html ) as a database module for proteogenomics and proteotypic peptides for targeted analysis. Moreover, the presented platform can also be utilized to integrate metabolomics and transcriptomics data for the analysis of metabolite-protein-transcript correlations and time course analysis using COVAIN. Examples for the integration of MAPA and MASS WESTERN data, proteogenomic and metabolic modeling approaches for functional genomics, phosphoproteomics by integration of MOAC (metal-oxide affinity chromatography) with MAPA, and the integration of metabolomics, transcriptomics, proteomics, and physiological data using this platform are presented. All software and step-by-step tutorials for data processing and data mining can be downloaded from http://www.univie.ac.at/mosys/software.html.

  15. Alcohol and drug treatment involvement, 12-step attendance and abstinence: 9-year cross-lagged analysis of adults in an integrated health plan.

    PubMed

    Witbrodt, Jane; Ye, Yu; Bond, Jason; Chi, Felicia; Weisner, Constance; Mertens, Jennifer

    2014-04-01

    This study explored causal relationships between post-treatment 12-step attendance and abstinence at multiple data waves and examined indirect paths leading from treatment initiation to abstinence 9 years later. Adults (N = 1945) seeking help for alcohol or drug use disorders from integrated healthcare organization outpatient treatment programs were followed at 1, 5, 7 and 9 years. Path modeling with cross-lagged partial regression coefficients was used to test causal relationships. Cross-lagged paths indicated that greater 12-step attendance during years 1 and 5 was causally related to past-30-day abstinence at years 5 and 7, respectively, suggesting that 12-step attendance leads to abstinence (but not vice versa) well into the post-treatment period. Some gender differences were found in these relationships. Three significant time-lagged indirect paths emerged linking treatment duration to year-9 abstinence. Conclusions are discussed in the context of other studies using longitudinal designs. For outpatient clients, results reinforce the value of lengthier treatment duration and 12-step attendance in year 1. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. The Role of Part-Time Employment among Young People with a Non-University Education in Spain

    ERIC Educational Resources Information Center

    Corrales-Herrero, Helena; Rodríguez-Prado, Beatriz

    2016-01-01

    For some people, a part-time job is merely an intermediate state that serves as a "stepping stone" to further employment and makes labour market integration easier. Yet, part-time work also appears in highly unstable careers. The present research aims to determine the role of part-time employment for young people with non-university…

  17. Atomic force microscopic study of step bunching and macrostep formation during the growth of L-arginine phosphate monohydrate single crystals

    NASA Astrophysics Data System (ADS)

    Sangwal, K.; Torrent-Burgues, J.; Sanz, F.; Gorostiza, P.

    1997-02-01

    The experimental results of the formation of step bunches and macrosteps on the {100} face of L-arginine phosphate monohydrate crystals grown from aqueous solutions at different supersaturations, studied using atomic force microscopy, are described and discussed. It was observed that (1) the step height does not remain constant with increasing time but fluctuates within a particular range of heights, which depends on the region of step bunches, (2) the maximum height and the slope of bunched steps increase with growth time as well as with the supersaturation used for growth, and (3) the slope of steps of relatively small heights is usually low, with a value of about 8°, and does not depend on the region of formation of step bunches, whereas the slope of steps of large heights is up to 21°. Analysis of the experimental results showed that (1) at a particular value of supersaturation the ratio of the average step height to the average step spacing is a constant, suggesting that growth of the {100} face of L-arginine phosphate monohydrate crystals occurs by direct integration of growth entities into growth steps, and (2) the formation of step bunches and macrosteps follows the dynamic theory of faceting advanced by Vlachos et al.

  18. Multigrid time-accurate integration of Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.

    1993-01-01

    Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.

  19. Integrated method for chaotic time series analysis

    DOEpatents

    Hively, Lee M.; Ng, Esmond G.

    1998-01-01

    Methods and apparatus for automatically detecting differences between similar but different states in a nonlinear process monitor nonlinear data. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated.

  20. Orbit and uncertainty propagation: a comparison of Gauss-Legendre-, Dormand-Prince-, and Chebyshev-Picard-based approaches

    NASA Astrophysics Data System (ADS)

    Aristoff, Jeffrey M.; Horwood, Joshua T.; Poore, Aubrey B.

    2014-01-01

    We present a new variable-step Gauss-Legendre implicit-Runge-Kutta-based approach for orbit and uncertainty propagation, VGL-IRK, which includes adaptive step-size error control and which collectively, rather than individually, propagates nearby sigma points or states. The performance of VGL-IRK is compared to a professional (variable-step) implementation of Dormand-Prince 8(7) (DP8) and to a fixed-step, optimally-tuned implementation of modified Chebyshev-Picard iteration (MCPI). Both nearly-circular and highly-elliptic orbits are considered using high-fidelity gravity models and realistic integration tolerances. VGL-IRK is shown to be up to eleven times faster than DP8 and up to 45 times faster than MCPI (for the same accuracy), in a serial computing environment. Parallelization of VGL-IRK and MCPI is also discussed.
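
    The adaptive step-size error control common to VGL-IRK and DP8 can be illustrated with a much lower-order embedded pair (Euler/Heun). This is a hedged sketch of the control logic only, not of either integrator: the local error is estimated from the difference between the two orders and the step is grown or shrunk accordingly.

```python
import numpy as np

# Hedged sketch of adaptive step-size error control (embedded Euler/Heun
# pair); variable-step integrators such as DP8 apply the same logic at much
# higher order.
def integrate_adaptive(f, y0, t0, t1, tol=1e-6):
    t, y, h = t0, np.asarray(y0, float), (t1 - t0) / 100
    while t < t1:
        h = min(h, t1 - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_lo = y + h * k1                # order 1 (Euler)
        y_hi = y + 0.5 * h * (k1 + k2)   # order 2 (Heun)
        err = np.linalg.norm(y_hi - y_lo) + 1e-300
        if err <= tol:                   # accept the step
            t, y = t + h, y_hi
        # grow/shrink the step, with safety factor and clamping
        h *= min(2.0, max(0.2, 0.9 * (tol / err) ** 0.5))
    return y

# circular "orbit" test: y' = [[0,-1],[1,0]] y preserves the radius
A = np.array([[0.0, -1.0], [1.0, 0.0]])
y_end = integrate_adaptive(lambda t, y: A @ y, [1.0, 0.0], 0.0, 2 * np.pi)
```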

  1. TEA CO 2 Laser Simulator: A software tool to predict the output pulse characteristics of TEA CO 2 laser

    NASA Astrophysics Data System (ADS)

    Abdul Ghani, B.

    2005-09-01

    "TEA CO 2 Laser Simulator" has been designed to simulate the dynamic emission processes of the TEA CO 2 laser based on the six-temperature model. The program predicts the behavior of the laser output pulse (power, energy, pulse duration, delay time, FWHM, etc.) depending on the physical and geometrical input parameters (pressure ratio of the gas mixture, reflecting area of the output mirror, media length, losses, filling and decay factors, etc.).
    Program summary
    Title of program: TEA_CO2
    Catalogue identifier: ADVW
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVW
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer: P.IV DELL PC
    Setup: Atomic Energy Commission of Syria, Scientific Services Department, Mathematics and Informatics Division
    Operating system: MS-Windows 9x, 2000, XP
    Programming language: Delphi 6.0
    No. of lines in distributed program, including test data, etc.: 47 315
    No. of bytes in distributed program, including test data, etc.: 7 681 109
    Distribution format: tar.gz
    Classification: 15 Laser Physics
    Nature of the physical problem: "TEA CO 2 Laser Simulator" is a program that predicts the behavior of the laser output pulse by studying the effect of the physical and geometrical input parameters on the characteristics of the output laser pulse. The laser active medium consists of a CO 2-N 2-He gas mixture.
    Method of solution: The six-temperature model for the dynamic emission of the TEA CO 2 laser has been adopted in order to predict the parameters of the laser output pulses. The laser electrical pumping was simulated using two approaches: an empirical function (equation (8)) and a differential equation (equation (9)).
    Typical running time: The program's running time depends mainly on both the integration interval and the integration step; for a 4 μs time interval and a 0.001 μs integration step (the default values used in the program), the running time is about 4 seconds.
Restrictions on the complexity: Using a very small integration step might cause the program run to stop, owing to the huge number of calculation points and to a small paging file size of the MS-Windows virtual memory. In such a case, it is recommended to enlarge the paging file to an appropriate size, or to use a larger integration step.
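
    The quoted running time and the memory restriction both scale with the number of integration points, which is simply the integration interval divided by the integration step. A quick sanity check (in Python, not the distributed Delphi program):

```python
# The work and memory of the simulation grow with the number of integration
# points: interval / step.  Halving the step doubles the points.
def n_points(interval_us, step_us):
    return int(round(interval_us / step_us))

default_points = n_points(4.0, 0.001)    # the program's default settings
finer_points = n_points(4.0, 0.0001)     # a ten-times smaller step
```

    With the defaults (4 μs interval, 0.001 μs step) the program evaluates 4000 points; a ten-times smaller step means ten times as many points, which is what exhausts the paging file.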

  2. Solar System Chaos and Orbital Solutions for Paleoclimate Studies: Limits and New Results

    NASA Astrophysics Data System (ADS)

    Zeebe, R. E.

    2017-12-01

    I report results from accurate numerical integrations of Solar System orbits over the past 100 Myr. The simulations used different integrator algorithms, step sizes, and initial conditions (NASA, INPOP), and included effects from general relativity, different models of the Moon, the Sun's quadrupole moment, and up to ten asteroids. In one simulation, I probed the potential effect of a hypothetical Planet 9 on the dynamics of the system. The most expensive integration required 4 months wall-clock time (Bulirsch-Stoer algorithm) and showed a maximum relative energy error < 2.5 x 10^-13 over the past 100 Myr. The difference in Earth's eccentricity (DeE) was used to track the difference between two solutions, which were considered to diverge at time tau when DeE irreversibly crossed 10% of Earth's mean eccentricity (0.028 x 0.1). My results indicate that finding a unique orbital solution is limited by initial conditions from current ephemerides to ~54 Myr. Bizarrely, the 4-month Bulirsch-Stoer integration and a different integration scheme that required only 5 hours wall-clock time (symplectic, 12-day time step, Moon as a simple quadrupole perturbation) agree to ~63 Myr. Solutions including 3 and 10 asteroids diverge at tau ~48 Myr. The effect of a hypothetical Planet 9 on DeE becomes discernible at ~66 Myr. Using tau as a criterion, the current state-of-the-art solutions all differ from previously published results beyond ~50 Myr. The current study provides new orbital solutions for application in geological studies. I will also comment on the prospect of constraining astronomical solutions by geologic data.
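
    The divergence criterion described above can be sketched directly. This is a hedged illustration on synthetic data: tau is taken as the last time DeE = |e1 - e2| crosses 10% of Earth's mean eccentricity (0.028 x 0.1 = 0.0028) and stays above it ("irreversibly").

```python
import numpy as np

# Hedged sketch of the divergence criterion; the eccentricity series below
# are synthetic, not the study's orbital solutions.
THRESHOLD = 0.028 * 0.1

def divergence_time(t, e1, e2, threshold=THRESHOLD):
    de = np.abs(np.asarray(e1) - np.asarray(e2))
    below = np.nonzero(de <= threshold)[0]
    if len(below) == 0:
        return t[0]                # diverged from the start
    if below[-1] == len(de) - 1:
        return None                # never irreversibly crosses
    return t[below[-1] + 1]        # first time after the last sub-threshold value

t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])   # Myr, synthetic
e1 = np.array([0.015, 0.016, 0.014, 0.020, 0.030, 0.045])
e2 = np.array([0.015, 0.016, 0.018, 0.012, 0.020, 0.010])
tau = divergence_time(t, e1, e2)
```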

  3. Integrated Microfluidic Devices for Automated Microarray-Based Gene Expression and Genotyping Analysis

    NASA Astrophysics Data System (ADS)

    Liu, Robin H.; Lodes, Mike; Fuji, H. Sho; Danley, David; McShea, Andrew

    Microarray assays typically involve multistage sample processing and fluidic handling, which are generally labor-intensive and time-consuming. Automation of these processes would improve robustness, reduce run-to-run and operator-to-operator variation, and reduce costs. This chapter presents a fully integrated and self-contained microfluidic biochip device developed to automate the fluidic handling steps for microarray-based gene expression or genotyping analysis. The device consists of a semiconductor-based CustomArray® chip with 12,000 features and a microfluidic cartridge. The CustomArray was manufactured using a semiconductor-based in situ synthesis technology. The microfluidic cartridge consists of microfluidic pumps, mixers, valves, fluid channels, and reagent storage chambers. Microarray hybridization and subsequent fluidic handling and reactions (including a number of washing and labeling steps) were performed in this fully automated and miniature device before fluorescent image scanning of the microarray chip. Electrochemical micropumps were integrated in the cartridge to provide pumping of liquid solutions. A micromixing technique based on gas bubbling generated by electrochemical micropumps was developed. Low-cost check valves were implemented in the cartridge to prevent cross-talk of the stored reagents. Gene expression study of the human leukemia cell line (K562) and genotyping detection and sequencing of influenza A subtypes have been demonstrated using this integrated biochip platform. For gene expression assays, the microfluidic CustomArray device detected sample RNAs with a concentration as low as 0.375 pM. Detection was quantitative over more than three orders of magnitude. Experiments also showed that chip-to-chip variability was low, indicating that the integrated microfluidic devices eliminate manual fluidic handling steps that can be a significant source of variability in genomic analysis. 
The genotyping results showed that the device identified influenza A hemagglutinin and neuraminidase subtypes and sequenced portions of both genes, demonstrating the potential of integrated microfluidic and microarray technology for multiple virus detection. The device provides a cost-effective solution to eliminate labor-intensive and time-consuming fluidic handling steps and allows microarray-based DNA analysis in a rapid and automated fashion.

  4. What Is Essential in Developmental Evaluation? On Integrity, Fidelity, Adultery, Abstinence, Impotence, Long-Term Commitment, Integrity, and Sensitivity in Implementing Evaluation Models

    ERIC Educational Resources Information Center

    Patton, Michael Quinn

    2016-01-01

    Fidelity concerns the extent to which a specific evaluation sufficiently incorporates the core characteristics of the overall approach to justify labeling that evaluation by its designated name. Fidelity has traditionally meant implementing a model in exactly the same way each time following the prescribed steps and procedures. The essential…

  5. Spike-frequency adaptation in the inferior colliculus.

    PubMed

    Ingham, Neil J; McAlpine, David

    2004-02-01

    We investigated spike-frequency adaptation of neurons sensitive to interaural phase disparities (IPDs) in the inferior colliculus (IC) of urethane-anesthetized guinea pigs using a stimulus paradigm designed to exclude the influence of adaptation below the level of binaural integration. The IPD-step stimulus consists of a binaural 3,000-ms tone, in which the first 1,000 ms is held at a neuron's least favorable ("worst") IPD, adapting out monaural components, before being stepped rapidly to a neuron's most favorable ("best") IPD for 300 ms. After some variable interval (1-1,000 ms), IPD is again stepped to the best IPD for 300 ms, before being returned to a neuron's worst IPD for the remainder of the stimulus. Exponential decay functions fitted to the response to best-IPD steps revealed an average adaptation time constant of 52.9 +/- 26.4 ms. Recovery from adaptation to best IPD steps showed an average time constant of 225.5 +/- 210.2 ms. Recovery time constants were not correlated with adaptation time constants. During the recovery period, adaptation to a 2nd best-IPD step followed similar kinetics to adaptation during the 1st best-IPD step. The mean adaptation time constant at stimulus onset (at worst IPD) was 34.8 +/- 19.7 ms, similar to the 38.4 +/- 22.1 ms recorded to contralateral stimulation alone. Individual time constants after stimulus onset were correlated with each other but not with time constants during the best-IPD step. We conclude that such binaurally derived measures of adaptation reflect processes that occur above the level of exclusively monaural pathways, and subsequent to the site of primary binaural interaction.
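
    Extracting a time constant from such responses amounts to fitting an exponential decay. The sketch below uses a log-linear least-squares fit on synthetic noiseless data; the authors' actual fitting procedure is not specified in the abstract.

```python
import numpy as np

# Hedged sketch of time-constant extraction: fit r(t) = r_inf + A*exp(-t/tau)
# to a firing-rate transient via a log-linear least-squares fit on the
# decaying component.  Data below are synthetic, with tau = 52.9 ms as in the
# reported mean adaptation time constant.
def fit_time_constant(t, rate, r_inf):
    y = np.log(rate - r_inf)           # log of the decaying component
    slope, intercept = np.polyfit(t, y, 1)
    return -1.0 / slope

t = np.linspace(0.0, 300.0, 301)                 # ms
rate = 40.0 + 60.0 * np.exp(-t / 52.9)           # spikes/s
tau_hat = fit_time_constant(t, rate, r_inf=40.0)
```

    Real recordings would need a noise-robust nonlinear fit, but the recovered time constant is the same quantity.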

  6. Development of a modularized two-step (M2S) chromosome integration technique for integration of multiple transcription units in Saccharomyces cerevisiae.

    PubMed

    Li, Siwei; Ding, Wentao; Zhang, Xueli; Jiang, Huifeng; Bi, Changhao

    2016-01-01

    Saccharomyces cerevisiae has already been used for heterologous production of fuel chemicals and valuable natural products, and the establishment of complicated heterologous biosynthetic pathways in S. cerevisiae has become a research focus of synthetic biology and metabolic engineering. Thus, simple and efficient techniques for the genomic integration of large numbers of transcription units are urgently needed. An efficient DNA assembly and chromosomal integration method, designated the modularized two-step (M2S) technique, was created by combining homologous recombination (HR) in S. cerevisiae with the Golden Gate DNA assembly method. Two major assembly steps are performed consecutively to integrate multiple transcription units simultaneously. In Step 1, a modularized scaffold containing a head-to-head promoter module and a pair of terminators was assembled with two genes; thus, two transcription units were assembled into one scaffold in a single Golden Gate reaction. In Step 2, the two transcription units were mixed with modules of selective markers and integration sites and transformed into S. cerevisiae for assembly and integration. In both steps, universal primers were designed for identification of correct clones. Establishment of a functional β-carotene biosynthetic pathway in S. cerevisiae within 5 days demonstrated the high efficiency of this method, and a 10-transcription-unit pathway integration illustrated its capacity. Modular design of transcription units and integration elements simplified the assembly and integration procedure and eliminated the frequent design and synthesis of DNA fragments required by previous methods. Also, by assembling most parts in Step 1 in vitro, the number of DNA cassettes for homologous integration in Step 2 was significantly reduced. Thus, high assembly efficiency, high integration capacity, and a low error rate were achieved.

  7. Geometric integration in Born-Oppenheimer molecular dynamics.

    PubMed

    Odell, Anders; Delin, Anna; Johansson, Börje; Cawkwell, Marc J; Niklasson, Anders M N

    2011-12-14

    Geometric integration schemes for extended Lagrangian self-consistent Born-Oppenheimer molecular dynamics, including a weak dissipation to remove numerical noise, are developed and analyzed. The extended Lagrangian framework enables the geometric integration of both the nuclear and electronic degrees of freedom. This provides highly efficient simulations that are stable and energy conserving even under incomplete and approximate self-consistent field (SCF) convergence. We investigate three different geometric integration schemes: (1) regular time reversible Verlet, (2) second order optimal symplectic, and (3) third order optimal symplectic. We look at energy conservation, accuracy, and stability as a function of dissipation, integration time step, and SCF convergence. We find that the inclusion of dissipation in the symplectic integration methods gives an efficient damping of numerical noise or perturbations that otherwise may accumulate from finite arithmetics in a perfect reversible dynamics. © 2011 American Institute of Physics
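
    The virtue of geometric (time-reversible, symplectic) integration is long-term energy behavior. A minimal sketch on a harmonic oscillator, assuming velocity Verlet as the representative reversible scheme; the paper's weak-dissipation extension for the electronic degrees of freedom is not reproduced here.

```python
# Minimal sketch of geometric integration: time-reversible velocity Verlet
# keeps the energy of a harmonic oscillator bounded over very long runs,
# rather than drifting as a non-geometric scheme would.
def verlet(x0, v0, omega, dt, n_steps):
    x, v = x0, v0
    a = -omega**2 * x
    for _ in range(n_steps):
        v_half = v + 0.5 * dt * a    # half kick
        x = x + dt * v_half          # drift
        a = -omega**2 * x
        v = v_half + 0.5 * dt * a    # half kick
    return x, v

x, v = verlet(1.0, 0.0, omega=1.0, dt=0.01, n_steps=100000)
energy = 0.5 * v**2 + 0.5 * x**2     # initial energy is 0.5
```

    After 100,000 steps (about 159 periods) the energy still oscillates tightly around its initial value, which is the conservation property the paper exploits.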

  18. Advances in Visualization of 3D Time-Dependent CFD Solutions

    NASA Technical Reports Server (NTRS)

    Lane, David A.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Numerical simulations of complex 3D time-dependent (unsteady) flows are becoming increasingly feasible because of the progress in computing systems. Unfortunately, many existing flow visualization systems were developed for time-independent (steady) solutions and do not adequately depict solutions from unsteady flow simulations. Furthermore, most systems only handle one time step of the solutions individually and do not consider the time-dependent nature of the solutions. For example, instantaneous streamlines are computed by tracking the particles using one time step of the solution. However, for streaklines and timelines, particles need to be tracked through all time steps. Streaklines can reveal quite different information about the flow than those revealed by instantaneous streamlines. Comparisons of instantaneous streamlines with dynamic streaklines are shown. For a complex 3D flow simulation, it is common to generate a grid system with several millions of grid points and to have tens of thousands of time steps. The disk requirement for storing the flow data can easily be tens of gigabytes. Visualizing solutions of this magnitude is a challenging problem with today's computer hardware technology. Even interactive visualization of one time step of the flow data can be a problem for some existing flow visualization systems because of the size of the grid. Current approaches for visualizing complex 3D time-dependent CFD solutions are described. The flow visualization system developed at NASA Ames Research Center to compute time-dependent particle traces from unsteady CFD solutions is described. The system computes particle traces (streaklines) by integrating through the time steps. This system has been used by several NASA scientists to visualize their CFD time-dependent solutions. The flow visualization capabilities of this system are described, and visualization results are shown.
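
    The difference between instantaneous streamlines and streaklines comes down to releasing particles at successive time steps and advecting each one through the time-dependent field. A hedged sketch with a synthetic analytic velocity field (not CFD data) and simple forward-Euler advection:

```python
import numpy as np

# Hedged sketch of streakline computation: a new particle is released from
# the seed point at every time step, and all live particles are advected
# through the unsteady velocity field.
def velocity(p, t):
    return np.array([1.0, 0.2 * np.sin(t)])   # synthetic unsteady 2D field

def streakline(seed, t_steps, dt):
    particles = []
    for t in t_steps:
        particles.append(np.array(seed, float))   # release a new particle
        # advect every live particle one step (forward Euler for brevity)
        for i in range(len(particles)):
            particles[i] = particles[i] + dt * velocity(particles[i], t)
    return np.array(particles)

line = streakline([0.0, 0.0], t_steps=np.arange(0.0, 5.0, 0.1), dt=0.1)
```

    A production system would use a higher-order integrator and interpolate the velocity between stored solution time steps, but the release-and-advect structure is the same.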

  9. On the performance of exponential integrators for problems in magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Einkemmer, Lukas; Tokman, Mayya; Loffeld, John

    2017-02-01

    Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying large scale behavior of laboratory and astrophysical plasmas. In many problems numerical solution of MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK-type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.
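
    The core of an exponential integrator can be sketched with the first-order exponential Euler method for a stiff semilinear system y' = A y + N(y). The EPIRK methods used in the paper are higher order, but share this phi-function structure; A is taken diagonal here (an assumption for brevity) so exp(hA) and phi1(hA) reduce to elementwise operations.

```python
import numpy as np

# Hedged sketch of exponential Euler: y_{n+1} = exp(hA) y_n + h phi1(hA) N(y_n),
# with phi1(z) = (exp(z) - 1)/z.  The stiff linear part is treated exactly, so
# the step size is not limited by |A|.
def phi1(z):
    z = np.asarray(z, float)
    # series fallback near z = 0 to avoid cancellation
    return np.where(np.abs(z) > 1e-8,
                    np.expm1(z) / np.where(z == 0, 1, z),
                    1.0 + z / 2)

def exponential_euler(a_diag, N, y0, h, n_steps):
    y = np.asarray(y0, float)
    e = np.exp(h * a_diag)
    p = phi1(h * a_diag)
    for _ in range(n_steps):
        y = e * y + h * p * N(y)
    return y

a_diag = np.array([-1000.0, -1.0])            # severely stiff linear part
N = lambda y: np.array([0.0, 0.01 * y[0]**2])  # weak nonlinear coupling
y = exponential_euler(a_diag, N, [1.0, 1.0], h=0.1, n_steps=100)
```

    Note that h*|A| reaches 100 here, far beyond any explicit method's stability limit, yet the iteration remains stable.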

  10. The long-term motion of comet Halley

    NASA Technical Reports Server (NTRS)

    Yeomans, D. K.; Kiang, T.

    1981-01-01

    The orbital motion of comet Halley is numerically integrated back to 1404 BC. Starting with an orbit based on the 1759, 1682, and 1607 observations of the comet, the integration was run back in time with full planetary perturbations and nongravitational forces taken into account at each 0.5 day time-step. Small empirical corrections were made to the computed perihelion passage time in 837 and to the osculating orbital eccentricity in 800. In nine cases, the perihelion passage times calculated by Kiang (1971) from Chinese observations have been redetermined, and osculating orbital elements are given at each apparition from 1910 back to 1404 BC.

  11. An Integrated Modeling and Simulation Methodology for Intelligent Systems Design and Testing

    DTIC Science & Technology

    2002-08-01

    simulation and actual execution. KEYWORDS: Model Continuity, Modeling, Simulation, Experimental Frame, Real Time Systems , Intelligent Systems...the methodology for a stand-alone real time system. Then it will scale up to distributed real time systems . For both systems, step-wise simulation...MODEL CONTINUITY Intelligent real time systems monitor, respond to, or control, an external environment. This environment is connected to the digital

  12. Model of Dynamic Integration of Lean Shop Floor Management Within the Organizational Management System

    NASA Astrophysics Data System (ADS)

    Iuga, Virginia; Kifor, Claudiu

    2014-12-01

    The key to achieving sustainable development lies in customer satisfaction through improved quality, reduced cost, reduced delivery lead times, and proper communication. The objective of the lean manufacturing system (LMS) is to identify and eliminate the processes and resources which do not add value to a product. The following paper presents a proposal for the further development of integrated management systems in organizations through the implementation of lean shop floor management. In the first part of the paper, a dynamic model of the implementation steps is presented. Furthermore, the paper underlines the importance of implementing a lean culture in parallel with each step of integrating the lean methods and tools. The paper also describes the Toyota philosophy, tools, and the supporting lean culture necessary for implementing an efficient lean system in productive organizations.

  13. A Semi-Implicit, Three-Dimensional Model for Estuarine Circulation

    USGS Publications Warehouse

    Smith, Peter E.

    2006-01-01

    A semi-implicit, finite-difference method for the numerical solution of the three-dimensional equations for circulation in estuaries is presented and tested. The method uses a three-time-level, leapfrog-trapezoidal scheme that is essentially second-order accurate in the spatial and temporal numerical approximations. The three-time-level scheme is shown to be preferred over a two-time-level scheme, especially for problems with strong nonlinearities. The stability of the semi-implicit scheme is free from any time-step limitation related to the terms describing vertical diffusion and the propagation of the surface gravity waves. The scheme does not rely on any form of vertical/horizontal mode-splitting to treat the vertical diffusion implicitly. At each time step, the numerical method uses a double-sweep method to transform a large number of small tridiagonal equation systems and then uses the preconditioned conjugate-gradient method to solve a single, large, five-diagonal equation system for the water surface elevation. The governing equations for the multi-level scheme are prepared in a conservative form by integrating them over the height of each horizontal layer. The layer-integrated volumetric transports replace velocities as the dependent variables so that the depth-integrated continuity equation that is used in the solution for the water surface elevation is linear. Volumetric transports are computed explicitly from the momentum equations. The resulting method is mass conservative, efficient, and numerically accurate.
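
    The "double-sweep method" applied to the many small tridiagonal systems is the Thomas algorithm: one forward elimination sweep followed by one back-substitution sweep, O(n) per system. A minimal illustrative sketch (not the model's code):

```python
import numpy as np

# Thomas algorithm (double sweep) for a tridiagonal system with sub-diagonal
# `lower`, diagonal `diag`, and super-diagonal `upper` (lower[0] and
# upper[-1] are unused).
def thomas(lower, diag, upper, rhs):
    n = len(diag)
    c, d = np.zeros(n), np.zeros(n)
    c[0], d[0] = upper[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):                     # forward elimination sweep
        denom = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / denom if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):            # back-substitution sweep
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# 4x4 test system with diagonals (-1, 2, -1); exact solution is all ones
lower = np.array([0.0, -1.0, -1.0, -1.0])
diag = np.array([2.0, 2.0, 2.0, 2.0])
upper = np.array([-1.0, -1.0, -1.0, 0.0])
x = thomas(lower, diag, upper, np.array([1.0, 0.0, 0.0, 1.0]))
```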

  14. Simplified filtered Smith predictor for MIMO processes with multiple time delays.

    PubMed

    Santos, Tito L M; Torrico, Bismark C; Normey-Rico, Julio E

    2016-11-01

    This paper proposes a simplified tuning strategy for the multivariable filtered Smith predictor. It is shown that offset-free control can be achieved with step references and disturbances regardless of the poles of the primary controller, i.e., integral action is not explicitly required. This strategy reduces the number of design parameters and simplifies the tuning procedure because the implicit integrative poles are not considered for design purposes. The simplified approach can be used to design continuous-time or discrete-time controllers. Three case studies are used to illustrate the advantages of the proposed strategy compared with the standard approach, which is based on explicit integrative action. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Impaired Response Selection During Stepping Predicts Falls in Older People-A Cohort Study.

    PubMed

    Schoene, Daniel; Delbaere, Kim; Lord, Stephen R

    2017-08-01

    Response inhibition, an important executive function, has been identified as a risk factor for falls in older people. This study investigated whether step tests that include different levels of response inhibition differ in their ability to predict falls and whether such associations are mediated by measures of attention, speed, and/or balance. A cohort study with a 12-month follow-up was conducted in community-dwelling older people without major cognitive and mobility impairments. Participants underwent 3 step tests: (1) choice stepping reaction time (CSRT) requiring rapid decision making and step initiation; (2) inhibitory choice stepping reaction time (iCSRT) requiring additional response inhibition and response-selection (go/no-go); and (3) a Stroop Stepping Test (SST) under congruent and incongruent conditions requiring conflict resolution. Participants also completed tests of processing speed, balance, and attention as potential mediators. Ninety-three of the 212 participants (44%) fell in the follow-up period. Of the step tests, only components of the iCSRT task predicted falls in this time with the relative risk per standard deviation for the reaction time (iCSRT-RT) = 1.23 (95%CI = 1.10-1.37). Multiple mediation analysis indicated that the iCSRT-RT was independently associated with falls and not mediated through slow processing speed, poor balance, or inattention. Combined stepping and response inhibition as measured in a go/no-go test stepping paradigm predicted falls in older people. This suggests that integrity of the response-selection component of a voluntary stepping response is crucial for minimizing fall risk. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.

  16. Self-powered integrated microfluidic point-of-care low-cost enabling (SIMPLE) chip

    PubMed Central

    Yeh, Erh-Chia; Fu, Chi-Cheng; Hu, Lucy; Thakur, Rohan; Feng, Jeffrey; Lee, Luke P.

    2017-01-01

    Portable, low-cost, and quantitative nucleic acid detection is desirable for point-of-care diagnostics; however, current polymerase chain reaction testing often requires time-consuming multiple steps and costly equipment. We report an integrated microfluidic diagnostic device capable of on-site quantitative nucleic acid detection directly from the blood without separate sample preparation steps. First, we prepatterned the amplification initiator [magnesium acetate (MgOAc)] on the chip to enable digital nucleic acid amplification. Second, a simplified sample preparation step is demonstrated, where the plasma is separated autonomously into 224 microwells (100 nl per well) without any hemolysis. Furthermore, self-powered microfluidic pumping without any external pumps, controllers, or power sources is accomplished by an integrated vacuum battery on the chip. This simple chip allows rapid quantitative digital nucleic acid detection directly from human blood samples (10 to 10^5 copies of methicillin-resistant Staphylococcus aureus DNA per microliter, ~30 min, via isothermal recombinase polymerase amplification). These autonomous, portable, lab-on-chip technologies provide promising foundations for future low-cost molecular diagnostic assays. PMID:28345028

  17. Computational issues in the simulation of two-dimensional discrete dislocation mechanics

    NASA Astrophysics Data System (ADS)

    Segurado, J.; LLorca, J.; Romero, I.

    2007-06-01

    The effect of the integration time step and the introduction of a cut-off velocity for the dislocation motion was analysed in discrete dislocation dynamics (DD) simulations of a single crystal microbeam. Two loading modes, bending and uniaxial tension, were examined. It was found that a longer integration time step led to a progressive increment of the oscillations in the numerical solution, which would eventually diverge. This problem could be corrected in the simulations carried out in bending by introducing a cut-off velocity for the dislocation motion. This strategy (long integration times and a cut-off velocity for the dislocation motion) did not recover, however, the solution computed with very short time steps in uniaxial tension: the dislocation density was overestimated and the dislocation patterns modified. The different response to the same numerical algorithm was explained in terms of the nature of the dislocations generated in each case: geometrically necessary in bending and statistically stored in tension. The evolution of the dislocation density in the former was controlled by the plastic curvature of the beam and was independent of the details of the simulations. On the contrary, the steady-state dislocation density in tension was determined by the balance between nucleation of dislocations and those which are annihilated or which exit the beam. Changes in the DD imposed by the cut-off velocity altered this equilibrium and the solution. These results point to the need for detailed analyses of the accuracy and stability of the dislocation dynamic simulations to ensure that the results obtained are not fundamentally affected by the numerical strategies used to solve this complex problem.
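
    The cut-off-velocity strategy can be sketched in a few lines. This is a hedged illustration with synthetic scalar forces and a synthetic mobility law v = M*f, not a DD code: capping the velocity bounds the distance a dislocation moves per step, which is what stabilizes longer integration time steps.

```python
import numpy as np

# Hedged sketch of explicit (forward Euler) dislocation motion with an
# optional cut-off velocity, as discussed in the paper.  Forces and mobility
# are synthetic scalars, not a discrete dislocation dynamics code.
def advance_positions(x, forces, mobility, dt, v_cut=None):
    v = mobility * forces          # linear mobility law v = M*f
    if v_cut is not None:
        v = np.clip(v, -v_cut, v_cut)   # cap the dislocation velocity
    return x + dt * v

x = np.array([0.0, 1.0, 2.0])
f = np.array([1.0, -50.0, 3.0])    # one dislocation sees a huge force
x_capped = advance_positions(x, f, mobility=1.0, dt=0.1, v_cut=5.0)
x_free = advance_positions(x, f, mobility=1.0, dt=0.1)
```

    The capped update limits the large jump to v_cut*dt, at the price of altering the dynamics, which is exactly the trade-off the paper analyzes in bending versus tension.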

  18. Integrated method for chaotic time series analysis

    DOEpatents

    Hively, L.M.; Ng, E.G.

    1998-09-29

    Methods and apparatus for automatically detecting differences between similar but different states in a nonlinear process monitor nonlinear data are disclosed. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated. 8 figs.

  19. A highly accurate boundary integral equation method for surfactant-laden drops in 3D

    NASA Astrophysics Data System (ADS)

    Sorgentone, Chiara; Tornberg, Anna-Karin

    2018-05-01

    The presence of surfactants alters the dynamics of viscous drops immersed in an ambient viscous fluid. This is specifically true at small scales, such as in applications of droplet based microfluidics, where the interface dynamics become of increased importance. At such small scales, viscous forces dominate and inertial effects are often negligible. Considering Stokes flow, a numerical method based on a boundary integral formulation is presented for simulating 3D drops covered by an insoluble surfactant. The method is able to simulate drops with different viscosities and close interactions, automatically controlling the time step size and maintaining high accuracy even when substantial drop deformation appears. To achieve this, the drop surfaces as well as the surfactant concentration on each surface are represented by spherical harmonics expansions. A novel reparameterization method is introduced to ensure a high-quality representation of the drops also under deformation, specialized quadrature methods for singular and nearly singular integrals that appear in the formulation are invoked, and the adaptive time stepping scheme for the coupled drop and surfactant evolution is designed with a preconditioned implicit treatment of the surfactant diffusion.

  20. Accurate and efficient integration for molecular dynamics simulations at constant temperature and pressure

    NASA Astrophysics Data System (ADS)

    Lippert, Ross A.; Predescu, Cristian; Ierardi, Douglas J.; Mackenzie, Kenneth M.; Eastwood, Michael P.; Dror, Ron O.; Shaw, David E.

    2013-10-01

    In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity.
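
    The proposed framework amounts to scheduling thermostat and barostat updates on a coarser grid of steps than the Newtonian particle updates. A hedged sketch of the bookkeeping only (the update intervals below are illustrative, not the paper's values):

```python
# Hedged sketch of the update scheduling described above: particle updates
# every step; thermostat and barostat updates only every few steps, since
# they typically require communication among all processors.
def run(n_steps, thermo_interval=10, baro_interval=100):
    counts = {"newton": 0, "thermostat": 0, "barostat": 0}
    for step in range(1, n_steps + 1):
        counts["newton"] += 1                 # integrate positions/momenta
        if step % thermo_interval == 0:
            counts["thermostat"] += 1         # infrequent, global communication
        if step % baro_interval == 0:
            counts["barostat"] += 1           # infrequent, global communication
    return counts

counts = run(1000)
```

    With these illustrative intervals, a 1000-step run performs only 100 thermostat and 10 barostat updates, so the communication-heavy operations nearly vanish from the critical path.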

  1. Real-time control of walking using recordings from dorsal root ganglia.

    PubMed

    Holinski, B J; Everaert, D G; Mushahwar, V K; Stein, R B

    2013-10-01

    The goal of this study was to decode sensory information from the dorsal root ganglia (DRG) in real time, and to use this information to adapt the control of unilateral stepping with a state-based control algorithm consisting of both feed-forward and feedback components. In five anesthetized cats, hind limb stepping on a walkway or treadmill was produced by patterned electrical stimulation of the spinal cord through implanted microwire arrays, while neuronal activity was recorded from the DRG. Different parameters, including distance and tilt of the vector between hip and limb endpoint, integrated gyroscope and ground reaction force were modelled from recorded neural firing rates. These models were then used for closed-loop feedback. Overall, firing-rate-based predictions of kinematic sensors (limb endpoint, integrated gyroscope) were the most accurate with variance accounted for >60% on average. Force prediction had the lowest prediction accuracy (48 ± 13%) but produced the greatest percentage of successful rule activations (96.3%) for stepping under closed-loop feedback control. The prediction of all sensor modalities degraded over time, with the exception of tilt. Sensory feedback from moving limbs would be a desirable component of any neuroprosthetic device designed to restore walking in people after a spinal cord injury. This study provides a proof-of-principle that real-time feedback from the DRG is possible and could form part of a fully implantable neuroprosthetic device with further development.

  2. Two Independent Contributions to Step Variability during Over-Ground Human Walking

    PubMed Central

    Collins, Steven H.; Kuo, Arthur D.

    2013-01-01

    Human walking exhibits small variations in both step length and step width, some of which may be related to active balance control. Lateral balance is thought to require integrative sensorimotor control through adjustment of step width rather than length, contributing to greater variability in step width. Here we propose that step length variations are largely explained by the typical human preference for step length to increase with walking speed, which itself normally exhibits some slow and spontaneous fluctuation. In contrast, step width variations should have little relation to speed if they are produced more for lateral balance. As a test, we examined hundreds of overground walking steps by healthy young adults (N = 14, age < 40 yrs.). We found that slow fluctuations in self-selected walking speed (2.3% coefficient of variation) could explain most of the variance in step length (59%, P < 0.01). The residual variability not explained by speed was small (1.5% coefficient of variation), suggesting that step length is actually quite precise if not for the slow speed fluctuations. Step width varied over faster time scales and was independent of speed fluctuations, with variance 4.3 times greater than that for step length (P < 0.01) after accounting for the speed effect. That difference was further magnified by walking with eyes closed, which appears detrimental to control of lateral balance. Humans appear to modulate fore-aft foot placement in precise accordance with slow fluctuations in walking speed, whereas the variability of lateral foot placement appears more closely related to balance. Step variability is separable in both direction and time scale into balance- and speed-related components. The separation of factors not related to balance may reveal which aspects of walking are most critical for the nervous system to control. PMID:24015308
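
The decomposition used above amounts to regressing step length on walking speed and examining the residual variance. A minimal sketch with synthetic data (the speed, slope, and noise magnitudes below are illustrative, not the study's values):

```python
import random

def variance_explained(speeds, step_lengths):
    """R^2 of an ordinary least squares fit of step length on speed:
    the fraction of step-length variance explained by speed."""
    n = len(speeds)
    mx = sum(speeds) / n
    my = sum(step_lengths) / n
    sxx = sum((x - mx) ** 2 for x in speeds)
    sxy = sum((x - mx) * (y - my) for x, y in zip(speeds, step_lengths))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_tot = sum((y - my) ** 2 for y in step_lengths)
    ss_res = sum((y - (intercept + slope * x)) ** 2
                 for x, y in zip(speeds, step_lengths))
    return 1.0 - ss_res / ss_tot

# Synthetic walk: step length tracks a slowly fluctuating speed,
# plus a small speed-independent residual
random.seed(1)
speeds = [1.25 + 0.03 * random.gauss(0, 1) for _ in range(500)]
steps = [0.70 + 0.4 * (s - 1.25) + 0.005 * random.gauss(0, 1) for s in speeds]
r2 = variance_explained(speeds, steps)
```

The residual variance (1 - r2 of the total) is the analogue of the study's "precise if not for the slow speed fluctuations" component.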

  3. Nonenzymatic Wearable Sensor for Electrochemical Analysis of Perspiration Glucose.

    PubMed

    Zhu, Xiaofei; Ju, Yinhui; Chen, Jian; Liu, Deye; Liu, Hong

    2018-05-25

    We report a nonenzymatic wearable sensor for electrochemical analysis of perspiration glucose. Multipotential steps are applied on a Au electrode, including a high negative pretreatment potential step for proton reduction which produces a localized alkaline condition, a moderate potential step for electrocatalytic oxidation of glucose under the alkaline condition, and a positive potential step to clean and reactivate the electrode surface for the next detection. Fluorocarbon-based materials were coated on the Au electrode for improving the selectivity and robustness of the sensor. A fully integrated wristband is developed for continuous real-time monitoring of perspiration glucose during physical activities, and uploading the test result to a smartphone app via Bluetooth.

  4. The Technique of Changing the Drive Method of Micro Step Drive and Sensorless Drive for Hybrid Stepping Motor

    NASA Astrophysics Data System (ADS)

    Yoneda, Makoto; Dohmeki, Hideo

    Constant-current micro-step drive applied to a hybrid stepping motor yields a position control system with large torque, low vibration, and high resolution. However, because the current is held constant regardless of load torque, losses are large. Sensorless control, as used for permanent magnet motors, is one way to realize a high-efficiency position control system, but the sensorless methods proposed so far control speed rather than position. This paper therefore proposes switching between micro-step drive and sensorless drive. The switchover was verified in simulation and experiment. At no load, aligning the electrical angle and resetting the integrator to zero at the moment of switching produced no large speed change. Under load, a large speed change did arise; the proposed system avoids it by initializing the integrator with the estimated state, so that the drive method can be changed without a speed transient. With this technique, a low-loss position control system that retains the advantages of the hybrid stepping motor has been built.

  5. Fully chip-embedded automation of a multi-step lab-on-a-chip process using a modularized timer circuit.

    PubMed

    Kang, Junsu; Lee, Donghyeon; Heo, Young Jin; Chung, Wan Kyun

    2017-11-07

    For highly-integrated microfluidic systems, an actuation system is necessary to control the flow; however, the bulk of actuation devices including pumps or valves has impeded the broad application of integrated microfluidic systems. Here, we suggest a microfluidic process control method based on built-in microfluidic circuits. The circuit is composed of a fluidic timer circuit and a pneumatic logic circuit. The fluidic timer circuit is a serial connection of modularized timer units, which sequentially pass high pressure to the pneumatic logic circuit. The pneumatic logic circuit is a NOR gate array designed to control the liquid-controlling process. By using the timer circuit as a built-in signal generator, multi-step processes could be done totally inside the microchip without any external controller. The timer circuit uses only two valves per unit, and the number of process steps can be extended without limitation by adding timer units. As a demonstration, an automation chip has been designed for a six-step droplet treatment, which entails 1) loading, 2) separation, 3) reagent injection, 4) incubation, 5) clearing and 6) unloading. Each process was successfully performed for a pre-defined step-time without any external control device.

  6. Event-triggered logical flow control for comprehensive process integration of multi-step assays on centrifugal microfluidic platforms.

    PubMed

    Kinahan, David J; Kearney, Sinéad M; Dimov, Nikolay; Glynn, Macdara T; Ducrée, Jens

    2014-07-07

    The centrifugal "lab-on-a-disc" concept has proven to have great potential for process integration of bioanalytical assays, in particular where ease-of-use, ruggedness, portability, fast turn-around time and cost efficiency are of paramount importance. Yet, as all liquids residing on the disc are exposed to the same centrifugal field, an inherent challenge of these systems remains the automation of multi-step, multi-liquid sample processing and subsequent detection. In order to orchestrate the underlying bioanalytical protocols, an ample palette of rotationally and externally actuated valving schemes has been developed. While excelling with the level of flow control, externally actuated valves require interaction with peripheral instrumentation, thus compromising the conceptual simplicity of the centrifugal platform. In turn, for rotationally controlled schemes, such as common capillary burst valves, typical manufacturing tolerances tend to limit the number of consecutive laboratory unit operations (LUOs) that can be automated on a single disc. In this paper, a major advancement on recently established dissolvable film (DF) valving is presented; for the first time, a liquid handling sequence can be controlled in response to the completion of the preceding liquid transfer event, i.e., completely independently of external stimuli or changes in the speed of disc rotation. The basic, event-triggered valve configuration is further adapted to leverage conditional, large-scale process integration. First, we demonstrate a fluidic network on a disc encompassing 10 discrete valving steps including logical relationships such as an AND-conditional as well as serial and parallel flow control. Then we present a disc which is capable of implementing common laboratory unit operations such as metering and selective routing of flows.
Finally, as a pilot study, these functions are integrated on a single disc to automate a common, multi-step lab protocol for the extraction of total RNA from mammalian cell homogenate.

  7. A paradigm to guide health promotion into the 21st century: the integral idea whose time has come.

    PubMed

    Lundy, Tam

    2010-09-01

    The field of health promotion and education is at a turning point as it steps up to address the interconnected challenges of health, equity and sustainable development. Professionals and policy makers recognize the need for an integrative thinking and practice approach to foster comprehensive and coherent action in each of these complex areas. An integrative approach to policy and practice builds bridges across disciplines and discourses, supporting our efforts to take important next steps to generate sustainability and health for all. Comprehensive and coherent practice requires comprehensive and coherent theory. This article offers a brief introduction to Ken Wilber's influential Integral model, inviting its consideration as a promising paradigmatic framework that can guide thinking, practice, research and evidence as health promotion and education enter a new era. Currently influencing thought and practice leaders in diverse disciplines and sectors, the Integral approach presents a practical response to the current call for cross-disciplinary collaboration to address health, equity and sustainability. In addition, it addresses the disciplinary call for evidence-based practice that is grounded in, and accountable to, robust theoretical foundations.

  8. Effect of production management on semen quality during long-term storage in different European boar studs.

    PubMed

    Schulze, M; Kuster, C; Schäfer, J; Jung, M; Grossfeld, R

    2018-03-01

    The processing of ejaculates is a fundamental step for the fertilizing capacity of boar spermatozoa. The aim of the present study was to identify factors that affect the quality of boar semen doses. The production process during 1 day of semen processing in 26 European boar studs was monitored. In each boar stud, nine to 19 randomly selected ejaculates from 372 Pietrain boars were analyzed for sperm motility, acrosome and plasma membrane integrity, mitochondrial activity and thermo-resistance (TRT). Each ejaculate was monitored for production time and temperature for each step in semen processing using the specially programmed software SEQU (version 1.7, Minitüb, Tiefenbach, Germany). The dilution of ejaculates with a short-term extender was completed in one step in 10 AI centers (n = 135 ejaculates), in two steps in 11 AI centers (n = 158 ejaculates) and in three steps in five AI centers (n = 79 ejaculates). Results indicated greater semen quality with one-step isothermal dilution than with multi-step dilution of AI semen doses (total motility TRT d7: 71.1 ± 19.2%, 64.6 ± 20.0%, 47.1 ± 27.1% for one-step, two-step and three-step dilution, respectively; P < .05). There was a marked advantage when using the one-step isothermal dilution regarding time management, preservation suitability, stability and stress resistance. One-step dilution significantly reduced holding times of raw ejaculates and, with fewer processing steps, lowered the risk of errors. These results lead to refined recommendations for boar semen processing. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. User's guide to four-body and three-body trajectory optimization programs

    NASA Technical Reports Server (NTRS)

    Pu, C. L.; Edelbaum, T. N.

    1974-01-01

    A collection of computer programs and subroutines written in FORTRAN to calculate 4-body (sun-earth-moon-space) and 3-body (earth-moon-space) optimal trajectories is presented. The programs incorporate a variable-step integration technique and a quadrature formula to correct single-step errors. The programs provide the capability to solve the initial value problem, the two-point boundary value problem of a transfer from a given initial position to a given final position in fixed time, the optimal 2-impulse transfer from an earth parking orbit of given inclination to a given final position and velocity in fixed time, and the optimal 3-impulse transfer from a given position to a given final position and velocity in fixed time.

  10. Multigrid for hypersonic viscous two- and three-dimensional flows

    NASA Technical Reports Server (NTRS)

    Turkel, E.; Swanson, R. C.; Vatsa, V. N.; White, J. A.

    1991-01-01

    The use of a multigrid method with central differencing to solve the Navier-Stokes equations for hypersonic flows is considered. The time dependent form of the equations is integrated with an explicit Runge-Kutta scheme accelerated by local time stepping and implicit residual smoothing. Variable coefficients are developed for the implicit process that removes the diffusion limit on the time step, producing significant improvement in convergence. A numerical dissipation formulation that provides good shock capturing capability for hypersonic flows is presented. This formulation is shown to be a crucial aspect of the multigrid method. Solutions are given for two-dimensional viscous flow over a NACA 0012 airfoil and three-dimensional flow over a blunt biconic.

  11. A far-field non-reflecting boundary condition for two-dimensional wake flows

    NASA Technical Reports Server (NTRS)

    Danowitz, Jeffrey S.; Abarbanel, Saul A.; Turkel, Eli

    1995-01-01

    Far-field boundary conditions for external flow problems have been developed based upon long-wave perturbations of linearized flow equations about a steady state far field solution. The boundary improves convergence to steady state in single-grid temporal integration schemes using both regular-time-stepping and local-time-stepping. The far-field boundary may be near the trailing edge of the body which significantly reduces the number of grid points, and therefore the computational time, in the numerical calculation. In addition the solution produced is smoother in the far-field than when using extrapolation conditions. The boundary condition maintains the convergence rate to steady state in schemes utilizing multigrid acceleration.

  12. Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks.

    PubMed

    Rangan, Aaditya V; Cai, David

    2007-02-01

    We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure of firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models-for example, those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of single neuron spike-time via a polynomial interpolation as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in [Formula: see text] operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rate, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system. 
For networks operating in realistic dynamical regimes, such as strongly fluctuating, high-conductance states, our methods are designed to achieve statistical accuracy when very large time-steps are used. Moreover, our methods can also achieve trajectory-wise accuracy when small time-steps are used.
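
The integrating-factor idea in point (i) can be illustrated for a single conductance-based neuron: if the conductances are treated as frozen over a step, the voltage equation is linear and can be advanced exactly, which remains stable even in stiff high-conductance states. A minimal sketch (parameter values and units are illustrative, not from the paper):

```python
import math

def exact_step(v, g_leak, g_exc, dt, e_leak=-65.0, e_exc=0.0):
    """One integrating-factor update for
    dV/dt = -g_L (V - E_L) - g_E (V - E_E),
    with the conductances held constant over the step. The update is
    exact for constant g, hence stable for arbitrarily large g*dt."""
    g_tot = g_leak + g_exc
    v_eq = (g_leak * e_leak + g_exc * e_exc) / g_tot   # steady-state voltage
    return v_eq + (v - v_eq) * math.exp(-g_tot * dt)

# Stiff high-conductance regime with a large step: no instability
v = -65.0
for _ in range(100):
    v = exact_step(v, g_leak=0.05, g_exc=1.0, dt=5.0)
```

A forward Euler step would be unstable here (g_tot * dt > 2), whereas the exponential update simply relaxes the voltage toward its steady state.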

  13. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  14. Method and apparatus for characterizing propagation delays of integrated circuit devices

    NASA Technical Reports Server (NTRS)

    Blaes, Brent R. (Inventor); Buehler, Martin G. (Inventor)

    1987-01-01

    Propagation delay of a signal through a channel is measured by cyclically generating a first step-wave signal for transmission through the channel to a two-input logic element, and a second step-wave signal, with a controlled delay, applied to the second input terminal of the logic element. The logic element determines which signal is present first at its input terminals and stores a binary signal indicative of that determination to control the delay of the second signal, which is advanced or retarded for the next cycle until the propagation-delayed first step-wave signal and the controlled-delay second step-wave signal are coincident. The propagation delay of the channel is then determined by measuring the time between the first and second step-wave signals out of the controlled step-wave signal generator.
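
The advance/retard scheme can be sketched as a software simulation: an arbiter decides which edge arrived first, and the controlled delay is nudged one tick per cycle until it oscillates around the channel delay. All names and the tick units below are illustrative, not from the patent:

```python
def measure_delay(channel_delay, step=1, max_cycles=200):
    """Successive advance/retard search: each cycle, a reference edge
    with controlled delay `d` races the channel-delayed edge; the
    arbiter's one-bit verdict nudges `d` toward coincidence.
    All times are in arbitrary integer ticks."""
    d = 0
    for _ in range(max_cycles):
        reference_first = d < channel_delay   # arbiter: reference edge earlier?
        if reference_first:
            d += step                         # retard the reference edge
        else:
            d -= step                         # advance the reference edge
    return d                                  # oscillates around channel_delay

est = measure_delay(channel_delay=42)
```

Once coincidence is reached, the estimate dithers by one step around the true delay, so the resolution is set by the step size of the controlled delay generator.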

  15. Orientation, Evaluation, and Integration of Part-Time Nursing Faculty.

    PubMed

    Carlson, Joanne S

    2015-07-10

    This study helps to quantify and describe orientation, evaluation, and integration practices pertaining to part-time clinical nursing faculty teaching in prelicensure nursing education programs. A researcher-designed Web-based survey was used to collect information from a convenience sample of part-time clinical nursing faculty teaching in prelicensure nursing programs. Survey questions focused on the amount and type of orientation, evaluation, and integration practices. Descriptive statistics were used to analyze results. Respondents reported on average four hours of orientation, with close to half reporting no more than two hours. Evaluative feedback was received much more often from students than from full-time faculty. Most respondents reported receiving some degree of mentoring and that it was easy to get help from full-time faculty. Respondents reported being most informed about student evaluation procedures, grading, and the steps to take when students are not meeting course objectives, and less informed about changes to ongoing curriculum and policy.

  16. Quadratic adaptive algorithm for solving cardiac action potential models.

    PubMed

    Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing

    2016-10-01

    An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, and sometimes causes the action potential to become distorted. In contrast, the proposed new method chooses very fine time steps in the peak region, but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set to be greater than 0.1 ms. In contrast, our method can adjust the time step size automatically, and is stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and further investigation is needed to determine whether the method is helpful during propagation of the action potential. Copyright © 2016 Elsevier Ltd. All rights reserved.
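
One illustrative reading of the quadratic step-size rule: choose the largest dt for which the second-order prediction |V'·dt + ½·V''·dt²| stays within a membrane-potential tolerance. The sketch below omits the paper's extremum-locator (el) and time step restriction (tsr) refinements, and the derivative scales are assumed, not taken from the paper:

```python
import math

def quadratic_dt(dv, d2v, dv_max=0.5, dt_min=0.001, dt_max=1.0):
    """Pick dt so the predicted voltage change dV*dt + 0.5*d2V*dt^2
    stays within dv_max, by solving a*dt^2 + b*dt = dv_max for the
    positive root (an illustrative reading of the quadratic rule)."""
    a, b = 0.5 * abs(d2v), abs(dv)
    if a < 1e-12:                       # nearly linear: first-order bound
        dt = dv_max / b if b > 1e-12 else dt_max
    else:                               # positive root of the quadratic
        dt = (-b + math.sqrt(b * b + 4.0 * a * dv_max)) / (2.0 * a)
    return min(dt_max, max(dt_min, dt))

# Steep upstroke (large derivatives) -> fine step; plateau -> coarse step
dt_upstroke = quadratic_dt(dv=300.0, d2v=2000.0)   # mV/ms scales assumed
dt_plateau = quadratic_dt(dv=0.05, d2v=0.01)
```

The clamp to [dt_min, dt_max] plays the role of the paper's restriction on how fast the step may grow.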

  17. Modifications to WRF's dynamical core to improve the treatment of moisture for large-eddy simulations: WRF DY-CORE MOISTURE TREATMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Heng; Endo, Satoshi; Wong, May

    Yamaguchi and Feingold (2012) note that the cloud fields in their Weather Research and Forecasting (WRF) large-eddy simulations (LESs) of marine stratocumulus exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (qv) in the pressure gradient calculation in the acoustic sub-stepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θm=θ(1+1.61qv), which allows consistent treatment of moisture in the calculation of pressure during the acoustic sub-steps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic sub-steps) are eliminated in both of the example stratocumulus cases. This modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.

  18. Modifications to WRFs dynamical core to improve the treatment of moisture for large-eddy simulations

    DOE PAGES

    Xiao, Heng; Endo, Satoshi; Wong, May; ...

    2015-10-29

    Yamaguchi and Feingold (2012) note that the cloud fields in their large-eddy simulations (LESs) of marine stratocumulus using the Weather Research and Forecasting (WRF) model exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (qv) in the pressure gradient calculation in the acoustic sub-stepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θm=θ(1+1.61qv), which allows consistent treatment of moisture in the calculation of pressure during the acoustic sub-steps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic sub-steps) are eliminated in both of the example stratocumulus cases. In conclusion, this modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.
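
The fix centers on a single algebraic substitution, θm = θ(1 + 1.61·qv), so that water vapor enters the pressure calculation consistently. A small sketch of the conversion and of the θm shift produced by a vapor jump across the inversion (the 1 g/kg jump is an illustrative value, not from the paper):

```python
def moist_potential_temperature(theta, qv):
    """WRF-style moist potential temperature theta_m = theta*(1 + 1.61*qv),
    where 1.61 is approximately Rv/Rd and qv is the water vapor mixing
    ratio in kg/kg."""
    return theta * (1.0 + 1.61 * qv)

# A 1 g/kg drop in qv across the inversion shifts theta_m by ~0.5 K;
# this is the signal that is lost if qv is neglected in the pressure solve
theta = 290.0   # K, an assumed boundary-layer potential temperature
dq = 0.001      # kg/kg, an assumed vapor jump
shift = (moist_potential_temperature(theta, dq)
         - moist_potential_temperature(theta, 0.0))
```

Half a kelvin is comparable to the temperature contrasts driving stratocumulus circulations, which is why ignoring the qv term can generate spurious motions.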

  19. A framework for simultaneous aerodynamic design optimization in the presence of chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Günther, Stefanie, E-mail: stefanie.guenther@scicomp.uni-kl.de; Gauger, Nicolas R.; Wang, Qiqi

    Integrating existing solvers for unsteady partial differential equations into a simultaneous optimization method is challenging due to the forward-in-time information propagation of classical time-stepping methods. This paper applies the simultaneous single-step one-shot optimization method to a reformulated unsteady constraint that allows for both forward- and backward-in-time information propagation. Especially in the presence of chaotic and turbulent flow, solving the initial value problem simultaneously with the optimization problem often scales poorly with the time domain length. The new formulation relaxes the initial condition and instead solves a least squares problem for the discrete partial differential equations. This enables efficient one-shot optimization that is independent of the time domain length, even in the presence of chaos.

  20. Dynamical Chaos in the Wisdom-Holman Integrator: Origins and Solutions

    NASA Technical Reports Server (NTRS)

    Rauch, Kevin P.; Holman, Matthew

    1999-01-01

    We examine the nonlinear stability of the Wisdom-Holman (WH) symplectic mapping applied to the integration of perturbed, highly eccentric (e ≈ 0.9) two-body orbits. We find that the method is unstable and introduces artificial chaos into the computed trajectories for this class of problems, unless the step size chosen is small enough that periapse is always resolved, in which case the method is generically stable. This 'radial orbit instability' persists even for weakly perturbed systems. Using the Stark problem as a fiducial test case, we investigate the dynamical origin of this instability and argue that the numerical chaos results from the overlap of step-size resonances; interestingly, for the Stark problem many of these resonances appear to be absolutely stable. We similarly examine the robustness of several alternative integration methods: a time-regularized version of the WH mapping suggested by Mikkola; the potential-splitting (PS) method of Duncan, Levison & Lee; and two original methods incorporating approximations based on Stark motion instead of Keplerian motion. The two-fixed-point problem and a related, more general problem are used to conduct a comparative test of the various methods for several types of motion. Among the algorithms tested, the time-transformed WH mapping is clearly the most efficient and stable method of integrating eccentric, nearly Keplerian orbits in the absence of close encounters. For test particles subject to both high eccentricities and very close encounters, we find an enhanced version of the PS method (incorporating time regularization, force-center switching, and an improved kernel function) to be both economical and highly versatile. We conclude that Stark-based methods are of marginal utility in N-body type integrations. Additional implications for the symplectic integration of N-body systems are discussed.

  1. Mitigation of narrowband interferences by means of a reconfigurable stepped frequency GPR system

    NASA Astrophysics Data System (ADS)

    Persico, Raffaele; Dei, Devis; Parrini, Filippo; Matera, Loredana

    2016-08-01

    This paper proposes a new technique for the mitigation of narrowband interferences, making use of an innovative stepped-frequency Ground Penetrating Radar (GPR) system based on modulating the integration time of the harmonic components of the signal. This allows good rejection of the interference without filtering out part of the band of the useful signal (which would involve a loss of information) and without increasing the power of the transmitted signal (which might saturate the receiver and push the transmitted power beyond legal limits). The price paid is an increase in the time needed to perform the measurements. We show that this drawback can be contained by making use of a prototype reconfigurable stepped-frequency GPR system.

  2. Development and verification of a real-time stochastic precipitation nowcasting system for urban hydrology in Belgium

    NASA Astrophysics Data System (ADS)

    Foresti, L.; Reyniers, M.; Seed, A.; Delobbe, L.

    2016-01-01

    The Short-Term Ensemble Prediction System (STEPS) is implemented in real-time at the Royal Meteorological Institute (RMI) of Belgium. The main idea behind STEPS is to quantify the forecast uncertainty by adding stochastic perturbations to the deterministic Lagrangian extrapolation of radar images. The stochastic perturbations are designed to account for the unpredictable precipitation growth and decay processes and to reproduce the dynamic scaling of precipitation fields, i.e., the observation that large-scale rainfall structures are more persistent and predictable than small-scale convective cells. This paper presents the development, adaptation and verification of the STEPS system for Belgium (STEPS-BE). STEPS-BE provides in real-time 20-member ensemble precipitation nowcasts at 1 km and 5 min resolutions up to 2 h lead time using a 4 C-band radar composite as input. In the context of the PLURISK project, STEPS forecasts were generated to be used as input in sewer system hydraulic models for nowcasting urban inundations in the cities of Ghent and Leuven. Comprehensive forecast verification was performed in order to detect systematic biases over the given urban areas and to analyze the reliability of probabilistic forecasts for a set of case studies in 2013 and 2014. The forecast biases over the cities of Leuven and Ghent were found to be small, which is encouraging for future integration of STEPS nowcasts into the hydraulic models. Probabilistic forecasts of exceeding 0.5 mm h-1 are reliable up to 60-90 min lead time, while the ones of exceeding 5.0 mm h-1 are only reliable up to 30 min. The STEPS ensembles are slightly under-dispersive and represent only 75-90 % of the forecast errors.
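    The core STEPS idea, a deterministic Lagrangian extrapolation plus stochastic perturbations to form an ensemble, can be sketched in a few lines. The toy 1D example below is illustrative only: real STEPS applies a scale-filtered multiplicative noise cascade to radar composites, whereas here the field, advection shift, and smoothing kernel are all invented for demonstration.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_members, n_cells, shift = 20, 100, 5

    rain = rng.gamma(2.0, 1.0, n_cells)          # "observed" 1D rain field (mm/h)
    deterministic = np.roll(rain, shift)         # Lagrangian extrapolation (advection)

    def smooth(x, w=7):
        """Crude moving-average filter standing in for a correlated-noise cascade."""
        return np.convolve(x, np.ones(w) / w, mode="same")

    # Each member = extrapolation perturbed by smoothed noise (a proxy for
    # growth/decay processes), clipped at zero so rain rates stay physical.
    ensemble = np.array([
        np.clip(deterministic * (1 + 0.5 * smooth(rng.normal(size=n_cells))), 0, None)
        for _ in range(n_members)
    ])
    spread = ensemble.std(axis=0).mean()
    print(ensemble.shape, round(float(spread), 3))
    ```

    The ensemble spread quantifies forecast uncertainty; in the verification described above, a well-calibrated ensemble's spread should match the actual forecast error, which is exactly where STEPS-BE was found to be slightly under-dispersive.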

  3. Development and verification of a real-time stochastic precipitation nowcasting system for urban hydrology in Belgium

    NASA Astrophysics Data System (ADS)

    Foresti, L.; Reyniers, M.; Seed, A.; Delobbe, L.

    2015-07-01

    The Short-Term Ensemble Prediction System (STEPS) is implemented in real-time at the Royal Meteorological Institute (RMI) of Belgium. The main idea behind STEPS is to quantify the forecast uncertainty by adding stochastic perturbations to the deterministic Lagrangian extrapolation of radar images. The stochastic perturbations are designed to account for the unpredictable precipitation growth and decay processes and to reproduce the dynamic scaling of precipitation fields, i.e., the observation that large-scale rainfall structures are more persistent and predictable than small-scale convective cells. This paper presents the development, adaptation and verification of the STEPS system for Belgium (STEPS-BE). STEPS-BE provides in real-time 20-member ensemble precipitation nowcasts at 1 km and 5 min resolution up to 2 h lead time using a 4 C-band radar composite as input. In the context of the PLURISK project, STEPS forecasts were generated to be used as input in sewer system hydraulic models for nowcasting urban inundations in the cities of Ghent and Leuven. Comprehensive forecast verification was performed in order to detect systematic biases over the given urban areas and to analyze the reliability of probabilistic forecasts for a set of case studies in 2013 and 2014. The forecast biases over the cities of Leuven and Ghent were found to be small, which is encouraging for future integration of STEPS nowcasts into the hydraulic models. Probabilistic forecasts of exceeding 0.5 mm h-1 are reliable up to 60-90 min lead time, while the ones of exceeding 5.0 mm h-1 are only reliable up to 30 min. The STEPS ensembles are slightly under-dispersive and represent only 80-90 % of the forecast errors.

  4. Tribo-functionalizing Si and SU8 materials by surface modification for application in MEMS/NEMS actuator-based devices

    NASA Astrophysics Data System (ADS)

    Singh, R. A.; Satyanarayana, N.; Kustandi, T. S.; Sinha, S. K.

    2011-01-01

    Micro/nano-electro-mechanical-systems (MEMS/NEMS) are miniaturized devices built at micro/nanoscales. At these scales, the surface/interfacial forces are extremely strong and they adversely affect the smooth operation and the useful operating lifetimes of such devices. When these forces manifest in severe forms, they lead to material removal and thereby reduce the wear durability of the devices. In this paper, we present a simple, yet robust, two-step surface modification method to significantly enhance the tribological performance of MEMS/NEMS materials. The two-step method involves oxygen plasma treatment of polymeric films and the application of a nanolubricant, namely perfluoropolyether. We apply the two-step method to the two most important MEMS/NEMS structural materials, namely silicon and SU8 polymer. On applying surface modification to these materials, their initial coefficient of friction reduces by ~4-7 times and the steady-state coefficient of friction reduces by ~2.5-3.5 times. Simultaneously, the wear durability of both the materials increases by >1000 times. The two-step method is time effective as each of the steps takes the time duration of approximately 1 min. It is also cost effective as the oxygen plasma treatment is a part of the MEMS/NEMS fabrication process. The two-step method can be readily and easily integrated into MEMS/NEMS fabrication processes. It is anticipated that this method will work for any kind of structural material from which MEMS/NEMS are or can be made.

  5. Implicit and semi-implicit schemes in the Versatile Advection Code: numerical tests

    NASA Astrophysics Data System (ADS)

    Toth, G.; Keppens, R.; Botchev, M. A.

    1998-04-01

    We describe and evaluate various implicit and semi-implicit time integration schemes applied to the numerical simulation of hydrodynamical and magnetohydrodynamical problems. The schemes were implemented recently in the software package Versatile Advection Code, which uses modern shock capturing methods to solve systems of conservation laws with optional source terms. The main advantage of implicit solution strategies over explicit time integration is that the restrictive constraint on the allowed time step can be (partially) eliminated, thus the computational cost is reduced. The test problems cover one and two dimensional, steady state and time accurate computations, and the solutions contain discontinuities. For each test, we confront explicit with implicit solution strategies.
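    The advantage claimed above, that implicit schemes remove the restrictive time-step constraint of explicit ones, is easily demonstrated on the classic stiff test equation y' = -λy. This is a generic illustration, not a Versatile Advection Code example:

    ```python
    lam = 1000.0      # stiff decay rate in y' = -lam * y
    dt = 0.003        # exceeds the explicit stability limit 2 / lam = 0.002
    n = 50
    y_exp = y_imp = 1.0
    for _ in range(n):
        y_exp = y_exp * (1 - lam * dt)   # forward Euler: |1 - lam*dt| = 2, blows up
        y_imp = y_imp / (1 + lam * dt)   # backward Euler: unconditionally stable
    print(y_exp, y_imp)
    ```

    Forward Euler amplifies the solution by a factor of 2 per step and diverges, while backward Euler decays monotonically at the same step size; the price of the implicit scheme is the (here trivial, in general nonlinear) solve per step.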

  6. Functional Fault Modeling of a Cryogenic System for Real-Time Fault Detection and Isolation

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Lewis, Mark; Perotti, Jose; Oostdyk, Rebecca; Brown, Barbara

    2010-01-01

    The purpose of this paper is to present the model development process used to create a Functional Fault Model (FFM) of a liquid hydrogen (LH2) system that will be used for real-time fault isolation in a Fault Detection, Isolation and Recovery (FDIR) system. The paper explains the steps in the model development process and the data products required at each step, including examples of how the steps were performed for the LH2 system. It also shows the relationship between the FDIR requirements and steps in the model development process. The paper concludes with a description of a demonstration of the LH2 model developed using the process and future steps for integrating the model in a live operational environment.

  7. Visual and kinesthetic locomotor imagery training integrated with auditory step rhythm for walking performance of patients with chronic stroke.

    PubMed

    Kim, Jin-Seop; Oh, Duck-Won; Kim, Suhn-Yeop; Choi, Jong-Duk

    2011-02-01

    To compare the effect of visual and kinesthetic locomotor imagery training on walking performance and to determine the clinical feasibility of incorporating auditory step rhythm into the training. Randomized crossover trial. Laboratory of a Department of Physical Therapy. Fifteen subjects with post-stroke hemiparesis. Four locomotor imagery trainings on walking performance: visual locomotor imagery training, kinesthetic locomotor imagery training, visual locomotor imagery training with auditory step rhythm and kinesthetic locomotor imagery training with auditory step rhythm. The timed up-and-go test and electromyographic and kinematic analyses of the affected lower limb during one gait cycle. After the interventions, significant differences were found in the timed up-and-go test results between the visual locomotor imagery training (25.69 ± 16.16 to 23.97 ± 14.30) and the kinesthetic locomotor imagery training with auditory step rhythm (22.68 ± 12.35 to 15.77 ± 8.58) (P < 0.05). During the swing and stance phases, the kinesthetic locomotor imagery training exhibited significantly increased activation in a greater number of muscles and increased angular displacement of the knee and ankle joints compared with the visual locomotor imagery training, and these effects were more prominent when auditory step rhythm was integrated into each form of locomotor imagery training. The activation of the hamstring during the swing phase and the gastrocnemius during the stance phase, as well as kinematic data of the knee joint, were significantly different for posttest values between the visual locomotor imagery training and the kinesthetic locomotor imagery training with auditory step rhythm (P < 0.05). The therapeutic effect may be greater for kinesthetic than for visual locomotor imagery training. The auditory step rhythm together with the locomotor imagery training produces a greater positive effect in improving the walking performance of patients with post-stroke hemiparesis.

  8. Advanced real-time multi-display educational system (ARMES): An innovative real-time audiovisual mentoring tool for complex robotic surgery.

    PubMed

    Lee, Joong Ho; Tanaka, Eiji; Woo, Yanghee; Ali, Güner; Son, Taeil; Kim, Hyoung-Il; Hyung, Woo Jin

    2017-12-01

    The recent scientific and technological advances have profoundly affected the training of surgeons worldwide. We describe a novel intraoperative real-time training module, the Advanced Robotic Multi-display Educational System (ARMES). We created a real-time training module, ARMES, which provides standardized step-by-step guidance for the robotic distal subtotal gastrectomy with D2 lymphadenectomy procedure. Short video clips of 20 key steps in the standardized procedure for robotic gastrectomy were created and integrated with TilePro™ software for delivery on the da Vinci Surgical System (Intuitive Surgical, Sunnyvale, CA). We successfully performed the robotic distal subtotal gastrectomy with D2 lymphadenectomy for a patient with gastric cancer employing this new teaching method without any transfer errors or system failures. Using this technique, the total operative time was 197 min, blood loss was 50 mL, and there were no intra- or post-operative complications. Our innovative real-time mentoring module, ARMES, enables standardized, systematic guidance during surgical procedures. © 2017 Wiley Periodicals, Inc.

  9. The Peroxide Pathway

    NASA Technical Reports Server (NTRS)

    McNeal, Curtis I., Jr.; Anderson, William

    1999-01-01

    NASA's current focus on technology roadmaps as a tool for guiding investment decisions leads naturally to a discussion of NASA's roadmap for peroxide propulsion system development. NASA's new Second Generation Space Transportation System roadmap calls for an integrated Reusable Upper-Stage (RUS) engine technology demonstration in the FY03/FY04 time period. Preceding this integrated demonstration are several years of component developments and subsystem technology demonstrations. NASA and the Air Force took the first steps at developing focused upper stage technologies with the initiation of the Upper Stage Flight Experiment with Orbital Sciences in December 1997. A review of this program's peroxide propulsion development is a useful first step in establishing the peroxide propulsion pathway that could lead to a RUS demonstration in 2004.

  10. Time Dependent Studies of Reactive Shocks in the Gas Phase

    DTIC Science & Technology

    1978-11-16

    which takes advantage of time-step splitting. The fluid dynamics time integration is performed by an explicit two-step predictor-corrector technique... (Naval Research Laboratory, Washington, DC; ONR Project RR024.02.41) ...self-consistently on their own characteristic time-scales using the flux-corrected transport and selected asymptotic methods, respectively. Results are...

  11. Mixed time integration methods for transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1982-01-01

    The computational methods used to predict and optimize the thermal structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for the linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
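    The idea of a critical time step for the explicit part of such a scheme can be illustrated on a semi-discrete 1D conduction problem. This is an illustrative stand-in: the paper's estimate is for the linear quadrilateral element, and all constants below are invented. For forward-Euler time stepping of T' = -A T, the stable step is bounded by 2 divided by the largest eigenvalue of A:

    ```python
    import numpy as np

    # 1D conduction, n interior nodes, spacing dx, diffusivity alpha:
    # semi-discrete system T' = -A T with A = (alpha/dx^2) * tridiag(-1, 2, -1).
    n, dx, alpha = 50, 0.02, 1.0e-4
    A = (alpha / dx**2) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    lam_max = np.linalg.eigvalsh(A).max()
    dt_crit = 2.0 / lam_max                # explicit (forward Euler) stability limit
    print(dt_crit, dx**2 / (2 * alpha))    # vs. closed-form estimate dx^2 / (2*alpha)
    ```

    The eigenvalue bound reproduces the familiar closed-form estimate dt_crit ≈ dx²/(2α), which is why refining the mesh for the fast (thermal or mechanical) physics quickly makes a fully explicit scheme uneconomical and motivates the mixed implicit-explicit approach.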

  12. Mixed time integration methods for transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1983-01-01

    The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for the linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.

  13. Numerical integration of KPZ equation with restrictions

    NASA Astrophysics Data System (ADS)

    Torres, M. F.; Buceta, R. C.

    2018-03-01

    In this paper, we introduce a novel integration method for the Kardar–Parisi–Zhang (KPZ) equation. It is known that if during the discrete integration of the KPZ equation the nearest-neighbor height-difference exceeds a critical value, instabilities appear and the integration diverges. One way to avoid these instabilities is to replace the KPZ nonlinear-term by a function of the same term that depends on a single adjustable parameter which is able to control pillars or grooves growing on the interface. Here, we propose a different integration method which consists of directly limiting the value taken by the KPZ nonlinearity, thereby imposing a restriction rule that is applied in each integration time-step, as if it were the growth rule of a restricted discrete model, e.g. restricted-solid-on-solid (RSOS). Taking the discrete KPZ equation with restrictions to its dimensionless version, the integration depends on three parameters: the coupling constant g, the inverse of the time-step k, and the restriction constant ε which is chosen to eliminate divergences while keeping all the properties of the continuous KPZ equation. We study in detail the conditions in the parameter space that avoid divergences in the 1-dimensional integration and reproduce the scaling properties of the continuous KPZ with a particular parameter set. We apply the tested methodology to the d-dimensional case (d = 3, 4) with the purpose of obtaining the growth exponent β, by establishing the conditions of the coupling constant g under which we recover known values reached by other authors, particularly for the RSOS model. This method allows us to infer that d = 4 is not the critical dimension of the KPZ universality class, where the strong-coupling phase disappears.
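    A minimal sketch of the restriction rule described above, clipping the KPZ nonlinearity at every time step so the 1D Euler integration cannot diverge, might look as follows. The parameters are illustrative, not the paper's tested set, and the clipping bound here plays the role of the restriction constant:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L, steps = 256, 2000
    dt, nu, lam, D = 0.01, 1.0, 4.0, 1.0
    eps = 2.0                                # restriction on the nonlinear term
    h = np.zeros(L)                          # flat initial interface (dx = 1)

    for _ in range(steps):
        hp, hm = np.roll(h, -1), np.roll(h, 1)
        lap = hp - 2 * h + hm                # discrete Laplacian
        grad = 0.5 * (hp - hm)               # centered gradient
        nonlin = np.clip(0.5 * lam * grad**2, -eps, eps)   # restricted nonlinearity
        noise = np.sqrt(2 * D / dt) * rng.normal(size=L)
        h += dt * (nu * lap + nonlin + noise)

    width = h.std()
    print(np.isfinite(h).all(), width)
    ```

    Without the `np.clip` the quadratic term can feed back on a large local height difference and blow up; with it, each step behaves like the bounded growth rule of a restricted discrete model, which is the paper's analogy with RSOS.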

  14. Climate Framework for Uncertainty, Negotiation, and Distribution (FUND)

    EPA Science Inventory

    FUND is an Integrated Assessment model that links socioeconomic, technology, and emission scenarios with atmospheric chemistry, climate dynamics, sea level rise, and the resulting economic impacts. The model runs in time-steps of one year from 1950 to 2300, and distinguishes 16 m...

  15. Real-time control of walking using recordings from dorsal root ganglia

    NASA Astrophysics Data System (ADS)

    Holinski, B. J.; Everaert, D. G.; Mushahwar, V. K.; Stein, R. B.

    2013-10-01

    Objective. The goal of this study was to decode sensory information from the dorsal root ganglia (DRG) in real time, and to use this information to adapt the control of unilateral stepping with a state-based control algorithm consisting of both feed-forward and feedback components. Approach. In five anesthetized cats, hind limb stepping on a walkway or treadmill was produced by patterned electrical stimulation of the spinal cord through implanted microwire arrays, while neuronal activity was recorded from the DRG. Different parameters, including distance and tilt of the vector between hip and limb endpoint, integrated gyroscope and ground reaction force were modelled from recorded neural firing rates. These models were then used for closed-loop feedback. Main results. Overall, firing-rate-based predictions of kinematic sensors (limb endpoint, integrated gyroscope) were the most accurate with variance accounted for >60% on average. Force prediction had the lowest prediction accuracy (48 ± 13%) but produced the greatest percentage of successful rule activations (96.3%) for stepping under closed-loop feedback control. The prediction of all sensor modalities degraded over time, with the exception of tilt. Significance. Sensory feedback from moving limbs would be a desirable component of any neuroprosthetic device designed to restore walking in people after a spinal cord injury. This study provides a proof-of-principle that real-time feedback from the DRG is possible and could form part of a fully implantable neuroprosthetic device with further development.
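    The firing-rate-to-sensor modelling step can be sketched as an ordinary least-squares decoder scored with variance accounted for (VAF), the accuracy metric used in the study. The example below is fully synthetic (the linear tuning model and noise level are invented) and only illustrates the fitting and VAF computation:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_t, n_units = 2000, 30

    # Synthetic kinematic signal (e.g. limb-endpoint tilt) and tuned firing rates.
    kin = np.sin(np.linspace(0, 20 * np.pi, n_t))
    weights = rng.normal(size=n_units)
    rates = np.outer(kin, weights) + 0.3 * rng.normal(size=(n_t, n_units))

    # Least-squares linear decoder fit on the first half, tested on the second.
    half = n_t // 2
    coef, *_ = np.linalg.lstsq(rates[:half], kin[:half], rcond=None)
    pred = rates[half:] @ coef
    vaf = 1 - np.var(kin[half:] - pred) / np.var(kin[half:])
    print(f"VAF = {vaf:.2f}")
    ```

    A decoded estimate with VAF above the ~60% level reported for the kinematic sensors would then drive the state-machine rules of the closed-loop stepping controller.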

  16. Real-time control of walking using recordings from dorsal root ganglia

    PubMed Central

    Holinski, B J; Everaert, D G; Mushahwar, V K; Stein, R B

    2013-01-01

    Objective The goal of this study was to decode sensory information from the dorsal root ganglia (DRG) in real time, and to use this information to adapt the control of unilateral stepping with a state-based control algorithm consisting of both feed-forward and feedback components. Approach In five anesthetized cats, hind limb stepping on a walkway or treadmill was produced by patterned electrical stimulation of the spinal cord through implanted microwire arrays, while neuronal activity was recorded from the dorsal root ganglia. Different parameters, including distance and tilt of the vector between hip and limb endpoint, integrated gyroscope and ground reaction force were modeled from recorded neural firing rates. These models were then used for closed-loop feedback. Main Results Overall, firing-rate based predictions of kinematic sensors (limb endpoint, integrated gyroscope) were the most accurate with variance accounted for >60% on average. Force prediction had the lowest prediction accuracy (48±13%) but produced the greatest percentage of successful rule activations (96.3%) for stepping under closed-loop feedback control. The prediction of all sensor modalities degraded over time, with the exception of tilt. Significance Sensory feedback from moving limbs would be a desirable component of any neuroprosthetic device designed to restore walking in people after a spinal cord injury. This study provides a proof-of-principle that real-time feedback from the DRG is possible and could form part of a fully implantable neuroprosthetic device with further development. PMID:23928579

  17. A computational method for sharp interface advection.

    PubMed

    Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje

    2016-11-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source.

  18. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics — Monte Carlo Canonical Propagation Algorithm

    PubMed Central

    Weare, Jonathan; Dinner, Aaron R.; Roux, Benoît

    2016-01-01

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826
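    A minimal sketch of the hybrid MD-MC idea, propagating with a cheap (here, deliberately mismatched) force but accepting with a Metropolis test on the exact Hamiltonian so that the Boltzmann distribution is preserved, might look like this for a 1D harmonic reference potential. All parameters are illustrative, and this omits the RESPA-style recursive inner loop of the full DHMTS algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    beta, dt, n_leap, n_trials = 1.0, 0.2, 10, 5000

    V_exact = lambda x: 0.5 * x * x          # "expensive" reference potential
    F_cheap = lambda x: -1.05 * x            # inexpensive, slightly wrong force

    x, samples, accepted = 0.0, [], 0
    for _ in range(n_trials):
        p = rng.normal()                     # resample momentum (mass = 1)
        x0, p0 = x, p
        # Velocity Verlet with the cheap force (reversible, volume-preserving).
        p += 0.5 * dt * F_cheap(x)
        for _ in range(n_leap - 1):
            x += dt * p
            p += dt * F_cheap(x)
        x += dt * p
        p += 0.5 * dt * F_cheap(x)
        # Metropolis test on the EXACT Hamiltonian restores the Boltzmann law,
        # treating the cheap-force discretization error as external work.
        dH = (V_exact(x) + 0.5 * p * p) - (V_exact(x0) + 0.5 * p0 * p0)
        if rng.random() < np.exp(-beta * dH):
            accepted += 1
        else:
            x = x0                           # reject: keep the old position
        samples.append(x)

    var = np.var(samples[1000:])
    print(accepted / n_trials, var)          # expect <x^2> near 1/beta = 1
    ```

    Because the cheap-force Verlet map is reversible and volume-preserving, it is a valid MC proposal; the acceptance test then suppresses the bias that pure cheap-Hamiltonian dynamics would introduce, which is the consistency mechanism the abstract describes.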

  19. Integrating viscoelastic mass spring dampers into position-based dynamics to simulate soft tissue deformation in real time

    PubMed Central

    Lu, Yuhua; Liu, Qian

    2018-01-01

    We propose a novel method to simulate soft tissue deformation for virtual surgery applications. The method considers the mechanical properties of soft tissue, such as its viscoelasticity, nonlinearity and incompressibility; its speed, stability and accuracy also meet the requirements for a surgery simulator. Modifying the traditional equation for mass spring dampers (MSD) introduces nonlinearity and viscoelasticity into the calculation of elastic force. Then, the elastic force is used in the constraint projection step for naturally reducing constraint potential. The node position is enforced by the combined spring force and constraint conservative force through Newton's second law. We conduct a comparison study of conventional MSD and position-based dynamics for our new integrating method. Our approach enables stable, fast and large step simulation by freely controlling visual effects based on nonlinearity, viscoelasticity and incompressibility. We implement a laparoscopic cholecystectomy simulator to demonstrate the practicality of our method, in which liver and gallbladder deformation can be simulated in real time. Our method is an appropriate choice for the development of real-time virtual surgery applications. PMID:29515870
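    The combination of a damped, nonlinear spring force step with a position-based constraint projection can be sketched on a two-node toy system. The code below is a hypothetical minimal illustration, not the authors' implementation; the constants and the equal splitting of the constraint correction are assumptions:

    ```python
    import numpy as np

    # Two nodes joined by a damped spring, with a PBD-style distance-constraint
    # projection applied after the force step (all constants illustrative).
    dt, mass, k, c, rest = 0.01, 1.0, 50.0, 2.0, 1.0
    x = np.array([0.0, 1.8])     # start stretched well past the rest length
    v = np.array([0.0, 0.0])

    for _ in range(500):
        d = x[1] - x[0]
        f = -k * (abs(d) - rest) * np.sign(d) - c * (v[1] - v[0])  # spring + damper
        v += dt * np.array([-f, f]) / mass       # equal and opposite forces
        xp = x + dt * v                          # predicted positions
        # Constraint projection: enforce |x1 - x0| = rest, split equally.
        d = xp[1] - xp[0]
        corr = 0.5 * (abs(d) - rest) * np.sign(d)
        xp[0] += corr
        xp[1] -= corr
        v = (xp - x) / dt                        # PBD velocity update
        x = xp

    print(abs(x[1] - x[0]))
    ```

    The projection makes the constraint exactly satisfied every step regardless of step size, which is what buys the "stable, fast and large step" behavior, while the modified spring force supplies the nonlinear viscoelastic response.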

  20. Integrating viscoelastic mass spring dampers into position-based dynamics to simulate soft tissue deformation in real time.

    PubMed

    Xu, Lang; Lu, Yuhua; Liu, Qian

    2018-02-01

    We propose a novel method to simulate soft tissue deformation for virtual surgery applications. The method considers the mechanical properties of soft tissue, such as its viscoelasticity, nonlinearity and incompressibility; its speed, stability and accuracy also meet the requirements for a surgery simulator. Modifying the traditional equation for mass spring dampers (MSD) introduces nonlinearity and viscoelasticity into the calculation of elastic force. Then, the elastic force is used in the constraint projection step for naturally reducing constraint potential. The node position is enforced by the combined spring force and constraint conservative force through Newton's second law. We conduct a comparison study of conventional MSD and position-based dynamics for our new integrating method. Our approach enables stable, fast and large step simulation by freely controlling visual effects based on nonlinearity, viscoelasticity and incompressibility. We implement a laparoscopic cholecystectomy simulator to demonstrate the practicality of our method, in which liver and gallbladder deformation can be simulated in real time. Our method is an appropriate choice for the development of real-time virtual surgery applications.

  1. Case study: Lockheed-Georgia Company integrated design process

    NASA Technical Reports Server (NTRS)

    Waldrop, C. T.

    1980-01-01

    A case study of the development of an Integrated Design Process is presented. The approach taken in preparing for the development of an integrated design process includes some of the IPAD approaches such as developing a Design Process Model, cataloging Technical Program Elements (TPE's), and examining data characteristics and interfaces between contiguous TPE's. The implementation plan is based on an incremental development of capabilities over a period of time with each step directed toward, and consistent with, the final architecture of a total integrated system. Because of time schedules and different computer hardware, this system will not be the same as the final IPAD release; however, many IPAD concepts will no doubt prove applicable as the best approach. Full advantage will be taken of the IPAD development experience. A scenario that could be typical for many companies, even outside the aerospace industry, in developing an integrated design process for an IPAD-type environment is represented.

  2. Two-Dimensional Modelling of the Hall Thruster Discharge: Final Report

    DTIC Science & Technology

    2007-09-10

    performing a number Nprob,jk of probability tests to determine the real number of macroions to be created, Njk, in a particular cell and time step. ... A temperature-dependent yield expression is proposed, which avoids integral expressions while at the same time approximately recovering the reduction of ...

  3. Modeling of the motion of automobile elastic wheel in real-time for creation of wheeled vehicles motion control electronic systems

    NASA Astrophysics Data System (ADS)

    Balakina, E. V.; Zotov, N. M.; Fedin, A. P.

    2018-02-01

    Modeling of the motion of the elastic wheel of the vehicle in real time is used in the construction of different models for wheeled-vehicle motion control electronic systems, in the creation of automobile stand-simulators, etc. The accuracy and reliability of simulating the parameters of wheel motion in real time when rolling with slip under given road conditions are determined not only by the choice of the model, but also by the inaccuracy and instability of the numerical calculation. It is established that this inaccuracy and instability depend on the size of the integration step and the numerical method being used. An analysis of this inaccuracy and instability during wheel rolling with slip was made, and recommendations for reducing them were developed. It is established that the total allowable range of integration steps is 0.001-0.005 s; the strongest instability is manifested in the calculation of the angular and linear accelerations of the wheel; the weakest instability is manifested in the calculation of the translational velocity of the wheel and the motion of the wheel center; and the instability is less at large values of slip angle and on more slippery surfaces. A new method of average acceleration is suggested, which significantly reduces (up to 100%) the manifestation of instability in the calculation of all parameters of motion of the elastic wheel for different braking conditions and for the entire range of integration steps. The results of this research can be applied to the selection of control algorithms in vehicle motion control electronic systems and in testing stand-simulators.
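    The step-size instability discussed above, and the stabilizing effect of averaging the derivative over the step, can be illustrated on a stiff first-order relaxation model of wheel slip. This is a generic toy model: the relaxation time and reference speed are invented, and the trapezoidal rule merely stands in for the authors' average-acceleration method:

    ```python
    tau, w_ref = 0.001, 10.0       # fast slip relaxation time and target speed
    dt = 0.005                     # inside the 0.001-0.005 s range, but dt > 2*tau
    w_euler = w_trap = 0.0
    for _ in range(200):
        # Explicit Euler: amplification factor |1 - dt/tau| = 4, diverges.
        w_euler += dt * (w_ref - w_euler) / tau
        # Averaging the derivative over the step (trapezoidal rule) is stable:
        # w_new = w + (dt/2) * (f(w) + f(w_new)), solved in closed form here.
        a = dt / (2 * tau)
        w_trap = ((1 - a) * w_trap + 2 * a * w_ref) / (1 + a)
    print(abs(w_euler), w_trap)
    ```

    At the same step size the explicit update explodes while the averaged update converges to the correct steady state, which mirrors the paper's finding that instability concentrates in the acceleration calculations and can be suppressed by an averaging scheme across the whole step range.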

  4. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    DTIC Science & Technology

    2010-03-01

    technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process. ... Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D

  5. Optical integrator for optical dark-soliton detection and pulse shaping.

    PubMed

    Ngo, Nam Quoc

    2006-09-10

    The design and analysis of an Nth-order optical integrator using the digital filter technique is presented. The optical integrator is synthesized using planar-waveguide technology. It is shown that a first-order optical integrator can be used as an optical dark-soliton detector by converting an optical dark-soliton pulse into an optical bell-shaped pulse for ease of detection. The optical integrators can generate an optical step function, staircase function, and parabolic-like functions from input optical Gaussian pulses. The optical integrators may potentially be used as basic building blocks of all-optical signal processing systems because the time integrals of signals may sometimes be required for further use or analysis. Furthermore, an optical integrator may be used for the shaping of optical pulses or in an optical feedback control system.
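    A first-order integrator of the kind described, an accumulator y[n] = y[n-1] + T·x[n] in digital-filter terms, does turn a Gaussian input pulse into a step-like output. A minimal numerical sketch (illustrative only, not the planar-waveguide design):

    ```python
    import numpy as np

    def integrate(x, dt):
        """First-order discrete-time integrator (accumulator): y[n] = y[n-1] + dt*x[n]."""
        return np.cumsum(x) * dt

    dt = 0.01
    t = np.arange(-5, 5, dt)
    gauss = np.exp(-t**2)          # input: Gaussian pulse
    step = integrate(gauss, dt)    # output rises to a step of height sqrt(pi)
    print(step[-1])
    ```

    The output is monotone and settles at the pulse's total area (sqrt(π) for exp(-t²)), i.e. the optical step function the abstract mentions; cascading N such stages would likewise yield the staircase and parabolic-like responses.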

  6. Sensitivity of indentation testing to step-off edges and interface integrity in cartilage repair.

    PubMed

    Bae, Won C; Law, Amanda W; Amiel, David; Sah, Robert L

    2004-03-01

    Step-off edges and tissue interfaces are prevalent in cartilage injury such as after intra-articular fracture and reduction, and in focal defects and surgical repair procedures such as osteochondral graft implantation. It would be useful to assess the function of injured or donor tissues near such step-off edges and the extent of integration at material interfaces. The objective of this study was to determine if indentation testing is sensitive to the presence of step-off edges and the integrity of material interfaces, in both in vitro simulated repair samples of bovine cartilage defect filled with fibrin matrix, and in vivo biological repair samples from a goat animal model. Indentation stiffness decreased at locations approaching a step-off edge, a lacerated interface, or an integrated interface in which the distal tissue was relatively soft. The indentation stiffness increased or remained constant when the site of indentation approached an integrated interface in which the distal tissue was relatively stiff or similar in stiffness to the tissue being tested. These results indicate that indentation testing is sensitive to step-off edges and interface integrity, and may be useful for assessing cartilage injury and for following the progression of tissue integration after surgical treatments.

  7. Implementing successful strategic plans: a simple formula.

    PubMed

    Blondeau, Whitney; Blondeau, Benoit

    2015-01-01

    Strategic planning is a process. One way to think of strategic planning is to envision its development and design as a framework that will help your hospital navigate through internal and external changing environments over time. Although the process of strategic planning can feel daunting, following a simple formula involving five steps using the mnemonic B.E.G.I.N. (Begin, Evaluate, Goals & Objectives, Integration, and Next steps) will help the planning process feel more manageable, and lead you to greater success.

  8. Seakeeping with the semi-Lagrangian particle finite element method

    NASA Astrophysics Data System (ADS)

    Nadukandi, Prashanth; Servan-Camas, Borja; Becker, Pablo Agustín; Garcia-Espinosa, Julio

    2017-07-01

    The application of the semi-Lagrangian particle finite element method (SL-PFEM) for the seakeeping simulation of the wave adaptive modular vehicle under spray generating conditions is presented. The time integration of the Lagrangian advection is done using the explicit integration of the velocity and acceleration along the streamlines (X-IVAS). Despite the suitability of the SL-PFEM for the considered seakeeping application, small time steps were needed in the X-IVAS scheme to control the solution accuracy. A preliminary proposal to overcome this limitation of the X-IVAS scheme for seakeeping simulations is presented.
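    As a caricature of the X-IVAS idea, explicitly integrating the velocity and acceleration along a particle trajectory, consider a second-order Taylor update per step. The constant-gravity particle below is an assumption for illustration, not the SL-PFEM seakeeping solver, where velocity and acceleration fields are interpolated along streamlines:

```python
# Sketch: explicit update of a Lagrangian particle using its velocity and
# a frozen acceleration over the step (x += v*dt + 0.5*a*dt^2).
# With constant gravity the update is exact, so the result matches kinematics.
g = -9.81

def taylor_step(x, v, a, dt):
    x_new = x + v * dt + 0.5 * a * dt ** 2
    v_new = v + a * dt
    return x_new, v_new

x, v, dt = 0.0, 10.0, 0.01
for _ in range(100):            # integrate to t = 1 s
    x, v = taylor_step(x, v, g, dt)
# expected: x = 10*1 - 0.5*9.81 = 5.095, v = 10 - 9.81 = 0.19
```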

  9. Integrated calibration sphere and calibration step fixture for improved coordinate measurement machine calibration

    DOEpatents

    Clifford, Harry J [Los Alamos, NM

    2011-03-22

    A method and apparatus for mounting a calibration sphere to a calibration fixture for Coordinate Measurement Machine (CMM) calibration and qualification is described, decreasing the time required for such qualification, thus allowing the CMM to be used more productively. A number of embodiments are disclosed that allow for new and retrofit manufacture to perform as integrated calibration sphere and calibration fixture devices. This invention renders unnecessary the removal of a calibration sphere prior to CMM measurement of calibration features on calibration fixtures, thereby greatly reducing the time spent qualifying a CMM.

  10. A highly manufacturable 0.2 {mu}m AlGaAs/InGaAs PHEMT fabricated using the single-layer integrated-metal FET (SLIMFET) process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Havasy, C.K.; Quach, T.K.; Bozada, C.A.

    1995-12-31

    This work is the development of a single-layer integrated-metal field effect transistor (SLIMFET) process for a high performance 0.2 {mu}m AlGaAs/InGaAs pseudomorphic high electron mobility transistor (PHEMT). This process is compatible with MMIC fabrication and minimizes process variations, cycle time, and cost. This process uses non-alloyed ohmic contacts, a selective gate-recess etching process, and a single gate/source/drain metal deposition step to form both Schottky and ohmic contacts at the same time.

  11. Continuous track paths reveal additive evidence integration in multistep decision making.

    PubMed

    Buc Calderon, Cristian; Dewulf, Myrtille; Gevers, Wim; Verguts, Tom

    2017-10-03

    Multistep decision making pervades daily life, but its underlying mechanisms remain obscure. We distinguish four prominent models of multistep decision making, namely serial stage, hierarchical evidence integration, hierarchical leaky competing accumulation (HLCA), and probabilistic evidence integration (PEI). To empirically disentangle these models, we design a two-step reward-based decision paradigm and implement it in a reaching task experiment. In a first step, participants choose between two potential upcoming choices, each associated with two rewards. In a second step, participants choose between the two rewards selected in the first step. Strikingly, as predicted by the HLCA and PEI models, the first-step decision dynamics were initially biased toward the choice representing the highest sum/mean before being redirected toward the choice representing the maximal reward (i.e., initial dip). Only HLCA and PEI predicted this initial dip, suggesting that first-step decision dynamics depend on additive integration of competing second-step choices. Our data suggest that potential future outcomes are progressively unraveled during multistep decision making.

  12. Accuracy of an unstructured-grid upwind-Euler algorithm for the ONERA M6 wing

    NASA Technical Reports Server (NTRS)

    Batina, John T.

    1991-01-01

    Improved algorithms for the solution of the three-dimensional, time-dependent Euler equations are presented for aerodynamic analysis involving unstructured dynamic meshes. The improvements have been developed recently to the spatial and temporal discretizations used by unstructured-grid flow solvers. The spatial discretization involves a flux-split approach that is naturally dissipative and captures shock waves sharply with at most one grid point within the shock structure. The temporal discretization involves either an explicit time-integration scheme using a multistage Runge-Kutta procedure or an implicit time-integration scheme using a Gauss-Seidel relaxation procedure, which is computationally efficient for either steady or unsteady flow problems. With the implicit Gauss-Seidel procedure, very large time steps may be used for rapid convergence to steady state, and the step size for unsteady cases may be selected for temporal accuracy rather than for numerical stability. Steady flow results are presented for both the NACA 0012 airfoil and the Office National d'Etudes et de Recherches Aerospatiales M6 wing to demonstrate applications of the new Euler solvers. The paper presents a description of the Euler solvers along with results and comparisons that assess the capability.
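    The explicit multistage Runge-Kutta procedure used by such flow solvers can be sketched on a scalar model equation. The stage coefficients below are a common four-stage choice, assumed here for illustration rather than taken from the paper:

```python
import math

# "Low-storage" m-stage Runge-Kutta of the form used in many explicit
# flow solvers: u <- u0 + alpha_k * dt * R(u), for k = 1..m.
def multistage_rk(u0, residual, dt, alphas=(0.25, 1.0 / 3.0, 0.5, 1.0)):
    u = u0
    for a in alphas:
        u = u0 + a * dt * residual(u)
    return u

# Model problem du/dt = -u (residual R(u) = -u), exact solution exp(-t).
u, dt = 1.0, 0.1
for _ in range(10):
    u = multistage_rk(u, lambda x: -x, dt)
err = abs(u - math.exp(-1.0))   # for linear R this scheme matches RK4
```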

  13. Rain volume estimation over areas using satellite and radar data

    NASA Technical Reports Server (NTRS)

    Doneaud, A. A.; Vonderhaar, T. H.

    1985-01-01

    The feasibility of rain volume estimation over fixed and floating areas was investigated using rapid scan satellite data, following a technique recently developed with radar data called the Area Time Integral (ATI) technique. The radar and rapid scan GOES satellite data were collected during the Cooperative Convective Precipitation Experiment (CCOPE) and the North Dakota Cloud Modification Project (NDCMP). Six multicell clusters and cells have been analyzed to date. A two-cycle oscillation emphasizing the multicell character of the clusters is demonstrated. Three clusters were selected on each day, 12 June and 2 July. The 12 June clusters occurred during the daytime, while the 2 July clusters occurred during the nighttime. A total of 86 time steps of radar and 79 time steps of satellite images were analyzed. Radar scans were separated by approximately 12-min intervals on average.
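    The ATI technique itself reduces to a short calculation: rain volume is estimated as a calibration factor (an effective average rain rate) times the sum of echo area over the scan intervals. All numbers below are hypothetical, for illustration only:

```python
# Area Time Integral (ATI) sketch: rain volume ~ K * sum(A_i * dt_i),
# where A_i is the rain-echo area at scan i and dt_i the scan interval.
# K and the areas are hypothetical, for illustration only.
scan_areas_km2 = [120.0, 340.0, 510.0, 420.0, 180.0]   # echo area per scan
dt_h = 12.0 / 60.0           # ~12-min scan interval, in hours
K_mm_per_h = 3.0             # effective average rain rate (hypothetical)

ati_km2_h = sum(a * dt_h for a in scan_areas_km2)
# mm/h * km^2*h = mm*km^2 = 1e-3 m * 1e6 m^2 = 1e3 m^3
volume_m3 = K_mm_per_h * ati_km2_h * 1e3
```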

  14. Three-dimensional unstructured grid Euler computations using a fully-implicit, upwind method

    NASA Technical Reports Server (NTRS)

    Whitaker, David L.

    1993-01-01

    A method has been developed to solve the Euler equations on a three-dimensional unstructured grid composed of tetrahedra. The method uses an upwind flow solver with a linearized, backward-Euler time integration scheme. Each time step results in a sparse linear system of equations which is solved by an iterative, sparse matrix solver. Local-time stepping, switched evolution relaxation (SER), preconditioning and reuse of the Jacobian are employed to accelerate the convergence rate. Implicit boundary conditions were found to be extremely important for fast convergence. Numerical experiments have shown that convergence rates comparable to that of a multigrid, central-difference scheme are achievable on the same mesh. Results are presented for several grids about an ONERA M6 wing.
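    Switched evolution relaxation (SER) can be sketched on a scalar steady-state problem: the pseudo-time step is grown in inverse proportion to the residual norm, so a linearized backward-Euler iteration gradually turns into a Newton-like method as the residual falls. The model equation, initial step, and step cap below are assumptions for illustration:

```python
# Sketch of switched evolution relaxation (SER): the pseudo-time step grows
# as the residual falls, dt_{n+1} = dt_n * |R_{n-1}| / |R_n|, driving a
# linearized backward-Euler iteration toward steady state.
# Model: find the steady state of du/dt = R(u) = 1 - u^2 (u* = 1).
def R(u):  return 1.0 - u * u
def dR(u): return -2.0 * u

u, dt = 0.1, 0.1
r_prev = abs(R(u))
for _ in range(200):
    # linearized backward Euler: (1/dt - R'(u)) * du = R(u)
    du = R(u) / (1.0 / dt - dR(u))
    u += du
    r = abs(R(u))
    if r == 0.0:
        break
    dt = min(dt * r_prev / r, 1e6)   # SER update, with a cap
    r_prev = r
```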

  15. Evidence-based practice, step by step: critical appraisal of the evidence: part II: digging deeper--examining the "keeper" studies.

    PubMed

    Fineout-Overholt, Ellen; Melnyk, Bernadette Mazurek; Stillwell, Susan B; Williamson, Kathleen M

    2010-09-01

    This is the sixth article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward implementing EBP at your institution. Also, we've scheduled "Chat with the Authors" calls every few months to provide a direct line to the experts to help you resolve questions. Details about how to participate in the next call will be published with November's Evidence-Based Practice, Step by Step.

  16. Space-time domain solutions of the wave equation by a non-singular boundary integral method and Fourier transform.

    PubMed

    Klaseboer, Evert; Sepehrirahnama, Shahrokh; Chan, Derek Y C

    2017-08-01

    The general space-time evolution of the scattering of an incident acoustic plane wave pulse by an arbitrary configuration of targets is treated by employing a recently developed non-singular boundary integral method to solve the Helmholtz equation in the frequency domain from which the space-time solution of the wave equation is obtained using the fast Fourier transform. The non-singular boundary integral solution can enforce the radiation boundary condition at infinity exactly and can account for multiple scattering effects at all spacings between scatterers without adverse effects on the numerical precision. More generally, the absence of singular kernels in the non-singular integral equation confers high numerical stability and precision for smaller numbers of degrees of freedom. The use of fast Fourier transform to obtain the time dependence is not constrained to discrete time steps and is particularly efficient for studying the response to different incident pulses by the same configuration of scatterers. The precision that can be attained using a smaller number of Fourier components is also quantified.
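    The frequency-to-time pipeline can be caricatured in a few lines: transform the incident pulse, multiply by a per-frequency response, and inverse-transform. Here a pure propagation delay stands in for the Helmholtz boundary-integral solution (an assumption; the actual method solves the full scattering problem at each frequency):

```python
import numpy as np

# Sketch of the frequency-domain -> time-domain pipeline: transform the
# incident pulse, apply a per-frequency response H(w) (here a pure
# propagation delay exp(-i*w*tau) standing in for the Helmholtz solve),
# then inverse-transform to recover the time-domain signal.
n, dt, tau = 1024, 1e-3, 0.1                     # samples, time step, delay (s)
t = np.arange(n) * dt
pulse = np.exp(-0.5 * ((t - 0.2) / 0.02) ** 2)   # incident Gaussian pulse

w = 2 * np.pi * np.fft.rfftfreq(n, d=dt)
H = np.exp(-1j * w * tau)                        # frequency-domain response
out = np.fft.irfft(np.fft.rfft(pulse) * H, n)

peak_time = t[np.argmax(out)]                    # expected near 0.2 + tau
```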

  17. Motofit - integrating neutron reflectometry acquisition, reduction and analysis into one, easy to use, package

    NASA Astrophysics Data System (ADS)

    Nelson, Andrew

    2010-11-01

    The efficient use of complex neutron scattering instruments is often hindered by the complex nature of their operating software. This complexity exists at each experimental step: data acquisition, reduction and analysis, with each step being as important as the previous. For example, whilst command line interfaces are powerful at automated acquisition they often reduce accessibility by novice users and sometimes reduce the efficiency for advanced users. One solution to this is the development of a graphical user interface which allows the user to operate the instrument by a simple and intuitive "push button" approach. This approach was taken by the Motofit software package for analysis of multiple contrast reflectometry data. Here we describe the extension of this package to cover the data acquisition and reduction steps for the Platypus time-of-flight neutron reflectometer. Consequently, the complete operation of an instrument is integrated into a single, easy to use, program, leading to efficient instrument usage.

  18. Stability and delay sensitivity of neutral fractional-delay systems.

    PubMed

    Xu, Qi; Shi, Min; Wang, Zaihua

    2016-08-01

    This paper generalizes the stability test method via integral estimation for integer-order neutral time-delay systems to neutral fractional-delay systems. The key step in the stability test is the calculation of the number of unstable characteristic roots, which is described by a definite integral over an interval from zero to a sufficiently large upper limit. Algorithms for correctly estimating the upper limit of the integral are given in two concise ways, parameter dependent or parameter independent. A special feature of the proposed method is that it judges the stability of fractional-delay systems simply by using a rough integral estimation. Meanwhile, the paper shows that for some neutral fractional-delay systems, the stability is extremely sensitive to changes in the time delays. Examples are given demonstrating the proposed method as well as the delay sensitivity.

  19. ICASE Semiannual Report. April 1, 1993 through September 30, 1993

    DTIC Science & Technology

    1993-12-01

    scientists from universities and industry who have resident appointments for limited periods of time as well as by visiting and resident consultants... time integration. One of these is the time advancement of systems of hyperbolic partial differential equations via high order Runge-Kutta algorithms...Typically, if the R-K method is of, say, fourth-order accuracy then there will be four intermediate steps between time level t = nδ and t + δ = (n + 1)δ
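    The fourth-order Runge-Kutta method mentioned in the snippet, with its four intermediate stage evaluations per step, has the standard textbook form:

```python
import math

# Classic fourth-order Runge-Kutta: four stage evaluations per step
# advance u from time level t to t + h.
def rk4_step(f, t, u, h):
    k1 = f(t, u)
    k2 = f(t + h / 2, u + h / 2 * k1)
    k3 = f(t + h / 2, u + h / 2 * k2)
    k4 = f(t + h, u + h * k3)
    return u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Model problem du/dt = -u, exact solution exp(-t).
u, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    u = rk4_step(lambda t, x: -x, t, u, h)
    t += h
err = abs(u - math.exp(-1.0))   # fourth-order accurate
```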

  20. Introductory guide to integrated ecological framework.

    DOT National Transportation Integrated Search

    2014-10-01

    This guide introduces the Integrated Ecological Framework (IEF) to Texas Department of Transportation (TxDOT) engineers and planners. The IEF is a step-by-step approach to integrating ecological and transportation planning with the goal of avoiding imp...

  1. Multiple time step integrators in ab initio molecular dynamics.

    PubMed

    Luehr, Nathan; Markland, Thomas E; Martínez, Todd J

    2014-02-28

    Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
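    The multiple-time-step idea, evaluating the slow (expensive) force once per outer step while the fast force is integrated with several inner substeps, can be sketched in r-RESPA form. The split harmonic forces below are a toy stand-in for the paper's fragment- or range-based ab initio decomposition:

```python
# r-RESPA-style multiple time stepping: the "slow" force is applied as
# half-kicks around an inner loop that integrates the "fast" force with
# n_inner velocity-Verlet substeps. Toy split: F = -k_fast*x - k_slow*x.
k_fast, k_slow, m = 100.0, 1.0, 1.0

def respa_step(x, v, dt_outer, n_inner):
    v += 0.5 * dt_outer * (-k_slow * x) / m      # half kick, slow force
    h = dt_outer / n_inner
    for _ in range(n_inner):                     # fast velocity Verlet
        v += 0.5 * h * (-k_fast * x) / m
        x += h * v
        v += 0.5 * h * (-k_fast * x) / m
    v += 0.5 * dt_outer * (-k_slow * x) / m      # half kick, slow force
    return x, v

x, v = 1.0, 0.0
e0 = 0.5 * m * v ** 2 + 0.5 * (k_fast + k_slow) * x ** 2
for _ in range(1000):
    x, v = respa_step(x, v, 0.05, 10)   # outer step 10x the inner step
e1 = 0.5 * m * v ** 2 + 0.5 * (k_fast + k_slow) * x ** 2
drift = abs(e1 - e0) / e0               # symplectic: energy stays bounded
```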

  2. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    NASA Astrophysics Data System (ADS)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark method with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.

  3. Why Not Wait? Eight Institutions Share Their Experiences Moving United States Medical Licensing Examination Step 1 After Core Clinical Clerkships.

    PubMed

    Daniel, Michelle; Fleming, Amy; Grochowski, Colleen O'Conner; Harnik, Vicky; Klimstra, Sibel; Morrison, Gail; Pock, Arnyce; Schwartz, Michael L; Santen, Sally

    2017-11-01

    The majority of medical students complete the United States Medical Licensing Examination Step 1 after their foundational sciences; however, there are compelling reasons to examine this practice. This article provides the perspectives of eight MD-granting medical schools that have moved Step 1 after the core clerkships, describing their rationale, logistics of the change, outcomes, and lessons learned. The primary reasons these institutions cite for moving Step 1 after clerkships are to foster more enduring and integrated basic science learning connected to clinical care and to better prepare students for the increasingly clinical focus of Step 1. Each school provides key features of the preclerkship and clinical curricula and details concerning taking Steps 1 and 2, to allow other schools contemplating change to understand the landscape. Most schools report an increase in aggregate Step 1 scores after the change. Despite early positive outcomes, there may be unintended consequences to later scheduling of Step 1, including relatively late student reevaluations of their career choice if Step 1 scores are not competitive in the specialty area of their choice. The score increases should be interpreted with caution: These schools may not be representative with regard to mean Step 1 scores and failure rates. Other aspects of curricular transformation and rising national Step 1 scores confound the data. Although the optimal timing of Step 1 has yet to be determined, this article summarizes the perspectives of eight schools that changed Step 1 timing, filling a gap in the literature on this important topic.

  4. Defense Intelligence: Additional Steps Could Better Integrate Intelligence Input into DODs Acquisition of Major Weapon Systems

    DTIC Science & Technology

    2016-11-01

    DEFENSE INTELLIGENCE Additional Steps Could Better Integrate Intelligence Input into DOD’s Acquisition of Major Weapon...States Government Accountability Office Highlights of GAO-17-10, a report to congressional committees November 2016 DEFENSE INTELLIGENCE ...Additional Steps Could Better Integrate Intelligence Input into DOD’s Acquisition of Major Weapon Systems What GAO Found The Department of Defense (DOD

  5. N-body simulations of star clusters

    NASA Astrophysics Data System (ADS)

    Engle, Kimberly Anne

    1999-10-01

    We investigate the structure and evolution of underfilling (i.e. non-Roche-lobe-filling) King model globular star clusters using N-body simulations. We model clusters with various underfilling factors and mass distributions to determine their evolutionary tracks and lifetimes. These models include a self-consistent galactic tidal field, mass loss due to stellar evolution, ejection, and evaporation, and binary evolution. We find that a star cluster that initially does not fill its Roche lobe can live many times longer than one that does initially fill its Roche lobe. After a few relaxation times, the cluster expands to fill its Roche lobe. We also find that the choice of initial mass function significantly affects the lifetime of the cluster. These simulations were performed on the GRAPE-4 (GRAvity PipE) special-purpose hardware with the stellar dynamics package ``Starlab.'' The GRAPE-4 system is a massively-parallel computer designed to calculate the force (and its first time derivative) due to N particles. Starlab's integrator ``kira'' employs a 4th-order Hermite scheme with hierarchical (block) time steps to evolve the stellar system. We discuss, in some detail, the design of the GRAPE-4 system and the manner in which the Hermite integration scheme with block time steps is implemented in the hardware.
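    The 4th-order Hermite predictor-corrector used by integrators like kira can be sketched on a single particle in a harmonic potential, where the acceleration and its time derivative (jerk) are analytic. In the real N-body code both quantities come from particle sums and steps are assigned hierarchically per particle, neither of which is modeled here:

```python
import math

# 4th-order Hermite predictor-corrector, sketched on one particle in a
# harmonic potential: a = -w2*x, jerk = -w2*v (analytic stand-ins for the
# N-body force and its first time derivative).
w2 = 1.0

def hermite_step(x, v, dt):
    a0, j0 = -w2 * x, -w2 * v
    # predictor: Taylor expansion through the jerk term
    xp = x + v * dt + a0 * dt ** 2 / 2 + j0 * dt ** 3 / 6
    vp = v + a0 * dt + j0 * dt ** 2 / 2
    # evaluate the force at the predicted state, then correct
    a1, j1 = -w2 * xp, -w2 * vp
    vc = v + dt / 2 * (a0 + a1) + dt ** 2 / 12 * (j0 - j1)
    xc = x + dt / 2 * (v + vc) + dt ** 2 / 12 * (a0 - a1)
    return xc, vc

x, v, dt = 1.0, 0.0, 0.01
for _ in range(100):              # integrate to t = 1
    x, v = hermite_step(x, v, dt)
err = abs(x - math.cos(1.0))      # exact solution is cos(t)
```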

  6. An integrated gait rehabilitation training based on Functional Electrical Stimulation cycling and overground robotic exoskeleton in complete spinal cord injury patients: Preliminary results.

    PubMed

    Mazzoleni, S; Battini, E; Rustici, A; Stampacchia, G

    2017-07-01

    The aim of this study is to investigate the effects of an integrated gait rehabilitation training based on Functional Electrical Stimulation (FES)-cycling and overground robotic exoskeleton in a group of seven complete spinal cord injury patients on spasticity and patient-robot interaction. They underwent a robot-assisted rehabilitation training based on two phases: n=20 sessions of FES-cycling followed by n= 20 sessions of robot-assisted gait training based on an overground robotic exoskeleton. The following clinical outcome measures were used: Modified Ashworth Scale (MAS), Numerical Rating Scale (NRS) on spasticity, Penn Spasm Frequency Scale (PSFS), Spinal Cord Independence Measure Scale (SCIM), NRS on pain and International Spinal Cord Injury Pain Data Set (ISCI). Clinical outcome measures were assessed before (T0) after (T1) the FES-cycling training and after (T2) the powered overground gait training. The ability to walk when using exoskeleton was assessed by means of 10 Meter Walk Test (10MWT), 6 Minute Walk Test (6MWT), Timed Up and Go test (TUG), standing time, walking time and number of steps. Statistically significant changes were found on the MAS score, NRS-spasticity, 6MWT, TUG, standing time and number of steps. The preliminary results of this study show that an integrated gait rehabilitation training based on FES-cycling and overground robotic exoskeleton in complete SCI patients can provide a significant reduction of spasticity and improvements in terms of patient-robot interaction.

  7. Four-dimensional Microscope-Integrated Optical Coherence Tomography to Visualize Suture Depth in Strabismus Surgery.

    PubMed

    Pasricha, Neel D; Bhullar, Paramjit K; Shieh, Christine; Carrasco-Zevallos, Oscar M; Keller, Brenton; Izatt, Joseph A; Toth, Cynthia A; Freedman, Sharon F; Kuo, Anthony N

    2017-02-14

    The authors report the use of swept-source microscope-integrated optical coherence tomography (SS-MIOCT), capable of live four-dimensional (three-dimensional across time) intraoperative imaging, to directly visualize suture depth during lateral rectus resection. Key surgical steps visualized in this report included needle depth during partial and full-thickness muscle passes along with scleral passes. [J Pediatr Ophthalmol Strabismus. 2017;54:e1-e5.]. Copyright 2017, SLACK Incorporated.

  8. From mess to mass: a methodology for calculating storm event pollutant loads with their uncertainties, from continuous raw data time series.

    PubMed

    Métadier, M; Bertrand-Krajewski, J-L

    2011-01-01

    With the increasing implementation of continuous monitoring of both discharge and water quality in sewer systems, large data bases are now available. In order to manage large amounts of data and calculate various variables and indicators of interest it is necessary to apply automated methods for data processing. This paper deals with the processing of short time step turbidity time series to estimate TSS (Total Suspended Solids) and COD (Chemical Oxygen Demand) event loads in sewer systems during storm events and their associated uncertainties. The following steps are described: (i) sensor calibration, (ii) estimation of data uncertainties, (iii) correction of raw data, (iv) data pre-validation tests, (v) final validation, and (vi) calculation of TSS and COD event loads and estimation of their uncertainties. These steps have been implemented in an integrated software tool. Examples of results are given for a set of 33 storm events monitored in a stormwater separate sewer system.
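    The final step, the event load calculation, is conceptually a sum of concentration times discharge over the event, with turbidity converted to TSS via a sensor calibration. The linear calibration and all numbers below are hypothetical, and the uncertainty propagation of the paper is omitted:

```python
# Sketch of step (vi): TSS event load from a short-time-step turbidity
# series. Turbidity (NTU) is converted to TSS concentration (mg/L) via a
# linear calibration (slope/intercept hypothetical), then the load is the
# sum of concentration * discharge over the event.
a_cal, b_cal = 1.8, 5.0          # TSS [mg/L] = a*NTU + b (hypothetical)
dt_s = 120.0                     # 2-minute time step
turbidity_ntu = [40.0, 95.0, 150.0, 110.0, 60.0]
discharge_m3s = [0.10, 0.45, 0.80, 0.50, 0.20]

# mg/L = g/m^3, so g/m^3 * m^3/s * s = g; the 1e-3 converts to kg.
load_kg = sum((a_cal * ntu + b_cal) * q * dt_s
              for ntu, q in zip(turbidity_ntu, discharge_m3s)) * 1e-3
```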

  9. Efficiency analysis of numerical integrations for finite element substructure in real-time hybrid simulation

    NASA Astrophysics Data System (ADS)

    Wang, Jinting; Lu, Liqiao; Zhu, Fei

    2018-01-01

    Finite element (FE) is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure. In this way, the task execution time (TET) decreases, so that the scale of the numerical substructure model can be increased. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influences of time delay on the displacement response become more pronounced as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even with a large time step and a large time delay.
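    The central difference method compared above can be sketched for a single-degree-of-freedom system; the mass, damping, and stiffness values are hypothetical, and the start-up value u_{-1} comes from a Taylor expansion of the initial conditions:

```python
import math

# Central difference method (CDM) for an SDOF system
# m*u'' + c*u' + k*u = 0; explicit, stable for dt < T_n/pi.
m, c, k = 1.0, 0.0, 100.0        # w_n = 10 rad/s (hypothetical values)
dt = 0.01

# start-up: u_{-1} from the initial conditions via a Taylor expansion
u0, v0 = 1.0, 0.0
a0 = (-c * v0 - k * u0) / m
u_prev = u0 - dt * v0 + dt ** 2 / 2 * a0
u = u0

A = m / dt ** 2 + c / (2 * dt)
for _ in range(100):             # march to t = 1 s
    u_next = ((2 * m / dt ** 2 - k) * u
              - (m / dt ** 2 - c / (2 * dt)) * u_prev) / A
    u_prev, u = u, u_next

err = abs(u - math.cos(10.0))    # undamped exact solution is cos(w_n * t)
```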

  10. Integrating Thermodynamic Models in Geodynamic Simulations: The Example of the Community Software ASPECT

    NASA Astrophysics Data System (ADS)

    Dannberg, J.; Heister, T.; Grove, R. R.; Gassmoeller, R.; Spiegelman, M. W.; Bangerth, W.

    2017-12-01

    Earth's surface shows many features whose genesis can only be understood through the interplay of geodynamic and thermodynamic models. This is particularly important in the context of melt generation and transport: mantle convection determines the distribution of temperature and chemical composition; the melting process itself is then controlled by the thermodynamic relations and in turn influences the properties and the transport of melt. Here, we present our extension of the community geodynamics code ASPECT, which solves the equations of coupled magma/mantle dynamics and makes it possible to integrate different parametrizations of reactions and phase transitions: they may alternatively be implemented as simple analytical expressions or look-up tables, or computed by thermodynamics software. As ASPECT uses a variety of numerical methods and solvers, this also gives us the opportunity to compare different approaches to modelling the melting process. In particular, we will elaborate on the spatial and temporal resolution that is required to accurately model phase transitions, and show the potential of adaptive mesh refinement when applied to melt generation and transport. We will assess the advantages and disadvantages of iterating between fluid dynamics and chemical reactions derived from thermodynamic models within each time step, or decoupling them, allowing for different time step sizes. Beyond that, we will expand on the functionality required for an interface between computational thermodynamics and fluid dynamics models from the geodynamics side. Finally, using a simple example of melting of a two-phase, two-component system, we compare different time-stepping and solver schemes in terms of accuracy and efficiency, depending on the relative time scales of fluid flow and chemical reactions.
Our software provides a framework to integrate thermodynamic models in high resolution, 3d simulations of coupled magma/mantle dynamics, and can be used as a tool to study links between physical processes and geochemical signals in the Earth.

  11. LAPS Lidar Measurements at the ARM Alaska Northslope Site (Support to FIRE Project)

    NASA Technical Reports Server (NTRS)

    Philbrick, C. Russell; Lysak, Daniel B., Jr.; Petach, Tomas M.; Esposito, Steven T.; Mulik, Karoline R.

    1998-01-01

    This report consists of data summaries of the results obtained during the May 1998 measurement period at Barrow, Alaska. This report does not contain any data interpretation or analysis of the results; that will follow this activity. This report is forwarded with a data set on magnetic media which contains the reduced data from the LAPS lidar in 15 minute intervals. The data was obtained during the period 15-30 May 1998. The measurement period overlapped with several aircraft flights conducted by NASA as part of the FIRE project. The report contains a summary list of the data obtained plus figures that have been prepared to help visualize the measurement periods. The order of the presentation is as follows: Section 1. A copy of the Statement of Work for the planned activity of the second measurement period at the ARM Northslope site is provided. Section 2. A list of the data collection periods shows the number of one minute data records stored during each hour of operation and the corresponding size (Mbytes) of the one hour data folders. The folder and file names are composed from the year, month, day, hour and minute. The date/time information is given in UTC for easier comparison with other data sets. Section 3. A set of 4 comparisons between the LAPS lidar results and the sondes released by the ARM scientists from a location near the lidar. The lidar results show the +/- 1 sigma statistical error on each of the independent 75 m altitude bins of the data. This set of 4 comparisons was used to set and validate the calibration value which was then used for the complete data set. Section 4. A set of false color figures with up to 10 hours of specific humidity measurements are shown in each graph. Two days of measurements are shown on each page. These plots are crude representations of the data and permit a survey which indicates when the clouds were very low or where interesting events may occur in the results.
These plots are prepared using the real time sequence plot program, which applies no smoothing in either altitude or time (except that the user may pick the integration time and time step). All of these plots were prepared with 15 minute integration and a 5 minute time step. Section 5. A set of time sequence data for all of the extended observation periods is shown with a smoothing algorithm from the Matlab plotting library. Most of these data are integrated for 5 minutes and stepped at 1 minute intervals, but several plots are shown with both 15 minute integration and 5 minute steps. The upper level on these data was selected and converted to the white background where the error in the specific humidity reached 25%. Section 6. The set of one hour integrated plots, shown with up to 4 hours per page, is provided from the real time analysis snapshot program. The only difference between these plots and the real time display is that the plots are stopped at an altitude where the error appears to be too large for the data to contain any meaningful information.
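The 15-minute integration with a 5-minute step described above is an overlapping-window average. A minimal sketch of that operation (illustrative Python on one-minute samples; not the LAPS processing code):

```python
def sliding_integration(samples, window, step):
    """Average one-minute `samples` in overlapping windows of `window`
    minutes, advancing the window start by `step` minutes (e.g. the
    15 min integration / 5 min step used for the survey plots)."""
    out = []
    start = 0
    while start + window <= len(samples):
        chunk = samples[start:start + window]
        out.append(sum(chunk) / len(chunk))
        start += step
    return out

# one hour of one-minute records, 15 min integration, 5 min step
profiles = sliding_integration(list(range(60)), window=15, step=5)
```

With 60 one-minute records this yields 10 overlapping window averages, one every 5 minutes.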

  12. Application of integration algorithms in a parallel processing environment for the simulation of jet engines

    NASA Technical Reports Server (NTRS)

    Krosel, S. M.; Milner, E. J.

    1982-01-01

The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented, and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real time performance, interprocessor communication, and algorithm startup are also discussed.
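As a concrete illustration of the predictor-corrector family (a sketch only, not the specific algorithms evaluated in the report), one step of an Euler predictor followed by a trapezoidal corrector (Heun's method):

```python
import math

def pc_step(f, t, y, h):
    """One predictor-corrector step: explicit Euler predictor followed by
    a trapezoidal-rule corrector (Heun's method, second-order accurate)."""
    y_pred = y + h * f(t, y)                           # predictor
    return y + 0.5 * h * (f(t, y) + f(t + h, y_pred))  # corrector

# integrate dy/dt = -y from y(0) = 1 to t = 1; exact answer is exp(-1)
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = pc_step(lambda t, y: -y, t, y, h)
    t += h
```

The corrector reuses the predictor's slope estimate, which is what makes the family attractive for parallel evaluation of the derivative calls.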

  13. Integration of biotechnological wastewater treatment units in textile finishing factories: from end of the pipe solutions to combined production and wastewater treatment units.

    PubMed

    Feitkenhauer, H; Meyer, U

    2001-08-23

Increasing costs for water, wastewater and energy put pressure on textile finishing plants to increase the efficiency of wet processing. An improved water management can decrease the use of these resources and is a prerequisite for the integration of an efficient, anaerobic on-site pretreatment of effluents that will further cut wastewater costs. A two-phase anaerobic treatment is proposed, and successful laboratory experiments with model effluents from the cotton finishing industry are reported. The chemical oxygen demand of this wastewater was reduced by over 88% at retention times of 1 day or longer. The next step to boost the efficiency is to combine the production and wastewater treatment. The example of cotton fabric desizing (removing size from the fabric) illustrates how this final step of integration uses the acidic phase bioreactor as a part of the production and allows the water cycle of the system to be closed.

  14. Oceanic and atmospheric conditions associated with the pentad rainfall over the southeastern peninsular India during the North-East Indian Monsoon season

    NASA Astrophysics Data System (ADS)

    Shanmugasundaram, Jothiganesh; Lee, Eungul

    2018-03-01

The association of North-East Indian Monsoon rainfall (NEIMR) over the southeastern peninsular India with the oceanic and atmospheric conditions over the adjacent ocean regions at a pentad time step (five-day period) was investigated during the months of October to December for the period 1985-2014. The non-parametric correlation and composite analyses were carried out for the simultaneous and lagged time steps (up to four lags) of oceanic and atmospheric variables with pentad NEIMR. The results indicated that NEIMR was significantly correlated: 1) positively with both sea surface temperature (SST) led by 1-4 pentads (lag 1-4 time steps) and latent heat flux (LHF) during the simultaneous, lag 1 and 2 time steps over the equatorial western Indian Ocean, 2) positively with SST but negatively with LHF (less heat flux from ocean to atmosphere) during the same and all the lagged time steps over the Bay of Bengal. Consistently, during the wet NEIMR pentads over the southeastern peninsular India, SST significantly increased over the Bay of Bengal during all the time steps and the equatorial western Indian Ocean during the lag 2-4 time steps, while the LHF decreased over the Bay of Bengal (all time steps) and increased over the Indian Ocean (same, lag 1 and 2). The investigation on ocean-atmosphere interaction revealed that the enhanced LHF over the equatorial western Indian Ocean was related to increased atmospheric moisture demand and increased wind speed, whereas the reduced LHF over the Bay of Bengal was associated with decreased atmospheric moisture demand and decreased wind speed. The vertically integrated moisture flux and moisture transport vectors from 1000 to 850 hPa showed that moisture was carried away from the equatorial western Indian Ocean to the strong moisture convergence regions of the Bay of Bengal during the same and lag 1 time steps of wet NEIMR pentads. 
Further, the moisture over the Bay of Bengal was transported to the southeastern peninsular India through stronger cyclonic circulations, which were confirmed by the moisture transport vectors and positive vorticity. The identified ocean and atmosphere processes, associated with the wet NEIMR conditions, could be a valuable scientific input for enhancing the rainfall predictability, which has a huge socioeconomic value to agriculture and water resource management sectors in the southeastern peninsular India.
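The simultaneous and lagged correlation analysis can be sketched as follows (plain-Python illustration; `pearson` and `lagged_corr` are hypothetical helpers, not the study's analysis code):

```python
def pearson(a, b):
    """Plain Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def lagged_corr(rain, sst, lag):
    """Correlate rainfall at pentad t with SST at pentad t - lag,
    i.e. SST leading rainfall by `lag` pentads (lag 0 = simultaneous)."""
    if lag == 0:
        return pearson(rain, sst)
    return pearson(rain[lag:], sst[:-lag])

# toy series in which SST at pentad t - 1 exactly predicts rainfall at t
r = lagged_corr([1, 2, 1, 2, 1, 2], [2, 1, 2, 1, 2, 1], lag=1)
```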

  15. Nonlinearly preconditioned semismooth Newton methods for variational inequality solution of two-phase flow in porous media

    NASA Astrophysics Data System (ADS)

    Yang, Haijian; Sun, Shuyu; Yang, Chao

    2017-03-01

    Most existing methods for solving two-phase flow problems in porous media do not take the physically feasible saturation fractions between 0 and 1 into account, which often destroys the numerical accuracy and physical interpretability of the simulation. To calculate the solution without the loss of this basic requirement, we introduce a variational inequality formulation of the saturation equilibrium with a box inequality constraint, and use a conservative finite element method for the spatial discretization and a backward differentiation formula with adaptive time stepping for the temporal integration. The resulting variational inequality system at each time step is solved by using a semismooth Newton algorithm. To accelerate the Newton convergence and improve the robustness, we employ a family of adaptive nonlinear elimination methods as a nonlinear preconditioner. Some numerical results are presented to demonstrate the robustness and efficiency of the proposed algorithm. A comparison is also included to show the superiority of the proposed fully implicit approach over the classical IMplicit Pressure-Explicit Saturation (IMPES) method in terms of the time step size and the total execution time measured on a parallel computer.
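The semismooth Newton idea for a box-constrained variational inequality can be shown in its simplest, scalar form: Newton's method applied to the nonsmooth projection residual. This is a toy sketch of the reformulation only; the paper's solver works on large finite-element systems with adaptive nonlinear elimination on top:

```python
def semismooth_newton(f, df, x0, lo=0.0, hi=1.0, tol=1e-12, maxit=50):
    """Scalar semismooth Newton on the projection residual
        F(x) = x - clip(x - f(x), lo, hi),
    a standard equivalent reformulation of the box-constrained
    variational inequality with operator f on [lo, hi]."""
    clip = lambda z: min(max(z, lo), hi)
    x = x0
    for _ in range(maxit):
        z = x - f(x)
        F = x - clip(z)
        if abs(F) < tol:
            break
        J = df(x) if lo < z < hi else 1.0   # element of the generalized Jacobian
        x -= F / J
    return x

# VI with f(s) = s - 0.3 on [0, 1]: interior solution s = 0.3
s = semismooth_newton(lambda s: s - 0.3, lambda s: 1.0, 0.9)
```

Where the projection is inactive the iteration reduces to ordinary Newton on f; where it is active the residual is already linear in x.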

  16. A fully disposable and integrated paper-based device for nucleic acid extraction, amplification and detection.

    PubMed

    Tang, Ruihua; Yang, Hui; Gong, Yan; You, MinLi; Liu, Zhi; Choi, Jane Ru; Wen, Ting; Qu, Zhiguo; Mei, Qibing; Xu, Feng

    2017-03-29

Nucleic acid testing (NAT) has been widely used for disease diagnosis, food safety control and environmental monitoring. At present, NAT mainly involves nucleic acid extraction, amplification and detection steps that heavily rely on large equipment and skilled workers, making the test expensive, time-consuming, and thus less suitable for point-of-care (POC) applications. With advances in paper-based microfluidic technologies, various integrated paper-based devices have recently been developed for NAT, which however require off-chip reagent storage, complex operation steps and equipment-dependent nucleic acid amplification, restricting their use for POC testing. To overcome these challenges, we demonstrate a fully disposable and integrated paper-based sample-in-answer-out device for NAT by integrating nucleic acid extraction, helicase-dependent isothermal amplification and lateral flow assay detection into one paper device. This simple device allows on-chip dried reagent storage and equipment-free nucleic acid amplification with simple operation steps, which could be performed by untrained users in remote settings. The proposed device consists of a sponge-based reservoir and a paper-based valve for nucleic acid extraction, an integrated battery, a PTC ultrathin heater, temperature control switch and on-chip dried enzyme mix storage for isothermal amplification, and a lateral flow test strip for naked-eye detection. It can sensitively detect Salmonella typhimurium, as a model target, with a detection limit as low as 10^2 CFU ml^-1 in wastewater and egg, and 10^3 CFU ml^-1 in milk and juice in about an hour. This fully disposable and integrated paper-based device has great potential for future POC applications in resource-limited settings.

  17. An Efficient User Interface Design for Nursing Information System Based on Integrated Patient Order Information.

    PubMed

    Chu, Chia-Hui; Kuo, Ming-Chuan; Weng, Shu-Hui; Lee, Ting-Ting

    2016-01-01

A user friendly interface can enhance the efficiency of data entry, which is crucial for building a complete database. In this study, two user interfaces (traditional pull-down menu vs. check boxes) are proposed and evaluated based on medical records with fever medication orders by measuring the time for data entry, the steps for each data entry record, and the completion rate of each medical record. The results revealed that the time for data entry is reduced from 22.8 sec/record to 3.2 sec/record. The data entry procedure is also reduced from 9 steps in the traditional interface to 3 steps in the new one. In addition, the completeness of medical records is increased from 20.2% to 98%. All these results indicate that the new user interface provides a more user friendly and efficient approach for data entry than the traditional interface.

  18. TRUST84. Sat-Unsat Flow in Deformable Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narasimhan, T.N.

    1984-11-01

TRUST84 solves for transient and steady-state flow in variably saturated deformable media in one, two, or three dimensions. It can handle porous media, fractured media, or fractured-porous media. Boundary conditions may be an arbitrary function of time. Sources or sinks may be a function of time or of potential. The theoretical model considers a general three-dimensional field of flow in conjunction with a one-dimensional vertical deformation field. The governing equation expresses the conservation of fluid mass in an elemental volume that has a constant volume of solids. Deformation of the porous medium may be nonelastic. Permeability and the compressibility coefficients may be nonlinearly related to effective stress. Relationships between permeability and saturation with pore water pressure in the unsaturated zone may be characterized by hysteresis. The relation between pore pressure change and effective stress change may be a function of saturation. The basic calculational model of the conductive heat transfer code TRUMP is applied in TRUST84 to the flow of fluids in porous media. The model combines an integrated finite difference algorithm for numerically solving the governing equation with a mixed explicit-implicit iterative scheme in which the explicit changes in potential are first computed for all elements in the system, after which implicit corrections are made only for those elements for which the stable time-step is less than the time-step being used. Time-step sizes are automatically controlled to optimize the number of iterations, to control maximum change to potential during a time-step, and to obtain desired output information. Time derivatives, estimated on the basis of system behavior during the two previous time-steps, are used to start the iteration process and to evaluate nonlinear coefficients. Both heterogeneity and anisotropy can be handled.
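An automatic time-step control of the kind described (keep the maximum change in potential per step near a target, within bounds) can be sketched as a simple proportional rule. The names here are illustrative, not TRUST84's actual variables:

```python
def adapt_dt(dt, dphi_max, dphi_target, dt_min, dt_max):
    """Scale the next time step so the largest potential change per step
    tracks a target value, then clamp to the allowed step-size bounds."""
    if dphi_max > 0.0:
        dt *= dphi_target / dphi_max
    return min(max(dt, dt_min), dt_max)

dt_shrunk = adapt_dt(1.0, dphi_max=2.0, dphi_target=1.0, dt_min=0.1, dt_max=10.0)
dt_capped = adapt_dt(1.0, dphi_max=0.05, dphi_target=1.0, dt_min=0.1, dt_max=10.0)
```

Overshooting the target halves the step; a very quiet step would grow it, subject to the upper cap.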

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lott, P. Aaron; Woodward, Carol S.; Evans, Katherine J.

Performing accurate and efficient numerical simulation of global atmospheric climate models is challenging due to the disparate length and time scales over which physical processes interact. Implicit solvers enable the physical system to be integrated with a time step commensurate with processes being studied. The dominant cost of an implicit time step is the ancillary linear system solves, so we have developed a preconditioner aimed at improving the efficiency of these linear system solves. Our preconditioner is based on an approximate block factorization of the linearized shallow-water equations and has been implemented within the spectral element dynamical core within the Community Atmospheric Model (CAM-SE). Furthermore, in this paper we discuss the development and scalability of the preconditioner for a suite of test cases with the implicit shallow-water solver within CAM-SE.

  20. Innovative method and equipment for personalized ventilation.

    PubMed

    Kalmár, F

    2015-06-01

At the University of Debrecen, a new method and equipment for personalized ventilation has been developed. This equipment makes it possible to change the airflow direction during operation with a time frequency chosen by the user. The developed office desk with integrated air ducts and control system permits ventilation with 100% outdoor air, 100% recirculated air, or a mix of outdoor and recirculated air in a relative proportion set by the user. It was shown that better comfort can be assured in hot environments if the fresh airflow direction is variable. Analyzing the time step of airflow direction change, it was found that women prefer smaller time steps and their votes related to thermal comfort sensation are higher than men's votes. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Bifurcation analysis of a discrete-time ratio-dependent predator-prey model with Allee Effect

    NASA Astrophysics Data System (ADS)

    Cheng, Lifang; Cao, Hongjun

    2016-09-01

A discrete-time predator-prey model with Allee effect is investigated in this paper. We consider both the strong and the weak Allee effect (the population growth rate is negative or positive, respectively, at low population density). From the stability analysis and the bifurcation diagrams, we find that the model with an Allee effect (strong or weak) growth function and the model with a logistic growth function have somewhat similar bifurcation structures. If the predator growth rate is smaller than its death rate, the two species cannot coexist because there are no interior fixed points. When the predator growth rate is greater than its death rate and the other parameters are fixed, the model can have two interior fixed points. One is always unstable, and the stability of the other is determined by the integral step size, which to some extent decides whether the species coexist. If we increase the value of the integral step size, bifurcated period-doubled orbits or invariant circle orbits may arise, so the numbers of prey and predator deviate from one stable state and then circulate along periodic or quasi-periodic orbits. When the integral step size is increased to a critical value, chaotic orbits may appear with many uncertain period-windows, which means that the numbers of prey and predator become chaotic. From the bifurcation diagrams and phase portraits, we know that the complexity degree of the model with strong Allee effect decreases, which is related to the fact that the persistence of species can be determined by the initial species densities.
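The role of the integral step size can be reproduced with a much simpler system: the Euler discretization of logistic growth, which undergoes the same step-size-driven period doubling. This is an illustration of the mechanism only, not the paper's ratio-dependent model:

```python
def euler_logistic_orbit(h, r=3.0, x0=0.5, burn=500, keep=8):
    """Iterate the Euler-discretized logistic growth x -> x + h*r*x*(1 - x).
    The fixed point x* = 1 is stable for h*r < 2; larger step sizes give
    period doubling and eventually chaos, purely from the discretization."""
    x = x0
    for _ in range(burn):                  # discard the transient
        x = x + h * r * x * (1.0 - x)
    orbit = []
    for _ in range(keep):
        x = x + h * r * x * (1.0 - x)
        orbit.append(x)
    return orbit

orbit_stable = euler_logistic_orbit(0.5)    # h*r = 1.5: settles on x* = 1
orbit_period2 = euler_logistic_orbit(0.75)  # h*r = 2.25: stable 2-cycle
```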

  2. Addiction, Family Treatment, and Healing Resources: An Interview with David Berenson.

    ERIC Educational Resources Information Center

    Morgan, Oliver J.

    1998-01-01

    Interviews Berenson on his distinctive approach to therapy with families and couples affected by addiction and provides references. Considers background and theoretical influences, and changes over time. Discusses the use of "phasing," collaboration with Twelve Step programs, and integration of a spiritual perspective into family and…

  3. INPUFF: A SINGLE SOURCE GAUSSIAN PUFF DISPERSION ALGORITHM. USER'S GUIDE

    EPA Science Inventory

    INPUFF is a Gaussian INtegrated PUFF model. The Gaussian puff diffusion equation is used to compute the contribution to the concentration at each receptor from each puff every time step. Computations in INPUFF can be made for a single point source at up to 25 receptor locations. ...
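The per-puff, per-receptor contribution in a Gaussian puff model is the standard three-dimensional Gaussian kernel. A minimal sketch of that equation (textbook form without ground-reflection terms; not the INPUFF source code):

```python
import math

def puff_concentration(q, x, y, z, sx, sy, sz):
    """Concentration contribution of a single Gaussian puff of mass q at a
    receptor displaced (x, y, z) from the puff centre, with dispersion
    sigmas (sx, sy, sz) that grow as the puff is advected each time step."""
    norm = q / ((2.0 * math.pi) ** 1.5 * sx * sy * sz)
    return norm * math.exp(-0.5 * ((x / sx) ** 2 + (y / sy) ** 2 + (z / sz) ** 2))

c_centre = puff_concentration(1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0)
c_offset = puff_concentration(1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0)
```

The model sums this contribution over all live puffs at each receptor every time step.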

  4. Elementary ELA/Social Studies Integration: Challenges and Limitations

    ERIC Educational Resources Information Center

    Heafner, Tina L.

    2018-01-01

    Adding instructional time and holding teachers accountable for teaching social studies are touted as practical, logical steps toward reforming the age-old tradition of marginalization. This qualitative case study of an urban elementary school, examines how nine teachers and one administrator enacted district reforms that added 45 minutes to the…

  5. The area-time-integral technique to estimate convective rain volumes over areas applied to satellite data - A preliminary investigation

    NASA Technical Reports Server (NTRS)

    Doneaud, Andre A.; Miller, James R., Jr.; Johnson, L. Ronald; Vonder Haar, Thomas H.; Laybe, Patrick

    1987-01-01

The use of the area-time-integral (ATI) technique, based only on satellite data, to estimate convective rain volume over a moving target is examined. The technique is based on the correlation between the radar echo area coverage integrated over the lifetime of the storm and the radar estimated rain volume. The processing of the GOES and radar data collected in 1981 is described. The radar and satellite parameters for six convective clusters from storm events occurring on June 12 and July 2, 1981 are analyzed and compared in terms of time steps and cluster lifetimes. Rain volume is calculated by first using regression analysis to generate the equation relating the ATI to rain volume; this ATI-versus-rain-volume relation is then employed to compute rain volume. The data reveal that the ATI technique using satellite data is applicable to the calculation of rain volume.
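The two stages of the technique can be sketched in a few lines: accumulate the area-time integral over the cluster lifetime, then apply a fitted power-law relation. The coefficients below are placeholders for illustration, not the 1981 regression values:

```python
def area_time_integral(areas, dt):
    """ATI = echo/cloud area summed over the cluster lifetime, times the
    time step between observations (e.g. km^2 * h)."""
    return sum(areas) * dt

def rain_volume(ati, a=2.0, b=1.0):
    """Power-law ATI -> rain-volume relation V = a * ATI**b; a and b come
    from the regression fit (placeholder values here)."""
    return a * ati ** b

ati = area_time_integral([10.0, 20.0, 30.0], dt=0.5)
volume = rain_volume(ati, a=2.0, b=1.0)
```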

  6. A computational method for sharp interface advection

    PubMed Central

    Bredmose, Henrik; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face–interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source. PMID:28018619

  7. Real-time, interactive animation of deformable two- and three-dimensional objects

    DOEpatents

    Desbrun, Mathieu; Schroeder, Peter; Meyer, Mark; Barr, Alan H.

    2003-06-03

    A method of updating in real-time the locations and velocities of mass points of a two- or three-dimensional object represented by a mass-spring system. A modified implicit Euler integration scheme is employed to determine the updated locations and velocities. In an optional post-integration step, the updated locations are corrected to preserve angular momentum. A processor readable medium and a network server each tangibly embodying the method are also provided. A system comprising a processor in combination with the medium, and a system comprising the server in combination with a client for accessing the server over a computer network, are also provided.
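The implicit (backward) Euler update behind such mass-spring animation can be shown in one dimension, where the linear system has a closed-form solution. This is a 1-D sketch of the general idea, not the patented modified scheme itself:

```python
def implicit_euler_spring(x, v, h, k=1.0, m=1.0, c=0.0):
    """One backward-Euler step for a single damped spring (force -k*x - c*v):
        v1 = v + h*(-k*x1 - c*v1)/m,   x1 = x + h*v1.
    Substituting x1 into the first equation and solving for v1 gives the
    closed-form update below; implicit Euler stays stable for large h."""
    v1 = (v - h * k * x / m) / (1.0 + h * c / m + h * h * k / m)
    x1 = x + h * v1
    return x1, v1

x1, v1 = implicit_euler_spring(1.0, 0.0, 0.5)   # one large step
x, v = 1.0, 0.0
for _ in range(100):                             # long run: bounded, damped
    x, v = implicit_euler_spring(x, v, 0.5)
```

The numerical damping visible in the long run is the known trade-off of implicit Euler; the patent's post-integration correction addresses a related artifact (angular momentum loss).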

  8. Addressable-Matrix Integrated-Circuit Test Structure

    NASA Technical Reports Server (NTRS)

    Sayah, Hoshyar R.; Buehler, Martin G.

    1991-01-01

    Method of quality control based on use of row- and column-addressable test structure speeds collection of data on widths of resistor lines and coverage of steps in integrated circuits. By use of straightforward mathematical model, line widths and step coverages deduced from measurements of electrical resistances in each of various combinations of lines, steps, and bridges addressable in test structure. Intended for use in evaluating processes and equipment used in manufacture of application-specific integrated circuits.
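The deduction of line width from an electrical measurement rests on the sheet-resistance relation R = Rs·L/W. A minimal sketch of that step (illustrative only; the structure's actual model also separates step-coverage contributions):

```python
def line_width(r_measured, r_sheet, length):
    """Deduce conductor line width W from a measured line resistance via
    R = Rs * L / W  =>  W = Rs * L / R, with Rs the sheet resistance
    (ohms/square) and L the line length in the same units as W."""
    return r_sheet * length / r_measured

# 200-square line of 1 ohm/sq material measuring 100 ohms -> 2 um width
w = line_width(100.0, r_sheet=1.0, length=200.0)
```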

  9. Fabrication and characterization of lithographically patterned and optically transparent anodic aluminum Oxide (AAO) nanostructure thin film.

    PubMed

    He, Yuan; Li, Xiang; Que, Long

    2012-10-01

    Optically transparent anodic aluminum oxide (AAO) nanostructure thin film has been successfully fabricated from lithographically patterned aluminum on indium tin oxide (ITO) glass substrates for the first time, indicating the feasibility to integrate the AAO nanostructures with microdevices or microfluidics for a variety of applications. Both one-step and two-step anodization processes using sulfuric acid and oxalic acid have been utilized for fabricating the AAO nanostructure thin film. The optical properties of the fabricated AAO nanostructure thin film have been evaluated and analyzed.

  10. METHOD OF MEASURING THE INTEGRATED ENERGY OUTPUT OF A NEUTRONIC CHAIN REACTOR

    DOEpatents

    Sturm, W.J.

    1958-12-01

A method is presented for measuring the integrated energy output of a reactor consisting of the steps of successively irradiating calibrated thin foils of an element, such as gold, which is rendered radioactive by exposure to neutron flux for periods of time not greater than one-fifth the mean life of the induced radioactivity and producing an indication of the radioactivity induced in each foil, each foil being introduced into the reactor immediately upon removal of its predecessor.

  11. The CanOE strategy: integrating genomic and metabolic contexts across multiple prokaryote genomes to find candidate genes for orphan enzymes.

    PubMed

    Smith, Adam Alexander Thil; Belda, Eugeni; Viari, Alain; Medigue, Claudine; Vallenet, David

    2012-05-01

Of all biochemically characterized metabolic reactions formalized by the IUBMB, over one in four have yet to be associated with a nucleic or protein sequence, i.e. are sequence-orphan enzymatic activities. Few bioinformatics annotation tools are able to propose candidate genes for such activities by exploiting context-dependent rather than sequence-dependent data, and none are readily accessible and propose result integration across multiple genomes. Here, we present CanOE (Candidate genes for Orphan Enzymes), a four-step bioinformatics strategy that proposes ranked candidate genes for sequence-orphan enzymatic activities (or orphan enzymes for short). The first step locates "genomic metabolons", i.e. groups of co-localized genes coding proteins catalyzing reactions linked by shared metabolites, in one genome at a time. These metabolons can be particularly helpful for aiding bioanalysts to visualize relevant metabolic data. In the second step, they are used to generate candidate associations between un-annotated genes and gene-less reactions. The third step integrates these gene-reaction associations over several genomes using gene families, and summarizes the strength of family-reaction associations by several scores. In the final step, these scores are used to rank members of gene families which are proposed for metabolic reactions. These associations are of particular interest when the metabolic reaction is a sequence-orphan enzymatic activity. Our strategy found over 60,000 genomic metabolons in more than 1,000 prokaryote organisms from the MicroScope platform, generating candidate genes for many metabolic reactions, including more than 70 distinct orphan reactions. A computational validation of the approach is discussed. Finally, we present a case study on the anaerobic allantoin degradation pathway in Escherichia coli K-12.

  12. Starting with Worldviews: A Five-Step Preparatory Approach to Integrative Interdisciplinary Learning

    ERIC Educational Resources Information Center

    Augsburg, Tanya; Chitewere, Tendai

    2013-01-01

    In this article we propose a five-step sequenced approach to integrative interdisciplinary learning in undergraduate gateway courses. Drawing from the literature of interdisciplinarity, transformative learning theory, and theories of reflective learning, we utilize a sequence of five steps early in our respective undergraduate gateway courses to…

  13. Efficient adaptive pseudo-symplectic numerical integration techniques for Landau-Lifshitz dynamics

    NASA Astrophysics Data System (ADS)

    d'Aquino, M.; Capuano, F.; Coppola, G.; Serpico, C.; Mayergoyz, I. D.

    2018-05-01

    Numerical time integration schemes for Landau-Lifshitz magnetization dynamics are considered. Such dynamics preserves the magnetization amplitude and, in the absence of dissipation, also implies the conservation of the free energy. This property is generally lost when time discretization is performed for the numerical solution. In this work, explicit numerical schemes based on Runge-Kutta methods are introduced. The schemes are termed pseudo-symplectic in that they are accurate to order p, but preserve magnetization amplitude and free energy to order q > p. An effective strategy for adaptive time-stepping control is discussed for schemes of this class. Numerical tests against analytical solutions for the simulation of fast precessional dynamics are performed in order to point out the effectiveness of the proposed methods.
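For context, a classical explicit Runge-Kutta step on undamped Landau-Lifshitz precession dm/dt = -m x h illustrates the invariants in play: RK4 does not conserve |m| exactly, but the per-step defect is higher order than the accuracy order, which is the kind of behavior the pseudo-symplectic schemes formalize. A sketch, not the paper's schemes:

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def rk4_ll_step(m, h_field, dt):
    """One classical RK4 step for dm/dt = -m x h (undamped precession)."""
    f = lambda mm: tuple(-c for c in cross(mm, h_field))
    k1 = f(m)
    k2 = f(tuple(mi + 0.5 * dt * ki for mi, ki in zip(m, k1)))
    k3 = f(tuple(mi + 0.5 * dt * ki for mi, ki in zip(m, k2)))
    k4 = f(tuple(mi + dt * ki for mi, ki in zip(m, k3)))
    return tuple(mi + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for mi, a, b, c, d in zip(m, k1, k2, k3, k4))

# precession about h = z_hat: exact solution m(t) = (cos t, sin t, 0)
m = (1.0, 0.0, 0.0)
for _ in range(100):
    m = rk4_ll_step(m, (0.0, 0.0, 1.0), 0.01)
norm = (m[0] ** 2 + m[1] ** 2 + m[2] ** 2) ** 0.5
```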

  14. Unsteady three-dimensional thermal field prediction in turbine blades using nonlinear BEM

    NASA Technical Reports Server (NTRS)

    Martin, Thomas J.; Dulikravich, George S.

    1993-01-01

    A time-and-space accurate and computationally efficient fully three dimensional unsteady temperature field analysis computer code has been developed for truly arbitrary configurations. It uses boundary element method (BEM) formulation based on an unsteady Green's function approach, multi-point Gaussian quadrature spatial integration on each panel, and a highly clustered time-step integration. The code accepts either temperatures or heat fluxes as boundary conditions that can vary in time on a point-by-point basis. Comparisons of the BEM numerical results and known analytical unsteady results for simple shapes demonstrate very high accuracy and reliability of the algorithm. An example of computed three dimensional temperature and heat flux fields in a realistically shaped internally cooled turbine blade is also discussed.

  15. Real-time micro-modelling of city evacuations

    NASA Astrophysics Data System (ADS)

    Löhner, Rainald; Haug, Eberhard; Zinggerling, Claudio; Oñate, Eugenio

    2018-01-01

    A methodology to integrate geographical information system (GIS) data with large-scale pedestrian simulations has been developed. Advances in automatic data acquisition and archiving from GIS databases, automatic input for pedestrian simulations, as well as scalable pedestrian simulation tools have made it possible to simulate pedestrians at the individual level for complete cities in real time. An example that simulates the evacuation of the city of Barcelona demonstrates that this is now possible. This is the first step towards a fully integrated crowd prediction and management tool that takes into account not only data gathered in real time from cameras, cell phones or other sensors, but also merges these with advanced simulation tools to predict the future state of the crowd.

  16. Hierarchical Engine for Large-scale Infrastructure Co-Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-04-24

    HELICS is designed to support very-large-scale (100,000+ federates) cosimulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross platform operating system support, the integration of both event driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.

  17. Integration of design and manufacturing in a virtual enterprise using enterprise rules, intelligent agents, STEP, and work flow

    NASA Astrophysics Data System (ADS)

    Gilman, Charles R.; Aparicio, Manuel; Barry, J.; Durniak, Timothy; Lam, Herman; Ramnath, Rajiv

    1997-12-01

An enterprise's ability to deliver new products quickly and efficiently to market is critical for competitive success. While manufacturers recognize the need for speed and flexibility to compete in this marketplace, companies do not have the time or capital to move to new automation technologies. The National Industrial Information Infrastructure Protocols Consortium's Solutions for MES Adaptable Replicable Technology (NIIIP SMART) subgroup is developing an information infrastructure to enable the integration and interoperation among Manufacturing Execution Systems (MES) and Enterprise Information Systems within an enterprise or among enterprises. The goal of these developments is an adaptable, affordable, reconfigurable, integratable manufacturing system. Key innovative aspects of NIIIP SMART are: (1) Design of an industry standard object model that represents the diverse aspects of MES. (2) Design of a distributed object network to support real-time information sharing. (3) Product data exchange based on STEP and EXPRESS (ISO 10303). (4) Application of workflow and knowledge management technologies to enact manufacturing and business procedures and policy. (5) Application of intelligent agents to support emergent factories. This paper illustrates how these technologies have been incorporated into the NIIIP SMART system architecture to enable the integration and interoperation of existing tools and future MES applications in a 'plug and play' environment.

  18. Enzyme inhibition studies by integrated Michaelis-Menten equation considering simultaneous presence of two inhibitors when one of them is a reaction product.

    PubMed

    Bezerra, Rui M F; Pinto, Paula A; Fraga, Irene; Dias, Albino A

    2016-03-01

    To determine initial velocities of enzyme catalyzed reactions without theoretical errors it is necessary to consider the use of the integrated Michaelis-Menten equation. When the reaction product is an inhibitor, this approach is particularly important. Nevertheless, kinetic studies usually involved the evaluation of other inhibitors beyond the reaction product. The occurrence of these situations emphasizes the importance of extending the integrated Michaelis-Menten equation, assuming the simultaneous presence of more than one inhibitor because reaction product is always present. This methodology is illustrated with the reaction catalyzed by alkaline phosphatase inhibited by phosphate (reaction product, inhibitor 1) and urea (inhibitor 2). The approach is explained in a step by step manner using an Excel spreadsheet (available as a template in Appendix). Curve fitting by nonlinear regression was performed with the Solver add-in (Microsoft Office Excel). Discrimination of the kinetic models was carried out based on Akaike information criterion. This work presents a methodology that can be used to develop an automated process, to discriminate in real time the inhibition type and kinetic constants as data (product vs. time) are achieved by the spectrophotometer. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
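The uninhibited integrated Michaelis-Menten equation, Km·ln(S0/S) + (S0 − S) = Vmax·t, is the baseline the paper extends to two simultaneous inhibitors. A sketch of solving it numerically for S(t) by bisection (stdlib only; the paper itself fits the extended equation with Excel's Solver, not this code):

```python
import math

def integrated_mm_substrate(t, s0, vmax, km, tol=1e-12):
    """Substrate concentration S(t) satisfying the integrated
    Michaelis-Menten equation  Km*ln(S0/S) + (S0 - S) = Vmax*t
    (no inhibitors). g(S) = Km*ln(S0/S) + (S0 - S) decreases
    monotonically in S, so bisection on [~0, S0] is safe."""
    lo, hi = 1e-15, s0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if km * math.log(s0 / mid) + (s0 - mid) > vmax * t:
            lo = mid   # g(mid) too large => true S lies above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

s_start = integrated_mm_substrate(0.0, s0=10.0, vmax=1.0, km=2.0)
# at t = 2*ln(2) + 5 (with Vmax = 1, Km = 2, S0 = 10), S should be 5
s_half = integrated_mm_substrate(6.386294361119891, s0=10.0, vmax=1.0, km=2.0)
```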

  19. Fiber-integrated refractive index sensor based on a diced Fabry-Perot micro-resonator.

    PubMed

    Suntsov, Sergiy; Rüter, Christian E; Schipkowski, Tom; Kip, Detlef

    2017-11-20

    We report on a fiber-integrated refractive index sensor based on a Fabry-Perot micro-resonator fabricated by simple diamond blade dicing of a single-mode step-index fiber. The performance of the device has been tested for refractive index measurements of sucrose solutions as well as in air. The device shows a sensitivity of 1160 nm/RIU (refractive index unit) at a wavelength of 1.55 μm and a temperature cross-sensitivity of less than 10⁻⁷ RIU/°C. Based on evaluation of the broadband reflection spectra, refractive index steps of 10⁻⁵ in the solutions were accurately measured. Coating the resonator sidewalls with layers of a high-index material, with real-time monitoring of the reflection spectrum, could significantly improve the sensor performance.

  20. Comparative Analysis of Models of the Earth's Gravity: 3. Accuracy of Predicting EAS Motion

    NASA Astrophysics Data System (ADS)

    Kuznetsov, E. D.; Berland, V. E.; Wiebe, Yu. S.; Glamazda, D. V.; Kajzer, G. T.; Kolesnikov, V. I.; Khremli, G. P.

    2002-05-01

    This paper continues a comparative analysis of modern satellite models of the Earth's gravity which we started in [6, 7]. In the cited works, the uniform norms of spherical functions were compared with their gradients for individual harmonics of the geopotential expansion [6] and the potential differences were compared with the gravitational accelerations obtained in various models of the Earth's gravity [7]. In practice, it is important to know how consistently the motion of Earth artificial satellites (EAS) is represented by various geopotential models. Unless otherwise stated, a model version in which the equations of motion are written using the classical Encke scheme and integrated together with the variational equations by Everhart's implicit one-step algorithm [1] was used. When calculating coordinates and velocities within the integration step (at given instants of time), Everhart's approximating formula was employed.

  1. Ab initio molecular dynamics with nuclear quantum effects at classical cost: Ring polymer contraction for density functional theory.

    PubMed

    Marsalek, Ondrej; Markland, Thomas E

    2016-02-07

    Path integral molecular dynamics simulations, combined with an ab initio evaluation of interactions using electronic structure theory, incorporate the quantum mechanical nature of both the electrons and nuclei, which are essential to accurately describe systems containing light nuclei. However, path integral simulations have traditionally required a computational cost around two orders of magnitude greater than treating the nuclei classically, making them prohibitively costly for most applications. Here we show that the cost of path integral simulations can be dramatically reduced by extending our ring polymer contraction approach to ab initio molecular dynamics simulations. By using density functional tight binding as a reference system, we show that our ring polymer contraction scheme gives rapid and systematic convergence to the full path integral density functional theory result. We demonstrate the efficiency of this approach in ab initio simulations of liquid water and the reactive protonated and deprotonated water dimer systems. We find that the vast majority of the nuclear quantum effects are accurately captured using contraction to just the ring polymer centroid, which requires the same number of density functional theory calculations as a classical simulation. Combined with a multiple time step scheme using the same reference system, which allows the time step to be increased, this approach is as fast as a typical classical ab initio molecular dynamics simulation and 35× faster than a full path integral calculation, while still exactly including the quantum sampling of nuclei. This development thus offers a route to routinely include nuclear quantum effects in ab initio molecular dynamics simulations at negligible computational cost.

  2. On the stability of projection methods for the incompressible Navier-Stokes equations based on high-order discontinuous Galerkin discretizations

    NASA Astrophysics Data System (ADS)

    Fehn, Niklas; Wall, Wolfgang A.; Kronbichler, Martin

    2017-12-01

    The present paper deals with the numerical solution of the incompressible Navier-Stokes equations using high-order discontinuous Galerkin (DG) methods for discretization in space. For DG methods applied to the dual splitting projection method, instabilities have recently been reported that occur for small time step sizes. Since the critical time step size depends on the viscosity and the spatial resolution, these instabilities limit the robustness of the Navier-Stokes solver in case of complex engineering applications characterized by coarse spatial resolutions and small viscosities. By means of numerical investigation we give evidence that these instabilities are related to the discontinuous Galerkin formulation of the velocity divergence term and the pressure gradient term that couple velocity and pressure. Integration by parts of these terms with a suitable definition of boundary conditions is required in order to obtain a stable and robust method. Since the intermediate velocity field does not fulfill the boundary conditions prescribed for the velocity, a consistent boundary condition is derived from the convective step of the dual splitting scheme to ensure high-order accuracy with respect to the temporal discretization. This new formulation is stable in the limit of small time steps for both equal-order and mixed-order polynomial approximations. Although the dual splitting scheme itself includes inf-sup stabilizing contributions, we demonstrate that spurious pressure oscillations appear for equal-order polynomials and small time steps highlighting the necessity to consider inf-sup stability explicitly.

  3. Auditory Proprioceptive Integration: Effects of Real-Time Kinematic Auditory Feedback on Knee Proprioception

    PubMed Central

    Ghai, Shashank; Schmitz, Gerd; Hwang, Tong-Hun; Effenberg, Alfred O.

    2018-01-01

    The purpose of the study was to assess the influence of real-time auditory feedback on knee proprioception. Thirty healthy participants were randomly allocated to a control group (n = 15) and experimental group I (n = 15). The participants performed an active knee-repositioning task using their dominant leg, with/without additional real-time auditory feedback where the frequency was mapped in a convergent manner to two different target angles (40° and 75°). Statistical analysis revealed significant enhancement in knee re-positioning accuracy for the constant and absolute error with real-time auditory feedback, within and across the groups. Besides this convergent condition, we established a second, divergent condition. Here, a step-wise transposition of frequency was performed to explore whether a systematic tuning between auditory-proprioceptive re-positioning exists. No significant effects were identified in this divergent auditory feedback condition. An additional experimental group II (n = 20) was further included. Here, we investigated the influence of a larger magnitude and directional change of step-wise transposition of the frequency. In a first step, results confirm the findings of experiment I. Moreover, significant effects on knee auditory-proprioceptive re-positioning were evident when divergent auditory feedback was applied. During the step-wise transposition participants showed systematic modulation of knee movements in the opposite direction of transposition. We confirm that knee re-positioning accuracy can be enhanced with concurrent application of real-time auditory feedback and that knee re-positioning can be modulated in a goal-directed manner with step-wise transposition of frequency. Clinical implications are discussed with respect to joint position sense in rehabilitation settings. PMID:29568259

  4. Radiative transfer and spectroscopic databases: A line-sampling Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Galtier, Mathieu; Blanco, Stéphane; Dauchet, Jérémi; El Hafi, Mouna; Eymet, Vincent; Fournier, Richard; Roger, Maxime; Spiesser, Christophe; Terrée, Guillaume

    2016-03-01

    Dealing with molecular-state transitions for radiative transfer purposes involves two successive steps that both reach the complexity level at which physicists start thinking about statistical approaches: (1) constructing line-shaped absorption spectra as the result of very numerous state-transitions, (2) integrating over optical-path domains. For the first time, we show here how these steps can be addressed simultaneously using the null-collision concept. This opens the door to the design of Monte Carlo codes directly estimating radiative transfer observables from spectroscopic databases. The intermediate step of producing accurate high-resolution absorption spectra is no longer required. A Monte Carlo algorithm is proposed and applied to six one-dimensional test cases. It allows the computation of spectrally integrated intensities (over 25 cm-1 bands or the full IR range) in a few seconds, regardless of the retained database and line model. But free parameters need to be selected and they impact the convergence. A first possible selection is provided in full detail. We observe that this selection is highly satisfactory for quite distinct atmospheric and combustion configurations, but a more systematic exploration is still in progress.
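The null-collision concept this record exploits can be sketched for a single line of sight: pad the true absorption coefficient k(x) up to a constant majorant k_max, sample free paths at rate k_max, and reject "null" collisions with probability 1 - k(x)/k_max. The profile and constants below are hypothetical; the actual algorithm samples k itself line-by-line from the spectroscopic database:

```python
import numpy as np

rng = np.random.default_rng(0)

def k(x):
    # hypothetical spatially varying absorption coefficient, k(x) <= k_max
    return 1.0 + 0.5 * np.sin(2.0 * np.pi * x)

k_max, L, N = 1.5, 2.0, 100_000

def photon_transmitted():
    x = 0.0
    while True:
        x += rng.exponential(1.0 / k_max)      # free path at the majorant rate
        if x > L:
            return True                        # photon escaped the slab
        if rng.random() < k(x) / k_max:        # true absorption event
            return False
        # otherwise: null collision, keep going

transmissivity = np.mean([photon_transmitted() for _ in range(N)])
# exact value here is exp(-2): the sine integrates to zero over [0, 2]
```

The estimator is unbiased for any majorant, but a loose k_max inflates the number of null collisions, which is one way the free parameters mentioned in the abstract impact convergence.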

  5. Getting the message across: using ecological integrity to communicate with resource managers

    USGS Publications Warehouse

    Mitchell, Brian R.; Tierney, Geraldine L.; Schweiger, E. William; Miller, Kathryn M.; Faber-Langendoen, Don; Grace, James B.

    2014-01-01

    This chapter describes and illustrates how concepts of ecological integrity, thresholds, and reference conditions can be integrated into a research and monitoring framework for natural resource management. Ecological integrity has been defined as a measure of the composition, structure, and function of an ecosystem in relation to the system’s natural or historical range of variation, as well as perturbations caused by natural or anthropogenic agents of change. Using ecological integrity to communicate with managers requires five steps, often implemented iteratively: (1) document the scale of the project and the current conceptual understanding and reference conditions of the ecosystem, (2) select appropriate metrics representing integrity, (3) define externally verified assessment points (metric values that signify an ecological change or need for management action) for the metrics, (4) collect data and calculate metric scores, and (5) summarize the status of the ecosystem using a variety of reporting methods. While we present the steps linearly for conceptual clarity, actual implementation of this approach may require addressing the steps in a different order or revisiting steps (such as metric selection) multiple times as data are collected. Knowledge of relevant ecological thresholds is important when metrics are selected, because thresholds identify where small changes in an environmental driver produce large responses in the ecosystem. Metrics with thresholds at or just beyond the limits of a system’s range of natural variability can be excellent, since moving beyond the normal range produces a marked change in their values. Alternatively, metrics with thresholds within but near the edge of the range of natural variability can serve as harbingers of potential change. Identifying thresholds also contributes to decisions about selection of assessment points. 
In particular, if there is a significant resistance to perturbation in an ecosystem, with threshold behavior not occurring until well beyond the historical range of variation, this may provide a scientific basis for shifting an ecological assessment point beyond the historical range. We present two case studies using ongoing monitoring by the US National Park Service Vital Signs program that illustrate the use of an ecological integrity approach to communicate ecosystem status to resource managers. The Wetland Ecological Integrity in Rocky Mountain National Park case study uses an analytical approach that specifically incorporates threshold detection into the process of establishing assessment points. The Forest Ecological Integrity of Northeastern National Parks case study describes a method for reporting ecological integrity to resource managers and other decision makers. We believe our approach has the potential for wide applicability for natural resource management.

  6. Real time implementation and control validation of the wind energy conversion system

    NASA Astrophysics Data System (ADS)

    Sattar, Adnan

    The purpose of the thesis is to analyze dynamic and transient characteristics of wind energy conversion systems, including stability issues, in a real-time environment using the Real Time Digital Simulator (RTDS). Among the various power system simulation tools available, the RTDS is one of the most powerful. The RTDS simulator has a graphical user interface called RSCAD, which contains a detailed component model library for both power system and control analysis. The hardware is based upon digital signal processors mounted in racks. The RTDS simulator has the advantage of interfacing real-world signals from external devices, and hence can be used to test protection and control system equipment. Dynamic and transient characteristics of fixed and variable speed wind turbine generating systems (WTGSs) are analyzed in this thesis. A Static Synchronous Compensator (STATCOM), a flexible AC transmission system (FACTS) device, is used to enhance the fault ride through (FRT) capability of the fixed speed wind farm. A two-level voltage source converter based STATCOM is modeled in both the VSC small time-step and VSC large time-step of RTDS. The simulation results of the RTDS model system are compared with off-line EMTP-type software, i.e., PSCAD/EMTDC. A new operational scheme for a MW-class grid-connected variable speed wind turbine driven permanent magnet synchronous generator (VSWT-PMSG) is developed. The VSWT-PMSG uses fully controlled frequency converters for grid interfacing and thus has the ability to control the real and reactive powers simultaneously. The frequency converters are modeled in the VSC small time-step of the RTDS, and a three-phase realistic grid is adopted with the RSCAD simulation through the use of the optical analogue digital converter (OADC) card of the RTDS. Steady-state and LVRT analyses are carried out to validate the proposed operational scheme. 
Simulation results show good agreement with the real-time simulation software and thus can be used to validate the controllers for real-time operation. Integration of a Battery Energy Storage System (BESS) with a wind farm can smooth its intermittent power fluctuations. The work also focuses on the real-time implementation of a Sodium Sulfur (NaS) type BESS. The BESS is integrated with the STATCOM. The main advantage of this system is that it can also provide reactive power support to the system along with the real power exchange from the BESS unit. The BESS integrated with the STATCOM is modeled in the VSC small time-step of the RTDS. A cascaded vector control scheme is used for the control of the STATCOM, and a suitable control scheme is developed for the charging/discharging of the NaS type BESS. Results are compared with the laboratory-standard power system software PSCAD/EMTDC, and the advantages of using RTDS in dynamic and transient characteristic analyses of wind farms are also demonstrated clearly.

  7. Analysis of smear in high-resolution remote sensing satellites

    NASA Astrophysics Data System (ADS)

    Wahballah, Walid A.; Bazan, Taher M.; El-Tohamy, Fawzy; Fathy, Mahmoud

    2016-10-01

    High-resolution remote sensing satellites (HRRSS) that use time delay and integration (TDI) CCDs have the potential to introduce large amounts of image smear. Clocking and velocity mismatch smear are two of the key factors inducing image smear. Clocking smear is caused by the discrete manner in which the charge is clocked in the TDI-CCDs. The relative motion between the HRRSS and the observed object requires that the image motion velocity be strictly synchronized with the velocity of the charge packet transfer (line rate) throughout the integration time. When imaging an object off-nadir, the image motion velocity changes, resulting in a mismatch between the image velocity and the CCD's line rate. A model for estimating the image motion velocity in HRRSS is derived. The influence of this velocity mismatch combined with clocking smear on the modulation transfer function (MTF) is investigated using MATLAB simulation. The analysis is performed for cross-track and along-track imaging with different satellite attitude angles and TDI steps. The results reveal that the velocity mismatch ratio and the number of TDI steps have a serious impact on the smear MTF; a velocity mismatch ratio of 2% degrades the MTFsmear by 32% at Nyquist frequency when the TDI steps change from 32 to 96. In addition, the results show that to achieve the requirement of MTFsmear ≥ 0.95, for TDI steps of 16 and 64, the allowable roll angles are 13.7° and 6.85° and the permissible pitch angles are no more than 9.6° and 4.8°, respectively.
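A common first-order model of velocity-mismatch smear (an assumption here, not necessarily the authors' exact formulation) treats the smear accumulated over N TDI stages as d = N·ε pixels, where ε is the velocity mismatch ratio, giving MTF_smear(f) = |sinc(d·f)| at spatial frequency f in cycles/pixel:

```python
import numpy as np

def mtf_smear(eps, n_tdi, f=0.5):
    # eps: velocity mismatch ratio; n_tdi: number of TDI stages;
    # f: spatial frequency in cycles/pixel (0.5 = Nyquist).
    # np.sinc is the normalized sinc, sin(pi*x)/(pi*x).
    return abs(np.sinc(n_tdi * eps * f))

# smear worsens rapidly with TDI stages for a fixed 2% mismatch
print(mtf_smear(0.02, 32), mtf_smear(0.02, 96))
```

Under this simple model a 2% mismatch gives roughly 0.84 at 32 stages but only about 0.04 at 96 stages, illustrating (though not reproducing) the strong N-dependence the abstract reports.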

  8. Data Needs and Modeling of the Upper Atmosphere

    NASA Astrophysics Data System (ADS)

    Brunger, M. J.; Campbell, L.

    2007-04-01

    We present results from our enhanced statistical equilibrium and time-step codes for atmospheric modeling. In particular we use these results to illustrate the role of electron-driven processes in atmospheric phenomena and the sensitivity of the model results to data inputs such as integral cross sections, dissociative recombination rates and chemical reaction rates.

  9. Geospatial Analysis and Model Evaluation Software (GAMES): Integrated Web-Based Analysis and Visualization

    DTIC Science & Technology

    2014-04-11

    particle location files for each source (hours) dti : time step in seconds horzmix: CONSTANT = use the value of horcon...however, if leg lengths are short. Extreme values of D/Lo can occur. We will handle these by assigning a maximum to the output. This is discussed by

  10. Technical Performance Measurement, Earned Value, and Risk Management: An Integrated Diagnostic Tool for Program Management

    DTIC Science & Technology

    2002-06-01

    time, the monkey would eventually produce the collected works of Shakespeare . Unfortunately for the analogist, systems, even live ones, do not work...limited his simulated computer monkey to producing, in a single random step, the sentence uttered by Polonius in the play Hamlet : “Methinks it is

  11. Automation, consolidation, and integration in autoimmune diagnostics.

    PubMed

    Tozzoli, Renato; D'Aurizio, Federica; Villalta, Danilo; Bizzaro, Nicola

    2015-08-01

    Over the past two decades, we have witnessed an extraordinary change in autoimmune diagnostics, characterized by the progressive evolution of analytical technologies, the availability of new tests, and the explosive growth of molecular biology and proteomics. Aside from these huge improvements, organizational changes have also occurred which brought about a more modern vision of the autoimmune laboratory. The introduction of automation (for harmonization of testing, reduction of human error, reduction of handling steps, increase of productivity, decrease of turnaround time, improvement of safety), consolidation (combining different analytical technologies or strategies on one instrument or on one group of connected instruments) and integration (linking analytical instruments or group of instruments with pre- and post-analytical devices) opened a new era in immunodiagnostics. In this article, we review the most important changes that have occurred in autoimmune diagnostics and present some models related to the introduction of automation in the autoimmunology laboratory, such as automated indirect immunofluorescence and changes in the two-step strategy for detection of autoantibodies; automated monoplex immunoassays and reduction of turnaround time; and automated multiplex immunoassays for autoantibody profiling.

  12. MSFC Stream Model Preliminary Results: Modeling the 1998-2002 Leonid Encounters and the 1993,1994, and 2004 Perseid Encounters

    NASA Technical Reports Server (NTRS)

    Moser, D. E.; Cooke, W. J.

    2004-01-01

    The cometary meteoroid ejection models of Jones (1996) and Crifo (1997) were used to simulate ejection from comet 55P/Tempel-Tuttle during the last 12 revolutions, and the 1862, 1737, and 1610 apparitions of 109P/Swift-Tuttle. Using cometary ephemerides generated by the JPL HORIZONS Solar System Data and Ephemeris Computation Service, ejection was simulated in 1-hour time steps while the comet was within 2.5 AU of the Sun. Also simulated was ejection occurring at the hour of perihelion passage. An RK4 variable-step integrator was then used to integrate meteoroid position and velocity forward in time, accounting for the effects of radiation pressure, Poynting-Robertson drag, and the gravitational forces of the planets, which were computed using JPL's DE406 planetary ephemerides. An impact parameter is computed for each particle approaching the Earth, and the results are compared to observations of the 1998-2002 Leonid showers and the 1993-1994 Perseids. A prediction for Earth's encounter with the Perseid stream in 2004 is also presented.
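The record's propagation step can be sketched with a simplified stand-in for its RK4 variable-step integrator: a fixed-step RK4 propagator for the dominant force, solar gravity reduced by radiation pressure through the usual dimensionless β parameter (planetary perturbations and Poynting-Robertson drag omitted). Units are normalized and values hypothetical:

```python
import numpy as np

def accel(r, mu=1.0, beta=0.0):
    # two-body gravity reduced by radiation pressure (beta = F_rad / F_grav)
    return -mu * (1.0 - beta) * r / np.linalg.norm(r) ** 3

def rk4_step(r, v, dt, **kw):
    # one classical fourth-order Runge-Kutta step for (r, v)
    k1r, k1v = v, accel(r, **kw)
    k2r, k2v = v + 0.5 * dt * k1v, accel(r + 0.5 * dt * k1r, **kw)
    k3r, k3v = v + 0.5 * dt * k2v, accel(r + 0.5 * dt * k2r, **kw)
    k4r, k4v = v + dt * k3v, accel(r + dt * k3r, **kw)
    return (r + dt / 6.0 * (k1r + 2 * k2r + 2 * k3r + k4r),
            v + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v))

# sanity check: a circular orbit in normalized units (period = 2*pi)
r, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
dt = 2.0 * np.pi / 2000
for _ in range(2000):
    r, v = rk4_step(r, v, dt)
# after one full period the particle should return to its starting state
```

A nonzero β weakens the effective solar gravity, which is why small ejected meteoroids drift onto wider, slower orbits than the parent comet.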

  13. Integrated continuous dissolution, refolding and tag removal of fusion proteins from inclusion bodies in a tubular reactor.

    PubMed

    Pan, Siqi; Zelger, Monika; Jungbauer, Alois; Hahn, Rainer

    2014-09-20

    An integrated continuous tubular reactor system was developed for processing an autoprotease expressed as inclusion bodies. The inclusion bodies were suspended and fed into the tubular reactor system for continuous dissolution, refolding and precipitation. During refolding, the dissolved autoprotease cleaves itself, separating the fusion tag from the target peptide. Subsequently, the cleaved fusion tag and any uncleaved autoprotease were precipitated out in the precipitation step. The solution exiting the reactor contains the purified soluble target peptide. Refolding and precipitation yields in the tubular reactor were similar to those of the batch reactor, and the process was stable for at least 20 h. The authenticity of the purified peptide was also verified by mass spectrometry. Productivity (in mg/l/h and mg/h) calculated for the tubular process was twice and 1.5 times that of the batch process, respectively. Although it is more complex to set up a tubular than a batch reactor, it offers faster mixing, higher productivity and better integration with other bioprocessing steps. With increasing interest in integrated continuous biomanufacturing, the use of tubular reactors in industrial settings offers clear advantages. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Method for network analyzation and apparatus

    DOEpatents

    Bracht, Roger B.; Pasquale, Regina V.

    2001-01-01

    A portable network analyzer and method having multiple channel transmit and receive capability for real-time monitoring of processes which maintains phase integrity, requires low power, is adapted to provide full vector analysis, provides output frequencies of up to 62.5 MHz and provides fine sensitivity frequency resolution. The present invention includes a multi-channel means for transmitting and a multi-channel means for receiving, both in electrical communication with a software means for controlling. The means for controlling is programmed to provide a signal to a system under investigation which steps consecutively over a range of predetermined frequencies. The resulting received signal from the system provides complete time domain response information by executing a frequency transform of the magnitude and phase information acquired at each frequency step.
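The measurement principle claimed above, stepping consecutively over frequencies and frequency-transforming the recorded magnitude and phase, can be illustrated with synthetic data (all numbers hypothetical, not taken from the patent): a pure propagation delay in the system under test appears as a peak in the inverse FFT of the stepped-frequency response:

```python
import numpy as np

N, df = 256, 0.25e6            # 256 frequency steps of 250 kHz (hypothetical)
f = np.arange(N) * df
tau = 2.0e-6                   # 2 us round-trip delay in the system under test
H = np.exp(-2j * np.pi * f * tau)   # magnitude/phase recorded at each step

h = np.fft.ifft(H)                  # complete time-domain response
t = np.arange(N) / (N * df)         # time axis, resolution 1/(N*df)
estimated_delay = t[np.argmax(np.abs(h))]
```

The frequency step df sets the unambiguous time range 1/df, while the total span N·df sets the time resolution, which is why fine frequency resolution matters for this class of instrument.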

  15. Simulations of precipitation using the Community Earth System Model (CESM): Sensitivity to microphysics time step

    NASA Astrophysics Data System (ADS)

    Murthi, A.; Menon, S.; Sednev, I.

    2011-12-01

    An inherent difficulty in the ability of global climate models to accurately simulate precipitation lies in the use of a large time step, Δt (usually 30 minutes), to solve the governing equations. Since microphysical processes are characterized by small time scales compared to Δt, finite difference approximations used to advance the microphysics equations suffer from numerical instability and large time truncation errors. With this in mind, the sensitivity of precipitation simulated by the atmospheric component of CESM, namely the Community Atmosphere Model (CAM 5.1), to the microphysics time step (τ) is investigated. Model integrations are carried out for a period of five years with a spin-up time of about six months for a horizontal resolution of 2.5 × 1.9 degrees and 30 levels in the vertical, with Δt = 1800 s. The control simulation with τ = 900 s is compared with one using τ = 300 s for accumulated precipitation and radiation budgets at the surface and top of the atmosphere (TOA), while keeping Δt fixed. Our choice of τ = 300 s is motivated by previous work on warm rain processes wherein it was shown that a value of τ around 300 s was necessary, but not sufficient, to ensure positive definiteness and numerical stability of the explicit time integration scheme used to integrate the microphysical equations. However, since the entire suite of microphysical processes is represented in our case, we suspect that this might impose additional restrictions on τ. The τ = 300 s case produces differences in large-scale accumulated rainfall from the τ = 900 s case of as much as 200 mm over certain regions of the globe. The spatial patterns of total accumulated precipitation using τ = 300 s are in closer agreement with satellite-observed precipitation, when compared to the τ = 900 s case. Differences are also seen in the radiation budget, with the τ = 300 (900) s cases producing surpluses that range between 1-3 W/m² at both the TOA and surface in the global means. 
In order to gain some insight into the possible causes of the observed differences, future work would involve performing additional sensitivity tests using the single column model version of CAM 5.1 to gauge the effect of τ on calculations of source terms and mixing ratios used to calculate precipitation in the budget equations.
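The stability issue motivating the choice of τ can be seen in a toy example (ours, not the CAM 5.1 code): sub-stepping a fast microphysical sink with time scale t_c inside a long model step Δt keeps the forward-Euler update positive, whereas a single step of Δt = 1800 s does not:

```python
def substep_decay(q0, dt, tau, t_c=400.0):
    # forward-Euler integration of dq/dt = -q/t_c, split into n = dt/tau
    # substeps; positive-definite and stable only when the substep < t_c
    q, n = q0, int(round(dt / tau))
    for _ in range(n):
        q -= (dt / n) * q / t_c
    return q

stable = substep_decay(1.0, 1800.0, 300.0)     # tau = 300 s: q stays positive
unstable = substep_decay(1.0, 1800.0, 1800.0)  # tau = dt: q goes negative
```

Note that stability does not imply accuracy: even the stable sub-stepped result still carries large truncation error relative to the exact exponential decay, consistent with the abstract's point that τ ≈ 300 s is necessary but not sufficient.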

  16. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, who designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multi-resolution approach are: 1) computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales; 2) multi-resolution presentation of heterogeneity as well as all other input and output variables; 3) an accurate, adaptive and efficient strategy; and 4) semi-analytical properties which increase our understanding of usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is separately analyzed, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but also describes subsurface processes in a way closely related to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we will show recent improvements within the proposed methodology. Since "state of the art" multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only along lines where the solution changes rapidly. 
Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across different temporal lines, and local time-stepping control. A critical aspect of time integration accuracy is the construction of the spatial stencil for accurate calculation of spatial derivatives. Since the common approach applied for wavelets and splines uses a finite difference operator, we developed here a collocation operator that includes solution values and the differential operator. In this way, the new improved algorithm is adaptive in space and time, enabling accurate solutions for groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between collocation and finite volume approaches are discussed. Finally, results show the application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.

  17. The treatment of climate science in Integrated Assessment Modelling: integration of climate step function response in an energy system integrated assessment model.

    NASA Astrophysics Data System (ADS)

    Dessens, Olivier

    2016-04-01

    Integrated Assessment Models (IAMs) are used as crucial inputs to policy-making on climate change. These models simulate aspects of the economy and climate system to deliver future projections and to explore the impact of mitigation and adaptation policies. The IAMs' climate representation is extremely important, as it can have great influence on future political action. The step-function response is a simple climate model recently developed by the UK Met Office and is an alternative method of estimating the climate response to an emission trajectory directly from global climate model step simulations. Good et al. (2013) formulated a method of reconstructing the climate response of general circulation models (GCMs) to emission trajectories through an idealized experiment. This method, hereafter called the "step-response approach", is based on the results of an idealized abrupt CO2 step experiment. TIAM-UCL is a technology-rich model that belongs to the family of partial-equilibrium, bottom-up models, developed at University College London to represent a wide spectrum of energy systems in 16 regions of the globe (Anandarajah et al. 2011). The model uses optimisation functions to obtain cost-efficient solutions in meeting an exogenously defined set of energy-service demands, given certain technological and environmental constraints. Furthermore, it employs linear programming techniques, making the step-function representation of the climate change response well suited to the model's mathematical formulation. For the first time, we have introduced the "step-response approach" method developed at the UK Met Office in an IAM, the TIAM-UCL energy system model, and we investigate the main consequences of this modification on the results of the model in terms of climate and energy system responses. 
The main advantage of this approach (apart from the low computational cost it entails) is that its results are directly traceable to the GCM involved and closely connected to well-known methods of analysing GCMs with the step-experiments. Acknowledgments: This work is supported by the FP7 HELIX project (www.helixclimate.eu) References: Anandarajah, G., Pye, S., Usher, W., Kesicki, F., & Mcglade, C. (2011). TIAM-UCL Global model documentation. https://www.ucl.ac.uk/energy-models/models/tiam-ucl/tiam-ucl-manual Good, P., Gregory, J. M., Lowe, J. A., & Andrews, T. (2013). Abrupt CO2 experiments as tools for predicting and understanding CMIP5 representative concentration pathway projections. Climate Dynamics, 40(3-4), 1041-1053.
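    The superposition idea behind the step-response approach can be sketched in a few lines: the temperature response to an arbitrary forcing trajectory is reconstructed by adding up scaled, time-shifted copies of the response to a single abrupt step. The exponential step response and the linear forcing ramp below are illustrative stand-ins, not output of an actual abrupt-CO2 GCM experiment.

```python
import math

def step_response(t, equilibrium=1.0, tau=30.0):
    """Temperature response (K per unit forcing) to a unit forcing step at t = 0."""
    return equilibrium * (1.0 - math.exp(-t / tau))

def reconstruct(forcing, dt=1.0):
    """Superpose step responses driven by year-to-year forcing increments."""
    n = len(forcing)
    increments = [forcing[0]] + [forcing[k] - forcing[k - 1] for k in range(1, n)]
    temp = [0.0] * n
    for k, df in enumerate(increments):     # each increment launches a new "step"
        for m in range(k, n):
            temp[m] += df * step_response((m - k) * dt)
    return temp

forcing = [0.05 * year for year in range(100)]   # illustrative linear forcing ramp
temperature = reconstruct(forcing)
```

    By construction, a constant forcing trajectory reproduces the step response itself, which is what makes the method directly traceable to the underlying GCM step experiment.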

  18. Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems

    NASA Technical Reports Server (NTRS)

    Cerro, J. A.; Scotti, S. J.

    1991-01-01

    Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.
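    The appeal of the transform-based approach, evaluating the response at selected times without stepping through intermediate ones, can be illustrated with a numerical Laplace inversion. The sketch below uses the Gaver-Stehfest formula (one of several inversion algorithms; the paper does not specify this one) on a scalar transform standing in for the transformed field equations of a linear heat problem.

```python
import math

def stehfest_coeffs(N=12):
    """Stehfest weights V_k for an even number of terms N."""
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + half) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) directly from its Laplace transform F(s) at one time t."""
    ln2 = math.log(2.0)
    V = stehfest_coeffs(N)
    return ln2 / t * sum(Vk * F((k + 1) * ln2 / t) for k, Vk in enumerate(V))

# F(s) = 1/(s+1)  <=>  f(t) = exp(-t): an illustrative transformed response.
val = invert_laplace(lambda s: 1.0 / (s + 1.0), t=1.0)
```

    Note that each time of interest costs only N transform evaluations, independent of how far that time is from t = 0 — the property that eliminates time stepping.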

  19. Finite element implementation of state variable-based viscoplasticity models

    NASA Technical Reports Server (NTRS)

    Iskovitz, I.; Chang, T. Y. P.; Saleeb, A. F.

    1991-01-01

    The implementation of state variable-based viscoplasticity models is made in a general purpose finite element code for structural applications of metals deformed at elevated temperatures. Two constitutive models, Walker's and Robinson's models, are studied in conjunction with two implicit integration methods: the trapezoidal rule with Newton-Raphson iterations and an asymptotic integration algorithm. A comparison is made between the two integration methods, and the latter method appears to be computationally more appealing in terms of numerical accuracy and CPU time. However, in order to make the asymptotic algorithm robust, it is necessary to include a self-adaptive scheme with subincremental step control and error checking of the Jacobian matrix at the integration points. Three examples are given to illustrate the numerical aspects of the integration methods tested.
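    The first of the two integration methods compared above, the implicit trapezoidal rule with a Newton-Raphson iteration per step, can be sketched on a scalar model problem. The flow rule dy/dt = -y**3 below is an illustrative stand-in for a state-variable viscoplastic evolution equation, not either of the cited constitutive models.

```python
def f(y):
    return -y ** 3          # stand-in stiff evolution law

def dfdy(y):
    return -3.0 * y ** 2    # its Jacobian, needed by Newton-Raphson

def trapezoidal_step(y0, dt, tol=1e-12, max_iter=50):
    """Solve y1 = y0 + dt/2*(f(y0) + f(y1)) for y1 by Newton-Raphson."""
    y1 = y0 + dt * f(y0)    # explicit Euler predictor
    for _ in range(max_iter):
        residual = y1 - y0 - 0.5 * dt * (f(y0) + f(y1))
        jacobian = 1.0 - 0.5 * dt * dfdy(y1)
        delta = residual / jacobian
        y1 -= delta
        if abs(delta) < tol:
            break
    return y1

y, dt = 1.0, 0.01
for _ in range(100):        # integrate to t = 1
    y = trapezoidal_step(y, dt)

exact = 1.0 / (1.0 + 2.0) ** 0.5   # closed form 1/sqrt(1+2t) at t = 1
```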

  20. Step-by-step integration for fractional operators

    NASA Astrophysics Data System (ADS)

    Colinas-Armijo, Natalia; Di Paola, Mario

    2018-06-01

    In this paper, an approach based on the definition of the Riemann-Liouville fractional operators is proposed in order to provide a different discretisation technique as an alternative to the Grünwald-Letnikov operators. The proposed Riemann-Liouville discretisation consists of performing step-by-step integration based upon the discretisation of the function f(t). It has been shown that, as f(t) is discretised as a stepwise or piecewise function, the Riemann-Liouville fractional integral and derivative are governed by operators very similar to the Grünwald-Letnikov operators. In order to show the accuracy and capabilities of the proposed Riemann-Liouville discretisation technique and the Grünwald-Letnikov discrete operators, both techniques have been applied to: unit step functions, exponential functions and sample functions of white noise.
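    The Grünwald-Letnikov (GL) discrete operators that the paper compares against can be sketched directly: the weights (-1)^j * binom(alpha, j) obey a simple recursion, and the fractional derivative is a weighted sum over past samples of f. This is a generic GL sketch, not the paper's proposed Riemann-Liouville discretisation.

```python
import math

def gl_weights(alpha, n):
    """GL weights (-1)**j * binom(alpha, j) for j = 0..n, via recursion."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def gl_derivative(f, alpha, t, h=1e-3):
    """GL fractional derivative of order alpha of f at time t, step h."""
    n = int(t / h)
    w = gl_weights(alpha, n)
    return sum(w[j] * f(t - j * h) for j in range(n + 1)) / h ** alpha

# Half-derivative of f(t) = t is 2*sqrt(t/pi); check at t = 1.
approx = gl_derivative(lambda t: t, alpha=0.5, t=1.0)
exact = 2.0 / math.sqrt(math.pi)
```

    For alpha = 1 the weights collapse to (1, -1, 0, 0, ...), so the operator reduces to the ordinary backward difference, which is a useful sanity check.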

  1. Bayesian functional integral method for inferring continuous data from discrete measurements.

    PubMed

    Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul

    2012-02-08

    Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". An inferred value of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  2. Two-step interrogation then recognition of DNA binding site by Integration Host Factor: an architectural DNA-bending protein.

    PubMed

    Velmurugu, Yogambigai; Vivas, Paula; Connolly, Mitchell; Kuznetsov, Serguei V; Rice, Phoebe A; Ansari, Anjum

    2018-02-28

    The dynamics and mechanism of how site-specific DNA-bending proteins initially interrogate potential binding sites prior to recognition have remained elusive for most systems. Here we present these dynamics for Integration Host factor (IHF), a nucleoid-associated architectural protein, using a μs-resolved T-jump approach. Our studies show two distinct DNA-bending steps during site recognition by IHF. While the faster (∼100 μs) step is unaffected by changes in DNA or protein sequence that alter affinity by >100-fold, the slower (1-10 ms) step is accelerated ∼5-fold when mismatches are introduced at DNA sites that are sharply kinked in the specific complex. The amplitudes of the fast phase increase when the specific complex is destabilized and decrease with increasing [salt], which increases specificity. Taken together, these results indicate that the fast phase is non-specific DNA bending while the slow phase, which responds only to changes in DNA flexibility at the kink sites, is specific DNA kinking during site recognition. Notably, the timescales for the fast phase overlap with one-dimensional diffusion times measured for several proteins on DNA, suggesting that these dynamics reflect partial DNA bending during interrogation of potential binding sites by IHF as it scans DNA.

  3. A combined application of boundary-element and Runge-Kutta methods in three-dimensional elasticity and poroelasticity

    NASA Astrophysics Data System (ADS)

    Igumnov, Leonid; Ipatov, Aleksandr; Belov, Aleksandr; Petrov, Andrey

    2015-09-01

    The report presents the development of the time-boundary element methodology and a description of the related software, based on a stepped method of numerical inversion of the integral Laplace transform in combination with a family of Runge-Kutta methods, for analyzing 3-D mixed initial boundary-value problems of the dynamics of inhomogeneous elastic and poroelastic bodies. The results of the numerical investigation are presented. The investigation methodology is based on direct-approach boundary integral equations of 3-D isotropic linear theories of elasticity and poroelasticity in Laplace transforms. Poroelastic media are described using Biot models with four and five base functions. With the help of the boundary-element method, solutions in time are obtained using the stepped method of numerically inverting the Laplace transform on the nodes of Runge-Kutta methods. The boundary-element method is used in combination with the collocation method and local element-by-element approximation based on the matched interpolation model. The results of analyzing wave problems of the effect of a non-stationary force on elastic and poroelastic finite bodies, a poroelastic half-space (also with a fictitious boundary), a layered half-space weakened by a cavity, and a half-space with a trench are presented. Excitation of a slow wave in a poroelastic medium is studied using the stepped BEM scheme on the nodes of Runge-Kutta methods.

  4. Two-dimensional Euler and Navier-Stokes Time accurate simulations of fan rotor flows

    NASA Technical Reports Server (NTRS)

    Boretti, A. A.

    1990-01-01

    Two numerical methods are presented which describe the unsteady flow field in the blade-to-blade plane of an axial fan rotor. These methods solve the compressible, time-dependent, Euler and the compressible, turbulent, time-dependent, Navier-Stokes conservation equations for mass, momentum, and energy. The Navier-Stokes equations are written in Favre-averaged form and are closed with an approximate two-equation turbulence model with low Reynolds number and compressibility effects included. The unsteady aerodynamic component is obtained by superposing inflow or outflow unsteadiness on the steady conditions through time-dependent boundary conditions. The integration in space is performed by using a finite volume scheme, and the integration in time is performed by using k-stage Runge-Kutta schemes, k = 2,...,5. The numerical integration algorithm allows the reduction of the computational cost of an unsteady simulation involving high frequency disturbances in both CPU time and memory requirements. Less than 200 sec of CPU time are required to advance the Euler equations on a computational grid of about 2000 cells over 10,000 time steps on a CRAY Y-MP computer, with a required memory of less than 0.3 megawords.
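    The k-stage Runge-Kutta schemes mentioned above can be sketched in the simple Jameson-style form commonly used in finite-volume flow solvers (an assumption — the abstract does not specify the stage coefficients): stage i advances from the step's initial state with a fraction dt/(k - i) of the step. For a linear right-hand side this variant is k-th order accurate.

```python
import math

def rk_kstage(f, y, dt, k):
    """One Jameson-style k-stage RK step: y_i = y0 + dt/(k-i) * f(y_{i-1})."""
    y0 = y
    for i in range(k):
        y = y0 + dt / (k - i) * f(y)
    return y

# Advance dy/dt = -y (y(0) = 1) to t = 1 with a 5-stage scheme.
f = lambda y: -y
y, dt = 1.0, 0.01
for _ in range(100):
    y = rk_kstage(f, y, dt, k=5)
```

    For f(y) = lam*y the amplification factor works out to the degree-k Taylor polynomial of exp(lam*dt), which is why the scheme tracks exp(-t) to 5th order here.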

  5. A spectral radius scaling semi-implicit iterative time stepping method for reactive flow simulations with detailed chemistry

    NASA Astrophysics Data System (ADS)

    Xie, Qing; Xiao, Zhixiang; Ren, Zhuyin

    2018-09-01

    A spectral radius scaling semi-implicit time stepping scheme has been developed for simulating unsteady compressible reactive flows with detailed chemistry, in which the spectral radius in the LUSGS scheme has been augmented to account for viscous/diffusive and reactive terms, and a scalar matrix is proposed to approximate the chemical Jacobian using the minimum species destruction timescale. The performance of the semi-implicit scheme, together with a third-order explicit Runge-Kutta scheme and a Strang splitting scheme, has been investigated in auto-ignition and laminar premixed and nonpremixed flames of three representative fuels, namely hydrogen, methane, and n-heptane. Results show that the minimum species destruction time scale can well represent the smallest chemical time scale in reactive flows and that the proposed scheme can significantly increase the allowable time steps in simulations. The scheme is stable when the time step is as large as 10 μs, which is about three to five orders of magnitude larger than the smallest time scales in the various tests considered. For the test flames considered, the semi-implicit scheme achieves second-order accuracy in time. Moreover, the errors in quantities of interest are smaller than those from the Strang splitting scheme, indicating the accuracy gained when the reaction and transport terms are solved in a coupled manner. Results also show that the relative efficiency of different schemes depends on fuel mechanisms and test flames. When the minimum time scale in reactive flows is governed by transport processes instead of chemical reactions, the proposed semi-implicit scheme is more efficient than the splitting scheme. Otherwise, the relative efficiency depends on the cost of sub-iterations for convergence within each time step and of the integration of the chemistry substep. Then, the capability of the compressible reacting flow solver and the proposed semi-implicit scheme is demonstrated by capturing hydrogen detonation waves. 
Finally, the performance of the proposed method is demonstrated in a two-dimensional hydrogen/air diffusion flame.
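    The Strang splitting baseline that the semi-implicit scheme is compared against can be sketched on a scalar model: per time step, advance the "reaction" sub-problem a half step, the "transport" sub-problem a full step, then the reaction again a half step. The two scalar sub-problems below (transport: dy/dt = -y; reaction: dy/dt = -y**2) are illustrative stand-ins, each advanced with its exact sub-flow.

```python
import math

def react(y, dt):
    """Exact solution of the 'reaction' sub-problem dy/dt = -y**2 over dt."""
    return y / (1.0 + y * dt)

def transport(y, dt):
    """Exact solution of the 'transport' sub-problem dy/dt = -y over dt."""
    return y * math.exp(-dt)

def strang_step(y, dt):
    y = react(y, dt / 2.0)      # half step of reaction
    y = transport(y, dt)        # full step of transport
    return react(y, dt / 2.0)   # half step of reaction

y, dt = 1.0, 0.01
for _ in range(100):            # integrate dy/dt = -y - y**2 to t = 1
    y = strang_step(y, dt)

exact = 1.0 / (2.0 * math.exp(1.0) - 1.0)   # closed-form solution at t = 1
```

    The symmetric half/full/half arrangement is what makes the splitting second-order; a plain sequential (Lie) splitting would only be first-order.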

  6. Design, Development and Testing of Web Services for Multi-Sensor Snow Cover Mapping

    NASA Astrophysics Data System (ADS)

    Kadlec, Jiri

    This dissertation presents the design, development and validation of new data integration methods for mapping the extent of snow cover based on open access ground station measurements, remote sensing images, volunteer observer snow reports, and cross country ski track recordings from location-enabled mobile devices. The first step of the data integration procedure includes data discovery, data retrieval, and data quality control of snow observations at ground stations. The WaterML R package developed in this work enables hydrologists to retrieve and analyze data from multiple organizations that are listed in the Consortium of Universities for the Advancement of Hydrologic Sciences Inc (CUAHSI) Water Data Center catalog directly within the R statistical software environment. Use of the WaterML R package is demonstrated by running an energy balance snowpack model in R with data inputs from CUAHSI, and by automating uploads of real-time sensor observations to a CUAHSI HydroServer. The second step of the procedure requires efficient access to multi-temporal remote sensing snow images. The Snow Inspector web application developed in this research enables users to retrieve a time series of fractional snow cover from the Moderate Resolution Imaging Spectroradiometer (MODIS) for any point on Earth. The time series retrieval method is based on automated data extraction from tile images provided by a Web Map Tile Service (WMTS). The average time required for retrieving 100 days of data using this technique is 5.4 seconds, which is significantly faster than other methods that require the download of large satellite image files. The presented data extraction technique and space-time visualization user interface can be used as a model for working with other multi-temporal hydrologic or climate data WMTS services. The third, final step of the data integration procedure is generating continuous daily snow cover maps. 
A custom inverse distance weighting method has been developed to combine volunteer snow reports, cross-country ski track reports and station measurements to fill cloud gaps in the MODIS snow cover product. The method is demonstrated by producing a continuous daily time step snow presence probability map dataset for the Czech Republic region. The ability of the presented methodology to reconstruct MODIS snow cover under cloud is validated by simulating cloud cover datasets and comparing estimated snow cover to actual MODIS snow cover. The percent correctly classified indicator showed accuracy between 80 and 90% using this method. Using crowdsourcing data (volunteer snow reports and ski tracks) improves the map accuracy by 0.7--1.2%. The output snow probability map data sets are published online using web applications and web services. Keywords: crowdsourcing, image analysis, interpolation, MODIS, R statistical software, snow cover, snowpack probability, Tethys platform, time series, WaterML, web services, winter sports.
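    The interpolation family adapted above can be sketched in its basic form: in inverse distance weighting (IDW), the estimate at an unsampled point is a distance-weighted average of nearby observations. This is generic IDW, not the dissertation's custom variant, and the observation values are illustrative.

```python
import math

def idw(x, y, points, power=2.0):
    """Inverse distance weighting. points: list of (xi, yi, value) observations."""
    num = den = 0.0
    for xi, yi, v in points:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return v                  # exactly at an observation point
        w = 1.0 / d ** power          # closer observations weigh more
        num += w * v
        den += w
    return num / den

# e.g. snow presence probability observed at two hypothetical sites
obs = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0)]
mid = idw(0.5, 0.0, obs)
```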

  7. Galerkin v. discrete-optimal projection in nonlinear model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir

    Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.

  8. A computer software system for the generation of global ocean tides including self-gravitation and crustal loading effects

    NASA Technical Reports Server (NTRS)

    Estes, R. H.

    1977-01-01

    A computer software system is described which computes global numerical solutions of the integro-differential Laplace tidal equations, including dissipation terms and ocean loading and self-gravitation effects, for arbitrary diurnal and semidiurnal tidal constituents. The integration algorithm features a successive approximation scheme for the integro-differential system, with time stepping forward differences in the time variable and central differences in spatial variables.
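    The finite-difference pattern described above, forward differences in time and central differences in space, can be sketched on a simple 1-D diffusion problem; this scalar model is an illustrative stand-in for the Laplace tidal equations, not the system actually solved.

```python
def ftcs_step(u, r):
    """One forward-time, centered-space step; r = D*dt/dx**2 (stable for r <= 0.5)."""
    return [u[i] if i in (0, len(u) - 1)                       # fixed boundaries
            else u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])  # central in space
            for i in range(len(u))]

u = [0.0] * 21
u[10] = 1.0                 # initial spike in the middle of the domain
for _ in range(200):        # forward stepping in the time variable
    u = ftcs_step(u, r=0.25)
```

    With r <= 0.5 each new value is a convex combination of its neighbours, so the discrete solution obeys a maximum principle (no new extrema), a standard stability check for this scheme.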

  9. Integrating Behavioral Health Support into a Pediatric Setting: What Happens in the Exam Room?

    ERIC Educational Resources Information Center

    Cuno, Kate; Krug, Laura M.; Umylny, Polina

    2015-01-01

    This article presents an overview of the Healthy Steps for Young Children (Healthy Steps) program at Montefiore Medical Center, in the Bronx, NY. The authors review the theoretical underpinnings of this national program for the promotion of early childhood mental health. The Healthy Steps program at Montefiore is integrated into outpatient…

  10. Are Pressure Time Integral and Cumulative Plantar Stress Related to First Metatarsophalangeal Joint Pain? Results From a Community-Based Study.

    PubMed

    Rao, Smita; Douglas Gross, K; Niu, Jingbo; Nevitt, Michael C; Lewis, Cora E; Torner, James C; Hietpas, Jean; Felson, David; Hillstrom, Howard J

    2016-09-01

    To examine the relationship between plantar stress over a step, cumulative plantar stress over a day, and first metatarsophalangeal (MTP) joint pain among older adults. Plantar stress and first MTP pain were assessed within the Multicenter Osteoarthritis Study. All included participants were asked if they had pain, aching, or stiffness at the first MTP joint on most days for the past 30 days. Pressure time integral (PTI) was quantified as participants walked on a pedobarograph, and mean steps per day were obtained using an accelerometer. Cumulative plantar stress was calculated as the product of regional PTI and mean steps per day. Quintiles of hallucal and second metatarsal PTI and cumulative plantar stress were generated. The relationship between predictors and the odds ratio of first MTP pain was assessed using a logistic regression model. Feet in the quintile with the lowest hallux PTI had 2.14 times increased odds of first MTP pain (95% confidence interval [95% CI] 1.42-3.25, P < 0.01). Feet in the quintile with the lowest second metatarsal PTI had 1.50 times increased odds of first MTP pain (95% CI 1.01-2.23, P = 0.042). Cumulative plantar stress was unassociated with first MTP pain. Lower PTI was modestly associated with increased prevalence of frequent first MTP pain at both the hallux and second metatarsal. Lower plantar loading may indicate the presence of an antalgic gait strategy and may reflect an attempt at pain avoidance. The lack of association with cumulative plantar stress may suggest that patients do not limit their walking as a pain-avoidance mechanism. © 2016, American College of Rheumatology.
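    The two exposures analysed above are straightforward to compute: the pressure time integral (PTI) is the time integral of regional plantar pressure over one step (trapezoidal rule below), and cumulative plantar stress is PTI multiplied by mean steps per day. The pressure trace and step count are illustrative values, not study data.

```python
def pressure_time_integral(pressures, dt):
    """Trapezoidal integral of a pressure trace sampled every dt seconds."""
    return sum(0.5 * (a + b) * dt for a, b in zip(pressures, pressures[1:]))

trace = [0.0, 80.0, 150.0, 120.0, 40.0, 0.0]    # kPa over stance, every 0.1 s
pti = pressure_time_integral(trace, dt=0.1)      # kPa*s per step
steps_per_day = 6000                             # from accelerometer
cumulative_stress = pti * steps_per_day          # kPa*s per day
```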

  11. Are Pressure Time Integral and Cumulative Plantar Stress Related to First Metatarsophalangeal Joint Pain? Results From a Community-Based Study

    PubMed Central

    RAO, SMITA; GROSS, K. DOUGLAS; NIU, JINGBO; NEVITT, MICHAEL C.; LEWIS, CORA E.; TORNER, JAMES C.; HIETPAS, JEAN; FELSON, DAVID; HILLSTROM, HOWARD J.

    2017-01-01

    Objective To examine the relationship between plantar stress over a step, cumulative plantar stress over a day, and first metatarsophalangeal (MTP) joint pain among older adults. Methods Plantar stress and first MTP pain were assessed within the Multicenter Osteoarthritis Study. All included participants were asked if they had pain, aching, or stiffness at the first MTP joint on most days for the past 30 days. Pressure time integral (PTI) was quantified as participants walked on a pedobarograph, and mean steps per day were obtained using an accelerometer. Cumulative plantar stress was calculated as the product of regional PTI and mean steps per day. Quintiles of hallucal and second metatarsal PTI and cumulative plantar stress were generated. The relationship between predictors and the odds ratio of first MTP pain was assessed using a logistic regression model. Results Feet in the quintile with the lowest hallux PTI had 2.14 times increased odds of first MTP pain (95% confidence interval [95% CI] 1.42–3.25, P < 0.01). Feet in the quintile with the lowest second metatarsal PTI had 1.50 times increased odds of first MTP pain (95% CI 1.01–2.23, P = 0.042). Cumulative plantar stress was unassociated with first MTP pain. Conclusion Lower PTI was modestly associated with increased prevalence of frequent first MTP pain at both the hallux and second metatarsal. Lower plantar loading may indicate the presence of an antalgic gait strategy and may reflect an attempt at pain avoidance. The lack of association with cumulative plantar stress may suggest that patients do not limit their walking as a pain-avoidance mechanism. PMID:26713755

  12. Auxotonic to isometric contraction transitioning in a beating heart causes myosin step-size to down shift

    PubMed Central

    Sun, Xiaojing; Wang, Yihua; Ajtai, Katalin

    2017-01-01

    Myosin motors in cardiac ventriculum convert ATP free energy to the work of moving blood volume under pressure. The actin bound motor cyclically rotates its lever-arm/light-chain complex linking motor generated torque to the myosin filament backbone and translating actin against resisting force. Previous research showed that the unloaded in vitro motor is described with high precision by single molecule mechanical characteristics including unitary step-sizes of approximately 3, 5, and 8 nm and their relative step-frequencies of approximately 13, 50, and 37%. The 3 and 8 nm unitary step-sizes are dependent on myosin essential light chain (ELC) N-terminus actin binding. Step-size and step-frequency quantitation specifies in vitro motor function including duty-ratio, power, and strain sensitivity metrics. In vivo, motors integrated into the muscle sarcomere form the more complex and hierarchically functioning muscle machine. The goal of the research reported here is to measure single myosin step-size and step-frequency in vivo to assess how tissue integration impacts motor function. A photoactivatable GFP tags the ventriculum myosin lever-arm/light-chain complex in the beating heart of a live zebrafish embryo. Detected single GFP emission reports time-resolved myosin lever-arm orientation interpreted as step-size and step-frequency providing single myosin mechanical characteristics over the active cycle. Following step-frequency of cardiac ventriculum myosin transitioning from low to high force in relaxed to auxotonic to isometric contraction phases indicates that the imposition of resisting force during contraction causes the motor to down-shift to the 3 nm step-size accounting for >80% of all the steps in the near-isometric phase. At peak force, the ATP initiated actomyosin dissociation is the predominant strain inhibited transition in the native myosin contraction cycle. 
The proposed model for motor down-shifting and strain sensing involves ELC N-terminus actin binding. Overall, the approach is a unique bottom-up single molecule mechanical characterization of a hierarchically functional native muscle myosin. PMID:28423017

  13. Broadband Acoustic Resonance Dissolution Spectroscopy (BARDS): A rapid test for enteric coating thickness and integrity of controlled release pellet formulations.

    PubMed

    Alfarsi, Anas; Dillon, Amy; McSweeney, Seán; Krüse, Jacob; Griffin, Brendan; Devine, Ken; Sherry, Patricia; Henken, Stephan; Fitzpatrick, Stephen; Fitzpatrick, Dara

    2018-06-10

    There are no rapid dissolution based tests for determining coating thickness, integrity and drug concentration in controlled release pellets either during production or post-production. The manufacture of pellets requires several coating steps depending on the formulation. The sub-coating and enteric coating steps typically take up to six hours each followed by additional drying steps. Post production regulatory dissolution testing also takes up to six hours to determine if the batch can be released for commercial sale. The thickness of the enteric coating is a key factor that determines the release rate of the drug in the gastro-intestinal tract. Also, the amount of drug per unit mass decreases with increasing thickness of the enteric coating. In this study, the coating process is tracked from start to finish on an hourly basis by taking samples of pellets during production and testing those using BARDS (Broadband Acoustic Resonance Dissolution Spectroscopy). BARDS offers a rapid approach to characterising enteric coatings with measurements based on reproducible changes in the compressibility of a solvent due to the evolution of air during dissolution. This is monitored acoustically via associated changes in the frequency of induced acoustic resonances. A steady state acoustic lag time is associated with the disintegration of the enteric coatings in basic solution. This lag time is pH dependent and is indicative of the rate at which the coating layer dissolves. BARDS represents a possible future surrogate test for conventional USP dissolution testing as its data correlates directly with the thickness of the enteric coating, its integrity and also with the drug loading as validated by HPLC. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. The fast multipole method and point dipole moment polarizable force fields.

    PubMed

    Coles, Jonathan P; Masella, Michel

    2015-01-14

    We present an implementation of the fast multipole method for computing Coulombic electrostatic and polarization forces from polarizable force-fields based on induced point dipole moments. We demonstrate the expected O(N) scaling of that approach by performing single energy point calculations on hexamer protein subunits of the mature HIV-1 capsid. We also show the long time energy conservation in molecular dynamics at the nanosecond scale by performing simulations of a protein complex embedded in a coarse-grained solvent using a standard integrator and a multiple time step integrator. Our tests show the applicability of fast multipole method combined with state-of-the-art chemical models in molecular dynamical systems.
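    The multiple time step idea referenced above can be sketched with a RESPA-style integrator: cheap, rapidly varying "fast" forces are evaluated every inner step, while expensive, slowly varying "slow" forces (such as long-range electrostatics) are applied only as half-kicks around the outer step. The split harmonic forces below are an illustrative stand-in, not the paper's force field.

```python
K_FAST, K_SLOW, MASS = 100.0, 1.0, 1.0

def f_fast(x): return -K_FAST * x    # stiff, bonded-like force (evaluated often)
def f_slow(x): return -K_SLOW * x    # soft, long-range-like force (evaluated rarely)

def respa_step(x, v, dt, n_inner):
    v += 0.5 * dt * f_slow(x) / MASS             # slow half kick
    h = dt / n_inner
    for _ in range(n_inner):                     # velocity-Verlet on fast force
        v += 0.5 * h * f_fast(x) / MASS
        x += h * v
        v += 0.5 * h * f_fast(x) / MASS
    v += 0.5 * dt * f_slow(x) / MASS             # slow half kick
    return x, v

def energy(x, v):
    return 0.5 * MASS * v * v + 0.5 * (K_FAST + K_SLOW) * x * x

x, v = 1.0, 0.0
e0 = energy(x, v)
for _ in range(1000):
    x, v = respa_step(x, v, dt=0.05, n_inner=10)
e1 = energy(x, v)
```

    Because the composition is symplectic, total energy stays bounded over long runs, which is the long-time conservation property the paper tests at the nanosecond scale.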

  15. Developing an Integrated Library Program. Professional Growth Series.

    ERIC Educational Resources Information Center

    Miller, Donna P.; Anderson, J'Lynn

    This book provides teachers, media specialists, and administrators with a step-by-step method for integrating library resources and skills into the classroom curriculum. In this method, all curriculum areas are integrated into major units of study that are team-planned, team-produced, and team-taught. Topics include: components of the program and…

  16. On time discretizations for the simulation of the batch settling-compression process in one dimension.

    PubMed

    Bürger, Raimund; Diehl, Stefan; Mejías, Camilo

    2016-01-01

    The main purpose of the recently introduced Bürger-Diehl simulation model for secondary settling tanks was to resolve spatial discretization problems when both hindered settling and the phenomena of compression and dispersion are included. Straightforward time integration unfortunately means long computational times. The next step in the development is to introduce and investigate time-integration methods for more efficient simulations, but where other aspects such as implementation complexity and robustness are equally considered. This is done for batch settling simulations. The key findings are partly a new time-discretization method and partly its comparison with other specially tailored and standard methods. Several advantages and disadvantages for each method are given. One conclusion is that the new linearly implicit method is easier to implement than another one (semi-implicit method), but less efficient based on two types of batch sedimentation tests.
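    The "linearly implicit" idea discussed above can be sketched on a scalar problem: instead of a fully implicit step requiring Newton iterations, the right-hand side is linearised about the current state, so each step solves only one linear equation. The flux f(y) = -y**3 is an illustrative stand-in for the nonlinear settling-compression flux, not the Bürger-Diehl model.

```python
import math

def f(y): return -y ** 3          # stand-in nonlinear flux
def dfdy(y): return -3.0 * y ** 2 # its derivative, frozen at the current state

def linearly_implicit_step(y, dt):
    """Solve y1 = y + dt*(f(y) + dfdy(y)*(y1 - y)) for y1 (linear in y1)."""
    return (y + dt * (f(y) - dfdy(y) * y)) / (1.0 - dt * dfdy(y))

y, dt = 1.0, 0.01
for _ in range(100):              # integrate dy/dt = -y**3 to t = 1
    y = linearly_implicit_step(y, dt)

exact = 1.0 / math.sqrt(3.0)      # closed form 1/sqrt(1+2t) at t = 1
```

    The attraction is exactly the implementation trade-off the paper weighs: no nonlinear solve per step, at the cost of first-order accuracy in the linearisation.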

  17. Enriching the biological space of natural products and charting drug metabolites, through real time biotransformation monitoring: The NMR tube bioreactor.

    PubMed

    Chatzikonstantinou, Alexandra V; Chatziathanasiadou, Maria V; Ravera, Enrico; Fragai, Marco; Parigi, Giacomo; Gerothanassis, Ioannis P; Luchinat, Claudio; Stamatis, Haralambos; Tzakos, Andreas G

    2018-01-01

    Natural products offer a wide range of biological activities, but they are not easily integrated in the drug discovery pipeline because of their inherent scaffold intricacy and the associated complexity of their synthetic chemistry. Enzymes may be used to perform regioselective and stereoselective incorporation of functional groups in the natural product core, avoiding harsh reaction conditions and several protection/deprotection and purification steps. Herein, we developed a three-step protocol carried out inside an NMR tube. 1st step: STD-NMR was used to predict: i) the capacity of natural products as enzyme substrates and ii) the possible regioselectivity of the biotransformations. 2nd step: the real-time formation of multiple biotransformation products in the NMR-tube bioreactor was monitored in situ. 3rd step: STD-NMR was applied to the mixture of the biotransformed products to screen ligands for protein targets. This simple and time-effective process, the "NMR-tube bioreactor", is able to: (i) predict which component of a mixture of natural products can be enzymatically transformed, (ii) monitor in situ the transformation efficacy and regioselectivity in crude extracts and multiple-substrate biotransformations without fractionation, and (iii) simultaneously screen for interactions of the biotransformation products with pharmaceutical protein targets. We have thus developed a green, time- and cost-effective process that provides a simple route from natural products to lead compounds for drug discovery. This process can speed up the most crucial steps in early drug discovery and reduce the chemical manipulations usually involved in the pipeline, improving environmental compatibility. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs

    NASA Technical Reports Server (NTRS)

    Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.

    2014-01-01

    This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) Runge-Kutta (RK) time-stepping schemes, which increases numerical stability with respect to the time step size, resulting in decreased computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain, a significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to both one-dimensional (1D) and two-dimensional (2D) problems, the nonlinear viscous Burgers' equation and the 2D advection equation, respectively. The method uses two different summation-by-parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. An increase in numerical stability 65 times greater than that of the fully explicit scheme is demonstrated to be achievable with the OP IMEX method applied to the 1D Burgers' equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. Also, the domain partitioning method in this work shows potential for breaking the computational domain into manageable sizes such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solvers rather than the standard iterative methods currently used.
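
    The benefit of treating stiff terms implicitly can be seen in a minimal first-order IMEX Euler sketch, a far simpler relative of the 6-stage additive RK schemes used in the paper (the test problem and parameters below are illustrative, not from the study):

```python
import numpy as np

# Stiff test problem: u' = -lam*(u - cos(t)) - sin(t), exact solution u = cos(t).
lam = 50.0
f_nonstiff = lambda t, u: lam * np.cos(t) - np.sin(t)  # treated explicitly

dt, T = 0.1, 2.0            # dt violates the explicit stability limit (dt > 2/lam)
n = round(T / dt)
u, t = 1.0, 0.0             # u(0) = cos(0)
for _ in range(n):
    # IMEX Euler: explicit on the non-stiff term, implicit on the stiff linear term
    # u_new = u + dt*f_nonstiff(t, u) - dt*lam*u_new  =>  solve for u_new
    u = (u + dt * f_nonstiff(t, u)) / (1.0 + dt * lam)
    t += dt

print(abs(u - np.cos(T)))   # O(dt)-accurate and stable despite dt >> 2/lam
```

    A fully explicit Euler step at this dt would amplify errors by |1 - lam*dt| = 4 per step and blow up; the implicit treatment of the stiff term removes that restriction.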

  19. Contact-aware simulations of particulate Stokesian suspensions

    NASA Astrophysics Data System (ADS)

    Lu, Libin; Rahimian, Abtin; Zorin, Denis

    2017-10-01

    We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to be able to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time steps or expensive implicit solves, as well as a large number of discretization points, are required to avoid non-physical contact and intersections between particles, which lead to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the continuous formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in the stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.

  20. A progress report on estuary modeling by the finite-element method

    USGS Publications Warehouse

    Gray, William G.

    1978-01-01

    Various schemes are investigated for finite-element modeling of two-dimensional surface-water flows. The first schemes investigated combine finite-element spatial discretization with split-step time stepping schemes that have been found useful in finite-difference computations. Because of the large number of numerical integrations performed in space and the large sparse matrices solved, these finite-element schemes were found to be economically uncompetitive with finite-difference schemes. A very promising leapfrog scheme is proposed which, when combined with a novel very fast spatial integration procedure, eliminates the need to solve any matrices at all. Additional problems attacked included proper propagation of waves and proper specification of the normal flow-boundary condition. This report indicates work in progress and does not come to a definitive conclusion as to the best approach for finite-element modeling of surface-water problems. The results presented represent findings obtained between September 1973 and July 1976. (Woodard-USGS)

  1. Studying the Global Bifurcation Involving Wada Boundary Metamorphosis by a Method of Generalized Cell Mapping with Sampling-Adaptive Interpolation

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-Ming; Jiang, Jun; Hong, Ling; Tang, Dafeng

    In this paper, a new method of Generalized Cell Mapping with Sampling-Adaptive Interpolation (GCMSAI) is presented in order to enhance the efficiency of computing the one-step probability transition matrix of the Generalized Cell Mapping (GCM) method. Integrations over one mapping step are replaced by third-order sampling-adaptive interpolations. An explicit formula for the interpolation error is derived, and a sampling-adaptive control switches the integrations back on when needed to preserve the accuracy of computations with GCMSAI. By applying the proposed method to a two-dimensional forced damped pendulum system, global bifurcations are investigated with observations of boundary metamorphoses, including full-to-partial and partial-to-partial, as well as the birth of a fully Wada boundary. Moreover, GCMSAI requires only one-thirtieth to one-fiftieth of the computational time of the previous GCM.
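
    The cell-mapping construction that GCMSAI accelerates can be sketched for a toy one-dimensional map (a minimal sketch: direct evaluation of the map stands in for the expensive one-step integrations that GCMSAI replaces with adaptive interpolation):

```python
import numpy as np

# Toy generalized cell mapping for the logistic map x -> 4x(1-x) on [0,1].
# The one-step transition matrix P[i, j] = Prob(cell i -> cell j) is estimated
# by mapping several sample points from each cell and counting where the
# images land.
n_cells, n_samples = 50, 20
edges = np.linspace(0.0, 1.0, n_cells + 1)
P = np.zeros((n_cells, n_cells))
for i in range(n_cells):
    xs = np.linspace(edges[i], edges[i + 1], n_samples + 2)[1:-1]  # interior samples
    images = 4.0 * xs * (1.0 - xs)                                 # one mapping step
    j = np.clip(np.searchsorted(edges, images, side="right") - 1, 0, n_cells - 1)
    for jj in j:
        P[i, jj] += 1.0
P /= n_samples

print(np.allclose(P.sum(axis=1), 1.0))  # True: each row is a probability distribution
```

    Long-term statistics (attractors, basins, boundaries) are then read off from powers and invariant vectors of P; the cost of building P is exactly what adaptive interpolation reduces.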

  2. Evidence integration in model-based tree search

    PubMed Central

    Solway, Alec; Botvinick, Matthew M.

    2015-01-01

    Research on the dynamics of reward-based, goal-directed decision making has largely focused on simple choice, where participants decide among a set of unitary, mutually exclusive options. Recent work suggests that the deliberation process underlying simple choice can be understood in terms of evidence integration: Noisy evidence in favor of each option accrues over time, until the evidence in favor of one option is significantly greater than the rest. However, real-life decisions often involve not one, but several steps of action, requiring a consideration of cumulative rewards and a sensitivity to recursive decision structure. We present results from two experiments that leveraged techniques previously applied to simple choice to shed light on the deliberation process underlying multistep choice. We interpret the results from these experiments in terms of a new computational model, which extends the evidence accumulation perspective to multiple steps of action. PMID:26324932
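
    The evidence-integration account can be sketched as a noisy race among accumulators, one per option, with a decision once one accumulator clearly leads (all parameters below are illustrative, not fitted to the experiments):

```python
import numpy as np

# Race model of evidence integration: noisy evidence for each option accrues
# over time until the leader exceeds the runner-up by a fixed margin.
rng = np.random.default_rng(0)
drifts = np.array([0.30, 0.20, 0.10])    # mean evidence rates for three options
margin, dt, sigma = 1.0, 0.01, 0.3

def one_trial():
    x = np.zeros(3)                      # accumulated evidence per option
    t = 0.0
    while True:
        x += drifts * dt + sigma * np.sqrt(dt) * rng.standard_normal(3)
        t += dt
        s = np.sort(x)
        if s[-1] - s[-2] >= margin:      # decision: a clear leader has emerged
            return int(np.argmax(x)), t

counts = np.bincount([one_trial()[0] for _ in range(200)], minlength=3)
print(counts.sum())  # 200; the highest-drift option tends to win most often
```

    Extending this perspective to multistep choice, as the paper does, requires accumulators over trees of action sequences rather than single options.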

  3. A high-order boundary integral method for surface diffusions on elastically stressed axisymmetric rods.

    PubMed

    Li, Xiaofan; Nie, Qing

    2009-07-01

    Many applications in materials science involve surface diffusion of elastically stressed solids. Studying singularity formation and the long-time behavior of such solid surfaces requires accurate simulations in both space and time. Here we present a high-order boundary integral method for an elastically stressed solid with axisymmetry evolving by surface diffusion. In this method, the boundary integrals for isotropic elasticity in axisymmetric geometry are approximated through modified alternating quadratures along with an extrapolation technique, leading to an arbitrarily high-order quadrature; in addition, a high-order (temporal) integration-factor method, based on an explicit representation of the mean curvature, is used to reduce the stability constraint on the time step. To apply this method to a periodic (in the axial direction) and axisymmetric elastically stressed cylinder, we also present a fast and accurate summation method for the periodic Green's functions of isotropic elasticity. Using the high-order boundary integral method, we demonstrate that in the absence of elasticity the cylinder surface pinches in finite time at the axis of symmetry, and the universal cone angle of the pinching is found to be consistent with previous studies based on a self-similar assumption. In the presence of elastic stress, we show that a finite-time geometrical singularity occurs well before the cylindrical solid collapses onto the axis of symmetry, and the angle of the corner singularity on the cylinder surface is also estimated.

  4. An Operator-Integration-Factor Splitting (OIFS) method for Incompressible Flows in Moving Domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, Saumil S.; Fischer, Paul F.; Min, Misun

    In this paper, we present a characteristic-based numerical procedure for simulating incompressible flows in domains with moving boundaries. Our approach utilizes an operator-integration-factor splitting technique to help produce an efficient and stable numerical scheme. Using the spectral element method and an arbitrary Lagrangian-Eulerian formulation, we investigate flows where the convective acceleration effects are non-negligible. Several examples, ranging from laminar to turbulent flows, are considered. Comparisons with a standard, semi-implicit time-stepping procedure illustrate the improved performance of the scheme.

  5. Modified Chebyshev Picard Iteration for Efficient Numerical Integration of Ordinary Differential Equations

    NASA Astrophysics Data System (ADS)

    Macomber, B.; Woollands, R. M.; Probe, A.; Younes, A.; Bai, X.; Junkins, J.

    2013-09-01

    Modified Chebyshev Picard Iteration (MCPI) is an iterative numerical method for approximating solutions of linear or nonlinear Ordinary Differential Equations (ODEs) to obtain time histories of system state trajectories. Unlike step-by-step differential equation solvers (the Runge-Kutta family of numerical integrators, for example), MCPI approximates long arcs of the state trajectory with an iterative path-approximation approach and is ideally suited to parallel computation. Orthogonal Chebyshev polynomials are used as basis functions during each path iteration; the integrations of the Picard iteration are then done analytically. Due to the orthogonality of the Chebyshev basis functions, the least-squares approximations are computed without matrix inversion; the coefficients are computed robustly from discrete inner products. As a consequence of the discrete sampling and weighting adopted for the inner product definition, Runge phenomenon errors are minimized near the ends of the approximation intervals. The MCPI algorithm utilizes a vector-matrix framework for computational efficiency. Additionally, all Chebyshev coefficients and integrand function evaluations are independent, meaning they can be computed simultaneously in parallel for further decreased computational cost. Over an order of magnitude speedup from traditional methods is achieved in serial processing, and an additional order of magnitude is achievable in parallel architectures. This paper presents a new MCPI library, a modular toolset designed to allow MCPI to be easily applied to a wide variety of ODE systems. Library users will not have to concern themselves with the underlying mathematics behind the MCPI method. Inputs are the boundary conditions of the dynamical system, the integrand function governing system behavior, and the desired time interval of integration; the output is a time history of the system states over the interval of interest. Examples from the field of astrodynamics are presented to compare the output from the MCPI library to current state-of-practice numerical integration methods. It is shown that MCPI is capable of outperforming the state of practice in terms of computational cost and accuracy.
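
    A bare-bones sketch of the Chebyshev-Picard idea (illustrative only; the MCPI library adds the weighted inner products, vector-matrix formulation, and parallelism described above). Each sweep fits a Chebyshev series to f along the current trajectory estimate and integrates that series analytically:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Solve x' = f(t, x), x(t0) = x0 by Picard iteration in a Chebyshev basis.
f = lambda t, x: -x                      # test problem with exact solution e^{-t}
t0, t1, x0, deg = 0.0, 1.0, 1.0, 16

k = np.arange(deg + 1)                   # Chebyshev-Lobatto nodes mapped to [t0, t1]
nodes = 0.5 * (t0 + t1) + 0.5 * (t1 - t0) * np.cos(np.pi * k / deg)
x = np.full(deg + 1, x0)                 # initial guess: constant trajectory
for _ in range(50):
    series = Chebyshev.fit(nodes, f(nodes, x), deg=deg, domain=[t0, t1])
    anti = series.integ()                # analytic integral of the fitted series
    x_new = x0 + anti(nodes) - anti(t0)  # Picard update: x <- x0 + integral of f
    if np.max(np.abs(x_new - x)) < 1e-13:
        x = x_new
        break
    x = x_new

print(abs(x[0] - np.exp(-t1)))  # nodes[0] is t = t1; error is close to machine precision
```

    The whole trajectory is updated at once each sweep, which is what makes the path-iteration view amenable to parallel evaluation of the integrand samples.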

  6. Envisioning a Future of Computational Geoscience in a Data Rich Semantic World

    NASA Astrophysics Data System (ADS)

    Kumar, P.; Elag, M.; Jiang, P.; Marini, L.

    2015-12-01

    Advances in observational systems and reductions in their cost are allowing us to explore, monitor, and digitally represent our environment in unprecedented detail and over large areas. Low-cost in situ sensors, unmanned autonomous vehicles, imaging technologies, and other new observational approaches, along with airborne and spaceborne systems, are allowing us to measure nearly everything, almost everywhere, almost all the time. Under the aegis of observatories, they are enabling an integrated view across space and time scales ranging from storms to seasons to years and, in some cases, decades. The rapid convergence of computational, communication, and information systems, and their interoperability through advances in technologies such as the semantic web, provides opportunities to further facilitate fusion and synthesis of heterogeneous measurements with knowledge systems. This integration can enable us to break disciplinary boundaries and bring sensor data directly to desktop or handheld devices. We describe cyberinfrastructure efforts being developed through projects such as Earthcube Geosemantics (http://geosemantics.hydrocomplexity.net), SEAD (http://sead-data.net/), and Browndog (http://browndog.ncsa.illinois.edu/) so that data across all of earth science can be easily shared and integrated with models. This also includes efforts to enable models to become interoperable among themselves and with data using technologies that enable human-out-of-the-loop integration. Through such technologies, our ability to use real-time information for decision-making and scientific investigations will increase manyfold. Data go through a sequence of steps, often iterative, from collection to long-term preservation. Similarly, a scientific investigation and its associated outcomes comprise a number of iterative steps from problem identification to solution. However, the integration between these two pathways is rather limited. We describe characteristics of new technologies that are needed to bring these processes together in the near future to significantly reduce the latency between data, science, and agile, informed actions that support sustainability.

  7. Sensi-steps: Using Patient-Generated Data to Prevent Post-stroke Falls

    PubMed Central

    Smith, Angela; Ng, Ada; Burgess, Eleanor R.; Weingarten, Noah; Pacheco, Jennifer A.

    2017-01-01

    We present Sensi-steps, an application using patient-generated data (PGD) to prevent falls for geriatric and especially poststroke patients. The Sensi-steps tool incorporates a wearable wrist device, pedometer, pressure and proximity sensors, and tablet. PGD collection occurs through Timed Up and Go (TUG) tests and collection of physiological data, which is integrated into the EHR. Fall risk factor active tracking encourages new ways of shared decision-making between patients, caregivers, and practitioners. PGD will be managed at the primary care nurse or Care Manager level (see 3-tier PGD service proposal), presenting a novel way to incorporate PGD into clinical decision-support systems. We expect our solution to be easier to use routinely by the patient at home than other fall risk tracking solutions. Sensi-steps has the potential to improve patient care, help patients make informed decisions, and help clinicians understand patient-generated, environmental, and lifestyle information to deliver personalized, preventative healthcare.

  8. Storybridging: Four steps for constructing effective health narratives

    PubMed Central

    Boeijinga, Anniek; Hoeken, Hans; Sanders, José

    2017-01-01

    Objective: To develop a practical step-by-step approach to constructing narrative health interventions in response to the mixed results and wide diversity of narratives used in health-related narrative persuasion research. Method: Development work was guided by essential narrative characteristics as well as principles enshrined in the Health Action Process Approach. Results: The ‘storybridging’ method for constructing health narratives is described as consisting of four concrete steps: (a) identifying the stage of change, (b) identifying the key elements, (c) building the story, and (d) pre-testing the story. These steps are illustrated by means of a case study in which an effective narrative health intervention was developed for Dutch truck drivers: a high-risk, underprivileged occupational group. Conclusion: Although time and labour intensive, the Storybridging approach suggests integrating the target audience as an important stakeholder throughout the development process. Implications and recommendations are provided for health promotion targeting truck drivers specifically and for constructing narrative health interventions in general. PMID:29276232

  9. Effects of Imperfect Dynamic Clamp: Computational and Experimental Results

    PubMed Central

    Bettencourt, Jonathan C.; Lillis, Kyle P.; White, John A.

    2008-01-01

    In the dynamic clamp technique, a typically nonlinear feedback system delivers electrical current to an excitable cell that represents the actions of "virtual" ion channels (e.g., channels that are gated by local membrane potential or by electrical activity in neighboring biological or virtual neurons). Since the conception of this technique, there have been a number of different implementations of dynamic clamp systems, each with differing levels of flexibility and performance. Embedded hardware-based systems typically offer feedback that is very fast and precisely timed, but these systems are often expensive and sometimes inflexible. PC-based systems, on the other hand, allow the user to write software that defines an arbitrarily complex feedback system, but their accuracy can be degraded by imperfect real-time performance. Here we systematically evaluate the performance requirements for artificial dynamic clamp knock-in of transient sodium and delayed rectifier potassium conductances. Specifically, we examine the effects of controller time step duration, differential equation integration method, jitter (variability in time step), and latency (the time lag from reading inputs to updating outputs). Each of these control system flaws is artificially introduced in both simulated and real dynamic clamp experiments. We demonstrate that each of these errors affects dynamic clamp accuracy in a way that depends on the time constants and stiffness of the differential equations being solved. In simulations, time steps above 0.2 ms lead to catastrophic alteration of spike shape, but the frequency-vs.-current relationship is much more robust. Latency (the part of the time step that occurs between measuring membrane potential and injecting re-calculated membrane current) is a crucial factor as well. Experimental data are substantially more sensitive to inaccuracies than simulated data. PMID:18076999
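
    The sensitivity to controller time step can be illustrated with forward-Euler integration of a passive membrane at two update rates (a minimal sketch; the parameters are illustrative, not those of the study):

```python
import numpy as np

# Passive membrane dV/dt = -(V - E)/tau integrated by forward Euler at two
# controller update rates, compared against the exact exponential relaxation.
tau, E, V0, T = 10.0, -65.0, -55.0, 50.0          # ms, mV
exact = lambda t: E + (V0 - E) * np.exp(-t / tau)

def euler(dt):
    V = V0
    for _ in range(round(T / dt)):
        V += dt * (-(V - E) / tau)                # one controller update
    return V

err_fine = abs(euler(0.01) - exact(T))   # 100 kHz update rate
err_coarse = abs(euler(1.0) - exact(T))  # 1 kHz update rate
print(err_fine < err_coarse)             # True: coarser time step, larger error
```

    Stiffer conductance kinetics (e.g., transient sodium activation) shrink the effective tau and amplify this error, which is why spike shape degrades first as the time step grows.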

  10. A numerical method for computing unsteady 2-D boundary layer flows

    NASA Technical Reports Server (NTRS)

    Krainer, Andreas

    1988-01-01

    A numerical method for computing unsteady two-dimensional boundary layers in incompressible laminar and turbulent flows is described and applied to a single airfoil changing its incidence angle in time. The solution procedure adopts a first-order panel method with a simple wake model to solve for the inviscid part of the flow, and an implicit finite difference method for the viscous part of the flow. Both procedures integrate in time in a step-by-step fashion, each step involving the solution of the elliptic Laplace equation and the solution of the parabolic boundary layer equations. The Reynolds shear stress term of the boundary layer equations is modeled by an algebraic eddy viscosity closure. The location of transition is predicted by an empirical data correlation originating from Michel. Since transition and turbulence modeling are key factors in the prediction of viscous flows, their accuracy has a dominant influence on the overall results.

  11. Ab initio molecular dynamics with nuclear quantum effects at classical cost: Ring polymer contraction for density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marsalek, Ondrej; Markland, Thomas E., E-mail: tmarkland@stanford.edu

    Path integral molecular dynamics simulations, combined with an ab initio evaluation of interactions using electronic structure theory, incorporate the quantum mechanical nature of both the electrons and nuclei, which are essential to accurately describe systems containing light nuclei. However, path integral simulations have traditionally required a computational cost around two orders of magnitude greater than treating the nuclei classically, making them prohibitively costly for most applications. Here we show that the cost of path integral simulations can be dramatically reduced by extending our ring polymer contraction approach to ab initio molecular dynamics simulations. By using density functional tight binding as a reference system, we show that our ring polymer contraction scheme gives rapid and systematic convergence to the full path integral density functional theory result. We demonstrate the efficiency of this approach in ab initio simulations of liquid water and the reactive protonated and deprotonated water dimer systems. We find that the vast majority of the nuclear quantum effects are accurately captured using contraction to just the ring polymer centroid, which requires the same number of density functional theory calculations as a classical simulation. Combined with a multiple time step scheme using the same reference system, which allows the time step to be increased, this approach is as fast as a typical classical ab initio molecular dynamics simulation and 35× faster than a full path integral calculation, while still exactly including the quantum sampling of nuclei. This development thus offers a route to routinely include nuclear quantum effects in ab initio molecular dynamics simulations at negligible computational cost.
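
    The "multiple time step scheme using the same reference system" can be sketched with a minimal r-RESPA-style integrator, in which a cheap fast force is evaluated every inner step and an expensive slow force only at the outer step (the toy spring forces below merely stand in for the reference and correction forces; this is not the paper's implementation):

```python
# Reversible multiple-time-step (r-RESPA-style) velocity Verlet on a particle
# feeling a stiff "fast" spring plus a soft "slow" spring.
k_fast, k_slow, m = 100.0, 1.0, 1.0
f_fast = lambda x: -k_fast * x
f_slow = lambda x: -k_slow * x

def respa_step(x, v, dt_outer, n_inner):
    v += 0.5 * dt_outer * f_slow(x) / m          # half kick from the slow force
    dt = dt_outer / n_inner
    for _ in range(n_inner):                     # velocity Verlet with the fast force
        v += 0.5 * dt * f_fast(x) / m
        x += dt * v
        v += 0.5 * dt * f_fast(x) / m
    v += 0.5 * dt_outer * f_slow(x) / m          # closing half kick
    return x, v

x, v = 1.0, 0.0
E0 = 0.5 * v**2 + 0.5 * (k_fast + k_slow) * x**2
for _ in range(1000):
    x, v = respa_step(x, v, dt_outer=0.05, n_inner=10)
E = 0.5 * v**2 + 0.5 * (k_fast + k_slow) * x**2
print(abs(E - E0) / E0)  # energy drift stays small despite the large outer step
```

    The payoff is that the expensive force (here the slow spring; in the paper, the full DFT correction) is evaluated an order of magnitude less often than the cheap reference force.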

  12. Unmanned Air Vehicle -Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fred Oppel, SNL 06134

    2013-04-17

    This package contains modules that model the mobility of systems such as helicopters and fixed-wing aircraft flying in the air. This package currently models first-order physics - basically a velocity integrator. UAV mobility uses an internal clock to maintain stable, high-fidelity simulations over large time steps. This package depends on interfaces that reside in the Mobility package.

  13. Integrated Project Management: A Case Study in Integrating Cost, Schedule, Technical, and Risk Areas

    NASA Technical Reports Server (NTRS)

    Smith, Greg

    2004-01-01

    This viewgraph presentation describes a case study as a model for integrated project management. The ISS Program Office (ISSPO) developed replacement fluid filtration cartridges in house for the International Space Station (ISS). The presentation includes a step-by-step procedure and organizational charts for how the fluid filtration problem was approached.

  14. Study of monolithic integrated solar blind GaN-based photodetectors

    NASA Astrophysics Data System (ADS)

    Wang, Ling; Zhang, Yan; Li, Xiaojuan; Xie, Jing; Wang, Jiqiang; Li, Xiangyang

    2018-02-01

    Monolithic integrated solar blind devices on a GaN-based epilayer, which can directly read out a voltage signal, were fabricated and studied. Unlike conventional GaN-based photodiodes, the integrated devices carry out the steps of carrier generation, carrier accumulation, and conversion of carriers to voltage. In the tests, the resetting voltage was a square wave with frequencies of 15 and 110 Hz and a maximum voltage of ~2.5 V. Under LED illumination, the maximum voltage swing is about 2.5 V, and the rise time of the voltage swing from 0 to 2.5 V is only about 1.6 ms. In dark conditions, however, the node voltage between the detector and the capacitance declines nearly to zero over time when the resetting voltage is equal to zero. It is found that the leakage current in the circuit gives rise to discharge of the integrated charge. Storage-mode operation can offer gain, which is advantageous for the detection of weak photo signals.

  15. The dE/dt and E Waveforms Radiated by Leader Steps Just Before the First Return Stroke in Cloud-to-Ocean Lightning

    NASA Astrophysics Data System (ADS)

    Krider, E. P.; Baffou, G.; Murray, N. D.; Willett, J. C.

    2004-12-01

    We have analyzed the shapes and other characteristics of the electric field, E, and dE/dt waveforms that were radiated by leader steps just before the first return stroke in cloud-to-ocean lightning. dE/dt waveforms were recorded using an 8-bit digitizer sampling at 100 MHz, and an integrated waveform, Eint, was computed by numerically integrating dE/dt and comparing the result with an analog E waveform digitized at 10 MHz. All signals were recorded under conditions where the lightning locations were known and there was minimal distortion in the fields due to the effects of ground-wave propagation. The dE/dt waveforms radiated by leader steps tend to fall into three categories: (1) "simple" - an isolated negative peak that is immediately followed by a positive overshoot (where negative polarity follows the normal physics convention), (2) "double" - two simple waveforms that occur at almost the same time, and (3) "burst" - a complex cluster of pulses with a total duration of about one microsecond. In this paper, we give examples of each of these waveform types, and we summarize their characteristics on a submicrosecond time-scale. For example, in an interval from 9 μs to 4 μs before the largest, negative (dominant) dE/dt peak in the return stroke, 131 first strokes produced a total of 296 impulses with a peak amplitude greater than 10% of the dominant peak, and the average amplitude of these pulses was 0.21 of the dominant peak. The last leader step in a 12 μs interval before the dominant peak was a simple waveform in 51 first strokes, and in these cases, the average time interval between the peak dE/dt of the step and the dominant peak of the stroke was 5.8 ± 1.7 μs, a value that is in good agreement with prior measurements. The median full-width-at-half-maximum (FWHM) of 274 simple Eint signatures was 141 ns, and the associated mean and standard deviation were 187 ± 131 ns.

  16. A Team-Based Process for Designing Comprehensive, Integrated, Three-Tiered (CI3T) Models of Prevention: How Does My School-Site Leadership Team Design a CI3T Model?

    ERIC Educational Resources Information Center

    Lane, Kathleen Lynne; Oakes, Wendy Peia; Jenkins, Abbie; Menzies, Holly Mariah; Kalberg, Jemma Robertson

    2014-01-01

    Comprehensive, integrated, three-tiered models are context specific and developed by school-site teams according to the core values held by the school community. In this article, the authors provide a step-by-step, team-based process for designing comprehensive, integrated, three-tiered models of prevention that integrate academic, behavioral, and…

  17. A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.

    PubMed

    Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J

    2009-11-28

    In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.
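
    The recursive subcycled time-stepping on the grid hierarchy can be sketched in a few lines (structure only, with hypothetical `Level`/`advance` names; real levels carry grid data, and the synchronization step includes averaging down and flux correction):

```python
# Recursive AMR time stepping: a coarse step, then `ratio` fine substeps to
# reach the same time, then synchronization between the two levels.
class Level:
    def __init__(self, name, ratio=2, finer=None):
        self.name, self.ratio, self.finer = name, ratio, finer
        self.t, self.steps = 0.0, 0

    def step(self, dt):                 # advance this level's own grid data
        self.t += dt
        self.steps += 1

def advance(level, dt):
    level.step(dt)
    if level.finer is not None:
        for _ in range(level.ratio):    # fine grid subcycles to catch up
            advance(level.finer, dt / level.ratio)
        # ...synchronize(level, level.finer): average down + flux correction
        assert abs(level.finer.t - level.t) < 1e-12

fine = Level("fine")
coarse = Level("coarse", ratio=4, finer=fine)
advance(coarse, dt=1.0)
print(coarse.steps, fine.steps)  # 1 4
```

    With deeper hierarchies the recursion nests, so a level refined by r at depth d takes r**d substeps per coarse step, each at its own CFL-limited dt.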

  18. Algorithmically scalable block preconditioner for fully implicit shallow-water equations in CAM-SE

    DOE PAGES

    Lott, P. Aaron; Woodward, Carol S.; Evans, Katherine J.

    2014-10-19

    Performing accurate and efficient numerical simulation of global atmospheric climate models is challenging due to the disparate length and time scales over which physical processes interact. Implicit solvers enable the physical system to be integrated with a time step commensurate with the processes being studied. The dominant cost of an implicit time step is the ancillary linear system solves, so we have developed a preconditioner aimed at improving the efficiency of these linear system solves. Our preconditioner is based on an approximate block factorization of the linearized shallow-water equations and has been implemented within the spectral element dynamical core of the Community Atmospheric Model (CAM-SE). Furthermore, in this paper we discuss the development and scalability of the preconditioner for a suite of test cases with the implicit shallow-water solver within CAM-SE.

  19. HARPA: A versatile three-dimensional Hamiltonian ray-tracing program for acoustic waves in the atmosphere above irregular terrain

    NASA Astrophysics Data System (ADS)

    Jones, R. M.; Riley, J. P.; Georges, T. M.

    1986-08-01

    The modular FORTRAN 77 computer program traces the three-dimensional paths of acoustic rays through continuous model atmospheres by numerically integrating Hamilton's equations (a differential expression of Fermat's principle). The user specifies an atmospheric model by writing closed-form formulas for its three-dimensional wind and temperature (or sound speed) distribution, and by defining the height of the reflecting terrain vs. geographic latitude and longitude. Some general-purpose models are provided, or users can readily design their own. In addition to computing the geometry of each raypath, HARPA can calculate pulse travel time, phase time, Doppler shift (if the medium varies in time), absorption, and geometrical path length. The program prints a step-by-step account of a ray's progress. The 410-page documentation describes the ray-tracing equations and the structure of the program, and provides complete instructions, illustrated by a sample case.
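
    A minimal flavor of the Hamiltonian ray equations that HARPA integrates (not HARPA itself): for H = c(x, z)|k| with no wind, dx/dt = c k/|k| and dk/dt = -|k| grad c. With sound speed increasing linearly with height, an upward-launched ray refracts back toward the ground (illustrative model and parameters):

```python
import numpy as np

# 2-D acoustic ray trace by RK4 integration of Hamilton's equations.
c0, g = 340.0, 4.0                       # m/s and (m/s)/m: c = c0 + g*z
c = lambda z: c0 + g * z
grad_c = np.array([0.0, g])              # gradient of c with respect to (x, z)

def rhs(y):
    x, z, kx, kz = y
    k = np.array([kx, kz])
    kn = np.linalg.norm(k)
    return np.concatenate([c(z) * k / kn,    # dx/dt = dH/dk
                           -kn * grad_c])    # dk/dt = -dH/dx

theta = np.radians(30.0)                 # launch upward at 30 degrees
y = np.array([0.0, 0.0, np.cos(theta), np.sin(theta)])
dt = 0.01
for _ in range(2000):                    # classic RK4 steps
    k1 = rhs(y); k2 = rhs(y + 0.5*dt*k1)
    k3 = rhs(y + 0.5*dt*k2); k4 = rhs(y + dt*k3)
    y += dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)
    if y[1] < 0.0:                       # ray refracted back to the ground
        break

print(y[1] < 0.0 and y[0] > 0.0)        # True: ray came back down, downrange
```

    Travel time, Doppler shift, and absorption are then accumulated as additional quadratures along the same path, which is how HARPA reports them.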

  20. Accelerating Time Integration for the Shallow Water Equations on the Sphere Using GPUs

    DOE PAGES

    Archibald, R.; Evans, K. J.; Salinger, A.

    2015-06-01

    The push towards larger and larger computational platforms has made it possible for climate simulations to resolve climate dynamics across multiple spatial and temporal scales. This direction in climate simulation has created a strong need to develop scalable time-stepping methods capable of accelerating throughput on high performance computing. This study details recent advances in the implementation of implicit time stepping of the spectral element dynamical core within the United States Department of Energy (DOE) Accelerated Climate Model for Energy (ACME) on graphical processing unit (GPU) based machines. We demonstrate how solvers in the Trilinos project are interfaced with ACME and GPU kernels to increase the computational speed of the residual calculations in the implicit time stepping method for the atmosphere dynamics. We demonstrate the optimization gains and data structure reorganization that facilitate the performance improvements.

  1. A Classroom-Based Physical Activity Intervention for Urban Kindergarten and First-Grade Students: A Feasibility Study

    PubMed Central

    Wylie-Rosett, Judith; Kim, Mimi; Ozuah, Philip O.

    2015-01-01

Background: Urban elementary schools in minority communities with high obesity prevalence may have limited resources for physical education (PE) to achieve daily activity recommendations. Little is known about whether integrating physical activity (PA) into classrooms can increase activity levels of students attending such schools. Methods: We conducted a cluster randomized, controlled trial among kindergarten and first-grade students from four Bronx, New York, schools to determine the feasibility and impact of a classroom-based intervention on students' PA levels. Students in two intervention schools received the Children's Hospital at Montefiore Joining Academics and Movement (CHAM JAM), an audio CD consisting of 10-minute, education-focused aerobic activities led by teachers three times a day. PA was objectively measured by pedometer. Each subject wore a sealed pedometer during the 6-hour school day for 5 consecutive days at baseline (Time 1) and 8 weeks postintervention (Time 2). Hierarchical linear models were fit to evaluate differences in mean number of steps between the two groups. Results: A total of 988 students participated (intervention group, n=500; control group, n=488). There was no significant difference at baseline between the two groups on mean number of steps (2581 [standard deviation (SD), 1284] vs. 2476 [SD, 1180]; P=0.71). Eight weeks post–CHAM JAM, intervention group students took significantly greater mean number of steps than controls (2839 [SD, 1262] vs. 2545 [SD, 1153]; P=0.0048) after adjusting for baseline number of steps and other covariates (grade, gender, recess, and PE class). CHAM JAM was equally effective across gender, grade-level, and BMI subgroups. Conclusions: CHAM JAM significantly increased school-based PA among kindergarten and first-grade students in inner-city schools. This approach holds promise as a cost-effective means to integrate the physical and cognitive benefits of PA into high-risk schools. PMID:25747719

  2. A 2-DOF model of an elastic rocket structure excited by a follower force

    NASA Astrophysics Data System (ADS)

    Brejão, Leandro F.; da Fonseca Brasil, Reyolando Manoel L. R.

    2017-10-01

We present a two-degree-of-freedom model of an elastic rocket structure excited by the follower force given by the motor thrust, which is assumed to be always in the direction of the tangent to the deformed shape of the device at its lower tip. The model comprises two massless rigid pinned bars, initially in the vertical position, connected by rotational springs. Lumped masses and dampers are considered at the connections. The generalized coordinates are the angular displacements of the bars with respect to the vertical. We derive the equations of motion via Lagrange's equations and simulate their time evolution using the classical fourth-order Runge-Kutta step-by-step numerical integration algorithm. Results indicate the possible occurrence of stable and unstable vibrations, such as limit cycles.
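The step-by-step integration described can be sketched in a few lines. The code below is an illustrative toy, not the authors' model: the stiffness, damping, and mass values and the linearized restoring terms are hypothetical placeholders, and the follower (thrust) force is omitted for brevity.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def two_dof_rhs(t, y, k=50.0, c=0.5, m=1.0):
    """Linearized 2-DOF chain of bars with angles th1, th2, rotational
    springs k and dampers c (illustrative values, not from the paper;
    the follower-force term would add to a1 and a2)."""
    th1, th2, w1, w2 = y
    a1 = (-k * (2 * th1 - th2) - c * w1) / m
    a2 = (-k * (th2 - th1) - c * w2) / m
    return np.array([w1, w2, a1, a2])

y = np.array([0.05, 0.0, 0.0, 0.0])   # small initial tilt of the lower bar
t, h = 0.0, 1e-3
for _ in range(5000):                 # integrate 5 s of motion
    y = rk4_step(two_dof_rhs, t, y, h)
    t += h
print(t, y)
```

With damping present the angles decay toward the vertical equilibrium; driving the system with a configuration-dependent follower force is what produces the limit cycles the abstract mentions.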

  3. Integrated payload and mission planning, phase 3. Volume 2: Logic/Methodology for preliminary grouping of spacelab and mixed cargo payloads

    NASA Technical Reports Server (NTRS)

    Rodgers, T. E.; Johnson, J. F.

    1977-01-01

The logic and methodology for a preliminary grouping of Spacelab and mixed-cargo payloads are proposed in a form that can be readily coded into a computer program by NASA. The logic developed for this preliminary cargo grouping analysis is summarized. Principal input data include the NASA Payload Model, payload descriptive data, Orbiter and Spacelab capabilities, and NASA guidelines and constraints. The first step in the process is a launch interval selection in which the time interval for payload grouping is identified. Logic flow steps are then taken to group payloads and define flight configurations based on criteria that include dedication, volume, area, orbital parameters, pointing, g-level, mass, center of gravity, energy, power, and crew time.

  4. R3D: Reduction Package for Integral Field Spectroscopy

    NASA Astrophysics Data System (ADS)

    Sánchez, Sebastián. F.

    2011-06-01

R3D was developed to reduce fiber-based integral field spectroscopy (IFS) data. The package comprises a set of command-line routines adapted for each of the reduction steps, suitable for creating pipelines. The routines have been tested against simulations, and against real data from various integral field spectrographs (PMAS, PPAK, GMOS, VIMOS and INTEGRAL). Particular attention is paid to the treatment of cross-talk. R3D unifies the reduction techniques for the different IFS instruments into a single approach, in order to allow users to reduce data from different instruments in a homogeneous, consistent and simple way. Although still in its prototyping phase, it has proved useful for reducing PMAS (in both the Larr and PPAK modes), VIMOS and INTEGRAL data. The current version has been coded in Perl, using PDL, in order to speed up the algorithm-testing phase. Most of the time-critical algorithms have been translated to C, and it is our intention to translate all of them. However, even in this phase R3D is fast enough to produce valuable science frames in reasonable time.

  5. iMOSFLM: a new graphical interface for diffraction-image processing with MOSFLM

    PubMed Central

    Battye, T. Geoff G.; Kontogiannis, Luke; Johnson, Owen; Powell, Harold R.; Leslie, Andrew G. W.

    2011-01-01

    iMOSFLM is a graphical user interface to the diffraction data-integration program MOSFLM. It is designed to simplify data processing by dividing the process into a series of steps, which are normally carried out sequentially. Each step has its own display pane, allowing control over parameters that influence that step and providing graphical feedback to the user. Suitable values for integration parameters are set automatically, but additional menus provide a detailed level of control for experienced users. The image display and the interfaces to the different tasks (indexing, strategy calculation, cell refinement, integration and history) are described. The most important parameters for each step and the best way of assessing success or failure are discussed. PMID:21460445

  6. An integral equation formulation for rigid bodies in Stokes flow in three dimensions

    NASA Astrophysics Data System (ADS)

    Corona, Eduardo; Greengard, Leslie; Rachh, Manas; Veerapaneni, Shravan

    2017-03-01

    We present a new derivation of a boundary integral equation (BIE) for simulating the three-dimensional dynamics of arbitrarily-shaped rigid particles of genus zero immersed in a Stokes fluid, on which are prescribed forces and torques. Our method is based on a single-layer representation and leads to a simple second-kind integral equation. It avoids the use of auxiliary sources within each particle that play a role in some classical formulations. We use a spectrally accurate quadrature scheme to evaluate the corresponding layer potentials, so that only a small number of spatial discretization points per particle are required. The resulting discrete sums are computed in O (n) time, where n denotes the number of particles, using the fast multipole method (FMM). The particle positions and orientations are updated by a high-order time-stepping scheme. We illustrate the accuracy, conditioning and scaling of our solvers with several numerical examples.

  7. Integrated Modeling of Time Evolving 3D Kinetic MHD Equilibria and NTV Torque

    NASA Astrophysics Data System (ADS)

    Logan, N. C.; Park, J.-K.; Grierson, B. A.; Haskey, S. R.; Nazikian, R.; Cui, L.; Smith, S. P.; Meneghini, O.

    2016-10-01

    New analysis tools and integrated modeling of plasma dynamics developed in the OMFIT framework are used to study kinetic MHD equilibria evolution on the transport time scale. The experimentally observed profile dynamics following the application of 3D error fields are described using a new OMFITprofiles workflow that directly addresses the need for rapid and comprehensive analysis of dynamic equilibria for next-step theory validation. The workflow treats all diagnostic data as fundamentally time dependent, provides physics-based manipulations such as ELM phase data selection, and is consistent across multiple machines - including DIII-D and NSTX-U. The seamless integration of tokamak data and simulation is demonstrated by using the self-consistent kinetic EFIT equilibria and profiles as input into 2D particle, momentum and energy transport calculations using TRANSP as well as 3D kinetic MHD equilibrium stability and neoclassical transport modeling using General Perturbed Equilibrium Code (GPEC). The result is a smooth kinetic stability and NTV torque evolution over transport time scales. Work supported by DE-AC02-09CH11466.

  8. 12-step affiliation and attendance following treatment for comorbid substance dependence and depression: a latent growth curve mediation model.

    PubMed

    Worley, Matthew J; Tate, Susan R; McQuaid, John R; Granholm, Eric L; Brown, Sandra A

    2013-01-01

    Among substance-dependent individuals, comorbid major depressive disorder (MDD) is associated with greater severity and poorer treatment outcomes, but little research has examined mediators of posttreatment substance use outcomes within this population. Using latent growth curve models, the authors tested relationships between individual rates of change in 12-step involvement and substance use, utilizing posttreatment follow-up data from a trial of group Twelve-Step Facilitation (TSF) and integrated cognitive-behavioral therapy (ICBT) for veterans with substance dependence and MDD. Although TSF patients were higher on 12-step affiliation and meeting attendance at end-of-treatment as compared with ICBT, they also experienced significantly greater reductions in these variables during the year following treatment, ending at similar levels as ICBT. Veterans in TSF also had significantly greater increases in drinking frequency during follow-up, and this group difference was mediated by their greater reductions in 12-step affiliation and meeting attendance. Patients with comorbid depression appear to have difficulty sustaining high levels of 12-step involvement after the conclusion of formal 12-step interventions, which predicts poorer drinking outcomes over time. Modifications to TSF and other formal 12-step protocols or continued therapeutic contact may be necessary to sustain 12-step involvement and reduced drinking for patients with substance dependence and MDD.

  9. 49 CFR 40.43 - What steps must operators of collection sites take to protect the security and integrity of urine...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... to protect the security and integrity of urine collections? 40.43 Section 40.43 Transportation Office... PROGRAMS Collection Sites, Forms, Equipment and Supplies Used in DOT Urine Collections § 40.43 What steps must operators of collection sites take to protect the security and integrity of urine collections? (a...

  10. 49 CFR 40.43 - What steps must operators of collection sites take to protect the security and integrity of urine...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... to protect the security and integrity of urine collections? 40.43 Section 40.43 Transportation Office... PROGRAMS Collection Sites, Forms, Equipment and Supplies Used in DOT Urine Collections § 40.43 What steps must operators of collection sites take to protect the security and integrity of urine collections? (a...

  11. 49 CFR 40.43 - What steps must operators of collection sites take to protect the security and integrity of urine...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... to protect the security and integrity of urine collections? 40.43 Section 40.43 Transportation Office... PROGRAMS Collection Sites, Forms, Equipment and Supplies Used in DOT Urine Collections § 40.43 What steps must operators of collection sites take to protect the security and integrity of urine collections? (a...

  12. 49 CFR 40.43 - What steps must operators of collection sites take to protect the security and integrity of urine...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... to protect the security and integrity of urine collections? 40.43 Section 40.43 Transportation Office... PROGRAMS Collection Sites, Forms, Equipment and Supplies Used in DOT Urine Collections § 40.43 What steps must operators of collection sites take to protect the security and integrity of urine collections? (a...

  13. 49 CFR 40.43 - What steps must operators of collection sites take to protect the security and integrity of urine...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... to protect the security and integrity of urine collections? 40.43 Section 40.43 Transportation Office... PROGRAMS Collection Sites, Forms, Equipment and Supplies Used in DOT Urine Collections § 40.43 What steps must operators of collection sites take to protect the security and integrity of urine collections? (a...

  14. High throughput wafer defect monitor for integrated metrology applications in photolithography

    NASA Astrophysics Data System (ADS)

    Rao, Nagaraja; Kinney, Patrick; Gupta, Anand

    2008-03-01

    The traditional approach to semiconductor wafer inspection is based on the use of stand-alone metrology tools, which while highly sensitive, are large, expensive and slow, requiring inspection to be performed off-line and on a lot sampling basis. Due to the long cycle times and sparse sampling, the current wafer inspection approach is not suited to rapid detection of process excursions that affect yield. The semiconductor industry is gradually moving towards deploying integrated metrology tools for real-time "monitoring" of product wafers during the manufacturing process. Integrated metrology aims to provide end-users with rapid feedback of problems during the manufacturing process, and the benefit of increased yield, and reduced rework and scrap. The approach of monitoring 100% of the wafers being processed requires some trade-off in sensitivity compared to traditional standalone metrology tools, but not by much. This paper describes a compact, low-cost wafer defect monitor suitable for integrated metrology applications and capable of detecting submicron defects on semiconductor wafers at an inspection rate of about 10 seconds per wafer (or 360 wafers per hour). The wafer monitor uses a whole wafer imaging approach to detect defects on both un-patterned and patterned wafers. Laboratory tests with a prototype system have demonstrated sensitivity down to 0.3 µm on un-patterned wafers and down to 1 µm on patterned wafers, at inspection rates of 10 seconds per wafer. An ideal application for this technology is preventing photolithography defects such as "hot spots" by implementing a wafer backside monitoring step prior to exposing wafers in the lithography step.

  15. Mixed-mode ion exchange-based integrated proteomics technology for fast and deep plasma proteome profiling.

    PubMed

    Xue, Lu; Lin, Lin; Zhou, Wenbin; Chen, Wendong; Tang, Jun; Sun, Xiujie; Huang, Peiwu; Tian, Ruijun

    2018-06-09

Plasma proteome profiling by LC-MS based proteomics has drawn great attention recently for biomarker discovery from blood liquid biopsy. Because standard multi-step sample preparation can cause plasma protein degradation and analysis variation, integrated proteomics sample preparation technologies have become a promising solution. Here, we developed a fully integrated proteomics sample preparation technology for both fast and deep plasma proteome profiling at its native pH. All the sample preparation steps, including protein digestion and two-dimensional fractionation by both mixed-mode ion exchange and high-pH reversed-phase mechanisms, were integrated into one spintip device for the first time. The mixed-mode ion exchange bead design achieved sample loading at neutral pH and protein digestion within 30 min. Potential sample loss and protein degradation caused by pH changes could thus be avoided. One microliter of plasma, depleted of high-abundance proteins, was processed by the developed technology into 12 equally distributed fractions and analyzed with 12 h of LC-MS gradient time, resulting in the identification of 862 proteins. The combination of the Mixed-mode-SISPROT and a data-independent MS method achieved fast plasma proteome profiling in 2 h with high identification overlap and quantification precision in a proof-of-concept study of plasma samples from 5 healthy donors. We expect the Mixed-mode-SISPROT to become a generally applicable sample preparation technology for clinically oriented plasma proteome profiling. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. 3 Steps to Developing a Tribal Integrated Waste Management Plan (IWMP)

    EPA Pesticide Factsheets

    An Integrated Waste Management Plan (IWMP) is the blueprint of a comprehensive waste management program. The steps to developing an IWMP are collect background data, map out the tribal IWMP framework, and write and implement the tribal IWMP.

  17. Integrating human stem cell expansion and neuronal differentiation in bioreactors

    PubMed Central

    Serra, Margarida; Brito, Catarina; Costa, Eunice M; Sousa, Marcos FQ; Alves, Paula M

    2009-01-01

Background Human stem cells are cellular resources with outstanding potential for cell therapy. However, for the fulfillment of this application, major challenges remain to be met. Of paramount importance is the development of robust systems for in vitro stem cell expansion and differentiation. In this work, we successfully developed an efficient scalable bioprocess for the fast production of human neurons. Results The expansion of undifferentiated human embryonal carcinoma stem cells (NTera2/cl.D1 cell line) as 3D aggregates was first optimized in spinner vessels. The media-exchange operation mode with an inoculum concentration of 4 × 10⁵ cells/mL was the most efficient strategy tested, with a 4.6-fold increase in cell concentration achieved in 5 days. These results were validated in a bioreactor, where a similar growth profile and metabolic performance were obtained. Furthermore, characterization of the expanded population by immunofluorescence microscopy and flow cytometry showed that NT2 cells maintained their stem cell characteristics throughout the bioreactor culture time. Finally, the neuronal differentiation step was integrated into the bioreactor process by addition of retinoic acid when cells were in the middle of the exponential phase. Neurosphere composition was monitored and neuronal differentiation efficiency evaluated along the culture time. The results show that, for bioreactor cultures, we were able to increase neuronal differentiation efficiency 10-fold while drastically reducing, by 30%, the time required for the differentiation process. Conclusion The culture systems developed herein are robust and represent one step forward towards the development of integrated bioprocesses, bridging stem cell expansion and differentiation in fully controlled bioreactors. PMID:19772662

  18. Cognitive and emotional factors associated with elective breast augmentation among young women.

    PubMed

    Moser, Stephanie E; Aiken, Leona S

    2011-01-01

    The purpose of this research was to propose and evaluate a psychosocial model of young women's intentions to obtain breast implants and the preparatory steps taken towards having breast implant surgery. The model integrated anticipated regret, descriptive norms and image norms from the media into the theory of planned behaviour (TPB). Focus groups (n = 58) informed development of measures of outcome expectancies, preparatory steps and normative influence. The model was tested and replicated among two samples of young women who had ever considered getting breast implants (n = 200, n = 152). Intentions and preparatory steps served as outcomes. Model constructs and outcomes were initially assessed; outcomes were re-assessed 11 weeks later. Evaluative attitudes and anticipated regret predicted intentions; in turn, intentions, along with descriptive norms, predicted subsequent preparatory steps. Perceived risk (susceptibility, severity) of negative medical consequences of breast implants predicted anticipated regret, which predicted evaluative attitudes. Intentions and preparatory steps exhibited interplay over time. This research provides the first comprehensive model predicting intentions and preparatory steps towards breast augmentation surgery. It supports the addition of anticipated regret to the TPB and suggests mutual influence between intentions and preparatory steps towards a final behavioural outcome.

  19. Migration of Dust Particles from Comet 2P Encke

    NASA Technical Reports Server (NTRS)

    Ipatov, S. I.

    2003-01-01

We investigated the migration of dust particles under the gravitational influence of all planets (except Pluto), radiation pressure, Poynting-Robertson drag and solar wind drag for β equal to 0.002, 0.004, 0.01, 0.05, 0.1, 0.2, and 0.4. For silicate particles these values of β correspond to diameters of about 200, 100, 40, 9, 4, 2, and 1 microns, respectively. We used the Bulirsch-Stoer method of integration, with the relative error per integration step held below a small fixed tolerance. Initial orbits of the particles were close to the orbit of Comet 2P Encke. We considered particles starting near perihelion (runs denoted Δt0 = 0), near aphelion (Δt0 = 0.5), and at the position the comet reaches Pa/4 after perihelion passage (runs denoted Δt0 = 0.25), where Pa is the period of the comet. The time T of perihelion passage was varied with a step of 0.1 day for series 'S' and 1 day for series 'L'. For each β we considered N = 101 particles for 'S' runs and 150 particles for 'L' runs.
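A minimal sketch of how β enters such an integration: radiation pressure simply scales down the effective solar gravity felt by the grain. This is an illustrative toy (fixed-step leapfrog on a two-body problem, with the drag terms and planetary perturbations omitted), not the Bulirsch-Stoer setup used in the study.

```python
import numpy as np

MU = 4 * np.pi**2                      # GM_sun in AU^3/yr^2

def accel(r, beta):
    """Solar gravity reduced by radiation pressure:
    a = -GM (1 - beta) r / |r|^3  (PR drag and solar wind drag omitted)."""
    return -MU * (1.0 - beta) * r / np.linalg.norm(r)**3

def integrate_orbit(r, v, beta, h, n):
    """Fixed-step leapfrog (kick-drift-kick); the study itself used the
    adaptive Bulirsch-Stoer method."""
    a = accel(r, beta)
    for _ in range(n):
        v = v + 0.5 * h * a
        r = r + h * v
        a = accel(r, beta)
        v = v + 0.5 * h * a
    return r, v

# A grain released with the circular 1 AU Keplerian speed "feels" weaker
# gravity as beta grows, so it drifts outward onto an eccentric orbit.
radii = {}
for beta in (0.0, 0.1, 0.4):
    r0, v0 = np.array([1.0, 0.0]), np.array([0.0, np.sqrt(MU)])
    r, _ = integrate_orbit(r0, v0, beta, h=1e-4, n=10000)   # one year
    radii[beta] = np.linalg.norm(r)
print(radii)
```

For β = 0 the orbit stays circular at 1 AU; larger β leaves the release point as perihelion of an increasingly eccentric orbit.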

  20. Integrating medical informatics into the medical undergraduate curriculum.

    PubMed

    Khonsari, L S; Fabri, P J

    1997-01-01

The advent of healthcare reform and the rapid application of new technologies have resulted in a paradigm shift in medical practice. Integrating medical informatics into the full spectrum of medical education is a vital step toward implementing this new instructional model, a step required for the understanding and practice of modern medicine. We have developed an informatics curriculum, a new educational paradigm, and an intranet-based teaching module designed to enhance adult-learning principles, lifelong self-education, and evidence-based critical thinking. Thirty-two fourth-year medical students have participated in a one-month, full-time independent study focused on, but not limited to, four topics: mastering the Windows-based environment, understanding hospital-based information management systems, developing competence in using the internet/intranet and world wide web/HTML, and experiencing distance communication and TeleVideo networks. Each student has completed a clinically relevant independent study project utilizing technology mastered during the course. This initial curriculum offering was developed in conjunction with faculty from the College of Medicine, College of Engineering, College of Education, College of Business, College of Public Health, Florida Center of Instructional Technology, James A. Haley Veterans Hospital, Moffitt Cancer Center, Tampa General Hospital, GTE, Westshore Walk-in Clinic (paperless office), and the Florida Engineering Education Delivery System. Our second step toward the distributive integration process was the introduction of medical informatics to first-, second- and third-year medical students. To date, these efforts have focused on undergraduate medical education. Our next step is to offer workshops in informatics to College of Medicine faculty, to residents in postgraduate training programs (GME), and ultimately as a method of distance learning in continuing medical education (CME).

  1. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks

    PubMed Central

    Naveros, Francisco; Garrido, Jesus A.; Carrillo, Richard R.; Ros, Eduardo; Luque, Niceto R.

    2017-01-01

Modeling and simulating the neural structures which make up our central neural system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranging from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity, together with better handling of synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running on CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity. PMID:28223930
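The bi-fixed-step idea of switching between two fixed step sizes can be sketched for the simplest (LIF) model. The switching rule, constants, and function name below are illustrative assumptions, not the implementation described in the paper.

```python
def lif_bi_fixed_step(i_ext, t_end, dt_big=1.0, dt_small=0.1,
                      tau=10.0, v_rest=-65.0, v_th=-50.0, v_reset=-65.0):
    """Time-driven LIF neuron with a bi-fixed-step flavour: a coarse
    global step is swapped for finer sub-steps whenever the membrane
    potential approaches threshold (the switching rule here is
    illustrative). Times in ms, voltages in mV."""
    v, t, spikes = v_rest, 0.0, []
    while t < t_end:
        # fine steps within 5 mV of threshold, coarse steps otherwise
        dt = dt_small if v > v_th - 5.0 else dt_big
        v += dt * (-(v - v_rest) + i_ext) / tau   # forward Euler sub-step
        if v >= v_th:
            spikes.append(t + dt)
            v = v_reset
        t += dt
    return spikes

spikes = lif_bi_fixed_step(i_ext=20.0, t_end=200.0)
print(len(spikes), spikes[:3])
```

Fine steps are only paid for near the stiff part of the trajectory (the spike), which is the performance argument the abstract makes for stiffer AdEx and HH dynamics.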

  2. Finite time step and spatial grid effects in δf simulation of warm plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sturdevant, Benjamin J., E-mail: benjamin.j.sturdevant@gmail.com; Department of Applied Mathematics, University of Colorado at Boulder, Boulder, CO 80309; Parker, Scott E.

    2016-01-15

This paper introduces a technique for analyzing time integration methods used with the particle weight equations in δf method particle-in-cell (PIC) schemes. The analysis applies to the simulation of warm, uniform, periodic or infinite plasmas in the linear regime and considers the collective behavior similar to the analysis performed by Langdon for full-f PIC schemes [1,2]. We perform both a time integration analysis and a spatial grid analysis for a kinetic ion, adiabatic electron model of ion acoustic waves. An implicit time integration scheme is studied in detail for δf simulations using our weight equation analysis and for full-f simulations using the method of Langdon. It is found that the δf method exhibits a CFL-like stability condition for low-temperature ions, which is independent of the parameter characterizing the implicitness of the scheme. The accuracy of the real frequency and damping rate due to the discrete time and spatial schemes is also derived using a perturbative method. The theoretical analysis of numerical error presented here may be useful for the verification of simulations and for providing intuition for the design of new implicit time integration schemes for the δf method, as well as understanding differences between δf and full-f approaches to plasma simulation.
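The role of an implicitness parameter can be illustrated on the scalar oscillation test equation. This is a generic θ-scheme amplification-factor computation, offered only as background intuition, not the δf weight-equation analysis performed in the paper.

```python
import numpy as np

def theta_amplification(omega_dt, theta):
    """Amplification factor g of the theta scheme applied to the
    oscillation test equation y' = i*omega*y, defined by
        (1 - 1j*theta*omega_dt) g = 1 + 1j*(1 - theta)*omega_dt.
    theta = 0.5 is the time-centered (implicit midpoint) choice;
    theta = 1 is backward Euler."""
    num = 1 + 1j * (1 - theta) * omega_dt
    den = 1 - 1j * theta * omega_dt
    return num / den

for theta in (0.5, 0.6, 1.0):
    g = theta_amplification(2.0, theta)
    print(theta, abs(g))
```

The time-centered choice preserves |g| = 1 exactly (no numerical damping) for any step size, while θ > 0.5 damps the oscillation; a stability limit that persists for every θ, as found for the δf scheme, is therefore a genuinely different phenomenon.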

  3. “SLIMPLECTIC” INTEGRATORS: VARIATIONAL INTEGRATORS FOR GENERAL NONCONSERVATIVE SYSTEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsang, David; Turner, Alec; Galley, Chad R.

    2015-08-10

Symplectic integrators are widely used for long-term integration of conservative astrophysical problems due to their ability to preserve the constants of motion; however, they cannot in general be applied in the presence of nonconservative interactions. In this Letter, we develop the “slimplectic” integrator, a new type of numerical integrator that shares many of the benefits of traditional symplectic integrators yet is applicable to general nonconservative systems. We utilize a fixed-time-step variational integrator formalism applied to the principle of stationary nonconservative action developed in Galley et al. As a result, the generalized momenta and energy (Noether current) evolutions are well-tracked. We discuss several example systems, including damped harmonic oscillators, Poynting–Robertson drag, and gravitational radiation reaction, by utilizing our new publicly available code to demonstrate the slimplectic integrator algorithm. Slimplectic integrators are well-suited for integrations of systems where nonconservative effects play an important role in the long-term dynamical evolution. As such they are particularly appropriate for cosmological or celestial N-body dynamics problems where nonconservative interactions, e.g., gas interactions or dissipative tides, can play an important role.
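As background, the conservative-case benefit that slimplectic integrators extend to nonconservative systems is easy to see by comparing a symplectic step with forward Euler on a harmonic oscillator. This sketch is a generic illustration of symplectic energy behavior, not the slimplectic algorithm itself.

```python
def euler_step(q, p, h, omega2=1.0):
    """Non-symplectic forward Euler step: energy grows without bound."""
    return q + h * p, p - h * omega2 * q

def leapfrog_step(q, p, h, omega2=1.0):
    """Symplectic (velocity Verlet) step for H = p^2/2 + omega2*q^2/2:
    energy error stays bounded for all time."""
    p = p - 0.5 * h * omega2 * q
    q = q + h * p
    p = p - 0.5 * h * omega2 * q
    return q, p

def energy(q, p, omega2=1.0):
    return 0.5 * p**2 + 0.5 * omega2 * q**2

q_e, p_e = 1.0, 0.0          # forward Euler trajectory
q_l, p_l = 1.0, 0.0          # leapfrog trajectory, same initial data
h, n = 0.05, 10000
for _ in range(n):
    q_e, p_e = euler_step(q_e, p_e, h)
    q_l, p_l = leapfrog_step(q_l, p_l, h)
print(energy(q_e, p_e), energy(q_l, p_l))
```

After 10000 steps the Euler "energy" has blown up by orders of magnitude while the leapfrog energy still oscillates near its initial value of 0.5; the slimplectic construction aims to keep this kind of long-term fidelity while also tracking the physical energy loss of damped systems.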

  4. Ideas for Future GPS Timing Improvements

    NASA Technical Reports Server (NTRS)

    Hutsell, Steven T.

    1996-01-01

Having recently met stringent criteria for full operational capability (FOC) certification, the Global Positioning System (GPS) now has higher customer expectations than ever before. In order to maintain customer satisfaction, and to meet the even higher customer demands of the future, the GPS Master Control Station (MCS) must play a critical role in the process of carefully refining the performance and integrity of the GPS constellation, particularly in the area of timing. This paper will present an operational perspective on several ideas for improving timing in GPS. These ideas include the desire for improving MCS - US Naval Observatory (USNO) data connectivity, an improved GPS-Coordinated Universal Time (UTC) prediction algorithm, a more robust Kalman filter, and more features in the GPS reference time algorithm (the GPS composite clock), including frequency step resolution, a more explicit use of the basic time scale equation, and dynamic clock weighting. Current MCS software meets the exceptional challenge of managing an extremely complex constellation of 24 navigation satellites. The GPS community will, however, always seek to improve upon this performance and integrity.

  5. Krylov Deferred Correction Accelerated Method of Lines Transpose for Parabolic Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Jia, Jun; Huang, Jingfang

    2008-01-01

In this paper, a new class of numerical methods for the accurate and efficient solution of parabolic partial differential equations is presented. Unlike the traditional method of lines (MoL), the new Krylov deferred correction (KDC) accelerated method of lines transpose (MoL^T) first discretizes the temporal direction using Gaussian-type nodes and spectral integration, and symbolically applies low-order time marching schemes to form a preconditioned elliptic system, which is then solved iteratively using Newton-Krylov techniques such as the Newton-GMRES or Newton-BiCGStab method. Each function evaluation in the Newton-Krylov method is simply one low-order time-stepping approximation of the error, obtained by solving a decoupled system using available fast elliptic equation solvers. Preliminary numerical experiments show that the KDC accelerated MoL^T technique is unconditionally stable, can be spectrally accurate in both temporal and spatial directions, and allows optimal time-step sizes in long-time simulations.
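A minimal sketch of the low-order implicit building block such methods accelerate: one backward Euler method-of-lines step for the heat equation reduces to a single elliptic (linear) solve per step. The grid size, step size, and dense solver below are illustrative choices only; a real KDC code would wrap a fast elliptic solver inside a Newton-Krylov iteration.

```python
import numpy as np

# Backward Euler method-of-lines stepping for u_t = u_xx on (0, 1)
# with homogeneous Dirichlet boundary conditions.
n, h = 100, 1e-3                       # interior points, time step
x = np.linspace(0, 1, n + 2)[1:-1]     # interior grid
dx = x[1] - x[0]

# Standard second-order 1D Laplacian (dense here for brevity;
# sparse/fast solvers would be used in practice).
L = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2
A = np.eye(n) - h * L                  # (I - h*L) u^{n+1} = u^n

u = np.sin(np.pi * x)                  # eigenmode: decays like exp(-pi^2 t)
for _ in range(100):                   # advance to t = 0.1
    u = np.linalg.solve(A, u)
print(u.max())
```

At t = 0.1 the exact peak amplitude is exp(-π² · 0.1) ≈ 0.37, and the backward Euler solution lands close to that; KDC then uses many such cheap, first-order solves as preconditioned corrections toward a spectrally accurate answer.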

  6. Development of a Robust Identifier for NPPs Transients Combining ARIMA Model and EBP Algorithm

    NASA Astrophysics Data System (ADS)

    Moshkbar-Bakhshayesh, Khalil; Ghofrani, Mohammad B.

    2014-08-01

This study introduces a novel identification method for recognition of nuclear power plant (NPP) transients by combining the autoregressive integrated moving-average (ARIMA) model and a neural network with the error backpropagation (EBP) learning algorithm. The proposed method consists of three steps. First, an EBP-based identifier is adopted to distinguish the plant's normal states from faulty ones. In the second step, ARIMA models use the integrated (I) process to convert non-stationary data of the selected variables into stationary data. Subsequently, ARIMA processes, including autoregressive (AR), moving-average (MA), or autoregressive moving-average (ARMA), are used to forecast time series of the selected plant variables. In the third step, to identify the type of transient, the forecasted time series are fed to a modular identifier developed using the latest advances in the EBP learning algorithm. Bushehr nuclear power plant (BNPP) transients are probed to analyze the ability of the proposed identifier. Recognition of a transient is based on the similarity of its statistical properties to the reference one, rather than on the values of input patterns. Greater robustness against noisy data and an improved balance between memorization and generalization are salient advantages of the proposed identifier. Reduction of false identification, sole dependency of identification on the sign of each output signal, selection of the plant variables for transient training independently of each other, and extendibility to identification of more transients without unfavorable effects are other merits of the proposed identifier.

  7. Simulation of localized heavy precipitation in South Korea on 20 June 2014: sensitivity test of integration time-step size and an effect of topographic resolution using WRF model

    NASA Astrophysics Data System (ADS)

    Roh, Joon-Woo; Jee, Joon-Bum; Lim, A.-Young; Choi, Young-Jean

    2015-04-01

Korean warm-season rainfall, accounting for about three-fourths of the annual precipitation, is primarily caused by the Changma front, part of the East Asian summer monsoon, and by localized heavy rainfall associated with convective instability. Various physical mechanisms potentially influence heavy precipitation over South Korea. Representatively, the middle-latitude and subtropical weather fronts, associated with a quasi-stationary moisture convergence zone among varying air masses, make up one of the main rain-bearing synoptic-scale systems. Localized heavy rainfall events in South Korea generally arise from mesoscale convective systems embedded in these synoptic-scale disturbances along the Changma front, or from convective instabilities resulting from unstable air masses, including the direct or indirect effects of typhoons. In recent years, torrential rainfall exceeding 30 mm/hour has increased threefold during the warm season in Seoul, a metropolitan city in South Korea. To investigate the multiple potential causes of warm-season localized heavy precipitation in South Korea, a localized heavy precipitation case that took place on 20 June 2014 in Seoul was examined. Analysis indicated that this case was mainly caused by a short-wave trough, associated with baroclinic instability to the northwest of Korea, and a thermal low carrying moist and warm air. This structure showed convective-scale torrential rain embedded in the dynamic and thermodynamic structures. In addition, the sensitivity of rainfall amount and maximum rainfall location to the integration time-step size was investigated in simulations of this case using the Weather Research and Forecasting (WRF) model. Simulations with time-step sizes of 9-27 s, corresponding to horizontal resolutions of 4.5 km and 1.5 km, showed slight differences in the maximum rainfall amount. However, the sensitivity of spatial patterns and temporal variations in rainfall to the time-step sizes was relatively small. The effect of topography was also important in the localized heavy precipitation simulation.
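The time-step constraint underlying the 9-27 s range above can be sketched with a one-line CFL estimate. This is a generic illustration, not the WRF stability criterion itself; the assumed signal speed of 300 m/s and Courant number of 1 are hypothetical round numbers.

```python
# CFL-style bound dt <= C * dx / u for an explicit scheme. The grid
# spacings match the 4.5 km and 1.5 km resolutions mentioned above;
# the signal speed and Courant number are illustrative assumptions.

def max_stable_dt(dx_m: float, speed_ms: float, courant: float = 1.0) -> float:
    """Largest stable time step for 1-D explicit advection."""
    return courant * dx_m / speed_ms

for dx in (4500.0, 1500.0):
    print(f"dx = {dx/1000:.1f} km -> dt <= {max_stable_dt(dx, 300.0):.1f} s")
```

The commonly cited WRF rule of thumb of roughly 6 s of time step per km of grid spacing gives 27 s and 9 s for these two grids, consistent with the range reported in the abstract.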

  8. Text-based Analytics for Biosurveillance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charles, Lauren E.; Smith, William P.; Rounds, Jeremiah

The ability to prevent, mitigate, or control a biological threat depends on how quickly the threat is identified and characterized. Ensuring the timely delivery of data and analytics is an essential aspect of providing adequate situational awareness in the face of a disease outbreak. This chapter outlines an analytic pipeline for supporting an advanced early warning system that can integrate multiple data sources and provide situational awareness of potential and occurring disease situations. The pipeline includes real-time automated data analysis founded on natural language processing (NLP), semantic concept matching, and machine learning techniques to enrich content with metadata related to biosurveillance. Online news articles are presented as an example use case for the pipeline, but the processes can be generalized to any textual data. In this chapter, the mechanics of a streaming pipeline are briefly discussed, as well as the major steps required to provide targeted situational awareness. The text-based analytic pipeline includes various processing steps, such as identifying article relevance to biosurveillance (e.g., a relevance algorithm) and article feature extraction (who, what, where, why, how, and when).

  9. Welfare Integrity Act of 2011

    THOMAS, 112th Congress

    Rep. Fincher, Stephen Lee [R-TN-8

    2011-10-13

House - 10/17/2011 Referred for a period ending not later than October 17, 2011, (or for a later time if the Chairman so designates) to the Subcommittee on Human Resources, in each case for consideration of such provisions as fall within the jurisdiction of the subcommittee concerned. Status: Introduced.

  10. Incorporation of Chemical Contaminants into the Combined ICM/SEDZLJ Models

    DTIC Science & Technology

    2012-03-01

concentration of toxicant 1 in water column (g m-3); Δt = model integration time step (s). The deposition of toxicant 2 (Eq. 10, ERDC/EL TR-12-6) is expressed in terms of Dpoc, the particulate fraction Fpw, TOX2, POC, and Δt, in which: Dpoc = deposition of labile and refractory particulate organic carbon (g m-2); POC = labile plus
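Only the symbol definitions survive in the snippet above, so the following is a heavily hedged sketch of how such a deposition term is typically advanced in time: a forward Euler update in which the toxicant loss over Δt is proportional to the particulate organic carbon deposition flux. The functional form, the depth normalization, and every parameter value are illustrative assumptions, not the ICM/SEDZLJ formulation.

```python
# Hypothetical forward Euler update of a water-column toxicant
# concentration under a particulate-deposition loss term. The functional
# form, depth normalization, and parameter values are illustrative
# assumptions, not the report's Eq. (10).

def step_toxicant(tox, dpoc, fpw, poc, dt, depth):
    """One explicit step.

    tox   : toxicant concentration in the water column (g m^-3)
    dpoc  : deposition flux of particulate organic carbon (g m^-2 s^-1)
    fpw   : particulate fraction of the toxicant (dimensionless)
    poc   : particulate organic carbon concentration (g m^-3)
    dt    : model integration time step (s)
    depth : water-column depth (m)
    """
    flux = dpoc * fpw * tox / poc      # toxicant deposition flux (g m^-2 s^-1)
    return tox - flux * dt / depth     # updated concentration (g m^-3)

new_tox = step_toxicant(tox=1.0, dpoc=1e-4, fpw=0.5, poc=2.0, dt=100.0, depth=10.0)
print(new_tox)
```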

  11. Multi-species Management Using Modeling and Decision Theory Applications to Integrated Natural Resources Management Planning

    DTIC Science & Technology

    2008-06-01

or just habitat area. They used linear interpolation to derive maps for each time step in the population model and population dynamics were... Stephen's kangaroo rat (SKR). In some areas of coastal sage scrub habitat, short fire return intervals make the habitat suitable for the SKR while

  12. Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model

    PubMed Central

    van Albada, Sacha J.; Rowley, Andrew G.; Senk, Johanna; Hopkins, Michael; Schmidt, Maximilian; Stokes, Alan B.; Lester, David R.; Diesmann, Markus; Furber, Steve B.

    2018-01-01

The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80,000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption.
The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks. PMID:29875620
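The role of the integration time step discussed above can be illustrated with a leaky integrate-and-fire neuron stepped by forward Euler at 1 ms and at 0.1 ms. The parameters below are generic illustrative values, not those of the cortical microcircuit model; the point is only that the coarser step shifts spike times slightly.

```python
# Leaky integrate-and-fire neuron, forward Euler, at a 1 ms step
# (SpiNNaker's real-time setting) and a 0.1 ms step. Parameters are
# generic illustrative values, not the microcircuit model's.

def simulate_lif(dt, t_end=0.05, tau=0.01, v_rest=0.0, v_thresh=1.0, i_ext=1.2):
    """Return spike times for dV/dt = ((v_rest - V) + i_ext) / tau."""
    v, t, spike_times = v_rest, 0.0, []
    while t < t_end - 1e-12:
        v += dt * ((v_rest - v) + i_ext) / tau
        t += dt
        if v >= v_thresh:          # threshold crossing: record and reset
            spike_times.append(t)
            v = v_rest
    return spike_times

coarse = simulate_lif(dt=1e-3)     # 1 ms integration time step
fine = simulate_lif(dt=1e-4)       # 0.1 ms integration time step
print(coarse, fine)                # same spike count, slightly shifted times
```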

  13. MSFC Stream Model Preliminary Results: Modeling Recent Leonid and Perseid Encounters

    NASA Technical Reports Server (NTRS)

    Cooke, William J.; Moser, Danielle E.

    2004-01-01

The cometary meteoroid ejection model of Jones and Brown (1996b) was used to simulate ejection from comets 55P/Tempel-Tuttle during the last 12 revolutions, and the last 9 apparitions of 109P/Swift-Tuttle. Using cometary ephemerides generated by the Jet Propulsion Laboratory's (JPL) HORIZONS Solar System Data and Ephemeris Computation Service, two independent ejection schemes were simulated. In the first case, ejection was simulated in 1 hour time steps along the comet's orbit while it was within 2.5 AU of the Sun. In the second case, ejection was simulated to occur at the hour the comet reached perihelion. A 4th order variable step-size Runge-Kutta integrator was then used to integrate meteoroid position and velocity forward in time, accounting for the effects of radiation pressure, Poynting-Robertson drag, and the gravitational forces of the planets, which were computed using JPL's DE406 planetary ephemerides. An impact parameter was computed for each particle approaching the Earth to create a flux profile, and the results compared to observations of the 1998 and 1999 Leonid showers, and the 1993 and 2004 Perseids.
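A minimal sketch of the kind of integration described above: a classical 4th-order Runge-Kutta step advancing a particle in a Sun-centred two-body field, with radiation pressure folded in as a (1 - beta) reduction of solar gravity. Planetary perturbations, Poynting-Robertson drag, and the variable step-size control of the actual model are omitted; the units and the beta parameterization are standard, but the setup is illustrative.

```python
# Classical RK4 for a meteoroid in a Sun-centred field with radiation
# pressure as a (1 - beta) reduction of solar gravity. Illustrative
# two-body sketch only: no planets, no Poynting-Robertson drag, and a
# fixed step in place of the model's variable step-size control.
import math

GM_SUN = 4 * math.pi**2   # AU^3 / yr^2, so a 1-AU circular orbit has period 1 yr

def accel(state, beta):
    x, y, vx, vy = state
    r3 = (x*x + y*y) ** 1.5
    g = -(1.0 - beta) * GM_SUN / r3     # radiation pressure weakens gravity
    return (vx, vy, g*x, g*y)

def rk4_step(state, dt, beta):
    def add(s, k, h):
        return tuple(si + h*ki for si, ki in zip(s, k))
    k1 = accel(state, beta)
    k2 = accel(add(state, k1, dt/2), beta)
    k3 = accel(add(state, k2, dt/2), beta)
    k4 = accel(add(state, k3, dt), beta)
    return tuple(s + dt/6*(a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# One period of a beta = 0 particle started on a 1-AU circular orbit:
state = (1.0, 0.0, 0.0, 2*math.pi)
for _ in range(1000):
    state = rk4_step(state, 1.0/1000, beta=0.0)
print(state[:2])   # back near (1, 0) after one period
```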

  14. MSFC Stream Model Preliminary Results: Modeling Recent Leonid and Perseid Encounters

    NASA Astrophysics Data System (ADS)

    Moser, Danielle E.; Cooke, William J.

    2004-12-01

    The cometary meteoroid ejection model of Jones and Brown [ Physics, Chemistry, and Dynamics of Interplanetary Dust, ASP Conference Series 104 (1996b) 137] was used to simulate ejection from comets 55P/Tempel-Tuttle during the last 12 revolutions, and the last 9 apparitions of 109P/Swift-Tuttle. Using cometary ephemerides generated by the Jet Propulsion Laboratory’s (JPL) HORIZONS Solar System Data and Ephemeris Computation Service, two independent ejection schemes were simulated. In the first case, ejection was simulated in 1 h time steps along the comet’s orbit while it was within 2.5 AU of the Sun. In the second case, ejection was simulated to occur at the hour the comet reached perihelion. A 4th order variable step-size Runge Kutta integrator was then used to integrate meteoroid position and velocity forward in time, accounting for the effects of radiation pressure, Poynting Robertson drag, and the gravitational forces of the planets, which were computed using JPL’s DE406 planetary ephemerides. An impact parameter (IP) was computed for each particle approaching the Earth to create a flux profile, and the results compared to observations of the 1998 and 1999 Leonid showers, and the 1993 and 2004 Perseids.

  15. Heart Fibrillation and Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Kogan, B. Y.; Karplus, W. J.; Chudin, E. E.

    1997-01-01

    The Luo and Rudy 3 cardiac cell mathematical model is implemented on the parallel supercomputer CRAY - T3D. The splitting algorithm combined with variable time step and an explicit method of integration provide reasonable solution times and almost perfect scaling for rectilinear wave propagation. The computer simulation makes it possible to observe new phenomena: the break-up of spiral waves caused by intracellular calcium and dynamics and the non-uniformity of the calcium distribution in space during the onset of the spiral wave.
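The splitting algorithm mentioned above can be sketched in one dimension: alternate an explicit diffusion step on a cable with a pointwise reaction step. The cubic reaction term below is a generic excitable-media stand-in, not the Luo-Rudy ionic model, and the fixed step size replaces the variable-step control of the actual simulation.

```python
# Operator splitting for a 1-D reaction-diffusion cable: an explicit
# diffusion step alternates with a pointwise reaction step. The cubic
# reaction is a generic excitable-media stand-in (threshold 0.1), not
# the Luo-Rudy ionic model.

def split_step(u, dt, dx, D=1.0):
    n = len(u)
    # 1) explicit diffusion step (stable for dt <= dx**2 / (2*D))
    lap = [0.0] * n                     # fixed-value boundaries
    for i in range(1, n - 1):
        lap[i] = (u[i-1] - 2*u[i] + u[i+1]) / dx**2
    u = [ui + dt * D * li for ui, li in zip(u, lap)]
    # 2) pointwise reaction step: du/dt = u*(1 - u)*(u - 0.1)
    return [ui + dt * ui * (1 - ui) * (ui - 0.1) for ui in u]

n, dx, dt = 100, 0.5, 0.1
u = [1.0] * (n // 2) + [0.0] * (n // 2)   # excited left half invades the right
for _ in range(300):
    u = split_step(u, dt, dx)
print(u[60])                              # a point initially at rest is now excited
```

Splitting lets each sub-step use its own stable scheme, which is what makes variable time steps and parallel decomposition practical for the full ionic model.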

  16. Multilevel ensemble Kalman filtering

    DOE PAGES

    Hoel, Hakon; Law, Kody J. H.; Tempone, Raul

    2016-06-14

    This study embeds a multilevel Monte Carlo sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF) in the setting of finite dimensional signal evolution and noisy discrete-time observations. The signal dynamics is assumed to be governed by a stochastic differential equation (SDE), and a hierarchy of time grids is introduced for multilevel numerical integration of that SDE. Finally, the resulting multilevel EnKF is proved to asymptotically outperform EnKF in terms of computational cost versus approximation accuracy. The theoretical results are illustrated numerically.
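The multilevel idea can be sketched on its simplest use case: estimating E[X_T] for a scalar SDE with Euler-Maruyama on a hierarchy of time grids h_l = T/2^l, using coupled fine/coarse paths and a telescoping sum. Geometric Brownian motion is used because its exact mean is known; all parameters are illustrative, and this is the plain multilevel Monte Carlo estimator, not the multilevel EnKF itself.

```python
# Minimal multilevel Monte Carlo estimate of E[X_T] for geometric
# Brownian motion dX = mu*X dt + sigma*X dW, with exact mean
# x0*exp(mu*T). Grid hierarchy h_l = T / 2^l; fine and coarse paths on
# each level share the same Brownian increments. Illustrative only.
import math, random

def euler_pair(mu, sigma, x0, T, level, rng):
    """Coupled Euler-Maruyama paths on levels l and l-1 (same Brownian path)."""
    n_f = 2 ** level
    h_f = T / n_f
    xf, xc, dw_c = x0, x0, 0.0
    for i in range(n_f):
        dw = rng.gauss(0.0, math.sqrt(h_f))
        xf += mu * xf * h_f + sigma * xf * dw
        dw_c += dw
        if level > 0 and i % 2 == 1:     # coarse step uses summed increments
            xc += mu * xc * 2 * h_f + sigma * xc * dw_c
            dw_c = 0.0
    return xf, (xc if level > 0 else None)

def mlmc_mean(mu, sigma, x0, T, max_level, samples_per_level, seed=0):
    rng = random.Random(seed)
    est = 0.0
    for l in range(max_level + 1):
        acc = 0.0
        for _ in range(samples_per_level[l]):
            xf, xc = euler_pair(mu, sigma, x0, T, l, rng)
            acc += xf if xc is None else xf - xc   # telescoping correction
        est += acc / samples_per_level[l]
    return est

est = mlmc_mean(mu=0.05, sigma=0.2, x0=1.0, T=1.0, max_level=4,
                samples_per_level=[4000, 2000, 1000, 500, 250])
print(est, math.exp(0.05))   # estimator vs exact mean
```

The sample counts decrease with level because the coupled corrections have small variance, which is the source of the cost savings the multilevel EnKF inherits.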

  17. Production of long-term global water vapor and liquid water data set using ultra-fast methods to assimilate multi-satellite and radiosonde observations

    NASA Technical Reports Server (NTRS)

    Vonderhaar, T. H.; Reinke, Donald L.; Randel, David L.; Stephens, Graeme L.; Combs, Cynthia L.; Greenwald, Thomas J.; Ringerud, Mark A.; Wittmeyer, Ian L.

    1993-01-01

During the next decade, many programs and experiments under the Global Energy and Water Cycle Experiment (GEWEX) will utilize present day and future data sets to improve our understanding of the role of moisture in climate, and its interaction with other variables such as clouds and radiation. An important element of GEWEX will be the GEWEX Water Vapor Project (GVaP), which will eventually initiate a routine, real-time assimilation of the highest quality, global water vapor data sets including information gained from future data collection systems, both ground and space based. The comprehensive global water vapor data set being produced by METSAT Inc. uses a combination of ground-based radiosonde data, and infrared and microwave satellite retrievals. This data is needed to provide the desired foundation from which future GEWEX-related research, such as GVaP, can build. The first year of this project was designed to use a combination of the best available atmospheric moisture data, including radiosonde (balloon/acft/rocket), HIRS/MSU (TOVS) retrievals, and SSM/I retrievals, to produce a one-year, global, high resolution data set of integrated column water vapor (precipitable water) with a horizontal resolution of 1 degree, and a temporal resolution of one day. The time period of this pilot product was to be determined by the availability of all the input data sets. January 1988 through December 1988 was selected. In addition, a sample of vertically integrated liquid water content (LWC) was to be produced with the same temporal and spatial parameters. This sample was to be produced over ocean areas only. Three main steps are followed to produce a merged water vapor and liquid water product. Input data from radiosondes, TOVS, and SSM/I is quality checked in steps one and two. Processing is done in step two to generate individual total column water vapor and liquid water data sets.
The third step, and final processing task, involves merging the individual output products to produce the integrated water vapor product. A final quality control is applied to the merged data sets.

  18. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE PAGES

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...

    2018-04-17

The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
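Stripped of the ARK machinery, the IMEX idea can be shown with first-order IMEX Euler on a stiff scalar test problem: the fast relaxation term is integrated implicitly, the slow forcing explicitly, so the step size is set by the slow dynamics rather than by the stiffness. The test equation and parameters are illustrative, not drawn from the atmospheric model.

```python
# First-order IMEX Euler on u' = -k*(u - cos t) - sin t with stiff k;
# the exact solution is u = cos t for u(0) = 1. The stiff relaxation
# -k*u is taken implicitly (backward Euler), the slow forcing
# explicitly, so dt = 0.01 works even though a fully explicit scheme
# would need dt < 2/k = 0.002. Illustrative test problem only.
import math

def imex_euler(k, dt, t_end):
    u, t = 1.0, 0.0
    while t < t_end - 1e-12:
        # (u_new - u)/dt = -k*u_new + k*cos(t) - sin(t)
        u = (u + dt * (k * math.cos(t) - math.sin(t))) / (1.0 + dt * k)
        t += dt
    return u

u = imex_euler(k=1000.0, dt=0.01, t_end=1.0)   # dt >> 1/k, still stable
print(u, math.cos(1.0))
```

The ARK methods in the paper apply the same implicit/explicit partitioning stage by stage at higher order, with the acoustic terms playing the role of the stiff relaxation.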

  19. Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models

    NASA Astrophysics Data System (ADS)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.

    2018-04-01

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  20. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.

The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  1. Toward a fully integrated neurostimulator with inductive power recovery front-end.

    PubMed

    Mounaïm, Fayçal; Sawan, Mohamad

    2012-08-01

In order to investigate new neurostimulation strategies for micturition recovery in spinal cord injured patients, custom implantable stimulators are required to carry out chronic animal experiments. However, higher integration of the neurostimulator becomes increasingly necessary for miniaturization, for reducing power consumption, and for increasing the number of stimulation channels. As a first step towards total integration, we present in this paper the design of a highly-integrated neurostimulator that can be assembled on a 21-mm diameter printed circuit board. The prototype is based on three custom integrated circuits fabricated in High-Voltage (HV) CMOS technology, and a low-power small-scale commercially available FPGA. Using a step-down approach where the inductive voltage is left free up to 20 V, the inductive power and data recovery front-end is fully integrated. In particular, the front-end includes a bridge rectifier, a 20-V voltage limiter, an adjustable series regulator (5 to 12 V), a switched-capacitor step-down DC/DC converter (1:3, 1:2, or 2:3 ratio), as well as data recovery. Measurements show that the DC/DC converter achieves more than 86% power efficiency while providing around 3.9 V from a 12-V input at a 1-mA load, 1:3 conversion ratio, and 50-kHz switching frequency. With such efficiency, the proposed step-down inductive power recovery topology is more advantageous than its conventional step-up counterpart. Experimental results confirm good overall functionality of the system.
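The reported converter figures can be sanity-checked with back-of-envelope arithmetic: at 86% efficiency, a 12-V input, and a 1-mA load at about 3.9 V, the implant draws well under half a milliamp from the inductive rail. The numbers are taken from the abstract; the calculation itself is a generic power-balance estimate, not the paper's analysis.

```python
# Power-balance check of the reported converter operating point:
# 1:3 switched-capacitor step-down, 12 V in, ~3.9 V out at 1 mA, 86%.
v_in, v_out, i_out, eff = 12.0, 3.9, 1e-3, 0.86
p_out = v_out * i_out                 # delivered power: 3.9 mW
i_in = p_out / (eff * v_in)           # current drawn from the 12-V rail
ideal_i_in = i_out / 3                # a lossless 1:3 charge pump draws i_out/3
print(f"input current: {i_in*1e3:.3f} mA (ideal {ideal_i_in*1e3:.3f} mA)")
```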

  2. Effect of Variations in IRU Integration Time Interval On Accuracy of Aqua Attitude Estimation

    NASA Technical Reports Server (NTRS)

    Natanson, G. A.; Tracewell, Dave

    2003-01-01

During Aqua launch support, attitude analysts noticed several anomalies in Onboard Computer (OBC) rates and in rates computed by the ground Attitude Determination System (ADS). These included: 1) periodic jumps in the OBC pitch rate every 2 minutes; 2) spikes in ADS pitch rate every 4 minutes; 3) close agreement between pitch rates computed by ADS and those derived from telemetered OBC quaternions (in contrast to the step-wise pattern observed for telemetered OBC rates); 4) spikes of +/- 10 milliseconds in telemetered IRU integration time every 4 minutes (despite the fact that telemetered time tags of any two sequential IRU measurements were always 1 second apart from each other). An analysis presented in the paper explains this anomalous behavior by a small average offset of about 0.5 +/- 0.05 microseconds in the time interval between two sequential accumulated angle measurements. It is shown that errors in the estimated pitch angle due to neglecting the aforementioned variations in the integration time interval by the OBC are within +/- 2 arcseconds. Ground attitude solutions are found to be accurate enough to see the effect of the variations on the accuracy of the estimated pitch angle.

  3. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

    NASA Astrophysics Data System (ADS)

    Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.

    2010-10-01

    Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
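The effect the authors describe is easy to reproduce on the smallest conceptual building block, a linear reservoir dS/dt = P - k*S, whose exact solution makes the error of the first-order, explicit, fixed-step scheme directly measurable. Parameter values are illustrative.

```python
# First-order, explicit, fixed-step integration of the linear reservoir
# dS/dt = P - k*S, compared against the exact solution
# S(t) = P/k + (S0 - P/k)*exp(-k*t). Parameter values are illustrative.
import math

def euler_fixed(S0, P, k, dt, t_end):
    S, t = S0, 0.0
    while t < t_end - 1e-12:
        S += dt * (P - k * S)
        t += dt
    return S

P, k, S0, T = 2.0, 1.5, 10.0, 4.0
exact = P / k + (S0 - P / k) * math.exp(-k * T)
errors = {}
for dt in (1.0, 0.1, 0.01):
    errors[dt] = abs(euler_fixed(S0, P, k, dt, T) - exact)
    print(f"dt = {dt:5.2f}  abs error = {errors[dt]:.5f}")
```

At dt = 1.0 the scheme sits near its stability limit (dt < 2/k) and the error is large; refining the step shrinks the error only linearly. It is this step-size-dependent error surface that, in the study, distorts the posterior explored by MCMC.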

  4. Co-evolving Physical and Biological Organization in Step-pool Channels: Experiments from a Restoration Reach on Wildcat Creek, California

    NASA Astrophysics Data System (ADS)

    Chin, A.; O'Dowd, A. P.; Mendez, P. K.; Velasco, K. Z.; Leventhal, R. D.; Storesund, R.; Laurencio, L. R.

    2014-12-01

    Step-pools are important features in fluvial systems. Through energy dissipation, step-pools provide stability in high-energy environments that otherwise may erode and degrade. Although research has focused on geomorphological aspects of step-pool channels, the ecological significance of step-pool streams is increasingly recognized. Step-pool streams often contain higher density and diversity of benthic macroinvertebrates and are critical habitats for organisms such as salmonids and tailed frogs. Step-pools are therefore increasingly used to restore eroding channels and improve ecological conditions. This paper addresses a restoration reach of Wildcat Creek in Berkeley, California that featured an installation of step-pools in 2012. The design framework recognized step-pool formation as a self-organizing process that produces a rhythmic morphology. After placing step particles at locations where step-pools are expected to form according to hydraulic theory, the self-organizing approach allowed fluvial processes to refine the rocks into adjusted sequences over time. In addition, a 30-meter "experimental" reach was created to explore the co-evolution of geomorphological and ecological characteristics. After constructing a plane bed channel, boulders and cobbles piled at the upstream end allowed natural flows to mobilize and sort them into step-pool sequences. Ground surveys and LiDAR recorded the development of step-pool sequences over several seasons. Concurrent sampling of benthic macroinvertebrates documented the formation of biological communities in conjunction with habitat. Biological sampling in an upstream reference reach provided a comparison with the restored reach over time. Results to date show an emergent step-pool channel with steps that segment the plane bed into initial step and pool habitats. 
Biological communities are beginning to form, showing more distinction among habitat types during some seasons, although they do not yet approach reference values at this stage of development. Research over longer timeframes is needed to reveal how biological and physical characteristics may co-organize toward an equilibrium landscape. Such integrated understanding will assist development of innovative restoration designs.

  5. Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction

    DOE PAGES

    Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir

    2016-10-20

Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be ‘matched’ to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.
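The mechanical difference between the two projections can be sketched on a linear ODE dx/dt = Ax advanced with implicit Euler: Galerkin tests the full-order residual against the reduced basis V, while LSPG minimizes the discrete residual in a least-squares sense, which is equivalent to a Petrov–Galerkin projection with test basis M V. The system, basis, and step size below are random illustrative data, not the GNAT setting.

```python
# Galerkin vs LSPG projection for dx/dt = A x with implicit Euler and a
# reduced basis V. Galerkin solves V^T M V q_next = V^T V q_prev; LSPG
# minimizes || M V q_next - V q_prev ||_2. Random illustrative data.
import numpy as np

rng = np.random.default_rng(1)
n, r = 20, 5
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable-ish full system
V, _ = np.linalg.qr(rng.standard_normal((n, r)))     # orthonormal reduced basis
x0 = V @ rng.standard_normal(r)                      # start inside the subspace
dt = 0.05
M = np.eye(n) - dt * A                               # implicit Euler operator

def galerkin_step(q):
    # Project the full residual with V^T, then solve the small r x r system.
    return np.linalg.solve(V.T @ M @ V, V.T @ (V @ q))

def lspg_step(q):
    # Minimize the discrete residual || M V q_next - V q_prev ||_2.
    return np.linalg.lstsq(M @ V, V @ q, rcond=None)[0]

qg = ql = V.T @ x0
for _ in range(10):
    qg, ql = galerkin_step(qg), lspg_step(ql)
print(np.linalg.norm(qg - ql))   # small but generally nonzero difference
```

For this nearly symmetric M the two trajectories almost coincide; the paper's analysis concerns exactly when they do, and how the answer depends on the time step.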

  6. Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir

    Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be ‘matched’ to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.

  7. Space Station - An integrated approach to operational logistics support

    NASA Technical Reports Server (NTRS)

    Hosmer, G. J.

    1986-01-01

    Development of an efficient and cost-effective operational logistics system for the Space Station will require logistics planning early in the program's design and development phase. This paper will focus on Integrated Logistics Support (ILS) Program techniques and their application to the Space Station program design, production and deployment phases to assure the development of an effective and cost-efficient operational logistics system. The paper will provide the methodology and time-phased programmatic steps required to establish a Space Station ILS Program that will provide an operational logistics system based on planned Space Station program logistics support.

  8. Multistep integration formulas for the numerical integration of the satellite problem

    NASA Technical Reports Server (NTRS)

    Lundberg, J. B.; Tapley, B. D.

    1981-01-01

    Two Class 2, fixed-mesh, fixed-order multistep integration packages of the PECE type are examined for the numerical integration of the second-order, nonlinear, ordinary differential equations of the satellite orbit problem. These two methods are referred to as the general and the second-sum formulations. The derivation of the basic equations which characterize each formulation and the role of the basic equations in the PECE algorithm are discussed. Possible starting procedures are examined which may be used to supply the initial set of values required by the fixed-mesh multistep integrators. The results of the general and second-sum integrators are compared to the results of various fixed-step and variable-step integrators.
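The PECE cycle used by such fixed-mesh multistep packages can be sketched generically: Predict with a two-step Adams-Bashforth formula, Evaluate, Correct with the trapezoidal Adams-Moulton formula, Evaluate again. The harmonic-oscillator test problem and the single Runge-Kutta starting step below are illustrative choices, not the general or second-sum satellite formulations of the paper.

```python
import numpy as np

def f(u):
    # harmonic oscillator y'' = -y as the first-order system u = (y, y')
    return np.array([u[1], -u[0]])

dt, n_steps = 0.01, 1000
u = np.array([1.0, 0.0])          # y(0) = 1, y'(0) = 0; exact y(t) = cos t

# Starting procedure: one Heun (RK2) step supplies the back value that a
# fixed-mesh two-step method cannot generate by itself.
f_prev = f(u)
u = u + dt / 2 * (f_prev + f(u + dt * f_prev))
f_cur = f(u)

for _ in range(n_steps - 1):
    u_pred = u + dt / 2 * (3 * f_cur - f_prev)   # Predict: 2-step Adams-Bashforth
    f_pred = f(u_pred)                           # Evaluate
    u = u + dt / 2 * (f_cur + f_pred)            # Correct: Adams-Moulton (trapezoid)
    f_prev, f_cur = f_cur, f(u)                  # Evaluate

print(u[0], np.cos(n_steps * dt))   # second-order accuracy: close agreement
```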

  9. Analyse et caracterisation d'interactions fluide-structure instationnaires en grands deplacements

    NASA Astrophysics Data System (ADS)

    Cori, Jean-Francois

    Flapping wings for flying and oscillating fins for swimming stand out as the most complex yet efficient propulsion methods found in nature. Understanding the phenomena involved is a great challenge generating significant interest, especially in the growing field of Micro Air Vehicles. The thrust and lift are induced by oscillating foils through a complex phenomenon of unsteady fluid-structure interaction (FSI). The aim of the dissertation is to develop an efficient CFD framework for simulating the FSI process involved in the propulsion or the power extraction of an oscillating flexible airfoil in a viscous incompressible flow. The numerical method relies on a direct implicit monolithic formulation using high-order implicit time integrators. We use an Arbitrary Lagrangian Eulerian (ALE) formulation of the equations designed to satisfy the Geometric Conservation Law (GCL) and to guarantee that the high-order temporal accuracy of the time integrators observed on fixed meshes is preserved on ALE deforming meshes. A hyperelastic Saint-Venant-Kirchhoff structural model, the viscous incompressible Navier-Stokes equations for the flow, Newton's law for the point mass, and equilibrium equations at the interface form one large monolithic system. The fully implicit FSI approach uses coincident nodes on the fluid-structure interface, so that loads, velocities and displacements are evaluated at the same location and at the same time. The problem is solved in an implicit manner using a Newton-Raphson pseudo-solid finite element approach. High-order implicit Runge-Kutta time integrators are implemented (up to 5th order) to improve the accuracy and reduce the computational cost. In this context of stiff interaction problems, the highly stable fully implicit one-step approach is an original alternative to traditional multistep or explicit one-step finite element approaches. The methodology has been verified with three different test cases. Thorough time-step refinement studies for a rigid oscillating airfoil on deforming meshes, for flow-induced vibrations of a flexible strip, and for a self-propelled flapping airfoil indicate that the stability of the proposed approach is always observed even with large time steps, spurious oscillations on the structure are avoided without any damping, and the high-order accuracy of the IRK schemes is maintained. We have applied our FSI framework to three interesting applications, with a detailed dimensional analysis to obtain their characteristic parameters. Firstly, we have studied the vibrational characteristics of a well-documented fluid-structure interaction case: a flexible strip fixed behind a rigid square cylinder. Our results compare favorably with previous work. The accuracy of the IRK time integrators (even for the pressure field of the incompressible flow), their unconditional stability and their non-dissipative nature produced results revealing new, never previously reported, higher-frequency structural forces weakly coupled with the fluid. Secondly, we have explored the propulsive and power-extraction characteristics of rigid and flexible flapping airfoils. For the power extraction, we found an excellent agreement with literature results. A parametric study indicates the optimal motion parameters to obtain high propulsive efficiencies. An optimal flexibility seems to improve power-extraction efficiency. Finally, a survey on flapping propulsion has given initial results for a self-propelled airfoil and has opened a new way of studying propulsive efficiency. (Abstract shortened by UMI.)

  10. Conceptual Models in Health Informatics Research: A Literature Review and Suggestions for Development

    PubMed Central

    2016-01-01

    Background Contributing to health informatics research means using conceptual models that are integrative and explain the research in terms of the two broad domains of health science and information science. However, it can be hard for novice health informatics researchers to find exemplars and guidelines in working with integrative conceptual models. Objectives The aim of this paper is to support the use of integrative conceptual models in research on information and communication technologies in the health sector, and to encourage discussion of these conceptual models in scholarly forums. Methods A two-part method was used to summarize and structure ideas about how to work effectively with conceptual models in health informatics research that included (1) a selective review and summary of the literature of conceptual models; and (2) the construction of a step-by-step approach to developing a conceptual model. Results The seven-step methodology for developing conceptual models in health informatics research explained in this paper involves (1) acknowledging the limitations of health science and information science conceptual models; (2) giving a rationale for one’s choice of integrative conceptual model; (3) explicating a conceptual model verbally and graphically; (4) seeking feedback about the conceptual model from stakeholders in both the health science and information science domains; (5) aligning a conceptual model with an appropriate research plan; (6) adapting a conceptual model in response to new knowledge over time; and (7) disseminating conceptual models in scholarly and scientific forums. Conclusions Making explicit the conceptual model that underpins a health informatics research project can contribute to increasing the number of well-formed and strongly grounded health informatics research projects. 
This explication has distinct benefits for researchers in training, research teams, and researchers and practitioners in information, health, and other disciplines. PMID:26912288

  11. Conceptual Models in Health Informatics Research: A Literature Review and Suggestions for Development.

    PubMed

    Gray, Kathleen; Sockolow, Paulina

    2016-02-24

    Contributing to health informatics research means using conceptual models that are integrative and explain the research in terms of the two broad domains of health science and information science. However, it can be hard for novice health informatics researchers to find exemplars and guidelines in working with integrative conceptual models. The aim of this paper is to support the use of integrative conceptual models in research on information and communication technologies in the health sector, and to encourage discussion of these conceptual models in scholarly forums. A two-part method was used to summarize and structure ideas about how to work effectively with conceptual models in health informatics research that included (1) a selective review and summary of the literature of conceptual models; and (2) the construction of a step-by-step approach to developing a conceptual model. The seven-step methodology for developing conceptual models in health informatics research explained in this paper involves (1) acknowledging the limitations of health science and information science conceptual models; (2) giving a rationale for one's choice of integrative conceptual model; (3) explicating a conceptual model verbally and graphically; (4) seeking feedback about the conceptual model from stakeholders in both the health science and information science domains; (5) aligning a conceptual model with an appropriate research plan; (6) adapting a conceptual model in response to new knowledge over time; and (7) disseminating conceptual models in scholarly and scientific forums. Making explicit the conceptual model that underpins a health informatics research project can contribute to increasing the number of well-formed and strongly grounded health informatics research projects. This explication has distinct benefits for researchers in training, research teams, and researchers and practitioners in information, health, and other disciplines.

  12. Photodiodes integration on a suspended ridge structure VOA using 2-step flip-chip bonding method

    NASA Astrophysics Data System (ADS)

    Kim, Seon Hoon; Kim, Tae Un; Ki, Hyun Chul; Kim, Doo Gun; Kim, Hwe Jong; Lim, Jung Woon; Lee, Dong Yeol; Park, Chul Hee

    2015-01-01

    In this work, we have demonstrated a VOA integrated with mPDs, based on silica-on-silicon PLC and flip-chip bonding technologies. The suspended ridge structure was applied to reduce the power consumption. It achieves an attenuation of 30 dB in open-loop operation with a power consumption below 30 W. We have applied a two-step flip-chip bonding method using passive alignment to perform high-density multi-chip integration on a VOA with eutectic AuSn solder bumps. The average bonding strength of the two-step flip-chip bonding method was about 90 gf.

  13. Fostering supportive learning environments in long-term care: the case of WIN A STEP UP.

    PubMed

    Craft Morgan, Jennifer; Haviland, Sara B; Woodside, M Allyson; Konrad, Thomas R

    2007-01-01

    The education of direct care workers (DCWs) is key to improving job quality and the quality of care in long-term care (LTC). This paper describes the successful integration of a supervisory training program into a continuing education intervention (WIN A STEP UP) for DCWs, identifies the factors that appear to influence the integration of the learning into practice, and discusses the implications for educators. The WIN A STEP UP program achieved its strongest results when the DCW curriculum was paired with Coaching Supervision. Attention to pre-training, training and post-training conditions is necessary to successfully integrate learning into practice in LTC.

  14. Generalized Green's function molecular dynamics for canonical ensemble simulations

    NASA Astrophysics Data System (ADS)

    Coluci, V. R.; Dantas, S. O.; Tewary, V. K.

    2018-05-01

    The need for small integration time steps (~1 fs) in conventional molecular dynamics simulations is an important issue that inhibits the study of physical, chemical, and biological systems on realistic timescales. Additionally, to simulate those systems in contact with a thermal bath, thermostating techniques are usually applied. In this work, we generalize the Green's function molecular dynamics technique to allow simulations within the canonical ensemble. By applying this technique to one-dimensional systems, we were able to correctly describe important thermodynamic properties such as the temperature fluctuations, the temperature distribution, and the velocity autocorrelation function. We show that the proposed technique also allows the use of time steps one order of magnitude larger than those typically used in conventional molecular dynamics simulations. We expect that this technique can be used in long-timescale molecular dynamics simulations.
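The canonical-ensemble diagnostics this abstract names (kinetic temperature and the velocity autocorrelation function) can be computed from any thermostatted trajectory. The sketch below uses a standard Langevin (BAOAB-style) integrator on a one-dimensional harmonic chain in reduced units, not the Green's-function technique itself; all parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, T, gamma, steps = 64, 0.01, 1.5, 1.0, 20000   # reduced units, m = kB = 1

x = np.zeros(N)
v = rng.normal(0.0, np.sqrt(T), N)    # Maxwell-Boltzmann initial velocities

def force(x):
    # nearest-neighbour unit springs with fixed ends
    xp = np.concatenate(([0.0], x, [0.0]))
    return xp[:-2] - 2.0 * x + xp[2:]

c = np.exp(-gamma * dt)
vs = np.empty((steps, N))
for i in range(steps):                               # BAOAB-style splitting
    v += 0.5 * dt * force(x)                         # half kick
    x += 0.5 * dt * v                                # half drift
    v = c * v + np.sqrt((1 - c * c) * T) * rng.normal(size=N)   # OU thermostat
    x += 0.5 * dt * v                                # half drift
    v += 0.5 * dt * force(x)                         # half kick
    vs[i] = v

kinetic_T = (vs * vs).mean()                         # <v^2> = kB*T per dof
vacf = [np.mean(vs[: steps - l] * vs[l:]) for l in (0, 10, 100)]
print(kinetic_T, vacf)
```

The measured kinetic temperature fluctuates around the target T, and the autocorrelation decays from its zero-lag value, which are exactly the checks the abstract applies to the generalized Green's-function scheme.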

  15. The importance of age composition of 12-step meetings as a moderating factor in the relation between young adults' 12-step participation and abstinence.

    PubMed

    Labbe, Allison K; Greene, Claire; Bergman, Brandon G; Hoeppner, Bettina; Kelly, John F

    2013-12-01

    Participation in 12-step mutual help organizations (MHO) is a common continuing care recommendation for adults; however, little is known about the effects of MHO participation among young adults (i.e., ages 18-25 years) for whom the typically older age composition at meetings may serve as a barrier to engagement and benefits. This study examined whether the age composition of 12-step meetings moderated the recovery benefits derived from attending MHOs. Young adults (n=302; 18-24 years; 26% female; 94% White) enrolled in a naturalistic study of residential treatment effectiveness were assessed at intake, and 3, 6, and 12 months later on 12-step attendance, age composition of attended 12-step groups, and treatment outcome (Percent Days Abstinent [PDA]). Hierarchical linear models (HLM) tested the moderating effect of age composition on PDA concurrently and in lagged models controlling for confounds. A significant three-way interaction between attendance, age composition, and time was detected in the concurrent (p=0.002), but not lagged, model (b=0.38, p=0.46). Specifically, a similar age composition was helpful early post-treatment among low 12-step attendees, but became detrimental over time. Treatment and other referral agencies might enhance the likelihood of successful remission and recovery among young adults by locating and initially linking such individuals to age appropriate groups. Once engaged, however, it may be prudent to encourage gradual integration into the broader mixed-age range of 12-step meetings, wherein it is possible that older members may provide the depth and length of sober experience needed to carry young adults forward into long-term recovery. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  16. DSP-Based dual-polarity mass spectrum pattern recognition for bio-detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riot, V; Coffee, K; Gard, E

    2006-04-21

    The Bio-Aerosol Mass Spectrometry (BAMS) instrument analyzes single aerosol particles using a dual-polarity time-of-flight mass spectrometer recording simultaneously spectra of thirty to a hundred thousand points on each polarity. We describe here a real-time pattern recognition algorithm developed at Lawrence Livermore National Laboratory that has been implemented on a nine Digital Signal Processor (DSP) system from Signatec Incorporated. The algorithm first preprocesses independently the raw time-of-flight data through an adaptive baseline removal routine. The next step consists of a polarity-dependent calibration to a mass-to-charge representation, reducing the data to about five hundred to a thousand channels per polarity. The last step is the identification step using a pattern recognition algorithm based on a library of known particle signatures including threat agents and background particles. The identification step includes integrating the two polarities for a final identification determination using a score-based rule tree. This algorithm, operating on multiple channels per polarity and multiple polarities, is well suited for parallel real-time processing. It has been implemented on the PMP8A from Signatec Incorporated, which is a computer-based board that can interface directly to the two one-Giga-Sample digitizers (PDA1000 from Signatec Incorporated) used to record the two polarities of time-of-flight data. By using optimized data separation, pipelining, and parallel processing across the nine DSPs it is possible to achieve a processing speed of up to a thousand particles per second, while maintaining the recognition rate observed on a non-real-time implementation. This embedded system has allowed the BAMS technology to improve its throughput and therefore its sensitivity while maintaining a large dynamic range (number of channels and two polarities), thus maintaining the system's specificity for bio-detection.
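A minimal single-polarity version of the three processing steps (baseline removal, mass-to-charge calibration, library scoring) might look as follows. The moving-minimum baseline estimator, the calibration constants, and the cosine-similarity score are simple stand-ins for the actual DSP routines, and the spectrum is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(4096)                       # time-of-flight sample index

# Synthetic spectrum: drifting baseline + two Gaussian peaks + noise.
baseline = 5.0 + 0.001 * t
signal = (40 * np.exp(-0.5 * ((t - 900) / 4) ** 2)
          + 25 * np.exp(-0.5 * ((t - 2500) / 5) ** 2))
raw = baseline + signal + 0.3 * rng.standard_normal(t.size)

# Step 1 -- adaptive baseline removal: a moving minimum over a +/-200-sample
# window is one simple stand-in for an adaptive baseline routine.
w = 200
est = np.array([raw[max(0, i - w):i + w].min() for i in range(t.size)])
clean = raw - est

# Step 2 -- calibration to mass-to-charge: for a linear TOF analyzer,
# m/z = (a * (t - t0))**2; a and t0 here are made-up constants.
a, t0 = 0.004, 100.0
mz = (a * (t - t0)) ** 2

# Step 3 -- score against a (toy) library signature by cosine similarity.
library = signal / np.linalg.norm(signal)
score = clean @ library / np.linalg.norm(clean)
print(mz[np.argmax(clean)], score)
```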

  17. Latent Heating Retrieval from TRMM Observations Using a Simplified Thermodynamic Model

    NASA Technical Reports Server (NTRS)

    Grecu, Mircea; Olson, William S.

    2003-01-01

    A procedure for the retrieval of hydrometeor latent heating from TRMM active and passive observations is presented. The procedure is based on current methods for estimating multiple-species hydrometeor profiles from TRMM observations. The species include cloud water, cloud ice, rain, and graupel (or snow). A three-dimensional wind field is prescribed based on the retrieved hydrometeor profiles, and, assuming a steady state, the sources and sinks in the hydrometeor conservation equations are determined. Then, the momentum and thermodynamic equations, in which the heating and cooling are derived from the hydrometeor sources and sinks, are integrated one step forward in time. The hydrometeor sources and sinks are reevaluated based on the new wind field, and the momentum and thermodynamic equations are integrated one more step. The reevaluation-integration process is repeated until a steady state is reached. The procedure is tested using cloud model simulations. Cloud-model-derived fields are used to synthesize TRMM observations, from which hydrometeor profiles are derived. The procedure is applied to the retrieved hydrometeor profiles, and the latent heating estimates are compared to the actual latent heating produced by the cloud model. Examples of the procedure's application to real TRMM data are also provided.
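Stripped of the physics, the reevaluation-integration loop described above is a fixed-point iteration: integrate one step, recompute the forcing from the new state, and repeat until the state stops changing. The scalar "forcing" below is an illustrative stand-in for the sources and sinks recomputed from the wind field.

```python
import numpy as np

def forcing(w):
    # stand-in for "sources and sinks re-evaluated from the wind field";
    # the true steady state of this toy model is w* = 2.
    return 1.0 - 0.5 * w

w, dt = 0.0, 0.1
for iteration in range(10000):
    w_new = w + dt * forcing(w)          # integrate one step forward
    if abs(w_new - w) < 1e-10:           # steady state reached
        break
    w = w_new                            # re-evaluate forcing from new state

print(iteration, w)   # converges to the steady state w* = 2
```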

  18. A pilot randomized clinical trial testing integrated 12-Step facilitation (iTSF) treatment for adolescent substance use disorder.

    PubMed

    Kelly, John F; Kaminer, Yifrah; Kahler, Christopher W; Hoeppner, Bettina; Yeterian, Julie; Cristello, Julie V; Timko, Christine

    2017-12-01

    The integration of 12-Step philosophy and practices is common in adolescent substance use disorder (SUD) treatment programs, particularly in North America. However, although numerous experimental studies have tested 12-Step facilitation (TSF) treatments among adults, no studies have tested TSF-specific treatments for adolescents. We tested the efficacy of a novel integrated TSF. Explanatory, parallel-group, randomized clinical trial comparing 10 sessions of either motivational enhancement therapy/cognitive-behavioral therapy (MET/CBT; n = 30) or a novel integrated TSF (iTSF; n = 29), with follow-up assessments at 3, 6 and 9 months following treatment entry. Out-patient addiction clinic in the United States. Adolescents [n = 59; mean age = 16.8 (1.7) years; range = 14-21; 27% female; 78% white]. The iTSF integrated 12-Step with motivational and cognitive-behavioral strategies, and was compared with state-of-the-art MET/CBT for SUD. Primary outcome: percentage days abstinent (PDA); secondary outcomes: 12-Step attendance, substance-related consequences, longest period of abstinence, proportion abstinent/mostly abstinent, psychiatric symptoms. Primary outcome: PDA was not significantly different across treatments [b = 0.08, 95% confidence interval (CI) = -0.08 to 0.24, P = 0.33; Bayes' factor = 0.28]. During treatment, iTSF patients had substantially greater 12-Step attendance, but this advantage declined thereafter (b = -0.87; 95% CI = -1.67 to 0.07, P = 0.03). iTSF did show a significant advantage at all follow-up points for substance-related consequences (b = -0.42; 95% CI = -0.80 to -0.04, P < 0.05; effect size range d = 0.26-0.71). Other secondary outcomes did not differ significantly between treatments, but effect sizes tended to favor iTSF. Throughout the entire sample, greater 12-Step meeting attendance was associated significantly with longer abstinence during (r = 0.39, P = 0.008), and early following (r = 0.30, P = 0.049), treatment.
Compared with motivational enhancement therapy/cognitive-behavioral therapy (MET/CBT), in terms of abstinence, a novel integrated 12-Step facilitation treatment for adolescent substance use disorder (iTSF) showed no greater benefits, but showed benefits in terms of 12-Step attendance and consequences. Given widespread use of combinations of 12-Step, MET and CBT in adolescent community out-patient settings in North America, iTSF may provide an integrated evidence-based option that is compatible with existing practices. © 2017 Society for the Study of Addiction.

  19. Applying the multivariate time-rescaling theorem to neural population models

    PubMed Central

    Gerhard, Felipe; Haslinger, Robert; Pipa, Gordon

    2011-01-01

    Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However, any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based upon the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models which neglect couplings between neurons can erroneously pass the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem and provide a practical step-by-step procedure for applying it to test the sufficiency of neural population models. Using several simple analytically tractable models as well as more complex simulated and real data sets, we demonstrate that important features of the population activity can only be detected using the multivariate extension of the test. PMID:21395436
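The single-train machinery the multivariate test builds on is easy to state in code: rescale interspike intervals by the integrated conditional intensity, map them to uniforms, and compare against the Kolmogorov-Smirnov band. The rate function and constants below are invented for illustration, and the KS statistic is computed by hand with numpy.

```python
import numpy as np

rng = np.random.default_rng(3)

def rate(t):
    # made-up inhomogeneous firing rate (Hz)
    return 20.0 + 15.0 * np.sin(2 * np.pi * t)

# Simulate the inhomogeneous Poisson spike train by thinning.
T, lam_max = 200.0, 35.0
cand = np.cumsum(rng.exponential(1 / lam_max, size=int(3 * lam_max * T)))
cand = cand[cand < T]
spikes = cand[rng.random(cand.size) < rate(cand) / lam_max]

# Time-rescaling: tau_k = Lambda(t_k) - Lambda(t_{k-1}), with
# Lambda(t) = 20 t - (15 / (2 pi)) (cos(2 pi t) - 1) the integrated rate.
Lam = 20.0 * spikes - 15.0 / (2 * np.pi) * (np.cos(2 * np.pi * spikes) - 1)
taus = np.diff(Lam)               # i.i.d. Exp(1) under the true model

# KS statistic against Exp(1) via the uniform transform z = 1 - exp(-tau).
z = np.sort(1.0 - np.exp(-taus))
n = z.size
ks = np.max(np.abs(z - (np.arange(1, n + 1) - 0.5) / n)) + 0.5 / n
print(n, ks, 1.36 / np.sqrt(n))   # ks below the 5% band => model passes
```

The paper's point is that running exactly this univariate check on each neuron separately can pass a population model that ignores couplings, which motivates the multivariate extension.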

  20. Creep Life of Ceramic Components Using a Finite-Element-Based Integrated Design Program (CARES/CREEP)

    NASA Technical Reports Server (NTRS)

    Powers, L. M.; Jadaan, O. M.; Gyekenyesi, J. P.

    1998-01-01

    The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as advanced turbine engine systems. Design lives for such systems can exceed 10,000 hours. The long life requirement necessitates subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this paper is to present a design methodology for predicting the lifetimes of structural components subjected to creep rupture conditions. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep strain distributions (stress relaxation). The creep life of a component is discretized into short time steps, during which the stress and strain distributions are assumed constant. The damage is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. Failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity. The corresponding time will be the creep rupture life for that component. Examples are chosen to demonstrate the Ceramics Analysis and Reliability Evaluation of Structures/CREEP (CARES/CREEP) integrated design program, which is written for the ANSYS finite element package. Depending on the component size and loading conditions, it was found that in real structures one of two competing failure modes (creep or slow crack growth) will dominate. Applications to benchmark problems and engine components are included.
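The damage-accumulation scheme described above amounts to a life-fraction loop: discretize the load history into short steps, add dt / t_f(sigma) of damage per step, and declare failure when the accumulated damage reaches unity. The power-law stress-rupture model and all constants below are illustrative, not the CARES/CREEP calibration.

```python
# Life-fraction sketch of the time-step damage rule; the rupture-life
# law and constants are made up for illustration.
def rupture_life(sigma_mpa):
    # power-law stress-rupture model t_f = A * sigma**(-m), in hours
    A, m = 1.0e12, 4.0
    return A * sigma_mpa ** (-m)

dt = 10.0                       # hours per time step
sigma = 150.0                   # MPa, relaxing slowly over time
damage, t = 0.0, 0.0
while damage < 1.0:
    damage += dt / rupture_life(sigma)   # damage fraction for this step
    t += dt
    sigma *= 0.99999                     # mild stress relaxation each step

print(f"predicted creep-rupture life ~ {t:.0f} h, damage = {damage:.3f}")
```

Because the stress relaxes, each successive step contributes slightly less damage, so the predicted life is a little longer than the constant-stress rupture time, mirroring the role of stress relaxation in the abstract.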

  1. Creep Life of Ceramic Components Using a Finite-Element-Based Integrated Design Program (CARES/CREEP)

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, J. P.; Powers, L. M.; Jadaan, O. M.

    1998-01-01

    The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as advanced turbine systems. Design lives for such systems can exceed 10,000 hours. The long life requirement necessitates subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this paper is to present a design methodology for predicting the lifetimes of structural components subjected to creep rupture conditions. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep strain distributions (stress relaxation). The creep life of a component is discretized into short time steps, during which the stress and strain distributions are assumed constant. The damage is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. Failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity. The corresponding time will be the creep rupture life for that component. Examples are chosen to demonstrate the CARES/CREEP (Ceramics Analysis and Reliability Evaluation of Structures/CREEP) integrated design program, which is written for the ANSYS finite element package. Depending on the component size and loading conditions, it was found that in real structures one of two competing failure modes (creep or slow crack growth) will dominate. Applications to benchmark problems and engine components are included.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fath, L., E-mail: lukas.fath@kit.edu; Hochbruck, M., E-mail: marlis.hochbruck@kit.edu; Singh, C.V., E-mail: chandraveer.singh@utoronto.ca

    Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement, since they require either analytical Hessians or the solution of nonlinear systems arising from constraints. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and ease of implementation in standard software, without Hessians or constraint solves. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice–ice friction, the new filter is shown to speed up the computations of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift below 1% over a 50 ps simulation.

  3. A Low Cost Approach to the Design of Autopilot for Hypersonic Glider

    NASA Astrophysics Data System (ADS)

    Liang, Wang; Weihua, Zhang; Ke, Peng; Donghui, Wang

    2017-12-01

    This paper proposes a novel integrated guidance and control (IGC) approach to improve the autopilot design, at low cost, for a hypersonic glider in the dive and pull-up phase. The main objective is robust and adaptive tracking of the flight path angle (FPA) under severe flight scenarios. Firstly, the nonlinear IGC model is developed with second-order actuator dynamics. Then adaptive command-filtered back-stepping control is implemented to deal with the large aerodynamic coefficient uncertainties, control surface uncertainties and unmatched time-varying disturbances. For the autopilot, a back-stepping sliding mode control is designed to track the control surface deflection, and a nonlinear differentiator is used to avoid directly differentiating the control input. Through a series of 6-DOF numerical simulations, it is shown that the proposed scheme successfully cancels out the large uncertainties and disturbances in tracking different kinds of FPA trajectory. The contribution of this paper lies in the application and determination of a nonlinear integrated design of the guidance and control system for a hypersonic glider.

  4. One-pot aldol condensation and hydrodeoxygenation of biomass-derived carbonyl compounds for biodiesel synthesis.

    PubMed

    Faba, Laura; Díaz, Eva; Ordóñez, Salvador

    2014-10-01

    Integrating reaction steps is of key interest in the development of processes for transforming lignocellulosic materials into drop-in fuels. We propose a procedure for performing the aldol condensation (the reaction between furfural and acetone is taken as a model reaction) and the total hydrodeoxygenation of the resulting condensation adducts in one step, yielding n-alkanes. Different combinations of catalysts (bifunctional catalysts or mechanical mixtures), reaction conditions, and solvents (aqueous and organic) have been tested for performing these reactions in an isothermal batch reactor. The results suggest that the use of bifunctional catalysts and the aqueous phase leads to an effective integration of both reactions: selectivities to n-alkanes higher than 50% were obtained using this catalyst at typical hydrogenation conditions (T=493 K, P=4.5 MPa, 24 h reaction time). The use of organic solvent, carbonaceous supports, or mechanical mixtures of monofunctional catalysts leads to poorer results owing to side effects, mainly hydrogenation of reactants and adsorption processes. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Design of a new integrated chitosan-PAMAM dendrimer biosorbent for heavy metals removing and study of its adsorption kinetics and thermodynamics.

    PubMed

    Zarghami, Zabihullah; Akbari, Ahmad; Latifi, Ali Mohammad; Amani, Mohammad Ali

    2016-04-01

    In this research, different generations of PAMAM-grafted chitosan were successfully synthesized as integrated biosorbents via the step-by-step divergent growth approach of dendrimer synthesis. The synthesized products were utilized as adsorbents for the removal of heavy metals (Pb(2+) in this study) from aqueous solution, and their Pb(2+) removal potential was evaluated. The results showed that products with higher dendrimer generations have a higher adsorption capacity than products with lower generations and than sole chitosan. The adsorption capacity of the generation-3 product is 18 times that of sole chitosan. Thermodynamic and kinetic studies were performed to understand the equilibrium uptake capacity and the kinetic uptake rate, respectively. These studies showed that the Langmuir isotherm model and the pseudo-second-order kinetic model best describe the equilibrium uptake data and the kinetic rate of Pb(2+) uptake, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.
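
    For readers unfamiliar with the pseudo-second-order model used here, a minimal sketch of its linearized fit, t/q = 1/(k·qe²) + t/qe, is shown below on synthetic data; the qe and k values are illustrative, not the paper's measurements:

```python
# Pseudo-second-order kinetic fit sketch, linearized as
#   t/q = 1/(k*qe^2) + t/qe.
# qe (equilibrium uptake) and k (rate constant) are assumed values used
# only to generate synthetic data; they are not from the paper.
qe_true, k_true = 120.0, 0.004
ts = [5, 10, 20, 40, 60, 120, 240]
qs = [qe_true**2 * k_true * t / (1 + qe_true * k_true * t) for t in ts]

# Least-squares line through (t, t/q): slope = 1/qe, intercept = 1/(k*qe^2)
xs, ys = ts, [t / q for t, q in zip(ts, qs)]
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
qe_fit = 1.0 / slope
k_fit = 1.0 / (intercept * qe_fit**2)
print(qe_fit, k_fit)   # recovers qe ~ 120, k ~ 0.004 on exact data
```

    A straight line through (t, t/q) with high correlation is the usual evidence that uptake follows pseudo-second-order kinetics.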

  6. Adaptive Integration of Nonsmooth Dynamical Systems

    DTIC Science & Technology

    2017-10-11

    controlled time stepping method to interactively design running robots. [1] John Shepherd, Samuel Zapolsky, and Evan M. Drumwright, “Fast multi-body... Started working in simulation after attempting to use software like this to test software running on my robots. The libraries that produce these beautiful results have failed at simulating robotic manipulation. Postulate: It is easier to

  7. Measurement methods to build up the digital optical twin

    NASA Astrophysics Data System (ADS)

    Prochnau, Marcel; Holzbrink, Michael; Wang, Wenxin; Holters, Martin; Stollenwerk, Jochen; Loosen, Peter

    2018-02-01

    The realization of the Digital Optical Twin (DOT), which is in short the digital representation of the physical state of an optical system, is particularly useful in the context of an automated assembly process for optical systems. During the assembly process, the physical status of the optical system is continuously measured and compared with the digital model. In case of deviations between the physical state and the digital model, the latter is adapted to match the physical state. To reach this goal, a first step is to identify and evaluate measurement/characterization technologies with respect to their suitability for generating a precise digital twin of an existing optical system. This paper gives an overview of possible characterization methods and, finally, shows first results of the evaluated and compared methods (e.g. spot radius, MTF, Zernike polynomials) for creating a DOT. The focus initially lies on the unequivocalness of the optimization results as well as on the computational time required for the optimization to reach the characterized system state. Possible sources of error are the measurement accuracy (to characterize the system), the execution time of the measurement, the time needed to map the digital to the physical world (optimization step), as well as the interface possibilities for integrating the measurement tool into an assembly cell. Moreover, it is discussed whether the used measurement methods are suitable for `seamless' integration into an assembly cell.

  8. Variational Algorithms for Test Particle Trajectories

    NASA Astrophysics Data System (ADS)

    Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.

    2015-11-01

    The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding center test particle dynamics, discretization of the phase-space action principle yields multistep variational algorithms, in general. Obtaining the desired long-term numerical fidelity requires mitigation of the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy to yield a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.
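
    A minimal example of the variational-integrator idea: discretizing the action of a harmonic oscillator yields the symplectic leapfrog (Störmer-Verlet) update, whose energy error stays bounded over long times. This toy stands in for the Lorentz-force and guiding-center integrators discussed above:

```python
import math

# Harmonic oscillator H = (v^2 + x^2)/2 advanced with leapfrog
# (Stormer-Verlet), a variational integrator obtainable by discretizing
# the action.  Being symplectic, its energy error stays bounded rather
# than drifting, capturing the correct long-time qualitative behavior.
dt, steps = 0.1, 100_000
x, v = 1.0, 0.0
for _ in range(steps):
    v -= 0.5 * dt * x    # half kick (from the discrete Euler-Lagrange eqs.)
    x += dt * v          # drift
    v -= 0.5 * dt * x    # half kick
energy = 0.5 * (v * v + x * x)
print(abs(energy - 0.5))  # bounded O(dt^2) error even after 10^5 steps
```

    A non-symplectic scheme of the same order (e.g. forward Euler) would instead show secular energy growth over this many steps.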

  9. A discontinuous Galerkin method for the shallow water equations in spherical triangular coordinates

    NASA Astrophysics Data System (ADS)

    Läuter, Matthias; Giraldo, Francis X.; Handorf, Dörthe; Dethloff, Klaus

    2008-12-01

    A global model of the atmosphere is presented governed by the shallow water equations and discretized by a Runge-Kutta discontinuous Galerkin method on an unstructured triangular grid. The shallow water equations on the sphere, a two-dimensional surface in R3, are locally represented in terms of spherical triangular coordinates, the appropriate local coordinate mappings on triangles. On every triangular grid element, this leads to a two-dimensional representation of tangential momentum and therefore only two discrete momentum equations. The discontinuous Galerkin method consists of an integral formulation which requires both area (elements) and line (element faces) integrals. Here, we use a Rusanov numerical flux to resolve the discontinuous fluxes at the element faces. A strong stability-preserving third-order Runge-Kutta method is applied for the time discretization. The polynomial space of order k on each curved triangle of the grid is characterized by a Lagrange basis and requires high-order quadrature rules for the integration over elements and element faces. For the presented method no mass matrix inversion is necessary, except in a preprocessing step. The validation of the atmospheric model has been done considering standard tests from Williamson et al. [D.L. Williamson, J.B. Drake, J.J. Hack, R. Jakob, P.N. Swarztrauber, A standard test set for numerical approximations to the shallow water equations in spherical geometry, J. Comput. Phys. 102 (1992) 211-224], unsteady analytical solutions of the nonlinear shallow water equations and a barotropic instability caused by an initial perturbation of a jet stream. A convergence rate of O(Δx) was observed in the model experiments. Furthermore, a numerical experiment is presented, for which the third-order time-integration method limits the model error. Thus, the time step Δt is restricted by both the CFL-condition and accuracy demands.
Conservation of mass was shown up to machine precision and energy conservation converges for both increasing grid resolution and increasing polynomial order k.
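
    The strong stability-preserving third-order Runge-Kutta step can be sketched in Shu-Osher form. Here it advances periodic linear advection with a simple first-order upwind flux standing in for the DG spatial operator; the grid size and CFL number are illustrative:

```python
import math

# SSP third-order Runge-Kutta (Shu-Osher form) applied to u_t + u_x = 0 on
# a periodic grid, with a first-order upwind difference standing in for the
# DG spatial operator.  Grid size and CFL number are illustrative.
N, cfl = 100, 0.5
dx = 1.0 / N
dt = cfl * dx
u = [math.sin(2 * math.pi * (i + 0.5) * dx) for i in range(N)]

def L(w):
    # semi-discrete operator: upwind difference on a periodic grid
    return [-(w[i] - w[i - 1]) / dx for i in range(N)]

t = 0.0
while t < 1.0 - 1e-12:            # one advection period
    l0 = L(u)
    u1 = [u[i] + dt * l0[i] for i in range(N)]
    l1 = L(u1)
    u2 = [0.75 * u[i] + 0.25 * (u1[i] + dt * l1[i]) for i in range(N)]
    l2 = L(u2)
    u = [u[i] / 3.0 + 2.0 / 3.0 * (u2[i] + dt * l2[i]) for i in range(N)]
    t += dt

mass = sum(u) * dx
print(abs(mass))                  # mass conserved to machine precision
```

    Because each stage is a convex combination of forward Euler steps, the scheme inherits the stability of the Euler update; the telescoping flux sum reproduces the machine-precision mass conservation noted in the abstract.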

  10. Parkinson's and Alzheimer's diseases in Costa Rica: a feasibility study toward a national screening program

    PubMed Central

    Wesseling, Catharina; Román, Norbel; Quirós, Indiana; Páez, Laura; García, Vilma; María Mora, Ana; Juncos, Jorge L.; Steenland, Kyle N.

    2013-01-01

    Background The integration of mental and neurologic services in healthcare is a global priority. The universal Social Security of Costa Rica aspires to develop national screening of neurodegenerative disorders among the elderly, as part of the non-communicable disease agenda. Objective This study assessed the feasibility of routine screening for Parkinson's disease (PD) and Alzheimer's disease (AD) within the public healthcare system of Costa Rica. Design The population (aged ≥65) in the catchment areas of two primary healthcare clinics was targeted for motor and cognitive screening during routine annual health check-ups. The screening followed a tiered three-step approach, with increasing specificity. Step 1 involved a two-symptom questionnaire (tremor-at-rest; balance) and a spiral drawing test for motor assessment, as well as a three-word recall and animal category fluency test for cognitive assessment. Step 2 (for those failing Step 1) was a 10-item version of the Unified Parkinson Disease Rating Scale and the Mini-Mental State Examination. Step 3 (for those failing Step 2) was a comprehensive neurologic exam with definitive diagnosis of PD, AD, mild cognitive impairment (MCI), other disorders, or subjects who were healthy. Screening parameters and disease prevalence were calculated. Results Of the 401 screened subjects (80% of target population), 370 (92%), 163 (45%), and 81 (56%) failed in Step 1, Step 2, and Step 3, respectively. Thirty-three, 20, and 35 patients were diagnosed with PD, AD, and MCI, respectively (7 were PD with MCI/AD); 90% were new cases. Step 1 sensitivities of motor and cognitive assessments regarding Step 2 were both 93%, and Step 2 sensitivities regarding definitive diagnosis 100 and 96%, respectively. Specificities for Step 1 motor and cognitive tests were low (23% and 29%, respectively) and for Step 2 tests acceptable (76%, 94%). 
Based on international data, PD prevalence was 3.7 times higher than expected; AD prevalence was as expected. Conclusion Proposed protocol adjustments will increase test specificity and reduce administration time. A routine screening program is feasible within the public healthcare system of Costa Rica. PMID:24378195

  11. Parkinson's and Alzheimer's diseases in Costa Rica: a feasibility study toward a national screening program.

    PubMed

    Wesseling, Catharina; Román, Norbel; Quirós, Indiana; Páez, Laura; García, Vilma; Mora, Ana María; Juncos, Jorge L; Steenland, Kyle N

    2013-12-27

    The integration of mental and neurologic services in healthcare is a global priority. The universal Social Security of Costa Rica aspires to develop national screening of neurodegenerative disorders among the elderly, as part of the non-communicable disease agenda. This study assessed the feasibility of routine screening for Parkinson's disease (PD) and Alzheimer's disease (AD) within the public healthcare system of Costa Rica. The population (aged ≥65) in the catchment areas of two primary healthcare clinics was targeted for motor and cognitive screening during routine annual health check-ups. The screening followed a tiered three-step approach, with increasing specificity. Step 1 involved a two-symptom questionnaire (tremor-at-rest; balance) and a spiral drawing test for motor assessment, as well as a three-word recall and animal category fluency test for cognitive assessment. Step 2 (for those failing Step 1) was a 10-item version of the Unified Parkinson Disease Rating Scale and the Mini-Mental State Examination. Step 3 (for those failing Step 2) was a comprehensive neurologic exam with definitive diagnosis of PD, AD, mild cognitive impairment (MCI), other disorders, or subjects who were healthy. Screening parameters and disease prevalence were calculated. Of the 401 screened subjects (80% of target population), 370 (92%), 163 (45%), and 81 (56%) failed in Step 1, Step 2, and Step 3, respectively. Thirty-three, 20, and 35 patients were diagnosed with PD, AD, and MCI, respectively (7 were PD with MCI/AD); 90% were new cases. Step 1 sensitivities of motor and cognitive assessments regarding Step 2 were both 93%, and Step 2 sensitivities regarding definitive diagnosis 100 and 96%, respectively. Specificities for Step 1 motor and cognitive tests were low (23% and 29%, respectively) and for Step 2 tests acceptable (76%, 94%). Based on international data, PD prevalence was 3.7 times higher than expected; AD prevalence was as expected. 
Proposed protocol adjustments will increase test specificity and reduce administration time. A routine screening program is feasible within the public healthcare system of Costa Rica.

  12. Combination of the pair density approximation and the Takahashi–Imada approximation for path integral Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zillich, Robert E., E-mail: robert.zillich@jku.at

    2015-11-15

    We construct an accurate imaginary time propagator for path integral Monte Carlo simulations for heterogeneous systems consisting of a mixture of atoms and molecules. We combine the pair density approximation, which is highly accurate but feasible only for the isotropic interactions between atoms, with the Takahashi–Imada approximation for general interactions. We present finite temperature simulation results for the energy and structure of molecule–helium clusters X(4He)20 (X=HCCH and LiH), which show a marked improvement over the Trotter approximation, which has a 2nd-order time step bias. We show that the 4th-order corrections of the Takahashi–Imada approximation can also be applied perturbatively to a 2nd-order simulation.

  13. Intracranial Cortical Responses during Visual–Tactile Integration in Humans

    PubMed Central

    Quinn, Brian T.; Carlson, Chad; Doyle, Werner; Cash, Sydney S.; Devinsky, Orrin; Spence, Charles; Halgren, Eric

    2014-01-01

    Sensory integration of touch and sight is crucial to perceiving and navigating the environment. While recent evidence from other sensory modality combinations suggests that low-level sensory areas integrate multisensory information at early processing stages, little is known about how the brain combines visual and tactile information. We investigated the dynamics of multisensory integration between vision and touch using the high spatial and temporal resolution of intracranial electrocorticography in humans. We present a novel, two-step metric for defining multisensory integration. The first step identifies candidate multisensory responses by comparing the bimodal response with the sum of the unisensory responses. The second step eliminates the possibility that double addition of sensory responses could be misinterpreted as interactions. Using these criteria, averaged local field potentials and high-gamma-band power demonstrate a functional processing cascade whereby sensory integration occurs late, both anatomically and temporally, in the temporo–parieto–occipital junction (TPOJ) and dorsolateral prefrontal cortex. Results further suggest two neurophysiologically distinct and temporally separated integration mechanisms in TPOJ, while providing direct evidence for local suppression as a dominant mechanism for synthesizing visual and tactile input. These results tend to support earlier concepts of multisensory integration as relatively late and centered in tertiary multimodal association cortices. PMID:24381279
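
    The first (additive) criterion can be sketched numerically: compare the bimodal response with the sum of the unisensory responses at each time sample. The values below are illustrative, not recorded data:

```python
# Additive criterion sketch: interaction = bimodal - (visual + tactile)
# at each time sample.  All response values are illustrative stand-ins
# for averaged evoked responses, not data from the study.
visual  = [0.0, 0.2, 0.5, 0.4, 0.1]
tactile = [0.0, 0.1, 0.3, 0.3, 0.1]
bimodal = [0.0, 0.25, 0.6, 0.5, 0.15]

interaction = [vt - (v + tc) for vt, v, tc in zip(bimodal, visual, tactile)]
# Negative values indicate sub-additive (suppressive) integration, the
# dominant mechanism the study reports in TPOJ.
print(interaction)
```

    The study's second step then rules out samples where apparent interactions merely reflect double-counted common components.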

  14. Integrable Floquet dynamics, generalized exclusion processes and "fused" matrix ansatz

    NASA Astrophysics Data System (ADS)

    Vanicat, Matthieu

    2018-04-01

    We present a general method for constructing integrable stochastic processes, with two-step discrete time Floquet dynamics, from the transfer matrix formalism. The models can be interpreted as a discrete time parallel update. The method can be applied for both periodic and open boundary conditions. We also show how the stationary distribution can be built as a matrix product state. As an illustration we construct parallel discrete time dynamics associated with the R-matrix of the SSEP and of the ASEP, and provide the associated stationary distributions in a matrix product form. We use this general framework to introduce new integrable generalized exclusion processes, where a fixed number of particles is allowed on each lattice site, in contrast to the (single particle) exclusion process models. They are constructed using the fusion procedure of R-matrices (and K-matrices for open boundary conditions) for the SSEP and ASEP. We develop a new method, which we call the "fused" matrix ansatz, to build the stationary distribution explicitly in a matrix product form. We use this algebraic structure to compute physical observables such as correlation functions and the mean particle current.

  15. Integrated response toward HIV: a health promotion case study from China.

    PubMed

    Jiang, Zhen; Wang, Debin; Yang, Sen; Duan, Mingyue; Bu, Pengbin; Green, Andrew; Zhang, Xuejun

    2011-06-01

    Integrated HIV response refers to a formalized, collaborative process among organizations in communities with HIV at-risk populations. It is both a comprehensive and a flexible scheme, which may include community-based environment promotion, skill coalition, fund linkage, human resource collaboration and joint service systems for both HIV prevention and control. It enables decisions and actions to respond over time. In 1997, the Chinese government developed a 10-year HIV project supported by a World Bank loan (H9-HIV/AIDS/STIs). It was the first integrated STI/HIV intervention project in China and provides a unique opportunity to explore long-term comprehensive STI/HIV intervention in a low-middle income country setting. Significant outcomes were identified as the development and promotion of the national strategic plan and its ongoing implementation; positive changes in knowledge, behavior and STI/HIV prevalence rates; and valuable experience in managing integrated HIV/STI intervention projects. Essential factors for the success of the project and the key tasks for the next step were also identified, including well-designed interventions in rural and low-income regions, a unified program evaluation framework, and real-time information collection and assessment.

  16. [Intelligent watch system for health monitoring based on Bluetooth low energy technology].

    PubMed

    Wang, Ji; Guo, Hailiang; Ren, Xiaoli

    2017-08-01

    According to the development status of wearable technology and the demand for intelligent health monitoring, we studied a multi-function integrated smart watch solution and its key technologies. First of all, high-density sensor integration, Bluetooth low energy (BLE) and mobile communication technology were integrated and used in development practice. Secondly, for the hardware design of the system, we chose a scheme with high integration density and cost-effective computer modules and chips. Thirdly, we used the real-time operating system FreeRTOS to develop a friendly graphical interface interacting with the touch screen. At last, high-performance application software, which connects with the BLE hardware wirelessly and synchronizes data, was developed based on the Android system. The functions of this system include a real-time calendar clock, telephone messages, address book management, step counting, heart rate and sleep quality monitoring and so on. Experiments showed that the data collection accuracy of the various sensors, the system's data transmission capacity, and the overall power consumption satisfy the production standard. Moreover, the system runs stably with low power consumption, and can realize intelligent health monitoring effectively.

  17. Evaluating quality of patient care communication in integrated care settings: a mixed method approach.

    PubMed

    Gulmans, J; Vollenbroek-Hutten, M M R; Van Gemert-Pijnen, J E W C; Van Harten, W H

    2007-10-01

    Owing to the involvement of multiple professionals from various institutions, integrated care settings are prone to suboptimal patient care communication. To assure continuity, communication gaps should be identified for targeted improvement initiatives. However, available assessment methods are often one-sided evaluations not appropriate for integrated care settings. We developed an evaluation approach that takes into account the multiple communication links and evaluation perspectives inherent to these settings. In this study, we describe this approach, using the integrated care setting of Cerebral Palsy as an illustration. The approach follows a three-step mixed design in which the results of each step are used to mark out the subsequent step's focus. The first step, a patient questionnaire, aims to identify quality gaps experienced by patients, comparing their expectations and experiences with respect to patient-professional and inter-professional communication. The resulting gaps form the input of in-depth interviews with a subset of patients to evaluate underlying factors of ineffective communication. The resulting factors form the input of the final step's focus group meetings with professionals to corroborate and complete the findings. By combining methods, the presented approach aims to minimize the limitations inherent to the application of single methods. The comprehensiveness of the approach enables its applicability in various integrated care settings. Its sequential design allows for in-depth evaluation of relevant quality gaps. Further research is needed to evaluate the approach's feasibility in practice. In our subsequent study, we present the results of the approach in the integrated care setting of children with Cerebral Palsy in three Dutch care regions.

  18. Extending molecular simulation time scales: Parallel in time integrations for high-level quantum chemistry and complex force representations

    NASA Astrophysics Data System (ADS)

    Bylaska, Eric J.; Weare, Jonathan Q.; Weare, John H.

    2013-08-01

    Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator, f (e.g., Verlet algorithm), is available to propagate the system from time t_i (trajectory positions and velocities x_i = (r_i, v_i)) to time t_{i+1} (x_{i+1}) by x_{i+1} = f_i(x_i), the dynamics problem spanning an interval from t_0…t_M can be transformed into a root finding problem, F(X) = [x_i - f(x_{i-1})]_{i=1,…,M} = 0, for the trajectory variables. The root finding problem is solved using a variety of root finding techniques, including quasi-Newton and preconditioned quasi-Newton schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsening time-steps provide preconditioners for the root finding problem. However, for MD and AIMD simulations, such preconditioners are not required to obtain reasonable convergence and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and a HCl + 4H2O AIMD simulation at the MP2 level. The maximum speedup (serial execution time/parallel execution time) obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. For the AIMD MP2 simulations, the algorithms achieved speedups of up to 14.3. 
The parallel in time algorithms can be implemented in a distributed computing environment using very slow transmission control protocol/Internet protocol networks. Scripts written in Python that make calls to a precompiled quantum chemistry package (NWChem) are demonstrated to provide an actual speedup of 8.2 for a 2.5 ps AIMD simulation of HCl + 4H2O at the MP2/6-31G* level. Implemented in this way these algorithms can be used for long time high-level AIMD simulations at a modest cost using machines connected by very slow networks such as WiFi, or in different time zones connected by the Internet. The algorithms can also be used with programs that are already parallel. Using these algorithms, we are able to reduce the cost of a MP2/6-311++G(2d,2p) simulation that had reached its maximum possible speedup in the parallelization of the electronic structure calculation from 32 s/time step to 6.9 s/time step.
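
    The trajectory-as-root-finding formulation can be sketched on a scalar ODE. Here a plain fixed-point sweep over the whole trajectory (one entry per would-be processor, all updated from the previous iterate) stands in for the quasi-Newton solvers used in the paper:

```python
import math

# Trajectory integration recast as root finding: for x' = -x with exact
# one-step propagator f(x) = exp(-dt)*x, the trajectory solves
#   F(X) = [x_i - f(x_{i-1})]_{i=1..M} = 0.
# A fixed-point sweep updates every entry simultaneously from the previous
# iterate (one entry per would-be processor); the paper uses quasi-Newton.
dt, M = 0.1, 50
x0 = 1.0
f = lambda x: math.exp(-dt) * x

X = [x0] * M                      # crude initial guess for all time steps
for _ in range(M):                # M sweeps suffice for this toy problem
    prev = [x0] + X[:-1]
    X = [f(p) for p in prev]      # all entries updated in parallel fashion

preds = [x0] + X[:-1]
residual = max(abs(X[i] - f(preds[i])) for i in range(M))
print(residual, abs(X[-1] - math.exp(-dt * M)))
```

    Each sweep propagates correct values one step further along the trajectory, so the residual vanishes after at most M sweeps; a good preconditioner or quasi-Newton update reduces that count dramatically, which is where the parallel speedup comes from.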

  19. Model-Based Systems

    NASA Technical Reports Server (NTRS)

    Frisch, Harold P.

    2007-01-01

    Engineers who design systems using text specification documents focus their work upon the completed system to meet performance, time and budget goals. Consistency and integrity are difficult to maintain within text documents for a single complex system, and more difficult to maintain as several systems are combined into higher-level systems, are maintained over decades, and evolve technically and in performance through updates. This system design approach frequently results in major changes during the system integration and test phase, and in time and budget overruns. Engineers who build system specification documents within a model-based systems environment go a step further and aggregate all of the data. They interrelate all of the data to ensure consistency and integrity. After the model is constructed, the various system specification documents are prepared, all from the same database. The consistency and integrity of the model are assured; therefore, the consistency and integrity of the various specification documents are ensured. This article attempts to define model-based systems relative to such an environment. The intent is to expose the complexity of the enabling problem by outlining what is needed, why it is needed and how these needs are being addressed by international standards writing teams.

  20. Pseudo-time algorithms for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Turkel, E.

    1986-01-01

    A pseudo-time method is introduced to integrate the compressible Navier-Stokes equations to a steady state. This method is a generalization of a method used by Crocco and also by Allen and Cheng. We show that, for a simple heat equation, this is just a renormalization of the time. For a convection-diffusion equation the renormalization depends only on the viscous terms. We implement the method for the Navier-Stokes equations using a Runge-Kutta type algorithm. This permits the time step to be chosen based on the inviscid model only. We also discuss the use of residual smoothing when viscous terms are present.
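
    A minimal sketch of pseudo-time marching: a multistage Runge-Kutta smoother integrates du/dτ = R(u) until the residual of a steady 1-D convection-diffusion model problem vanishes. The stage coefficients and grid are illustrative, and unlike the paper's renormalization, this plain version still respects the viscous step limit:

```python
# Pseudo-time marching sketch: a 4-stage Runge-Kutta smoother drives the
# steady 1-D convection-diffusion problem
#   a*u_x = nu*u_xx,  u(0) = 0, u(1) = 1
# to convergence.  Stage coefficients and grid are illustrative; this plain
# version still obeys the viscous step restriction that the paper's
# renormalized pseudo-time method removes.
N = 50
a, nu = 1.0, 0.05
dx = 1.0 / N
dtau = 0.2 * dx                        # pseudo-time step (stable here)
alphas = [0.25, 1.0 / 3.0, 0.5, 1.0]   # standard multistage coefficients

u = [i * dx for i in range(N + 1)]     # initial guess: linear profile

def R(w):
    # steady residual with central differences; Dirichlet ends stay fixed
    r = [0.0] * (N + 1)
    for i in range(1, N):
        conv = a * (w[i + 1] - w[i - 1]) / (2 * dx)
        diff = nu * (w[i + 1] - 2 * w[i] + w[i - 1]) / dx**2
        r[i] = diff - conv
    return r

for _ in range(5000):                  # march in pseudo-time
    u0 = list(u)
    for alpha in alphas:
        r = R(u)
        u = [u0[i] + alpha * dtau * r[i] for i in range(N + 1)]

res = max(abs(v) for v in R(u))
print(res)                             # residual driven toward roundoff
```

    Only the converged steady state matters, so the pseudo-time path need not be time-accurate; this is what frees the method to rescale the time step for faster convergence.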

  1. A Numerical Scheme for Ordinary Differential Equations Having Time Varying and Nonlinear Coefficients Based on the State Transition Matrix

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2002-01-01

    A variable order method of integrating initial-value ordinary differential equations, based on the state transition matrix, has been developed. The method has been evaluated for linear time-variant and nonlinear systems of equations. While it is more complex than most other methods, it produces exact solutions at arbitrary time step size when the time variation of the system can be modeled exactly by a polynomial. Solutions to several nonlinear problems exhibiting chaotic behavior have been computed. The accuracy of the method has been demonstrated by comparison with an exact solution and with solutions obtained by established methods.
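
    The key property exploited here is that, for a linear time-invariant system x' = Ax, the state transition matrix Φ(Δt) = exp(AΔt) propagates the state exactly at any step size. A sketch for the harmonic oscillator A = [[0, 1], [-1, 0]], whose matrix exponential is a rotation:

```python
import math

# For x' = A x with A = [[0, 1], [-1, 0]] (harmonic oscillator), the state
# transition matrix Phi(dt) = exp(A*dt) is a rotation by angle dt, so the
# update x_{n+1} = Phi(dt) x_n is exact at ANY step size, unlike
# conventional finite-difference integrators.
dt, steps = 2.0, 100               # deliberately large time step
c, s = math.cos(dt), math.sin(dt)
x, v = 1.0, 0.0
for _ in range(steps):
    x, v = c * x + s * v, -s * x + c * v
t = dt * steps
print(abs(x - math.cos(t)), abs(v + math.sin(t)))   # exact up to roundoff
```

    A Runge-Kutta scheme at this step size would be wildly inaccurate; the state-transition-matrix update accumulates only roundoff error.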

  2. Extending molecular simulation time scales: Parallel in time integrations for high-level quantum chemistry and complex force representations.

    PubMed

    Bylaska, Eric J; Weare, Jonathan Q; Weare, John H

    2013-08-21

    Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator, f (e.g., Verlet algorithm), is available to propagate the system from time t_i (trajectory positions and velocities x_i = (r_i, v_i)) to time t_{i+1} (x_{i+1}) by x_{i+1} = f_i(x_i), the dynamics problem spanning an interval from t_0…t_M can be transformed into a root finding problem, F(X) = [x_i - f(x_{i-1})]_{i=1,…,M} = 0, for the trajectory variables. The root finding problem is solved using a variety of root finding techniques, including quasi-Newton and preconditioned quasi-Newton schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsening time-steps provide preconditioners for the root finding problem. However, for MD and AIMD simulations, such preconditioners are not required to obtain reasonable convergence and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and a HCl + 4H2O AIMD simulation at the MP2 level. The maximum speedup (serial execution time/parallel execution time) obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. For the AIMD MP2 simulations, the algorithms achieved speedups of up to 14.3. 
The parallel in time algorithms can be implemented in a distributed computing environment using very slow transmission control protocol/Internet protocol networks. Scripts written in Python that make calls to a precompiled quantum chemistry package (NWChem) are demonstrated to provide an actual speedup of 8.2 for a 2.5 ps AIMD simulation of HCl + 4H2O at the MP2/6-31G* level. Implemented in this way these algorithms can be used for long time high-level AIMD simulations at a modest cost using machines connected by very slow networks such as WiFi, or in different time zones connected by the Internet. The algorithms can also be used with programs that are already parallel. Using these algorithms, we are able to reduce the cost of a MP2/6-311++G(2d,2p) simulation that had reached its maximum possible speedup in the parallelization of the electronic structure calculation from 32 s/time step to 6.9 s/time step.

  3. Intra-Hour Dispatch and Automatic Generator Control Demonstration with Solar Forecasting - Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coimbra, Carlos F. M.

    2016-02-25

    In this project we address multiple resource integration challenges associated with increasing levels of solar penetration that arise from the variability and uncertainty in solar irradiance. We model the SMUD service region as its own balancing region, and develop an integrated, real-time operational tool that takes solar-load forecast uncertainties into consideration and commits optimal energy resources and reserves for intra-hour and intra-day decisions. The primary objectives of this effort are to reduce power system operation cost by committing an appropriate amount of energy resources and reserves, as well as to provide operators a prediction of the generation fleet's behavior in real time for realistic PV penetration scenarios. The proposed methodology includes the following steps: clustering analysis of the expected solar variability per region for the SMUD system; day-ahead (DA) and real-time (RT) load forecasts for the entire service area; 1 year of intra-hour CPR forecasts for cluster centers; 1 year of smart re-forecasting CPR forecasts in real time for determination of irreducible errors; and uncertainty quantification of integrated solar-load for both distributed and central-station (selected locations within the service region) PV generation.

  4. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently, we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems.
BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes. Depending on the application, we find that different time stepping methods are optimal. Several of the time integration schemes exploit the block-based granularity of the grid structure. The framework and the adaptive algorithms enable physics based space weather modeling and even forecasting.

  5. Integrating ethics in health technology assessment: many ways to Rome.

    PubMed

    Hofmann, Björn; Oortwijn, Wija; Bakke Lysdahl, Kristin; Refolo, Pietro; Sacchini, Dario; van der Wilt, Gert Jan; Gerhardus, Ansgar

    2015-01-01

    The aim of this study was to identify and discuss appropriate approaches to integrating ethical inquiry in health technology assessment (HTA). The key question is how ethics can be integrated in HTA. This is addressed in two steps: by investigating what it means to integrate ethics in HTA, and by assessing how suitable the various methods in ethics are for integration in HTA under these meanings. In the first step, we found that integrating ethics can mean that ethics is (a) subsumed under or (b) combined with other parts of the HTA process; that it can be (c) coordinated with other parts; or that (d) ethics actively interacts with and changes other parts of the HTA process. For the second step, we found that the various methods in ethics have different merits with respect to the four conceptions of integration in HTA. Traditional approaches in moral philosophy tend to be best suited to being subsumed or combined, while processual approaches, which stay close to the HTA or implementation process, appear best suited to coordinated and interactive types of integration. The article provides a guide for choosing the ethics approach that appears most appropriate for the goals and process of a particular HTA.

  6. A Vertically Lagrangian Finite-Volume Dynamical Core for Global Models

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann

    2003-01-01

    A finite-volume dynamical core with a terrain-following Lagrangian control-volume discretization is described. The vertically Lagrangian discretization reduces the dimensionality of the physical problem from three to two, with the resulting dynamical system closely resembling that of the shallow water dynamical system. The 2D horizontal-to-Lagrangian-surface transport and dynamical processes are then discretized using the genuinely conservative flux-form semi-Lagrangian algorithm. Time marching is split-explicit, with a large time step for scalar transport and a small fractional time step for the Lagrangian dynamics, which permits the accurate propagation of fast waves. A mass, momentum, and total energy conserving algorithm is developed for mapping the state variables periodically from the floating Lagrangian control-volume to an Eulerian terrain-following coordinate for dealing with physical parameterizations and to prevent severe distortion of the Lagrangian surfaces. Deterministic baroclinic wave growth tests and long-term integrations using the Held-Suarez forcing are presented. Impact of the monotonicity constraint is discussed.

  7. Step-by-step seeding procedure for preparing HKUST-1 membrane on porous α-alumina support.

    PubMed

    Nan, Jiangpu; Dong, Xueliang; Wang, Wenjin; Jin, Wanqin; Xu, Nanping

    2011-04-19

    Metal-organic framework (MOF) membranes have attracted considerable attention because of their striking advantages in small-molecule separation. The preparation of an integrated MOF membrane is still a major challenge. Depositing a uniform seed layer on a support for secondary growth is a main route to obtaining an integrated MOF membrane. A novel seeding method to prepare HKUST-1 (known as Cu3(btc)2) membranes on porous α-alumina supports is reported. The in situ production of the seed layer was realized in a step-by-step fashion via the coordination of H3btc and Cu2+ on an α-alumina support. The formation process of the seed layer was observed by ultraviolet-visible absorption spectroscopy and atomic force microscopy. An integrated HKUST-1 membrane could be synthesized by secondary hydrothermal growth on the seeded support. The gas permeation performance of the membrane was evaluated. © 2011 American Chemical Society

  8. Pre-relaxation in weakly interacting models

    NASA Astrophysics Data System (ADS)

    Bertini, Bruno; Fagotti, Maurizio

    2015-07-01

    We consider time evolution in models close to integrable points with hidden symmetries that generate infinitely many local conservation laws that do not commute with one another. The system is expected to (locally) relax to a thermal ensemble if integrability is broken, or to a so-called generalised Gibbs ensemble if unbroken. In some circumstances expectation values exhibit quasi-stationary behaviour long before their typical relaxation time. For integrability-breaking perturbations, these are also called pre-thermalisation plateaux, and emerge e.g. in the strong coupling limit of the Bose-Hubbard model. As a result of the hidden symmetries, quasi-stationarity appears also in integrable models, for example in the Ising limit of the XXZ model. We investigate a weak coupling limit, identify a time window in which the effects of the perturbations become significant and solve the time evolution through a mean-field mapping. As an explicit example we study the XYZ spin-1/2 chain with additional perturbations that break integrability. One of the most intriguing results of the analysis is the appearance of persistent oscillatory behaviour. To unravel its origin, we study in detail a toy model: the transverse-field Ising chain with an additional nonlocal interaction proportional to the square of the transverse spin per unit length (2013 Phys. Rev. Lett. 111 197203). Despite being nonlocal, this belongs to a class of models that emerge as intermediate steps of the mean-field mapping and shares many dynamical properties with the weakly interacting models under consideration.

  9. A new algorithm for modeling friction in dynamic mechanical systems

    NASA Technical Reports Server (NTRS)

    Hill, R. E.

    1988-01-01

    A method of modeling friction forces that impede the motion of parts of dynamic mechanical systems is described. Conventional methods, in which the friction effect is assumed to be a constant force or torque in a direction opposite to the relative motion, are applicable only to those cases where applied forces are large in comparison to the friction, and where there is little interest in system behavior close to the times of transitions through zero velocity. An algorithm is described that provides accurate determination of friction forces over a wide range of applied force and velocity conditions. The method avoids the simulation errors resulting from a finite integration interval used in connection with a conventional friction model, as is the case in many digital computer-based simulations. The algorithm incorporates a predictive calculation based on initial conditions of motion, externally applied forces, inertia, and integration step size. The predictive calculation in connection with an external integration process provides an accurate determination of both static and Coulomb friction forces and resulting motions in dynamic simulations. Accuracy of the results is improved over that obtained with conventional methods, and a relatively large integration step size is permitted. A function block for incorporation in a specific simulation program is described. The general form of the algorithm facilitates implementation with various programming languages such as FORTRAN or C, as well as with other simulation programs.
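The predictive zero-crossing logic described above can be sketched in a few lines. This is a hedged, minimal stick-slip model, not the paper's function block: the masses, limits, and step size are invented, and the coupling to an external integration process is simplified to a self-contained step function.

```python
def friction_step(x, v, F_app, m, F_static, F_coulomb, dt):
    """Advance one step with stick-slip friction and a predictive stop check."""
    if v == 0.0:
        # Stiction: friction balances the applied force up to the static limit.
        if abs(F_app) <= F_static:
            return x, 0.0
        F_fric = -F_coulomb if F_app > 0 else F_coulomb
    else:
        F_fric = -F_coulomb if v > 0 else F_coulomb
    a = (F_app + F_fric) / m
    v_new = v + a * dt
    # Predictive check: if this step would carry the velocity through zero,
    # stop exactly at v = 0 instead of letting the finite interval overshoot.
    if v != 0.0 and v * v_new < 0.0:
        t_stop = -v / a
        return x + v * t_stop + 0.5 * a * t_stop**2, 0.0
    return x + v * dt + 0.5 * a * dt**2, v_new

# A block pushed below the static-friction limit never creeps:
x, v = 0.0, 0.0
for _ in range(100):
    x, v = friction_step(x, v, F_app=0.5, m=1.0, F_static=1.0, F_coulomb=0.8, dt=0.01)
print(x, v)  # -> 0.0 0.0
```

The in-step sign-change check is what removes the spurious chatter around zero velocity that a naive Coulomb model produces with a fixed integration interval.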

  10. Microscope Integrated Intraoperative Spectral Domain Optical Coherence Tomography for Cataract Surgery: Uses and Applications.

    PubMed

    Das, Sudeep; Kummelil, Mathew Kurian; Kharbanda, Varun; Arora, Vishal; Nagappa, Somshekar; Shetty, Rohit; Shetty, Bhujang K

    2016-05-01

    To demonstrate the uses and applications of a microscope integrated intraoperative Optical Coherence Tomography in Micro Incision Cataract Surgery (MICS) and Femtosecond Laser Assisted Cataract Surgery (FLACS). Intraoperative real time imaging using the RESCAN™ 700 (Carl Zeiss Meditec, Oberkochen, Germany) was done for patients undergoing MICS as well as FLACS. The OCT videos were reviewed at each step of the procedure and the findings were noted and analyzed. Microscope Integrated Intraoperative Optical Coherence Tomography was found to be beneficial during all the critical steps of cataract surgery. We were able to qualitatively assess wound morphology in clear corneal incisions, in terms of subclinical Descemet's detachments, tears in the inner or outer wound lips, wound gaping at the end of surgery and in identifying the adequacy of stromal hydration, for both FLACS as well as MICS. It also enabled us to segregate true posterior polar cataracts from suspected cases intraoperatively. Deciding the adequate depth of trenching was made simpler with direct visualization. The final position of the intraocular lens in the capsular bag and the lack of bioadhesivity of hydrophobic acrylic lenses were also observed. Although Microscope Integrated Intraoperative Optical Coherence Tomography is still in the early stages of its application to cataract surgery, this initial assessment shows a very promising future role for the technology, both in intraoperative decision making and for training purposes.

  11. Extending molecular simulation time scales: Parallel in time integrations for high-level quantum chemistry and complex force representations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bylaska, Eric J.; Weare, Jonathan Q.; Weare, John H.

    2013-08-21

    Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator f (e.g., the Verlet algorithm) is available to propagate the system from time t_i (trajectory positions and velocities x_i = (r_i, v_i)) to time t_{i+1} (x_{i+1}) by x_{i+1} = f_i(x_i), the dynamics problem spanning the interval t_0 ... t_M can be transformed into a root-finding problem, F(X) = [x_i - f(x_{i-1})]_{i=1,...,M} = 0, for the trajectory variables. The root-finding problem is solved using a variety of optimization techniques, including quasi-Newton and preconditioned quasi-Newton optimization schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root-finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsened time steps provide preconditioners for the root-finding problem. However, for MD and AIMD simulations such preconditioners are not required to obtain reasonable convergence, and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present-day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and an HCl + 4H2O AIMD simulation at the MP2 level. The maximum speedup obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. For the AIMD MP2 simulations the algorithms achieved speedups of up to 14.3. The parallel in time algorithms can be implemented in a distributed computing environment using very slow TCP/IP networks. 
Scripts written in Python that make calls to a precompiled quantum chemistry package (NWChem) are demonstrated to provide an actual speedup of 8.2 for a 2.5 ps AIMD simulation of HCl + 4H2O at the MP2/6-31G* level. Implemented in this way, these algorithms can be used for long-time, high-level AIMD simulations at a modest cost using machines connected by very slow networks such as WiFi, or in different time zones connected by the Internet. The algorithms can also be used with programs that are already parallel. By using these algorithms we are able to reduce the cost of an MP2/6-311++G(2d,2p) simulation that had reached its maximum possible speedup in the parallelization of the electronic structure calculation from 32 seconds per time step to 6.9 seconds per time step.
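The root-finding formulation is easy to sketch on a toy problem. Below is a hedged 1D harmonic-oscillator version: the propagator f is one velocity-Verlet step, and the trajectory is recovered by Jacobi-style fixed-point sweeps on F(X) = [x_i - f(x_{i-1})] = 0, where every entry of the new iterate depends only on the old iterate and so could be updated by its own processor. The quasi-Newton solvers and preconditioners of the actual method are omitted.

```python
import numpy as np

dt = 0.05
def f(x):
    """One velocity-Verlet step for a 1D harmonic oscillator (a = -r)."""
    r, v = x
    a = -r
    r_new = r + v * dt + 0.5 * a * dt**2
    a_new = -r_new
    return np.array([r_new, v + 0.5 * (a + a_new) * dt])

M = 200
x0 = np.array([1.0, 0.0])

# Serial reference trajectory: x_{i+1} = f(x_i)
ref = [x0]
for _ in range(M):
    ref.append(f(ref[-1]))
ref = np.array(ref)

# Jacobi-style fixed-point sweeps on F(X) = [x_i - f(x_{i-1})] = 0.
# Each sweep evaluates all M propagations from the OLD iterate, so the
# M force evaluations are independent and parallelizable.
X = np.tile(x0, (M, 1))                  # crude initial guess
for sweep in range(M):
    Xn = np.empty_like(X)
    Xn[0] = f(x0)
    for i in range(1, M):
        Xn[i] = f(X[i - 1])
    if np.max(np.abs(Xn - X)) < 1e-14:
        break
    X = Xn

print(np.max(np.abs(X - ref[1:])))       # converges to the serial trajectory
```

A plain sweep like this needs O(M) iterations, no better than serial integration; the abstract's point is that quasi-Newton updates and cheap preconditioners (simplified forces, coarser steps) cut the iteration count enough to yield real parallel speedup.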

  12. Large-eddy simulation of a backward facing step flow using a least-squares spectral element method

    NASA Technical Reports Server (NTRS)

    Chan, Daniel C.; Mittal, Rajat

    1996-01-01

    We report preliminary results obtained from the large eddy simulation of a backward facing step at a Reynolds number of 5100. The numerical platform is based on a high-order Legendre spectral element spatial discretization and a least-squares time integration scheme. A non-reflective outflow boundary condition is in place to minimize the effect of downstream influence. A Smagorinsky model with Van Driest near-wall damping is used for sub-grid-scale modeling. Comparisons of mean velocity profiles and wall pressure show good agreement with benchmark data. More studies are needed to evaluate the sensitivity of this method to numerical parameters before it is applied to complex engineering problems.
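The sub-grid closure named in the abstract can be illustrated in a few lines. This is a generic textbook sketch of the Smagorinsky eddy viscosity with Van Driest near-wall damping, not the paper's spectral element implementation; the constants, filter width, and log-law shear profile are all assumed for the demo.

```python
import numpy as np

Cs, kappa, A_plus = 0.17, 0.41, 26.0      # textbook constants (assumed)
y_plus = np.linspace(1.0, 200.0, 64)      # wall-normal distance, wall units
dudy = 1.0 / (kappa * y_plus)             # log-law mean shear (assumed)
Delta = 5.0                               # filter width in wall units (assumed)

f_vd = 1.0 - np.exp(-y_plus / A_plus)     # Van Driest damping factor
nu_t = (Cs * Delta * f_vd) ** 2 * np.abs(dudy)   # Smagorinsky eddy viscosity

# The damping factor drives the eddy viscosity toward zero at the wall,
# where the undamped Smagorinsky model would be spuriously large.
print(nu_t[0], nu_t[-1])
```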

  13. Improving equitable access to imaging under universal-access medicine: the ontario wait time information program and its impact on hospital policy and process.

    PubMed

    Kielar, Ania Z; El-Maraghi, Robert H; Schweitzer, Mark E

    2010-08-01

    In Canada, equal access to health care is the goal, but this is associated with wait times. Wait times should be fair rather than uniform, taking into account the urgency of the problem as well as the time an individual has already waited. In November 2004, the Ontario government began addressing this issue. One of the first steps was to institute benchmarks reflecting "acceptable" wait times for CT and MRI. A public Web site was developed indicating wait times at each Local Health Integration Network. Since starting the Wait Time Information Program, there has been a sustained reduction in wait times for Ontarians requiring CT and MRI. The average wait time for a CT scan went from 81 days in September 2005 to 47 days in September 2009. For MRI, the resulting wait time was reduced from 120 to 105 days. Increased scan volumes have been achieved by purchasing new CT and MRI scanners, expanding hours of operation, and improving patient throughput using strategies learned from the Lean initiative, based on Toyota's manufacturing philosophy for car production. Institution-specific changes in booking procedures have been implemented. Concurrently, government guidelines have been developed to ensure accountability for monies received. The Ontario Wait Time Information Program is an innovative first step in improving fair and equitable access to publicly funded imaging services. There have been reductions in wait times for both CT and MRI. As various new processes are implemented, further review will be necessary for each step to determine its individual efficacy. Copyright 2010 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  14. A global multilevel atmospheric model using a vector semi-Lagrangian finite-difference scheme. I - Adiabatic formulation

    NASA Technical Reports Server (NTRS)

    Bates, J. R.; Moorthi, S.; Higgins, R. W.

    1993-01-01

    An adiabatic global multilevel primitive equation model using a two time-level, semi-Lagrangian semi-implicit finite-difference integration scheme is presented. A Lorenz grid is used for vertical discretization and a C grid for the horizontal discretization. The momentum equation is discretized in vector form, thus avoiding problems near the poles. The 3D model equations are reduced by a linear transformation to a set of 2D elliptic equations, whose solution is found by means of an efficient direct solver. The model (with minimal physics) is integrated for 10 days starting from an initialized state derived from real data. A resolution of 16 levels in the vertical is used, with various horizontal resolutions. The model is found to be stable and efficient, and to give realistic output fields. Integrations with time steps of 10 min, 30 min, and 1 h are compared, and the differences are found to be acceptable.
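The practical appeal of semi-Lagrangian time stepping, running stably at Courant numbers well above one, is easy to demonstrate in one dimension. The hedged sketch below (linear interpolation, periodic domain, constant wind) is far simpler than the model's two-time-level semi-implicit scheme, but it shows the mechanism that allows the 10-min to 1-h steps compared in the abstract.

```python
import numpy as np

N, L, c = 128, 1.0, 1.0
dx = L / N
x = np.arange(N) * dx
u = np.exp(-100.0 * (x - 0.5) ** 2)      # Gaussian bump to advect

dt = 3.5 * dx / c                        # Courant number 3.5: an explicit
steps = 40                               # Eulerian scheme would be unstable
for _ in range(steps):
    xd = (x - c * dt) % L                # trace departure points backward
    j = np.floor(xd / dx).astype(int)
    w = xd / dx - j                      # linear interpolation weights
    u = (1.0 - w) * u[j % N] + w * u[(j + 1) % N]

# Mass is conserved up to rounding and the solution stays bounded (stable),
# even though the step is 3.5x the Eulerian CFL limit.
print(u.sum() * dx, u.max())
```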

  15. Stochastic, real-space, imaginary-time evaluation of third-order Feynman-Goldstone diagrams

    NASA Astrophysics Data System (ADS)

    Willow, Soohaeng Yoo; Hirata, So

    2014-01-01

    A new, alternative set of interpretation rules of Feynman-Goldstone diagrams for many-body perturbation theory is proposed, which translates diagrams into algebraic expressions suitable for direct Monte Carlo integrations. A vertex of a diagram is associated with a Coulomb interaction (rather than a two-electron integral) and an edge with the trace of a Green's function in real space and imaginary time. With these, 12 diagrams of third-order many-body perturbation (MP3) theory are converted into 20-dimensional integrals, which are then evaluated by a Monte Carlo method. It uses redundant walkers for convergence acceleration and a weight function for importance sampling in conjunction with the Metropolis algorithm. The resulting Monte Carlo MP3 method has low-rank polynomial size dependence of the operation cost, a negligible memory cost, and a naturally parallel computational kernel, while reproducing the correct correlation energies of small molecules within a few mEh after 10^6 Monte Carlo steps.
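The Metropolis importance-sampling machinery the method relies on can be sketched independently of the diagrams. The toy below estimates a 1D expectation value under the weight exp(-x^2) with a symmetric random-walk proposal; the redundant walkers and the actual 20-dimensional MP3 integrands are omitted, and all parameters are invented.

```python
import math
import random

random.seed(7)

def w(x):
    return math.exp(-x * x)              # unnormalized sampling weight

x, acc = 0.0, []
for step in range(200_000):
    xp = x + random.uniform(-1.0, 1.0)   # symmetric random-walk proposal
    if random.random() < w(xp) / w(x):   # Metropolis acceptance test
        x = xp
    if step >= 10_000:                   # discard burn-in
        acc.append(x * x)

est = sum(acc) / len(acc)
print(est)                               # exact <x^2> under exp(-x^2) is 0.5
```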

  16. Digital computer program for generating dynamic turbofan engine models (DIGTEM)

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.; Krosel, S. M.; Szuch, J. R.; Westerkamp, E. J.

    1983-01-01

    This report describes DIGTEM, a digital computer program that simulates two-spool, two-stream turbofan engines. The turbofan engine model in DIGTEM contains steady-state performance maps for all of the components and has control volumes where continuity and energy balances are maintained. Rotor dynamics and duct momentum dynamics are also included. Altogether there are 16 state variables and state equations. DIGTEM features a backward-difference integration scheme for integrating stiff systems. It trims the model equations to match a prescribed design point by calculating correction coefficients that balance out the dynamic equations. It uses the same coefficients at off-design points and iterates to a balanced engine condition. Transients can also be run. They are generated by defining controls as a function of time (open-loop control) in a user-written subroutine (TMRSP). DIGTEM has run on the IBM 370/3033 computer using implicit integration with time steps ranging from 1.0 msec to 1.0 sec. DIGTEM is generalized in the aerothermodynamic treatment of components.
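Why a backward-difference (implicit) scheme for a stiff engine model? The standard illustration: for the test equation y' = λy with λ = -10^4, forward Euler is unstable unless dt < 2/|λ| = 0.2 ms, while backward Euler is stable for any step size, which is what permits time steps up to a second. A minimal comparison (the test problem is generic, not DIGTEM's engine model):

```python
# Stiff test problem y' = lam * y, integrated with a step fifty times
# larger than the explicit stability limit 2/|lam|.
lam, dt, steps = -1.0e4, 1.0e-2, 100
ye = 1.0                                    # forward (explicit) Euler state
yi = 1.0                                    # backward (implicit) Euler state
for _ in range(steps):
    ye = ye * (1.0 + lam * dt)              # amplification factor -99: explodes
    yi = yi / (1.0 - lam * dt)              # amplification factor 1/101: decays
print(abs(ye) > 1.0e50, abs(yi) < 1.0e-50)  # -> True True
```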

  17. Determining a Method of Enabling and Disabling the Integral Torque in the SDO Science and Inertial Mode Controllers

    NASA Technical Reports Server (NTRS)

    Vess, Melissa F.; Starin, Scott R.

    2007-01-01

    During design of the SDO Science and Inertial mode PID controllers, the decision was made to disable the integral torque whenever system stability was in question. Three different schemes were developed to determine when to disable or enable the integral torque, and a trade study was performed to determine which scheme to implement. The trade study compared complexity of the control logic, risk of not reenabling the integral gain in time to reject steady-state error, and the amount of integral torque space used. The first scheme calculates a simplified Routh criterion to determine when to disable the integral torque. The second scheme calculates the PD part of the torque and checks whether that torque would cause actuator saturation. If so, only the PD torque is used. If not, the integral torque is added. Finally, the third scheme compares the attitude and rate errors to limits and disables the integral torque if either error exceeds its limit. Based on the trade study results, the third scheme was selected. Once it was decided when to disable the integral torque, analysis was performed to determine how to disable it and whether or not to reset the integrator once the integral torque was reenabled. Three ways to disable the integral torque were investigated: zero the input to the integrator, which causes the integral part of the PID control torque to be held constant; zero the integral torque directly but allow the integrator to continue integrating; or zero the integral torque directly and reset the integrator on integral torque reactivation. The analysis considered complexity of the control logic, slew time plus settling time between each calibration maneuver step, and the ability to reject steady-state error. Based on the results of the analysis, the decision was made to zero the input to the integrator without resetting it. 
Throughout the analysis, a high-fidelity simulation was used to test the various implementation methods.
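The selected logic (the third scheme, with the integrator input zeroed rather than the integrator reset) can be sketched as a few lines of control code. This is a hedged illustration, not the SDO flight software; the gains, error limits, and time step below are invented.

```python
class PidWithIntegralFreeze:
    """PID whose integrator input is zeroed (not reset) outside error limits."""
    def __init__(self, kp, ki, kd, err_limit, rate_limit, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.err_limit, self.rate_limit = err_limit, rate_limit
        self.dt = dt
        self.integ = 0.0                  # integral state, never reset

    def torque(self, err, rate_err):
        if abs(err) <= self.err_limit and abs(rate_err) <= self.rate_limit:
            self.integ += err * self.dt   # integrate only when errors are small
        # else: integrator input zeroed -> integral torque held constant
        return self.kp * err + self.kd * rate_err + self.ki * self.integ

pid = PidWithIntegralFreeze(kp=2.0, ki=0.5, kd=1.0,
                            err_limit=0.1, rate_limit=0.05, dt=0.1)
pid.torque(1.0, 0.0)                      # large slew error: integrator frozen
frozen = pid.integ
pid.torque(0.05, 0.0)                     # small error: integration resumes
print(frozen, pid.integ)                  # integral held at 0.0 during the slew
```

Holding the integral constant during large maneuvers avoids windup, while keeping (rather than resetting) the accumulated value preserves the steady-state error rejection already achieved.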

  18. Capillary fluctuations of surface steps: An atomistic simulation study for the model Cu(111) system

    NASA Astrophysics Data System (ADS)

    Freitas, Rodrigo; Frolov, Timofey; Asta, Mark

    2017-10-01

    Molecular dynamics (MD) simulations are employed to investigate the capillary fluctuations of steps on the surface of a model metal system. The fluctuation spectrum, characterized by the wave-number (k) dependence of the mean squared capillary-wave amplitudes and associated relaxation times, is calculated for 〈110〉 and 〈112〉 steps on the {111} surface of elemental copper near the melting temperature of the classical potential model considered. Step stiffnesses are derived from the MD results, yielding values from the largest system sizes of (37 ± 1) meV/Å for the different line orientations, implying that the stiffness is isotropic within the statistical precision of the calculations. The fluctuation lifetimes are found to vary by approximately four orders of magnitude over the range of wave numbers investigated, displaying a k dependence consistent with kinetics governed by step-edge mediated diffusion. The values for step stiffness derived from these simulations are compared to step free energies for the same system and temperature obtained in a recent MD-based thermodynamic-integration (TI) study [Freitas, Frolov, and Asta, Phys. Rev. B 95, 155444 (2017), 10.1103/PhysRevB.95.155444]. Results from the capillary-fluctuation analysis and TI calculations yield statistically significant differences that are discussed within the framework of statistical-mechanical theories for configurational contributions to step free energies.
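The stiffness extraction behind numbers like (37 ± 1) meV/Å follows from equipartition: for capillary modes of a step of length L, the mean squared amplitude satisfies <|A(k)|^2> = kB*T / (L * stiffness * k^2), so inverting that relation mode by mode recovers the stiffness. The hedged sketch below fits synthetic fluctuation data standing in for the MD record; every parameter value is invented.

```python
import numpy as np

rng = np.random.default_rng(1)
kBT = 0.10                                 # k_B*T in eV near melting (assumed)
L = 200.0                                  # step length in Angstroms (assumed)
beta = 0.037                               # "true" stiffness in eV/Angstrom

k = 2.0 * np.pi * np.arange(1, 20) / L     # allowed capillary wave numbers
var_true = kBT / (L * beta * k**2)         # equipartition: <|A(k)|^2>

# Stand-in for the MD record: finite-sample estimates of each mode variance
nsamp = 4000
A2 = var_true * rng.chisquare(nsamp, size=k.size) / nsamp

beta_fit = np.mean(kBT / (L * k**2 * A2))  # invert equipartition per mode
print(beta_fit)                            # recovers ~0.037 up to sampling noise
```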

  19. Coherence rephasing combined with spin-wave storage using chirped control pulses

    NASA Astrophysics Data System (ADS)

    Demeter, Gabor

    2014-06-01

    Photon-echo based optical quantum memory schemes often employ intermediate steps to transform optical coherences to spin coherences for longer storage times. We analyze a scheme that uses three identical chirped control pulses for coherence rephasing in an inhomogeneously broadened ensemble of three-level Λ systems. The pulses induce a cyclic permutation of the atomic populations in the adiabatic regime. Optical coherences created by a signal pulse are stored as spin coherences at an intermediate time interval, and are rephased for echo emission when the ensemble is returned to the initial state. Echo emission during a possible partial rephasing when the medium is inverted can be suppressed with an appropriate choice of control pulse wave vectors. We demonstrate that the scheme works in an optically dense ensemble, despite control pulse distortions during propagation. It conveniently integrates the spin-wave storage step into memory schemes based on a second rephasing of the atomic coherences.

  20. A network of spiking neurons for computing sparse representations in an energy efficient way

    PubMed Central

    Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B.

    2013-01-01

    Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We compare the numerical performance of HDA with existing algorithms and show that in the asymptotic regime the representation error of HDA decays with time, t, as 1/t. We show that HDA is stable against time-varying noise, specifically, the representation error decays as 1/√t for Gaussian white noise. PMID:22920853

  1. A network of spiking neurons for computing sparse representations in an energy-efficient way.

    PubMed

    Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B

    2012-11-01

    Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating by low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, the operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for Gaussian white noise.
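A hedged miniature of the network this record describes: internal (membrane-like) variables follow gradient-like analog dynamics while thresholded outputs compete through the dictionary's Gram matrix. For brevity this sketch uses the analog (rate) limit of such a network rather than explicit quantized spikes, and the 3-atom dictionary, input, and threshold are invented; at the fixed point the outputs satisfy the sparse-coding optimality conditions.

```python
import numpy as np

D = np.array([[1.0, 0.0, 0.7],
              [0.0, 1.0, 0.7]])
D = D / np.linalg.norm(D, axis=0)        # unit-norm dictionary atoms
y = np.array([1.0, 0.2])                 # input to encode sparsely
lam = 0.1                                # sparsity penalty / firing threshold

b = D.T @ y                              # feedforward drive
G = D.T @ D - np.eye(3)                  # lateral inhibition (Gram matrix)

def soft(u):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

u = np.zeros(3)                          # analog internal variables
dt = 0.05
for _ in range(5000):
    u += dt * (b - u - G @ soft(u))      # gradient-like analog dynamics

a = soft(u)                              # thresholded outputs ("rates")
kkt = D.T @ (y - D @ a)                  # at the fixed point, |kkt_i| <= lam
print(a, kkt)
```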

  2. Blocks in the asymmetric simple exclusion process

    NASA Astrophysics Data System (ADS)

    Tracy, Craig A.; Widom, Harold

    2017-12-01

    In earlier work, the authors obtained formulas for the probability in the asymmetric simple exclusion process that the mth particle from the left is at site x at time t. They were expressed in general as sums of multiple integrals and, for the case of step initial condition, as an integral involving a Fredholm determinant. In the present work, these results are generalized to the case where the mth particle is the left-most one in a contiguous block of L particles. The earlier work depended in a crucial way on two combinatorial identities, and the present work begins with a generalization of these identities to general L.

  3. A computer model for one-dimensional mass and energy transport in and around chemically reacting particles, including complex gas-phase chemistry, multicomponent molecular diffusion, surface evaporation, and heterogeneous reaction

    NASA Technical Reports Server (NTRS)

    Cho, S. Y.; Yetter, R. A.; Dryer, F. L.

    1992-01-01

    Various chemically reacting flow problems highlighting chemical and physical fundamentals rather than flow geometry are presently investigated by means of a comprehensive mathematical model that incorporates multicomponent molecular diffusion, complex chemistry, and heterogeneous processes, in the interest of obtaining sensitivity-related information. The sensitivity equations were decoupled from those of the model and integrated one time step behind the model equations; analytical Jacobian matrices were applied to improve the accuracy of the sensitivity coefficients calculated together with the model solutions.
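The decoupled, one-step-behind sensitivity integration can be sketched on a scalar model dy/dt = -p·y, whose sensitivity s = dy/dp obeys ds/dt = -p·s - y with analytic Jacobian J = -p. This is an illustrative reconstruction under those assumptions, not the paper's reacting-particle code.

```python
p, dt, steps = 2.0, 1.0e-3, 1000          # parameter, step size, t_final = 1
y, s = 1.0, 0.0                           # state and sensitivity s = dy/dp
for _ in range(steps):
    y_prev, y = y, y + dt * (-p * y)      # model step (forward Euler)
    s = s + dt * (-p * s - y_prev)        # sensitivity step, driven by the
                                          # state one time step behind
print(y, s)   # analytic values at t = 1: y = e^-2 ~ 0.1353, s = -e^-2
```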

  4. [The hybrid operating room. Home of high-end intraoperative imaging].

    PubMed

    Gebhard, F; Riepl, C; Richter, P; Liebold, A; Gorki, H; Wirtz, R; König, R; Wilde, F; Schramm, A; Kraus, M

    2012-02-01

    A hybrid operating room must serve the medical needs of different highly specialized disciplines. It integrates interventional techniques for cardiovascular procedures and allows operations in the field of orthopaedic surgery, neurosurgery and maxillofacial surgery. The integration of all steps such as planning, documentation and the procedure itself saves time and precious resources. The best available imaging devices and user interfaces reduce the need for extensive personnel in the OR and facilitate new minimally invasive procedures. The immediate possibility of postoperative control images in CT-like quality enables the surgeon to react to problems during the same procedure without the need for later revision.

  5. Divergence preserving discrete surface integral methods for Maxwell's curl equations using non-orthogonal unstructured grids

    NASA Technical Reports Server (NTRS)

    Madsen, Niel K.

    1992-01-01

    Several new discrete surface integral (DSI) methods for solving Maxwell's equations in the time-domain are presented. These methods, which allow the use of general nonorthogonal mixed-polyhedral unstructured grids, are direct generalizations of the canonical staggered-grid finite difference method. These methods are conservative in that they locally preserve divergence or charge. Employing mixed polyhedral cells, (hexahedral, tetrahedral, etc.) these methods allow more accurate modeling of non-rectangular structures and objects because the traditional stair-stepped boundary approximations associated with the orthogonal grid based finite difference methods can be avoided. Numerical results demonstrating the accuracy of these new methods are presented.
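The divergence-preserving property that the DSI methods generalize from the staggered-grid finite-difference (Yee) method can be checked directly on a small periodic 2D TM-mode grid: the discrete divergence of the magnetic field stays at machine zero for all time because the two mixed differences cancel identically. Grid size, time step, and the point source below are arbitrary (units with dx = c = 1).

```python
import numpy as np

N, dt = 32, 0.4                           # grid cells, time step (dx = c = 1)
Ez = np.zeros((N, N)); Ez[N // 2, N // 2] = 1.0   # point pulse initial field
Hx = np.zeros((N, N)); Hy = np.zeros((N, N))

def divH(Hx, Hy):
    """Discrete divergence of (Hx, Hy) on the staggered periodic grid."""
    return (np.roll(Hx, -1, 0) - Hx) + (np.roll(Hy, -1, 1) - Hy)

for _ in range(100):                      # staggered (Yee) updates, TM mode
    Hx -= dt * (np.roll(Ez, -1, 1) - Ez)  # dHx/dt = -dEz/dy
    Hy += dt * (np.roll(Ez, -1, 0) - Ez)  # dHy/dt = +dEz/dx
    Ez += dt * ((Hy - np.roll(Hy, 1, 0)) - (Hx - np.roll(Hx, 1, 1)))

# The mixed second differences cancel, so div(H) never grows.
print(np.max(np.abs(divH(Hx, Hy))))
```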

  6. Block Preconditioning to Enable Physics-Compatible Implicit Multifluid Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Phillips, Edward; Shadid, John; Cyr, Eric; Miller, Sean

    2017-10-01

    Multifluid plasma simulations involve large systems of partial differential equations in which many time-scales ranging over many orders of magnitude arise. Since the fastest of these time-scales may set a restrictively small time-step limit for explicit methods, the use of implicit or implicit-explicit time integrators can be more tractable for obtaining dynamics at time-scales of interest. Furthermore, to enforce properties such as charge conservation and divergence-free magnetic field, mixed discretizations using volume, nodal, edge-based, and face-based degrees of freedom are often employed in some form. Together with the presence of stiff modes due to integrating over fast time-scales, the mixed discretization makes the required linear solves for implicit methods particularly difficult for black box and monolithic solvers. This work presents a block preconditioning strategy for multifluid plasma systems that segregates the linear system based on discretization type and approximates off-diagonal coupling in block diagonal Schur complement operators. By employing multilevel methods for the block diagonal subsolves, this strategy yields algorithmic and parallel scalability which we demonstrate on a range of problems.
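
The effect of Schur-complement block preconditioning can be sketched on a generic 2x2 block system (a small dense stand-in, not the multifluid discretization): with the exact Schur complement, every eigenvalue of the preconditioned operator equals one, which is the property the block-diagonal approximations aim to approach.

```python
import numpy as np

# Toy 2x2 block system K = [[A, B], [C, D]].  The block upper-triangular
# preconditioner P = [[A, B], [0, S]] with exact Schur complement
# S = D - C A^{-1} B makes P^{-1} K equal to identity plus a nilpotent
# term, so all its eigenvalues are exactly 1.
rng = np.random.default_rng(3)
n = 4
A = rng.random((n, n)) + n * np.eye(n)      # diagonally shifted for invertibility
B = rng.random((n, n))
C = rng.random((n, n))
D = rng.random((n, n)) + n * np.eye(n)
S = D - C @ np.linalg.solve(A, B)           # exact Schur complement

K = np.block([[A, B], [C, D]])
P = np.block([[A, B], [np.zeros((n, n)), S]])
eigs = np.linalg.eigvals(np.linalg.solve(P, K))
```

In practice S is replaced by a cheap approximation and the subsolves use multilevel methods, as in the abstract; the eigenvalues then cluster near 1 rather than equaling it.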

  7. [Management of spine injuries in polytraumatized patients].

    PubMed

    Heyde, C E; Ertel, W; Kayser, R

    2005-09-01

    The management of spine injuries in polytraumatized patients remains a great challenge, both for the diagnostic procedures and for the institution of appropriate treatment: spinal trauma care must be integrated into the overall treatment concept while the treatment steps for the injured spine itself are followed. The established concept of "damage control" and criteria regarding the optimal time and manner of operative treatment of the injured spine in the polytrauma setting are presented and discussed.

  8. Unsteady Flow Simulation: A Numerical Challenge

    DTIC Science & Technology

    2003-03-01

    drive to convergence the numerical unsteady term. The time marching procedure is based on the approximate implicit Newton method for systems of non...computed through analytical derivatives of S. The linear system stemming from equation (3) is solved at each integration step by the same iterative method...significant reduction of memory usage, thanks to the reduced dimensions of the linear system matrix during the implicit marching of the solution. The

  9. Aptamer entrapment in microfluidic channel using one-step sol-gel process, in view of the integration of a new selective extraction phase for lab-on-a-chip.

    PubMed

    Perréard, Camille; d'Orlyé, Fanny; Griveau, Sophie; Liu, Baohong; Bedioui, Fethi; Varenne, Anne

    2017-10-01

    There is a great demand for integrating sample treatment into μTASs. In this context, we developed a new sol-gel phase for the extraction of trace compounds from complex matrices. For this purpose, the incorporation of aptamers in a silica-based gel within PDMS/glass microfluidic channels was performed for the first time by a one-step sol-gel process. The effective gel attachment onto the microchannel walls and aptamer incorporation in the polymerized gel were evaluated using fluorescence microscopy. Good gel stability and aptamer incorporation inside the microchannel were demonstrated upon rinsing and over storage time. The ability of the gel-encapsulated aptamers to interact with their specific targets (either sulforhodamine B as a model fluorescent target, or diclofenac, a painkiller drug) was also assessed. The binding capacity of the entrapped aptamers was quantified (in the micromolar range) and the selectivity of the interaction was evidenced. Preservation of the aptamers' binding affinity for their target molecules was therefore demonstrated. The dissociation constant of the aptamer-target complex and the interaction selectivity were found to be similar to those in bulk solution. This opens the way to new selective on-chip SPE techniques for sample pretreatment. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Porting plasma physics simulation codes to modern computing architectures using the libmrc framework

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Abbott, Stephen

    2015-11-01

    Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both using more cores and having more parallelism in each core, e.g. in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source libmrc framework that has been used to modularize and port three plasma physics codes: The extended MHD code MRCv3 with implicit time integration and curvilinear grids; the OpenGGCM global magnetosphere model; and the particle-in-cell code PSC. libmrc consolidates basic functionality needed for simulations based on structured grids (I/O, load balancing, time integrators), and also introduces a parallel object model that makes it possible to maintain multiple implementations of computational kernels, on e.g. conventional processors and GPUs. It handles data layout conversions and enables us to port performance-critical parts of a code to a new architecture step-by-step, while the rest of the code can remain unchanged. We will show examples of the performance gains and some physics applications.

  11. Group implicit concurrent algorithms in nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Ortiz, M.; Sotelino, E. D.

    1989-01-01

    During the 70's and 80's, considerable effort was devoted to developing efficient and reliable time stepping procedures for transient structural analysis. Mathematically, the equations governing this type of problem are generally stiff, i.e., they exhibit a wide spectrum in the linear range. The algorithms best suited to this type of application are those which accurately integrate the low-frequency content of the response without necessitating the resolution of the high-frequency modes. This means that the algorithms must be unconditionally stable, which in turn rules out explicit integration. The most exciting possibility in the algorithm development area in recent years has been the advent of parallel computers with multiprocessing capabilities. This work is therefore mainly concerned with the development of parallel algorithms in the area of structural dynamics. A primary objective is to devise unconditionally stable and accurate time stepping procedures which lend themselves to efficient implementation on concurrent machines. Some features of the new computer architecture are summarized, and a brief survey of current efforts in the area is presented. A new class of concurrent procedures, or Group Implicit (GI) algorithms, is introduced and analyzed. Numerical simulation shows that GI algorithms hold considerable promise for application on coarse-grain as well as medium-grain parallel computers.

  12. The development and validation of a numerical integration method for non-linear viscoelastic modeling

    PubMed Central

    Ramo, Nicole L.; Puttlitz, Christian M.

    2018-01-01

    Compelling evidence that many biological soft tissues display both strain- and time-dependent behavior has led to the development of fully non-linear viscoelastic modeling techniques to represent the tissue’s mechanical response under dynamic conditions. Since the current stress state of a viscoelastic material is dependent on all previous loading events, numerical analyses are complicated by the requirement of computing and storing the stress at each step throughout the load history. This requirement quickly becomes computationally expensive, and in some cases intractable, for finite element models. Therefore, we have developed a strain-dependent numerical integration approach for capturing non-linear viscoelasticity that enables calculation of the current stress from a strain-dependent history state variable stored from the preceding time step only, which improves both fitting efficiency and computational tractability. This methodology was validated based on its ability to recover non-linear viscoelastic coefficients from simulated stress-relaxation (six strain levels) and dynamic cyclic (three frequencies) experimental stress-strain data. The model successfully fit each data set with average errors in recovered coefficients of 0.3% for stress-relaxation fits and 0.1% for cyclic. The results support the use of the presented methodology to develop linear or non-linear viscoelastic models from stress-relaxation or cyclic experimental data of biological soft tissues. PMID:29293558
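
The single-history-variable update can be sketched as follows. This is a minimal linear (one-term Prony series) stand-in for the strain-dependent formulation in the paper, showing how the hereditary integral is advanced recursively so the full strain history never needs to be stored:

```python
import numpy as np

# Recursive-convolution sketch, assuming a linear standard solid with
# long-term modulus E_inf, relaxation modulus E_1, and time constant tau.
# The history variable h carries the entire viscous stress contribution
# and is updated from the preceding step only.
def stress_history(strain, dt, E_inf=1.0, E_1=0.5, tau=0.1):
    a = np.exp(-dt / tau)                  # decay of the history variable
    b = E_1 * np.exp(-dt / (2.0 * tau))    # midpoint weight of the strain increment
    h, eps_prev = 0.0, 0.0
    out = []
    for eps in strain:
        h = a * h + b * (eps - eps_prev)   # recursive hereditary-integral update
        out.append(E_inf * eps + h)
        eps_prev = eps
    return np.array(out)

# stress relaxation: a step strain held constant relaxes toward E_inf
t = np.arange(0.0, 1.0, 1e-3)
sigma = stress_history(np.ones_like(t), 1e-3)
```

The non-linear formulation of the paper makes the coefficients strain-dependent, but the storage pattern (one history variable per term, updated each step) is the same.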

  13. Advanced Ceramic Technology for Space Applications at NASA MSFC

    NASA Technical Reports Server (NTRS)

    Alim, Mohammad A.

    2003-01-01

    The ceramic processing technology using conventional methods is applied to the making of state-of-the-art ceramics known as smart ceramics, intelligent ceramics, or electroceramics. The sol-gel and wet chemical processing routes are excluded from this investigation in view of the economic aspects and the proportionate benefit of the resulting product. The use of ceramic ingredients in making coatings or devices employing a vacuum coating unit is also excluded from this investigation. Based on the present information, it is anticipated that the conventional processing methods provide ceramics performing identically to those processed by the chemical routes. This is possible when the peak (sintering) temperature, heating and cooling ramps, soak-time (hold-time), etc. are treated as variable parameters. In addition, an optional calcination step prior to the sintering operation remains a vital variable parameter. These variable parameters constitute a sintering profile for obtaining a sintered product. It is also possible to obtain identical products from more than one sintering profile, owing to the calcination step in conjunction with the variables of the sintering profile. Overall, the state-of-the-art ceramic technology is evaluated for potential applications in the space program, such as thermal and electrical insulation coatings, microelectronics and integrated circuits, and discrete and integrated devices.

  14. Black box integration of computer-aided diagnosis into PACS deserves a second chance: results of a usability study concerning bone age assessment.

    PubMed

    Geldermann, Ina; Grouls, Christoph; Kuhl, Christiane; Deserno, Thomas M; Spreckelsen, Cord

    2013-08-01

    Usability aspects of different integration concepts for picture archiving and communication systems (PACS) and computer-aided diagnosis (CAD) were investigated using the example of BoneXpert, a program that determines skeletal age from a radiograph of the left hand. CAD-PACS integration was assessed according to its levels: data, function, presentation, and context integration, focusing on usability aspects. A user-based study design was selected. Statements of seven experienced radiologists using the two alternative types of integration provided by BoneXpert were acquired and analyzed using a mixed-methods approach based on think-aloud records and a questionnaire. In both variants, the CAD module (BoneXpert) was easily integrated into the workflow and found comprehensible and fitting within the conceptual framework of the radiologists. Weak points of the software integration concerned data and context integration. Surprisingly, visualization of intermediate image-processing states (presentation integration) was found less important than efficient handling and fast computation. Seamlessly integrating CAD into the PACS without additional work steps or unnecessary interrupts, and without visualizing intermediate images, may considerably improve software performance and user acceptance while saving time.

  15. Real-Time Decision Making and Aggressive Behavior in Youth: A Heuristic Model of Response Evaluation and Decision (RED)

    PubMed Central

    Fontaine, Reid Griffith; Dodge, Kenneth A.

    2009-01-01

    Considerable scientific and intervention attention has been paid to judgment and decision-making systems associated with aggressive behavior in youth. However, most empirical studies have investigated social-cognitive correlates of stable child and adolescent aggressiveness, and less is known about real-time decision making to engage in aggressive behavior. A model of real-time decision making must incorporate both impulsive actions and rational thought. The present paper advances a process model (response evaluation and decision; RED) of real-time behavioral judgments and decision making in aggressive youths with mathematical representations that may be used to quantify response strength. These components are a heuristic to describe decision making, though it is doubtful that individuals always mentally complete these steps. RED represents an organization of social-cognitive operations believed to be active during the response decision step of social information processing. The model posits that RED processes can be circumvented through impulsive responding. This article provides a description and integration of thoughtful, rational decision making and nonrational impulsivity in aggressive behavioral interactions. PMID:20802851

  16. Real-Time Decision Making and Aggressive Behavior in Youth: A Heuristic Model of Response Evaluation and Decision (RED).

    PubMed

    Fontaine, Reid Griffith; Dodge, Kenneth A

    2006-11-01

    Considerable scientific and intervention attention has been paid to judgment and decision-making systems associated with aggressive behavior in youth. However, most empirical studies have investigated social-cognitive correlates of stable child and adolescent aggressiveness, and less is known about real-time decision making to engage in aggressive behavior. A model of real-time decision making must incorporate both impulsive actions and rational thought. The present paper advances a process model (response evaluation and decision; RED) of real-time behavioral judgments and decision making in aggressive youths with mathematical representations that may be used to quantify response strength. These components are a heuristic to describe decision making, though it is doubtful that individuals always mentally complete these steps. RED represents an organization of social-cognitive operations believed to be active during the response decision step of social information processing. The model posits that RED processes can be circumvented through impulsive responding. This article provides a description and integration of thoughtful, rational decision making and nonrational impulsivity in aggressive behavioral interactions.

  17. Fast time- and frequency-domain finite-element methods for electromagnetic analysis

    NASA Astrophysics Data System (ADS)

    Lee, Woochan

    Fast electromagnetic analysis in the time and frequency domains is of critical importance to the design of integrated circuits (IC) and other advanced engineering products and systems. Many IC structures constitute a very large scale problem in modeling and simulation, the size of which also continuously grows with the advancement of the processing technology. This results in numerical problems beyond the reach of even the most powerful existing computational resources. Unlike many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of these structural specialties to speed up the computation. Among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step to ensure the stability of a time-domain simulation. Therefore, making explicit time-domain methods unconditionally stable is important to accelerate the computation. In addition to time-domain methods, frequency-domain methods have suffered from an indefinite system that makes an iterative solution difficult to converge fast. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structure specialty of on-chip circuits such as Manhattan geometry and layered permittivity is preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time.
The second contribution is a new method for making an explicit time-domain finite-element method (TDFEM) unconditionally stable for general electromagnetic analysis. In this method, for a given time step, we find the unstable modes that are the root cause of instability, and deduct them directly from the system matrix resulting from a TDFEM-based analysis. As a result, an explicit TDFEM simulation is made stable for an arbitrarily large time step irrespective of the space step. The third contribution is a new method for full-wave applications from low to very high frequencies in a TDFEM based on the matrix exponential. In this method, we directly deduct the eigenmodes having large eigenvalues from the system matrix, thus achieving a significantly increased time step in the matrix-exponential-based TDFEM. The fourth contribution is a new method for transforming the indefinite system matrix of a frequency-domain FEM to a symmetric positive definite one. We deduct the non-positive definite component directly from the system matrix resulting from a frequency-domain FEM-based analysis. The resulting new representation of the finite-element operator ensures that an iterative solution converges in a small number of iterations. We then add back the non-positive definite component to synthesize the original solution with negligible cost.
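
The unstable-mode deduction of the second contribution can be illustrated on a small symmetric matrix (a toy example, not the TDFEM system): for an explicit update u <- (I - dt*A)u, modes with dt*lambda > 2 are unstable, and deducting them from A removes the restriction.

```python
import numpy as np

# Deduct from a symmetric matrix A every eigenmode whose eigenvalue
# violates the explicit stability limit dt*lambda < 2.
def deflate_unstable_modes(A, dt):
    lam, V = np.linalg.eigh(A)
    for i in np.where(lam > 2.0 / dt)[0]:                # unstable modes
        A = A - lam[i] * np.outer(V[:, i], V[:, i])      # zero out that mode
    return A

# build a matrix with known spectrum {1, 5, 50}
Q, _ = np.linalg.qr(np.random.default_rng(0).random((3, 3)))
A = Q @ np.diag([1.0, 5.0, 50.0]) @ Q.T
dt = 0.1                                                 # raw limit would be 2/50
rho_raw = np.max(np.abs(np.linalg.eigvals(np.eye(3) - dt * A)))
A_safe = deflate_unstable_modes(A, dt)
rho_safe = np.max(np.abs(np.linalg.eigvals(np.eye(3) - dt * A_safe)))
```

Here rho_raw exceeds 1 (the raw explicit update diverges for this dt), while after deduction the spectral radius is at most 1 for any dt.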

  18. Pretreatment efficiency and structural characterization of rice straw by an integrated process of dilute-acid and steam explosion for bioethanol production.

    PubMed

    Chen, Wen-Hua; Pen, Ben-Li; Yu, Ching-Tsung; Hwang, Wen-Song

    2011-02-01

    The combined pretreatment of rice straw using dilute-acid and steam explosion followed by enzymatic hydrolysis was investigated and compared with acid-catalyzed steam explosion pretreatment. In addition to measuring the chemical composition, including glucan, xylan and lignin content, changes in rice straw features after pretreatment were investigated in terms of the straw's physical properties. These properties included crystallinity, surface area, mean particle size and scanning electron microscopy imagery. The effect of acid concentration on the acid-catalyzed steam explosion was studied in a range between 1% and 15% acid at 180°C for 2 min. We also investigated the influence of the residence time of the steam explosion in the combined pretreatment and the optimum conditions for the dilute-acid hydrolysis step in order to develop an integrated dilute-acid and steam explosion process. The optimum operational conditions were determined to be 165°C for 2 min with 2% H(2)SO(4) for the first dilute-acid hydrolysis step, and 180°C for 20 min for the second steam explosion step; this gave the most favorable combination in terms of an integrated process. We found that rice straw pretreated by the dilute-acid/steam explosion had a higher xylose yield, a lower level of inhibitor in the hydrolysate and a greater degree of enzymatic hydrolysis; this resulted in a 1.5-fold increase in the overall sugar yield when compared to the acid-catalyzed steam explosion. Copyright © 2010 Elsevier Ltd. All rights reserved.

  19. Efficiency optimization of a fast Poisson solver in beam dynamics simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula

    2016-01-01

    Solving Poisson's equation for the space-charge force still accounts for the major share of computation time in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver in beam dynamics simulations: the integrated Green's function method. We introduce three optimization steps for the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of the discrete Fourier transform for the Green's function; and using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy. This provides a fast routine for high-performance calculation of the space charge effect in accelerators.
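
The zero-padded convolution that the third optimization replaces rests on a standard FFT identity, sketched below in 1-D (a stand-in for the 3-D solver): padding both arrays to double length makes the circular FFT convolution reproduce the free-space (linear) convolution.

```python
import numpy as np

# Free-space convolution of a charge density with a Green's function via
# zero-padded FFTs: padding to 2n avoids the wrap-around of the circular
# convolution, so the first n entries equal the direct linear convolution.
def fft_convolution(rho, green):
    n = len(rho)
    m = 2 * n                                       # zero-pad to double size
    spec = np.fft.rfft(rho, m) * np.fft.rfft(green, m)
    return np.fft.irfft(spec, m)[:n]

rng = np.random.default_rng(0)
rho = rng.random(64)                                # stand-in charge density
g = 1.0 / (1.0 + np.arange(64.0))                   # stand-in 1-D Green's function
phi_fft = fft_convolution(rho, g)
phi_direct = np.convolve(rho, g)[:64]               # O(n^2) direct convolution
```

The FFT route costs O(n log n) instead of O(n^2); the paper's "fast convolution routine" further reduces the constant by avoiding the explicit padding.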

  20. 3D integrated superconducting qubits

    NASA Astrophysics Data System (ADS)

    Rosenberg, D.; Kim, D.; Das, R.; Yost, D.; Gustavsson, S.; Hover, D.; Krantz, P.; Melville, A.; Racz, L.; Samach, G. O.; Weber, S. J.; Yan, F.; Yoder, J. L.; Kerman, A. J.; Oliver, W. D.

    2017-10-01

    As the field of quantum computing advances from the few-qubit stage to larger-scale processors, qubit addressability and extensibility will necessitate the use of 3D integration and packaging. While 3D integration is well-developed for commercial electronics, relatively little work has been performed to determine its compatibility with high-coherence solid-state qubits. Of particular concern, qubit coherence times can be suppressed by the requisite processing steps and close proximity of another chip. In this work, we use a flip-chip process to bond a chip with superconducting flux qubits to another chip containing structures for qubit readout and control. We demonstrate that high qubit coherence (T1, T2,echo > 20 μs) is maintained in a flip-chip geometry in the presence of galvanic, capacitive, and inductive coupling between the chips.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Argo, P.E.; DeLapp, D.; Sutherland, C.D.

    TRACKER is an extension of a three-dimensional Hamiltonian raytrace code developed some thirty years ago by R. Michael Jones. Subsequent modifications to this code, which is commonly called the "Jones Code," were documented by Jones and Stephensen (1975). TRACKER incorporates an interactive user's interface, modern differential equation integrators, graphical outputs, homing algorithms, and the Ionospheric Conductivity and Electron Density (ICED) ionosphere. TRACKER predicts the three-dimensional paths of radio waves through model ionospheres by numerically integrating Hamilton's equations, which are a differential expression of Fermat's principle of least time. By using continuous models, the Hamiltonian method avoids false caustics and discontinuous raypath properties often encountered in other raytracing methods. In addition to computing the raypath, TRACKER also calculates the group path (or pulse travel time), the phase path, the geometrical (or "real") pathlength, and the Doppler shift (if the time variation of the ionosphere is explicitly included). Computational speed can be traded for accuracy by specifying the maximum allowable integration error per step in the integration. Only geometrical optics are included in the main raytrace code; no partial reflections or diffraction effects are taken into account. In addition, TRACKER does not lend itself to statistical descriptions of propagation -- it requires a deterministic model of the ionosphere.
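
The core numerical task, integrating Hamilton's equations along a ray, can be sketched with a fixed-step RK4 integrator. The Hamiltonian below is a simple stand-in (H = (p² + q²)/2, whose trajectories are known analytically), not the ionospheric dispersion relation used by TRACKER:

```python
import numpy as np

# RK4 integration of Hamilton's equations dq/dt = dH/dp, dp/dt = -dH/dq,
# here for the stand-in Hamiltonian H = (p^2 + q^2)/2, so the exact
# trajectory is q(t) = q0*cos(t), p(t) = -q0*sin(t).
def rk4_hamilton(q0, p0, dt, n_steps):
    def rhs(y):
        q, p = y
        return np.array([p, -q])        # (dH/dp, -dH/dq)
    y = np.array([q0, p0], dtype=float)
    for _ in range(n_steps):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * dt * k1)
        k3 = rhs(y + 0.5 * dt * k2)
        k4 = rhs(y + dt * k3)
        y = y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

q, p = rk4_hamilton(1.0, 0.0, 0.01, 628)  # integrate to t = 6.28, ~one period
```

An adaptive integrator would additionally control the per-step error, which is the speed-for-accuracy trade-off the record mentions.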

  2. Langevin dynamics in inhomogeneous media: Re-examining the Itô-Stratonovich dilemma

    NASA Astrophysics Data System (ADS)

    Farago, Oded; Grønbech-Jensen, Niels

    2014-01-01

    The diffusive dynamics of a particle in a medium with space-dependent friction coefficient is studied within the framework of the inertial Langevin equation. In this description, the ambiguous interpretation of the stochastic integral, known as the Itô-Stratonovich dilemma, is avoided since all interpretations converge to the same solution in the limit of small time steps. We use a newly developed method for Langevin simulations to measure the probability distribution of a particle diffusing in a flat potential. Our results reveal that both the Itô and Stratonovich interpretations converge very slowly to the uniform equilibrium distribution for vanishing time step sizes. Three other conventions exhibit significantly improved accuracy: (i) the "isothermal" (Hänggi) convention, (ii) the Stratonovich convention corrected by a drift term, and (iii) a newly proposed convention employing two different effective friction coefficients representing two different averages of the friction function during the time step. We argue that the most physically accurate dynamical description is provided by the third convention, in which the particle experiences a drift originating from the dissipation instead of the fluctuation term. This feature is directly related to the fact that the drift is a result of an inertial effect that cannot be well understood in the Brownian, overdamped limit of the Langevin equation.
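
The corrected conventions discussed above all amount to adding a noise-induced drift term to the naive update. Below is a minimal sketch of an Itô step with the drift correction dD/dx for an assumed diffusivity profile D(x), written in the overdamped limit for brevity (whereas the paper works with the inertial Langevin equation):

```python
import numpy as np

# One overdamped Langevin step in the Ito convention with the "spurious
# drift" correction dD/dx added, so that a flat potential relaxes to the
# correct uniform equilibrium distribution.  xi is a standard normal draw.
def langevin_step(x, D, dD_dx, dt, xi):
    drift = dD_dx(x) * dt                         # noise-induced drift correction
    noise = np.sqrt(2.0 * D(x) * dt) * xi
    return x + drift + noise

D = lambda x: 1.0 + 0.5 * x**2                    # assumed diffusivity profile
dD = lambda x: x

# deterministic check: with zero noise only the drift correction remains
x1 = langevin_step(0.5, D, dD, 0.01, 0.0)

# stochastic use: draw xi fresh each step
rng = np.random.default_rng(1)
x = 0.5
for _ in range(1000):
    x = langevin_step(x, D, dD, 1e-3, rng.standard_normal())
```
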

  3. Molecular dynamics with rigid bodies: Alternative formulation and assessment of its limitations when employed to simulate liquid water

    NASA Astrophysics Data System (ADS)

    Silveira, Ana J.; Abreu, Charlles R. A.

    2017-09-01

    Sets of atoms collectively behaving as rigid bodies are often used in molecular dynamics to model entire molecules or parts thereof. This is a coarse-graining strategy that eliminates degrees of freedom and supposedly admits larger time steps without abandoning the atomistic character of a model. In this paper, we rely on a particular factorization of the rotation matrix to simplify the mechanical formulation of systems containing rigid bodies. We then propose a new derivation for the exact solution of torque-free rotations, which are employed as part of a symplectic numerical integration scheme for rigid-body dynamics. We also review methods for calculating pressure in systems of rigid bodies with pairwise-additive potentials and periodic boundary conditions. Finally, simulations of liquid phases, with special focus on water, are employed to analyze the numerical aspects of the proposed methodology. Our results show that energy drift is avoided for time step sizes up to 5 fs, but only if a proper smoothing is applied to the interatomic potentials. Despite this, the effects of discretization errors are relevant, even for smaller time steps. These errors induce, for instance, a systematic failure of the expected equipartition of kinetic energy between translational and rotational degrees of freedom.

  4. Time-interval for integration of stabilizing haptic and visual information in subjects balancing under static and dynamic conditions

    PubMed Central

    Honeine, Jean-Louis; Schieppati, Marco

    2014-01-01

    Maintaining equilibrium is basically a sensorimotor integration task. The central nervous system (CNS) continually and selectively weights and rapidly integrates sensory inputs from multiple sources, and coordinates multiple outputs. The weighting process is based on the availability and accuracy of afferent signals at a given instant, on the time-period required to process each input, and possibly on the plasticity of the relevant pathways. The likelihood that sensory inflow changes while balancing under static or dynamic conditions is high, because subjects can pass from a dark to a well-lit environment or from a tactile-guided stabilization to loss of haptic inflow. This review article presents recent data on the temporal events accompanying sensory transition, on which basic information is fragmentary. The processing time from sensory shift to reaching a new steady state includes the time to (a) subtract or integrate sensory inputs; (b) move from allocentric to egocentric reference or vice versa; and (c) adjust the calibration of motor activity in time and amplitude to the new sensory set. We present examples of processes of integration of posture-stabilizing information, and of the respective sensorimotor time-intervals while allowing or occluding vision or adding or subtracting tactile information. These intervals are short, in the order of 1–2 s for different postural conditions, modalities and deliberate or passive shift. They are just longer for haptic than visual shift, just shorter on withdrawal than on addition of stabilizing input, and on deliberate than unexpected mode. The delays are the shortest (for haptic shift) in blind subjects. 
Since automatic balance stabilization may be vulnerable to sensory-integration delays and to interference from concurrent cognitive tasks in patients with sensorimotor problems, insight into the processing time for balance control represents a critical step in the design of new balance- and locomotion training devices. PMID:25339872

  5. Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.

    PubMed

    Durdu, Omer Faruk

    2010-10-01

    In the present study, a seasonal and non-seasonal prediction of boron concentrations time series data for the period of 1996-2004 from Büyük Menderes river in western Turkey are addressed by means of linear stochastic models. The methodology presented here is to develop adequate linear stochastic models known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) to predict boron content in the Büyük Menderes catchment. Initially, the Box-Whisker plots and Kendall's tau test are used to identify the trends during the study period. The measurement locations do not show a significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. The ARIMA modeling approach involves the following three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, considering the autocorrelation function (ACF) and partial autocorrelation function (PACF) results of the boron data series, different ARIMA models are identified. The model that gives the minimum Akaike information criterion (AIC) is selected as the best-fit model. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models and the results indicate that the residuals are independent, normally distributed, and homoscedastic. For model validation purposes, the predicted results using the best ARIMA models are compared to the observed data. The predicted data show reasonably good agreement with the actual data. 
The comparison of the mean and variance of the 3-year (2002-2004) observed data versus the data predicted by the selected best models shows that the boron models from the ARIMA modeling approach can be used reliably, since the predicted values preserve the basic statistics of the observed data in terms of the mean. The ARIMA modeling approach is recommended for predicting boron concentration series of a river.
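
The identification step, comparing candidate orders by AIC, can be sketched with plain least squares on the autoregressive part (a simplified AR-only stand-in for the full ARIMA procedure; the data below are synthetic, not the boron series):

```python
import numpy as np

# Fit an AR(p) model by least squares and score it with a Gaussian AIC,
# AIC = n*log(RSS/n) + 2*(p+1); the order with the smallest AIC wins.
def fit_ar(x, p):
    X = np.column_stack([x[p - i : len(x) - i] for i in range(1, p + 1)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    rss = float(resid @ resid)
    n = len(y)
    aic = n * np.log(rss / n) + 2.0 * (p + 1)
    return coef, aic

# synthetic AR(1) series: x_t = 0.7 x_{t-1} + e_t
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()

best_p = min(range(1, 4), key=lambda p: fit_ar(x, p)[1])
phi_hat = fit_ar(x, 1)[0][0]                 # should recover ~0.7
```

A full ARIMA fit would additionally difference the series and estimate moving-average terms by maximum likelihood, but the AIC-based selection logic is the same.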

  6. Integral equation methods for vesicle electrohydrodynamics in three dimensions

    NASA Astrophysics Data System (ADS)

    Veerapaneni, Shravan

    2016-12-01

    In this paper, we develop a new boundary integral equation formulation that describes the coupled electro- and hydro-dynamics of a vesicle suspended in a viscous fluid and subjected to external flow and electric fields. The dynamics of the vesicle are characterized by a competition between the elastic, electric and viscous forces on its membrane. The classical Taylor-Melcher leaky-dielectric model is employed for the electric response of the vesicle and the Helfrich energy model combined with local inextensibility is employed for its elastic response. The coupled governing equations for the vesicle position and its transmembrane electric potential are solved using a numerical method that is spectrally accurate in space and first-order in time. The method uses a semi-implicit time-stepping scheme to overcome the numerical stiffness associated with the governing equations.
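
    The role of a semi-implicit scheme in overcoming stiffness can be shown on a scalar model problem (the equation and parameters below are assumptions for illustration, not the paper's coupled membrane system): the stiff linear term is treated implicitly while the remaining term stays explicit, permitting time steps far beyond the explicit stability limit.

```python
import numpy as np

# Model problem: y' = -lam*y + sin(t), with lam >> 1 making it stiff
lam = 1000.0
dt = 0.01          # far larger than the explicit-Euler limit dt < 2/lam
t, y = 0.0, 1.0
for _ in range(500):
    # semi-implicit (IMEX) Euler: stiff linear part implicit, forcing explicit
    # (1 + dt*lam) * y_new = y + dt*sin(t)
    y = (y + dt * np.sin(t)) / (1.0 + dt * lam)
    t += dt
```

    With the same dt, fully explicit Euler would amplify the solution by |1 - dt*lam| = 9 per step and blow up; the semi-implicit update instead damps the stiff mode and tracks the slow quasi-steady response.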

  7. Integrated system for the destruction of organics by hydrolysis and oxidation with peroxydisulfate

    DOEpatents

    Cooper, John F.; Balazs, G. Bryan; Hsu, Peter; Lewis, Patricia R.; Adamson, Martyn G.

    2000-01-01

    An integrated system for destruction of organic waste comprises a hydrolysis step at moderate temperature and pressure, followed by direct chemical oxidation using peroxydisulfate. This system can be used to quantitatively destroy volatile or water-insoluble halogenated organic solvents, contaminated soils and sludges, and the organic component of mixed waste. The hydrolysis step results in a substantially single phase of less volatile, more water soluble hydrolysis products, thus enabling the oxidation step to proceed rapidly and with minimal loss of organic substrate in the off-gas.

  8. Promoting research integrity in the geosciences

    NASA Astrophysics Data System (ADS)

    Mayer, Tony

    2015-04-01

    Conducting research in a responsible manner in compliance with codes of research integrity is essential. The geosciences, like all other areas of research endeavour, have their fair share of misconduct cases and causes célèbres. As research becomes more global, more collaborative and more cross-disciplinary, the need for all concerned to work to the same high standards becomes imperative. Modern technology makes it far easier to 'cut and paste' or to manipulate imagery in Photoshop to falsify results, at the same time as making research easier and more meaningful. So we need to promote the highest standards of research integrity and the responsible conduct of research. While responsibility for misconduct ultimately rests with the individual, institutions and the academic research system have to take steps to alleviate the pressure on researchers and promote good practice through training programmes and mentoring. The role of the World Conferences on Research Integrity in promoting the importance of research integrity and statements about good practice will be presented, and the need for training and mentoring programmes will be discussed.

  9. Analyzing Dynamics of Cooperating Spacecraft

    NASA Technical Reports Server (NTRS)

    Hughes, Stephen P.; Folta, David C.; Conway, Darrel J.

    2004-01-01

    A software library has been developed to enable high-fidelity computational simulation of the dynamics of multiple spacecraft distributed over a region of outer space and acting with a common purpose. All of the modeling capabilities afforded by this software are available independently in other, separate software systems, but have not previously been brought together in a single system. A user can choose among several dynamical models, many high-fidelity environment models, and several numerical-integration schemes. The user can select whether to use models that assume weak coupling between spacecraft, or strong coupling in the case of feedback control or tethering of spacecraft to each other. For weak coupling, spacecraft orbits are propagated independently and are synchronized in time by controlling the step size of the integration. For strong coupling, the orbits are integrated simultaneously. Among the integration schemes that the user can choose are Runge-Kutta-Verner, Prince-Dormand, Adams-Bashforth-Moulton, and Bulirsch-Stoer. Comparisons of performance are included for both the weak- and strong-coupling dynamical models for all of the numerical integrators.
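
    The weak-coupling idea of propagating orbits independently and synchronizing them by controlling the integration step can be sketched as follows (the harmonic-oscillator dynamics and step sizes are illustrative assumptions; the library itself uses the higher-order integrators listed above):

```python
import numpy as np

def rk4_step(f, t, y, h):
    # classic fourth-order Runge-Kutta step
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def propagate_to(f, t, y, t_sync, h_max):
    """Advance y to t_sync, shrinking the final step so the epoch is hit exactly."""
    while t_sync - t > 1e-12:
        h = min(h_max, t_sync - t)
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Two weakly coupled "spacecraft", here stand-in harmonic oscillators that
# prefer different internal step sizes but must meet at common sync epochs
f = lambda t, y: np.array([y[1], -y[0]])
ya, yb = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for t_sync in np.arange(0.5, 5.0, 0.5):        # shared synchronization epochs
    ya = propagate_to(f, t_sync - 0.5, ya, t_sync, h_max=0.07)
    yb = propagate_to(f, t_sync - 0.5, yb, t_sync, h_max=0.11)
```

    Each trajectory takes whatever internal steps it likes, but both land exactly on the common epochs where coupling terms (or data exchange) would be evaluated.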

  10. Marcus canonical integral for non-Gaussian processes and its computation: pathwise simulation and tau-leaping algorithm.

    PubMed

    Li, Tiejun; Min, Bin; Wang, Zhiming

    2013-03-14

    The stochastic integral ensuring the Newton-Leibniz chain rule is essential in stochastic energetics. The Marcus canonical integral has this property and can be understood as the Wong-Zakai type smoothing limit when the driving process is non-Gaussian. However, this important concept seems not to be well known among physicists. In this paper, we discuss the Marcus integral for non-Gaussian processes and its computation in the context of stochastic energetics. We give a comprehensive introduction to the Marcus integral and compare three equivalent definitions in the literature. We introduce the exact pathwise simulation algorithm and give its error analysis. We show how to compute thermodynamic quantities based on the pathwise simulation algorithm. We highlight the information hidden in the Marcus mapping, which plays the key role in determining thermodynamic quantities. We further propose a tau-leaping algorithm, which advances the process with deterministic time steps when the tau-leaping condition is satisfied. Numerical experiments and efficiency analysis show that the algorithm is very promising.
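
    The flavor of pathwise (Marcus-sense) simulation can be conveyed with a toy linear SDE dx = x ◦ dL driven by a compound Poisson process (the linear drift and all parameters are assumptions for illustration, not the paper's general algorithm): each jump ΔL is applied through the Marcus mapping, i.e. by integrating dy/ds = y ΔL over s in [0, 1], which here reduces to multiplication by exp(ΔL) and thereby preserves the chain rule.

```python
import numpy as np

rng = np.random.default_rng(1)

def marcus_path(x0, T, rate, jump_scale):
    """Pathwise simulation of dx = x ◦ dL for a compound-Poisson driver L.

    Jump times come from the Poisson clock; each jump is pushed through the
    Marcus mapping (here: x -> x * exp(dL)), unlike the Ito update x*(1 + dL)."""
    t, x = 0.0, x0
    while True:
        t += rng.exponential(1.0 / rate)      # waiting time to the next jump
        if t > T:
            return x
        dL = rng.normal(0.0, jump_scale)      # jump size of the driving noise
        x *= np.exp(dL)                       # Marcus mapping for linear drift

x_end = marcus_path(1.0, T=1.0, rate=50.0, jump_scale=0.05)
```

    Because the Marcus solution here is x0·exp(L_T), positivity is preserved exactly along the path, which an Itô-sense update would not guarantee.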

  11. Analysis of High Order Difference Methods for Multiscale Complex Compressible Flows

    NASA Technical Reports Server (NTRS)

    Sjoegreen, Bjoern; Yee, H. C.; Tang, Harry (Technical Monitor)

    2002-01-01

    Accurate numerical simulations of complex multiscale compressible viscous flows, especially high-speed turbulence, combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high-resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes with incremental studies was initiated. Here we further refine the analysis of, and improve the understanding of, the adaptive numerical dissipation control strategy. Basically, the development of these schemes focuses on high order nondissipative schemes and takes advantage of the progress that has been made over the last 30 years in numerical methods for conservation laws, such as techniques for imposing boundary conditions, techniques for stability at shock waves, and techniques for stable and accurate long-time integration. We concentrate on high order centered spatial discretizations and a fourth-order Runge-Kutta temporal discretization as the base scheme. Near the boundaries, the base scheme has stable boundary difference operators. To further enhance stability, the split form of the inviscid flux derivatives is frequently used for smooth flow problems. To enhance nonlinear stability, linear high order numerical dissipation is employed away from discontinuities, and nonlinear filters are employed after each time step in order to suppress spurious oscillations near discontinuities and to minimize the smearing of turbulent fluctuations. Although these schemes are built from many components, each of which is well known, it is not entirely obvious how the different components are best connected. For example, the nonlinear filter could instead have been built into the spatial discretization, so that it would have been activated at each stage in the Runge-Kutta time stepping.
We could think of a mechanism that activates the split form of the equations only in some parts of the domain. Another issue is how to define good sensors for determining in which parts of the computational domain a certain feature should be filtered by the appropriate numerical dissipation. For the present study we employ a previously introduced wavelet technique as the sensor. Here, the method is briefly described with selected numerical experiments.
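
    A crude stand-in for such a sensor (a second-difference detail indicator rather than the paper's actual wavelet construction, which is an assumption here) shows how discontinuities can be flagged so that the nonlinear filter acts only where it is needed:

```python
import numpy as np

def shock_sensor(u, threshold=0.5):
    """Flag cells whose first-level detail coefficient is large.

    Discontinuities produce O(1) detail coefficients, while smooth regions
    produce O(h^2) ones, so thresholding relative to the maximum isolates
    the cells where dissipative filtering should be activated."""
    detail = np.abs(u[2:] - 2*u[1:-1] + u[:-2]) / 2.0
    flags = np.zeros_like(u, dtype=bool)
    flags[1:-1] = detail > threshold * detail.max()
    return flags

x = np.linspace(0.0, 1.0, 101)
# a jump at x = 0.5 superposed on a small smooth oscillation
u = np.where(x < 0.5, 1.0, 0.0) + 0.01 * np.sin(8 * np.pi * x)
flags = shock_sensor(u)
```

    Only the two cells straddling the jump are flagged; the smooth oscillation, which a cruder gradient test might also filter (and smear), is left untouched.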

  12. A qualitative evaluation of a physician-delivered pedometer-based step count prescription strategy with insight from participants and treating physicians.

    PubMed

    Cooke, Alexandra B; Pace, Romina; Chan, Deborah; Rosenberg, Ellen; Dasgupta, Kaberi; Daskalopoulou, Stella S

    2018-05-01

    The integration of pedometers into clinical practice has the potential to enhance physical activity levels in patients with chronic disease. Our SMARTER randomized controlled trial demonstrated that a physician-delivered step count prescription strategy has measurable effects on daily steps, glycemic control, and insulin resistance in patients with type 2 diabetes and/or hypertension. In this study, we aimed to understand perceived barriers and facilitators influencing successful uptake and sustainability of the strategy, from patient and physician perspectives. Qualitative in-depth interviews were conducted in a purposive sample of physicians (n = 10) and participants (n = 20), including successful and less successful cases in terms of pedometer-assessed step count improvements. Themes that achieved saturation in either group through thematic analysis are presented. All participants appreciated the pedometer-based monitoring combined with step count prescriptions. Accountability to physicians and support offered by the trial coordinator influenced participant motivation. Those who increased step counts adopted strategies to integrate more steps into their routines and were able to overcome weather-related barriers by finding indoor alternative options to outdoor steps. Those who decreased step counts reported difficulty in overcoming weather-related challenges, health limitations and work constraints. Physicians indicated the strategy provided a framework for discussing physical activity and motivating patients, but emphasized the need for support from allied professionals to help deliver the strategy in busy clinical settings. A physician-delivered step count prescription strategy was feasibly integrated into clinical practice and successful in engaging most patients; however, continual support is needed for maximal engagement and sustained use. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Stair ascent with an innovative microprocessor-controlled exoprosthetic knee joint.

    PubMed

    Bellmann, Malte; Schmalz, Thomas; Ludwigs, Eva; Blumentritt, Siegmar

    2012-12-01

    Climbing stairs can pose a major challenge for above-knee amputees as a result of compromised motor performance and limitations to prosthetic design. A new, innovative microprocessor-controlled prosthetic knee joint, the Genium, incorporates a function that allows an above-knee amputee to climb stairs step over step. To execute this function, a number of different sensors and complex switching algorithms were integrated into the prosthetic knee joint. The function is intuitive for the user. A biomechanical study was conducted to assess objective gait measurements and calculate joint kinematics and kinetics as subjects ascended stairs. Results demonstrated that climbing stairs step over step is more biomechanically efficient for an amputee using the Genium prosthetic knee than the previously possible conventional method where the extended prosthesis is trailed as the amputee executes one or two steps at a time. There is a natural amount of stress on the residual musculoskeletal system, and it has been shown that the healthy contralateral side supports the movements of the amputated side. The mechanical power that the healthy contralateral knee joint needs to generate during the extension phase is also reduced. Similarly, there is near normal loading of the hip joint on the amputated side.

  14. The integrated simulation and assessment of the impacts of process change in biotherapeutic antibody production.

    PubMed

    Chhatre, Sunil; Jones, Carl; Francis, Richard; O'Donovan, Kieran; Titchener-Hooker, Nigel; Newcombe, Anthony; Keshavarz-Moore, Eli

    2006-01-01

    Growing commercial pressures in the pharmaceutical industry are establishing a need for robust computer simulations of whole bioprocesses to allow rapid prediction of the effects of changes made to manufacturing operations. This paper presents an integrated process simulation that models the cGMP manufacture of the FDA-approved biotherapeutic CroFab, an IgG fragment used to treat rattlesnake envenomation (Protherics U.K. Limited, Blaenwaun, Ffostrasol, Llandysul, Wales, U.K.). Initially, the product is isolated from ovine serum by precipitation and centrifugation, before enzymatic digestion of the IgG to produce FAB and FC fragments. These are purified by ion exchange and affinity chromatography to remove the FC and non-specific FAB fragments from the final venom-specific FAB product. The model was constructed in a discrete event simulation environment and used to determine the potential impact of a series of changes to the process, such as increasing the step efficiencies or volumes of chromatographic matrices, upon product yields and process times. The study indicated that the overall FAB yield was particularly sensitive to changes in the digestive and affinity chromatographic step efficiencies, which have a predicted 30% greater impact on process FAB yield than do the precipitation or centrifugation stages. The study showed that increasing the volume of affinity matrix has a negligible impact upon total process time. Although results such as these would require experimental verification within the physical constraints of the process and the facility, the model predictions are still useful in allowing rapid "what-if" scenario analysis of the likely impacts of process changes within such an integrated production process.
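
    The kind of sensitivity conclusion reported above can be illustrated with a toy multiplicative-yield model (all step names and efficiencies below are hypothetical placeholders, not CroFab process data):

```python
# Hypothetical step efficiencies for a chain of unit operations (illustrative only)
steps = {
    "precipitation": 0.90,
    "centrifugation": 0.95,
    "digestion": 0.80,
    "ion_exchange": 0.85,
    "affinity": 0.75,
}

def overall_yield(effs):
    # overall process yield is the product of the individual step yields
    y = 1.0
    for e in effs.values():
        y *= e
    return y

base = overall_yield(steps)

# Sensitivity: relative gain in overall yield from a 5-point improvement per step
sensitivity = {}
for name in steps:
    tweaked = dict(steps)
    tweaked[name] = min(1.0, steps[name] + 0.05)
    sensitivity[name] = overall_yield(tweaked) / base - 1.0
```

    Because the relative gain from improving one step is 0.05/e, the least efficient steps (here the hypothetical digestion and affinity stages) dominate the overall yield, mirroring the qualitative finding of the simulation study.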

  15. Integration of InSAR and GIS in the Study of Surface Faults Caused by Subsidence-Creep-Fault Processes in Celaya, Guanajuato, Mexico

    NASA Astrophysics Data System (ADS)

    Avila-Olivera, Jorge A.; Farina, Paolo; Garduño-Monroy, Victor H.

    2008-05-01

    In Celaya city, Subsidence-Creep-Fault Processes (SCFP) began to become visible at the beginning of the 1980s with the cracking that gave rise to the surface faults "Oriente" and "Poniente". At present, the city is affected by five surface faults that display a preferential NNW-SSE direction, parallel to the regional "Taxco-San Miguel de Allende" faulting system. In order to study the SCFP in the city, the first step was to obtain a map of surface faults by integrating a field survey and an urban city plan in a GIS. The following step was to create a map of the current phreatic level decline in the city from deep-well data, using the kriging method to obtain a continuous surface. Finally, the interferogram maps resulting from an InSAR analysis of 9 SAR images covering the time interval between July 12, 2003 and May 27, 2006 were integrated into the GIS. All the maps generated show how the surface faults divide the city from north to south into two zones that behave differently. The difference in phreatic level decline between these two zones is 60 m, and the InSAR study revealed that the western zone remains practically stable, while sinking of 7-10 cm/year occurs between the surface faults "Oriente" and "Universidad Pedagógica", as well as in the NE and SE portions of the city.
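
    The kriging step, interpolating scattered well measurements onto a continuous surface, can be sketched with a minimal ordinary-kriging solver (the linear variogram and the well data below are assumptions for illustration; in practice a variogram model fitted to the data would be used):

```python
import numpy as np

def ordinary_kriging(xy, z, x0, variogram=lambda h: h):
    """Ordinary kriging of scattered data onto a single target point x0.

    Solves the bordered system [[Gamma, 1], [1^T, 0]] [w; mu] = [gamma0; 1],
    where Gamma holds pairwise variogram values and the last row enforces
    that the weights sum to one (unbiasedness)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, :n] = A[:n, n] = 1.0                 # Lagrange-multiplier border
    b = np.append(variogram(np.linalg.norm(xy - x0, axis=1)), 1.0)
    w = np.linalg.solve(A, b)[:n]
    return float(w @ z)

# Hypothetical phreatic-level declines (m) at four wells on a unit square
wells = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
decline = np.array([10.0, 70.0, 12.0, 68.0])  # sharp east-west contrast
z0 = ordinary_kriging(wells, decline, np.array([0.5, 0.5]))
```

    At the symmetric center point the weights are all 1/4, so the estimate is the arithmetic mean of the four wells; off-center targets weight nearby wells more heavily.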

  16. Integration of InSAR and GIS in the Study of Surface Faults Caused by Subsidence-Creep-Fault Processes in Celaya, Guanajuato, Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avila-Olivera, Jorge A.; Instituto de Investigaciones Metalurgicas, Universidad Michoacana de San Nicolas de Hidalgo, C.U., 58030 Morelia, Michoacan; Farina, Paolo

    2008-05-07

    In Celaya city, Subsidence-Creep-Fault Processes (SCFP) began to become visible at the beginning of the 1980s with the cracking that gave rise to the surface faults 'Oriente' and 'Poniente'. At present, the city is affected by five surface faults that display a preferential NNW-SSE direction, parallel to the regional 'Taxco-San Miguel de Allende' faulting system. In order to study the SCFP in the city, the first step was to obtain a map of surface faults by integrating a field survey and an urban city plan in a GIS. The following step was to create a map of the current phreatic level decline in the city from deep-well data, using the 'kriging' method to obtain a continuous surface. Finally, the interferogram maps resulting from an InSAR analysis of 9 SAR images covering the time interval between July 12, 2003 and May 27, 2006 were integrated into the GIS. All the maps generated show how the surface faults divide the city from north to south into two zones that behave differently. The difference in phreatic level decline between these two zones is 60 m, and the InSAR study revealed that the western zone remains practically stable, while sinking of 7-10 cm/year occurs between the surface faults 'Oriente' and 'Universidad Pedagogica', as well as in the NE and SE portions of the city.

  17. The Chimera II Real-Time Operating System for advanced sensor-based control applications

    NASA Technical Reports Server (NTRS)

    Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.

    1992-01-01

    Attention is given to the Chimera II Real-Time Operating System, which has been developed for advanced sensor-based control applications. Chimera II provides a high-performance real-time kernel and a variety of IPC features. The hardware platform required to run Chimera II consists of commercially available hardware, and allows custom hardware to be easily integrated. The design allows it to be used with almost any type of VMEbus-based processors and devices. It allows radically differing hardware to be programmed using a common system, thus providing a first and necessary step toward the standardization of reconfigurable systems, resulting in a reduction of development time and cost.

  18. Bidding-based autonomous process planning and scheduling

    NASA Astrophysics Data System (ADS)

    Gu, Peihua; Balasubramanian, Sivaram; Norrie, Douglas H.

    1995-08-01

    Improving productivity through computer integrated manufacturing systems (CIMS) and concurrent engineering requires that the islands of automation in an enterprise be completely integrated. The first step in this direction is to integrate design, process planning, and scheduling. This can be achieved through a bidding-based process planning approach. The product is represented in a STEP model with detailed design and administrative information, including design specifications, batch size, and due dates. Upon arrival at the manufacturing facility, the product is registered with the shop floor manager, which is essentially a coordinating agent. The shop floor manager broadcasts the product's requirements to the machines. The shop contains autonomous machines that have knowledge about their functionality, capabilities, tooling, and schedule. Each machine has its own process planner and responds to the product's request in a way that is consistent with its capabilities and capacities. When more than one machine offers certain process(es) for the same requirements, they enter into negotiation. Based on processing time, due date, and cost, one of the machines wins the contract. The successful machine updates its schedule and advises the product to request raw material for processing. The concept was implemented using a multi-agent system, with task decomposition and planning achieved through contract nets. Examples are included to illustrate the approach.
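
    A minimal sketch of the bidding round conveys the protocol (all class names, rates, and schedules below are invented for illustration; the actual system runs contract nets over a full multi-agent architecture):

```python
# Contract-net sketch: machines bid on a broadcast requirement; the cheapest
# feasible bid (one that meets the due date) wins the contract.

class Machine:
    def __init__(self, name, capabilities, rate, finish_time):
        self.name = name
        self.capabilities = capabilities   # processes this machine can offer
        self.rate = rate                   # cost per hour (illustrative)
        self.finish_time = finish_time     # when its current schedule ends

    def bid(self, process, hours, due):
        if process not in self.capabilities:
            return None                    # cannot offer this process
        completion = self.finish_time + hours
        if completion > due:
            return None                    # would miss the due date
        return {"machine": self.name, "cost": self.rate * hours,
                "completion": completion}

def award(machines, process, hours, due):
    """Broadcast the requirement, collect bids, and award to the cheapest."""
    bids = [b for m in machines if (b := m.bid(process, hours, due))]
    return min(bids, key=lambda b: b["cost"]) if bids else None

shop = [Machine("mill_1", {"milling", "drilling"}, rate=40.0, finish_time=2.0),
        Machine("mill_2", {"milling"}, rate=30.0, finish_time=6.0),
        Machine("lathe_1", {"turning"}, rate=25.0, finish_time=0.0)]
winner = award(shop, "milling", hours=3.0, due=8.0)
```

    Here the cheaper mill loses because its backlog would miss the due date, illustrating how negotiation trades off cost against schedule.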

  19. An integrated approach to evaluating the economic costs of wildfire hazard reduction through wood utilization opportunities in the southwestern United States

    Treesearch

    Eini C. Lowell; Dennis R. Becker; Robert Rummer; Debra Larson; Linda Wadleigh

    2008-01-01

    This research provides an important step in the conceptualization and development of an integrated wildfire fuels reduction system from silvicultural prescription, through stem selection, harvesting, in-woods processing, transport, and market selection. Decisions made at each functional step are informed by knowledge about subsequent functions. Data on the resource...

  20. An integrated approach to evaluating the economic costs of wildfire hazard reduction through wood utilization opportunities in the Southwestern United States

    Treesearch

    Eini C. Lowell; Dennis R. Becker; Robert Rummer; Debra Larson; Linda Wadleigh

    2008-01-01

    This research provides an important step in the conceptualization and development of an integrated wildfire fuels reduction system from silvicultural prescription, through stem selection, harvesting, in-woods processing, transport, and market selection. Decisions made at each functional step are informed by knowledge about subsequent functions. Data on the resource...

  1. Review of Integrated behavioral health in primary care: Step-by-step guidance for assessment and intervention (Second edition).

    PubMed

    Ogbeide, Stacy A

    2017-09-01

    Reviews the book, Integrated Behavioral Health in Primary Care: Step-By-Step Guidance for Assessment and Intervention (Second Edition) by Anne C. Dobmeyer, Mark S. Oordt, Jeffrey L. Goodie, and Christopher L. Hunter (see record 2016-59132-000). This comprehensive book is well organized and covers many of the complex issues faced within the Primary Care Behavioral Health (PCBH) model and primary care setting: from uncontrolled type II diabetes to posttraumatic stress disorder. Primary care has changed since the initial release of this book, and the second edition covers many of these changes with up-to-date literature on topics such as population health and the patient-centered medical home. The book is organized into three parts. The first three chapters describe the foundation of integrated behavioral consultation services. The next 12 chapters address common behavioral health issues that present in primary care. Last, the final two chapters focus on special topics such as suicidal behavior and designing clinical pathways. This was an enjoyable read and worth the investment, especially if you are a trainee or a seasoned professional new to the practice of integrated behavioral health in primary care. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. DYCAST: A finite element program for the crash analysis of structures

    NASA Technical Reports Server (NTRS)

    Pifko, A. B.; Winter, R.; Ogilvie, P.

    1987-01-01

    DYCAST is a nonlinear structural dynamic finite element computer code developed for crash simulation. The element library contains stringers, beams, membrane skin triangles, plate bending triangles and spring elements. Changing stiffnesses in the structure are accounted for by plasticity and very large deflections. Material nonlinearities are accommodated by one of three options: elastic-perfectly plastic, elastic-linear hardening plastic, or elastic-nonlinear hardening plastic of the Ramberg-Osgood type. Geometric nonlinearities are handled in an updated Lagrangian formulation by reforming the structure into its deformed shape after small time increments while accumulating deformations, strains, and forces. The nonlinearities due to combined loadings are maintained, and stiffness variation due to structural failures is computed. Numerical time integrators available are fixed-step central difference, modified Adams, Newmark-beta, and Wilson-theta. The last three have a variable time step capability, which is controlled internally by a solution convergence error measure. Other features include: multiple time-load history tables to subject the structure to time dependent loading; gravity loading; initial pitch, roll, yaw, and translation of the structural model with respect to the global system; a bandwidth optimizer as a pre-processor; and deformed plots and graphics as post-processors.
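
    The variable-step idea, in which an internal convergence error measure drives the step size, can be sketched with step doubling (RK4 and the damped-oscillator test problem stand in for DYCAST's Adams/Newmark integrators; the tolerances are illustrative assumptions):

```python
import numpy as np

def rk_step(f, t, y, h):
    # classic RK4 step, standing in for the variable-step integrators above
    k1 = f(t, y); k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2); k4 = f(t + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def adaptive_integrate(f, t, y, t_end, h, tol=1e-8):
    """Variable-step driver: a step-doubling error measure rejects and halves
    oversized steps, and enlarges the step when the error is comfortably small."""
    while t_end - t > 1e-12:
        h = min(h, t_end - t)
        full = rk_step(f, t, y, h)                          # one step of size h
        half = rk_step(f, t + h/2, rk_step(f, t, y, h/2), h/2)  # two half steps
        err = np.max(np.abs(full - half))                   # convergence measure
        if err > tol:
            h /= 2                      # reject: retry with a smaller step
            continue
        t, y = t + h, half              # accept the more accurate result
        if err < tol / 32:
            h *= 2                      # cheap region: enlarge the step
    return y

# Damped oscillator y'' + 0.2 y' + y = 0 as a decaying structural transient
f = lambda t, y: np.array([y[1], -0.2 * y[1] - y[0]])
y_end = adaptive_integrate(f, 0.0, np.array([1.0, 0.0]), 10.0, h=0.5)
```

    The step shrinks automatically during the sharp initial transient and grows again as the response decays, which is exactly the behavior a crash code wants around impact events.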

  3. Extending molecular simulation time scales: Parallel in time integrations for high-level quantum chemistry and complex force representations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bylaska, Eric J., E-mail: Eric.Bylaska@pnnl.gov; Weare, Jonathan Q., E-mail: weare@uchicago.edu; Weare, John H., E-mail: jweare@ucsd.edu

    2013-08-21

    Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator, f (e.g., the Verlet algorithm), is available to propagate the system from time t_i (trajectory positions and velocities x_i = (r_i, v_i)) to time t_{i+1} (x_{i+1}) by x_{i+1} = f_i(x_i), the dynamics problem spanning an interval from t_0…t_M can be transformed into a root finding problem, F(X) = [x_i − f(x_{i−1})]_{i=1,M} = 0, for the trajectory variables. The root finding problem is solved using a variety of root finding techniques, including quasi-Newton and preconditioned quasi-Newton schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsening time-steps provide preconditioners for the root finding problem. However, for MD and AIMD simulations, such preconditioners are not required to obtain reasonable convergence and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and a HCl + 4H2O AIMD simulation at the MP2 level. The maximum speedup ((serial execution time)/(parallel execution time)) obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0.
For the AIMD MP2 simulations, the algorithms achieved speedups of up to 14.3. The parallel in time algorithms can be implemented in a distributed computing environment using very slow transmission control protocol/Internet protocol networks. Scripts written in Python that make calls to a precompiled quantum chemistry package (NWChem) are demonstrated to provide an actual speedup of 8.2 for a 2.5 ps AIMD simulation of HCl + 4H2O at the MP2/6-31G* level. Implemented in this way these algorithms can be used for long time high-level AIMD simulations at a modest cost using machines connected by very slow networks such as WiFi, or in different time zones connected by the Internet. The algorithms can also be used with programs that are already parallel. Using these algorithms, we are able to reduce the cost of a MP2/6-311++G(2d,2p) simulation that had reached its maximum possible speedup in the parallelization of the electronic structure calculation from 32 s/time step to 6.9 s/time step.
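
    The root-finding formulation F(X) = [x_i − f(x_{i−1})] = 0 can be illustrated with a Parareal-style iteration on a scalar decay problem (the fine and coarse propagators and the problem itself are toy stand-ins; in the paper the fine propagator is an MD/AIMD integrator and the updates use quasi-Newton schemes):

```python
def fine(y, dt, m=100):
    # accurate propagator: many small forward-Euler sub-steps (stand-in for Verlet)
    for _ in range(m):
        y += (dt / m) * (-y)
    return y

def coarse(y, dt):
    # cheap single forward-Euler step, used to precondition the iteration
    return y + dt * (-y)

M, dt, y0 = 10, 0.2, 1.0

# Initial guess for the trajectory unknowns X = (x_0, ..., x_M): a coarse sweep
X = [y0]
for i in range(M):
    X.append(coarse(X[-1], dt))

# Iterate on F(X) = 0; each fine(X[i]) evaluation is independent across i
# and is exactly what would be farmed out to a separate processor.
for _ in range(M):
    F_old = [fine(X[i], dt) for i in range(M)]      # parallelizable stage
    G_old = [coarse(X[i], dt) for i in range(M)]
    X_new = [y0]
    for i in range(M):
        X_new.append(coarse(X_new[i], dt) + F_old[i] - G_old[i])
    X = X_new
```

    After at most M iterations the trajectory unknowns coincide with the sequential fine solution, so the expensive fine evaluations have been moved into stages that can all run concurrently.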

  4. AN INTEGRATED PERSPECTIVE ON THE ASSESSMENT OF TECHNOLOGIES: INTEGRATE-HTA.

    PubMed

    Wahlster, Philip; Brereton, Louise; Burns, Jacob; Hofmann, Björn; Mozygemba, Kati; Oortwijn, Wija; Pfadenhauer, Lisa; Polus, Stephanie; Rehfuess, Eva; Schilling, Imke; van der Wilt, Gert Jan; Gerhardus, Ansgar

    2017-01-01

    Current health technology assessment (HTA) is not well equipped to assess complex technologies as insufficient attention is being paid to the diversity in patient characteristics and preferences, context, and implementation. Strategies to integrate these and several other aspects, such as ethical considerations, in a comprehensive assessment are missing. The aim of the European research project INTEGRATE-HTA was to develop a model for an integrated HTA of complex technologies. A multi-method, four-stage approach guided the development of the INTEGRATE-HTA Model: (i) definition of the different dimensions of information to be integrated, (ii) literature review of existing methods for integration, (iii) adjustment of concepts and methods for assessing distinct aspects of complex technologies in the frame of an integrated process, and (iv) application of the model in a case study and subsequent revisions. The INTEGRATE-HTA Model consists of five steps, each involving stakeholders: (i) definition of the technology and the objective of the HTA; (ii) development of a logic model to provide a structured overview of the technology and the system in which it is embedded; (iii) evidence assessment on effectiveness, economic, ethical, legal, and socio-cultural aspects, taking variability of participants, context, implementation issues, and their interactions into account; (iv) populating the logic model with the data generated in step 3; (v) structured process of decision-making. The INTEGRATE-HTA Model provides a structured process for integrated HTAs of complex technologies. Stakeholder involvement in all steps is essential as a means of ensuring relevance and meaningful interpretation of the evidence.

  5. An Assessment of IMPAC - Integrated Methodology for Propulsion and Airframe Controls

    NASA Technical Reports Server (NTRS)

    Walker, G. P.; Wagner, E. A.; Bodden, D. S.

    1996-01-01

    This report documents the work done under a NASA sponsored contract to transition to industry technologies developed under the NASA Lewis Research Center IMPAC (Integrated Methodology for Propulsion and Airframe Control) program. The critical steps in IMPAC are exercised on an example integrated flight/propulsion control design for linear airframe/engine models of a conceptual STOVL (Short Take-Off and Vertical Landing) aircraft, and MATRIXX (TM) executive files to implement each step are developed. The results from the example study are analyzed and lessons learned are listed along with recommendations that will improve the application of each design step. The end product of this research is a set of software requirements for developing a user-friendly control design tool which will automate the steps in the IMPAC methodology. Prototypes for a graphical user interface (GUI) are sketched to specify how the tool will interact with the user, and it is recommended to build the tool around existing computer aided control design software packages.

  6. High-speed extended-term time-domain simulation for online cascading analysis of power system

    NASA Astrophysics Data System (ADS)

    Fu, Chuan

    A high-speed extended-term (HSET) time domain simulator (TDS), intended to become a part of an energy management system (EMS), has been newly developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events, (ii) ability to simulate both fast and slow dynamics for 1-3 hours in advance, (iii) inclusion of rigorous protection-system modeling, (iv) intelligence for corrective action ID, storage, and fast retrieval, and (v) high-speed execution. Very fast on-line computational capability is the most desired attribute of this simulator. Based on the process of solving the differential-algebraic equations describing the dynamics of a power system, HSET-TDS seeks computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable, but it possesses greater high-order precision (h^4) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable step size implementations. This thesis provides the underlying theory on which we advocate use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high speed extended-term time domain simulation for on-line purposes, this thesis presents principles for designing numerical solvers of differential-algebraic systems associated with power system time-domain simulation, including DAE construction strategies (Direct Solution Method), integration methods (HH4), nonlinear solvers (Very Dishonest Newton), and linear solvers (SuperLU).
We have implemented a design appropriate for HSET-TDS, and we compare it to various solvers, including the commercial-grade PSSE program, with respect to computational efficiency and accuracy, using as examples the New England 39-bus system, the expanded 8775-bus system, and the PJM 13,029-bus system. Third, we have explored a stiffness-decoupling method, intended to be part of a parallel design of time domain simulation software for supercomputers. The stiffness-decoupling method combines the advantages of implicit methods (A-stability) and explicit methods (less computation). With the new stiffness detection method proposed herein, the stiffness can be captured. The expanded 975-bus system is used to test simulation efficiency. Finally, several parallel strategies for supercomputer deployment to simulate power system dynamics are proposed and compared. Design A partitions the task by scale using the stiffness-decoupling method, waveform relaxation, and a parallel linear solver. Design B partitions the task along the time axis using a highly precise integration method, the Kuntzmann-Butcher method of order 8 (KB8). The strategy of partitioning events partitions the whole simulation along the time axis through a simulated sequence of cascading events. Of all the strategies proposed, the strategy of partitioning cascading events is recommended, since the sub-tasks for each processor are totally independent and therefore minimal communication time is needed.
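The HH4 method named in the abstract is commonly identified with the two-stage Gauss-Legendre implicit Runge-Kutta scheme (A-stable, order 4). A minimal sketch of one step follows, using fixed-point iteration for the stage equations; this is illustrative only (the thesis pairs the method with Newton-type nonlinear solvers and SuperLU for large DAE systems):

```python
import math

def hh4_step(f, t, y, h, iters=50, tol=1e-12):
    """One step of the 2-stage Gauss-Legendre (Hammer-Hollingsworth) IRK.

    A-stable and 4th-order accurate. Stage slopes are found by fixed-point
    iteration, which suffices for this non-stiff demo; production codes
    use Newton iterations instead."""
    s3 = math.sqrt(3.0) / 6.0
    a11, a12 = 0.25, 0.25 - s3
    a21, a22 = 0.25 + s3, 0.25
    c1, c2 = 0.5 - s3, 0.5 + s3
    k1 = k2 = f(t, y)                     # initial guess for stage slopes
    for _ in range(iters):
        k1n = f(t + c1 * h, y + h * (a11 * k1 + a12 * k2))
        k2n = f(t + c2 * h, y + h * (a21 * k1 + a22 * k2))
        converged = abs(k1n - k1) + abs(k2n - k2) < tol
        k1, k2 = k1n, k2n
        if converged:
            break
    return y + 0.5 * h * (k1 + k2)        # b1 = b2 = 1/2

# Demo: y' = -y, y(0) = 1; exact solution is exp(-t).
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = hh4_step(lambda t, y: -y, t, y, h)
    t += h
print(abs(y - math.exp(-1.0)))  # small O(h^4) error
```

The larger stable step size of such an A-stable, fourth-order scheme is exactly what the abstract argues makes variable-step simulation more efficient.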

  7. Magnetic Resonance Imaging-Guided Adaptive Radiation Therapy: A "Game Changer" for Prostate Treatment?

    PubMed

    Pathmanathan, Angela U; van As, Nicholas J; Kerkmeijer, Linda G W; Christodouleas, John; Lawton, Colleen A F; Vesprini, Danny; van der Heide, Uulke A; Frank, Steven J; Nill, Simeon; Oelfke, Uwe; van Herk, Marcel; Li, X Allen; Mittauer, Kathryn; Ritter, Mark; Choudhury, Ananya; Tree, Alison C

    2018-02-01

    Radiation therapy to the prostate involves increasingly sophisticated delivery techniques and changing fractionation schedules. With a low estimated α/β ratio, a larger dose per fraction would be beneficial, with moderate fractionation schedules rapidly becoming a standard of care. The integration of a magnetic resonance imaging (MRI) scanner and linear accelerator allows for accurate soft tissue tracking with the capacity to replan for the anatomy of the day. Extreme hypofractionation schedules become a possibility using the potentially automated steps of autosegmentation, MRI-only workflow, and real-time adaptive planning. The present report reviews the steps involved in hypofractionated adaptive MRI-guided prostate radiation therapy and addresses the challenges for implementation. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Improvements of the particle-in-cell code EUTERPE for petascaling machines

    NASA Astrophysics Data System (ADS)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Kleiber, Ralf; Castejón, Francisco; Cela, José M.

    2011-09-01

    In the present work we report some performance measures and computational improvements recently carried out using the gyrokinetic code EUTERPE (Jost, 2000 [1] and Jost et al., 1999 [2]), which is based on the general particle-in-cell (PIC) method. The scalability of the code has been studied for up to sixty thousand processing elements and some steps towards a complete hybridization of the code were made. As a numerical example, non-linear simulations of Ion Temperature Gradient (ITG) instabilities have been carried out in screw-pinch geometry and the results are compared with earlier works. A parametric study of the influence of variables (step size of the time integrator, number of markers, grid size) on the quality of the simulation is presented.

  9. Single-step methods for predicting orbital motion considering its periodic components

    NASA Astrophysics Data System (ADS)

    Lavrov, K. N.

    1989-01-01

    Modern numerical methods for the integration of ordinary differential equations can provide accurate and universal solutions to celestial mechanics problems. The implicit single-sequence algorithms of Everhart and multiple-step computational schemes using a priori information on periodic components can be combined into implicit single-sequence algorithms that retain the advantages of both. The construction and analysis of the properties of such algorithms are studied, utilizing trigonometric approximation of the solutions of differential equations containing periodic components. The algorithms require 10 percent more machine memory than the Everhart algorithms, but are twice as fast, and yield short-term predictions valid for five to ten orbits with good accuracy, five to six times faster than algorithms using other methods.

  10. The people side of MRP (materiel requirements planning).

    PubMed

    Lunn, T

    1994-05-01

    A montage of ideas and concepts has been successfully used to train and motivate people to use MRP II systems more effectively. This is important today because many companies are striving to achieve World Class Manufacturing status. Closed loop Materiel Requirements Planning (MRP) systems are an integral part of the process of continuous improvement. Successfully using a formal management planning system, such as MRP II, is a fundamental stepping stone on the path toward World Class Excellence. Included in this article are techniques that companies use to reduce lead time, simplify bills of materiel, and improve schedule adherence. These and other steps all depend on the people who use the system. The focus is on how companies can use the MRP tool more effectively.

  11. Machine remaining useful life prediction: An integrated adaptive neuro-fuzzy and high-order particle filtering approach

    NASA Astrophysics Data System (ADS)

    Chen, Chaochao; Vachtsevanos, George; Orchard, Marcos E.

    2012-04-01

    Machine prognosis can be considered as the generation of long-term predictions that describe the evolution in time of a fault indicator, with the purpose of estimating the remaining useful life (RUL) of a failing component/subsystem so that timely maintenance can be performed to avoid catastrophic failures. This paper proposes an integrated RUL prediction method using adaptive neuro-fuzzy inference systems (ANFIS) and high-order particle filtering, which forecasts the time evolution of the fault indicator and estimates the probability density function (pdf) of RUL. The ANFIS is trained and integrated in a high-order particle filter as a model describing the fault progression. The high-order particle filter is used to estimate the current state and carry out p-step-ahead predictions via a set of particles. These predictions are used to estimate the RUL pdf. The performance of the proposed method is evaluated using real-world data from a seeded fault test for a UH-60 helicopter planetary gear plate. The results demonstrate that it outperforms both the conventional ANFIS predictor and the particle-filter-based predictor in which the fault growth model is a first-order model trained via the ANFIS.
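The p-step-ahead prediction idea can be illustrated with a rough sketch (not the authors' ANFIS-based model): propagate each particle through a hypothetical first-order fault-growth model until a failure threshold is crossed, and read off an empirical RUL distribution from the per-particle crossing times. All model parameters below are invented for illustration:

```python
import random

random.seed(1)

def propagate(x, rate):
    # hypothetical first-order fault-growth model with process noise
    return x + rate + random.gauss(0.0, 0.02)

# Particle set approximating the current fault-indicator state.
particles = [random.gauss(1.0, 0.05) for _ in range(2000)]
rate, threshold = 0.1, 2.0

# p-step-ahead prediction: propagate each particle until the failure
# threshold is crossed, collecting the per-particle RUL in steps.
ruls = []
for x in particles:
    steps = 0
    while x < threshold and steps < 100:
        x = propagate(x, rate)
        steps += 1
    ruls.append(steps)

mean_rul = sum(ruls) / len(ruls)
print(mean_rul)   # roughly (threshold - 1.0) / rate = 10 steps
```

The histogram of `ruls` is the particle approximation of the RUL pdf that the paper estimates.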

  12. Real-time dynamics of typical and untypical states in nonintegrable systems

    NASA Astrophysics Data System (ADS)

    Richter, Jonas; Jin, Fengping; De Raedt, Hans; Michielsen, Kristel; Gemmer, Jochen; Steinigeweg, Robin

    2018-05-01

    Understanding (i) the emergence of diffusion from truly microscopic principles continues to be a major challenge in experimental and theoretical physics. At the same time, isolated quantum many-body systems have experienced an upsurge of interest in recent years. Since in such systems the realization of a proper initial state is the only possibility to induce a nonequilibrium process, understanding (ii) the largely unexplored role of the specific realization is vitally important. Our work reports a substantial step forward and tackles the two issues (i) and (ii) in the context of typicality, entanglement as well as integrability and nonintegrability. Specifically, we consider the spin-1/2 XXZ chain, where integrability can be broken due to an additional next-nearest neighbor interaction, and study the real-time and real-space dynamics of nonequilibrium magnetization profiles for a class of pure states. Summarizing our main results, we show that signatures of diffusion for strong interactions are equally pronounced for the integrable and nonintegrable case. In both cases, we further find a clear difference between the dynamics of states with and without internal randomness. We provide an explanation of this difference by a detailed analysis of the local density of states.

  13. Coarse-grained representation of the quasi adiabatic propagator path integral for the treatment of non-Markovian long-time bath memory

    NASA Astrophysics Data System (ADS)

    Richter, Martin; Fingerhut, Benjamin P.

    2017-06-01

    The description of non-Markovian effects imposed by low frequency bath modes poses a persistent challenge for path integral based approaches like the iterative quasi-adiabatic propagator path integral (iQUAPI) method. We present a novel approximate method, termed mask assisted coarse graining of influence coefficients (MACGIC)-iQUAPI, that offers appealing computational savings due to a substantial reduction of the path segments considered for propagation. The method relies on an efficient path segment merging procedure via an intermediate coarse grained representation of Feynman-Vernon influence coefficients that exploits physical properties of system decoherence. The MACGIC-iQUAPI method allows us to access the regime of biologically significant long-time bath memory on the order of a hundred propagation time steps while retaining convergence to iQUAPI results. Numerical performance is demonstrated for a set of benchmark problems that cover bath assisted long range electron transfer, the transition from coherent to incoherent dynamics in a prototypical molecular dimer, and excitation energy transfer in a 24-state model of the Fenna-Matthews-Olson trimer complex, where in all cases excellent agreement with numerically exact reference data is obtained.

  14. Super-resolution imaging applied to moving object tracking

    NASA Astrophysics Data System (ADS)

    Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi

    2017-10-01

    Moving object tracking in video is a method used to detect and analyze changes that occur in an object being observed. High visual quality and precise localization of the tracked target are desired in modern tracking systems. The fact that the tracked object does not always appear clearly makes the tracking result less precise; the reasons include low-quality video, system noise, small object size, and other factors. In order to improve the precision of the tracked object, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step applies super-resolution imaging to the frame sequence; this is done by cropping several frames or all of the frames. The second step tracks the resulting super-resolution images. Super-resolution is a technique for obtaining high-resolution images from low-resolution images. In this research, a single-frame super-resolution technique is proposed for the tracking approach; single-frame super-resolution has the advantage of fast computation time. The method used for tracking is Camshift. The advantage of Camshift is its simple calculation based on the HSV color histogram, which remains robust when the color of the object varies. The computational complexity and large memory requirements needed to implement super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely with various backgrounds, shape changes of the object, and good lighting conditions.

  15. Predictors of longitudinal substance use and mental health outcomes for patients in two integrated service delivery systems.

    PubMed

    Grella, Christine E; Stein, Judith A; Weisner, Constance; Chi, Felicia; Moos, Rudolf

    2010-07-01

    Individuals who have both substance use disorders and mental health problems have poorer treatment outcomes. This study examines the relationship of service utilization and 12-step participation to outcomes at 1 and 5 years for patients treated in one of two integrated service delivery systems: the Department of Veterans Affairs (VA) system and a health maintenance organization (HMO). Sub-samples from each system were selected using multiple criteria indicating severity of mental health problems at admission to substance use disorder treatment (VA=401; HMO=331). Separate and multiple group structural equation model analyses used baseline characteristics, service use, and 12-step participation as predictors of substance use and mental health outcomes at 1 and 5 years following admission. Substance use and related problems showed stability across time; however, these relationships were stronger among VA patients. More continuing care substance use outpatient visits were associated with reductions in mental health symptoms in both groups, whereas receipt of outpatient mental health services was associated with more severe psychological symptoms. Participation in 12-step groups had a stronger effect on reducing cocaine use among VA patients, whereas it had a stronger effect on reducing alcohol use among HMO patients. More outpatient psychological services had a stronger effect on reducing alcohol use among HMO patients. Common findings across these two systems demonstrate the persistence of substance use and related psychological problems, but also show that continuing care services and participation in 12-step groups are associated with better outcomes in both systems. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  16. Ethanol and other oxygenates from low grade carbonaceous resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joo, O.S.; Jung, K.D.; Han, S.H.

    1995-12-31

    Anhydrous ethanol and other oxygenates of C2 and up can be produced quite competitively from low grade carbonaceous resources in high yield via gasification, methanol synthesis, carbonylation of methanol, and hydrogenation, consecutively. Gas phase carbonylation of methanol to form methyl acetate is the key step of the whole process. Methyl acetate can be produced very selectively in a one-step gas phase reaction in a fixed bed column reactor with GHSV over 5,000. The consecutive hydrogenation of methyl or ethyl acetate produces anhydrous ethanol in high purity. Co-production of methanol and DME in IGCC is also attempted, in which low grade carbonaceous resources are used as energy sources, and the surplus power and pre-power gas can be stored in liquid form as methanol and DME during base load time. Further integration of C2-up oxygenate production with IGCC can improve its economics. This extensive technology integration can generate significant industrial profitability as well as reduce the environmental complications related to massive energy consumption.

  17. Integrated system for single leg walking

    NASA Astrophysics Data System (ADS)

    Simmons, Reid; Krotkov, Eric; Roston, Gerry

    1990-07-01

    The Carnegie Mellon University Planetary Rover project is developing a six-legged walking robot capable of autonomously navigating, exploring, and acquiring samples in rugged, unknown environments. This report describes an integrated software system capable of navigating a single leg of the robot over rugged terrain. The leg, based on an early design of the Ambler Planetary Rover, is suspended below a carriage that slides along rails. To walk, the system creates an elevation map of the terrain from laser scanner images, plans an appropriate foothold based on terrain and geometric constraints, weaves the leg through the terrain to position it above the foothold, contacts the terrain with the foot, and applies enough force to advance the carriage along the rails. Walking both forward and backward, the system has traversed hundreds of meters of rugged terrain including obstacles too tall to step over, trenches too deep to step in, closely spaced obstacles, and sand hills. The implemented system consists of a number of task-specific processes (two for planning, two for perception, one for real-time control) and a central control process that directs the flow of communication between processes.

  18. Machine learning of atmospheric chemistry. Applications to a global chemistry transport model.

    NASA Astrophysics Data System (ADS)

    Evans, M. J.; Keller, C. A.

    2017-12-01

    Atmospheric chemistry is central to many environmental issues such as air pollution, climate change, and stratospheric ozone loss. Chemistry Transport Models (CTMs) are a central tool for understanding these issues, whether for research or for forecasting. These models split the atmosphere into a large number of grid boxes and consider the emission of compounds into these boxes and their subsequent transport, deposition, and chemical processing. The chemistry is represented through a series of simultaneous ordinary differential equations, one for each compound. Given the difference in lifetimes between the chemical compounds (milliseconds for O(1D) to years for CH4), these equations are numerically stiff, and solving them constitutes a significant fraction of the computational burden of a CTM. We have investigated a machine learning approach to solving the differential equations instead of solving them numerically. From an annual simulation of the GEOS-Chem model we have produced a training dataset consisting of the concentrations of compounds before and after the differential equations are solved, together with some key physical parameters for every grid box and time step. From this dataset we have trained a machine learning algorithm (a random regression forest) to predict the concentrations of the compounds after the integration step based on the concentrations and physical state at the beginning of the time step. We have then included this algorithm back into the GEOS-Chem model, bypassing the need to integrate the chemistry. This machine learning approach shows many of the characteristics of the full simulation and has the potential to be substantially faster. There is a wide range of applications for such an approach: generating boundary conditions, air quality forecasts, chemical data assimilation systems, centennial-scale climate simulations, etc. We discuss the approach's speed and accuracy, and highlight some potential future directions for improving it.
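The core idea, replacing the stiff chemical integration step with a learned mapping from pre-step to post-step state, can be sketched with a toy example. Here a nearest-neighbour lookup stands in for the random regression forest, and exponential decay stands in for the real chemistry; everything (time step, grids, rate constants) is invented for illustration:

```python
import math

DT = 60.0  # chemistry operator-splitting time step (s), illustrative

def solver_step(c, k):
    """'Expensive' reference integrator: exact solution of dc/dt = -k c."""
    return c * math.exp(-k * DT)

# Build a training set of (state before step) -> (state after step) pairs,
# as one would harvest from a full model run.
train = [((c, k), solver_step(c, k))
         for c in [x * 0.1 for x in range(1, 51)]
         for k in [x * 1e-4 for x in range(1, 51)]]

def surrogate_step(c, k):
    """Toy stand-in for the regression forest: predict the post-step
    concentration from the nearest training sample (scaled distance)."""
    _, pred = min(train,
                  key=lambda s: ((s[0][0] - c) / 0.1) ** 2
                              + ((s[0][1] - k) / 1e-4) ** 2)
    return pred

# Use the learned step in place of the integrator inside the model loop.
c, k = 2.34, 1.7e-3
for _ in range(5):
    c = surrogate_step(c, k)
print(c, 2.34 * math.exp(-k * 5 * DT))  # surrogate vs reference
```

As in the full system, the surrogate tracks the reference trajectory approximately; the interesting trade-off is that one cheap prediction replaces a stiff ODE solve in every grid box at every time step.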

  19. Numerical experiment for ultrasonic-measurement-integrated simulation of three-dimensional unsteady blood flow.

    PubMed

    Funamoto, Kenichi; Hayase, Toshiyuki; Saijo, Yoshifumi; Yambe, Tomoyuki

    2008-08-01

    Integration of ultrasonic measurement and numerical simulation is a possible way to break through the limitations of existing methods for obtaining complete information on hemodynamics. We herein propose Ultrasonic-Measurement-Integrated (UMI) simulation, in which feedback signals based on the optimal estimation of errors in the velocity vector, determined by measured and computed Doppler velocities at feedback points, are added to the governing equations. With an eye towards practical implementation of UMI simulation with real measurement data, its efficiency for three-dimensional unsteady blood flow analysis and a method for treating the low time resolution of ultrasonic measurement were investigated by a numerical experiment dealing with complicated blood flow in an aneurysm. Even when simplified boundary conditions were applied, the UMI simulation reduced the errors of velocity and pressure to 31% and 53%, respectively, in the feedback domain covering the aneurysm. The local maximum wall shear stress was estimated at the proper position and with a value within 1% deviation. A properly designed intermittent feedback, applied only at the times when measurement data were obtained, had the same computational accuracy as feedback applied at every computational time step. Hence, this feedback method is a possible solution to overcome the insufficient time resolution of ultrasonic measurement.
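The feedback principle behind UMI simulation, adding a signal proportional to the measurement-computation mismatch to the governing equations, can be illustrated with a one-dimensional toy model (all numbers and the model itself are illustrative, not from the paper): a simulation with a deliberately wrong forcing term is pulled toward the "measured" signal by the feedback gain K.

```python
import math

def measured(t):
    """'True' velocity signal, standing in for a Doppler measurement
    at a feedback point."""
    return math.sin(t)

def simulate(K, dt=0.01, T=20.0):
    """Integrate a model with a wrong forcing term (factor 0.8) and a
    wrong initial state; the feedback term K * (measurement - computation)
    is added to the governing equation. Returns time-averaged error."""
    v, t, err = 0.5, 0.0, 0.0
    n = int(T / dt)
    for _ in range(n):
        dvdt = 0.8 * math.cos(t) + K * (measured(t) - v)
        v += dt * dvdt
        t += dt
        err += abs(measured(t) - v) * dt
    return err / T

print(simulate(0.0), simulate(5.0))  # without vs with feedback
```

With K = 0 the model error and wrong initial state persist; with feedback the computed state locks onto the measurement, which is the mechanism the paper exploits in three dimensions.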

  20. Ovarian tissue cryopreservation by stepped vitrification and monitored by X-ray computed tomography.

    PubMed

    Corral, Ariadna; Clavero, Macarena; Gallardo, Miguel; Balcerzyk, Marcin; Amorim, Christiani A; Parrado-Gallego, Ángel; Dolmans, Marie-Madeleine; Paulini, Fernanda; Morris, John; Risco, Ramón

    2018-04-01

    Ovarian tissue cryopreservation is, in most cases, the only fertility preservation option available for female patients soon to undergo gonadotoxic treatment. To date, cryopreservation of ovarian tissue has been carried out by both the traditional slow freezing method and vitrification, but even with the best techniques, there is still a considerable loss of follicle viability. In this report, we investigated a stepped cryopreservation procedure which combines features of slow cooling and vitrification (hereafter called stepped vitrification). Bovine ovarian tissue was used as a tissue model. Stepwise increments of the Me2SO concentration, coupled with stepwise drops in temperature in a device specifically designed for this purpose, were combined with X-ray computed tomography to investigate loading times at each step, by monitoring the attenuation of the radiation, which is proportional to Me2SO permeation. Viability analysis was performed on warmed tissues by immunohistochemistry. Although further viability tests should be conducted after transplantation, preliminary results are very promising. Four protocols were explored. Two of them showed poor permeation of the vitrification solution (P1 and P2). The other two (P3 and P4), with higher permeation, were studied in greater detail. Of these two protocols, P4, with a longer permeation time at -40 °C, showed the same histological integrity after warming as fresh controls. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. MultiDrizzle: An Integrated Pyraf Script for Registering, Cleaning and Combining Images

    NASA Astrophysics Data System (ADS)

    Koekemoer, A. M.; Fruchter, A. S.; Hook, R. N.; Hack, W.

    We present the new PyRAF-based `MultiDrizzle' script, which is aimed at providing a one-step approach to combining dithered HST images. The purpose of this script is to allow easy interaction with the complex suite of tasks in the IRAF/STSDAS `dither' package, as well as the new `PyDrizzle' task, while at the same time retaining the flexibility of these tasks through a number of parameters. These parameters control the various individual steps, such as sky subtraction, image registration, `drizzling' onto separate output images, creation of a clean median image, transformation of the median with `blot' and creation of cosmic ray masks, as well as the final image combination step using `drizzle'. The default parameters of all the steps are set so that the task will work automatically for a wide variety of different types of images, while at the same time allowing adjustment of individual parameters for special cases. The script currently works for both ACS and WFPC2 data, and is now being tested on STIS and NICMOS images. We describe the operation of the script and the effect of various parameters, particularly in the context of combining images from dithered observations using ACS and WFPC2. Additional information is also available at the `MultiDrizzle' home page: http://www.stsci.edu/~koekemoe/multidrizzle/

  2. From classical to quantum and back: Hamiltonian adaptive resolution path integral, ring polymer, and centroid molecular dynamics

    NASA Astrophysics Data System (ADS)

    Kreis, Karsten; Kremer, Kurt; Potestio, Raffaello; Tuckerman, Mark E.

    2017-12-01

    Path integral-based methodologies play a crucial role for the investigation of nuclear quantum effects by means of computer simulations. However, these techniques are significantly more demanding than corresponding classical simulations. To reduce this numerical effort, we recently proposed a method, based on a rigorous Hamiltonian formulation, which restricts the quantum modeling to a small but relevant spatial region within a larger reservoir where particles are treated classically. In this work, we extend this idea and show how it can be implemented along with state-of-the-art path integral simulation techniques, including path-integral molecular dynamics, which allows for the calculation of quantum statistical properties, and ring-polymer and centroid molecular dynamics, which allow the calculation of approximate quantum dynamical properties. To this end, we derive a new integration algorithm that also makes use of multiple time-stepping. The scheme is validated via adaptive classical-path-integral simulations of liquid water. Potential applications of the proposed multiresolution method are diverse and include efficient quantum simulations of interfaces as well as complex biomolecular systems such as membranes and proteins.
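Multiple time-stepping of the kind mentioned in this abstract is commonly realized as an impulse (r-RESPA-style) scheme: the cheap, fast force is integrated with a small inner step, while the expensive, slow force is applied only once per outer step. A minimal sketch on a model with one fast and one slow harmonic force (illustrative, not the authors' integrator):

```python
def fast_force(x):
    return -100.0 * x   # stiff "bond" force (high frequency)

def slow_force(x):
    return -1.0 * x     # soft "environment" force (low frequency)

def respa_step(x, v, dt_outer, n_inner):
    """One impulse multiple-time-step update (unit mass)."""
    v += 0.5 * dt_outer * slow_force(x)      # slow half-kick
    dt = dt_outer / n_inner
    for _ in range(n_inner):                 # velocity-Verlet inner loop
        v += 0.5 * dt * fast_force(x)
        x += dt * v
        v += 0.5 * dt * fast_force(x)
    v += 0.5 * dt_outer * slow_force(x)      # slow half-kick
    return x, v

# Combined stiffness: omega^2 = 101. Energy should stay bounded because
# the splitting is symplectic.
x, v = 1.0, 0.0
e0 = 0.5 * v * v + 0.5 * 101.0 * x * x
for _ in range(1000):
    x, v = respa_step(x, v, 0.05, 10)
e1 = 0.5 * v * v + 0.5 * 101.0 * x * x
print(e0, e1)
```

The slow force is evaluated ten times less often than the fast one, which is the source of the savings when the slow force (here trivial) is the expensive part, as with the long-range interactions in path-integral molecular dynamics.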

  3. On Everhart Method

    NASA Astrophysics Data System (ADS)

    Pârv, Bazil

    This paper deals with the Everhart numerical integration method, a well-known method in astronomical research. This single-step method is widely used for the numerical integration of the equations of motion of celestial bodies. Within an integration step, the method uses unequally spaced substeps, defined by the roots of the so-called generating polynomial of Everhart's method. For this polynomial, this paper proposes and proves new recurrence formulae. The Maple computer algebra system was used to find and prove these formulae. Again, Maple seems to be well suited and easy to use in mathematical research.

  4. Testing a simplified method for measuring velocity integration in saccades using a manipulation of target contrast.

    PubMed

    Etchells, Peter J; Benton, Christopher P; Ludwig, Casimir J H; Gilchrist, Iain D

    2011-01-01

    A growing number of studies in vision research employ analyses of how perturbations in visual stimuli influence behavior on single trials. Recently, we have developed a method along such lines to assess the time course over which object velocity information is extracted on a trial-by-trial basis in order to produce an accurate intercepting saccade to a moving target. Here, we present a simplified version of this methodology, and use it to investigate how changes in stimulus contrast affect the temporal velocity integration window used when generating saccades to moving targets. Observers generated saccades to one of two moving targets which were presented at high (80%) or low (7.5%) contrast. In 50% of trials, target velocity stepped up or down after a variable interval after the saccadic go signal. The extent to which the saccade endpoint can be accounted for as a weighted combination of the pre- or post-step velocities allows for identification of the temporal velocity integration window. Our results show that the temporal integration window takes longer to peak in the low-contrast condition than in the high-contrast condition. By enabling the assessment of how information such as changes in velocity can be used in the programming of a saccadic eye movement on single trials, this study describes and tests a novel methodology with which to look at the internal processing mechanisms that transform sensory visual inputs into oculomotor outputs.

  5. Technology of welding aluminum alloys-II

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Step-by-step procedures were developed for high integrity manual and machine welding of aluminum alloys. Detailed instructions are given for each step with tables and graphs to specify materials and dimensions. Throughout work sequence, processing procedure designates manufacturing verification points and inspection points.

  6. Integrated Marketing for Colleges, Universities, and Schools: A Step-by-Step Planning Guide.

    ERIC Educational Resources Information Center

    Sevier, Robert A.

    This book offers a step-by-step approach to marketing for educational institutions, especially colleges and universities. The book is organized into three broad sections. Section 1 makes the case for marketing in six chapters which address: (1) challenges which are or will affect colleges and universities; (2) the role of institutional mission,…

  7. Real-time, continuous, fluorescence sensing in a freely-moving subject with an implanted hybrid VCSEL/CMOS biosensor

    PubMed Central

    O’Sullivan, Thomas D.; Heitz, Roxana T.; Parashurama, Natesh; Barkin, David B.; Wooley, Bruce A.; Gambhir, Sanjiv S.; Harris, James S.; Levi, Ofer

    2013-01-01

    Performance improvements in instrumentation for optical imaging have contributed greatly to molecular imaging in living subjects. In order to advance molecular imaging in freely moving, untethered subjects, we designed a miniature vertical-cavity surface-emitting laser (VCSEL)-based biosensor measuring 1 cm³ and weighing 0.7 g that accurately detects both fluorophore and tumor-targeted molecular probes in small animals. We integrated a critical enabling component, a complementary metal-oxide semiconductor (CMOS) read-out integrated circuit, which digitized the fluorescence signal to achieve autofluorescence-limited sensitivity. After surgical implantation of the lightweight sensor for two weeks, we obtained continuous and dynamic fluorophore measurements while the subject was un-anesthetized and mobile. The technology demonstrated here represents a critical step in the path toward untethered optical sensing using an integrated optoelectronic implant. PMID:24009996

  8. Evolution of an experiential learning partnership in emergency management higher education.

    PubMed

    Knox, Claire Connolly; Harris, Alan S

    2016-01-01

    Experiential learning allows students to step outside the classroom and into a community setting to integrate theory with practice, while allowing the community partner to reach goals or address needs within their organization. Emergency Management and Homeland Security scholars recognize the importance, and support the increased implementation, of this pedagogical method in the higher education curriculum. Yet challenges to successful implementation exist including limited resources and time. This longitudinal study extends the literature by detailing the evolution of a partnership between a university and office of emergency management in which a functional exercise is strategically integrated into an undergraduate course. The manuscript concludes with a discussion of lessons learned from throughout the multiyear process.

  9. Gas diffusion as a new fluidic unit operation for centrifugal microfluidic platforms.

    PubMed

    Ymbern, Oriol; Sández, Natàlia; Calvo-López, Antonio; Puyol, Mar; Alonso-Chamarro, Julian

    2014-03-07

    A centrifugal microfluidic platform prototype with an integrated membrane for gas diffusion is presented for the first time. The centrifugal platform allows multiple and parallel analysis on a single disk and integrates at least ten independent microfluidic subunits, which allow both calibration and sample determination. It is constructed with a polymeric substrate material and it is designed to perform colorimetric determinations by the use of a simple miniaturized optical detection system. The determination of three different analytes, sulfur dioxide, nitrite and carbon dioxide, is carried out as a proof of concept of a versatile microfluidic system for the determination of analytes which involve a gas diffusion separation step during the analytical procedure.

  10. Energy-momentum conserving higher-order time integration of nonlinear dynamics of finite elastic fiber-reinforced continua

    NASA Astrophysics Data System (ADS)

    Erler, Norbert; Groß, Michael

    2015-05-01

    For many years, the relevance of fibre-reinforced polymers has been steadily increasing in engineering, especially in the aircraft and automotive industries. Owing to their high strength in the fibre direction combined with the possibility of lightweight construction, these composites increasingly replace traditional materials such as metals. Fibre-reinforced polymers are often manufactured from glass or carbon fibres as attachment parts, or from steel or nylon cord as force transmission parts. Attachment parts are mostly subjected to small strains, but force transmission parts usually suffer large deformations in at least one direction; here, a geometrically nonlinear formulation is necessary. Typical examples are helicopter rotor blades, where the fibres have the function of stabilizing the structure in order to counteract large centrifugal forces. For long-run analyses of rotor blade deformations, we have to apply numerically stable time integrators for anisotropic materials. This paper presents higher-order accurate and numerically stable time stepping schemes for nonlinear elastic fibre-reinforced continua with anisotropic stress behaviour.

  11. Integral blow moulding for cycle time reduction of CFR-TP aluminium contour joint processing

    NASA Astrophysics Data System (ADS)

    Barfuss, Daniel; Würfel, Veit; Grützner, Raik; Gude, Maik; Müller, Roland

    2018-05-01

    Integral blow moulding (IBM) as a joining technology for carbon fibre reinforced thermoplastic (CFR-TP) hollow profiles with metallic load introduction elements enables significant cycle time reduction by shortening the process chain. Because the composite part is joined to the metallic part during its consolidation process, subsequent joining steps are omitted. In combination with a multi-scale structured load introduction element, its form-closure function enables the transfer of very high loads and achieves high degrees of material utilization. This paper first shows the process set-up, utilizing thermoplastic tape braided preforms and two-stage press and internal hydroformed load introduction elements. Second, it focuses on heating technologies and process optimization: aiming at cycle time reduction, convection and induction heating are examined with respect to the resulting product quality by means of photomicrographs and computed tomography scans. Concluding remarks give final recommendations for the process design with regard to the structural design.

  12. Multi-Objective Control Optimization for Greenhouse Environment Using Evolutionary Algorithms

    PubMed Central

    Hu, Haigen; Xu, Lihong; Wei, Ruihua; Zhu, Bingkun

    2011-01-01

    This paper investigates the issue of tuning the Proportional Integral and Derivative (PID) controller parameters for a greenhouse climate control system using an Evolutionary Algorithm (EA) based on multiple performance measures, such as good static-dynamic performance specifications and a smooth control process. A model of the nonlinear thermodynamic laws between the numerous system variables affecting the greenhouse climate is formulated. The proposed tuning scheme is tested for greenhouse climate control by minimizing the integrated time square error (ITSE) and the control increment or rate in a simulation experiment. The results show that by tuning the gain parameters the controllers can achieve good control performance in step responses, such as small overshoot, fast settling time, and short rise time and steady-state error. Moreover, the scheme can be applied to tuning systems with different properties, such as strong interactions among variables, nonlinearities and conflicting performance criteria. The results indicate that multi-objective optimization algorithms constitute an effective and promising tuning method for complex greenhouse production. PMID:22163927
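
    The ITSE criterion minimized above is easy to state concretely. The sketch below computes ITSE for the step response of a generic first-order plant under proportional control; this is an illustrative stand-in, not the paper's greenhouse model or EA tuner, and the plant parameters are invented. The higher gain leaves a smaller steady-state error and hence a smaller ITSE.

```python
import numpy as np

def itse(t, error):
    # ITSE = integral of t * e(t)^2 dt, here a simple left Riemann sum
    return float(np.sum(t * error ** 2) * (t[1] - t[0]))

def step_response_error(kp, tau=5.0, dt=0.01, t_end=50.0):
    # Unit step response of the first-order plant dy/dt = (u - y)/tau
    # under proportional control u = kp * (1 - y), forward-Euler discretized
    t = np.arange(0.0, t_end, dt)
    y = 0.0
    err = np.empty_like(t)
    for i in range(len(t)):
        err[i] = 1.0 - y
        y += dt * (kp * err[i] - y) / tau
    return t, err

j2 = itse(*step_response_error(kp=2.0))
j8 = itse(*step_response_error(kp=8.0))
print(j2, j8)   # the higher gain tracks the setpoint better, so a lower ITSE
```

An EA-based tuner would treat such a cost function (plus a control-effort term) as the fitness to minimize over the gain parameters.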

  13. FAST SIMULATION OF SOLID TUMORS THERMAL ABLATION TREATMENTS WITH A 3D REACTION DIFFUSION MODEL *

    PubMed Central

    BERTACCINI, DANIELE; CALVETTI, DANIELA

    2007-01-01

    An efficient computational method for near real-time simulation of thermal ablation of tumors via radio frequencies is proposed. Model simulations of the temperature field in a 3D portion of tissue containing the tumoral mass for different patterns of source heating can be used to design the ablation procedure. The availability of a very efficient computational scheme makes it possible to update the predicted outcome of the procedure in real time. In the algorithms proposed here, a discretization in space of the governing equations is followed by an adaptive time integration based on implicit multistep formulas. A modification of the ode15s MATLAB function, which uses Krylov-space iterative methods for the solution of the linear systems arising at each integration step, makes it possible to perform the simulations on a standard desktop with much finer grids than the built-in ode15s allows. The proposed algorithm can be applied to a wide class of nonlinear parabolic differential equations. PMID:17173888
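
    The ingredients of the scheme, spatial discretization followed by adaptive implicit multistep time integration, can be illustrated in miniature. The sketch below uses SciPy's BDF integrator as a Python stand-in for MATLAB's ode15s on a 1D reaction-diffusion equation; the Krylov modification and the paper's 3D bioheat model are not reproduced, and the equation and coefficients are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.sparse import diags

# Method of lines for u_t = u_xx + 5 u (1 - u) on [0, 1] with fixed ends
# (an illustrative 1D reaction-diffusion analogue, not the ablation model)
n = 100
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx ** 2

def rhs(t, u):
    du = lap @ u + 5.0 * u * (1.0 - u)
    du[0] = du[-1] = 0.0          # hold the boundary values fixed
    return du

u0 = np.exp(-100.0 * (x - 0.5) ** 2)   # localized initial "heat source"
# BDF is an adaptive implicit multistep family, the same class ode15s uses
sol = solve_ivp(rhs, (0.0, 0.5), u0, method="BDF")
print(sol.success, sol.y.shape)
```

For large 3D grids the cost concentrates in the linear solves inside the implicit steps, which is exactly where the paper substitutes Krylov iterations.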

  14. Simulation methods with extended stability for stiff biochemical Kinetics.

    PubMed

    Rué, Pau; Villà-Freixa, Jordi; Burrage, Kevin

    2010-08-11

    With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, tau, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where tau can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called tau-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as tau grows. In this paper we extend Poisson tau-leap methods to a general class of Runge-Kutta (RK) tau-leap methods. We show that with the proper selection of the coefficients, the variance of the extended tau-leap can be well-behaved, leading to significantly larger step sizes. The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original tau-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
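
    The plain Poisson tau-leap that the paper extends can be sketched for a single decay channel A -> B with propensity c*A; this is a minimal illustration with invented rate constants, omitting the RK tau-leap extension and adaptive step-size selection.

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_leap(a0, c=0.5, tau=0.1, t_end=10.0):
    # Fixed-step Poisson tau-leaping for A -> B with propensity c * A:
    # all firings within a leap are drawn at once from a Poisson distribution
    t, a = 0.0, a0
    while t < t_end and a > 0:
        k = rng.poisson(c * a * tau)   # firings of the channel within this leap
        a = max(a - k, 0)              # clamp so the population stays non-negative
        t += tau
    return a

# The sample mean should track the deterministic decay a0 * exp(-c * t_end)
samples = [tau_leap(1000) for _ in range(200)]
print(np.mean(samples), 1000 * np.exp(-0.5 * 10.0))
```

The variance behaviour the paper analyzes shows up when tau is pushed larger: the mean stays roughly right while the spread of the samples degrades.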

  15. A Physics-driven Neural Networks-based Simulation System (PhyNNeSS) for multimodal interactive virtual environments involving nonlinear deformable objects

    PubMed Central

    De, Suvranu; Deo, Dhannanjay; Sankaranarayanan, Ganesh; Arikatla, Venkata S.

    2012-01-01

    Background While an update rate of 30 Hz is considered adequate for real time graphics, a much higher update rate of about 1 kHz is necessary for haptics. Physics-based modeling of deformable objects, especially when large nonlinear deformations and complex nonlinear material properties are involved, at these very high rates is one of the most challenging tasks in the development of real time simulation systems. While some specialized solutions exist, there is no general solution for arbitrary nonlinearities. Methods In this work we present PhyNNeSS - a Physics-driven Neural Networks-based Simulation System - to address this long-standing technical challenge. The first step is an off-line pre-computation step in which a database is generated by applying carefully prescribed displacements to each node of the finite element models of the deformable objects. In the next step, the data is condensed into a set of coefficients describing neurons of a Radial Basis Function network (RBFN). During real-time computation, these neural networks are used to reconstruct the deformation fields as well as the interaction forces. Results We present realistic simulation examples from interactive surgical simulation with real time force feedback. As an example, we have developed a deformable human stomach model and a Penrose-drain model used in the Fundamentals of Laparoscopic Surgery (FLS) training tool box. Conclusions A unique computational modeling system has been developed that is capable of simulating the response of nonlinear deformable objects in real time. The method distinguishes itself from previous efforts in that a systematic physics-based pre-computational step allows training of neural networks which may be used in real time simulations. We show, through careful error analysis, that the scheme is scalable, with the accuracy being controlled by the number of neurons used in the simulation. 
PhyNNeSS has been integrated into SoFMIS (Software Framework for Multimodal Interactive Simulation) for general use. PMID:22629108
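
    The core of the approach, condensing precomputed responses into a radial-basis-function network that is cheap to evaluate at run time, reduces to standard RBF interpolation. The sketch below fits Gaussian RBF weights offline and reconstructs a sampled field online; it is a generic interpolant on made-up data, not the PhyNNeSS code or its finite element database.

```python
import numpy as np

def rbf_fit(centers, values, sigma):
    # Offline step: solve Phi w = values, Phi_ij = exp(-|c_i - c_j|^2 / (2 sigma^2))
    d2 = np.sum((centers[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    return np.linalg.solve(np.exp(-d2 / (2.0 * sigma ** 2)), values)

def rbf_eval(x, centers, w, sigma):
    # Online step: one kernel evaluation per neuron, cheap enough for high rates
    d2 = np.sum((x[None, :] - centers) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)) @ w

# Stand-in "displacement field" sampled on a 5x4 grid of prescribed nodes
centers = np.array([[i, j] for i in range(5) for j in range(4)], dtype=float)
values = np.sin(centers[:, 0]) * centers[:, 1]
w = rbf_fit(centers, values, sigma=0.5)
print(rbf_eval(centers[7], centers, w, sigma=0.5), values[7])
```

At a training node the interpolant reproduces the sampled value; accuracy between nodes is controlled by the number of neurons, as the abstract's error analysis describes.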

  16. Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa; Ash, Robert L.

    1992-01-01

    The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employed a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the k-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order, central-difference operator. Two classes of explicit time integration schemes have been investigated for solving the compressible inviscid/viscous flow problems: two-stage predictor-corrector schemes, and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes have been modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time-stepping scheme to accelerate convergence to steady state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C0 continuity of the grid across the block interface. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow). The emphasis of the test cases was validation of the code and assessment of performance, as well as demonstration of flexibility.

  17. Fast and reliable symplectic integration for planetary system N-body problems

    NASA Astrophysics Data System (ADS)

    Hernandez, David M.

    2016-06-01

    We apply one of the exactly symplectic integrators, which we call HB15, of Hernandez & Bertschinger, along with the Kepler problem solver of Wisdom & Hernandez, to solve planetary system N-body problems. We compare the method to Wisdom-Holman (WH) methods in the MERCURY software package, the MERCURY switching integrator, and others and find HB15 to be the most efficient method or tied for the most efficient method in many cases. Unlike WH, HB15 solved N-body problems exhibiting close encounters with small, acceptable error, although frequent encounters slowed the code. Switching maps like MERCURY change between two methods and are not exactly symplectic. We carry out careful tests on their properties and suggest that they must be used with caution. We then use different integrators to solve a three-body problem consisting of a binary planet orbiting a star. For all tested tolerances and time steps, MERCURY unbinds the binary after 0 to 25 years. However, in the solutions of HB15, a time-symmetric HERMITE code, and a symplectic Yoshida method, the binary remains bound for >1000 years. The methods' solutions are qualitatively different, despite small errors in the first integrals in most cases. Several checks suggest that the qualitative binary behaviour of HB15's solution is correct. The Bulirsch-Stoer and Radau methods in the MERCURY package also unbind the binary before a time of 50 years, suggesting that this dynamical error is due to a MERCURY bug.
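
    The qualitative robustness of symplectic methods is visible even in the simplest member of the family, the leapfrog (kick-drift-kick) scheme, whose energy error stays bounded over long integrations instead of drifting. The toy sketch below shows only this underlying idea on a circular Kepler orbit; HB15 and the WH maps use far more elaborate splittings and Kepler solvers.

```python
import numpy as np

def leapfrog(q, p, accel, dt, steps):
    # Kick-drift-kick: a second-order symplectic integrator (unit mass)
    for _ in range(steps):
        p = p + 0.5 * dt * accel(q)
        q = q + dt * p
        p = p + 0.5 * dt * accel(q)
    return q, p

kepler = lambda q: -q / np.linalg.norm(q) ** 3        # point-mass gravity, GM = 1
energy = lambda q, p: 0.5 * p @ p - 1.0 / np.linalg.norm(q)

q0, p0 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # circular orbit
e0 = energy(q0, p0)
q1, p1 = leapfrog(q0, p0, kepler, dt=0.01, steps=10_000)
print(abs(energy(q1, p1) - e0))   # small after ~16 orbits: no secular drift
```

A non-symplectic method of the same order would show a steady energy drift over the same span, which is the kind of qualitative difference the binary-planet test in the abstract probes.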

  18. The Effects of Varying Levels of Treatment Integrity on Child Compliance during Treatment with a Three-Step Prompting Procedure

    ERIC Educational Resources Information Center

    Wilder, David A.; Atwell, Julie; Wine, Byron

    2006-01-01

    The effects of three levels of treatment integrity (100%, 50%, and 0%) on child compliance were evaluated in the context of the implementation of a three-step prompting procedure. Two typically developing preschool children participated in the study. After baseline data on compliance to one of three common demands were collected, a therapist…

  19. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    NASA Astrophysics Data System (ADS)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). 
This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
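
    The essential point, that a nonlinear flux evaluated at the mean intensity differs from the mean of the flux over the intensity distribution, can be shown in a few lines. The sketch uses an exponential intensity distribution and a fixed infiltration capacity purely for illustration; the paper's propagation of the cdf through full model process descriptions is far more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

f = 10.0                                     # infiltration capacity, mm/h (assumed)
i = rng.exponential(scale=5.0, size=10_000)  # within-day intensities, mean 5 mm/h

# Infiltration-excess runoff rate is max(i - f, 0): nonlinear in intensity
runoff_from_cdf = np.mean(np.maximum(i - f, 0.0))  # expectation over the distribution
runoff_from_mean = max(i.mean() - f, 0.0)          # daily-average intensity only

print(runoff_from_cdf, runoff_from_mean)  # positive vs exactly zero
```

A daily model driven by the mean intensity predicts no runoff at all here, while the distribution-based estimate captures the contribution of short high-intensity bursts.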

  20. A GPU-accelerated semi-implicit fractional step method for numerical solutions of incompressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Ha, Sanghyun; Park, Junshin; You, Donghyun

    2017-11-01

    Utility of the computational power of modern Graphics Processing Units (GPUs) is elaborated for solutions of incompressible Navier-Stokes equations which are integrated using a semi-implicit fractional-step method. Due to its serial and bandwidth-bound nature, the present choice of numerical methods is considered to be a good candidate for evaluating the potential of GPUs for solving Navier-Stokes equations using non-explicit time integration. An efficient algorithm is presented for GPU acceleration of the Alternating Direction Implicit (ADI) and the Fourier-transform-based direct solution method used in the semi-implicit fractional-step method. OpenMP is employed for concurrent collection of turbulence statistics on a CPU while Navier-Stokes equations are computed on a GPU. Extension to multiple NVIDIA GPUs is implemented using NVLink supported by the Pascal architecture. Performance of the present method is experimented on multiple Tesla P100 GPUs compared with a single-core Xeon E5-2650 v4 CPU in simulations of boundary-layer flow over a flat plate. Supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (Ministry of Science, ICT and Future Planning NRF-2016R1E1A2A01939553, NRF-2014R1A2A1A11049599, and Ministry of Trade, Industry and Energy 201611101000230).

  1. Collaborations for Building Tribal Resiliency to Climate Change

    NASA Astrophysics Data System (ADS)

    Bamzai, A.; Taylor, A.; Winton, K.

    2015-12-01

    Sixty-eight tribes are located in the U.S. Department of the Interior's South Central Climate Science Center (SCCSC) region. The SCCSC made it a priority to include the tribes as partners from its inception and both the Chickasaw Nation and the Choctaw Nation of Oklahoma participate in the center's activities as consortium members. Under this arrangement, the SCCSC employs a full-time tribal liaison to facilitate relations with the tribes, develop partnerships for climate-relevant projects, build tribal stakeholder capacity, and organize tribal youth programs. In 2014, the SCCSC published its Tribal Engagement Strategy (USGS Circular 1396) to outline its approach for developing tribal relationships. The conceptual plan covers each step in the multi-year process from initial introductory meetings and outreach to demonstrate commitment and interest in working with tribal staff, building tribal capacity in climate related areas while also building researcher capacity in ethical research, and facilitating the co-production of climate-relevant research projects. As the tribes begin to develop their internal capacity and find novel ways to integrate their interests, the plan ultimately leads to tribes developing their own independent research projects and integrating climate science into their various vulnerability assessments and adaptation plans. This presentation will outline the multiple steps in the SCCSC's Tribal Engagement Strategy and provide examples of our ongoing work in support of each step.

  2. Variational methods for direct/inverse problems of atmospheric dynamics and chemistry

    NASA Astrophysics Data System (ADS)

    Penenko, Vladimir; Penenko, Alexey; Tsvetova, Elena

    2013-04-01

    We present a variational approach for solving direct and inverse problems of atmospheric hydrodynamics and chemistry. It is important that accurate matching of numerical schemes is provided along the chain of objects: direct/adjoint problems - sensitivity relations - inverse problems, including assimilation of all available measurement data. To solve the problems we have developed a new enhanced set of cost-effective algorithms. The matched description of the multi-scale processes is provided by a specific choice of the variational principle functionals for the whole set of integrated models. All functionals of the variational principle are then approximated in space and time by splitting and decomposition methods. This approach allows us to separately consider, for example, the space-time problems of atmospheric chemistry within decomposition schemes for the integral identity analogs of the variational principle at each time step and in each 3D finite volume. To enhance efficiency, the set of chemical reactions is divided into subsets related to the operators of production and destruction. The idea of Euler integrating factors is then applied within the local adjoint problem technique [1]-[3]. The analytical solutions of these adjoint problems play the role of integrating factors for the differential equations describing atmospheric chemistry. With their help, the system of differential equations is transformed into an equivalent system of integral equations. As a result we avoid the construction and inversion of preconditioning operators containing the Jacobian matrices which arise in traditional implicit schemes for ODE solution. This is the main advantage of our schemes. At the same time step, but at different stages of the "global" splitting scheme, the system of atmospheric dynamics equations is solved.
    For the convection-diffusion equations for all state functions in the integrated models, we have developed monotone and stable discrete-analytical numerical schemes [1]-[3] that conserve the positivity of the chemical substance concentrations and possess the properties of energy and mass balance postulated in the general variational principle for integrated models. All algorithms for the solution of transport, diffusion and transformation problems are direct (without iterations). The work is partially supported by Program No 4 of the Presidium of RAS and Program No 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187, and by Integrating projects of SD RAS No 8 and 35. Our studies are in line with the goals of COST Action ES1004. References: 1. Penenko V., Tsvetova E. Discrete-analytical methods for the implementation of variational principles in environmental applications // Journal of Computational and Applied Mathematics, 2009, v. 226, 319-330. 2. Penenko A.V. Discrete-analytic schemes for solving an inverse coefficient heat conduction problem in a layered medium with gradient methods // Numerical Analysis and Applications, 2012, v. 5, pp. 326-341. 3. Penenko V., Tsvetova E. Variational methods for constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013 (in press).
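
    The integrating-factor idea is easiest to see for a scalar production-destruction equation dc/dt = P - L*c: multiplying by exp(L*t) and integrating over one step gives an exact update with no Jacobian inversion that preserves positivity by construction. This is only the scalar illustration of the principle; the paper applies it through local adjoint problems to full chemistry systems.

```python
import math

def integrating_factor_step(c, P, L, dt):
    # Exact one-step solution of dc/dt = P - L*c via the factor exp(L*t);
    # for c >= 0, P >= 0, L > 0 the update is positivity-preserving
    decay = math.exp(-L * dt)
    return c * decay + (P / L) * (1.0 - decay)

c = 0.0
for _ in range(1000):
    c = integrating_factor_step(c, P=2.0, L=4.0, dt=0.01)
print(c)   # approaches the steady state P/L = 0.5 from below
```

A plain implicit scheme for the same equation would require solving a (here trivial, in general matrix-valued) linear system at every step, which is exactly what the integrating-factor transformation avoids.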

  3. Linking Asthma Exacerbation and Air Pollution Data: A Step Toward Public Health and Environmental Data Integration

    NASA Technical Reports Server (NTRS)

    Faruque, Fazlay; Finley, Richard; Marshall, Gailen; Brackin, Bruce; Li, Hui; Williams, Worth; Al-Hamdan, Mohammad; Luvall, Jeffrey; Rickman, Doug; Crosson, Bill

    2006-01-01

    Studies have shown that reducing exposure to triggers such as air pollutants can reduce symptoms and the need for medication in asthma patients. However, systems that track asthma are generally not integrated with those that track environmental hazards related to asthma. This lack of integration hinders public health awareness and responsiveness to these environmental triggers. The current study is a collaboration between health and environmental professionals to utilize NASA-derived environmental data to develop a decision support system (DSS) for asthma prediction, surveillance, and intervention. The investigators link asthma morbidity data from the University of Mississippi Medical Center (UMMC) and Mississippi Department of Health (MDH) with air quality data from the Mississippi Department of Environmental Quality (MDEQ) and remote sensing data from NASA. Daily ambient environmental hazard data for PM2.5 and ozone are obtained from the MDEQ air quality monitoring locations and are combined with remotely sensed data from NASA to develop a state-wide spatial and time-series profile of environmental air quality. These data are then used to study the correlation of these measures of air quality variation with asthma exacerbation incidence throughout the state over time. The goal is to utilize these readily available measures to allow real-time risk assessment for asthma exacerbations. GeoMedStat, a DSS previously developed for biosurveillance, will integrate these measures to monitor, analyze and report the real-time risk assessment for asthma exacerbation throughout the state.

  4. An on-chip coupled resonator optical waveguide single-photon buffer

    PubMed Central

    Takesue, Hiroki; Matsuda, Nobuyuki; Kuramochi, Eiichi; Munro, William J.; Notomi, Masaya

    2013-01-01

    Integrated quantum optical circuits are now seen as one of the most promising approaches with which to realize single-photon quantum information processing. Many of the core elements for such circuits have been realized, including sources, gates and detectors. However, a significant missing function necessary for photonic quantum information processing on-chip is a buffer, where single photons are stored for a short period of time to facilitate circuit synchronization. Here we report an on-chip single-photon buffer based on coupled resonator optical waveguides (CROW) consisting of 400 high-Q photonic crystal line-defect nanocavities. By using the CROW, a pulsed single photon is successfully buffered for 150 ps with 50-ps tunability while maintaining its non-classical properties. Furthermore, we show that our buffer preserves entanglement by storing and retrieving one photon from a time-bin entangled state. This is a significant step towards an all-optical integrated quantum information processor. PMID:24217422

  5. Master of Puppets: Cooperative Multitasking for In Situ Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Lukic, Zarija

    2016-01-01

    Modern scientific and engineering simulations track the time evolution of billions of elements. For such large runs, storing most time steps for later analysis is not a viable strategy. It is far more efficient to analyze the simulation data while it is still in memory. Here, we present a novel design for running multiple codes in situ: using coroutines and position-independent executables we enable cooperative multitasking between simulation and analysis, allowing the same executables to post-process simulation output, as well as to process it on the fly, both in situ and in transit. We present Henson, an implementation of our design, and illustrate its versatility by tackling analysis tasks with different computational requirements. This design differs significantly from the existing frameworks and offers an efficient and robust approach to integrating multiple codes on modern supercomputers. The techniques we present can also be integrated into other in situ frameworks.
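
    The cooperative-multitasking pattern, where the simulation yields control after each time step so the analysis can consume the still-in-memory state, can be mimicked with plain Python generators. This is a single-process toy with an invented "simulation", not the Henson library or its position-independent-executable machinery.

```python
def simulation(steps):
    # Stand-in simulation: advances its state, then yields control (and the
    # live state) to whoever is driving it -- no data ever touches disk
    state = 0.0
    for t in range(steps):
        state += 1.0
        yield t, state

def analyze(stream):
    # In situ "analysis": runs between simulation steps, sampling every other one
    sampled = []
    for t, state in stream:
        if t % 2 == 0:
            sampled.append(state)
    return sampled

print(analyze(simulation(6)))   # [1.0, 3.0, 5.0]
```

The same executables could equally be pointed at stored output, which is the post-processing/on-the-fly duality the abstract emphasizes.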

  6. Henson v1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Lukic, Zarija

    2016-04-01

    Modern scientific and engineering simulations track the time evolution of billions of elements. For such large runs, storing most time steps for later analysis is not a viable strategy. It is far more efficient to analyze the simulation data while it is still in memory. The developers present a novel design for running multiple codes in situ: using coroutines and position-independent executables they enable cooperative multitasking between simulation and analysis, allowing the same executables to post-process simulation output, as well as to process it on the fly, both in situ and in transit. They present Henson, an implementation of their design, and illustrate its versatility by tackling analysis tasks with different computational requirements. The design differs significantly from the existing frameworks and offers an efficient and robust approach to integrating multiple codes on modern supercomputers. The presented techniques can also be integrated into other in situ frameworks.

  7. A comparative study of Rosenbrock-type and implicit Runge-Kutta time integration for discontinuous Galerkin method for unsteady 3D compressible Navier-Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xiaodong; Xia, Yidong; Luo, Hong

    A comparative study of two classes of third-order implicit time integration schemes is presented for a third-order hierarchical WENO reconstructed discontinuous Galerkin (rDG) method to solve the 3D unsteady compressible Navier-Stokes equations: 1) the explicit first stage, single diagonally implicit Runge-Kutta (ESDIRK3) scheme, and 2) the Rosenbrock-Wanner (ROW) schemes based on the differential algebraic equations (DAEs) of index 2. Compared with the ESDIRK3 scheme, a remarkable feature of the ROW schemes is that they require only one approximate Jacobian matrix calculation per time step, thus considerably reducing the overall computational cost. A variety of test cases, ranging from inviscid flows to DNS of turbulent flows, are presented to assess the performance of these schemes. The numerical experiments demonstrate that the third-order ROW scheme for the DAEs of index 2 not only achieves the designed formal order of temporal convergence accuracy in a benchmark test, but also requires significantly less computing time than its ESDIRK3 counterpart to converge to the same level of discretization errors in all of the flow simulations in this study, indicating that the ROW methods provide an attractive alternative for the higher-order time-accurate integration of the unsteady compressible Navier-Stokes equations.

  8. A comparative study of Rosenbrock-type and implicit Runge-Kutta time integration for discontinuous Galerkin method for unsteady 3D compressible Navier-Stokes equations

    DOE PAGES

    Liu, Xiaodong; Xia, Yidong; Luo, Hong; ...

    2016-10-05

    A comparative study of two classes of third-order implicit time integration schemes is presented for a third-order hierarchical WENO reconstructed discontinuous Galerkin (rDG) method to solve the 3D unsteady compressible Navier-Stokes equations: 1) the explicit first stage, single diagonally implicit Runge-Kutta (ESDIRK3) scheme, and 2) the Rosenbrock-Wanner (ROW) schemes based on the differential algebraic equations (DAEs) of index 2. Compared with the ESDIRK3 scheme, a remarkable feature of the ROW schemes is that they require only one approximate Jacobian matrix calculation per time step, thus considerably reducing the overall computational cost. A variety of test cases, ranging from inviscid flows to DNS of turbulent flows, are presented to assess the performance of these schemes. The numerical experiments demonstrate that the third-order ROW scheme for the DAEs of index 2 not only achieves the designed formal order of temporal convergence accuracy in a benchmark test, but also requires significantly less computing time than its ESDIRK3 counterpart to converge to the same level of discretization errors in all of the flow simulations in this study, indicating that the ROW methods provide an attractive alternative for the higher-order time-accurate integration of the unsteady compressible Navier-Stokes equations.
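
    The feature the abstract highlights, one Jacobian evaluation and linear solve per step instead of a Newton iteration, is already present in the simplest Rosenbrock method, the first-order linearly implicit Euler scheme. The sketch below shows only this one-stage idea on a stiff scalar test with invented coefficients; third-order ROW schemes add further stages that reuse the same Jacobian factorization.

```python
import numpy as np

def rosenbrock_euler(f, jac, y0, t0, t1, n):
    # Linearly implicit Euler: each step solves (I - dt*J) dy = dt*f(y),
    # one Jacobian evaluation and one linear solve per step, no Newton loop
    y = np.array(y0, dtype=float)
    dt = (t1 - t0) / n
    I = np.eye(len(y))
    for _ in range(n):
        dy = np.linalg.solve(I - dt * jac(y), dt * f(y))
        y = y + dy
    return y

# Stiff scalar test y' = -50 y: the scheme stays stable at dt = 0.01,
# where explicit Euler (stability limit dt < 2/50) would blow up
f = lambda y: -50.0 * y
jac = lambda y: np.array([[-50.0]])
y = rosenbrock_euler(f, jac, [1.0], 0.0, 1.0, 100)
print(y[0])   # decays monotonically toward zero
```

For a nonlinear PDE discretization, an ESDIRK stage would instead solve a nonlinear system by Newton iteration, re-forming or re-factorizing the Jacobian within the step, which is the cost difference the study quantifies.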

  9. Creep Life Prediction of Ceramic Components Using the Finite Element Based Integrated Design Program (CARES/Creep)

    NASA Technical Reports Server (NTRS)

    Jadaan, Osama M.; Powers, Lynn M.; Gyekenyesi, John P.

    1997-01-01

    The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as in advanced turbine systems. Design lives for such systems can exceed 10,000 hours. Such long life requirements necessitate subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this work is to present a design methodology for predicting the lifetimes of structural components subjected to multiaxial creep loading. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep stress distributions (stress relaxation). In this methodology, the creep life of a component is divided into short time steps, during which the stress and strain distributions are assumed constant. The damage, D, is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. For components subjected to predominantly tensile loading, failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity.
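
    The stepwise damage accumulation described above can be sketched as follows. This is a minimal illustration of the bookkeeping only: the power-law rupture criterion and all constants are placeholder assumptions, not the modified Monkman-Grant form or material data used in CARES/Creep:

```python
def creep_damage(stress_history, dt, rupture_time):
    """Accumulate creep damage over short time steps.

    stress_history: sequence of stresses, assumed constant within each step.
    rupture_time(s): time-to-rupture at stress s from a creep rupture
    criterion (an illustrative power law is used below).
    The damage increment per step is dt / t_rupture; failure is
    predicted once the accumulated damage D reaches unity.
    """
    D = 0.0
    for s in stress_history:
        D += dt / rupture_time(s)
        if D >= 1.0:
            return D, True   # predicted failure within the history
    return D, False

# Illustrative power-law rupture criterion: t_r = A * sigma**(-n)
A, n = 6.4e6, 5.0
t_r = lambda sigma: A * sigma ** (-n)

# Constant stress of 10 (arbitrary units), 1-hour steps:
# t_r(10) = 64 h, so failure is predicted at the 64th step
D, failed = creep_damage([10.0] * 100, dt=1.0, rupture_time=t_r)
```

    In the finite element setting, the same loop runs per node or integration point, with the stress at each step taken from the relaxing stress field.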

  10. Effective organics degradation from pharmaceutical wastewater by an integrated process including membrane bioreactor and ozonation.

    PubMed

    Mascolo, G; Laera, G; Pollice, A; Cassano, D; Pinto, A; Salerno, C; Lopez, A

    2010-02-01

    The enhanced removal of organic compounds from a pharmaceutical wastewater resulting from the production of an anti-viral drug (acyclovir) was obtained by employing a membrane bioreactor (MBR) and an ozonation system. An integrated MBR-ozonation system was set up by placing the ozonation reactor in the recirculation stream of the MBR effluent. A conventional treatment set-up (ozonation as a polishing step after the MBR) was also used as a reference. The biological treatment alone reached an average COD removal of 99%, which remained unchanged when the ozonation step was introduced. An acyclovir removal of 99% was also obtained with the MBR step, and ozonation further removed 99% of the residual concentration in the MBR effluent. For several of the 28 organics identified in the wastewater, the efficiency of the MBR treatment improved from 20% to 60% as soon as the ozonation was placed in the recirculation stream. The benefit of the integrated system with respect to the conventional treatment set-up was evident for the removal of a specific ozonation by-product. The latter was efficiently removed in the integrated system, its abundance in the final effluent being 20-fold lower than that obtained when ozonation was used as a polishing step. In addition, if the conventional treatment configuration is employed, the same performance as the integrated system in terms of by-product removal can only be obtained when the ozonation is operated for longer than 60 min. This demonstrates the effectiveness of the integrated system compared to the conventional polishing configuration. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  11. Design and operation of a continuous integrated monoclonal antibody production process.

    PubMed

    Steinebach, Fabian; Ulmer, Nicole; Wolf, Moritz; Decker, Lara; Schneider, Veronika; Wälchli, Ruben; Karst, Daniel; Souquet, Jonathan; Morbidelli, Massimo

    2017-09-01

    The realization of an end-to-end integrated continuous lab-scale process for monoclonal antibody manufacturing is described. For this, a continuous cultivation with filter-based cell retention, a continuous two-column capture process, a virus inactivation step, a semi-continuous polishing step (twin-column MCSGP), and a batch-wise flow-through polishing step were integrated and operated together. In each unit, the implementation of internal recycle loops makes it possible to improve performance: (a) in the bioreactor, to simultaneously increase the cell density and volumetric productivity, (b) in the capture process, to achieve improved capacity utilization at high productivity and yield, and (c) in the MCSGP process, to overcome the purity-yield trade-off of classical batch-wise bind-elute polishing steps. Furthermore, the design principles that allow the direct connection of these steps, some at steady state and some at cyclic steady state, as well as straight-through processing, are discussed. The setup was operated for the continuous production of a commercial monoclonal antibody, resulting in stable operation and uniform product quality over the 17 cycles of the end-to-end integration. The steady-state operation was fully characterized by analyzing, at the outlet of each unit at steady state, the product titer as well as the process-related (HCP, DNA, leached Protein A) and product-related (aggregates, fragments) impurities. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:1303-1313, 2017. © 2017 American Institute of Chemical Engineers.

  12. Fuzzy comprehensive evaluation for grid-connected performance of integrated distributed PV-ES systems

    NASA Astrophysics Data System (ADS)

    Lv, Z. H.; Li, Q.; Huang, R. W.; Liu, H. M.; Liu, D.

    2016-08-01

    Based on a discussion of the topology structure of integrated distributed photovoltaic (PV) power generation and energy storage (ES) systems of single or mixed type, this paper focuses on analyzing the grid-connected performance of integrated distributed photovoltaic and energy storage (PV-ES) systems, and proposes a comprehensive evaluation index system. A multi-level fuzzy comprehensive evaluation method based on grey correlation degree is then proposed, and the calculations for the weight matrix and fuzzy matrix are presented step by step. Finally, a distributed integrated PV-ES power generation system connected to a 380 V low-voltage distribution network is taken as an example, and some suggestions are made based on the evaluation results.
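
    The core arithmetic of one level of such a fuzzy comprehensive evaluation is the composition of a weight vector with a fuzzy membership matrix, B = W · R. The sketch below shows only this generic step; the indices, weights, and membership degrees are illustrative placeholders, not the paper's index system or grey-correlation-derived weights:

```python
import numpy as np

def fuzzy_evaluate(weights, membership):
    """Single-level fuzzy comprehensive evaluation: B = W . R.

    weights: (n,) index weight vector (sums to 1).
    membership: (n, m) fuzzy matrix; row i holds the membership
    degrees of index i in each of the m evaluation grades.
    Returns the normalized grade-membership vector B.
    """
    B = weights @ membership
    return B / B.sum()

# Illustrative: 3 grid-connection indices, 3 grades (good/fair/poor)
W = np.array([0.5, 0.3, 0.2])
R = np.array([[0.7, 0.2, 0.1],
              [0.4, 0.4, 0.2],
              [0.1, 0.3, 0.6]])
B = fuzzy_evaluate(W, R)   # overall membership in each grade
```

    In a multi-level scheme, the vector B computed for each sub-criterion group becomes a row of the membership matrix at the level above, and the composition is repeated.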

  13. Integrated WiFi/PDR/Smartphone Using an Unscented Kalman Filter Algorithm for 3D Indoor Localization.

    PubMed

    Chen, Guoliang; Meng, Xiaolin; Wang, Yunjia; Zhang, Yanzhe; Tian, Peng; Yang, Huachao

    2015-09-23

    Because of the high calculation cost and poor performance of a traditional planar map when dealing with complicated indoor geographic information, a WiFi fingerprint indoor positioning system cannot be widely employed on a smartphone platform. By making full use of the hardware sensors embedded in the smartphone, this study proposes an integrated approach to a three-dimensional (3D) indoor positioning system. First, an improved K-means clustering method is adopted to reduce the fingerprint database retrieval time and enhance positioning efficiency. Next, with the mobile phone's acceleration sensor, a new step counting method based on auto-correlation analysis is proposed to achieve cell phone inertial navigation positioning. Furthermore, the integration of WiFi positioning with Pedestrian Dead Reckoning (PDR) obtains higher positional accuracy with the help of the Unscented Kalman Filter algorithm. Finally, a hybrid 3D positioning system based on Unity 3D, which can carry out real-time positioning for targets in 3D scenes, is designed for the fluent operation of mobile terminals.
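
    The auto-correlation idea behind the step counting can be sketched as follows: walking produces a periodic acceleration-magnitude signal, and the first dominant non-zero-lag peak of its autocorrelation gives the step period. This is a minimal illustration under assumed parameters (sampling rate, minimum-lag threshold), not the paper's exact algorithm:

```python
import numpy as np

def estimate_step_period(accel_mag, fs):
    """Estimate the walking period (s) from an accelerometer magnitude
    signal via auto-correlation: the highest autocorrelation peak
    beyond a minimum plausible lag marks one step period."""
    x = accel_mag - np.mean(accel_mag)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags >= 0
    ac /= ac[0]                       # normalize by zero-lag energy
    lo = int(0.25 * fs)               # ignore lags shorter than 0.25 s
    lag = lo + int(np.argmax(ac[lo:]))
    return lag / fs

# Usage: synthetic 2 Hz "step" signal sampled at 50 Hz, with noise
fs = 50.0
t = np.arange(0.0, 10.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 2.0 * t) \
    + 0.1 * np.random.default_rng(0).standard_normal(t.size)
period = estimate_step_period(sig, fs)   # close to 0.5 s per step
```

    The step count over a window is then the window duration divided by the estimated period, and the per-step displacement feeds the PDR position update that the UKF fuses with the WiFi fixes.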

  14. Integrated WiFi/PDR/Smartphone Using an Unscented Kalman Filter Algorithm for 3D Indoor Localization

    PubMed Central

    Chen, Guoliang; Meng, Xiaolin; Wang, Yunjia; Zhang, Yanzhe; Tian, Peng; Yang, Huachao

    2015-01-01

    Because of the high calculation cost and poor performance of a traditional planar map when dealing with complicated indoor geographic information, a WiFi fingerprint indoor positioning system cannot be widely employed on a smartphone platform. By making full use of the hardware sensors embedded in the smartphone, this study proposes an integrated approach to a three-dimensional (3D) indoor positioning system. First, an improved K-means clustering method is adopted to reduce the fingerprint database retrieval time and enhance positioning efficiency. Next, with the mobile phone’s acceleration sensor, a new step counting method based on auto-correlation analysis is proposed to achieve cell phone inertial navigation positioning. Furthermore, the integration of WiFi positioning with Pedestrian Dead Reckoning (PDR) obtains higher positional accuracy with the help of the Unscented Kalman Filter algorithm. Finally, a hybrid 3D positioning system based on Unity 3D, which can carry out real-time positioning for targets in 3D scenes, is designed for the fluent operation of mobile terminals. PMID:26404314

  15. Physical and numerical sources of computational inefficiency in integration of chemical kinetic rate equations: Etiology, treatment and prognosis

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.; Radhakrishnan, K.

    1986-01-01

    The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODEs) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated step-size-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
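
    The advantage of exponential fitting claimed above is easiest to see on the scalar model problem y' = -k(y - y_eq), the prototypical relaxation toward equilibrium. The sketch below contrasts an exponentially fitted step (exact for this linear problem at any step size) with classical explicit Euler, which blows up once k·dt > 2; it illustrates the principle only, not any specific kinetics code:

```python
import math

def exp_fitted_step(y, k, y_eq, dt):
    """Exponentially fitted step for y' = -k*(y - y_eq).

    Uses the exact exponential solution over the step, so the
    asymptotic decay toward equilibrium is reproduced for any dt --
    the property that makes exponential fitting attractive for
    kinetics, where species relax exponentially during equilibration.
    """
    return y_eq + (y - y_eq) * math.exp(-k * dt)

def explicit_euler_step(y, k, y_eq, dt):
    """Classical explicit Euler step; unstable when k*dt > 2."""
    return y + dt * (-k * (y - y_eq))

# Stiff decay: k = 1000 with dt = 0.01, i.e. k*dt = 10,
# far beyond the explicit Euler stability limit
y_exp, y_eul = 1.0, 1.0
for _ in range(5):
    y_exp = exp_fitted_step(y_exp, 1000.0, 0.0, 0.01)
    y_eul = explicit_euler_step(y_eul, 1000.0, 0.0, 0.01)
# y_exp decays smoothly toward 0; y_eul oscillates and grows
```

    What the report notes such methods still lack is the automatic step-size-control machinery that codes like EPISODE and LSODE wrap around their polynomial-based formulas.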

  16. Lab-on-chip systems for integrated bioanalyses

    PubMed Central

    Madaboosi, Narayanan; Soares, Ruben R.G.; Fernandes, João Tiago S.; Novo, Pedro; Moulas, Geraud; Chu, Virginia

    2016-01-01

    Biomolecular detection systems based on microfluidics are often called lab-on-chip systems. To fully benefit from the miniaturization resulting from microfluidics, one aims to develop ‘from sample-to-answer’ analytical systems, in which the input is a raw or minimally processed biological, food/feed or environmental sample and the output is a quantitative or qualitative assessment of one or more analytes of interest. In general, such systems will require the integration of several steps or operations to perform their function. This review will discuss these stages of operation, including fluidic handling, which assures that the desired fluid arrives at a specific location at the right time and under the appropriate flow conditions; molecular recognition, which allows the capture of specific analytes at precise locations on the chip; transduction of the molecular recognition event into a measurable signal; sample preparation upstream from analyte capture; and signal amplification procedures to increase sensitivity. Seamless integration of the different stages is required to achieve a point-of-care/point-of-use lab-on-chip device that allows analyte detection at the relevant sensitivity ranges, with a competitive analysis time and cost. PMID:27365042

  17. Imaging of high-energy x-ray emission from cryogenic thermonuclear fuel implosions on the NIF.

    PubMed

    Ma, T; Izumi, N; Tommasini, R; Bradley, D K; Bell, P; Cerjan, C J; Dixit, S; Döppner, T; Jones, O; Kline, J L; Kyrala, G; Landen, O L; LePape, S; Mackinnon, A J; Park, H-S; Patel, P K; Prasad, R R; Ralph, J; Regan, S P; Smalyuk, V A; Springer, P T; Suter, L; Town, R P J; Weber, S V; Glenzer, S H

    2012-10-01

    Accurately assessing and optimizing the implosion performance of inertial confinement fusion capsules is a crucial step to achieving ignition on the NIF. We have applied differential filtering (matched Ross filter pairs) to provide broadband time-integrated absolute x-ray self-emission images of the imploded core of cryogenic layered implosions. This diagnostic measures the temperature- and density-sensitive bremsstrahlung emission and provides estimates of hot spot mass, mix mass, and pressure.
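
    The differential-filtering principle behind the matched Ross filter pairs can be sketched with an idealized toy model: two filters whose transmissions agree everywhere except between their two K-edges, so subtracting the two filtered signals isolates the emission in that energy band. Real Ross pairs require calibrated transmission curves; the step-function transmissions below are purely illustrative:

```python
import numpy as np

def ross_pair_band_signal(spectrum, energies, edge_lo, edge_hi):
    """Idealized Ross-pair measurement: each filter is modeled as
    transmitting fully below its K-edge and blocking above it.
    Subtracting the signal behind the low-edge filter from the signal
    behind the high-edge filter isolates photons in the band
    [edge_lo, edge_hi). This shows the principle only."""
    t_lo = (energies < edge_lo).astype(float)  # filter with K-edge at edge_lo
    t_hi = (energies < edge_hi).astype(float)  # filter with K-edge at edge_hi
    s_lo = np.sum(spectrum * t_lo)             # detector signal, filter A
    s_hi = np.sum(spectrum * t_hi)             # detector signal, filter B
    return s_hi - s_lo                         # ~ flux between the edges

# Usage: flat toy spectrum on 5-20 keV, extract the 10.05-12.05 keV band
E = np.arange(5.0, 20.0, 0.1)
spec = np.ones_like(E)
band = ross_pair_band_signal(spec, E, 10.05, 12.05)
```

    Applied pixel-by-pixel to the two filtered images, the same subtraction yields a band-limited self-emission image from which the temperature- and density-sensitive bremsstrahlung quantities are inferred.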

  18. Long-Term Durability and Integrity of Built-In Piezoelectric-Based Active Sensing Network in Structures

    DTIC Science & Technology

    2007-03-31

    iterating to the end-time step. 1.3 Code Verification 1.3.1 Statement of the Problem A square aluminum alloy plate (thickness = 1.02 mm, width and...plate. The electro-mechanical properties of the piezoelectric materials (APC850) are available from American Piezoceramics, Inc. The piezoceramic...structural usage and provide an early indication of physical damage. Piezoelectric (PZT) based SHM systems are among the most widely used for active and

  19. Verlet scheme non-conservativeness for simulation of spherical particles collisional dynamics and method of its compensation

    NASA Astrophysics Data System (ADS)

    Savin, Andrei V.; Smirnov, Petr G.

    2018-05-01

    Simulation of the collisional dynamics of a large ensemble of monodisperse particles by the discrete element method is considered. The Verlet scheme is used for integration of the equations of motion. The finite-difference scheme is found to be non-conservative, with the deviation depending on the time step; this is equivalent to the appearance of a purely numerical energy source during collisions. A compensation method for this source is proposed and tested.
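
    The time-step-dependent energy error can be reproduced with a minimal velocity-Verlet simulation of one particle bouncing off a linear-spring wall, a common discrete-element contact model. This sketch only demonstrates the numerical energy source the abstract describes (the particle's kinetic energy after the collision differs from before, by an amount that shrinks as dt is refined); it does not implement the authors' compensation method:

```python
def bounce_energy_error(dt, k=1.0e4, m=1.0, v0=1.0):
    """Velocity-Verlet simulation of a particle hitting a linear-spring
    wall (contact force f = -k*x for x < 0). Returns the relative
    kinetic-energy change across the collision -- nonzero for purely
    numerical reasons, and dependent on the time step dt."""
    force = lambda x: -k * x if x < 0.0 else 0.0
    x, v = 1.0, -v0               # start outside contact, moving inward
    a = force(x) / m
    for _ in range(int(4.0 / dt)):
        x += v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x) / m
        v += 0.5 * (a + a_new) * dt       # velocity update
        a = a_new
    e0 = 0.5 * m * v0 * v0
    return abs(0.5 * m * v * v - e0) / e0

# The artificial energy source shrinks as the step is refined:
coarse = bounce_energy_error(dt=5e-3)
fine = bounce_energy_error(dt=5e-4)
```

    The error arises because the contact force switches on and off between grid points, so the scheme mis-times the collision; a compensation method corrects for exactly this kind of spurious energy injection.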

  20. Social Media: More Than Just a Communications Medium

    DTIC Science & Technology

    2012-03-14

    video-hosting web services with the recognition that “Internet-based capabilities are integral to operations across the Department of Defense.”10...as DoD and the government as a whole, the U.S. Army’s recognition of social media’s unique relationship to time and speed is a step forward toward...populated size of social media entities, Alexa, the leader in free global web analytics, provides an updated list of the top 500 websites on the Internet
