Sample records for efficient explicit-time description

  1. High-temperature ratchets with sawtooth potentials

    NASA Astrophysics Data System (ADS)

    Rozenbaum, Viktor M.; Shapochkina, Irina V.; Sheu, Sheh-Yi; Yang, Dah-Yen; Lin, Sheng Hsien

    2016-11-01

    The concept of the effective potential is suggested as an efficient instrument for obtaining a uniform analytical description of stochastic high-temperature on-off flashing and rocking ratchets. The analytical representation for the average particle velocity, obtained within this technique, allows the description of ratchets with sharp potentials (and potentials with jumps in particular). For sawtooth potentials, explicit analytical expressions for the average velocity of on-off flashing and rocking ratchets, valid for arbitrary frequencies of potential energy fluctuations, are derived; the difference in their high-frequency asymptotics is explored for smooth and cusped profiles and for profiles with jumps. The origin of the difference, as well as the appearance of jump behavior in ratchet characteristics, is interpreted in terms of self-similar universal solutions which give a continuous description of the effect. It is shown how the jump behavior in motor characteristics arises from the competition between the characteristic times of the system.
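    The on-off flashing mechanism can be illustrated with a minimal overdamped Langevin simulation of a Brownian particle in a switched sawtooth potential. This is a generic brute-force sketch, not the paper's analytical method; the period, asymmetry, diffusion constant, and switching times below are arbitrary illustrative values.

```python
import math
import random

def sawtooth_force(x, L=1.0, a=0.3, U0=1.0):
    """Force of an asymmetric sawtooth potential of period L:
    U rises over [0, a) and falls over [a, L)."""
    xm = x % L
    return -U0 / a if xm < a else U0 / (L - a)

def flashing_ratchet(steps=20000, dt=1e-4, D=0.1, t_on=0.05, t_off=0.05, seed=1):
    """Euler-Maruyama integration of an overdamped particle whose sawtooth
    potential is periodically switched on and off (an on-off flashing ratchet)."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    period = t_on + t_off
    for _ in range(steps):
        on = (t % period) < t_on                  # is the potential switched on?
        drift = sawtooth_force(x) if on else 0.0  # free diffusion when off
        x += drift * dt + math.sqrt(2 * D * dt) * rng.gauss(0.0, 1.0)
        t += dt
    return x
```

    Averaging the final displacement over many seeds estimates the mean velocity; the analytical expressions discussed in the record replace exactly this kind of brute-force averaging.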

  2. Direct evaluation of boson dynamics via finite-temperature time-dependent variation with multiple Davydov states.

    PubMed

    Fujihashi, Yuta; Wang, Lu; Zhao, Yang

    2017-12-21

    Recent advances in quantum optics allow for exploration of boson dynamics in dissipative many-body systems. However, the traditional descriptions of quantum dissipation using reduced density matrices are unable to capture explicit information of bath dynamics. In this work, efficient evaluation of boson dynamics is demonstrated by combining the multiple Davydov Ansatz with finite-temperature time-dependent variation, going beyond what state-of-the-art density matrix approaches are capable of offering for coupled electron-boson systems. To this end, applications are made to excitation energy transfer in photosynthetic systems, singlet fission in organic thin films, and circuit quantum electrodynamics in superconducting devices. Thanks to the multiple Davydov Ansatz, our analysis of boson dynamics leads to clear revelation of boson modes strongly coupled to electronic states, as well as in-depth description of polaron creation and destruction in the presence of thermal fluctuations.

  3. Education Finance: Legal Bombshells in West Virginia.

    ERIC Educational Resources Information Center

    Meckley, Richard

    1983-01-01

    Discusses the history, legal arguments, decision, and fiscal implications of "Pauley v. Bailey," in which a West Virginia circuit court judge's ruling gave explicit and detailed descriptions of a thorough and efficient K-12 education system. Briefly reviews two West Virginia Supreme Court decisions concerning property assessment…

  4. Ancient numerical daemons of conceptual hydrological modeling: 1. Fidelity and efficiency of time stepping schemes

    NASA Astrophysics Data System (ADS)

    Clark, Martyn P.; Kavetski, Dmitri

    2010-10-01

    A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This not only substantially degrades model predictions but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations while being nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
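    The fixed-step versus adaptive trade-off can be sketched on a toy linear-reservoir model dS/dt = P − kS (an illustrative stand-in, not one of the six rainfall-runoff models benchmarked in the study); the step-size controller below uses the Euler-Heun difference as an embedded error estimate, which is an assumption of this sketch rather than the paper's exact controller.

```python
import math

def f(t, S, P=2.0, k=0.5):
    """Linear reservoir: constant inflow P, linear outflow k*S."""
    return P - k * S

def fixed_euler(S0, t_end, dt):
    """Fixed-step explicit Euler, the scheme the study warns about."""
    S, t = S0, 0.0
    while t < t_end - 1e-12:
        S += dt * f(t, S)
        t += dt
    return S

def adaptive_heun(S0, t_end, tol=1e-6, dt=0.1):
    """Heun's method with step halving/growing driven by the
    Euler-vs-Heun embedded error estimate."""
    S, t = S0, 0.0
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        k1 = f(t, S)
        pred = S + dt * k1                  # explicit Euler predictor
        k2 = f(t + dt, pred)
        corr = S + 0.5 * dt * (k1 + k2)     # Heun corrector
        err = abs(corr - pred)              # embedded error estimate
        if err <= tol:
            S, t = corr, t + dt
            dt *= 1.5                       # grow the step after acceptance
        else:
            dt *= 0.5                       # reject and retry with a smaller step
    return S
```

    For this smooth problem the adaptive result tracks the exact solution S(t) = P/k + (S0 − P/k)·exp(−kt) closely, while a coarse fixed-step Euler run carries a visible bias.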

  5. Improving smoothing efficiency of rigid conformal polishing tool using time-dependent smoothing evaluation model

    NASA Astrophysics Data System (ADS)

    Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng

    2017-06-01

    A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, the RC-lap smoothing efficiency performance is poorer than expected, and existing smoothing models cannot explicitly specify the methods to improve this efficiency. We presented an explicit time-dependent smoothing evaluation model that contained specific smoothing parameters directly derived from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we proposed a strategy to improve the RC-lap smoothing efficiency, which incorporated the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were obtained.

  6. An implicit dispersive transport algorithm for the US Geological Survey MOC3D solute-transport model

    USGS Publications Warehouse

    Kipp, K.L.; Konikow, Leonard F.; Hornberger, G.Z.

    1998-01-01

    This report documents an extension to the U.S. Geological Survey MOC3D transport model that incorporates an implicit-in-time difference approximation for the dispersive transport equation, including source/sink terms. The original MOC3D transport model (Version 1) uses the method of characteristics to solve the transport equation on the basis of the velocity field. The original MOC3D solution algorithm incorporates particle tracking to represent advective processes and an explicit finite-difference formulation to calculate dispersive fluxes. The new implicit procedure eliminates several stability criteria required for the previous explicit formulation. This allows much larger transport time increments to be used in dispersion-dominated problems. The decoupling of advective and dispersive transport in MOC3D, however, is unchanged. With the implicit extension, the MOC3D model is upgraded to Version 2. A description of the numerical method of the implicit dispersion calculation, the data-input requirements and output options, and the results of simulator testing and evaluation are presented. Version 2 of MOC3D was evaluated for the same set of problems used for verification of Version 1. These test results indicate that the implicit calculation of Version 2 matches the accuracy of Version 1, yet is more efficient than the explicit calculation for transport problems that are characterized by a grid Peclet number less than about 1.0.
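    The efficiency argument can be illustrated on 1-D diffusion: an implicit-Euler step leads to a tridiagonal system solvable in O(n) by the Thomas algorithm and is unconditionally stable, whereas an explicit update must satisfy dt ≤ dx²/(2D). This is a generic sketch, not the MOC3D implementation; the grid Peclet number Pe = v·dx/D mentioned in the record determines when the implicit strategy pays off.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n): a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                  # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_diffusion_step(u, D, dx, dt):
    """One implicit-Euler step of u_t = D*u_xx with zero Dirichlet boundaries.
    Unconditionally stable: dt need not satisfy dt <= dx**2 / (2*D)."""
    n = len(u)
    r = D * dt / dx**2
    a = [-r] * n
    c = [-r] * n
    b = [1.0 + 2.0 * r] * n
    a[0] = 0.0                             # no sub-diagonal in the first row
    c[-1] = 0.0                            # no super-diagonal in the last row
    return thomas(a, b, c, list(u))
```

    With dt a hundred times the explicit limit, the update below still returns a bounded, non-negative profile.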

  7. Mean-Field Description of Ionic Size Effects with Non-Uniform Ionic Sizes: A Numerical Approach

    PubMed Central

    Zhou, Shenggao; Wang, Zhongming; Li, Bo

    2013-01-01

    Ionic size effects are significant in many biological systems. Mean-field descriptions of such effects can be efficient but also challenging. When ionic sizes are different, explicit formulas in such descriptions are not available for the dependence of the ionic concentrations on the electrostatic potential, i.e., there are no explicit Boltzmann-type distributions. This work begins with a variational formulation of the continuum electrostatics of an ionic solution with such non-uniform ionic sizes as well as multiple ionic valences. An augmented Lagrange multiplier method is then developed and implemented to numerically solve the underlying constrained optimization problem. The method is shown to be accurate and efficient, and is applied to ionic systems with non-uniform ionic sizes such as the sodium chloride solution. Extensive numerical tests demonstrate that the mean-field model and numerical method capture qualitatively some significant ionic size effects, particularly those for multivalent ionic solutions, such as the stratification of multivalent counterions near a charged surface. The ionic valence-to-volume ratio is found to be the key physical parameter in the stratification of concentrations. All these are not well described by the classical Poisson–Boltzmann theory, or the generalized Poisson–Boltzmann theory that treats uniform ionic sizes. Finally, various issues such as the close packing, limitation of the continuum model, and generalization of this work to molecular solvation are discussed. PMID:21929014

  8. Explicit evaluation of discontinuities in 2-D unsteady flows solved by the method of characteristics

    NASA Astrophysics Data System (ADS)

    Osnaghi, C.

    When shock waves appear in the numerical solution of flows, a choice is necessary between shock capturing techniques, possible when the equations are written in conservative form, and shock fitting techniques. If the latter is preferred, e.g., to obtain a sharper definition and a more physical description of the shock evolution in time, the method of characteristics is advantageous in the vicinity of the shock, and it seems natural to use this method everywhere. This choice requires improving the efficiency of the numerical scheme in order to produce competitive codes while preserving accuracy and flexibility, which are intrinsic features of the method: this is the goal of the present work.
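    For the linear 1-D advection equation u_t + a·u_x = 0, the method of characteristics reduces to tracing each characteristic back a distance a·dt and interpolating the old profile there. The semi-Lagrangian sketch below illustrates that core idea; it is a generic 1-D illustration, not the 2-D unsteady scheme of the record.

```python
import math

def advect_moc(u, a, dx, dt):
    """One step of u_t + a*u_x = 0: trace the characteristic through each
    node back to x_i - a*dt and linearly interpolate the old profile."""
    n = len(u)

    def sample(k):                       # constant extrapolation off the grid
        return u[min(max(k, 0), n - 1)]

    out = []
    for i in range(n):
        x = i * dx - a * dt              # foot of the characteristic
        j = math.floor(x / dx)           # lower interpolation index
        w = x / dx - j                   # linear interpolation weight
        out.append((1.0 - w) * sample(j) + w * sample(j + 1))
    return out
```

    When a·dt equals dx the scheme shifts the profile by exactly one cell, with no interpolation error.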

  9. Kato expansion in quantum canonical perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikolaev, Andrey, E-mail: Andrey.Nikolaev@rdtex.ru

    2016-06-15

    This work establishes a connection between canonical perturbation series in quantum mechanics and a Kato expansion for the resolvent of the Liouville superoperator. Our approach leads to an explicit expression for a generator of a block-diagonalizing Dyson’s ordered exponential in arbitrary perturbation order. Unitary intertwining of perturbed and unperturbed averaging superprojectors allows for a description of ambiguities in the generator and block-diagonalized Hamiltonian. We compare the efficiency of the corresponding computational algorithm with the efficiencies of the Van Vleck and Magnus methods for high perturbative orders.

  10. Unstructured grid methods for the simulation of 3D transient flows

    NASA Technical Reports Server (NTRS)

    Morgan, K.; Peraire, J.; Peiro, J.

    1994-01-01

    This report describes the research work undertaken under NASA Research Grant NAGW-2962. Basic algorithmic development work, undertaken for the simulation of steady three dimensional inviscid flow, has been used as the basis for the construction of a procedure for the simulation of truly transient flows in three dimensions. To produce a viable procedure for implementation on the current generation of computers, moving boundary components are simulated by fixed boundaries plus a suitably modified boundary condition. Computational efficiency is increased by the use of an implicit time stepping scheme in which the equation system is solved by explicit multistage time stepping with multigrid acceleration. The viability of the proposed approach has been demonstrated by applying the procedure to the simulation of transonic flow over an oscillating ONERA M6 wing.

  11. Algorithms and software for nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.

    1989-01-01

    The objective of this research is to develop efficient methods for explicit time integration in nonlinear structural dynamics for computers which utilize both concurrency and vectorization. As a framework for these studies, the program WHAMS, which is described in Explicit Algorithms for the Nonlinear Dynamics of Shells (T. Belytschko, J. I. Lin, and C.-S. Tsay, Computer Methods in Applied Mechanics and Engineering, Vol. 42, 1984, pp 225 to 251), is used. There are two factors which make the development of efficient concurrent explicit time integration programs a challenge in a structural dynamics program: (1) the need for a variety of element types, which complicates the scheduling-allocation problem; and (2) the need for different time steps in different parts of the mesh, which is here called mixed delta t integration, so that a few stiff elements do not reduce the time steps throughout the mesh.
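    The mixed-Δt idea, letting stiff parts of the mesh take smaller steps so that a few stiff elements do not throttle the rest, can be sketched on two uncoupled oscillators integrated by the explicit central-difference (leapfrog) scheme. The frequencies, step sizes, and subcycling ratio below are illustrative assumptions, not values from WHAMS.

```python
import math

def central_difference(omega, dt, steps, x0=1.0, v0=0.0):
    """Explicit central-difference (leapfrog) integration of x'' = -omega**2 * x,
    stable only for omega*dt < 2."""
    x = x0
    v = v0 + 0.5 * dt * (-omega**2 * x0)   # half-step velocity kick
    for _ in range(steps):
        x += dt * v                        # drift to the next full step
        v += dt * (-omega**2 * x)          # kick to the next half step
    return x

def mixed_dt(omega_soft, omega_stiff, dt, steps, sub):
    """Advance a soft element at dt and a stiff element at dt/sub,
    a minimal stand-in for mixed delta-t integration of a mesh."""
    x_soft = central_difference(omega_soft, dt, steps)
    x_stiff = central_difference(omega_stiff, dt / sub, steps * sub)
    return x_soft, x_stiff
```

    The soft element advances at the global step while the stiff one is subcycled; taking the stiff element at the global step beyond its stability limit omega*dt < 2 makes it diverge.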

  12. Asynchronous collision integrators: Explicit treatment of unilateral contact with friction and nodal restraints

    PubMed Central

    Wolff, Sebastian; Bucher, Christian

    2013-01-01

    This article presents asynchronous collision integrators and a simple asynchronous method for treating nodal restraints. Asynchronous discretizations allow individual time step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces that is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response, in which velocities are modified to avoid penetrations. Although decomposition contact response solves a large system of linear equations (which is critical for the numerical efficiency of explicit time stepping schemes) and needs special treatment of overconstraint and linear dependency of the contact constraints (for example, from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robustly: only a single constraint involving a very small number of degrees of freedom is considered at once, leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. The contact of nodes that are subject to restraints needs special care; together with the aforementioned projection for restraints, a novel efficient solution scheme is presented. The collision integrator does not influence the critical time step, so the time step can be chosen independently of the underlying time-stepping scheme and may be fixed or time-adaptive. New demands on global collision detection are discussed, exemplified by position codes and node-to-segment integration. Numerical examples illustrate the convergence and efficiency of the new contact algorithm. Copyright © 2013 The Authors. International Journal for Numerical Methods in Engineering published by John Wiley & Sons, Ltd. PMID:23970806
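    The idea of resolving unilateral contact by modifying velocities rather than by penalty forces can be sketched for a single point mass above a rigid floor; this is a much-simplified stand-in for decomposition contact response, with gravity and restitution values chosen for illustration only.

```python
def bounce(z0, v0, dt, steps, g=9.81, restitution=0.0):
    """Explicit integration of a falling point mass with unilateral contact
    at z = 0 handled by projecting position and modifying velocity."""
    z, v = z0, v0
    for _ in range(steps):
        v -= g * dt                 # explicit velocity update (gravity kick)
        z += v * dt                 # explicit position update
        if z < 0.0:                 # contact detected: penetration occurred
            z = 0.0                 # project back onto the admissible set
            if v < 0.0:
                v = -restitution * v  # velocity modification, no penalty force
    return z, v
```

    With zero restitution the mass settles exactly on the surface; the contact handling never shrinks the time step, mirroring the record's claim that the collision treatment does not affect the critical step size.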

  13. An efficient, explicit finite-rate algorithm to compute flows in chemical nonequilibrium

    NASA Technical Reports Server (NTRS)

    Palmer, Grant

    1989-01-01

    An explicit finite-rate code was developed to compute hypersonic viscous chemically reacting flows about three-dimensional bodies. Equations describing the finite-rate chemical reactions were fully coupled to the gas dynamic equations using a new coupling technique. The new technique maintains stability in the explicit finite-rate formulation while permitting relatively large global time steps.
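    The coupling problem, a stiff chemical source term inside a larger flow time step, can be sketched by subcycling the reaction explicitly within each global step. This loose-coupling sketch is generic and is not the paper's coupling technique; the rate constant, flow placeholder, and step sizes are illustrative.

```python
def reacting_step(c, u_flow, dt_flow, k, n_sub):
    """One global flow step with the stiff reaction dc/dt = -k*c integrated
    by explicit Euler on n_sub chemistry sub-steps for stability."""
    # flow update (placeholder: a simple linear dilution effect)
    c = c * (1.0 - u_flow * dt_flow)
    # chemistry subcycling: explicit Euler needs dt_chem < 2/k
    dt_chem = dt_flow / n_sub
    for _ in range(n_sub):
        c += dt_chem * (-k * c)
    return c
```

    With a single sub-step the explicit update violates the stability bound dt < 2/k and diverges; with enough sub-steps the chemistry stays stable while the global flow step remains relatively large.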

  14. Efficient self-consistent viscous-inviscid solutions for unsteady transonic flow

    NASA Technical Reports Server (NTRS)

    Howlett, J. T.

    1985-01-01

    An improved method is presented for coupling a boundary layer code with an unsteady inviscid transonic computer code in a quasi-steady fashion. At each fixed time step, the boundary layer and inviscid equations are successively solved until the process converges. An explicit coupling of the equations is described which greatly accelerates the convergence process. Computer times for converged viscous-inviscid solutions are about 1.8 times the comparable inviscid values. Comparisons of the results obtained with experimental data for three airfoils are presented. These comparisons demonstrate that the explicitly coupled viscous-inviscid solutions can provide efficient predictions of pressure distributions and lift for unsteady two-dimensional transonic flows.
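    The quasi-steady coupling loop, alternating the inviscid and boundary-layer "solvers" within a fixed time step until the interface quantity converges, can be sketched generically with under-relaxation. The two scalar maps below are placeholders for the actual solvers, and the relaxation factor is an illustrative assumption.

```python
def coupled_solve(f_inviscid, g_viscous, x0, relax=0.7, tol=1e-10, max_iter=200):
    """Fixed-point iteration between two coupled 'solvers':
    x = f_inviscid(g_viscous(x)), with under-relaxation for stability."""
    x = x0
    for _ in range(max_iter):
        y = g_viscous(x)            # e.g. boundary-layer displacement effect
        x_new = f_inviscid(y)       # e.g. inviscid pressure update
        if abs(x_new - x) < tol:
            return x_new            # interface quantity has converged
        x = x + relax * (x_new - x)  # relaxed update toward the new value
    return x
```

    With contractive placeholder maps the iteration converges geometrically to the self-consistent interface value; relaxation trades per-iteration progress for robustness when the maps are less well-behaved.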

  16. A three-dimensional method-of-characteristics solute-transport model (MOC3D)

    USGS Publications Warehouse

    Konikow, Leonard F.; Goode, D.J.; Hornberger, G.Z.

    1996-01-01

    This report presents a model, MOC3D, that simulates three-dimensional solute transport in flowing ground water. The model computes changes in concentration of a single dissolved chemical constituent over time that are caused by advective transport, hydrodynamic dispersion (including both mechanical dispersion and diffusion), mixing (or dilution) from fluid sources, and mathematically simple chemical reactions (including linear sorption, which is represented by a retardation factor, and decay). The transport model is integrated with MODFLOW, a three-dimensional ground-water flow model that uses implicit finite-difference methods to solve the transient flow equation. MOC3D uses the method of characteristics to solve the transport equation on the basis of the hydraulic gradients computed with MODFLOW for a given time step. This implementation of the method of characteristics uses particle tracking to represent advective transport and explicit finite-difference methods to calculate the effects of other processes. However, the explicit procedure has several stability criteria that may limit the size of time increments for solving the transport equation; these are automatically determined by the program. For improved efficiency, the user can apply MOC3D to a subgrid of the primary MODFLOW grid that is used to solve the flow equation. However, the transport subgrid must have uniform grid spacing along rows and columns. The report includes a description of the theoretical basis of the model, a detailed description of input requirements and output options, and the results of model testing and evaluation. The model was evaluated for several problems for which exact analytical solutions are available and by benchmarking against other numerical codes for selected complex problems for which no exact solutions are available. 
These test results indicate that the model is very accurate for a wide range of conditions and yields minimal numerical dispersion for advection-dominated problems. Mass-balance errors are generally less than 10 percent, and tend to decrease and stabilize with time.
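    The particle-tracking plus explicit-dispersion split can be sketched as a 1-D random-walk method: each particle is advected deterministically and receives a Gaussian kick of variance 2·D·dt per step. This is a generic illustration, not the MOC3D implementation; the velocity and dispersion values are arbitrary.

```python
import math
import random

def track_particles(n, v, D, dt, steps, seed=0):
    """Random-walk particle tracking for 1-D advection-dispersion:
    deterministic advection v*dt plus Gaussian dispersion per step."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * D * dt)          # per-step dispersive kick size
    xs = [0.0] * n
    for _ in range(steps):
        xs = [x + v * dt + sigma * rng.gauss(0.0, 1.0) for x in xs]
    return xs
```

    The particle cloud's mean should advect at v·t and its variance grow as 2·D·t, with no numerical dispersion beyond the statistical noise of the ensemble.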

  17. Modelling radionuclide transport in fractured media with a dynamic update of K d values

    DOE PAGES

    Trinchero, Paolo; Painter, Scott L.; Ebrahimi, Hedieh; ...

    2015-10-13

    Radionuclide transport in fractured crystalline rocks is a process of interest in evaluating long term safety of potential disposal systems for radioactive wastes. Given their numerical efficiency and the absence of numerical dispersion, Lagrangian methods (e.g. particle tracking algorithms) are appealing approaches that are often used in safety assessment (SA) analyses. In these approaches, many complex geochemical retention processes are typically lumped into a single parameter: the distribution coefficient (Kd). Usually, the distribution coefficient is assumed to be constant over the time frame of interest. However, this assumption could be critical under long-term geochemical changes as it is demonstrated that the distribution coefficient depends on the background chemical conditions (e.g. pH, Eh, and major chemistry). In this study, we provide a computational framework that combines the efficiency of Lagrangian methods with a sound and explicit description of the geochemical changes of the site and their influence on the radionuclide retention properties.
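    A minimal sketch of coupling a Lagrangian particle step to a time-dependent distribution coefficient: the linear-sorption retardation factor R(t) = 1 + (ρ_b/θ)·Kd(t) rescales the advective velocity each step. The bulk density, porosity, and Kd values below are illustrative placeholders, not site data or the paper's framework.

```python
def travel_time(path_length, v, kd_of_t, rho_b=1600.0, porosity=0.3, dt=1.0):
    """Advance a sorbing particle along a 1-D path at velocity v/R(t), where
    the retardation factor R is re-evaluated from a time-dependent Kd [m3/kg]."""
    x, t = 0.0, 0.0
    while x < path_length:
        R = 1.0 + (rho_b / porosity) * kd_of_t(t)  # dynamic Kd update
        x += (v / R) * dt                          # retarded advective step
        t += dt
    return t
```

    With Kd = 0 the particle is unretarded; any positive Kd stretches the travel time, and a time-varying kd_of_t lets the retardation follow evolving background chemistry.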

  18. Geometric multigrid for an implicit-time immersed boundary method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guy, Robert D.; Philip, Bobby; Griffith, Boyce E.

    2014-10-12

    The immersed boundary (IB) method is an approach to fluid-structure interaction that uses Lagrangian variables to describe the deformations and resulting forces of the structure and Eulerian variables to describe the motion and forces of the fluid. Explicit time stepping schemes for the IB method require solvers only for Eulerian equations, for which fast Cartesian grid solution methods are available. Such methods are relatively straightforward to develop and are widely used in practice but often require very small time steps to maintain stability. Implicit-time IB methods permit the stable use of large time steps, but efficient implementations of such methods require significantly more complex solvers that effectively treat both Lagrangian and Eulerian variables simultaneously. Moreover, several different approaches to solving the coupled Lagrangian-Eulerian equations have been proposed, but a complete understanding of this problem is still emerging. This paper presents a geometric multigrid method for an implicit-time discretization of the IB equations. This multigrid scheme uses a generalization of box relaxation that is shown to handle problems in which the physical stiffness of the structure is very large. Numerical examples are provided to illustrate the effectiveness and efficiency of the algorithms described herein. Finally, these tests show that using multigrid as a preconditioner for a Krylov method yields improvements in both robustness and efficiency as compared to using multigrid as a solver. They also demonstrate that with a time step 100–1000 times larger than that permitted by an explicit IB method, the multigrid-preconditioned implicit IB method is approximately 50–200 times more efficient than the explicit method.

  19. 50 CFR 600.310 - National Standard 1-Optimum Yield.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... At the time a stock complex is established, the FMP should provide a full and explicit description of... complex. When indicator stock(s) are used, periodic re-evaluation of available quantitative or qualitative... sufficiently to allow rebuilding within an acceptable time frame (also see paragraph (j)(3)(ii) of this section...

  20. 50 CFR 600.310 - National Standard 1-Optimum Yield.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... At the time a stock complex is established, the FMP should provide a full and explicit description of... complex. When indicator stock(s) are used, periodic re-evaluation of available quantitative or qualitative... sufficiently to allow rebuilding within an acceptable time frame (also see paragraph (j)(3)(ii) of this section...

  1. 50 CFR 600.310 - National Standard 1-Optimum Yield.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... At the time a stock complex is established, the FMP should provide a full and explicit description of... complex. When indicator stock(s) are used, periodic re-evaluation of available quantitative or qualitative... sufficiently to allow rebuilding within an acceptable time frame (also see paragraph (j)(3)(ii) of this section...

  2. 50 CFR 600.310 - National Standard 1-Optimum Yield.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... At the time a stock complex is established, the FMP should provide a full and explicit description of... complex. When indicator stock(s) are used, periodic re-evaluation of available quantitative or qualitative... sufficiently to allow rebuilding within an acceptable time frame (also see paragraph (j)(3)(ii) of this section...

  3. 50 CFR 600.310 - National Standard 1-Optimum Yield.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... At the time a stock complex is established, the FMP should provide a full and explicit description of... complex. When indicator stock(s) are used, periodic re-evaluation of available quantitative or qualitative... sufficiently to allow rebuilding within an acceptable time frame (also see paragraph (j)(3)(ii) of this section...

  4. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE PAGES

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...

    2018-04-17

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  5. Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models

    NASA Astrophysics Data System (ADS)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.

    2018-04-01

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
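    The IMEX splitting can be sketched with first-order IMEX Euler on the scalar model problem y' = −λy + sin t, where the stiff linear term (standing in for the acoustic terms) is treated implicitly and the slow forcing explicitly. λ and the step size below are illustrative; the paper's ARK schemes are higher-order, multi-stage versions of this idea.

```python
import math

def imex_euler(y0, t_end, dt, lam=1000.0):
    """First-order IMEX Euler for y' = -lam*y + sin(t):
    stiff linear term implicit, slow forcing explicit."""
    y, t = y0, 0.0
    n = round(t_end / dt)
    for _ in range(n):
        # (y_next - y)/dt = -lam*y_next + sin(t)  =>  solve for y_next
        y = (y + dt * math.sin(t)) / (1.0 + lam * dt)
        t += dt
    return y
```

    With λ·dt = 100 a fully explicit update would be wildly unstable, while the IMEX step stays bounded near the quasi-steady solution sin(t)/λ, which is exactly the step-size freedom the record attributes to treating acoustic waves implicitly.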

  6. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  7. A time-spectral approach to numerical weather prediction

    NASA Astrophysics Data System (ADS)

    Scheffel, Jan; Lindvall, Kristoffer; Yik, Hiu Fai

    2018-05-01

    Finite difference methods are traditionally used for modelling the time domain in numerical weather prediction (NWP). Time-spectral solution is an attractive alternative for reasons of accuracy and efficiency and because time step limitations associated with causal CFL-like criteria, typical for explicit finite difference methods, are avoided. In this work, the Lorenz 1984 chaotic equations are solved using the time-spectral algorithm GWRM (Generalized Weighted Residual Method). Comparisons of accuracy and efficiency are carried out for both explicit and implicit time-stepping algorithms. It is found that the efficiency of the GWRM compares well with these methods, in particular at high accuracy. For perturbative scenarios, the GWRM was found to be as much as four times faster than the finite difference methods. A primary reason is that the GWRM time intervals typically are two orders of magnitude larger than those of the finite difference methods. The GWRM has the additional advantage of producing analytical solutions in the form of Chebyshev series expansions. The results are encouraging for pursuing further studies, including spatial dependence, of the relevance of time-spectral methods for NWP modelling.
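    The time-spectral idea, representing the time dependence by a Chebyshev series and enforcing the equation residual at collocation points rather than marching with CFL-limited steps, can be sketched on y' = −y, y(0) = 1 over one time interval. This is a generic weighted-residual collocation sketch, not the GWRM itself; the series length is an arbitrary choice.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Solve dy/dt = -y on t in [0, 1], y(0) = 1, with y(t) = sum_k c_k T_k(x),
# where x = 2t - 1 maps [0, 1] onto the Chebyshev interval [-1, 1].
N = 8                                                     # series length (illustrative)
x = np.cos(np.pi * (np.arange(N - 1) + 0.5) / (N - 1))    # collocation nodes

A = np.zeros((N, N))
b = np.zeros(N)
for k in range(N):
    c = np.zeros(N)
    c[k] = 1.0                          # basis function T_k
    dc = 2.0 * C.chebder(c)             # chain rule: d/dt = 2 * d/dx
    # rows 0..N-2: residual y' + y = 0 at the collocation nodes
    A[:N - 1, k] = C.chebval(x, dc) + C.chebval(x, c)
    # last row: initial condition y(t=0) = y(x=-1) = 1
    A[N - 1, k] = C.chebval(-1.0, c)
b[N - 1] = 1.0

coef = C_coef = np.linalg.solve(A, b)   # Chebyshev coefficients of y(t)
y1 = C.chebval(1.0, coef)               # spectral approximation of y(1) = e**-1
```

    Eight coefficients already reproduce y(1) = e⁻¹ far more accurately than a comparable number of finite-difference steps would, illustrating why time-spectral intervals can be orders of magnitude larger, and the coefficient vector is itself an analytical (Chebyshev series) solution.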

  8. SOCIO-ETHICAL ISSUES IN PERSONALIZED MEDICINE: A SYSTEMATIC REVIEW OF ENGLISH LANGUAGE HEALTH TECHNOLOGY ASSESSMENTS OF GENE EXPRESSION PROFILING TESTS FOR BREAST CANCER PROGNOSIS.

    PubMed

    Ali-Khan, Sarah E; Black, Lee; Palmour, Nicole; Hallett, Michael T; Avard, Denise

    2015-01-01

    There have been multiple calls for explicit integration of ethical, legal, and social issues (ELSI) in health technology assessment (HTA) and addressing ELSI has been highlighted as key in optimizing benefits in the Omics/Personalized Medicine field. This study examines HTAs of an early clinical example of Personalized Medicine (gene expression profile tests [GEP] for breast cancer prognosis) aiming to: (i) identify ELSI; (ii) assess whether ELSIs are implicitly or explicitly addressed; and (iii) report methodology used for ELSI integration. A systematic search for HTAs (January 2004 to September 2012), followed by descriptive and qualitative content analysis. Seventeen HTAs for GEP were retrieved. Only three (18%) explicitly presented ELSI, and only one reported methodology. However, all of the HTAs included implicit ELSI. Eight themes of implicit and explicit ELSI were identified. "Classical" ELSI including privacy, informed consent, and concerns about limited patient/clinician genetic literacy were always presented explicitly. Some ELSI, including the need to understand how individual patients' risk tolerances affect clinical decision-making after reception of GEP results, were presented both explicitly and implicitly in HTAs. Others, such as concern about evidentiary deficiencies for clinical utility of GEP tests, occurred only implicitly. Despite a wide variety of important ELSI raised, these were rarely explicitly addressed in HTAs. Explicit treatment would increase their accessibility to decision-makers, and may augment HTA efficiency, maximizing their utility. This is particularly important where complex Personalized Medicine applications are rapidly expanding choices for patients, clinicians and healthcare systems.

  9. Efficiency analysis of numerical integrations for finite element substructure in real-time hybrid simulation

    NASA Astrophysics Data System (ADS)

    Wang, Jinting; Lu, Liqiao; Zhu, Fei

    2018-01-01

    Finite element (FE) is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of FE numerical substructure. In this way, the task execution time (TET) decreases such that the scale of the numerical substructure model increases. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influences of time delay on the displacement response become obvious with the mass ratio increasing, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even under the large time-step and large time delay.
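    The central difference method compared in this study can be sketched for M u'' + C u' + K u = f as follows. This is the textbook CDM update, not the authors' RTHS implementation; the point about diagonal damping is visible in that, when M and C are diagonal, the per-step system solve reduces to elementwise division.

```python
import numpy as np

def central_difference(M, C, K, force, u0, v0, dt, nsteps):
    """Explicit central-difference integration of M u'' + C u' + K u = f(t).

    The effective left-hand matrix M/dt^2 + C/(2 dt) is diagonal when
    M and C are diagonal, making each step a cheap elementwise solve.
    """
    M, C, K = (np.asarray(a, float) for a in (M, C, K))
    u = np.asarray(u0, float)
    v = np.asarray(v0, float)
    a0 = np.linalg.solve(M, force(0.0) - C @ v - K @ u)
    u_prev = u - dt * v + 0.5 * dt**2 * a0        # fictitious step u_{-1}
    lhs = M / dt**2 + C / (2.0 * dt)
    B = M / dt**2 - C / (2.0 * dt)
    for n in range(nsteps):
        rhs = force(n * dt) - K @ u + (2.0 / dt**2) * (M @ u) - B @ u_prev
        u, u_prev = np.linalg.solve(lhs, rhs), u
    return u

# Undamped single-DOF oscillator with omega = 2 rad/s, u(t) = cos(2 t)
u_end = central_difference([[1.0]], [[0.0]], [[4.0]],
                           lambda t: np.zeros(1), [1.0], [0.0],
                           dt=0.01, nsteps=100)
```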

  10. A three-dimensional finite-volume Eulerian-Lagrangian Localized Adjoint Method (ELLAM) for solute-transport modeling

    USGS Publications Warehouse

    Heberton, C.I.; Russell, T.F.; Konikow, Leonard F.; Hornberger, G.Z.

    2000-01-01

    This report documents the U.S. Geological Survey Eulerian-Lagrangian Localized Adjoint Method (ELLAM) algorithm that solves an integral form of the solute-transport equation, incorporating an implicit-in-time difference approximation for the dispersive and sink terms. Like the algorithm in the original version of the U.S. Geological Survey MOC3D transport model, ELLAM uses a method of characteristics approach to solve the transport equation on the basis of the velocity field. The ELLAM algorithm, however, is based on an integral formulation of conservation of mass and uses appropriate numerical techniques to obtain global conservation of mass. The implicit procedure eliminates several stability criteria required for an explicit formulation. Consequently, ELLAM allows large transport time increments to be used. ELLAM can produce qualitatively good results using a small number of transport time steps. A description of the ELLAM numerical method, the data-input requirements and output options, and the results of simulator testing and evaluation are presented. The ELLAM algorithm was evaluated for the same set of problems used to test and evaluate Version 1 and Version 2 of MOC3D. These test results indicate that ELLAM offers a viable alternative to the explicit and implicit solvers in MOC3D. Its use is desirable when mass balance is imperative or a fast, qualitative model result is needed. Although accurate solutions can be generated using ELLAM, its efficiency relative to the two previously documented solution algorithms is problem dependent.

  11. Efficiency Study of Implicit and Explicit Time Integration Operators for Finite Element Applications

    DTIC Science & Technology

    1977-07-01

    Efficiency, wherein Beta = 0 provides an explicit algorithm, while Beta ≠ 0 provides an implicit algorithm. Both algorithms are used in the same…

  12. Kinetic Equation for an Unstable Plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balescu, R.

    1963-01-01

    A kinetic equation is derived for the description of the evolution in time of the distribution of velocities in a spatially homogeneous ionized gas that, at the initial time, is able to sustain exponentially growing oscillations. This equation is expressed in terms of a functional of the distribution function that obeys the same integral equation as in the stable case. Although the method of solution used in the stable case breaks down, the equation can still be solved in closed form under unstable conditions, and hence an explicit form of the kinetic equation is obtained. The latter contains the normal collision term and a new additional term describing the stabilization of the plasma. The latter acts through friction and diffusion and brings the plasma into a state of neutral stability. From there on the system evolves toward thermal equilibrium under the action of the normal collision term as well as of an additional Fokker-Planck-like term with time-dependent coefficients, which however becomes less and less efficient as the plasma approaches equilibrium.

  13. An adaptive, implicit, conservative, 1D-2V multi-species Vlasov-Fokker-Planck multi-scale solver in planar geometry

    NASA Astrophysics Data System (ADS)

    Taitano, W. T.; Chacón, L.; Simakov, A. N.

    2018-07-01

    We consider a 1D-2V Vlasov-Fokker-Planck multi-species ionic description coupled to fluid electrons. We address temporal stiffness with implicit time stepping, suitably preconditioned. To address temperature disparity in time and space, we extend the conservative adaptive velocity-space discretization scheme proposed in [Taitano et al., J. Comput. Phys., 318, 391-420, (2016)] to a spatially inhomogeneous system. In this approach, we normalize the velocity-space coordinate to a temporally and spatially varying local characteristic speed per species. We explicitly consider the resulting inertial terms in the Vlasov equation, and derive a discrete formulation that conserves mass, momentum, and energy up to a prescribed nonlinear tolerance upon convergence. Our conservation strategy employs nonlinear constraints to enforce these properties discretely for both the Vlasov operator and the Fokker-Planck collision operator. Numerical examples of varying degrees of complexity, including shock-wave propagation, demonstrate the favorable efficiency and accuracy properties of the scheme.

  14. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  15. Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD

    DOE PAGES

    Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake; ...

    2017-03-24

    Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.

  16. Real-space decoupling transformation for quantum many-body systems.

    PubMed

    Evenbly, G; Vidal, G

    2014-06-06

    We propose a real-space renormalization group method to explicitly decouple into independent components a many-body system that, as in the phenomenon of spin-charge separation, exhibits separation of degrees of freedom at low energies. Our approach produces a branching holographic description of such systems that opens the path to the efficient simulation of the most entangled phases of quantum matter, such as those whose ground state violates a boundary law for entanglement entropy. As in the coarse-graining transformation of Vidal [Phys. Rev. Lett. 99, 220405 (2007)]…

  17. Implicit Geometry Meshing for the simulation of Rotary Friction Welding

    NASA Astrophysics Data System (ADS)

    Schmicker, D.; Persson, P.-O.; Strackeljan, J.

    2014-08-01

    The simulation of Rotary Friction Welding (RFW) is a challenging task, since it poses a coupled problem involving phenomena such as large plastic deformations, heat flux, contact and friction. In particular, mesh generation and its restoration are especially demanding when a Lagrangian description of motion is used. In this regard, Implicit Geometry Meshing (IGM) algorithms are promising alternatives to the more conventional explicit methods. Because of the implicit description of the geometry during remeshing, the IGM procedure turns out to be highly robust and generates spatial discretizations of high quality regardless of the complexity of the flash shape and its inclusions. A model for efficient RFW simulation is presented, which is based on a Carreau fluid law, an Augmented Lagrange approach for mapping the incompressible deformations, a penalty contact approach, a fully regularized Coulomb-/fluid friction law and a hybrid time integration strategy. The implementation of the IGM algorithm using 6-node triangular finite elements is described in detail. The techniques are illustrated on a fairly complex friction welding problem, demonstrating the performance and potential of the proposed method. The techniques are general and straightforward to implement, and lend themselves to a wide range of other engineering problems.

  18. The Impact of Instruction on Second-Language Implicit Knowledge: Evidence against Encapsulation

    ERIC Educational Resources Information Center

    Toth, Paul D.; Guijarro-Fuentes, Pedro

    2013-01-01

    This paper compares explicit instruction in second-language Spanish with a control treatment on a written picture description task and a timed auditory grammaticality judgment task. Participants came from two intact, third-year US high school classes, with one experiencing a week of communicative lessons on the Spanish clitic "se"…

  19. Generalized Born Models of Macromolecular Solvation Effects

    NASA Astrophysics Data System (ADS)

    Bashford, Donald; Case, David A.

    2000-10-01

    It would often be useful in computer simulations to use a simple description of solvation effects, instead of explicitly representing the individual solvent molecules. Continuum dielectric models often work well in describing the thermodynamic aspects of aqueous solvation, and approximations to such models that avoid the need to solve the Poisson equation are attractive because of their computational efficiency. Here we give an overview of one such approximation, the generalized Born model, which is simple and fast enough to be used for molecular dynamics simulations of proteins and nucleic acids. We discuss its strengths and weaknesses, both for its fidelity to the underlying continuum model and for its ability to replace explicit consideration of solvent molecules in macromolecular simulations. We focus particularly on versions of the generalized Born model that have a pair-wise analytical form, and therefore fit most naturally into conventional molecular mechanics calculations.
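    The pair-wise analytical form mentioned above is, in the widely used Still et al. variant, ΔG = -½ (1/ε_in − 1/ε_solv) Σ_ij q_i q_j / f_GB with f_GB = sqrt(r_ij² + R_i R_j exp(−r_ij²/(4 R_i R_j))). A minimal sketch, assuming the effective Born radii R_i are supplied as inputs (computing them is where the various GB flavors differ):

```python
import math

COULOMB = 332.06371  # kcal*Angstrom/(mol*e^2), a common MM unit convention

def gb_still_energy(charges, coords, born_radii, eps_in=1.0, eps_solv=78.5):
    """Pairwise generalized Born polarization energy (Still et al. form).

    The double sum runs over all pairs i, j including the self terms
    (i == j), for which f_GB reduces to the Born radius R_i.
    """
    pref = -0.5 * COULOMB * (1.0 / eps_in - 1.0 / eps_solv)
    e = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(n):
            d = [a - b for a, b in zip(coords[i], coords[j])]
            r2 = sum(x * x for x in d)
            aij = born_radii[i] * born_radii[j]
            f_gb = math.sqrt(r2 + aij * math.exp(-r2 / (4.0 * aij)))
            e += pref * charges[i] * charges[j] / f_gb
    return e

# A +1 ion with a 2 Angstrom Born radius: roughly -82 kcal/mol
e_ion = gb_still_energy([1.0], [(0.0, 0.0, 0.0)], [2.0])
```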

  20. Identification of internal properties of fibres and micro-swimmers

    NASA Astrophysics Data System (ADS)

    Plouraboué, Franck; Thiam, E. Ibrahima; Delmotte, Blaise; Climent, Eric

    2017-01-01

    In this paper, we address the identifiability of constitutive parameters of passive or active micro-swimmers. We first present a general framework for describing fibres or micro-swimmers using a bead-model description. Using a kinematic constraint formulation to describe fibres, flagella or cilia, we find an explicit linear relationship between elastic constitutive parameters and generalized velocities obtained from computed contact forces. This linear formulation then permits one to address identifiability conditions explicitly and to solve the parameter identification problem. We show that active forcing and passive parameters are each identifiable independently, but not simultaneously. We also provide unbiased estimators for generalized elastic parameters in the presence of Langevin-like forcing with Gaussian noise using a Bayesian approach. These theoretical results are illustrated in various configurations, showing the efficiency of the proposed approach for direct parameter identification. The convergence of the proposed estimators is successfully tested numerically.

  1. Geometric Heat Engines Featuring Power that Grows with Efficiency.

    PubMed

    Raz, O; Subaşı, Y; Pugatch, R

    2016-04-22

    Thermodynamics places a limit on the efficiency of heat engines, but not on their output power or on how the power and efficiency change with the engine's cycle time. In this Letter, we develop a geometrical description of the power and efficiency as a function of the cycle time, applicable to an important class of heat engine models. This geometrical description is used to design engine protocols that attain both the maximal power and maximal efficiency at the fast driving limit. Furthermore, using this method, we also prove that no protocol can exactly attain the Carnot efficiency at nonzero power.

  2. Parallel CE/SE Computations via Domain Decomposition

    NASA Technical Reports Server (NTRS)

    Himansu, Ananda; Jorgenson, Philip C. E.; Wang, Xiao-Yen; Chang, Sin-Chung

    2000-01-01

    This paper describes the parallelization strategy and achieved parallel efficiency of an explicit time-marching algorithm for solving conservation laws. The Space-Time Conservation Element and Solution Element (CE/SE) algorithm for solving the 2D and 3D Euler equations is parallelized with the aid of domain decomposition. The parallel efficiency of the resultant algorithm on a Silicon Graphics Origin 2000 parallel computer is checked.

  3. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    NASA Technical Reports Server (NTRS)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.

  4. A novel binary shape context for 3D local surface description

    NASA Astrophysics Data System (ADS)

    Dong, Zhen; Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Li, Bijun; Zang, Yufu

    2017-08-01

    3D local surface description is now at the core of many computer vision technologies, such as 3D object recognition, intelligent driving, and 3D model reconstruction. However, most of the existing 3D feature descriptors still suffer from low descriptiveness, weak robustness, and inefficiency in both time and memory. To overcome these challenges, this paper presents a robust and descriptive 3D Binary Shape Context (BSC) descriptor with high efficiency in both time and memory. First, a novel BSC descriptor is generated for 3D local surface description, and the performance of the BSC descriptor under different settings of its parameters is analyzed. Next, the descriptiveness, robustness, and efficiency in both time and memory of the BSC descriptor are evaluated and compared to those of several state-of-the-art 3D feature descriptors. Finally, the performance of the BSC descriptor for 3D object recognition is also evaluated on a number of popular benchmark datasets, and an urban-scene dataset is collected by a terrestrial laser scanner system. Comprehensive experiments demonstrate that the proposed BSC descriptor obtained high descriptiveness, strong robustness, and high efficiency in both time and memory and achieved high recognition rates of 94.8%, 94.1% and 82.1% on the considered UWA, Queen, and WHU datasets, respectively.

  5. Mixed time integration methods for transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1982-01-01

    The computational methods used to predict and optimize the thermal structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.

  6. Mixed time integration methods for transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1983-01-01

    The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally-useful method of estimating the critical time step for linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
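    A conservative critical-time-step estimate of the kind referred to above can be sketched as dt ≤ 2/λ_max, where λ_max is the largest eigenvalue of the generalized problem K φ = λ C φ for the (element) conductivity matrix K and capacitance matrix C, the forward-Euler stability limit for C Ṫ + K T = f. The matrices below are illustrative, not taken from the paper.

```python
import numpy as np

def explicit_critical_dt(K, C):
    """Explicit stability limit dt <= 2 / lambda_max for C T' + K T = f,
    with lambda_max the largest eigenvalue of K phi = lambda C phi.

    Applied element by element this gives a cheap, conservative bound
    for choosing the step in the explicit partition of a mixed
    implicit-explicit thermal integration.
    """
    K = np.asarray(K, float)
    C = np.asarray(C, float)
    lam = np.linalg.eigvals(np.linalg.solve(C, K))
    return 2.0 / float(np.max(lam.real))

# Two-DOF example: fastest mode has lambda = 10, so dt_max = 0.2
dt_max = explicit_critical_dt([[10.0, 0.0], [0.0, 1.0]],
                              [[1.0, 0.0], [0.0, 1.0]])
```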

  7. Accuracy of an unstructured-grid upwind-Euler algorithm for the ONERA M6 wing

    NASA Technical Reports Server (NTRS)

    Batina, John T.

    1991-01-01

    Improved algorithms for the solution of the three-dimensional, time-dependent Euler equations are presented for aerodynamic analysis involving unstructured dynamic meshes. The improvements have been developed recently to the spatial and temporal discretizations used by unstructured-grid flow solvers. The spatial discretization involves a flux-split approach that is naturally dissipative and captures shock waves sharply with at most one grid point within the shock structure. The temporal discretization involves either an explicit time-integration scheme using a multistage Runge-Kutta procedure or an implicit time-integration scheme using a Gauss-Seidel relaxation procedure, which is computationally efficient for either steady or unsteady flow problems. With the implicit Gauss-Seidel procedure, very large time steps may be used for rapid convergence to steady state, and the step size for unsteady cases may be selected for temporal accuracy rather than for numerical stability. Steady flow results are presented for both the NACA 0012 airfoil and the Office National d'Etudes et de Recherches Aerospatiales M6 wing to demonstrate applications of the new Euler solvers. The paper presents a description of the Euler solvers along with results and comparisons that assess the capability.
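    The multistage Runge-Kutta pseudo-time marching mentioned above can be sketched generically for a semi-discrete system du/dt = R(u). The stage coefficients here are the classic four-stage values (1/4, 1/3, 1/2, 1), chosen for illustration rather than taken from this paper; production Euler solvers tune them, together with local time steps and residual smoothing, for damping.

```python
def multistage_rk(u0, residual, dt, coeffs=(0.25, 1.0 / 3.0, 0.5, 1.0),
                  nsteps=50):
    """Multistage Runge-Kutta marching of du/dt = residual(u).

    Each step restarts the stages from u, a low-storage pattern common
    in explicit marching of semi-discrete flow equations to steady state.
    """
    u = u0
    for _ in range(nsteps):
        u_stage = u
        for a in coeffs:
            u_stage = u + a * dt * residual(u_stage)
        u = u_stage
    return u

# Linear model problem R(u) = -u: iterates decay toward the steady state 0
u_steady = multistage_rk(1.0, lambda u: -u, dt=0.5)
```

    For a linear residual these coefficients reproduce the fourth-order stability polynomial 1 + z + z²/2 + z³/6 + z⁴/24 per step.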

  8. Subcritical saturation of the magnetorotational instability through mean magnetic field generation

    NASA Astrophysics Data System (ADS)

    Xie, Jin-Han; Julien, Keith; Knobloch, Edgar

    2018-03-01

    The magnetorotational instability is widely believed to be responsible for outward angular momentum transport in astrophysical accretion discs. The efficiency of this transport depends on the amplitude of this instability in the saturated state. We employ an asymptotic expansion based on an explicit, astrophysically motivated time-scale separation between the orbital period, Alfvén crossing time and viscous or resistive dissipation time-scales, originally proposed by Knobloch and Julien, to formulate a semi-analytical description of the saturated state in an incompressible disc. In our approach a Keplerian shear flow is maintained by the central mass but the instability saturates via the generation of a mean vertical magnetic field. The theory assumes that the time-averaged angular momentum flux and the radial magnetic flux are constant and determines both self-consistently. The results predict that, depending on parameters, steady saturation may be supercritical or subcritical, and in the latter case that the upper (lower) solution branch is always stable (unstable). The angular momentum flux is always outward, consistent with the presence of accretion, and for fixed wavenumber peaks in the subcritical regime. The limit of infinite Reynolds number at large but finite magnetic Reynolds number is also discussed.

  9. Connecting Free Energy Surfaces in Implicit and Explicit Solvent: an Efficient Method to Compute Conformational and Solvation Free Energies

    PubMed Central

    Deng, Nanjie; Zhang, Bin W.; Levy, Ronald M.

    2015-01-01

    The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions and protein-ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ~3 kcal/mol at only ~8% of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the implicit/explicit thermodynamic cycle. PMID:26236174

  10. Connecting free energy surfaces in implicit and explicit solvent: an efficient method to compute conformational and solvation free energies.

    PubMed

    Deng, Nanjie; Zhang, Bin W; Levy, Ronald M

    2015-06-09

    The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions, and protein–ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ∼3 kcal/mol at only ∼8% of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the implicit/explicit thermodynamic cycle.

  11. The importance of educational theories for facilitating learning when using technology in medical education.

    PubMed

    Sandars, John; Patel, Rakesh S; Goh, Poh Sun; Kokatailo, Patricia K; Lafferty, Natalie

    2015-01-01

    There is an increasing use of technology for teaching and learning in medical education but often the use of educational theory to inform the design is not made explicit. The educational theories, both normative and descriptive, used by medical educators determine how the technology is intended to facilitate learning and may explain why some interventions with technology may be less effective compared with others. The aim of this study is to highlight the importance of medical educators making explicit the educational theories that inform their design of interventions using technology. The use of illustrative examples of the main educational theories to demonstrate the importance of theories informing the design of interventions using technology. Highlights the use of educational theories for theory-based and realistic evaluations of the use of technology in medical education. An explicit description of the educational theories used to inform the design of an intervention with technology can provide potentially useful insights into why some interventions with technology are more effective than others. An explicit description is also an important aspect of the scholarship of using technology in medical education.

  12. Efficient Simulation of Explicitly Solvated Proteins in the Well-Tempered Ensemble.

    PubMed

    Deighan, Michael; Bonomi, Massimiliano; Pfaendtner, Jim

    2012-07-10

    Herein, we report significant reduction in the cost of combined parallel tempering and metadynamics simulations (PTMetaD). The efficiency boost is achieved using the recently proposed well-tempered ensemble (WTE) algorithm. We studied the convergence of PTMetaD-WTE conformational sampling and free energy reconstruction of an explicitly solvated 20-residue tryptophan-cage protein (trp-cage). A set of PTMetaD-WTE simulations was compared to a corresponding standard PTMetaD simulation. The properties of PTMetaD-WTE and the convergence of the calculations were compared. The roles of the number of replicas, total simulation time, and adjustable WTE parameter γ were studied.

  13. Stability of mixed time integration schemes for transient thermal analysis

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Lin, J. I.

    1982-01-01

    A current research topic in coupled-field problems is the development of effective transient algorithms that permit different time integration methods, with different time steps, to be used simultaneously in various regions of the problem. The implicit-explicit approach appears to be very successful in structural, fluid, and fluid-structure problems. This paper summarizes this research direction. A family of mixed time integration schemes, with the capabilities mentioned above, is also introduced for transient thermal analysis. A stability analysis and the computer implementation of this technique are also presented. In particular, it is shown that the mixed implicit-explicit time integration methods provide a natural framework for the further development of efficient, clean, modularized computer codes.
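    The stability gap that motivates mixing the two method families can be illustrated on a one-dimensional heat equation: a forward-Euler (explicit) update is stable only for r = αΔt/Δx² ≤ 1/2, while a backward-Euler (implicit) update is unconditionally stable. A minimal sketch of this contrast (illustrative grid and parameters, not the paper's mixed scheme):

```python
import numpy as np

def heat_step_explicit(u, r):
    """Forward-Euler step for u_t = alpha*u_xx; stable only for r <= 0.5."""
    un = u.copy()
    un[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return un

def heat_step_implicit(u, r):
    """Backward-Euler step: solve (I - r*D2) u_new = u (unconditionally stable)."""
    n = len(u)
    A = np.eye(n) * (1.0 + 2.0 * r)
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = -r
    A[0, :] = 0.0; A[0, 0] = 1.0        # fixed (Dirichlet) boundary values
    A[-1, :] = 0.0; A[-1, -1] = 1.0
    return np.linalg.solve(A, u)

u0 = np.zeros(21); u0[10] = 1.0         # initial hot spot
r = 2.0                                 # violates the explicit limit r <= 0.5
u_exp, u_imp = u0.copy(), u0.copy()
for _ in range(50):
    u_exp = heat_step_explicit(u_exp, r)
    u_imp = heat_step_implicit(u_imp, r)
print(np.max(np.abs(u_exp)) > 1e6)      # explicit update has blown up
print(np.max(np.abs(u_imp)) < 1.0)      # implicit update stays bounded
```

A mixed scheme exploits exactly this trade-off by reserving the implicit (costlier) update for the stiff regions of the mesh.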

  14. Importance of semicore states in GW calculations for simulating accurately the photoemission spectra of metal phthalocyanine molecules.

    PubMed

    Umari, P; Fabris, S

    2012-05-07

    The quasi-particle energy levels of the Zn-phthalocyanine (ZnPc) molecule calculated with the GW approximation are shown to depend sensitively on the explicit description of the metal-center semicore states. We find that the calculated GW energy levels are in good agreement with the measured photoemission spectra only when the Zn 3s and 3p semicore states are explicitly included in the valence. The main origin of this effect is traced back to the exchange term in the GW self-energy. Based on this finding, we propose a simplified approach for correcting GW calculations of metal phthalocyanine molecules that avoids the time-consuming explicit treatment of the metal semicore states. Our method speeds up the calculations without compromising the accuracy of the computed spectra.

  15. Eleventh-Grade High School Students' Accounts of Mathematical Metacognitive Knowledge: Explicitness and Systematicity

    ERIC Educational Resources Information Center

    van Velzen, Joke H.

    2016-01-01

    Theoretically, it has been argued that a conscious understanding of metacognitive knowledge requires that this knowledge is explicit and systematic. The purpose of this descriptive study was to obtain a better understanding of explicitness and systematicity in knowledge of the mathematical problem-solving process. Eighteen 11th-grade…

  16. The Ms. Stereotype Revisited: Implicit and Explicit Facets

    ERIC Educational Resources Information Center

    Malcolmson, Kelly A.; Sinclair, Lisa

    2007-01-01

    Implicit and explicit stereotypes toward the title Ms. were examined. Participants read a short description of a target person whose title of address varied (Ms., Mrs., Miss, Mr.). They then rated the person on agentic and communal traits and completed an Implicit Association Test. Replicating earlier research (Dion, 1987), at an explicit level,…

  17. On the development of efficient algorithms for three dimensional fluid flow

    NASA Technical Reports Server (NTRS)

    Maccormack, R. W.

    1988-01-01

    The difficulties of constructing efficient algorithms for three-dimensional flow are discussed. Reasonable candidates are analyzed and tested, and most are found to have obvious shortcomings. Yet there is promise that an efficient class of algorithms exists between the severely time-step-size-limited explicit or approximately factored algorithms and the computationally intensive direct inversion of large sparse matrices by Gaussian elimination.

  18. Efficiency and flexibility using implicit methods within atmosphere dycores

    NASA Astrophysics Data System (ADS)

    Evans, K. J.; Archibald, R.; Norman, M. R.; Gardner, D. J.; Woodward, C. S.; Worley, P.; Taylor, M.

    2016-12-01

    A suite of explicit and implicit methods are evaluated for a range of configurations of the shallow water dynamical core within the spectral-element Community Atmosphere Model (CAM-SE) to explore their relative computational performance. The configurations are designed to explore the attributes of each method under different but relevant model usage scenarios including varied spectral order within an element, static regional refinement, and scaling to large problem sizes. The limitations and benefits of using explicit versus implicit, with different discretizations and parameters, are discussed in light of trade-offs such as MPI communication, memory, and inherent efficiency bottlenecks. For the regionally refined shallow water configurations, the implicit BDF2 method is about as efficient as an explicit Runge-Kutta method, without including a preconditioner. Performance of the implicit methods with the residual function executed on a GPU is also presented; there is a speed-up for the residual relative to a CPU, but overwhelming transfer costs motivate moving more of the solver to the device. Given the performance behavior of implicit methods within the shallow water dynamical core, the recommendation for future work using implicit solvers is conditional based on scale separation and the stiffness of the problem. The strong growth of linear iterations with increasing resolution or time step size is the main bottleneck to computational efficiency. Within the hydrostatic dynamical core of CAM-SE, we present results utilizing approximate block factorization preconditioners implemented using the Trilinos library of solvers. They reduce the cost of linear system solves and improve parallel scalability.
We provide a summary of the remaining efficiency considerations within the preconditioner and utilization of the GPU, as well as a discussion about the benefits of a time stepping method that provides converged and stable solutions for a much wider range of time step sizes. As more complex model components, for example new physics and aerosols, are connected in the model, having flexibility in the time stepping will enable more options for combining and resolving multiple scales of behavior.

  19. Implicit-Explicit Time Integration Methods for Non-hydrostatic Atmospheric Models

    NASA Astrophysics Data System (ADS)

    Gardner, D. J.; Guerra, J. E.; Hamon, F. P.; Reynolds, D. R.; Ullrich, P. A.; Woodward, C. S.

    2016-12-01

    The Accelerated Climate Modeling for Energy (ACME) project is developing a non-hydrostatic atmospheric dynamical core for high-resolution coupled climate simulations on Department of Energy leadership class supercomputers. An important factor in computational efficiency is avoiding the overly restrictive time step size limitations of fully explicit time integration methods due to the stiffest modes present in the model (acoustic waves). In this work we compare the accuracy and performance of different Implicit-Explicit (IMEX) splittings of the non-hydrostatic equations and various Additive Runge-Kutta (ARK) time integration methods. Results utilizing the Tempest non-hydrostatic atmospheric model and the ARKode package show that the choice of IMEX splitting and ARK scheme has a significant impact on the maximum stable time step size as well as solution quality. Horizontally Explicit Vertically Implicit (HEVI) approaches paired with certain ARK methods lead to greatly improved runtimes. With effective preconditioning IMEX splittings that incorporate some implicit horizontal dynamics can be competitive with HEVI results. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-699187

  20. Embedded-explicit emergent literacy intervention I: Background and description of approach.

    PubMed

    Justice, Laura M; Kaderavek, Joan N

    2004-07-01

    This article, the first of a two-part series, provides background information and a general description of an emergent literacy intervention model for at-risk preschoolers and kindergartners. The embedded-explicit intervention model emphasizes the dual importance of providing young children with socially embedded opportunities for meaningful, naturalistic literacy experiences throughout the day, in addition to regular structured therapeutic interactions that explicitly target critical emergent literacy goals. The role of the speech-language pathologist (SLP) in the embedded-explicit model encompasses both indirect and direct service delivery: The SLP consults and collaborates with teachers and parents to ensure the highest quality and quantity of socially embedded literacy-focused experiences and serves as a direct provider of explicit interventions using structured curricula and/or lesson plans. The goal of this integrated model is to provide comprehensive emergent literacy interventions across a spectrum of early literacy skills to ensure the successful transition of at-risk children from prereaders to readers.

  1. Efficient Skeletonization of Volumetric Objects.

    PubMed

    Zhou, Yong; Toga, Arthur W

    1999-07-01

    Skeletonization promises to become a powerful tool for compact shape description, path planning, and other applications. However, current techniques can seldom efficiently process real, complicated 3D data sets, such as MRI and CT data of human organs. In this paper, we present an efficient voxel-coding-based algorithm for skeletonization of 3D voxelized objects. The skeletons are interpreted as connected centerlines, consisting of sequences of medial points of consecutive clusters. These centerlines are initially extracted as paths of voxels, followed by medial point replacement, refinement, smoothing, and connection operations. Voxel-coding techniques are proposed for each of these operations in a uniform and systematic fashion. In addition to preserving basic connectivity and centeredness, the algorithm is characterized by straightforward computation, insensitivity to object boundary complexity, explicit extraction of ready-to-parameterize and branch-controlled skeletons, and efficient object hole detection. These issues are rarely discussed in traditional methods. A range of 3D medical MRI and CT data sets were used for testing the algorithm, demonstrating its utility.
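    The voxel-coding idea underlying the algorithm can be sketched in two dimensions: a breadth-first traversal from seed voxels assigns each object voxel an integer code (its BFS distance), and extrema of the code mark centerline candidates. The grid and seeding below are illustrative, not the paper's full pipeline:

```python
from collections import deque

def bfs_code(grid, seeds):
    """Voxel coding: assign each object voxel its BFS distance from the seeds.
    Maxima of the resulting code are candidates for centerline endpoints."""
    code = {s: 0 for s in seeds}
    q = deque(seeds)
    while q:
        x, y = q.popleft()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in grid and nb not in code:
                code[nb] = code[(x, y)] + 1
                q.append(nb)
    return code

# a 5x3 slab of object voxels, seeded from its left edge
grid = {(x, y) for x in range(5) for y in range(3)}
code = bfs_code(grid, [(0, y) for y in range(3)])
print(max(code.values()))   # 4: the far edge is 4 steps from the seeds
```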

  2. Efficient Translation of LTL Formulae into Büchi Automata

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Lerda, Flavio

    2001-01-01

    Model checking is a fully automated technique for checking that a system satisfies a set of required properties. With explicit-state model checkers, properties are typically defined in linear-time temporal logic (LTL), and are translated into Büchi automata in order to be checked. This report presents how we have combined and improved existing techniques to obtain an efficient LTL to Büchi automata translator. In particular, we optimize the core of existing tableau-based approaches to generate significantly smaller automata. Our approach has been implemented and is being released as part of the Java PathFinder software (JPF), an explicit state model checker under development at the NASA Ames Research Center.
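    For context, the automaton produced by such a translator is checked against the system's reachable state space by an explicit-state search. The sketch below shows only the explicit-state exploration half, for the simple safety property "always p" (full LTL checking would build the Büchi automaton for the negated formula and search the product for accepting cycles); the toy system and property are hypothetical:

```python
from collections import deque

def holds_globally(init, succ, p):
    """Explicit-state check of the safety property 'always p':
    breadth-first search of the reachable states for a violation of p."""
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if not p(s):
            return False               # counterexample state reached
        for t in succ(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True

# toy system: a counter; property "counter < 5"
print(holds_globally(0, lambda s: [(s + 1) % 5], lambda s: s < 5))   # True
print(holds_globally(0, lambda s: [(s + 1) % 7], lambda s: s < 5))   # False
```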

  3. Combined AIE/EBE/GMRES approach to incompressible flows. [Adaptive Implicit-Explicit/Grouped Element-by-Element/Generalized Minimum Residuals

    NASA Technical Reports Server (NTRS)

    Liou, J.; Tezduyar, T. E.

    1990-01-01

    Adaptive implicit-explicit (AIE), grouped element-by-element (GEBE), and generalized minimum residual (GMRES) solution techniques for incompressible flows are combined. In this approach, the GEBE and GMRES iteration methods are employed to solve the equation systems resulting from the implicitly treated elements, and therefore no direct solution effort is involved. The benchmarking results demonstrate that this approach can substantially reduce the CPU time and memory requirements in large-scale flow problems. Although the concepts are described and demonstrated numerically for incompressible flows, the approach presented here is applicable to a larger class of problems in computational mechanics.

  4. Medication Reconciliation: Work Domain Ontology, prototype development, and a predictive model.

    PubMed

    Markowitz, Eliz; Bernstam, Elmer V; Herskovic, Jorge; Zhang, Jiajie; Shneiderman, Ben; Plaisant, Catherine; Johnson, Todd R

    2011-01-01

    Medication errors can result from administration inaccuracies at any point of care and are a major cause for concern. To develop a successful Medication Reconciliation (MR) tool, we believe it necessary to build a Work Domain Ontology (WDO) for the MR process. A WDO defines the explicit, abstract, implementation-independent description of the task by separating the task from work context, application technology, and cognitive architecture. We developed a prototype based upon the WDO and designed to adhere to standard principles of interface design. The prototype was compared to Legacy Health System's and Pre-Admission Medication List Builder MR tools via a Keystroke-Level Model analysis for three MR tasks. The analysis found the prototype requires the fewest mental operations, completes tasks in the fewest steps, and completes tasks in the least amount of time. Accordingly, we believe that developing a MR tool, based upon the WDO and user interface guidelines, improves user efficiency and reduces cognitive load.
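    A Keystroke-Level Model analysis of the kind used in this comparison predicts expert task completion time by summing standard operator times (K keystroke, P point with mouse, B button press, H home hands, M mental preparation; values per Card, Moran & Newell). The operator sequences below are hypothetical illustrations, not the study's actual MR tasks:

```python
# Standard KLM operator times in seconds (Card, Moran & Newell).
KLM = {"K": 0.28, "P": 1.10, "B": 0.10, "H": 0.40, "M": 1.35}

def klm_time(ops):
    """Predicted expert completion time for a sequence of KLM operators."""
    return sum(KLM[op] for op in ops)

# hypothetical task: reconcile one medication entry
legacy    = "M P B M H " + "K " * 8   # extra mental step, homing, more typing
prototype = "M P B " + "K " * 4       # fewer operators -> lower predicted time
t_legacy = klm_time(legacy.split())
t_proto = klm_time(prototype.split())
print(round(t_legacy, 2), round(t_proto, 2))   # 6.54 3.67
```

Fewer mental operators (M) and steps translate directly into a lower predicted time, which is the sense in which the prototype "wins" the KLM comparison.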

  6. IMEX HDG-DG: A coupled implicit hybridized discontinuous Galerkin and explicit discontinuous Galerkin approach for Euler systems on cubed sphere.

    NASA Astrophysics Data System (ADS)

    Kang, S.; Muralikrishnan, S.; Bui-Thanh, T.

    2017-12-01

    We propose IMEX HDG-DG schemes for Euler systems on the cubed sphere. Of interest is subsonic flow, where the speed of the acoustic wave is faster than that of the nonlinear advection. In order to simulate these flows efficiently, we split the governing system into a stiff part describing the fast waves and a non-stiff part associated with nonlinear advection. The former is discretized implicitly with the HDG method, while an explicit Runge-Kutta DG discretization is employed for the latter. The proposed IMEX HDG-DG framework: 1) facilitates high-order solutions in both time and space; 2) avoids overly small time step sizes; 3) requires only one linear system solve per time step; and 4) relative to DG, generates a smaller and sparser linear system while promoting further parallelism owing to the HDG discretization. Numerical results for various test cases demonstrate that our methods are comparable to explicit Runge-Kutta DG schemes in terms of accuracy, while allowing for much larger time step sizes.
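    The payoff of such a stiff/non-stiff splitting can be seen on a scalar model problem: treating the stiff linear term implicitly and the forcing explicitly gives a closed-form update that remains stable far beyond the explicit step-size limit. A first-order IMEX Euler sketch (a toy analogue, not the HDG-DG scheme):

```python
import math

def imex_euler(y0, lam, t_end, dt):
    """First-order IMEX step for y' = lam*y + sin(t):
    the stiff linear term is implicit, the forcing is explicit.
    Update: (1 - dt*lam) * y_new = y + dt*sin(t)."""
    y, t = y0, 0.0
    while t < t_end - 1e-12:
        y = (y + dt * math.sin(t)) / (1.0 - dt * lam)
        t += dt
    return y

# dt = 0.01 is ~10x larger than the explicit-Euler stability limit 2/|lam|
y = imex_euler(1.0, -1000.0, 1.0, 0.01)
print(abs(y) < 1.0)   # solution stays bounded despite the large step
```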

  7. Should Cost-Effectiveness Analysis Include the Cost of Consumption Activities? An Empirical Investigation.

    PubMed

    Adarkwah, Charles Christian; Sadoghi, Amirhossein; Gandjour, Afschin

    2016-02-01

    There has been a debate on whether cost-effectiveness analysis should consider the cost of consumption and leisure time activities when using the quality-adjusted life year as a measure of health outcome under a societal perspective. The purpose of this study was to investigate whether the effects of ill health on consumptive activities are spontaneously considered in a health state valuation exercise and how much this matters. The survey enrolled patients with inflammatory bowel disease in Germany (n = 104). Patients were randomized to receive either explicit instruction or no explicit instruction to consider consumption and leisure effects in a time trade-off (TTO) exercise. Explicit instruction to consider non-health-related utility in TTO exercises did not influence TTO scores. However, spontaneous consideration of non-health-related utility by patients without explicit instruction (60% of respondents) led to significantly lower TTO scores. Results suggest an inclusion of consumption costs in the numerator of the cost-effectiveness ratio, at least for those respondents who spontaneously consider non-health-related utility from treatment. Results also suggest that exercises eliciting health valuations from the general public may include a description of the impact of disease on consumptive activities. Copyright © 2015 John Wiley & Sons, Ltd.
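    For reference, a conventional TTO score divides the number of years in full health a respondent accepts by the number of years in the health state being valued. A sketch with hypothetical responses (the 0.7 vs 0.6 values are illustrative, not the study's data):

```python
def tto_utility(years_full_health, years_in_state):
    """Time trade-off score: x years in full health judged equivalent to
    t years in the health state gives u = x / t (u = 1 means full health)."""
    return years_full_health / years_in_state

u_no_consideration = tto_utility(7.0, 10.0)  # hypothetical respondent
u_spontaneous      = tto_utility(6.0, 10.0)  # lower score when consumption
                                             # effects are spontaneously weighed in
print(u_no_consideration, u_spontaneous)     # 0.7 0.6
```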

  8. A new solution method for wheel/rail rolling contact.

    PubMed

    Yang, Jian; Song, Hua; Fu, Lihua; Wang, Meng; Li, Wei

    2016-01-01

    To solve the problem of wheel/rail rolling contact in nonlinear steady-state curving, a three-dimensional transient finite element (FE) model is developed with the explicit software ANSYS/LS-DYNA. To improve the solving speed and efficiency, an explicit-explicit order solution method is put forward based on an analysis of the features of implicit and explicit algorithms. The solution method first calculates the pre-loading of wheel/rail rolling contact with the explicit algorithm, and the results then become the initial conditions for solving the dynamic process of wheel/rail rolling contact, also with the explicit algorithm. Simultaneously, the common implicit-explicit order solution method is used to solve the FE model. Results show that the explicit-explicit order solution method has faster operation speed and higher efficiency than the implicit-explicit order solution method, while the solution accuracy is almost the same. Hence, the explicit-explicit order solution method is more suitable for wheel/rail rolling contact models with large scale and high nonlinearity.

  9. A point implicit time integration technique for slow transient flow problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.

    2015-05-01

    We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly, and the rest of the information related to the same or other variables is handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except that it involves a few additional function evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
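    The flavor of a point implicit update can be shown on a scalar relaxation equation: the unknown is treated implicitly, but because the implicit dependence is local the update is closed-form, so no iteration is needed and the step is unconditionally stable. A sketch (illustrative equation and parameters, not the paper's flow solver):

```python
def point_implicit_step(y, dt, k, y_eq):
    """One point-implicit step for dy/dt = -k*(y - y_eq).
    Treating y at the new time level implicitly gives
    y_new = (y + dt*k*y_eq) / (1 + dt*k): no iteration, stable for any dt."""
    return (y + dt * k * y_eq) / (1.0 + dt * k)

y = 100.0
for _ in range(10):                 # huge steps: dt*k = 50 >> explicit limit
    y = point_implicit_step(y, dt=5.0, k=10.0, y_eq=20.0)
print(abs(y - 20.0) < 1e-10)        # relaxes monotonically to equilibrium
```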

  10. Identification of internal properties of fibers and micro-swimmers

    NASA Astrophysics Data System (ADS)

    Plouraboue, Franck; Thiam, Ibrahima; Delmotte, Blaise; Climent, Eric; PSC Collaboration

    2016-11-01

    In this presentation we discuss the identifiability of constitutive parameters of passive or active micro-swimmers. We first present a general framework for describing fibers or micro-swimmers using a bead-model description. Using a kinematic constraint formulation to describe fibers, flagella, or cilia, we find an explicit linear relationship between elastic constitutive parameters and generalized velocities by computing contact forces. This linear formulation then permits us to address identifiability conditions explicitly and to solve the parameter identification problem. We show that active forcing and passive parameters are each identifiable independently, but not simultaneously. We also provide unbiased estimators for elastic as well as active parameters in the presence of Langevin-like forcing with Gaussian noise, using normal linear regression models and the maximum likelihood method. These theoretical results are illustrated in various configurations of relaxed or actuated passive fibers, and of active filaments of known passive properties, showing the efficiency of the proposed approach for direct parameter identification. The convergence of the proposed estimators is successfully tested numerically.

  11. A multi-dimensional nonlinearly implicit, electromagnetic Vlasov-Darwin particle-in-cell (PIC) algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Guangye; Chacón, Luis; CoCoMans Team

    2014-10-01

    For decades, the Vlasov-Darwin model has been recognized as attractive for PIC simulations (to avoid radiative noise issues) in non-radiative electromagnetic regimes. However, the Darwin model results in elliptic field equations that render explicit time integration unconditionally unstable. Improving on linearly implicit schemes, fully implicit PIC algorithms for both electrostatic and electromagnetic regimes, with exact discrete energy and charge conservation properties, have recently been developed in 1D. This study builds on these recent algorithms to develop an implicit, orbit-averaged, time-space-centered finite difference scheme for the particle-field equations in multiple dimensions. The algorithm conserves energy, charge, and canonical momentum exactly, even with grid packing. A simple fluid preconditioner allows efficient use of large timesteps, O(√(m_i/m_e) c/v_Te) larger than the explicit CFL limit. We demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 2D3V.

  12. Multigrid time-accurate integration of Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.

    1993-01-01

    Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.
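    A four-stage scheme of the kind referred to updates the solution as u_k = u_0 + a_k Δt R(u_{k-1}) with coefficients such as (1/4, 1/3, 1/2, 1). A generic sketch driving a scalar problem to steady state (assumed Jameson-style coefficients, not necessarily the paper's exact choice):

```python
def rk4stage(residual, u, dt):
    """Four-stage scheme u_k = u0 + a_k*dt*R(u_{k-1}), a = (1/4, 1/3, 1/2, 1),
    of the type commonly used to drive explicit steady-state (or dual-time)
    solvers, often combined with local time stepping and residual smoothing."""
    u0 = u
    for a in (0.25, 1.0 / 3.0, 0.5, 1.0):
        u = u0 + a * dt * residual(u)
    return u

# drive du/dt = -u toward its steady state u = 0
u = 1.0
for _ in range(20):
    u = rk4stage(lambda v: -v, u, dt=1.0)
print(abs(u) < 1e-6)   # converged to steady state
```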

  13. Multi-model predictive control based on LMI: from the adaptation of the state-space model to the analytic description of the control law

    NASA Astrophysics Data System (ADS)

    Falugi, P.; Olaru, S.; Dumur, D.

    2010-08-01

    This article proposes an explicit robust predictive control solution based on linear matrix inequalities (LMIs). The considered predictive control strategy uses different local descriptions of the system dynamics and uncertainties, and thus allows the handling of less conservative input constraints. The computed control law guarantees constraint satisfaction and asymptotic stability. The technique is effective for a class of nonlinear systems embedded into polytopic models. A detailed discussion of the procedures which adapt the partition of the state space is presented. For the practical implementation, the construction of suitable (explicit) descriptions of the control law is described through concrete algorithms.

  14. Analytic descriptions of cylindrical electromagnetic waves in a nonlinear medium

    PubMed Central

    Xiong, Hao; Si, Liu-Gang; Yang, Xiaoxue; Wu, Ying

    2015-01-01

    A simple but highly efficient approach to the problem of cylindrical electromagnetic wave propagation in a nonlinear medium is proposed, based on an exact solution proposed recently. We derive an explicit analytical formula, exhibiting rich and interesting nonlinear effects, to describe the propagation of any number of cylindrical electromagnetic waves in a nonlinear medium. The results obtained using the present method agree accurately with the results of using traditional coupled-wave equations. As an example of application, we discuss how a third wave affects the sum- and difference-frequency generation of two waves propagating in the nonlinear medium. PMID:26073066

  15. Time-splitting combined with exponential wave integrator Fourier pseudospectral method for Schrödinger-Boussinesq system

    NASA Astrophysics Data System (ADS)

    Liao, Feng; Zhang, Luming; Wang, Shanshan

    2018-02-01

    In this article, we formulate an efficient and accurate numerical method for approximation of the coupled Schrödinger-Boussinesq (SBq) system. The main features of our method are: (i) the application of a time-splitting Fourier spectral method to the Schrödinger-like equation in the SBq system, and (ii) the use of an exponential wave integrator Fourier pseudospectral method for the spatial derivatives in the Boussinesq-like equation. The scheme is fully explicit and efficient due to the fast Fourier transform. Numerical examples are presented to show the efficiency and accuracy of our method.
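    The time-splitting Fourier spectral ingredient can be sketched on a 1D linear Schrödinger equation: potential half-steps are applied pointwise in real space and the kinetic step is applied exactly in Fourier space, so each step is explicit, FFT-based, and norm-conserving. A generic sketch (the harmonic potential and parameters are illustrative; this is not the full SBq solver):

```python
import numpy as np

# Strang split-step Fourier method for i*psi_t = -0.5*psi_xx + V(x)*psi
n, L, dt = 256, 20.0, 0.01
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # spectral wavenumbers
V = 0.5 * x**2                                  # harmonic potential
psi = np.exp(-x**2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / n))  # normalize to unit norm

for _ in range(100):
    psi *= np.exp(-0.5j * dt * V)               # half potential step (real space)
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))  # kinetic step
    psi *= np.exp(-0.5j * dt * V)               # half potential step

norm = np.sum(np.abs(psi)**2) * (L / n)
print(abs(norm - 1.0) < 1e-10)                  # the splitting is unitary
```

Each substep is a pointwise multiplication by a unit-modulus phase, which is why the scheme is both fully explicit and exactly norm-conserving up to round-off.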

  16. An in-depth stability analysis of nonuniform FDTD combined with novel local implicitization techniques

    NASA Astrophysics Data System (ADS)

    Van Londersele, Arne; De Zutter, Daniël; Vande Ginste, Dries

    2017-08-01

    This work focuses on efficient full-wave solutions of multiscale electromagnetic problems in the time domain. Three local implicitization techniques are proposed and carefully analyzed in order to relax the traditional time step limit of the Finite-Difference Time-Domain (FDTD) method on a nonuniform, staggered, tensor-product grid: Newmark, Crank-Nicolson (CN), and Alternating-Direction-Implicit (ADI) implicitization. All of them are applied in preferable directions, like Hybrid Implicit-Explicit (HIE) methods, so as to limit the rank of the sparse linear systems. Both exponential and linear stability are rigorously investigated for arbitrary grid spacings and arbitrary inhomogeneous, possibly lossy, isotropic media. Numerical examples confirm the conservation of energy inside a cavity for a million iterations if the time step is chosen below the proposed, relaxed limit. Apart from the theoretical contributions, new accomplishments such as the development of the leapfrog Alternating-Direction-Hybrid-Implicit-Explicit (ADHIE) FDTD method and a less stringent Courant-like time step limit for the conventional, fully explicit FDTD method on a nonuniform grid have immediate practical applications.

  17. Using a Delphi Technique to Seek Consensus Regarding Definitions, Descriptions and Classification of Terms Related to Implicit and Explicit Forms of Motor Learning

    PubMed Central

    Kleynen, Melanie; Braun, Susy M.; Bleijlevens, Michel H.; Lexis, Monique A.; Rasquin, Sascha M.; Halfens, Jos; Wilson, Mark R.; Beurskens, Anna J.; Masters, Rich S. W.

    2014-01-01

    Background Motor learning is central to domains such as sports and rehabilitation; however, often terminologies are insufficiently uniform to allow effective sharing of experience or translation of knowledge. A study using a Delphi technique was conducted to ascertain level of agreement between experts from different motor learning domains (i.e., therapists, coaches, researchers) with respect to definitions and descriptions of a fundamental conceptual distinction within motor learning, namely implicit and explicit motor learning. Methods A Delphi technique was embedded in multiple rounds of a survey designed to collect and aggregate informed opinions of 49 international respondents with expertise related to motor learning. The survey was administered via an online survey program and accompanied by feedback after each round. Consensus was considered to be reached if ≥70% of the experts agreed on a topic. Results Consensus was reached with respect to definitions of implicit and explicit motor learning, and seven common primary intervention strategies were identified in the context of implicit and explicit motor learning. Consensus was not reached with respect to whether the strategies promote implicit or explicit forms of learning. Discussion The definitions and descriptions agreed upon may aid translation and transfer of knowledge between domains in the field of motor learning. Empirical and clinical research is required to confirm the accuracy of the definitions and to explore the feasibility of the strategies that were identified in research, everyday practice and education. PMID:24968228
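    The consensus rule used in the study is simple to state operationally, as sketched below (the vote counts are hypothetical; only the ≥70% threshold and the panel size of 49 come from the abstract):

```python
def consensus_reached(votes_agree, total_experts, threshold=0.70):
    """Delphi consensus rule from the study: >= 70% of experts agree."""
    return votes_agree / total_experts >= threshold

print(consensus_reached(36, 49))   # True:  36/49 is about 73%
print(consensus_reached(30, 49))   # False: 30/49 is about 61%
```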

  18. Histories approach to general relativity: I. The spacetime character of the canonical description

    NASA Astrophysics Data System (ADS)

    Savvidou, Ntina

    2004-01-01

    The problem of time in canonical quantum gravity is related to the fact that the canonical description is based on the prior choice of a spacelike foliation, hence making a reference to a spacetime metric. However, the metric is expected to be a dynamical, fluctuating quantity in quantum gravity. We show how this problem can be solved in the histories formulation of general relativity. We implement the 3 + 1 decomposition using metric-dependent foliations which remain spacelike with respect to all possible Lorentzian metrics. This allows us to find an explicit relation of covariant and canonical quantities which preserves the spacetime character of the canonical description. In this new construction, we also have the coexistence of the spacetime diffeomorphisms group, Diff(M), and the Dirac algebra of constraints.

  19. Adaptation of object descriptions to a partner under increasing communicative demands: a comparison of children with and without autism.

    PubMed

    Nadig, Aparna; Vivanti, Giacomo; Ozonoff, Sally

    2009-12-01

    This study compared the object descriptions of school-age children with high-functioning autism (HFA) with those of a matched group of typically developing children. Descriptions were elicited in a referential communication task where shared information was manipulated, and in a guessing game where clues had to be provided about the identity of an object that was hidden from the addressee. Across these tasks, increasingly complex levels of audience design were assessed: (1) the ability to give adequate descriptions from one's own perspective, (2) the ability to adjust descriptions to an addressee's perspective when this differs from one's own, and (3) the ability to provide indirect yet identifying descriptions in a situation where explicit labeling is inappropriate. Results showed that there were group differences in all three cases, with the HFA group giving less efficient descriptions with respect to the relevant context than the comparison group. More revealing was the identification of distinct adaptation profiles among the HFA participants: those who had difficulty with all three levels, those who displayed Level 1 audience design but poor Level 2 and Level 3 design, and those who demonstrated all three levels of audience design, like the majority of the comparison group. Higher structural language ability, rather than symptom severity or social skills, differentiated those HFA participants with typical adaptation profiles from those who displayed deficient audience design, consistent with previous reports of language use in autism.

  20. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Hixon, Duane

    1992-01-01

The development of efficient iterative solution methods for the numerical solution of the two- and three-dimensional compressible Navier-Stokes equations is discussed. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. In this work, another approach, based on the classical conjugate gradient method and known as the Generalized Minimum Residual (GMRES) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems. Here, we investigate the suitability of this algorithm for solving the system of nonlinear equations that arises in unsteady Navier-Stokes solvers at each time step.
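The Krylov machinery this record refers to can be sketched compactly. The following is a minimal, textbook GMRES (Arnoldi iteration plus a small least-squares solve), not the authors' solver; all names and tolerances are illustrative. In an unsteady flow code, a routine like this would be called once per time step on the linearized system.

```python
import numpy as np

def gmres(A, b, x0=None, tol=1e-10, max_iter=50):
    """Minimal GMRES: minimize ||b - A x|| over the growing Krylov subspace."""
    n = b.size
    x0 = np.zeros(n) if x0 is None else x0
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta < tol:
        return x0
    Q = np.zeros((n, max_iter + 1))      # orthonormal Krylov basis
    H = np.zeros((max_iter + 1, max_iter))  # Hessenberg matrix
    Q[:, 0] = r0 / beta
    x = x0
    for j in range(max_iter):
        w = A @ Q[:, j]                  # Arnoldi step: expand the subspace
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] > 1e-14:
            Q[:, j + 1] = w / H[j + 1, j]
        # Small least-squares problem: min ||beta e1 - H y||
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        x = x0 + Q[:, :j + 1] @ y
        if np.linalg.norm(b - A @ x) < tol:
            break
    return x
```

Production implementations update the least-squares factorization with Givens rotations and restart the basis to bound memory; the version above recomputes it each iteration for clarity.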

  1. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    NASA Astrophysics Data System (ADS)

    Schaerer, Roman Pascal; Bansal, Pratyuksh; Torrilhon, Manuel

    2017-07-01

We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015) [13], we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution, mapping its inherent fine-grained parallelism onto the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
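The dual Newton solve mentioned in this record can be illustrated in a 1-D toy setting (three moments of a scalar velocity, rather than the paper's 35-moment system). The quadrature grid, starting point, and tolerances below are our own illustrative choices: given target moments mu of the basis m(v) = (1, v, v^2), Newton's method on the strictly convex dual finds multipliers alpha such that f(v) = exp(alpha . m(v)) reproduces mu.

```python
import numpy as np

v = np.linspace(-10.0, 10.0, 2001)   # fixed quadrature grid (illustrative)
dv = v[1] - v[0]
m = np.vstack([np.ones_like(v), v, v**2])   # moment basis m(v)

def solve_dual(mu, iters=50):
    """Newton iteration on the dual of the maximum-entropy problem."""
    alpha = np.array([0.0, 0.0, -1.0])       # integrable starting density
    for _ in range(iters):
        f = np.exp(alpha @ m)                # current density exp(alpha . m)
        g = (m * f).sum(axis=1) * dv - mu    # dual gradient: moments(f) - mu
        if np.linalg.norm(g) < 1e-10:
            break
        H = (m * f) @ m.T * dv               # dual Hessian: <m m^T>_f
        step = np.linalg.solve(H, g)
        t = 1.0
        while alpha[2] - t * step[2] >= 0:   # damp to keep f integrable
            t *= 0.5
        alpha -= t * step
    return alpha
```

For the moments of a standard Gaussian, mu = (1, 0, 1), the solve recovers the quadratic exponent alpha_2 = -1/2, and the resulting density reproduces mu to quadrature accuracy.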

  2. Explicit and Implicit Verbal Response Inhibition in Preschool-Age Children Who Stutter.

    PubMed

    Anderson, Julie D; Wagovich, Stacy A

    2017-04-14

    The purpose of this study was to examine (a) explicit and implicit verbal response inhibition in preschool children who do stutter (CWS) and do not stutter (CWNS) and (b) the relationship between response inhibition and language skills. Participants were 41 CWS and 41 CWNS between the ages of 3;1 and 6;1 (years;months). Explicit verbal response inhibition was measured using a computerized version of the grass-snow task (Carlson & Moses, 2001), and implicit verbal response inhibition was measured using the baa-meow task. Main dependent variables were reaction time and accuracy. The CWS were significantly less accurate than the CWNS on the implicit task, but not the explicit task. The CWS also exhibited slower reaction times than the CWNS on both tasks. Between-group differences in performance could not be attributed to working memory demands. Overall, children's performance on the inhibition tasks corresponded with parents' perceptions of their children's inhibition skills in daily life. CWS are less effective and efficient than CWNS in suppressing a dominant response while executing a conflicting response in the verbal domain.

  3. Explicit and Implicit Verbal Response Inhibition in Preschool-Age Children Who Stutter

    PubMed Central

    Wagovich, Stacy A.

    2017-01-01

    Purpose The purpose of this study was to examine (a) explicit and implicit verbal response inhibition in preschool children who do stutter (CWS) and do not stutter (CWNS) and (b) the relationship between response inhibition and language skills. Method Participants were 41 CWS and 41 CWNS between the ages of 3;1 and 6;1 (years;months). Explicit verbal response inhibition was measured using a computerized version of the grass–snow task (Carlson & Moses, 2001), and implicit verbal response inhibition was measured using the baa–meow task. Main dependent variables were reaction time and accuracy. Results The CWS were significantly less accurate than the CWNS on the implicit task, but not the explicit task. The CWS also exhibited slower reaction times than the CWNS on both tasks. Between-group differences in performance could not be attributed to working memory demands. Overall, children's performance on the inhibition tasks corresponded with parents' perceptions of their children's inhibition skills in daily life. Conclusions CWS are less effective and efficient than CWNS in suppressing a dominant response while executing a conflicting response in the verbal domain. PMID:28384673

  4. Scalable explicit implementation of anisotropic diffusion with Runge-Kutta-Legendre super-time stepping

    NASA Astrophysics Data System (ADS)

    Vaidya, Bhargav; Prasad, Deovrat; Mignone, Andrea; Sharma, Prateek; Rickler, Luca

    2017-12-01

An important ingredient in numerical modelling of high temperature magnetized astrophysical plasmas is the anisotropic transport of heat along magnetic field lines from higher to lower temperatures. Magnetohydrodynamics typically involves solving the hyperbolic set of conservation equations along with the induction equation. Incorporating anisotropic thermal conduction requires also treating the parabolic terms arising from the diffusion operator. An explicit treatment of parabolic terms considerably reduces the simulation time step, which for stability must scale with the square of the grid resolution (Δx). Although an implicit scheme relaxes the constraint on stability, it is difficult to distribute efficiently on a parallel architecture. Treating parabolic terms with accelerated super-time-stepping (STS) methods has been discussed in the literature, but these methods suffer from poor accuracy (first order in time) and also have difficult-to-choose tuneable stability parameters. In this work, we highlight a second-order (in time) Runge-Kutta-Legendre (RKL) scheme (first described by Meyer, Balsara & Aslam 2012) that is robust, fast and accurate in treating parabolic terms alongside the hyperbolic conservation laws. We demonstrate its superiority over the first-order STS schemes with standard tests and astrophysical applications. We also show that explicit conduction is particularly robust in handling saturated thermal conduction. Parallel scaling of explicit conduction using the RKL scheme is demonstrated up to more than 10^4 processors.
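The recursion behind such schemes can be sketched in 1-D. The code below implements the standard first-order RKL recursion for periodic diffusion (the RKL2 scheme highlighted in the record adds further terms for second-order accuracy); function names and parameter values are ours, not the paper's. Chaining s internal stages built on Legendre-polynomial coefficients extends the stable step far beyond the explicit limit.

```python
import numpy as np

def rkl1_step(u, s, dt, lam):
    """One first-order Runge-Kutta-Legendre super-step for du/dt = M u,
    where M u is the three-point Laplacian times lam = D/dx^2 (periodic BCs).
    An s-stage super-step is stable for dt up to (s^2 + s)/2 times the
    explicit forward-Euler limit."""
    M = lambda y: lam * (np.roll(y, 1) - 2.0 * y + np.roll(y, -1))
    mu_t1 = 2.0 / (s * s + s)
    Yjm2 = u
    Yjm1 = u + mu_t1 * dt * M(u)
    for j in range(2, s + 1):
        mu_j = (2.0 * j - 1.0) / j          # Legendre recursion coefficients
        nu_j = (1.0 - j) / j
        Yj = mu_j * Yjm1 + nu_j * Yjm2 + mu_j * mu_t1 * dt * M(Yjm1)
        Yjm2, Yjm1 = Yjm1, Yj
    return Yjm1
```

With lam = 1 the explicit limit is dt = 0.5, yet an 8-stage super-step remains stable up to dt = 18; the stages conserve the discrete integral exactly because mu_j + nu_j = 1 and the Laplacian sums to zero.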

  5. Exact analytic solution for non-linear density fluctuation in a ΛCDM universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Jaiyul; Gong, Jinn-Ouk, E-mail: jyoo@physik.uzh.ch, E-mail: jinn-ouk.gong@apctp.org

    We derive the exact third-order analytic solution of the matter density fluctuation in the proper-time hypersurface in a ΛCDM universe, accounting for the explicit time-dependence and clarifying the relation to the initial condition. Furthermore, we compare our analytic solution to the previous calculation in the comoving gauge, and to the standard Newtonian perturbation theory by providing Fourier kernels for the relativistic effects. Our results provide an essential ingredient for a complete description of galaxy bias in the relativistic context.

  6. Exponential Methods for the Time Integration of Schroedinger Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cano, B.; Gonzalez-Pachon, A.

    2010-09-30

We consider exponential methods of second order in time for the integration of the cubic nonlinear Schroedinger equation. We are interested in taking advantage of the special structure of this equation. Therefore, we look at symmetry, symplecticity and approximation of invariants of the proposed methods, which allows integration up to long times with reasonable accuracy. Computational efficiency is also our aim. We therefore carry out numerical computations to compare the methods considered, and conclude that explicit Lawson schemes projected on the norm of the solution are an efficient tool to integrate this equation.
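The ingredients named here, a Lawson (exponential) integrator with projection onto the solution norm, can be sketched as follows. This is a generic Heun-based Lawson scheme on a periodic Fourier grid, written by us as an illustration; the schemes actually compared in the paper may differ in detail.

```python
import numpy as np

def lawson_nls(u0, L, T, dt):
    """Second-order Lawson integrator for the focusing cubic NLS
    u_t = i u_xx + i |u|^2 u on a periodic domain [0, L), with the
    solution projected back onto its initial L2 norm (a conserved
    quantity of NLS) after every step."""
    n = u0.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    E = np.exp(-1j * k**2 * dt)                      # exact linear propagator
    norm0 = np.linalg.norm(u0)
    u = np.fft.fft(u0)
    N = lambda v: np.fft.fft(1j * np.abs(v)**2 * v)  # nonlinearity, Fourier side
    for _ in range(round(T / dt)):
        v = np.fft.ifft(u)
        k1 = N(v)
        u_star = E * (u + dt * k1)                   # Lawson-Euler predictor
        k2 = N(np.fft.ifft(u_star))
        u = E * u + 0.5 * dt * (E * k1 + k2)         # Heun corrector
        phys = np.fft.ifft(u)
        phys *= norm0 / np.linalg.norm(phys)         # norm projection
        u = np.fft.fft(phys)
    return np.fft.ifft(u)
```

A convenient accuracy check is the exact plane-wave solution u(x, t) = A exp(i(kx + (A^2 - k^2) t)), which the scheme reproduces to high accuracy while the projection keeps the norm exactly constant.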

  7. Need for speed: An optimized gridding approach for spatially explicit disease simulations.

    PubMed

    Sellman, Stefan; Tsao, Kimberly; Tildesley, Michael J; Brommesson, Peter; Webb, Colleen T; Wennergren, Uno; Keeling, Matt J; Lindström, Tom

    2018-04-01

    Numerical models for simulating outbreaks of infectious diseases are powerful tools for informing surveillance and control strategy decisions. However, large-scale spatially explicit models can be limited by the amount of computational resources they require, which poses a problem when multiple scenarios need to be explored to provide policy recommendations. We introduce an easily implemented method that can reduce computation time in a standard Susceptible-Exposed-Infectious-Removed (SEIR) model without introducing any further approximations or truncations. It is based on a hierarchical infection process that operates on entire groups of spatially related nodes (cells in a grid) in order to efficiently filter out large volumes of susceptible nodes that would otherwise have required expensive calculations. After the filtering of the cells, only a subset of the nodes that were originally at risk are then evaluated for actual infection. The increase in efficiency is sensitive to the exact configuration of the grid, and we describe a simple method to find an estimate of the optimal configuration of a given landscape as well as a method to partition the landscape into a grid configuration. To investigate its efficiency, we compare the introduced methods to other algorithms and evaluate computation time, focusing on simulated outbreaks of foot-and-mouth disease (FMD) on the farm population of the USA, the UK and Sweden, as well as on three randomly generated populations with varying degree of clustering. The introduced method provided up to 500 times faster calculations than pairwise computation, and consistently performed as well or better than other available methods. This enables large scale, spatially explicit simulations such as for the entire continental USA without sacrificing realism or predictive power.

  8. Need for speed: An optimized gridding approach for spatially explicit disease simulations

    PubMed Central

    Tildesley, Michael J.; Brommesson, Peter; Webb, Colleen T.; Wennergren, Uno; Lindström, Tom

    2018-01-01

    Numerical models for simulating outbreaks of infectious diseases are powerful tools for informing surveillance and control strategy decisions. However, large-scale spatially explicit models can be limited by the amount of computational resources they require, which poses a problem when multiple scenarios need to be explored to provide policy recommendations. We introduce an easily implemented method that can reduce computation time in a standard Susceptible-Exposed-Infectious-Removed (SEIR) model without introducing any further approximations or truncations. It is based on a hierarchical infection process that operates on entire groups of spatially related nodes (cells in a grid) in order to efficiently filter out large volumes of susceptible nodes that would otherwise have required expensive calculations. After the filtering of the cells, only a subset of the nodes that were originally at risk are then evaluated for actual infection. The increase in efficiency is sensitive to the exact configuration of the grid, and we describe a simple method to find an estimate of the optimal configuration of a given landscape as well as a method to partition the landscape into a grid configuration. To investigate its efficiency, we compare the introduced methods to other algorithms and evaluate computation time, focusing on simulated outbreaks of foot-and-mouth disease (FMD) on the farm population of the USA, the UK and Sweden, as well as on three randomly generated populations with varying degree of clustering. The introduced method provided up to 500 times faster calculations than pairwise computation, and consistently performed as well or better than other available methods. This enables large scale, spatially explicit simulations such as for the entire continental USA without sacrificing realism or predictive power. PMID:29624574
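The cell-level filtering idea in the two records above can be sketched with one simple exact variant (our construction, not necessarily the authors' algorithm): bound every per-node infection probability in a cell by a single p_max, decide with one binomial draw how many nodes are even candidates, and evaluate only those few exactly. Drawing k ~ Binomial(n, p_max) and then accepting each candidate with probability p_i / p_max reproduces independent Bernoulli(p_i) draws per node. The grouping into cells and the kernel are illustrative; the kernel is assumed to return probabilities that decrease with distance.

```python
import numpy as np

rng = np.random.default_rng(7)

def infect_via_cells(inf_xy, cells, kernel):
    """Transmission from one infectious node at inf_xy to susceptibles
    grouped into grid cells.  `cells` maps a cell id to an (n, 2) array
    of susceptible positions; `kernel` maps distance to a probability."""
    infected = {}
    for cid, pts in cells.items():
        # Upper bound on p_i for this cell (a real implementation would
        # use the cell boundary instead of scanning the points).
        d_min = np.min(np.linalg.norm(pts - inf_xy, axis=1))
        p_max = kernel(d_min)
        n = len(pts)
        k = rng.binomial(n, p_max)
        if k == 0:
            continue                      # whole cell filtered out cheaply
        cand = rng.choice(n, size=k, replace=False)
        p = kernel(np.linalg.norm(pts[cand] - inf_xy, axis=1))
        hit = cand[rng.random(k) < p / p_max]
        if hit.size:
            infected[cid] = hit
    return infected
```

Most cells far from the infectious node yield k = 0 with probability (1 - p_max)^n, so the expensive per-node kernel evaluations are skipped entirely there, which is the source of the speed-up the records describe.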

  9. A transient FETI methodology for large-scale parallel implicit computations in structural mechanics

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel; Crivelli, Luis; Roux, Francois-Xavier

    1992-01-01

Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because explicit schemes are also easier to parallelize than implicit ones. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet (and perhaps never will) be offset by the speed of parallel hardware. Therefore, it is essential to develop efficient and robust alternatives to direct methods that are also amenable to massively parallel processing, because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating low-frequency dynamics. Here we present a domain decomposition method for implicit schemes that requires significantly less storage than factorization algorithms, that is several times faster than other popular direct and iterative methods, that can be easily implemented on both shared and local memory parallel processors, and that is efficient both computationally and in its communication. The proposed transient domain decomposition method is an extension of the method of Finite Element Tearing and Interconnecting (FETI) developed by Farhat and Roux for the solution of static problems. Serial and parallel performance results on the CRAY Y-MP/8 and the iPSC-860/128 systems are reported and analyzed for realistic structural dynamics problems. These results establish the superiority of the FETI method over both the serial/parallel conjugate gradient algorithm with diagonal scaling and the serial/parallel direct method, and contrast the computational power of the iPSC-860/128 parallel processor with that of the CRAY Y-MP/8 system.

  10. Frenkel pair recombinations in UO2: Importance of explicit description of polarizability in core-shell molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Devynck, Fabien; Iannuzzi, Marcella; Krack, Matthias

    2012-05-01

    The oxygen and uranium Frenkel pair (FP) recombination mechanisms are studied in UO2 using an empirical interatomic potential accounting for the polarizability of the ions, namely a dynamical core-shell model. The results are compared to a more conventional rigid-ion model. Both model types have been implemented into the cp2k program package and thoroughly validated. The overall picture indicates that the FP recombination mechanism is a complex process involving several phenomena. The FP recombination can happen instantaneously when the distance between the interstitial and the vacancy is small or can be thermally activated at larger separation distances. However, other criteria can prevail over the interstitial-vacancy distance. The surrounding environment of the FP defect, the mechanical stiffness of the matrix, and the orientation of the migration path are shown to be major factors acting on the FP lifetime. The core-shell and rigid-ion models provide a similar qualitative description of the FP recombination mechanism. However, the FP stabilities determined by both models significantly differ in the lower temperature range considered. Indeed, the recombination time of the oxygen and uranium FPs can be up to an order of magnitude lower in the core-shell model at T=600 K and T=1800 K, respectively. These differences highlight the importance of the explicit description of polarizability on some crucial properties such as the resistance to amorphization. This refined description of the interatomic interactions would certainly affect the description of the recrystallization process following a displacement cascade. In turn, the self-healing phase would be better accounted for in the core-shell model and the misestimate inherent to the lack of polarizability in the rigid-ion model corrected.

  11. A Unified Framework for Monetary Theory and Policy Analysis.

    ERIC Educational Resources Information Center

    Lagos, Ricardo; Wright, Randall

    2005-01-01

    Search-theoretic models of monetary exchange are based on explicit descriptions of the frictions that make money essential. However, tractable versions of these models typically make strong assumptions that render them ill suited for monetary policy analysis. We propose a new framework, based on explicit micro foundations, within which macro…

  12. Damping efficiency of the Tchamwa-Wielgosz explicit dissipative scheme under instantaneous loading conditions

    NASA Astrophysics Data System (ADS)

    Mahéo, Laurent; Grolleau, Vincent; Rio, Gérard

    2009-11-01

    To deal with dynamic and wave propagation problems, dissipative methods are often used to reduce the effects of the spurious oscillations induced by the spatial and time discretization procedures. Among the many dissipative methods available, the Tchamwa-Wielgosz (TW) explicit scheme is particularly useful because it damps out the spurious oscillations occurring in the highest frequency domain. The theoretical study performed here shows that the TW scheme is decentered to the right, and that the damping can be attributed to a nodal displacement perturbation. The FEM study carried out using instantaneous 1-D and 3-D compression loads shows that it is useful to display the damping versus the number of time steps in order to obtain a constant damping efficiency whatever the size of element used for the regular meshing. A study on the responses obtained with irregular meshes shows that the TW scheme is only slightly sensitive to the spatial discretization procedure used. To cite this article: L. Mahéo et al., C. R. Mecanique 337 (2009).

  13. A review of hybrid implicit explicit finite difference time domain method

    NASA Astrophysics Data System (ADS)

    Chen, Juan

    2018-06-01

The finite-difference time-domain (FDTD) method has been extensively used to simulate a variety of electromagnetic interaction problems. However, because of its Courant-Friedrichs-Lewy (CFL) condition, the maximum time step size of this method is limited by the minimum cell size used in the computational domain. The FDTD method is therefore inefficient for simulating electromagnetic problems that have very fine structures. To deal with this problem, the Hybrid Implicit Explicit (HIE)-FDTD method was developed. The HIE-FDTD method uses a hybrid implicit-explicit difference in the direction with fine structures to avoid the confinement of the fine spatial mesh on the time step size. This method therefore has much higher computational efficiency than the FDTD method, and is extremely useful for problems which have fine structures in one direction. In this paper, the basic formulations, time stability condition and dispersion error of the HIE-FDTD method are presented. The implementations of several boundary conditions, including the connect boundary, absorbing boundary and periodic boundary, are described, and then some applications and important developments of this method are provided. The goal of this paper is to provide a historical overview and future prospects of the HIE-FDTD method.

  14. Excited-state potential-energy surfaces of metal-adsorbed organic molecules from linear expansion Δ-self-consistent field density-functional theory (ΔSCF-DFT).

    PubMed

    Maurer, Reinhard J; Reuter, Karsten

    2013-07-07

Accurate and efficient simulation of excited state properties is an important and much-desired cornerstone in the study of adsorbate dynamics on metal surfaces. To this end, the recently proposed linear expansion Δ-self-consistent field method by Gavnholt et al. [Phys. Rev. B 78, 075441 (2008)] presents an efficient alternative to time-consuming quasi-particle calculations. In this method, the standard Kohn-Sham equations of density-functional theory are solved with the constraint of a non-equilibrium occupation in a region of Hilbert space resembling gas-phase orbitals of the adsorbate. In this work, we discuss the applicability of this method to the excited-state dynamics of metal-surface mounted organic adsorbates, specifically in the context of molecular switching. We present the advancements necessary for a consistent-quality description of excited-state potential-energy surfaces (PESs), and illustrate the concept with an application to azobenzene adsorbed on Ag(111) and Au(111) surfaces. We find that the explicit inclusion of substrate electronic states modifies the topologies of intra-molecular excited-state PESs of the molecule due to image charge and hybridization effects. While the molecule in the gas phase shows a clear energetic separation of the resonances that induce isomerization and backreaction, the surface-adsorbed molecule does not. The concomitant, possibly simultaneous induction of both processes would lead to a significantly reduced switching efficiency of such a mechanism.

  15. A logical foundation for representation of clinical data.

    PubMed Central

    Campbell, K E; Das, A K; Musen, M A

    1994-01-01

    OBJECTIVE: A general framework for representation of clinical data that provides a declarative semantics of terms and that allows developers to define explicitly the relationships among both terms and combinations of terms. DESIGN: Use of conceptual graphs as a standard representation of logic and of an existing standardized vocabulary, the Systematized Nomenclature of Medicine (SNOMED International), for lexical elements. Concepts such as time, anatomy, and uncertainty must be modeled explicitly in a way that allows relation of these foundational concepts to surface-level clinical descriptions in a uniform manner. RESULTS: The proposed framework was used to model a simple radiology report, which included temporal references. CONCLUSION: Formal logic provides a framework for formalizing the representation of medical concepts. Actual implementations will be required to evaluate the practicality of this approach. PMID:7719805

  16. Energy efficient model based algorithm for control of building HVAC systems.

    PubMed

    Kirubakaran, V; Sahu, Chinmay; Radhakrishnan, T K; Sivakumaran, N

    2015-11-01

Energy efficient designs are receiving increasing attention in various fields of engineering. Heating ventilation and air conditioning (HVAC) control system designs involve improved energy usage with an acceptable relaxation in thermal comfort. In this paper, real-time data from a building HVAC system provided by BuildingLAB is considered. A resistor-capacitor (RC) framework for representing the thermal dynamics of the building is estimated using a particle swarm optimization (PSO) algorithm. With thermal comfort (deviation of room temperature from the required temperature) and an energy measure (Ecm) as objective costs, an explicit MPC design for this building model is executed based on its state space representation of the supply water temperature (input)/room temperature (output) dynamics. The controllers are subjected to servo tracking, and an external disturbance (ambient temperature) is provided from the real-time data during closed loop control. The control strategies are ported on a PIC32mx series microcontroller platform. The building model is implemented in MATLAB and hardware in loop (HIL) testing of the strategies is executed over a USB port. Results indicate that compared to traditional proportional integral (PI) controllers, the explicit MPCs improve both energy efficiency and thermal comfort significantly. Copyright © 2015 Elsevier Inc. All rights reserved.
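The RC framework this record relies on can be sketched with a minimal, hypothetical 1R1C zone model: C dT/dt = (Ta - T)/R + Kw (Tw - T), with room temperature T, ambient Ta, and supply-water temperature Tw as the control input. The parameter values below are illustrative placeholders, not the PSO-identified BuildingLAB parameters.

```python
import numpy as np

# Illustrative 1R1C parameters (units: K/kW, kWh/K, kW/K, h)
R, C, Kw, dt = 2.0, 10.0, 0.5, 0.1

def step(T, Tw, Ta):
    """One forward-Euler step of the 1R1C zone model."""
    dTdt = ((Ta - T) / R + Kw * (Tw - T)) / C
    return T + dt * dTdt

def simulate(T0, Tw_seq, Ta_seq):
    """Simulate the room temperature for given supply-water and ambient
    temperature sequences."""
    T = T0
    out = []
    for Tw, Ta in zip(Tw_seq, Ta_seq):
        T = step(T, Tw, Ta)
        out.append(T)
    return np.array(out)
```

An (explicit) MPC would choose the Tw sequence minimizing comfort deviation plus energy over a horizon of such model steps. As a sanity check, with Tw = 40 and Ta = 10 held constant the model settles at the equilibrium solving (Ta - T)/R + Kw (Tw - T) = 0, i.e. T = 25 for these parameters.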

  17. A multi-dimensional, energy- and charge-conserving, nonlinearly implicit, electromagnetic Vlasov–Darwin particle-in-cell algorithm

    DOE PAGES

    Chen, G.; Chacón, L.

    2015-08-11

For decades, the Vlasov–Darwin model has been recognized to be attractive for particle-in-cell (PIC) kinetic plasma simulations in non-radiative electromagnetic regimes, to avoid radiative noise issues and gain computational efficiency. However, the Darwin model results in an elliptic set of field equations that renders conventional explicit time integration unconditionally unstable. We explore a fully implicit PIC algorithm for the Vlasov–Darwin model in multiple dimensions, which overcomes many difficulties of traditional semi-implicit Darwin PIC algorithms. The finite-difference scheme for Darwin field equations and particle equations of motion is space–time-centered, employing particle sub-cycling and orbit-averaging. This algorithm conserves total energy, local charge, canonical momentum in the ignorable direction, and preserves the Coulomb gauge exactly. An asymptotically well-posed fluid preconditioner allows efficient use of large cell sizes, which are determined by accuracy considerations, not stability, and can be orders of magnitude larger than required in a standard explicit electromagnetic PIC simulation. Finally, we demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 2D–3V.

  18. Reliable oligonucleotide conformational ensemble generation in explicit solvent for force field assessment using reservoir replica exchange molecular dynamics simulations

    PubMed Central

    Henriksen, Niel M.; Roe, Daniel R.; Cheatham, Thomas E.

    2013-01-01

    Molecular dynamics force field development and assessment requires a reliable means for obtaining a well-converged conformational ensemble of a molecule in both a time-efficient and cost-effective manner. This remains a challenge for RNA because its rugged energy landscape results in slow conformational sampling and accurate results typically require explicit solvent which increases computational cost. To address this, we performed both traditional and modified replica exchange molecular dynamics simulations on a test system (alanine dipeptide) and an RNA tetramer known to populate A-form-like conformations in solution (single-stranded rGACC). A key focus is on providing the means to demonstrate that convergence is obtained, for example by investigating replica RMSD profiles and/or detailed ensemble analysis through clustering. We found that traditional replica exchange simulations still require prohibitive time and resource expenditures, even when using GPU accelerated hardware, and our results are not well converged even at 2 microseconds of simulation time per replica. In contrast, a modified version of replica exchange, reservoir replica exchange in explicit solvent, showed much better convergence and proved to be both a cost-effective and reliable alternative to the traditional approach. We expect this method will be attractive for future research that requires quantitative conformational analysis from explicitly solvated simulations. PMID:23477537

  19. Reliable oligonucleotide conformational ensemble generation in explicit solvent for force field assessment using reservoir replica exchange molecular dynamics simulations.

    PubMed

    Henriksen, Niel M; Roe, Daniel R; Cheatham, Thomas E

    2013-04-18

    Molecular dynamics force field development and assessment requires a reliable means for obtaining a well-converged conformational ensemble of a molecule in both a time-efficient and cost-effective manner. This remains a challenge for RNA because its rugged energy landscape results in slow conformational sampling and accurate results typically require explicit solvent which increases computational cost. To address this, we performed both traditional and modified replica exchange molecular dynamics simulations on a test system (alanine dipeptide) and an RNA tetramer known to populate A-form-like conformations in solution (single-stranded rGACC). A key focus is on providing the means to demonstrate that convergence is obtained, for example, by investigating replica RMSD profiles and/or detailed ensemble analysis through clustering. We found that traditional replica exchange simulations still require prohibitive time and resource expenditures, even when using GPU accelerated hardware, and our results are not well converged even at 2 μs of simulation time per replica. In contrast, a modified version of replica exchange, reservoir replica exchange in explicit solvent, showed much better convergence and proved to be both a cost-effective and reliable alternative to the traditional approach. We expect this method will be attractive for future research that requires quantitative conformational analysis from explicitly solvated simulations.
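The core of the replica exchange machinery used in the two records above is the Metropolis swap criterion between replicas at neighboring temperatures; the reservoir variant changes where candidate configurations come from, not this acceptance rule. The sketch below is a generic illustration (a real REMD driver lives inside the MD engine); the acceptance probability for exchanging configurations between replicas i and j is min(1, exp[(beta_i - beta_j)(E_i - E_j)]).

```python
import math
import random

random.seed(3)

def swap_accept(beta_i, beta_j, E_i, E_j):
    """Metropolis criterion for exchanging the configurations of two
    replicas at inverse temperatures beta_i and beta_j with current
    potential energies E_i and E_j."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0 or random.random() < math.exp(delta)
```

Handing the colder replica a lower-energy configuration (delta > 0) is always accepted, which is how high-temperature replicas feed barrier-crossing conformations down to the target temperature.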

  20. Implicit and Explicit Learning Mechanisms Meet in Monkey Prefrontal Cortex.

    PubMed

    Chafee, Matthew V; Crowe, David A

    2017-10-11

    In this issue, Loonis et al. (2017) provide the first description of unique synchrony patterns differentiating implicit and explicit forms of learning in monkey prefrontal networks. Their results have broad implications for how prefrontal networks integrate the two learning mechanisms to control behavior. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. An explicit microphysics thunderstorm model.

    Treesearch

    R. Solomon; C.M. Medaglia; C. Adamo; S. Dietrick; A. Mugnai; U. Biader Ceipidor

    2005-01-01

    The authors present a brief description of a 1.5-dimensional thunderstorm model with a lightning parameterization that utilizes an explicit microphysical scheme to model lightning-producing clouds. The main intent of this work is to describe the basic microphysical and electrical properties of the model, with a small illustrative section to show how the model may be...

  2. Improving Upon an Empirical Procedure for Characterizing Magnetospheric States

    NASA Astrophysics Data System (ADS)

    Fung, S. F.; Neufeld, J.; Shao, X.

    2012-12-01

    Work is being performed to improve upon an empirical procedure for describing and predicting the states of the magnetosphere [Fung and Shao, 2008]. We showed in our previous paper that the state of the magnetosphere can be described by a quantity called the magnetospheric state vector (MS vector) consisting of a concatenation of a set of driver-state and a set of response-state parameters. The response state parameters are time-shifted individually to account for their nominal response times so that time does not appear as an explicit parameter in the MS prescription. The MS vector is thus conceptually analogous to the set of vital signs for describing the state of health of a human body. In that previous study, we further demonstrated that since response states are results of driver states, then there should be a correspondence between driver and response states. Such correspondence can be used to predict the subsequent response state from any known driver state with a few hours' lead time. In this paper, we investigate a few possible ways to improve the magnetospheric state descriptions and prediction efficiency by including additional driver state parameters, such as solar activity, IMF-Bx and -By, and optimizing parameter bin sizes. Fung, S. F. and X. Shao, Specification of multiple geomagnetic responses to variable solar wind and IMF input, Ann. Geophys., 26, 639-652, 2008.

  3. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high-order explicit method with time step subcycling and a newly developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly outperform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
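
Subcycling of this kind can be illustrated on a toy fast-slow system (a generic sketch, not the authors' dislocation integrator; the right-hand sides, step sizes, and stiffness k are illustrative, and simple explicit Euler stands in for their high-order method): the stiff component is advanced with several small explicit substeps inside each global step, so the global step size is not limited by the fastest time scale.

```python
import math

def rhs_slow(t, y_fast):
    # slow variable, weakly driven; illustrative only
    return math.cos(t)

def rhs_fast(y_fast, k=200.0):
    # stiff, fast-relaxing component (analogous to a short, fast segment)
    return -k * y_fast

def step_with_subcycling(t, y_slow, y_fast, dt, n_sub):
    # one explicit Euler step for the slow variable
    y_slow = y_slow + dt * rhs_slow(t, y_fast)
    # subcycle the stiff component with n_sub smaller explicit steps
    h = dt / n_sub
    for _ in range(n_sub):
        y_fast = y_fast + h * rhs_fast(y_fast)
    return y_slow, y_fast

t, y_slow, y_fast = 0.0, 0.0, 1.0
dt, n_sub = 0.02, 10   # dt*k = 4 would be unstable without subcycling (h*k = 0.4 is stable)
for _ in range(100):
    y_slow, y_fast = step_with_subcycling(t, y_slow, y_fast, dt, n_sub)
    t += dt
print(y_slow, y_fast)  # slow variable tracks sin(t); fast mode decays instead of blowing up
```

Without the inner loop (n_sub = 1) the fast mode diverges; with it, the global step remains ten times larger than the fast stability limit.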

  4. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE PAGES

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high-order explicit method with time step subcycling and a newly developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly outperform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  5. Implicit time accurate simulation of unsteady flow

    NASA Astrophysics Data System (ADS)

    van Buuren, René; Kuerten, Hans; Geurts, Bernard J.

    2001-03-01

    Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution to compare with the implicit second-order Crank-Nicolson scheme was determined. The time step in the explicit scheme is restricted by both temporal accuracy and stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems that are closely related to a highly complex structure of the basins of attraction of the iterative method may occur.
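
The core of the scheme can be sketched on the scalar model problem y' = -λy (a stand-in only: the paper's solver applies quasi-Newton with block Gauss-Seidel to the Navier-Stokes equations, whereas here the implicit Crank-Nicolson stage is solved by a simple fixed-point iteration playing the role of the pseudo-time subiterations):

```python
import math

lam = 5.0          # model problem y' = -lam*y, exact solution exp(-lam*t)
dt = 0.05          # time step chosen for accuracy, not explicit stability
y, t = 1.0, 0.0
for _ in range(40):
    # Crank-Nicolson stage: y_new = y + (dt/2)*(f(y) + f(y_new));
    # the implicit equation is solved by iterating to a fixed point
    y_new = y                      # initial guess
    for _ in range(50):            # subiterations (contraction factor dt*lam/2 = 0.125)
        y_new = y + 0.5 * dt * (-lam * y - lam * y_new)
    y = y_new
    t += dt
print(y, math.exp(-lam * t))       # second-order accurate: close to the exact decay
```

The subiteration converges because |dt·λ/2| < 1 here; for stiffer problems this is exactly where stronger (quasi-Newton) inner solvers become necessary, as the abstract describes.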

  6. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaerer, Roman Pascal, E-mail: schaerer@mathcces.rwth-aachen.de; Bansal, Pratyuksh; Torrilhon, Manuel

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following an approach similar to Garrett et al. (2015), we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by mapping its inherent fine-grained parallelism onto the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
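
The dual Newton solve at the heart of such closures can be sketched in one dimension (an illustrative toy, not the 35-moment implementation: a maximum-entropy density exp(α₀ + α₁x) on a grid is fitted to two target moments by Newton iteration on the Lagrange multipliers; the quadrature loop over grid points is the fine-grained part the authors map to GPUs):

```python
import math

# grid on [-1, 1]; target moments: normalisation <1> = 1 and mean <x> = 0.3
xs = [-1.0 + 2.0 * i / 200 for i in range(201)]
w = 2.0 / 200                      # uniform quadrature weight (a discrete measure; fine for the sketch)
target = [1.0, 0.3]

def moments(alpha):
    # density rho(x) = exp(alpha0 + alpha1*x); return moments and Hessian entries
    m = [0.0, 0.0]
    H = [[0.0, 0.0], [0.0, 0.0]]
    for x in xs:                   # the quadrature loop: embarrassingly parallel
        r = w * math.exp(alpha[0] + alpha[1] * x)
        phi = [1.0, x]
        for i in range(2):
            m[i] += phi[i] * r
            for j in range(2):
                H[i][j] += phi[i] * phi[j] * r
    return m, H

alpha = [0.0, 0.0]
for _ in range(30):                # Newton on the convex dual: gradient = m(alpha) - target
    m, H = moments(alpha)
    g = [m[0] - target[0], m[1] - target[1]]
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    alpha[0] -= (H[1][1] * g[0] - H[0][1] * g[1]) / det
    alpha[1] -= (H[0][0] * g[1] - H[1][0] * g[0]) / det

m, _ = moments(alpha)
print(m)                           # the fitted density reproduces the target moments
```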

  7. The predictive effect of empathy and social norms on adolescents' implicit and explicit stigma responses.

    PubMed

    Silke, Charlotte; Swords, Lorraine; Heary, Caroline

    2017-11-01

    Research indicates that adolescents who experience mental health difficulties are frequently stigmatised by their peers. Stigmatisation is associated with a host of negative social and psychological effects, which impacts a young person's well-being. As a result, the development of effective anti-stigma strategies is considered a major research priority. However, in order to design effective stigma reduction strategies, researchers must be informed by an understanding of the factors that influence the expression of stigma. Although evidence suggests that empathy and social norms have a considerable effect on adolescents' social attitudes and behaviours, research has yet to examine whether these factors significantly influence adolescents' responses toward their peers with mental health difficulties. Thus, this study aims to examine whether empathy (cognitive and affective) and peer norms (descriptive and injunctive) influence adolescents' implicit and explicit stigmatising responses toward peers with mental health problems. A total of 570 (221 male and 348 female; 1 non-specified) adolescents, aged between 13 and 18 years (M = 15.51, SD = 1.13), participated in this research. Adolescents read vignettes describing male/female depressed and 'typically developing' peers. Adolescents answered questions assessing their stigmatising responses toward each target, as well as their empathic responding and normative perceptions. A sub-sample of participants (n=173) also completed an IAT assessing their implicit stigmatising responses. Results showed that descriptive norms exerted a substantial effect on adolescents' explicit responses. Cognitive empathy, affective empathy and injunctive norms exerted more limited effects on explicit responses. No significant effects were observed for implicit stigma. 
Overall, empathy was found to have limited effects on adolescents' explicit and implicit stigmatising responses, which may suggest that other contextual variables moderate the effects of dispositional empathy on responding. In conclusion, these findings suggest that tackling the perception of negative descriptive norms may be an effective strategy for reducing explicit stigmatising responses among adolescents.

  8. Equation-oriented specification of neural models for simulations

    PubMed Central

    Stimberg, Marcel; Goodman, Dan F. M.; Benichoux, Victor; Brette, Romain

    2013-01-01

    Simulating biological neuronal networks is a core method of research in computational neuroscience. A full specification of such a network model includes a description of the dynamics and state changes of neurons and synapses, as well as the synaptic connectivity patterns and the initial values of all parameters. A standard approach in neuronal modeling software is to build network models based on a library of pre-defined components and mechanisms; if a model component does not yet exist, it has to be defined in a special-purpose or general low-level language and potentially be compiled and linked with the simulator. Here we propose an alternative approach that allows flexible definition of models by writing textual descriptions based on mathematical notation. We demonstrate that this approach allows the definition of a wide range of models with minimal syntax. Furthermore, such explicit model descriptions allow the generation of executable code for various target languages and devices, since the description is not tied to an implementation. Finally, this approach also has advantages for readability and reproducibility, because the model description is fully explicit, and because it can be automatically parsed and transformed into formatted descriptions. The presented approach has been implemented in the Brian2 simulator. PMID:24550820

  9. A 2D-3D strategy for resolving tsunami-generated debris flow in urban environments

    NASA Astrophysics Data System (ADS)

    Birjukovs Canelas, Ricardo; Conde, Daniel; Garcia-Feal, Orlando; João Telhado, Maria; Ferreira, Rui M. L.

    2017-04-01

    The incorporation of solids, either sediment from the natural environment or remains from buildings or infrastructures, is a relevant feature of tsunami run-up in urban environments, greatly increasing the destructive potential of tsunami propagation. Two-dimensional (2D) models have been used to assess the propagation of the bore, even in dense urban fronts. Computational advances are introduced in this work, namely a fully Lagrangian, 3D description of the fluid-solid flow, coupled with a high-performance meshless implementation capable of dealing with large domains and fine discretizations. A Smoothed Particle Hydrodynamics (SPH) Navier-Stokes discretization and a Distributed Contact Discrete Element Method (DCDEM) description of solid-solid interactions provide a state-of-the-art fluid-solid flow description. Together with support for arbitrary geometries, centimetre-scale resolution simulations of a city section in Lisbon downtown are presented. 2D results are used as boundary conditions for the 3D model, characterizing the incoming wave as it approaches the coast. It is shown that the incoming bore is able to mobilize and incorporate standing vehicles and other urban hardware. Such a fully featured simulation provides an explicit description of the interactions among fluid, floating debris (vehicles and urban furniture), the buildings and the pavement. The proposed model presents both an innovative research tool for the study of these flows and a powerful and robust approach to study, design and test mitigation solutions at the local scale. At the same time, due to the high time and space resolution of these methodologies, new questions are raised: scenario-building and initial configurations play a crucial role but they do not univocally determine the final configuration of the simulation, as the solution of the Navier-Stokes equations for high Reynolds numbers possesses a high number of degrees of freedom.
This calls for conducting the simulations in a statistical framework, involving both initial conditions generation and interpretation of results, which is only attainable under very high standards of computational efficiency. This research was partially supported by Portuguese and European funds, within programs COMPETE2020 and PORL-FEDER, through project PTDC/ECM-HID/6387/2014 granted by the National Foundation for Science and Technology (FCT).

  10. Efficiency and Accuracy of Time-Accurate Turbulent Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Sanetrik, Mark D.; Biedron, Robert T.; Melson, N. Duane; Parlette, Edward B.

    1995-01-01

    The accuracy and efficiency of two types of subiterations in both explicit and implicit Navier-Stokes codes are explored for unsteady laminar circular-cylinder flow and unsteady turbulent flow over an 18-percent-thick circular-arc (biconvex) airfoil. Grid and time-step studies are used to assess the numerical accuracy of the methods. Nonsubiterative time-stepping schemes and schemes with physical time subiterations are subject to time-step limitations in practice that are removed by pseudo time sub-iterations. Computations for the circular-arc airfoil indicate that a one-equation turbulence model predicts the unsteady separated flow better than an algebraic turbulence model; also, the hysteresis with Mach number of the self-excited unsteadiness due to shock and boundary-layer separation is well predicted.

  11. The nature of declarative and nondeclarative knowledge for implicit and explicit learning.

    PubMed

    Kirkhart, M W

    2001-10-01

    Using traditional implicit and explicit artificial-grammar learning tasks, the author investigated the similarities and differences between the acquisition of declarative knowledge under implicit and explicit learning conditions and the functions of the declarative knowledge during testing. Results suggested that declarative knowledge was not predictive of or required for implicit learning but was related to consistency in implicit learning performance. In contrast, declarative knowledge was predictive of and required for explicit learning and was related to consistency in performance. For explicit learning, the declarative knowledge functioned as a guide for other behavior. In contrast, for implicit learning, the declarative knowledge did not serve as a guide for behavior but was instead a post hoc description of the most commonly seen stimuli.

  12. Quantum Monte Carlo studies of solvated systems

    NASA Astrophysics Data System (ADS)

    Schwarz, Kathleen; Letchworth Weaver, Kendra; Arias, T. A.; Hennig, Richard G.

    2011-03-01

    Solvation qualitatively alters the energetics of diverse processes from protein folding to reactions on catalytic surfaces. An explicit description of the solvent in quantum-mechanical calculations requires both a large number of electrons and exploration of a large number of configurations in the phase space of the solvent. These problems can be circumvented by including the effects of solvent through a rigorous classical density-functional description of the liquid environment, thereby yielding free energies and thermodynamic averages directly, while eliminating the need for explicit consideration of the solvent electrons. We have implemented and tested this approach within the CASINO Quantum Monte Carlo code. Our method is suitable for calculations in any basis within CASINO, including b-spline and plane wave trial wavefunctions, and is equally applicable to molecules, surfaces, and crystals. For our preliminary test calculations, we use a simplified description of the solvent in terms of an isodensity continuum dielectric solvation approach, though the method is fully compatible with more reliable descriptions of the solvent we shall employ in the future.

  13. Explicit symplectic algorithms based on generating functions for charged particle dynamics.

    PubMed

    Zhang, Ruili; Qin, Hong; Tang, Yifa; Liu, Jian; He, Yang; Xiao, Jianyuan

    2016-07-01

    Dynamics of a charged particle in the canonical coordinates is a Hamiltonian system, and the well-known symplectic algorithm has been regarded as the de facto method for numerical integration of Hamiltonian systems due to its long-term accuracy and fidelity. For long-term simulations with high efficiency, explicit symplectic algorithms are desirable. However, it is generally believed that explicit symplectic algorithms are only available for sum-separable Hamiltonians, and this restriction limits the application of explicit symplectic algorithms to charged particle dynamics. To overcome this difficulty, we combine the familiar sum-split method and a generating function method to construct second- and third-order explicit symplectic algorithms for charged particle dynamics. The generating function method is designed to generate explicit symplectic algorithms for product-separable Hamiltonians with the form H(x,p)=p_{i}f(x) or H(x,p)=x_{i}g(p). Applied to the simulations of charged particle dynamics, the explicit symplectic algorithms based on generating functions demonstrate superiorities in conservation and efficiency.
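
For context, the long-term behaviour that motivates explicit symplectic integration is easy to demonstrate in the standard sum-separable case that the paper generalizes beyond (a harmonic-oscillator sketch with illustrative step size, not the authors' product-separable construction): the Stoermer-Verlet splitting keeps the energy error bounded over very long runs, where non-symplectic explicit schemes drift.

```python
import math

def leapfrog(x, p, dt, steps):
    # Stoermer-Verlet for the sum-separable H = p^2/2 + x^2/2 (force = -x)
    for _ in range(steps):
        p -= 0.5 * dt * x      # half kick
        x += dt * p            # drift
        p -= 0.5 * dt * x      # half kick
    return x, p

x, p = 1.0, 0.0
E0 = 0.5 * (p * p + x * x)
x, p = leapfrog(x, p, dt=0.05, steps=20000)   # 1000 time units, ~160 periods
E1 = 0.5 * (p * p + x * x)
print(abs(E1 - E0))            # energy error stays bounded, with no secular drift
```

The product-separable cases H(x,p) = p_i f(x) treated in the paper do not split this way, which is exactly the gap the generating-function construction fills.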

  14. Explicit symplectic algorithms based on generating functions for charged particle dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, Ruili; Qin, Hong; Tang, Yifa; Liu, Jian; He, Yang; Xiao, Jianyuan

    2016-07-01

    Dynamics of a charged particle in the canonical coordinates is a Hamiltonian system, and the well-known symplectic algorithm has been regarded as the de facto method for numerical integration of Hamiltonian systems due to its long-term accuracy and fidelity. For long-term simulations with high efficiency, explicit symplectic algorithms are desirable. However, it is generally believed that explicit symplectic algorithms are only available for sum-separable Hamiltonians, and this restriction limits the application of explicit symplectic algorithms to charged particle dynamics. To overcome this difficulty, we combine the familiar sum-split method and a generating function method to construct second- and third-order explicit symplectic algorithms for charged particle dynamics. The generating function method is designed to generate explicit symplectic algorithms for product-separable Hamiltonians with the form H(x,p)=p_{i}f(x) or H(x,p)=x_{i}g(p). Applied to the simulations of charged particle dynamics, the explicit symplectic algorithms based on generating functions demonstrate superiorities in conservation and efficiency.

  15. Contact-aware simulations of particulate Stokesian suspensions

    NASA Astrophysics Data System (ADS)

    Lu, Libin; Rahimian, Abtin; Zorin, Denis

    2017-10-01

    We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves as well as a large number of discretization points are required to avoid non-physical contact and intersections between particles, leading to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.

  16. Nuclear reactor descriptions for space power systems analysis

    NASA Technical Reports Server (NTRS)

    Mccauley, E. W.; Brown, N. J.

    1972-01-01

    For the small, high-performance reactors required for space electric applications, adequate neutronic analysis is of crucial importance, but in terms of computational time consumed, nuclear calculations probably yield the least amount of detail for mission analysis study. It has been found possible, after generation of only a few designs of a reactor family in elaborate thermomechanical and nuclear detail, to use simple curve-fitting techniques to assure desired neutronic performance while still performing the thermomechanical analysis in explicit detail. The resulting speed-up in computation time permits a broad, detailed examination of constraints by the mission analyst.

  17. Development of the Semi-implicit Time Integration in KIM-SH

    NASA Astrophysics Data System (ADS)

    NAM, H.

    2015-12-01

    The Korea Institute of Atmospheric Prediction Systems (KIAPS) was founded in 2011 by the Korea Meteorological Administration (KMA) to develop Korea's own global Numerical Weather Prediction (NWP) system as a nine-year (2011-2019) project. KIM-SH is the KIAPS integrated model, a spectral-element model based on HOMME, which currently employs explicit time-stepping schemes. Explicit schemes, however, have a tendency to be unstable and require very small time steps, while semi-implicit schemes are very stable and can take much larger time steps. We therefore introduce three- and two-time-level semi-implicit schemes into KIM-SH as the time integration. We define the linear and reference values and, following the semi-implicit formulation, solve the resulting linear system with GMRES. Numerical results from experiments will be presented together with the current development status of the time integration in KIM-SH; several numerical examples confirm the efficiency and reliability of the proposed schemes.
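
The two-time-level semi-implicit idea can be reduced to a scalar caricature (illustrative only; in the actual model the implicit part is a large linear system, which is where a solver such as GMRES enters): the stiff linear term is treated implicitly and the rest explicitly, allowing time steps far beyond the explicit stability limit.

```python
import math

L = -1000.0                  # stiff linear part, treated implicitly
def N(y):                    # mild nonlinear remainder, treated explicitly
    return math.sin(y)

dt = 0.01                    # |L|*dt = 10: fully explicit Euler would be wildly unstable
y = 1.0
for _ in range(500):
    # two-time-level semi-implicit step: (1 - dt*L) * y_new = y + dt*N(y);
    # in a PDE model the left-hand side is a large linear solve (e.g. GMRES)
    y = (y + dt * N(y)) / (1.0 - dt * L)
print(y)                     # decays smoothly to the equilibrium y = 0
```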

  18. Universal model for collective access patterns in the Internet traffic dynamics: A superstatistical approach

    NASA Astrophysics Data System (ADS)

    Tamazian, A.; Nguyen, V. D.; Markelov, O. A.; Bogachev, M. I.

    2016-07-01

    We suggest a universal phenomenological description for the collective access patterns in the Internet traffic dynamics both at local and wide area network levels that takes into account erratic fluctuations imposed by cooperative user behaviour. Our description is based on the superstatistical approach and leads to the q-exponential inter-session time and session size distributions that are also in perfect agreement with empirical observations. The validity of the proposed description is confirmed explicitly by the analysis of complete 10-day traffic traces from the WIDE backbone link and from the local campus area network downlink from the Internet Service Provider. Remarkably, the same functional forms have been observed in the historic access patterns from single WWW servers. The suggested approach effectively accounts for the complex interplay of both “calm” and “bursty” user access patterns within a single-model setting. It also provides average sojourn time estimates with reasonable accuracy, as indicated by the queuing system performance simulation, this way largely overcoming the failure of Poisson modelling of the Internet traffic dynamics.
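
The superstatistical mechanism behind the q-exponential can be checked in a few lines (parameters illustrative): inter-session times that are conditionally exponential, with a rate fluctuating according to a Gamma distribution, have a q-exponential (Lomax) marginal with survival function (1 + s·t)^(-a) and mean 1/(s·(a - 1)).

```python
import random

random.seed(42)
a, s = 3.0, 1.0            # Gamma shape and scale of the fluctuating rate
n = 200000

total = 0.0
for _ in range(n):
    rate = random.gammavariate(a, s)       # environment-dependent rate ("superstatistics")
    total += random.expovariate(rate)      # conditionally exponential inter-session time

mean = total / n
# analytic marginal mean for the Lomax mixture: 1/(s*(a-1)) = 0.5
print(mean)
```

The heavy (power-law) tail of the mixture, absent from any single exponential, is what Poisson traffic models miss.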

  19. Algebra of implicitly defined constraints for gravity as the general form of embedding theory

    NASA Astrophysics Data System (ADS)

    Paston, S. A.; Semenova, E. N.; Franke, V. A.; Sheykin, A. A.

    2017-01-01

    We consider the embedding theory, the approach to gravity proposed by Regge and Teitelboim, in which 4D space-time is treated as a surface in high-dimensional flat ambient space. In its general form, which does not contain artificially imposed constraints, this theory can be viewed as an extension of GR. In the present paper we study the canonical description of the embedding theory in this general form. In this case, one of the natural constraints cannot be written explicitly, in contrast to the case where additional Einsteinian constraints are imposed. Nevertheless, it is possible to calculate all Poisson brackets with this constraint. We prove that the algebra of four emerging constraints is closed, i.e., all of them are first-class constraints. The explicit form of this algebra is also obtained.

  20. Extended quantum jump description of vibronic two-dimensional spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, Julian; Falge, Mirjam; Keß, Martin

    2015-06-07

    We calculate two-dimensional (2D) vibronic spectra for a model system involving two electronic molecular states. The influence of a bath is simulated using a quantum-jump approach. We use a method introduced by Makarov and Metiu [J. Chem. Phys. 111, 10126 (1999)] which includes an explicit treatment of dephasing. In this way it is possible to characterize the influence of dissipation and dephasing on the 2D-spectra, using a wave-function-based method. The latter scales with the number of stochastic runs and the number of system eigenstates included in the expansion of the wave packets to be propagated with the stochastic method, and provides an efficient method for the calculation of the 2D-spectra.

  1. Computational methods for structural load and resistance modeling

    NASA Technical Reports Server (NTRS)

    Thacker, B. H.; Millwater, H. R.; Harren, S. V.

    1991-01-01

    An automated capability for computing structural reliability considering uncertainties in both load and resistance variables is presented. The computations are carried out using an automated Advanced Mean Value iteration algorithm (AMV+) with performance functions involving load and resistance variables obtained by both explicit and implicit methods. A complete description of the procedures used is given as well as several illustrative examples, verified by Monte Carlo analysis. In particular, the computational methods described in the paper are shown to be quite accurate and efficient for a materially nonlinear structure considering material damage as a function of several primitive random variables. The results show clearly the effectiveness of the algorithms for computing the reliability of large-scale structural systems with a maximum number of resolutions.
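
The Monte Carlo verification step can be sketched with a toy performance function of two Gaussian variables (distributions and parameters illustrative, not the paper's AMV+ algorithm): the failure probability is estimated as the fraction of samples for which g = resistance - load falls below zero.

```python
import random

random.seed(7)
n = 100000
failures = 0
for _ in range(n):
    load = random.gauss(100.0, 15.0)         # load variable (illustrative distribution)
    resistance = random.gauss(150.0, 20.0)   # resistance variable (illustrative distribution)
    g = resistance - load                    # performance function; failure when g < 0
    if g < 0.0:
        failures += 1
pf = failures / n
print(pf)   # for these Gaussians, g ~ N(50, 25), so pf should approach Phi(-2) ~ 0.0228
```

Methods like AMV+ aim to reach this answer with orders of magnitude fewer performance-function evaluations than brute-force sampling.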

  2. Efficient Conformational Sampling in Explicit Solvent Using a Hybrid Replica Exchange Molecular Dynamics Method

    DTIC Science & Technology

    2011-12-01

    REMD while reproducing the energy landscape of explicit solvent simulations.

  3. A Descriptive and Evaluative Analysis of Program Planning Literature, 1950-1983.

    ERIC Educational Resources Information Center

    Sork, Thomas J.; Buskey, John H.

    1986-01-01

    Literature that presents a complete program planning model was described and analyzed using explicitly defined and uniformly applied descriptive and evaluative dimensions. Several observations about the current state of the program planning literature are made, and recommendations designed to strengthen the literature are offered. (Author/CT)

  4. Hemispheric Dissociation and Dyslexia in a Computational Model of Reading

    ERIC Educational Resources Information Center

    Monaghan, Padraic; Shillcock, Richard

    2008-01-01

    There are several causal explanations for dyslexia, drawing on distinctions between dyslexics and control groups at genetic, biological, or cognitive levels of description. However, few theories explicitly bridge these different levels of description. In this paper, we review a long-standing theory that some dyslexics' reading impairments are due…

  5. Accounting for the Decreasing Reaction Potential of Heterogeneous Aquifers in a Stochastic Framework of Aquifer-Scale Reactive Transport

    NASA Astrophysics Data System (ADS)

    Loschko, Matthias; Wöhling, Thomas; Rudolph, David L.; Cirpka, Olaf A.

    2018-01-01

    Many groundwater contaminants react with components of the aquifer matrix, causing a depletion of the aquifer's reactivity with time. We discuss conceptual simplifications of reactive transport that allow the implementation of a decreasing reaction potential in reactive-transport simulations in chemically and hydraulically heterogeneous aquifers without relying on a fully explicit description. We replace spatial coordinates by travel times and use the concept of relative reactivity, which represents the reaction-partner supply from the matrix relative to a reference. Microorganisms facilitating the reactions are not explicitly modeled. Solute mixing is neglected. Streamlines, obtained by particle tracking, are discretized in travel-time increments with variable content of reaction partners in the matrix. As an exemplary reactive system, we consider aerobic respiration and denitrification with simplified reaction equations: dissolved oxygen undergoes conditional zero-order decay, and nitrate follows first-order decay, which is inhibited in the presence of dissolved oxygen. Both reactions deplete the bioavailable organic carbon of the matrix, which in turn determines the relative reactivity. These simplifications reduce the computational effort, facilitating stochastic simulations of reactive transport on the aquifer scale. In a one-dimensional test case with a more detailed description of the reactions, we derive a potential relationship between the bioavailable organic-carbon content and the relative reactivity. In a three-dimensional steady-state test case, we use the simplified model to calculate the decreasing denitrification potential of an artificial aquifer over 200 years in an ensemble of 200 members. We demonstrate that the uncertainty in predicting the nitrate breakthrough in a heterogeneous aquifer decreases with increasing scale of observation.
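
The simplified reaction system can be sketched along a single travel-time coordinate (rates and concentrations are illustrative; the depletion of matrix carbon, i.e. the relative reactivity, is omitted here): oxygen is consumed at a constant rate, and nitrate decays first-order only once the oxygen is exhausted.

```python
import math

dt = 0.01               # travel-time increment (years, illustrative)
k_o2 = 2.0              # zero-order O2 consumption rate (mg/L per year)
k_no3 = 0.5             # first-order denitrification rate (1/year)
o2, no3 = 4.0, 10.0     # inflow concentrations (mg/L)

t, t_switch = 0.0, None
while t < 10.0:
    if o2 > 0.0:
        o2 = max(0.0, o2 - dt * k_o2)        # conditional zero-order oxygen decay
        if o2 == 0.0 and t_switch is None:
            t_switch = t                     # denitrification front in travel time
    else:
        no3 *= math.exp(-k_no3 * dt)         # first-order nitrate decay once O2 is gone
    t += dt

print(o2, no3, t_switch)  # O2 exhausted near t = 2; NO3 then decays exponentially
```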

  6. A Markovian event-based framework for stochastic spiking neural networks.

    PubMed

    Touboul, Jonathan D; Faugeras, Olivier D

    2011-11-01

    In spiking neural networks, the information is conveyed by the spike times, which depend on the intrinsic dynamics of each neuron, the input it receives and the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce from a spike train the next spike time, and therefore to produce a description of the network activity based only on the spike times, regardless of the membrane potential process. To study this question in a rigorous manner, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e. one based on the computation of the spike times. We show that the firing times of the neurons in the networks constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike interval of the neurons in the network. In the cases where the Markovian model can be developed, the transition probability is explicitly derived in such classical cases of neural networks as the linear integrate-and-fire neuron models with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of spiking deterministic neural networks.
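
The event-based picture can be sketched for a single noisy leaky integrate-and-fire neuron (a generic first-passage simulation with illustrative parameters, not the authors' analytical transition probabilities): each interspike interval is the first-passage time of the membrane potential to threshold, and concatenating sampled intervals yields the spike-time chain.

```python
import math, random

random.seed(1)

def next_spike_interval(mu=1.5, sigma=0.5, tau=1.0, v_th=1.0, dt=1e-3):
    # first-passage time of dV = ((mu - V)/tau) dt + sigma dW to the threshold v_th,
    # simulated by Euler-Maruyama; V resets to 0 after each spike
    v, t = 0.0, 0.0
    while v < v_th:
        v += dt * (mu - v) / tau + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        t += dt
    return t

# the spike train is built purely from sampled intervals: a renewal (Markov) chain
isis = [next_spike_interval() for _ in range(200)]
spike_times = []
acc = 0.0
for isi in isis:
    acc += isi
    spike_times.append(acc)
print(len(spike_times), min(isis))
```

Because the membrane potential resets after each spike, the next spike time here depends only on the last one, which is the single-neuron analogue of the Markov property established in the paper for networks.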

  7. Flory-type theories of polymer chains under different external stimuli

    NASA Astrophysics Data System (ADS)

    Budkov, Yu A.; Kiselev, M. G.

    2018-01-01

    In this Review, we present a critical analysis of various applications of the Flory-type theories to a theoretical description of the conformational behavior of single polymer chains in dilute polymer solutions under different external stimuli. Different theoretical models of flexible polymer chains in the supercritical fluid are discussed and analysed. Different points of view on the conformational behavior of the polymer chain near the liquid-gas transition critical point of the solvent are presented. A theoretical description of the co-solvent-induced coil-globule transitions within the implicit-solvent-explicit-co-solvent models is discussed. Several explicit-solvent-explicit-co-solvent theoretical models of the coil-to-globule-to-coil transition of the polymer chain in a mixture of good solvents (co-nonsolvency) are analysed and compared with each other. Finally, a new theoretical model of the conformational behavior of the dielectric polymer chain under an external constant electric field in the dilute polymer solution with an explicit account for the many-body dipole correlations is discussed. The polymer chain collapse induced by many-body dipole correlations of monomers in the context of statistical thermodynamics of dielectric polymers is analysed.

  8. Toward Modeling the Learner's Personality Using Educational Games

    ERIC Educational Resources Information Center

    Essalmi, Fathi; Tlili, Ahmed; Ben Ayed, Leila Jemni; Jemmi, Mohamed

    2017-01-01

    Learner modeling is a crucial step in the learning personalization process. It allows taking into consideration the learner's profile to make the learning process more efficient. Most studies refer to an explicit method, namely questionnaire, to model learners. Questionnaires are time consuming and may not be motivating for learners. Thus, this…

  9. Integrating planning perception and action for informed object search.

    PubMed

    Manso, Luis J; Gutierrez, Marco A; Bustos, Pablo; Bachiller, Pilar

    2018-05-01

    This paper presents a method to reduce the time spent by a robot with cognitive abilities when looking for objects in unknown locations. It describes how machine learning techniques can be used to decide which places should be inspected first, based on images that the robot acquires passively. The proposal is composed of two concurrent processes. The first one uses the aforementioned images to generate a description of the types of objects found in each object container seen by the robot. This is done passively, regardless of the task being performed. The containers can be tables, boxes, shelves or any other kind of container of known shape whose contents can be seen from a distance. The second process uses the previously computed estimation of the contents of the containers to decide which container is most likely to hold the object to be found. This second process is deliberative and takes place only when the robot needs to find an object, whether because it is explicitly asked to locate one or because it is needed as a step to fulfil the mission of the robot. Upon failure to guess the right container, the robot can continue making guesses until the object is found. Guesses are made based on the semantic distance between the object to find and the description of the types of the objects found in each object container. The paper provides quantitative results comparing the efficiency of the proposed method and two baseline approaches.

  10. A high-order Lagrangian-decoupling method for the incompressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Ho, Lee-Wing; Maday, Yvon; Patera, Anthony T.; Ronquist, Einar M.

    1989-01-01

    A high-order Lagrangian-decoupling method is presented for the unsteady convection-diffusion and incompressible Navier-Stokes equations. The method is based upon: (1) Lagrangian variational forms that reduce the convection-diffusion equation to a symmetric initial value problem; (2) implicit high-order backward-differentiation finite-difference schemes for integration along characteristics; (3) finite element or spectral element spatial discretizations; and (4) mesh-invariance procedures and high-order explicit time-stepping schemes for deducing function values at convected space-time points. The method improves upon previous finite element characteristic methods through the systematic and efficient extension to high order accuracy, and the introduction of a simple structure-preserving characteristic-foot calculation procedure which is readily implemented on modern architectures. The new method is significantly more efficient than explicit-convection schemes for the Navier-Stokes equations due to the decoupling of the convection and Stokes operators and the attendant increase in temporal stability. Numerous numerical examples are given for the convection-diffusion and Navier-Stokes equations for the particular case of a spectral element spatial discretization.
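
    The characteristic-foot calculation at the heart of this method can be illustrated in its simplest form. The sketch below is a first-order, linear-interpolation version on a periodic 1D grid (the paper itself uses high-order, mesh-invariant procedures, which are not reproduced here): each node is traced back along its characteristic and the solution is interpolated at the foot:

```python
import numpy as np

def semi_lagrangian_step(u, c, dx, dt):
    """One semi-Lagrangian step for the advection equation u_t + c u_x = 0
    on a periodic grid: trace each node back along its characteristic to the
    foot x_i - c*dt and linearly interpolate u there."""
    n = len(u)
    x = np.arange(n) * dx
    foot = (x - c * dt) % (n * dx)    # periodic characteristic feet
    s = foot / dx
    j0 = np.floor(s).astype(int)
    w = s - j0                        # linear interpolation weight
    j = j0 % n
    return (1.0 - w) * u[j] + w * u[(j + 1) % n]
```

    Because the convective term is absorbed into the characteristic tracing, the remaining problem each step is symmetric (diffusion or Stokes), which is the decoupling the paper exploits for temporal stability.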

  11. Nonadiabatic dynamics of electron transfer in solution: Explicit and implicit solvent treatments that include multiple relaxation time scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwerdtfeger, Christine A.; Soudackov, Alexander V.; Hammes-Schiffer, Sharon, E-mail: shs3@illinois.edu

    2014-01-21

    The development of efficient theoretical methods for describing electron transfer (ET) reactions in condensed phases is important for a variety of chemical and biological applications. Previously, dynamical dielectric continuum theory was used to derive Langevin equations for a single collective solvent coordinate describing ET in a polar solvent. In this theory, the parameters are directly related to the physical properties of the system and can be determined from experimental data or explicit molecular dynamics simulations. Herein, we combine these Langevin equations with surface hopping nonadiabatic dynamics methods to calculate the rate constants for thermal ET reactions in polar solvents for a wide range of electronic couplings and reaction free energies. Comparison of explicit and implicit solvent calculations illustrates that the mapping from explicit to implicit solvent models is valid even for solvents exhibiting complex relaxation behavior with multiple relaxation time scales and a short-time inertial response. The rate constants calculated for implicit solvent models with a single solvent relaxation time scale corresponding to water, acetonitrile, and methanol agree well with analytical theories in the golden rule and solvent-controlled regimes, as well as in the intermediate regime. The implicit solvent models with two relaxation time scales are in qualitative agreement with the analytical theories but quantitatively overestimate the rate constants compared to these theories. Analysis of these simulations elucidates the importance of multiple relaxation time scales and the inertial component of the solvent response, as well as potential shortcomings of the analytical theories based on single-time-scale solvent relaxation models. This implicit solvent approach will enable the simulation of a wide range of ET reactions via the stochastic dynamics of a single collective solvent coordinate with parameters that are relevant to experimentally accessible systems.
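
    The single-coordinate stochastic description referred to here can be sketched, in its simplest overdamped limit, as Langevin dynamics on a one-dimensional free-energy surface. The surface-hopping coupling to electronic states is omitted, and all parameters below are illustrative rather than taken from the record:

```python
import numpy as np

def overdamped_langevin(n_steps=20000, dt=1e-3, tau=0.1, kT=1.0,
                        k_spring=1.0, x0=2.0, seed=1):
    """Overdamped Langevin dynamics of a single collective solvent
    coordinate x on a harmonic free-energy surface U(x) = k x^2 / 2:
        dx = -(dU/dx) (dt / gamma) + sqrt(2 kT dt / gamma) dW,
    with friction gamma = k * tau chosen so that tau is the solvent
    relaxation time."""
    rng = np.random.default_rng(seed)
    gamma = k_spring * tau
    x = np.empty(n_steps)
    x[0] = x0
    noise = np.sqrt(2.0 * kT * dt / gamma)
    for i in range(1, n_steps):
        x[i] = x[i-1] - k_spring * x[i-1] * dt / gamma \
               + noise * rng.standard_normal()
    return x

# After a few relaxation times the trajectory samples the Boltzmann
# distribution, with variance approaching kT / k_spring.
x = overdamped_langevin()
```

    A multi-time-scale solvent response of the kind analyzed in the record would replace this single memoryless friction with several coupled coordinates or a memory kernel.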

  12. On simulating flow with multiple time scales using a method of averages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margolin, L.G.

    1997-12-31

    The author presents a new computational method based on averaging to efficiently simulate certain systems with multiple time scales. He first develops the method in a simple one-dimensional setting and employs linear stability analysis to demonstrate numerical stability. He then extends the method to multidimensional fluid flow. His method of averages does not depend on explicit splitting of the equations nor on modal decomposition. Rather, he combines low-order and high-order algorithms in a generalized predictor-corrector framework. He illustrates the methodology in the context of a shallow fluid approximation to an ocean basin circulation. He finds that his new method reproduces the accuracy of a fully explicit second-order accurate scheme, while costing less than a first-order accurate scheme.

  13. Selective, Embedded, Just-In-Time Specialization (SEJITS): Portable Parallel Performance from Sequential, Productive, Embedded Domain-Specific Languages

    DTIC Science & Technology

    2012-12-01

    Author: Armando Fox; Contract FA8750-10-1-0191. "...application performance, but usually must rely on efficiency programmers who are experts in explicit parallel programming to achieve it. Since such efficiency..."

  14. A comparative analysis of massed vs. distributed practice on basic math fact fluency growth rates.

    PubMed

    Schutte, Greg M; Duhon, Gary J; Solomon, Benjamin G; Poncy, Brian C; Moore, Kathryn; Story, Bailey

    2015-04-01

    To best remediate academic deficiencies, educators need to not only identify empirically validated interventions but also be able to apply instructional modifications that result in more efficient student learning. The current study compared the effects of massed and distributed practice with an explicit timing intervention to evaluate the extent to which these modifications lead to increased math fact fluency on basic addition problems. Forty-eight third-grade students were placed into one of three groups, with each group completing four 1-min math explicit timing procedures each day across 19 days. Group one completed all four 1-min timings consecutively; group two completed two back-to-back 1-min timings in the morning and two back-to-back 1-min timings in the afternoon; and group three completed a single 1-min timing four times distributed across the day. Growth curve modeling was used to examine the progress throughout the course of the study. Results suggested that students in the distributed practice conditions, both four times per day and two times per day, showed significantly higher fluency growth rates than those practicing only once per day in a massed format. These results indicate that combining distributed practice with explicit timing procedures is a useful modification that enhances student learning without the addition of extra instructional time when targeting math fact fluency. Copyright © 2015 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  15. Computationally Efficient Multiscale Reactive Molecular Dynamics to Describe Amino Acid Deprotonation in Proteins

    PubMed Central

    2016-01-01

    An important challenge in the simulation of biomolecular systems is a quantitative description of the protonation and deprotonation process of amino acid residues. Despite the seeming simplicity of adding or removing a positively charged hydrogen nucleus, simulating the actual protonation/deprotonation process is inherently difficult. It requires both the explicit treatment of the excess proton, including its charge defect delocalization and Grotthuss shuttling through inhomogeneous moieties (water and amino residues), and extensive sampling of coupled condensed phase motions. In a recent paper (J. Chem. Theory Comput. 2014, 10, 2729−2737), a multiscale approach was developed to map high-level quantum mechanics/molecular mechanics (QM/MM) data into a multiscale reactive molecular dynamics (MS-RMD) model in order to describe amino acid deprotonation in bulk water. In this article, we extend the fitting approach (called FitRMD) to create MS-RMD models for ionizable amino acids within proteins. The resulting models are shown to faithfully reproduce the free energy profiles of the reference QM/MM Hamiltonian for PT inside an example protein, the ClC-ec1 H+/Cl– antiporter. Moreover, we show that the resulting MS-RMD models are computationally efficient enough to then characterize more complex 2-dimensional free energy surfaces due to slow degrees of freedom such as water hydration of internal protein cavities that can be inherently coupled to the excess proton charge translocation. The FitRMD method is thus shown to be an effective way to map ab initio level accuracy into a much more computationally efficient reactive MD method in order to explicitly simulate and quantitatively describe amino acid protonation/deprotonation in proteins. PMID:26734942

  16. Computationally Efficient Multiscale Reactive Molecular Dynamics to Describe Amino Acid Deprotonation in Proteins.

    PubMed

    Lee, Sangyun; Liang, Ruibin; Voth, Gregory A; Swanson, Jessica M J

    2016-02-09

    An important challenge in the simulation of biomolecular systems is a quantitative description of the protonation and deprotonation process of amino acid residues. Despite the seeming simplicity of adding or removing a positively charged hydrogen nucleus, simulating the actual protonation/deprotonation process is inherently difficult. It requires both the explicit treatment of the excess proton, including its charge defect delocalization and Grotthuss shuttling through inhomogeneous moieties (water and amino residues), and extensive sampling of coupled condensed phase motions. In a recent paper (J. Chem. Theory Comput. 2014, 10, 2729-2737), a multiscale approach was developed to map high-level quantum mechanics/molecular mechanics (QM/MM) data into a multiscale reactive molecular dynamics (MS-RMD) model in order to describe amino acid deprotonation in bulk water. In this article, we extend the fitting approach (called FitRMD) to create MS-RMD models for ionizable amino acids within proteins. The resulting models are shown to faithfully reproduce the free energy profiles of the reference QM/MM Hamiltonian for PT inside an example protein, the ClC-ec1 H(+)/Cl(-) antiporter. Moreover, we show that the resulting MS-RMD models are computationally efficient enough to then characterize more complex 2-dimensional free energy surfaces due to slow degrees of freedom such as water hydration of internal protein cavities that can be inherently coupled to the excess proton charge translocation. The FitRMD method is thus shown to be an effective way to map ab initio level accuracy into a much more computationally efficient reactive MD method in order to explicitly simulate and quantitatively describe amino acid protonation/deprotonation in proteins.

  17. A space-time lower-upper symmetric Gauss-Seidel scheme for the time-spectral method

    NASA Astrophysics Data System (ADS)

    Zhan, Lei; Xiong, Juntao; Liu, Feng

    2016-05-01

    The time-spectral method (TSM) offers the advantage of increased order of accuracy compared to methods using finite-difference in time for periodic unsteady flow problems. Explicit Runge-Kutta pseudo-time marching and implicit schemes have been developed to solve iteratively the space-time coupled nonlinear equations resulting from TSM. Convergence of the explicit schemes is slow because of the stringent time-step limit. Many implicit methods have been developed for TSM. Their computational efficiency is, however, still limited in practice because of delayed implicit temporal coupling, multiple iterative loops, costly matrix operations, or lack of strong diagonal dominance of the implicit operator matrix. To overcome these shortcomings, an efficient space-time lower-upper symmetric Gauss-Seidel (ST-LU-SGS) implicit scheme with multigrid acceleration is presented. In this scheme, the implicit temporal coupling term is split as one additional dimension of space in the LU-SGS sweeps. To improve numerical stability for periodic flows with high frequency, a modification to the ST-LU-SGS scheme is proposed. Numerical results show that fast convergence is achieved using large or even infinite Courant-Friedrichs-Lewy (CFL) numbers for unsteady flow problems with moderately high frequency and with the use of moderately high numbers of time intervals. The ST-LU-SGS implicit scheme is also found to work well in calculating periodic flow problems where the frequency is not known a priori and needs to be determined using a combined Fourier analysis and gradient-based search algorithm.

  18. Efficient electron open boundaries for simulating electrochemical cells

    NASA Astrophysics Data System (ADS)

    Zauchner, Mario G.; Horsfield, Andrew P.; Todorov, Tchavdar N.

    2018-01-01

    Nonequilibrium electrochemistry raises new challenges for atomistic simulation: we need to perform molecular dynamics for the nuclear degrees of freedom with an explicit description of the electrons, which in turn must be free to enter and leave the computational cell. Here we present a limiting form for electron open boundaries that we expect to apply when the magnitude of the electric current is determined by the drift and diffusion of ions in a solution and which is sufficiently computationally efficient to be used with molecular dynamics. We present tight-binding simulations of a parallel-plate capacitor with nothing, a dimer, or an atomic wire situated in the space between the plates. These simulations demonstrate that this scheme can be used to perform molecular dynamics simulations when there is an applied bias between two metal plates with, at most, weak electronic coupling between them. This simple system captures some of the essential features of an electrochemical cell, suggesting this approach might be suitable for simulations of electrochemical cells out of equilibrium.

  19. Complex-envelope alternating-direction-implicit FDTD method for simulating active photonic devices with semiconductor/solid-state media.

    PubMed

    Singh, Gurpreet; Ravi, Koustuban; Wang, Qian; Ho, Seng-Tiong

    2012-06-15

    A complex-envelope (CE) alternating-direction-implicit (ADI) finite-difference time-domain (FDTD) approach to treat light-matter interaction self-consistently with electromagnetic field evolution for efficient simulations of active photonic devices is presented for the first time (to the best of our knowledge). The active medium (AM) is modeled using an efficient multilevel system of carrier rate equations to yield the correct carrier distributions, suitable for modeling semiconductor/solid-state media accurately. To include the AM in the CE-ADI-FDTD method, a first-order differential system involving CE fields in the AM is first set up. The system matrix that includes AM parameters is then split into two time-dependent submatrices that are then used in an efficient ADI splitting formula. The proposed CE-ADI-FDTD approach with AM takes 22% of the runtime of the corresponding explicit FDTD approach, as validated by semiconductor microdisk laser simulations.

  20. Reduced-Density-Matrix Description of Decoherence and Relaxation Processes for Electron-Spin Systems

    NASA Astrophysics Data System (ADS)

    Jacobs, Verne

    2017-04-01

    Electron-spin systems are investigated using a reduced-density-matrix description. Applications of interest include trapped atomic systems in optical lattices, semiconductor quantum dots, and vacancy defect centers in solids. Complementary time-domain (equation-of-motion) and frequency-domain (resolvent-operator) formulations are self-consistently developed. The general non-perturbative and non-Markovian formulations provide a fundamental framework for systematic evaluations of corrections to the standard Born (lowest-order-perturbation) and Markov (short-memory-time) approximations. Particular attention is given to decoherence and relaxation processes, as well as spectral-line broadening phenomena, that are induced by interactions with photons, phonons, nuclear spins, and external electric and magnetic fields. These processes are treated either as coherent interactions or as environmental interactions. The environmental interactions are incorporated by means of the general expressions derived for the time-domain and frequency-domain Liouville-space self-energy operators, for which the tetradic-matrix elements are explicitly evaluated in the diagonal-resolvent, lowest-order, and Markov (short-memory time) approximations. Work supported by the Office of Naval Research through the Basic Research Program at The Naval Research Laboratory.

  1. A comparison of general and descriptive praise in teaching intraverbal behavior to children with autism.

    PubMed

    Polick, Amy S; Carr, James E; Hanney, Nicole M

    2012-01-01

    Descriptive praise has been recommended widely as an important teaching tactic for children with autism, despite the absence of published supporting evidence. We compared the effects of descriptive and general praise on the acquisition and maintenance of intraverbal skills with 2 children with autism. The results showed slight advantages of descriptive praise in teaching efficiency in the majority of comparisons; however, these effects dissipated over time.

  2. A Three-Stage Model of Housing Search,

    DTIC Science & Technology

    1980-05-01

    Hanushek and Quigley, 1978) that recognize housing search as a transaction cost but rarely examine search behavior; and descriptive studies of search...explicit mobility models that have recently appeared in the literature (Speare et al., 1975; Hanushek and Quigley, 1978; Brummell, 1979). Although...1978; Hanushek and Quigley, 1978; Cronin, 1978). By explicitly assigning dollar values, the economic models attempt to obtain an objective measure of

  3. Formulation of boundary conditions for the multigrid acceleration of the Euler and Navier Stokes equations

    NASA Technical Reports Server (NTRS)

    Jentink, Thomas Neil; Usab, William J., Jr.

    1990-01-01

    An explicit multigrid algorithm was written to solve the Euler and Navier-Stokes equations with special consideration given to the coarse mesh boundary conditions. These are formulated in a manner consistent with the interior solution, utilizing forcing terms to prevent coarse-mesh truncation error from affecting the fine-mesh solution. A four-stage hybrid Runge-Kutta scheme is used to advance the solution in time, and multigrid convergence is further enhanced by using local time-stepping and implicit residual smoothing. Details of the algorithm are presented along with a description of Jameson's standard multigrid method and a new approach to formulating the multigrid equations.

  4. Classical integrable defects as quasi Bäcklund transformations

    NASA Astrophysics Data System (ADS)

    Doikou, Anastasia

    2016-10-01

    We consider the algebraic setting of classical defects in discrete and continuous integrable theories. We derive the "equations of motion" on the defect point via the space-like and time-like descriptions. We then exploit the structural similarity of these equations with the discrete and continuous Bäcklund transformations. Although these equations are similar, they are not identical to the Bäcklund transformations. We also consider specific examples of integrable models to demonstrate our construction, i.e., the Toda chain and the sine-Gordon model. The equations of the time (space) evolution of the defect (discontinuity) degrees of freedom for these models are explicitly derived.

  5. Optimal routing of hazardous substances in time-varying, stochastic transportation networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, A.L.; Miller-Hooks, E.; Mahmassani, H.S.

    This report is concerned with the selection of routes in a network along which to transport hazardous substances, taking into consideration several key factors pertaining to the cost of transport and the risk of population exposure in the event of an accident. Furthermore, the fact that travel time and the risk measures are not constant over time is explicitly recognized in the routing decisions. Existing approaches typically assume static conditions, possibly resulting in inefficient route selection and unnecessary risk exposure. The report describes the application of recent advances in network analysis methodologies to the problem of routing hazardous substances. Several specific problem formulations are presented, reflecting different degrees of risk aversion on the part of the decision-maker, as well as different possible operational scenarios. All procedures explicitly consider travel times and travel costs (including risk measures) to be stochastic time-varying quantities. The procedures include both exact algorithms, which may require extensive computational effort in some situations, as well as more efficient heuristics that may not guarantee a Pareto-optimal solution. All procedures are systematically illustrated for an example application using the Texas highway network, for both normal and incident condition scenarios. The application illustrates the trade-offs between the information obtained in the solution and computational efficiency, and highlights the benefits of incorporating these procedures in a decision-support system for hazardous substance shipment routing decisions.

  6. A high-order semi-explicit discontinuous Galerkin solver for 3D incompressible flow with application to DNS and LES of turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Krank, Benjamin; Fehn, Niklas; Wall, Wolfgang A.; Kronbichler, Martin

    2017-11-01

    We present an efficient discontinuous Galerkin scheme for simulation of the incompressible Navier-Stokes equations including laminar and turbulent flow. We consider a semi-explicit high-order velocity-correction method for time integration as well as nodal equal-order discretizations for velocity and pressure. The non-linear convective term is treated explicitly while a linear system is solved for the pressure Poisson equation and the viscous term. The key feature of our solver is a consistent penalty term reducing the local divergence error in order to overcome recently reported instabilities in spatially under-resolved high-Reynolds-number flows as well as small time steps. This penalty method is similar to the grad-div stabilization widely used in continuous finite elements. We further review and compare our method to several other techniques recently proposed in literature to stabilize the method for such flow configurations. The solver is specifically designed for large-scale computations through matrix-free linear solvers including efficient preconditioning strategies and tensor-product elements, which have allowed us to scale this code up to 34.4 billion degrees of freedom and 147,456 CPU cores. We validate our code and demonstrate optimal convergence rates with laminar flows present in a vortex problem and flow past a cylinder and show applicability of our solver to direct numerical simulation as well as implicit large-eddy simulation of turbulent channel flow at Reτ = 180 as well as 590.

  7. Green-Ampt approximations: A comprehensive analysis

    NASA Astrophysics Data System (ADS)

    Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.

    2016-04-01

    The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used for assessing the model performance. Models are ranked based on the overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in the selection of accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
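
    The implicit GA model that these nine explicit formulas approximate is a single transcendental equation for cumulative infiltration F(t): K t = F − ψΔθ ln(1 + F/ψΔθ). A minimal Newton solver (parameter values are illustrative; psi_dtheta lumps the wetting-front suction head times the moisture deficit) serves as the reference against which any explicit approximation can be checked:

```python
import numpy as np

def green_ampt_F(t, K=1.0, psi_dtheta=5.0, tol=1e-10, max_iter=100):
    """Cumulative infiltration F(t) from the implicit Green-Ampt equation
        K t = F - psi_dtheta * ln(1 + F / psi_dtheta),
    solved by Newton iteration."""
    F = max(K * t, 1e-9)                  # initial guess
    for _ in range(max_iter):
        g = F - psi_dtheta * np.log(1.0 + F / psi_dtheta) - K * t
        dg = F / (F + psi_dtheta)         # d g / d F
        step = g / dg
        F -= step
        if abs(step) < tol:
            break
    return F
```

    An explicit approximation can then be scored by evaluating both at the same times and computing, for example, the percent relative error used in the record.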

  8. On the Helix Propensity in Generalized Born Solvent Descriptions of Modeling the Dark Proteome

    DTIC Science & Technology

    2017-01-10

    benchmarks of conformational sampling methods and their all-atom force fields plus solvent descriptions to accurately model structural transitions on a...atom simulations of proteins is the replacement of explicit water interactions with a continuum description of treating implicitly the bulk physical... structure was reported by Amarasinghe and coworkers (Leung et al., 2015) of the Ebola nucleoprotein NP in complex with a 28-residue peptide extracted

  9. Simple proof of equivalence between adiabatic quantum computation and the circuit model.

    PubMed

    Mizel, Ari; Lidar, Daniel A; Mitchell, Morgan

    2007-08-17

    We prove the equivalence between adiabatic quantum computation and quantum computation in the circuit model. An explicit adiabatic computation procedure is given that generates a ground state from which the answer can be extracted. The amount of time needed is evaluated by computing the gap. We show that the procedure is computationally efficient.
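
    The gap computation mentioned here can be sketched for a toy interpolating Hamiltonian H(s) = (1−s)H0 + sH1; by the adiabatic theorem, the required run time grows as an inverse power of the minimum gap. The 2x2 example below is illustrative only and is not the construction of the paper:

```python
import numpy as np

def min_gap(h0, h1, n_s=201):
    """Minimum spectral gap along the interpolation H(s) = (1-s) H0 + s H1,
    scanned on a uniform grid in s.  eigvalsh returns eigenvalues in
    ascending order, so evals[1] - evals[0] is the gap above the ground
    state."""
    gaps = []
    for s in np.linspace(0.0, 1.0, n_s):
        evals = np.linalg.eigvalsh((1.0 - s) * h0 + s * h1)
        gaps.append(evals[1] - evals[0])
    return min(gaps)

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
# transverse-field start Hamiltonian, diagonal target Hamiltonian
g = min_gap(-sx, -sz)
```

    For this toy pair the gap is 2*sqrt((1-s)^2 + s^2), minimized at s = 1/2.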

  10. Innovations in individual feature history management - The significance of feature-based temporal model

    USGS Publications Warehouse

    Choi, J.; Seong, J.C.; Kim, B.; Usery, E.L.

    2008-01-01

    A feature relies on three dimensions (space, theme, and time) for its representation. Even though spatiotemporal models have been proposed, they have principally focused on the spatial changes of a feature. In this paper, a feature-based temporal model is proposed to represent the changes of both space and theme independently. The proposed model modifies the ISO's temporal schema and adds a new explicit temporal relationship structure that stores the temporal topological relationships among the ISO's temporal primitives of a feature in order to keep track of feature history. The explicit temporal relationship can enhance query performance on feature history by removing topological comparisons during query processing. Further, a prototype system has been developed to test the proposed feature-based temporal model by querying land parcel history in Athens, Georgia. The results of temporal queries on individual feature history show the efficiency of the explicit temporal relationship structure. © Springer Science+Business Media, LLC 2007.

  11. Optimal generalized multistep integration formulae for real-time digital simulation

    NASA Technical Reports Server (NTRS)

    Moerder, D. D.; Halyo, N.

    1985-01-01

    The problem of discretizing a dynamical system for real-time digital simulation is considered. Treating the system and its simulation as stochastic processes leads to a statistical characterization of simulator fidelity. A plant discretization procedure based on an efficient matrix generalization of explicit linear multistep discrete integration formulae is introduced, which minimizes a weighted sum of the mean squared steady-state and transient error between the system and simulator outputs.
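
    An explicit linear multistep discretization of the kind generalized in this work can be illustrated with the two-step Adams-Bashforth formula; the matrix generalization and the fidelity-optimized coefficients of the paper are not reproduced here:

```python
import numpy as np

def ab2_simulate(f, x0, dt, n_steps):
    """Two-step Adams-Bashforth integration, a simple instance of the
    explicit linear multistep formulae used in real-time simulation:
        x_{n+1} = x_n + dt * (3/2 f(x_n) - 1/2 f(x_{n-1})).
    The first step is bootstrapped with forward Euler."""
    xs = [np.asarray(x0, dtype=float)]
    f_prev = f(xs[0])
    xs.append(xs[0] + dt * f_prev)        # Euler bootstrap step
    for _ in range(n_steps - 1):
        f_curr = f(xs[-1])
        xs.append(xs[-1] + dt * (1.5 * f_curr - 0.5 * f_prev))
        f_prev = f_curr
    return np.array(xs)

# decay test problem x' = -x, exact solution exp(-t)
xs = ab2_simulate(lambda x: -x, 1.0, dt=0.01, n_steps=100)
```

    Being explicit, such formulae need only one evaluation of f per step, which is why they suit the fixed per-frame budget of real-time digital simulation.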

  12. Expectancy effects in source memory: how moving to a bad neighborhood can change your memory.

    PubMed

    Kroneisen, Meike; Woehe, Larissa; Rausch, Leonie Sophie

    2015-02-01

Enhanced memory for cheaters could help us avoid social exchange situations in which we run the risk of being exploited by others. Several experiments have demonstrated better source memory for faces combined with negative rather than positive behavior (Bell & Buchner, Memory & Cognition, 38, 29-41, 2010) or for cheaters and cooperators showing unexpected behavior (Bell, Buchner, Kroneisen, & Giang, Journal of Experimental Psychology: Learning, Memory, and Cognition, 38, 1512-1529, 2012). In the present study, we compared two groups: Group 1 saw faces combined with aggressive, prosocial, or neutral behavior descriptions but received no further information, whereas Group 2 was explicitly told that they would see the behavior descriptions of very aggressive and unsocial persons. To measure old-new discrimination, source memory, and guessing biases separately, we used a multinomial model. When participants had no expectancies about the behavior of the presented people, enhanced source memory for aggressive persons was found. In comparison, source memory for faces combined with prosocial behavior descriptions was significantly higher in the group expecting only aggressive persons. These findings can be attributed to a mechanism that focuses on expectancy-incongruent information, representing a more flexible and therefore more efficient memory strategy for remembering exchange-relevant information.

  13. Increasing the sampling efficiency of protein conformational transition using velocity-scaling optimized hybrid explicit/implicit solvent REMD simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Yuqi; Wang, Jinan; Shao, Qiang, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn

    2015-03-28

The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge requirement of computational resources, particularly when an explicit solvent model is implemented. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the aim of reducing the number of temperatures (replicas) while maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but, meanwhile, gives an accurate evaluation of the structural and thermodynamic properties of the conformational transition, in good agreement with the standard REMD simulation. Therefore, the hybrid REMD can greatly increase computational efficiency and thus expand the application of REMD simulation to larger protein systems.

  14. A GPU-accelerated implicit meshless method for compressible flows

    NASA Astrophysics Data System (ADS)

    Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng

    2018-05-01

This paper develops a recently proposed GPU-based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes thread-racing conditions that destabilize the numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions that apply boundary conditions, calculate time steps, evaluate residuals, and advance and update the solution in the temporal space. A series of two- and three-dimensional test cases, including compressible flows over single- and multi-element airfoils and an M6 wing, are carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis of the performance of the developed code reveals that the CPU-based implicit meshless method is at least four to eight times faster than its explicit counterpart, and the computational efficiency of the implicit method can be further improved by ten to fifteen times on the GPU.
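The rainbow coloring idea can be sketched with a simple greedy graph coloring: points connected in the stencil graph receive different colors, so all points of one color can be updated concurrently without thread races. This is an illustrative sketch, not the paper's implementation:

```python
def rainbow_coloring(adjacency):
    """Greedily color a point-connectivity graph so that no two neighboring
    points share a color; each color group can then be processed in parallel
    during the LU-SGS sweep. `adjacency` maps each point to its neighbors."""
    colors = {}
    for node in sorted(adjacency):
        used = {colors[nbr] for nbr in adjacency[node] if nbr in colors}
        color = 0
        while color in used:  # smallest color not used by any colored neighbor
            color += 1
        colors[node] = color
    return colors
```

On a typical stencil graph only a handful of colors are needed; the modified sweep then iterates color by color, updating every point of the current color in parallel.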

  15. Effective orthorhombic anisotropic models for wavefield extrapolation

    NASA Astrophysics Data System (ADS)

    Ibanez-Jacome, Wilson; Alkhalifah, Tariq; Waheed, Umair bin

    2014-09-01

Wavefield extrapolation in orthorhombic anisotropic media incorporates complicated but realistic models to reproduce wave propagation phenomena in the Earth's subsurface. Compared with the representations used for simpler symmetries, such as transversely isotropic or isotropic, orthorhombic models require an extended and more elaborate formulation that also involves more expensive computations. The acoustic assumption yields a more efficient description of the orthorhombic wave equation and provides a simplified representation of the orthorhombic dispersion relation. However, such a representation is hampered by the sixth-order nature of the acoustic wave equation, as it also encompasses the contribution of shear waves. To reduce the computational cost of wavefield extrapolation in such media, we generate effective isotropic inhomogeneous models that are capable of reproducing the first-arrival kinematic aspects of the orthorhombic wavefield. First, in order to compute traveltimes in vertical orthorhombic media, we develop a stable, efficient, and accurate algorithm based on the fast marching method. The derived orthorhombic acoustic dispersion relation, unlike the isotropic or transversely isotropic ones, is represented by a sixth-order polynomial equation whose fastest solution corresponds to outgoing P waves in acoustic media. The effective velocity models are then computed by evaluating the traveltime gradients of the orthorhombic traveltime solution and using them to explicitly evaluate the corresponding inhomogeneous isotropic velocity field. The inverted effective velocity fields are source dependent and produce equivalent first-arrival kinematic descriptions of wave propagation in orthorhombic media. We extrapolate wavefields in these isotropic effective velocity models using the more efficient isotropic operator, and the results compare well, especially kinematically, with those obtained from the more expensive anisotropic extrapolator.

  16. Using travel times to simulate multi-dimensional bioreactive transport in time-periodic flows.

    PubMed

    Sanz-Prat, Alicia; Lu, Chuanhe; Finkel, Michael; Cirpka, Olaf A

    2016-04-01

In travel-time models, the spatially explicit description of reactive transport is replaced by associating reactive-species concentrations with the travel time or groundwater age at all locations. These models have been shown to be adequate for reactive transport in river-bank filtration under steady-state flow conditions. Dynamic hydrological conditions, however, can lead to fluctuations of infiltration velocities, putting the validity of travel-time models into question: in transient flow, the local travel-time distributions change with time. We show that a modified version of travel-time based reactive transport models remains valid if only the magnitude of the velocity fluctuates while its spatial orientation remains constant. We simulate nonlinear, one-dimensional, bioreactive transport involving oxygen, nitrate, dissolved organic carbon, and aerobic and denitrifying bacteria, considering periodic fluctuations of velocity. These fluctuations make the bioreactive system pulsate: the aerobic zone shrinks at times of low velocity and grows at times of high velocity. For diurnal fluctuations, the biomass concentrations cannot follow the hydrological fluctuations and a transition zone containing both aerobic and obligate denitrifying bacteria is established, whereas a clear separation of the two types of bacteria prevails for seasonal velocity fluctuations. We map the 1-D results to a heterogeneous, two-dimensional domain by means of the mean groundwater age for steady-state flow in both domains. The mapped results are compared to simulation results of spatially explicit, two-dimensional, advective-dispersive-bioreactive transport subject to the same relative fluctuations of velocity as in the one-dimensional model. The agreement between the mapped 1-D and the explicit 2-D results is excellent. We conclude that travel-time models of nonlinear bioreactive transport are adequate in systems of time-periodic flow if the flow direction does not change. Copyright © 2016 Elsevier B.V. All rights reserved.
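The mapping step itself reduces to evaluating the 1-D travel-time solution at each cell's mean groundwater age. A minimal sketch, assuming the 1-D concentration profile `conc_1d` over travel times `tau_1d` has already been computed by the bioreactive model (names and the piecewise-linear interpolation are illustrative assumptions):

```python
import numpy as np

def map_travel_time_solution(age_2d, tau_1d, conc_1d):
    """Assign each 2-D cell the 1-D travel-time-model concentration evaluated
    at that cell's mean groundwater age, by linear interpolation. The output
    has the same shape as the 2-D age field."""
    return np.interp(age_2d, tau_1d, conc_1d)
```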

  17. Fully implicit Particle-in-cell algorithms for multiscale plasma simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, Luis

The outline of the paper is as follows: particle-in-cell (PIC) methods for fully ionized collisionless plasmas; explicit vs. implicit PIC; 1D electrostatic implicit PIC (charge and energy conservation, moment-based acceleration); and generalization to multi-D electromagnetic PIC with the Vlasov-Darwin model (review and motivation for the Darwin model; conservation properties of energy, charge, and canonical momenta; and numerical benchmarks). The author demonstrates a fully implicit, fully nonlinear, multidimensional PIC formulation that features exact local charge conservation (via a novel particle mover strategy), exact global energy conservation (no particle self-heating or self-cooling), an adaptive particle orbit integrator to control errors in momentum conservation, and conservation of canonical momenta (EM-PIC only, reduced dimensionality). The approach is free of numerical instabilities even for ω_pe Δt ≫ 1 and Δx ≫ λ_D. It requires many fewer degrees of freedom than explicit PIC for comparable accuracy in challenging problems, and significant CPU gains over explicit PIC have been demonstrated. The method has much potential for efficiency gains over explicit PIC in long-time-scale applications. Moment-based acceleration is effective in minimizing N_FE, leading to an optimal algorithm.

  18. Moderating Effects of Mathematics Anxiety on the Effectiveness of Explicit Timing

    ERIC Educational Resources Information Center

    Grays, Sharnita D.; Rhymer, Katrina N.; Swartzmiller, Melissa D.

    2017-01-01

    Explicit timing is an empirically validated intervention to increase problem completion rates by exposing individuals to a stopwatch and explicitly telling them of the time limit for the assignment. Though explicit timing has proven to be effective for groups of students, some students may not respond well to explicit timing based on factors such…

  19. Highly parallel implementation of non-adiabatic Ehrenfest molecular dynamics

    NASA Astrophysics Data System (ADS)

    Kanai, Yosuke; Schleife, Andre; Draeger, Erik; Anisimov, Victor; Correa, Alfredo

    2014-03-01

While the adiabatic Born-Oppenheimer approximation tremendously lowers computational effort, many questions in modern physics, chemistry, and materials science require an explicit description of coupled non-adiabatic electron-ion dynamics. Electronic stopping, i.e. the energy transfer of a fast projectile atom to the electronic system of the target material, is a notorious example. We recently implemented real-time time-dependent density functional theory based on the plane-wave pseudopotential formalism in the Qbox/qb@ll codes. We demonstrate that explicit integration using a fourth-order Runge-Kutta scheme is very suitable for modern highly parallelized supercomputers. Applying the new implementation to systems with hundreds of atoms and thousands of electrons, we achieved excellent performance and scalability on a large number of nodes, both on the BlueGene-based "Sequoia" system at LLNL and on the Cray architecture of "Blue Waters" at NCSA. As an example, we discuss our work on computing the electronic stopping power of aluminum and gold for hydrogen projectiles, showing excellent agreement with experiment. These first-principles calculations allow us to gain important insight into the fundamental physics of electronic stopping.

  20. How Far Is "Near"? Inferring Distance from Spatial Descriptions

    ERIC Educational Resources Information Center

    Carlson, Laura A.; Covey, Eric S.

    2005-01-01

    A word may mean different things in different contexts. The current study explored the changing denotations of spatial terms, focusing on how the distance inferred from a spatial description varied as a function of the size of the objects being spatially related. We examined both terms that explicitly convey distance (i.e., topological terms such…

  1. Student perceptions regarding the usefulness of explicit discussion of "Structure of the Observed Learning Outcome" taxonomy.

    PubMed

    Prakash, E S; Narayan, K A; Sethuraman, K R

    2010-09-01

One method of grading responses of the descriptive type is by using Structure of Observed Learning Outcomes (SOLO) taxonomy. The basis of this study was the expectation that if students were oriented to SOLO taxonomy, it would provide them with an opportunity to understand some of the factors that teachers consider while grading descriptive responses and possibly to develop strategies to improve scores. We first sampled the perceptions of 68 second-year undergraduate medical students doing the Respiratory System course regarding the usefulness of explicit discussion of SOLO taxonomy. Subsequently, in a distinct cohort of 20 second-year medical students doing the Central Nervous System course, we sought to determine whether explicit illustration of SOLO taxonomy combined with some advice on better answering descriptive test questions (to an experimental group) resulted in better student scores in a continuous assessment test compared with providing advice for better answering test questions but without any reference to SOLO taxonomy (the control group). Student ratings of the clarity of the presentation on SOLO taxonomy appeared satisfactory to the authors, as was student understanding of our presentation. The majority of participants indicated that knowledge of SOLO taxonomy would help them study and prepare better answers for questions of the descriptive type. Although scores in the experimental and control groups were comparable, this experience nonetheless provided us with the motivation to orient students to SOLO taxonomy early in the medical program and to further research the factors that affect students' development of strategies based on knowledge of SOLO taxonomy.

  2. Fokker-Planck description for the queue dynamics of large tick stocks.

    PubMed

    Garèche, A; Disdier, G; Kockelkoren, J; Bouchaud, J-P

    2013-09-01

    Motivated by empirical data, we develop a statistical description of the queue dynamics for large tick assets based on a two-dimensional Fokker-Planck (diffusion) equation. Our description explicitly includes state dependence, i.e., the fact that the drift and diffusion depend on the volume present on both sides of the spread. "Jump" events, corresponding to sudden changes of the best limit price, must also be included as birth-death terms in the Fokker-Planck equation. All quantities involved in the equation can be calibrated using high-frequency data on the best quotes. One of our central findings is that the dynamical process is approximately scale invariant, i.e., the only relevant variable is the ratio of the current volume in the queue to its average value. While the latter shows intraday seasonalities and strong variability across stocks and time periods, the dynamics of the rescaled volumes is universal. In terms of rescaled volumes, we found that the drift has a complex two-dimensional structure, which is a sum of a gradient contribution and a rotational contribution, both stable across stocks and time. This drift term is entirely responsible for the dynamical correlations between the ask queue and the bid queue.

  3. Fokker-Planck description for the queue dynamics of large tick stocks

    NASA Astrophysics Data System (ADS)

    Garèche, A.; Disdier, G.; Kockelkoren, J.; Bouchaud, J.-P.

    2013-09-01

    Motivated by empirical data, we develop a statistical description of the queue dynamics for large tick assets based on a two-dimensional Fokker-Planck (diffusion) equation. Our description explicitly includes state dependence, i.e., the fact that the drift and diffusion depend on the volume present on both sides of the spread. “Jump” events, corresponding to sudden changes of the best limit price, must also be included as birth-death terms in the Fokker-Planck equation. All quantities involved in the equation can be calibrated using high-frequency data on the best quotes. One of our central findings is that the dynamical process is approximately scale invariant, i.e., the only relevant variable is the ratio of the current volume in the queue to its average value. While the latter shows intraday seasonalities and strong variability across stocks and time periods, the dynamics of the rescaled volumes is universal. In terms of rescaled volumes, we found that the drift has a complex two-dimensional structure, which is a sum of a gradient contribution and a rotational contribution, both stable across stocks and time. This drift term is entirely responsible for the dynamical correlations between the ask queue and the bid queue.

  4. Solvent effects on the properties of hyperbranched polythiophenes.

    PubMed

    Torras, Juan; Zanuy, David; Aradilla, David; Alemán, Carlos

    2016-09-21

The structural and electronic properties of all-thiophene dendrimers and dendrons in solution have been evaluated using very different theoretical approaches based on quantum mechanical (QM) and hybrid QM/molecular mechanics (MM) methodologies: (i) calculations on minimum energy conformations using an implicit solvation model in combination with density functional theory (DFT) or time-dependent DFT (TD-DFT) methods; (ii) hybrid QM/MM calculations, in which the solute and the solvent molecules are represented at the DFT level and as point charges, respectively, on snapshots extracted from classical molecular dynamics (MD) simulations using explicit solvent molecules; and (iii) QM/MM-MD trajectories in which the solute is described at the DFT or TD-DFT level and the explicit solvent molecules are represented using classical force fields. Calculations have been performed in dichloromethane, tetrahydrofuran, and dimethylformamide. A comparison of the results obtained using the different approaches with the available experimental data indicates that the incorporation of effects associated with both the conformational dynamics of the dendrimer and the explicit solvent molecules is strictly necessary to satisfactorily reproduce the properties of the investigated systems. Accordingly, QM/MM-MD simulations are able to capture such effects, providing a reliable description of the relationship between electronic properties and conformational flexibility in all-Th dendrimers.

  5. Application of Knowledge Management: Pressing questions and practical answers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FROMM-LEWIS,MICHELLE

    2000-02-11

Sandia National Laboratories is working on ways to increase production using Knowledge Management. Knowledge Management means: finding ways to create, identify, capture, and distribute organizational knowledge to the people who need it; helping information and knowledge flow to the right people at the right time so they can act more efficiently and effectively; and recognizing, documenting, and distributing both explicit knowledge (quantifiable and definable knowledge that makes up reports, manuals, instructional materials, etc.) and tacit knowledge (knowledge of doing and performing, a combination of experience, hunches, intuition, emotions, and beliefs) in order to improve organizational performance. It is a systematic approach to finding, understanding, and using knowledge to create value.

  6. Feature highlighting enhances learning of a complex natural-science category.

    PubMed

    Miyatsu, Toshiya; Gouravajhala, Reshma; Nosofsky, Robert M; McDaniel, Mark A

    2018-04-26

    Learning naturalistic categories, which tend to have fuzzy boundaries and vary on many dimensions, can often be harder than learning well defined categories. One method for facilitating the category learning of naturalistic stimuli may be to provide explicit feature descriptions that highlight the characteristic features of each category. Although this method is commonly used in textbooks and classrooms, theoretically it remains uncertain whether feature descriptions should advantage learning complex natural-science categories. In three experiments, participants were trained on 12 categories of rocks, either without or with a brief description highlighting key features of each category. After training, they were tested on their ability to categorize both old and new rocks from each of the categories. Providing feature descriptions as a caption under a rock image failed to improve category learning relative to providing only the rock image with its category label (Experiment 1). However, when these same feature descriptions were presented such that they were explicitly linked to the relevant parts of the rock image (feature highlighting), participants showed significantly higher performance on both immediate generalization to new rocks (Experiment 2) and generalization after a 2-day delay (Experiment 3). Theoretical and practical implications are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  7. Replica exchange with solute tempering: A method for sampling biological systems in explicit water

    NASA Astrophysics Data System (ADS)

    Liu, Pu; Kim, Byungchan; Friesner, Richard A.; Berne, B. J.

    2005-09-01

An innovative replica exchange (parallel tempering) method called replica exchange with solute tempering (REST) for the efficient sampling of aqueous protein solutions is presented here. The method bypasses the poor scaling with system size of standard replica exchange and thus reduces the number of replicas (parallel processes) that must be used. This reduction is accomplished by deforming the Hamiltonian function for each replica in such a way that the acceptance probability for the exchange of replica configurations does not depend on the number of explicit water molecules in the system. For proof of concept, REST is compared with standard replica exchange for an alanine dipeptide molecule in water. The comparisons confirm that REST greatly reduces the number of CPUs required by regular replica exchange and increases the sampling efficiency. This method reduces the CPU time required for calculating thermodynamic averages and for the ab initio folding of proteins in explicit water. Author contributions: B.J.B. designed research; P.L. and B.K. performed research; P.L. and B.K. analyzed data; and P.L., B.K., R.A.F., and B.J.B. wrote the paper. Abbreviations: REST, replica exchange with solute tempering; REM, replica exchange method; MD, molecular dynamics. P.L. and B.K. contributed equally to this work.
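For context, the replica-swap step in standard parallel tempering uses the Metropolis criterion sketched below; REST deforms the Hamiltonian so that only solute-related energy terms enter this expression, which is why the acceptance no longer depends on the number of water molecules. A minimal sketch under those assumptions, not the paper's code:

```python
import math
import random

def swap_accept(beta_i, beta_j, energy_i, energy_j, rng=random.random):
    """Metropolis criterion for exchanging configurations between replicas i
    and j: accept with probability min(1, exp((beta_i - beta_j) *
    (energy_i - energy_j))). In REST, only solute-solute and solute-water
    energy terms would enter energy_i and energy_j."""
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return delta >= 0 or rng() < math.exp(delta)
```

Because a full explicit-water energy difference grows with system size, the acceptance of standard replica exchange decays with the number of waters; restricting the energies to solute terms removes that dependence.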

  8. Density-Functional Theory with Optimized Effective Potential and Self-Interaction Correction for the Double Ionization of He and Be Atoms

    NASA Astrophysics Data System (ADS)

    Heslar, John; Telnov, Dmitry; Chu, Shih-I.

    2012-06-01

We present a self-interaction-free time-dependent density-functional theory (TDDFT) for the treatment of double ionization processes of many-electron systems. The method is based on the Krieger-Li-Iafrate (KLI) treatment of the optimized effective potential (OEP) theory and the incorporation of an explicit self-interaction correction (SIC) term. In the framework of the time-dependent density functional theory, we have performed 3D calculations of double ionization of He and Be atoms by strong near-infrared laser fields. We make use of the exchange-correlation potential with the integer discontinuity, which improves the description of the double ionization process. We found that a proper description of the double ionization requires the TDDFT exchange-correlation potential with the discontinuity with respect to the variation of the spin particle numbers (SPN) only. The results for the intensity-dependent probabilities of single and double ionization are presented and reproduce the famous "knee" structure.

  9. Energy efficient quantum machines

    NASA Astrophysics Data System (ADS)

    Abah, Obinna; Lutz, Eric

    2017-05-01

    We investigate the performance of a quantum thermal machine operating in finite time based on shortcut-to-adiabaticity techniques. We compute efficiency and power for a paradigmatic harmonic quantum Otto engine by taking the energetic cost of the shortcut driving explicitly into account. We demonstrate that shortcut-to-adiabaticity machines outperform conventional ones for fast cycles. We further derive generic upper bounds on both quantities, valid for any heat engine cycle, using the notion of quantum speed limit for driven systems. We establish that these quantum bounds are tighter than those stemming from the second law of thermodynamics.

  10. Simplified Least Squares Shadowing sensitivity analysis for chaotic ODEs and PDEs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chater, Mario, E-mail: chaterm@mit.edu; Ni, Angxiu, E-mail: niangxiu@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu

    This paper develops a variant of the Least Squares Shadowing (LSS) method, which has successfully computed the derivative for several chaotic ODEs and PDEs. The development in this paper aims to simplify Least Squares Shadowing method by improving how time dilation is treated. Instead of adding an explicit time dilation term as in the original method, the new variant uses windowing, which can be more efficient and simpler to implement, especially for PDEs.

  11. Nonuniform grid implicit spatial finite difference method for acoustic wave modeling in tilted transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Chu, Chunlei; Stoffa, Paul L.

    2012-01-01

    Discrete earth models are commonly represented by uniform structured grids. In order to ensure accurate numerical description of all wave components propagating through these uniform grids, the grid size must be determined by the slowest velocity of the entire model. Consequently, high velocity areas are always oversampled, which inevitably increases the computational cost. A practical solution to this problem is to use nonuniform grids. We propose a nonuniform grid implicit spatial finite difference method which utilizes nonuniform grids to obtain high efficiency and relies on implicit operators to achieve high accuracy. We present a simple way of deriving implicit finite difference operators of arbitrary stencil widths on general nonuniform grids for the first and second derivatives and, as a demonstration example, apply these operators to the pseudo-acoustic wave equation in tilted transversely isotropic (TTI) media. We propose an efficient gridding algorithm that can be used to convert uniformly sampled models onto vertically nonuniform grids. We use a 2D TTI salt model to demonstrate its effectiveness and show that the nonuniform grid implicit spatial finite difference method can produce highly accurate seismic modeling results with enhanced efficiency, compared to uniform grid explicit finite difference implementations.
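As a small explicit example of derivative operators on nonuniform grids, the three-point first-derivative weights follow from matching Taylor expansions on an uneven stencil; the paper derives implicit (compact) operators of arbitrary stencil width by the same matching procedure, so this is only the simplest special case:

```python
def d1_coeffs(h_minus, h_plus):
    """First-derivative weights (w_minus, w_0, w_plus) for the nonuniform
    stencil {x0 - h_minus, x0, x0 + h_plus}, obtained by matching Taylor
    expansions of f at the three points; exact for polynomials up to degree 2."""
    w_minus = -h_plus / (h_minus * (h_minus + h_plus))
    w_plus = h_minus / (h_plus * (h_minus + h_plus))
    w_0 = -(w_minus + w_plus)  # weights of a derivative operator sum to zero
    return w_minus, w_0, w_plus
```

With h_minus = h_plus = h the weights reduce to the familiar central difference (-1/(2h), 0, 1/(2h)).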

  12. Time-dependent density-functional theory with optimized effective potential and self-interaction correction and derivative discontinuity for the treatment of double ionization of He and Be atoms in intense laser fields

    NASA Astrophysics Data System (ADS)

    Heslar, John; Telnov, Dmitry A.; Chu, Shih-I.

    2013-05-01

    We present a self-interaction-free time-dependent density-functional theory (TDDFT) for the treatment of double-ionization processes of many-electron systems. The method is based on the extension of the Krieger-Li-Iafrate (KLI) treatment of the optimized effective potential (OEP) theory and the incorporation of an explicit self-interaction correction (SIC) term. In the framework of the time-dependent density functional theory, we have performed three-dimensional (3D) calculations of double ionization of He and Be atoms by intense near-infrared laser fields. We make use of the exchange-correlation potential with the integer discontinuity which improves the description of the double-ionization process. We found that a proper description of the double ionization requires the TDDFT exchange-correlation potential with the discontinuity with respect to the variation of the total particle number (TPN). The results for the intensity-dependent rates of double ionization of He and Be atoms are presented.

  13. Utilizing the PPET Mnemonic to Guide Classroom-Level PBIS for Students with or At Risk for EBD across Classroom Settings

    ERIC Educational Resources Information Center

    Hunter, William C.; Barton-Arwood, Sally; Jasper, Andrea; Murley, Renee; Clements, Tarol

    2017-01-01

    In this article, the authors discuss how the emphasis on classroom-level Positive Behavior Interventions and Supports strategies can establish a foundation for an efficient classroom management program and be utilized as a resource. The strategies described are physical classroom, procedures and rules, explicit timing, and transition (PETT…

  14. Approach to equilibrium of a quantum system and generalization of the Montroll-Shuler equation for vibrational relaxation of a molecular oscillator

    NASA Astrophysics Data System (ADS)

    Kenkre, V. M.; Chase, M.

    2017-08-01

    The approach to equilibrium of a quantum mechanical system in interaction with a bath is studied from a practical as well as a conceptual point of view. Explicit memory functions are derived for given models of bath couplings. If the system is a harmonic oscillator representing a molecule in interaction with a reservoir, the generalized master equation derived becomes an extension into the coherent domain of the well-known Montroll-Shuler equation for vibrational relaxation and unimolecular dissociation. A generalization of the Bethe-Teller result regarding energy relaxation is found for short times. The theory has obvious applications to relaxation dynamics at ultra-short times as in observations on the femtosecond time scale and to the investigation of quantum coherence at those short times. While vibrational relaxation in chemical physics is a primary target of the study, another system of interest in condensed matter physics, an electron or hole in a lattice subjected to a strong DC electric field that gives rise to well-known Wannier-Stark ladders, is naturally addressed with the theory. Specific system-bath interactions are explored to obtain explicit details of the dynamics. General phenomenological descriptions of the reservoir are considered rather than specific microscopic realizations.

  15. Quantum dynamics in continuum for proton transport II: Variational solvent-solute interface.

    PubMed

    Chen, Duan; Chen, Zhan; Wei, Guo-Wei

    2012-01-01

    Proton transport plays an important role in biological energy transduction and sensory systems. Therefore, it has attracted much attention in biological science and biomedical engineering in the past few decades. The present work proposes a multiscale/multiphysics model for the understanding of the molecular mechanism of proton transport in transmembrane proteins involving continuum, atomic, and quantum descriptions, assisted with the evolution, formation, and visualization of membrane channel surfaces. We describe proton dynamics quantum mechanically via a new density functional theory based on the Boltzmann statistics, while implicitly model numerous solvent molecules as a dielectric continuum to reduce the number of degrees of freedom. The density of all other ions in the solvent is assumed to obey the Boltzmann distribution in a dynamic manner. The impact of protein molecular structure and its charge polarization on the proton transport is considered explicitly at the atomic scale. A variational solute-solvent interface is designed to separate the explicit molecule and implicit solvent regions. We formulate a total free-energy functional to put proton kinetic and potential energies, the free energy of all other ions, and the polar and nonpolar energies of the whole system on an equal footing. The variational principle is employed to derive coupled governing equations for the proton transport system. Generalized Laplace-Beltrami equation, generalized Poisson-Boltzmann equation, and generalized Kohn-Sham equation are obtained from the present variational framework. The variational solvent-solute interface is generated and visualized to facilitate the multiscale discrete/continuum/quantum descriptions. Theoretical formulations for the proton density and conductance are constructed based on fundamental laws of physics. 
A number of mathematical algorithms, including the Dirichlet-to-Neumann mapping, matched interface and boundary method, Gummel iteration, and Krylov space techniques are utilized to implement the proposed model in a computationally efficient manner. The gramicidin A channel is used to validate the performance of the proposed proton transport model and demonstrate the efficiency of the proposed mathematical algorithms. The proton channel conductances are studied over a number of applied voltages and reference concentrations. A comparison with experimental data verifies the present model predictions and confirms the proposed model. Copyright © 2011 John Wiley & Sons, Ltd.

  16. Mean-Field Models of Structure and Dispersion of Polymer-nanoparticle Mixtures

    DTIC Science & Technology

    2010-07-29

out of the seminal descriptions of the wetting and dewetting of polymer melts on polymer brushes advanced by Leibler and coworkers [118, 119]. Explicitly...using scaling ideas and strong segregation theory calculations they delineated the regions where the matrix polymer wets or dewets the brush. In the...Explicitly, when dewetting of the melt chains is expected (dry brush). In other words, situations involving long matrix polymers and/or densely grafted

  17. A FORTRAN program for calculating nonlinear seismic ground response

    USGS Publications Warehouse

    Joyner, William B.

    1977-01-01

    The program described here was designed for calculating the nonlinear seismic response of a system of horizontal soil layers underlain by a semi-infinite elastic medium representing bedrock. Excitation is a vertically incident shear wave in the underlying medium. The nonlinear hysteretic behavior of the soil is represented by a model consisting of simple linear springs and Coulomb friction elements arranged as shown. A boundary condition is used which takes account of finite rigidity in the elastic substratum. The computations are performed by an explicit finite-difference scheme that proceeds step by step in space and time. A brief program description is provided here with instructions for preparing the input and a source listing. A more detailed discussion of the method is presented elsewhere as is the description of a different program employing implicit integration.
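The explicit step-by-step finite-difference scheme described above can be illustrated with a minimal sketch: a leapfrog update for the linear 1D shear-wave equation on a uniform grid. This is only the linear core; the program's nonlinear spring/friction soil model and its finite-rigidity bedrock boundary condition are omitted, and all grid values below are invented for illustration.

```python
import numpy as np

def step_shear_wave(u_prev, u_curr, c, dx, dt):
    """One explicit leapfrog step of u_tt = c^2 u_xx (linear core only;
    the actual program adds hysteretic soil behavior and an elastic
    substratum boundary condition)."""
    u_next = np.empty_like(u_curr)
    lam2 = (c * dt / dx) ** 2          # squared Courant number, must be <= 1
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + lam2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    u_next[0] = 0.0          # fixed base (placeholder for the substratum)
    u_next[-1] = u_next[-2]  # free surface: zero shear strain
    return u_next

# march a small displacement pulse a few steps
nx, c, dx = 101, 200.0, 1.0
dt = 0.8 * dx / c                      # respects the explicit stability limit
x = np.arange(nx) * dx
u0 = np.exp(-((x - 50.0) / 5.0) ** 2)
u_prev, u_curr = u0.copy(), u0.copy()  # zero initial velocity
for _ in range(20):
    u_prev, u_curr = u_curr, step_shear_wave(u_prev, u_curr, c, dx, dt)
```

Because the scheme is explicit, each step uses only neighboring values from the previous two time levels, which is what lets the real program proceed "step by step in space and time."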

  18. A jellium model of a catalyst particle in carbon nanotube growth

    NASA Astrophysics Data System (ADS)

    Artyukhov, Vasilii I.; Liu, Mingjie; Penev, Evgeni S.; Yakobson, Boris I.

    2017-06-01

We show how a jellium model can represent a catalyst particle within the density-functional theory based approaches to the growth mechanism of carbon nanotubes (CNTs). The advantage of jellium is an abridged, less computationally taxing description of the multi-atom metal particle, while at the same time avoiding the uncertainty of selecting a particular atomic geometry of either a solid or ever-changing liquid catalyst particle. A careful choice of jellium sphere size and its electron density as a descriptive parameter allows one to calculate the CNT-metal interface energies close to explicit full atomistic models. Further, we show that using jellium permits computing and comparing the formation of topological defects (sole pentagons or heptagons, the culprits of growth termination) as well as pentagon-heptagon pairs 5|7 (known as chirality-switching dislocation).

  19. Studies of implicit and explicit solution techniques in transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Robinson, J. C.

    1982-01-01

Studies aimed at an increase in the efficiency of calculating transient temperature fields in complex aerospace vehicle structures are reported. The advantages and disadvantages of explicit and implicit algorithms are discussed and a promising set of implicit algorithms with variable time steps, known as GEARIB, is described. Test problems, used for evaluating and comparing various algorithms, are discussed and finite element models of the configurations are described. These problems include a coarse model of the Space Shuttle wing, an insulated frame test article, a metallic panel for a thermal protection system, and detailed models of sections of the Space Shuttle wing. Results generally indicate a preference for implicit over explicit algorithms for transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems such as insulated metal structures). The effects on algorithm performance of different models of an insulated cylinder are demonstrated. The stiffness of the problem is highly sensitive to modeling details and careful modeling can reduce the stiffness of the equations to the extent that explicit methods may become the best choice. Preliminary applications of a mixed implicit-explicit algorithm and operator splitting techniques for speeding up the solution of the algebraic equations are also described.

  20. Studies of implicit and explicit solution techniques in transient thermal analysis of structures

    NASA Astrophysics Data System (ADS)

    Adelman, H. M.; Haftka, R. T.; Robinson, J. C.

    1982-08-01

Studies aimed at an increase in the efficiency of calculating transient temperature fields in complex aerospace vehicle structures are reported. The advantages and disadvantages of explicit and implicit algorithms are discussed and a promising set of implicit algorithms with variable time steps, known as GEARIB, is described. Test problems, used for evaluating and comparing various algorithms, are discussed and finite element models of the configurations are described. These problems include a coarse model of the Space Shuttle wing, an insulated frame test article, a metallic panel for a thermal protection system, and detailed models of sections of the Space Shuttle wing. Results generally indicate a preference for implicit over explicit algorithms for transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems such as insulated metal structures). The effects on algorithm performance of different models of an insulated cylinder are demonstrated. The stiffness of the problem is highly sensitive to modeling details and careful modeling can reduce the stiffness of the equations to the extent that explicit methods may become the best choice. Preliminary applications of a mixed implicit-explicit algorithm and operator splitting techniques for speeding up the solution of the algebraic equations are also described.

  1. A Navier-Stokes Chimera Code on the Connection Machine CM-5: Design and Performance

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1994-01-01

    We have implemented a three-dimensional compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the 'chimera' approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. A parallel machine like the CM-5 is well-suited for finite-difference methods on structured grids. The regular pattern of connections of a structured mesh maps well onto the architecture of the machine. So the first design choice, finite differences on a structured mesh, is natural. We use centered differences in space, with added artificial dissipation terms. When numerically solving the Navier-Stokes equations, there are liable to be some mesh cells near a solid body that are small in at least one direction. This mesh cell geometry can impose a very severe CFL (Courant-Friedrichs-Lewy) condition on the time step for explicit time-stepping methods. Thus, though explicit time-stepping is well-suited to the architecture of the machine, we have adopted implicit time-stepping. We have further taken the approximate factorization approach. This creates the need to solve large banded linear systems and creates the first possible barrier to an efficient algorithm. To overcome this first possible barrier we have considered two options. The first is just to solve the banded linear systems with data spread over the whole machine, using whatever fast method is available. This option is adequate for solving scalar tridiagonal systems, but for scalar pentadiagonal or block tridiagonal systems it is somewhat slower than desired. The second option is to 'transpose' the flow and geometry variables as part of the time-stepping process: Start with x-lines of data in-processor. 
Form explicit terms in x, then transpose so y-lines of data are in-processor. Form explicit terms in y, then transpose so z-lines are in processor. Form explicit terms in z, then solve linear systems in the z-direction. Transpose to the y-direction, then solve linear systems in the y-direction. Finally transpose to the x direction and solve linear systems in the x-direction. This strategy avoids inter-processor communication when differencing and solving linear systems, but requires a large amount of communication when doing the transposes. The transpose method is more efficient than the non-transpose strategy when dealing with scalar pentadiagonal or block tridiagonal systems. For handling geometrically complex problems the chimera strategy was adopted. For multiple zone cases we compute on each zone sequentially (using the whole parallel machine), then send the chimera interpolation data to a distributed data structure (array) laid out over the whole machine. This information transfer implies an irregular communication pattern, and is the second possible barrier to an efficient algorithm. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran. We make use of the Connection Machine Scientific Software Library (CMSSL) for the linear solver and array transpose operations.
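The scalar tridiagonal line solves mentioned above (delegated to CMSSL on the CM-5) reduce, per grid line, to the classic Thomas algorithm. A minimal serial sketch, purely for illustration of the per-line solve, with an invented well-conditioned test system:

```python
import numpy as np

def thomas(sub, diag, sup, rhs):
    """Serial Thomas algorithm for a tridiagonal system: sub-diagonal `sub`
    (sub[0] unused), diagonal `diag`, super-diagonal `sup` (sup[-1] unused),
    right-hand side `rhs`.  The CM-5 code used parallel CMSSL routines for
    this; the sketch shows only the underlying recurrence."""
    n = len(rhs)
    cp = np.empty(n); dp = np.empty(n)
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):                       # forward elimination
        denom = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / denom if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# invented diagonally dominant system to exercise the solver
n = 6
rng = np.random.default_rng(0)
sub, diag, sup = rng.random(n), rng.random(n) + 4.0, rng.random(n)
rhs = rng.random(n)
x = thomas(sub, diag, sup, rhs)
dense = np.diag(diag) + np.diag(sub[1:], -1) + np.diag(sup[:-1], 1)
residual = np.abs(dense @ x - rhs).max()
```

The O(n) forward/backward sweep is inherently sequential along each line, which is exactly why the transpose strategy described above pays off: it puts each line entirely in-processor before solving.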

  2. The Long and Short of It: Closing the Description-Experience "Gap" by Taking the Long-Run View

    ERIC Educational Resources Information Center

    Camilleri, Adrian R.; Newell, Ben R.

    2013-01-01

    Previous research has shown that many choice biases are attenuated when short-run decisions are reframed to the long run. However, this literature has been limited to description-based choice tasks in which possible outcomes and their probabilities are explicitly specified. A recent literature has emerged showing that many core results found using…

  3. Student Perceptions Regarding the Usefulness of Explicit Discussion of "Structure of the Observed Learning Outcome" Taxonomy

    ERIC Educational Resources Information Center

    Prakash, E. S.; Narayan, K. A.; Sethuraman, K. R.

    2010-01-01

    One method of grading responses of the descriptive type is by using Structure of Observed Learning Outcomes (SOLO) taxonomy. The basis of this study was the expectation that if students were oriented to SOLO taxonomy, it would provide them an opportunity to understand some of the factors that teachers consider while grading descriptive responses…

  4. Efficient protocols for Stirling heat engines at the micro-scale

    NASA Astrophysics Data System (ADS)

    Muratore-Ginanneschi, Paolo; Schwieger, Kay

    2015-10-01

We investigate the thermodynamic efficiency of sub-micro-scale Stirling heat engines operating under the conditions described by overdamped stochastic thermodynamics. We show how to construct optimal protocols such that at maximum power the efficiency attains, for constant isotropic mobility, the universal law η = 2ηC/(4 - ηC), where ηC is the efficiency of an ideal Carnot cycle. We show that these protocols are specified by the solution of an optimal mass transport problem. Such a solution can be determined explicitly using well-known Monge-Ampère-Kantorovich reconstruction algorithms. Furthermore, we show that the same law describes the efficiency of heat engines operating at maximum work over short time periods. Finally, we illustrate the straightforward extension of these results to cases when the mobility is anisotropic and temperature dependent.
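As a quick numeric illustration of the quoted law η = 2ηC/(4 - ηC), with reservoir temperatures invented for the example:

```python
def eta_max_power(eta_carnot):
    """Efficiency at maximum power of the optimal overdamped Stirling
    protocol: eta = 2*eta_C / (4 - eta_C)."""
    return 2.0 * eta_carnot / (4.0 - eta_carnot)

# illustrative reservoir temperatures (not from the paper)
t_hot, t_cold = 400.0, 300.0
eta_c = 1.0 - t_cold / t_hot      # Carnot efficiency = 0.25
eta = eta_max_power(eta_c)        # efficiency at maximum power
```

For small ηC the law reduces to ηC/2, the familiar leading-order behavior of efficiency at maximum power, and η always stays below the Carnot bound.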

  5. Rate-loss analysis of an efficient quantum repeater architecture

    NASA Astrophysics Data System (ADS)

    Guha, Saikat; Krovi, Hari; Fuchs, Christopher A.; Dutton, Zachary; Slater, Joshua A.; Simon, Christoph; Tittel, Wolfgang

    2015-08-01

We analyze an entanglement-based quantum key distribution (QKD) architecture that uses a linear chain of quantum repeaters employing photon-pair sources, spectral-multiplexing, linear-optic Bell-state measurements, multimode quantum memories, and classical-only error correction. Assuming perfect sources, we find an exact expression for the secret-key rate, and an analytical description of how errors propagate through the repeater chain, as a function of various loss-and-noise parameters of the devices. We show via an explicit analytical calculation, which separately addresses the effects of the principal nonidealities, that this scheme achieves a secret-key rate that surpasses the Takeoka-Guha-Wilde bound—a recently found fundamental limit to the rate-vs-loss scaling achievable by any QKD protocol over a direct optical link—thereby providing one of the first rigorous proofs of the efficacy of a repeater protocol. We explicitly calculate the end-to-end shared noisy quantum state generated by the repeater chain, which could be useful for analyzing the performance of other non-QKD quantum protocols that require establishing long-distance entanglement. We evaluate that shared state's fidelity and the achievable entanglement-distillation rate, as a function of the number of repeater nodes, total range, and various loss-and-noise parameters of the system. We extend our theoretical analysis to encompass sources with nonzero two-pair-emission probability, using an efficient exact numerical evaluation of the quantum state propagation and measurements. We expect our results to spur formal rate-loss analysis of other repeater protocols and also to provide useful abstractions to seed analyses of quantum networks of complex topologies.

  6. Resolution-of-identity stochastic time-dependent configuration interaction for dissipative electron dynamics in strong fields.

    PubMed

    Klinkusch, Stefan; Tremblay, Jean Christophe

    2016-05-14

    In this contribution, we introduce a method for simulating dissipative, ultrafast many-electron dynamics in intense laser fields. The method is based on the norm-conserving stochastic unraveling of the dissipative Liouville-von Neumann equation in its Lindblad form. The N-electron wave functions sampling the density matrix are represented in the basis of singly excited configuration state functions. The interaction with an external laser field is treated variationally and the response of the electronic density is included to all orders in this basis. The coupling to an external environment is included via relaxation operators inducing transition between the configuration state functions. Single electron ionization is represented by irreversible transition operators from the ionizing states to an auxiliary continuum state. The method finds its efficiency in the representation of the operators in the interaction picture, where the resolution-of-identity is used to reduce the size of the Hamiltonian eigenstate basis. The zeroth-order eigenstates can be obtained either at the configuration interaction singles level or from a time-dependent density functional theory reference calculation. The latter offers an alternative to explicitly time-dependent density functional theory which has the advantage of remaining strictly valid for strong field excitations while improving the description of the correlation as compared to configuration interaction singles. The method is tested on a well-characterized toy system, the excitation of the low-lying charge transfer state in LiCN.

  7. Resolution-of-identity stochastic time-dependent configuration interaction for dissipative electron dynamics in strong fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klinkusch, Stefan; Tremblay, Jean Christophe

In this contribution, we introduce a method for simulating dissipative, ultrafast many-electron dynamics in intense laser fields. The method is based on the norm-conserving stochastic unraveling of the dissipative Liouville-von Neumann equation in its Lindblad form. The N-electron wave functions sampling the density matrix are represented in the basis of singly excited configuration state functions. The interaction with an external laser field is treated variationally and the response of the electronic density is included to all orders in this basis. The coupling to an external environment is included via relaxation operators inducing transition between the configuration state functions. Single electron ionization is represented by irreversible transition operators from the ionizing states to an auxiliary continuum state. The method finds its efficiency in the representation of the operators in the interaction picture, where the resolution-of-identity is used to reduce the size of the Hamiltonian eigenstate basis. The zeroth-order eigenstates can be obtained either at the configuration interaction singles level or from a time-dependent density functional theory reference calculation. The latter offers an alternative to explicitly time-dependent density functional theory which has the advantage of remaining strictly valid for strong field excitations while improving the description of the correlation as compared to configuration interaction singles. The method is tested on a well-characterized toy system, the excitation of the low-lying charge transfer state in LiCN.

  8. Enforcing the Courant-Friedrichs-Lewy condition in explicitly conservative local time stepping schemes

    NASA Astrophysics Data System (ADS)

    Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.

    2018-04-01

    An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.

  9. Variable selection in discrete survival models including heterogeneity.

    PubMed

    Groll, Andreas; Tutz, Gerhard

    2017-04-01

Several variable selection procedures are available for continuous time-to-event data. However, if time is measured in a discrete way and therefore many ties occur, models for continuous time are inadequate. We propose penalized likelihood methods that perform efficient variable selection in discrete survival modeling with explicit modeling of the heterogeneity in the population. The method is based on a combination of ridge and lasso type penalties that are tailored to the case of discrete survival. The performance is studied in simulation studies and an application to the birth of the first child.
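Discrete survival models of this kind are commonly fit by expanding the data into person-period form and running a penalized binary regression. The sketch below uses a plain ridge-penalized logistic fit as a stand-in for the paper's tailored ridge/lasso penalties; the tiny data set is invented:

```python
import numpy as np

def person_period(times, events, x):
    """Expand (discrete time, event indicator, covariates) survival data
    into person-period rows: one binary outcome per subject per period at
    risk, with y = 1 only in the period where the event occurs."""
    rows, periods, y = [], [], []
    for t, e, xi in zip(times, events, x):
        for k in range(1, t + 1):
            rows.append(xi)
            periods.append(k)
            y.append(1 if (e and k == t) else 0)
    return np.array(rows, float), np.array(periods), np.array(y, float)

def fit_ridge_logit(X, y, lam=1.0, iters=500, lr=0.1):
    """Gradient descent on a ridge-penalized logistic likelihood -- a
    minimal stand-in for the paper's penalized likelihood machinery
    (no period-specific intercepts, no lasso part)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * (X.T @ (p - y) / len(y) + lam * w / len(y))
    return w

# invented example: 4 subjects, observed times and censoring indicators
times, events = [3, 2, 4, 1], [1, 0, 1, 1]
x = [[1.0], [0.0], [1.0], [0.0]]
X, periods, y = person_period(times, events, x)
w = fit_ridge_logit(X, y)
```

The censored subject contributes only zero rows, which is how ties and censoring are handled naturally in the discrete formulation.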

  10. Multigrid Acceleration of Time-Accurate DNS of Compressible Turbulent Flow

    NASA Technical Reports Server (NTRS)

    Broeze, Jan; Geurts, Bernard; Kuerten, Hans; Streng, Martin

    1996-01-01

    An efficient scheme for the direct numerical simulation of 3D transitional and developed turbulent flow is presented. Explicit and implicit time integration schemes for the compressible Navier-Stokes equations are compared. The nonlinear system resulting from the implicit time discretization is solved with an iterative method and accelerated by the application of a multigrid technique. Since we use central spatial discretizations and no artificial dissipation is added to the equations, the smoothing method is less effective than in the more traditional use of multigrid in steady-state calculations. Therefore, a special prolongation method is needed in order to obtain an effective multigrid method. This simulation scheme was studied in detail for compressible flow over a flat plate. In the laminar regime and in the first stages of turbulent flow the implicit method provides a speed-up of a factor 2 relative to the explicit method on a relatively coarse grid. At increased resolution this speed-up is enhanced correspondingly.

  11. Stability analysis of Eulerian-Lagrangian methods for the one-dimensional shallow-water equations

    USGS Publications Warehouse

    Casulli, V.; Cheng, R.T.

    1990-01-01

In this paper stability and error analyses are discussed for some finite difference methods when applied to the one-dimensional shallow-water equations. Two finite difference formulations, which are based on a combined Eulerian-Lagrangian approach, are discussed. In the first part of this paper the results of numerical analyses for an explicit Eulerian-Lagrangian method (ELM) have shown that the method is unconditionally stable. This method, which is a generalized fixed grid method of characteristics, covers the Courant-Isaacson-Rees method as a special case. Some artificial viscosity is introduced by this scheme. However, because the method is unconditionally stable, the artificial viscosity can be brought under control either by reducing the spatial increment or by increasing the size of the time step. The second part of the paper discusses a class of semi-implicit finite difference methods for the one-dimensional shallow-water equations. This method, when the Eulerian-Lagrangian approach is used for the convective terms, is also unconditionally stable and highly accurate for small space increments or large time steps. The semi-implicit methods seem to be more computationally efficient than the explicit ELM; at each time step a single tridiagonal system of linear equations is solved. The combined explicit and implicit ELM is best used in formulating a solution strategy for solving a network of interconnected channels. The explicit ELM is used at channel junctions for each time step. The semi-implicit method is then applied to the interior points in each channel segment. Following this solution strategy, the channel network problem can be reduced to a set of independent one-dimensional open-channel flow problems. Numerical results support properties given by the stability and error analyses. © 1990.
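The Eulerian-Lagrangian treatment of convection traces a characteristic back from each grid node and interpolates there, which is what makes the scheme stable for any time step, at the cost of some interpolation-induced artificial viscosity. A minimal 1D sketch with linear interpolation (grid and velocity values are invented):

```python
import numpy as np

def elm_advect(u, vel, dx, dt):
    """One Eulerian-Lagrangian (semi-Lagrangian) advection step: trace the
    characteristic from each node back a distance vel*dt and linearly
    interpolate u at the departure point.  Stable for any dt; the linear
    interpolation is the source of the scheme's artificial viscosity."""
    n = len(u)
    x = np.arange(n) * dx
    x_dep = x - vel * dt                       # departure points
    x_dep = np.clip(x_dep, 0.0, (n - 1) * dx)  # crude inflow boundary
    return np.interp(x_dep, x, u)

u = np.zeros(50)
u[10:20] = 1.0                                  # square pulse
u_new = elm_advect(u, vel=1.0, dx=1.0, dt=3.5)  # Courant number 3.5, still stable
```

Note the Courant number exceeds 1 with no blow-up; an explicit Eulerian upwind scheme would be unstable at this step size.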

  12. Estimating the number of people in crowded scenes

    NASA Astrophysics Data System (ADS)

    Kim, Minjin; Kim, Wonjun; Kim, Changick

    2011-01-01

    This paper presents a method to estimate the number of people in crowded scenes without using explicit object segmentation or tracking. The proposed method consists of three steps as follows: (1) extracting space-time interest points using eigenvalues of the local spatio-temporal gradient matrix, (2) generating crowd regions based on space-time interest points, and (3) estimating the crowd density based on the multiple regression. In experimental results, the efficiency and robustness of our proposed method are demonstrated by using PETS 2009 dataset.
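Step (1) above rests on eigenvalues of the local spatio-temporal gradient matrix. A rough numpy sketch of that structure-tensor computation on a (t, y, x) video volume; the block-averaging window, and the absence of thresholds and Gaussian smoothing, are simplifying assumptions rather than the authors' choices:

```python
import numpy as np

def st_interest_strength(volume, block=2):
    """Smallest eigenvalue of the block-averaged 3x3 spatio-temporal
    structure tensor per cell, usable as an interest strength: large values
    indicate variation along all three space-time directions."""
    gt, gy, gx = np.gradient(volume.astype(float))   # temporal + spatial grads
    g = np.stack([gt, gy, gx], axis=-1)              # (..., 3)
    tensor = g[..., :, None] * g[..., None, :]       # outer products (..., 3, 3)
    # average over non-overlapping block^3 cells (stand-in for smoothing)
    t, y, x = volume.shape
    T = tensor[:t // block * block, :y // block * block, :x // block * block]
    T = T.reshape(t // block, block, y // block, block,
                  x // block, block, 3, 3).mean(axis=(1, 3, 5))
    return np.linalg.eigvalsh(T)[..., 0]             # smallest eigenvalue

rng = np.random.default_rng(1)
vol = rng.random((5, 8, 8))                          # invented toy "video"
strength = st_interest_strength(vol)
```

Without the averaging, each per-voxel tensor is rank one and its smallest eigenvalue is identically zero, so some local aggregation is essential to the eigenvalue test.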

  13. Functional thermo-dynamics: a generalization of dynamic density functional theory to non-isothermal situations.

    PubMed

    Anero, Jesús G; Español, Pep; Tarazona, Pedro

    2013-07-21

We present a generalization of Density Functional Theory (DFT) to non-equilibrium non-isothermal situations. By using the original approach set forth by Gibbs in his consideration of Macroscopic Thermodynamics (MT), we consider a Functional Thermo-Dynamics (FTD) description based on the density field and the energy density field. A crucial ingredient of the theory is an entropy functional, which is a concave functional. Therefore, there is a one-to-one connection between the density and energy fields with the conjugate thermodynamic fields. The connection between the three levels of description (MT, DFT, FTD) is clarified through a bridge theorem that relates the entropy of different levels of description and that constitutes a generalization of Mermin's theorem to arbitrary levels of description whose relevant variables are connected linearly. Although the FTD level of description does not provide any new information about averages and correlations at equilibrium, it is a crucial ingredient for the dynamics in non-equilibrium states. We obtain with the technique of projection operators the set of dynamic equations that describe the evolution of the density and energy density fields from an initial non-equilibrium state towards equilibrium. These equations generalize time dependent density functional theory to non-isothermal situations. We also present an explicit model for the entropy functional for hard spheres.

  14. Spatially explicit shallow landslide susceptibility mapping over large areas

    USGS Publications Warehouse

    Bellugi, Dino; Dietrich, William E.; Stock, Jonathan D.; McKean, Jim; Kazian, Brian; Hargrove, Paul

    2011-01-01

Recent advances in downscaling climate model precipitation predictions now yield spatially explicit patterns of rainfall that could be used to estimate shallow landslide susceptibility over large areas. In California, the United States Geological Survey is exploring community emergency response to the possible effects of a very large simulated storm event and to do so it has generated downscaled precipitation maps for the storm. To predict the corresponding pattern of shallow landslide susceptibility across the state, we have used the model Shalstab (a coupled steady state runoff and infinite slope stability model), which produces spatially explicit estimates of relative potential instability. Such slope stability models that include the effects of subsurface runoff on potentially destabilizing pore pressure evolution require water routing and hence the definition of upslope drainage area to each potential cell. To calculate drainage area efficiently over a large area we developed a parallel framework to scale up Shalstab and specifically introduce a new efficient parallel drainage area algorithm which produces seamless results. The single seamless shallow landslide susceptibility map for all of California was accomplished in a short run time, and indicates that much larger areas can be efficiently modelled. As landslide maps generally overpredict the extent of instability for any given storm, local empirical data on the fraction of predicted unstable cells that failed for observed rainfall intensity can be used to specify the likely extent of hazard for a given storm. This suggests that campaigns to collect local precipitation data and detailed shallow landslide location maps after major storms could be used to calibrate models and improve their use in hazard assessment for individual storms.
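The stability half of a Shalstab-type model can be sketched with the standard cohesionless infinite-slope factor of safety; the density and angle values below are illustrative assumptions, not parameters from the study:

```python
import numpy as np

def factor_of_safety(slope_rad, sat_ratio, phi_rad,
                     rho_s=1700.0, rho_w=1000.0):
    """Cohesionless infinite-slope factor of safety used in Shalstab-type
    models: FS = (1 - m*rho_w/rho_s) * tan(phi)/tan(theta), where m is the
    saturated fraction of the soil column supplied by the hydrologic model.
    FS < 1 flags potential instability.  Densities here are illustrative."""
    return (1.0 - sat_ratio * rho_w / rho_s) * np.tan(phi_rad) / np.tan(slope_rad)

theta = np.deg2rad(30.0)   # hillslope angle (invented)
phi = np.deg2rad(35.0)     # soil friction angle (invented)
fs_dry = factor_of_safety(theta, 0.0, phi)   # dry column
fs_wet = factor_of_safety(theta, 1.0, phi)   # fully saturated column
```

In Shalstab the saturation ratio m is set by steady-state runoff routed over the upslope drainage area, which is why the parallel drainage-area computation described above is the large-area bottleneck.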

  15. The Effect of Implicit and Explicit Feedback: A Study on the Acquisition of Mandarin Classifiers by Chinese Heritage and Non-Heritage Language Learners

    ERIC Educational Resources Information Center

    Han, Ye

    2010-01-01

    Previous studies revealed mixed results in terms of the relative effects of implicit and explicit feedback: some found that explicit feedback worked more efficiently than implicit feedback; others found no difference between the two feedback types. These contrasting results called for further investigations into this issue, particularly examining…

  16. Multiscale time-dependent density functional theory: Demonstration for plasmons.

    PubMed

    Jiang, Jiajian; Abi Mansour, Andrew; Ortoleva, Peter J

    2017-08-07

    Plasmon properties are of significant interest in pure and applied nanoscience. While time-dependent density functional theory (TDDFT) can be used to study plasmons, it becomes impractical for elucidating the effect of size, geometric arrangement, and dimensionality in complex nanosystems. In this study, a new multiscale formalism that addresses this challenge is proposed. This formalism is based on Trotter factorization and the explicit introduction of a coarse-grained (CG) structure function constructed as the Weierstrass transform of the electron wavefunction. This CG structure function is shown to vary on a time scale much longer than that of the latter. A multiscale propagator that coevolves both the CG structure function and the electron wavefunction is shown to bring substantial efficiency over classical propagators used in TDDFT. This efficiency follows from the enhanced numerical stability of the multiscale method and the consequence of larger time steps that can be used in a discrete time evolution. The multiscale algorithm is demonstrated for plasmons in a group of interacting sodium nanoparticles (15-240 atoms), and it achieves improved efficiency over TDDFT without significant loss of accuracy or space-time resolution.
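The Weierstrass transform used to build the CG structure function is a Gaussian convolution, which is what makes the coarse-grained field vary on a much slower scale than the underlying wavefunction. A small numpy sketch on a scalar toy signal (the real object is an electron wavefunction; the grid and smoothing width here are invented):

```python
import numpy as np

def weierstrass_transform(f, x, sigma):
    """Discrete Weierstrass transform: weighted average of f against a
    normalized Gaussian of width sigma centered at each grid point (the
    coarse-graining behind the CG structure function; toy 1D version)."""
    out = np.empty_like(f)
    for i, xi in enumerate(x):
        kern = np.exp(-((x - xi) ** 2) / (2.0 * sigma ** 2))
        kern /= kern.sum()
        out[i] = np.sum(kern * f)
    return out

x = np.linspace(0.0, 2.0 * np.pi, 200)
f = np.sin(8.0 * x) + 0.1 * np.sin(x)   # fast oscillation + slow envelope
cg = weierstrass_transform(f, x, sigma=0.5)
```

The fast k = 8 component is suppressed by roughly exp(-k²σ²/2) while the slow envelope survives, mirroring the separation of scales that lets the multiscale propagator take larger time steps for the CG field.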

  17. Delivering Faster Congestion Feedback with the Mark-Front Strategy

    NASA Technical Reports Server (NTRS)

    Liu, Chunlei; Jain, Raj

    2001-01-01

Computer networks use congestion feedback from routers and destinations to control the transmission load. Delivering timely congestion feedback is essential to network performance: reaction to congestion is more effective when feedback arrives sooner. Current TCP/IP networks use timeouts, duplicate acknowledgement packets (ACKs), and explicit congestion notification (ECN) to deliver congestion feedback, each providing faster feedback than the previous method. In this paper, we propose a mark-front strategy that delivers even faster congestion feedback. With analytical and simulation results, we show that the mark-front strategy reduces the buffer size requirement, improves link efficiency, and provides better fairness among users. Keywords: Explicit Congestion Notification, mark-front, congestion control, buffer size requirement, fairness.

  18. Implicit Kalman filtering

    NASA Technical Reports Server (NTRS)

    Skliar, M.; Ramirez, W. F.

    1997-01-01

For an implicitly defined discrete system, a new algorithm for Kalman filtering is developed and an efficient numerical implementation scheme is proposed. Unlike the traditional explicit approach, the implicit filter can be readily applied to ill-conditioned systems and allows for generalization to descriptor systems. The implementation of the implicit filter depends on the solution of the congruence matrix equation A1 Px A1^T = Py. We develop a general iterative method for the solution of this equation, and prove necessary and sufficient conditions for convergence. It is shown that when the system matrices of an implicit system are sparse, the implicit Kalman filter requires significantly less computer time and storage than the traditional explicit Kalman filter. Simulation results are presented to illustrate and substantiate the theoretical developments.
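The congruence equation above can be recast as an ordinary linear system via vectorization, since vec(A1 Px A1^T) = (A1 ⊗ A1) vec(Px). A minimal dense sketch of that identity (for illustration only; it is not the paper's iterative method, which targets large sparse systems):

```python
import numpy as np

# Hedged sketch: solve the congruence equation A1 @ Px @ A1.T = Py for Px by
# vectorization, using vec(A X A^T) = (A kron A) vec(X) with column-stacking
# vec.  A dense direct solve like this is only viable for small n.
rng = np.random.default_rng(0)
n = 4
A1 = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned example
Px_true = rng.standard_normal((n, n))
Px_true = Px_true @ Px_true.T                      # symmetric covariance-like matrix
Py = A1 @ Px_true @ A1.T

# Column-stacking vec for row-major NumPy arrays: vec(M) = M.T.ravel().
vec = lambda M: M.T.ravel()
Px = np.linalg.solve(np.kron(A1, A1), vec(Py)).reshape(n, n).T

print(np.allclose(Px, Px_true))
```

The Kronecker system has size n^2 by n^2, which is exactly why an iterative method exploiting sparsity, as developed in the paper, pays off for realistic state dimensions.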

  19. A comparison of three-dimensional nonequilibrium solution algorithms applied to hypersonic flows with stiff chemical source terms

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Venkatapathy, Ethiraj

    1993-01-01

Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel (LUSGS), are used to compute nonequilibrium flow around the Apollo 4 return capsule at 62 km altitude. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15, 23, and 30, the LUSGS method produces an eight-order-of-magnitude drop in the L2 norm of the energy residual in one-third to one-half the Cray C-90 computer time required by the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 23 and above. At Mach 40, the performance of the LUSGS algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated, and grid dependency questions are explored.
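The benefit of a point-implicit treatment of a stiff source term can be seen on the scalar model problem du/dt = -k u with large k (a model equation, not the Apollo flow solver): the explicit update is unstable whenever k*dt > 2, while the point-implicit update is stable for any step size.

```python
# Hedged sketch: explicit vs. point-implicit treatment of a stiff linear
# source term du/dt = -k*u.  Explicit Euler needs k*dt < 2 for stability;
# the point-implicit update u_new = u / (1 + k*dt) is unconditionally stable.
# Parameters are illustrative.
k, dt, steps = 1.0e4, 1.0e-3, 50   # k*dt = 10: far beyond the explicit limit

u_exp = u_imp = 1.0
for _ in range(steps):
    u_exp = u_exp + dt * (-k * u_exp)   # explicit Euler: amplifies each step
    u_imp = u_imp / (1.0 + k * dt)      # point implicit: decays monotonically

print(abs(u_exp) > 1.0e10, 0.0 < u_imp < 1.0)   # True True
```

This is the same trade-off the abstract describes at flow scale: implicit treatment of the stiff chemistry buys stability at large time steps, at the cost of extra work per step.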

  20. Spatially explicit modelling of cholera epidemics

    NASA Astrophysics Data System (ADS)

    Finger, F.; Bertuzzo, E.; Mari, L.; Knox, A. C.; Gatto, M.; Rinaldo, A.

    2013-12-01

    Epidemiological models can provide crucial understanding about the dynamics of infectious diseases. Possible applications range from real-time forecasting and allocation of health care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. We apply a spatially explicit model to the cholera epidemic that struck Haiti in October 2010 and is still ongoing. The dynamics of susceptibles as well as symptomatic and asymptomatic infectives are modelled at the scale of local human communities. Dissemination of Vibrio cholerae through hydrological transport and human mobility along the road network is explicitly taken into account, as well as the effect of rainfall as a driver of increasing disease incidence. The model is calibrated using a dataset of reported cholera cases. We further model the long term impact of several types of interventions on the disease dynamics by varying parameters appropriately. Key epidemiological mechanisms and parameters which affect the efficiency of treatments such as antibiotics are identified. Our results lead to conclusions about the influence of different intervention strategies on the overall epidemiological dynamics.
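Spatially explicit cholera models of this type typically couple local susceptible-infected-bacteria (SIB) dynamics in each community with bacterial transport along the river network. A heavily simplified three-node sketch (generic Codeço-style dynamics with illustrative rates, not the calibrated Haiti model and without the rainfall driver):

```python
# Hedged sketch: three communities on a river chain.  Each node has local
# S, I, B (susceptible, infected, bacteria) dynamics; bacteria are advected
# downstream (node 0 -> 1 -> 2).  All parameter values are illustrative.
N = 10_000.0             # population per node
beta, K = 0.5, 1.0e4     # exposure rate, half-saturation bacterial density
gamma, p, delta = 0.2, 10.0, 0.3   # recovery, shedding, bacterial decay
ell = 0.5                # downstream transport rate
dt, steps = 0.01, 5000

S = [N, N, N]
I = [10.0, 0.0, 0.0]     # outbreak seeded at the upstream node
B = [0.0, 0.0, 0.0]

for _ in range(steps):
    force = [beta * B[i] / (K + B[i]) for i in range(3)]   # force of infection
    inflow = [0.0, ell * B[0], ell * B[1]]                 # advected bacteria
    S_new = [S[i] - dt * force[i] * S[i] for i in range(3)]
    I_new = [I[i] + dt * (force[i] * S[i] - gamma * I[i]) for i in range(3)]
    B_new = [B[i] + dt * (p * I[i] - (delta + ell) * B[i] + inflow[i])
             for i in range(3)]
    S, I, B = S_new, I_new, B_new

# The epidemic reaches node 2 only through hydrological transport.
print(I[2] > 0.0 and min(S) >= 0.0)
```

Interventions such as sanitation or antibiotics would enter this sketch as changes to beta, p, or gamma, which is the sense in which the study varies parameters to compare strategies.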

  1. Multigrid calculation of three-dimensional turbomachinery flows

    NASA Technical Reports Server (NTRS)

    Caughey, David A.

    1989-01-01

    Research was performed in the general area of computational aerodynamics, with particular emphasis on the development of efficient techniques for the solution of the Euler and Navier-Stokes equations for transonic flows through the complex blade passages associated with turbomachines. In particular, multigrid methods were developed, using both explicit and implicit time-stepping schemes as smoothing algorithms. The specific accomplishments of the research have included: (1) the development of an explicit multigrid method to solve the Euler equations for three-dimensional turbomachinery flows based upon the multigrid implementation of Jameson's explicit Runge-Kutta scheme (Jameson 1983); (2) the development of an implicit multigrid scheme for the three-dimensional Euler equations based upon lower-upper factorization; (3) the development of a multigrid scheme using a diagonalized alternating direction implicit (ADI) algorithm; (4) the extension of the diagonalized ADI multigrid method to solve the Euler equations of inviscid flow for three-dimensional turbomachinery flows; and also (5) the extension of the diagonalized ADI multigrid scheme to solve the Reynolds-averaged Navier-Stokes equations for two-dimensional turbomachinery flows.

  2. Time-fractional Cahn-Allen and time-fractional Klein-Gordon equations: Lie symmetry analysis, explicit solutions and convergence analysis

    NASA Astrophysics Data System (ADS)

    Inc, Mustafa; Yusuf, Abdullahi; Isa Aliyu, Aliyu; Baleanu, Dumitru

    2018-03-01

This research presents the Lie symmetry analysis, explicit solutions, and convergence analysis for the time-fractional Cahn-Allen (CA) and time-fractional Klein-Gordon (KG) equations with the Riemann-Liouville (RL) derivative. The time-fractional CA and KG equations are reduced to nonlinear ordinary differential equations of fractional order. We solve the reduced fractional ODEs using an explicit power series method. The convergence of the obtained explicit solutions is then investigated, and some figures for the obtained solutions are presented.

  3. Building Science-Relevant Literacy with Technical Writing in High School

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Girill, T R

    2006-06-02

    By drawing on the in-class work of an on-going literacy outreach project, this paper explains how well-chosen technical writing activities can earn time in high-school science courses by enabling underperforming students (including ESL students) to learn science more effectively. We adapted basic research-based text-design and usability techniques into age-appropriate exercises and cases using the cognitive apprenticeship approach. This enabled high-school students, aided by explicit guidelines, to build their cognitive maturity, learn how to craft good instructions and descriptions, and apply those skills to better note taking and technical talks in their science classes.

  4. Multiscale fracture network characterization and impact on flow: A case study on the Latemar carbonate platform

    NASA Astrophysics Data System (ADS)

    Hardebol, N. J.; Maier, C.; Nick, H.; Geiger, S.; Bertotti, G.; Boro, H.

    2015-12-01

A fracture network arrangement is quantified across an isolated carbonate platform from outcrop and aerial imagery to address its impact on fluid flow. The network is described in terms of fracture density, orientation, and length distribution parameters. Of particular interest is the role of fracture cross connections and abutments on the effective permeability; hence, the flow simulations explicitly account for network topology by adopting a Discrete-Fracture-and-Matrix description. The interior of the Latemar carbonate platform (Dolomites, Italy) is taken as an outcrop analogue for subsurface reservoirs of isolated carbonate build-ups that exhibit a fracture-dominated permeability. New is our dual strategy of describing the fracture network through both deterministic and stochastic inputs for flow simulations. The fracture geometries are captured explicitly and form a multiscale data set through the integration of interpretations from outcrops, airborne imagery, and lidar. The deterministic network descriptions form the basis for descriptive rules that are diagnostic of the complex natural fracture arrangement. The fracture networks exhibit a variable degree of multitier hierarchies, with smaller fractures abutting against larger fractures at both right and oblique angles. The influence of network topology on connectivity is quantified using discrete-fracture single-phase fluid flow simulations. The simulation results show that the effective permeability of the fracture and matrix ensemble can be 50 to 400 times higher than the matrix permeability of 1.0 x 10^-14 m^2. The permeability enhancement is strongly controlled by the connectivity of the fracture network; therefore, the degree of intersecting and abutting fractures should be captured from outcrops with accuracy to be of value as an analogue.

  5. Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa; Ash, Robert L.

    1992-01-01

The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employed a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the k-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order central-difference operator. Two classes of explicit time integration have been investigated for solving the compressible inviscid/viscous flow problems: two-stage predictor-corrector schemes, and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes were modified successfully to achieve better performance with upwind differencing, and a technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time-stepping scheme to accelerate convergence to steady state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility; the only requirement is C0 continuity of the grid across block interfaces. The algorithm was validated on a diverse set of three-dimensional test cases of increasing complexity: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow). The emphasis of the test cases was validation of the code and assessment of its performance, as well as demonstration of flexibility.
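The multistage time-stepping idea pairs naturally with upwind differencing: each stage restarts from the state at the beginning of the step and re-evaluates the residual. A minimal sketch for 1-D linear advection (a model problem with a common three-stage coefficient set, not the optimized coefficients developed in the study):

```python
# Hedged sketch: multistage time stepping, u^(k) = u^n + alpha_k*dt*R(u^(k-1)),
# applied to 1-D linear advection u_t + a*u_x = 0 with first-order upwind
# differencing on a periodic domain.  Stage coefficients and CFL number are
# illustrative, not the optimized values from the study.
a, dx, cfl = 1.0, 0.02, 0.8
dt = cfl * dx / a
alphas = (0.6, 0.6, 1.0)
n = 50

def residual(u):
    # R_i = -a * (u_i - u_{i-1}) / dx  (upwind for a > 0, periodic wrap)
    return [-a * (u[i] - u[i - 1]) / dx for i in range(len(u))]

u = [1.0 if 10 <= i < 20 else 0.0 for i in range(n)]   # square pulse
for _ in range(100):
    u0, uk = u[:], u[:]
    for alpha in alphas:
        r = residual(uk)
        uk = [u0[i] + alpha * dt * r[i] for i in range(n)]
    u = uk

# Mass is conserved exactly on the periodic domain and the scheme is stable.
print(abs(sum(u) - 10.0) < 1e-9 and max(u) < 1.5)
```

The final alpha of 1.0 keeps the step consistent, while the earlier stages shape the stability region and high-frequency damping, which is exactly what the coefficient optimization in the study tunes.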

  6. 'Chipping in': clinical psychologists' descriptions of their use of formulation in multidisciplinary team working.

    PubMed

    Christofides, Stella; Johnstone, Lucy; Musa, Meyrem

    2012-12-01

To investigate clinical psychologists' accounts of their use of psychological case formulation in multidisciplinary teamwork. A qualitative study using inductive thematic analysis. Ten clinical psychologists working in community and inpatient adult mental health services, who identified themselves as using formulation in their multidisciplinary team work, participated in semi-structured interviews. Psychological hypotheses were described as shared mostly through informal means, such as chipping in ideas during a team discussion, rather than through explicit means such as staff training or case presentations, which usually only took place once participants had spent time developing their role within the team. Service context and staff's prior experience were also factors in how explicitly formulation was discussed. Participants reported believing that this way of working, although often not formally recognized, was valuable and improved the quality of clinical services provided. More investigation into this under-researched but important area of clinical practice is needed, in order to share ideas and support good practice. ©2011 The British Psychological Society.

  7. Nonlinear intrinsic variables and state reconstruction in multiscale simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dsilva, Carmeline J., E-mail: cdsilva@princeton.edu; Talmon, Ronen, E-mail: ronen.talmon@yale.edu; Coifman, Ronald R., E-mail: coifman@math.yale.edu

    2013-11-14

Finding informative low-dimensional descriptions of high-dimensional simulation data (like the ones arising in molecular dynamics or kinetic Monte Carlo simulations of physical and chemical processes) is crucial to understanding physical phenomena, and can also dramatically assist in accelerating the simulations themselves. In this paper, we discuss and illustrate the use of nonlinear intrinsic variables (NIV) in the mining of high-dimensional multiscale simulation data. In particular, we focus on the way NIV allows us to functionally merge different simulation ensembles, and different partial observations of these ensembles, as well as to infer variables not explicitly measured. The approach relies on certain simple features of the underlying process variability to filter out measurement noise and systematically recover a unique reference coordinate frame. We illustrate the approach through two distinct sets of atomistic simulations: a stochastic simulation of an enzyme reaction network exhibiting both fast and slow time scales, and a molecular dynamics simulation of alanine dipeptide in explicit water.

  8. Nonlinear intrinsic variables and state reconstruction in multiscale simulations

    NASA Astrophysics Data System (ADS)

    Dsilva, Carmeline J.; Talmon, Ronen; Rabin, Neta; Coifman, Ronald R.; Kevrekidis, Ioannis G.

    2013-11-01

    Finding informative low-dimensional descriptions of high-dimensional simulation data (like the ones arising in molecular dynamics or kinetic Monte Carlo simulations of physical and chemical processes) is crucial to understanding physical phenomena, and can also dramatically assist in accelerating the simulations themselves. In this paper, we discuss and illustrate the use of nonlinear intrinsic variables (NIV) in the mining of high-dimensional multiscale simulation data. In particular, we focus on the way NIV allows us to functionally merge different simulation ensembles, and different partial observations of these ensembles, as well as to infer variables not explicitly measured. The approach relies on certain simple features of the underlying process variability to filter out measurement noise and systematically recover a unique reference coordinate frame. We illustrate the approach through two distinct sets of atomistic simulations: a stochastic simulation of an enzyme reaction network exhibiting both fast and slow time scales, and a molecular dynamics simulation of alanine dipeptide in explicit water.

  9. A General Reversible Hereditary Constitutive Model. Part 1; Theoretical Developments

    NASA Technical Reports Server (NTRS)

    Saleeb, A. F.; Arnold, S. M.

    1997-01-01

Using an internal-variable formalism as a starting point, we describe the viscoelastic extension of a previously developed viscoplasticity formulation of the complete potential structure type. It is mainly motivated by experimental evidence for the presence of rate/time effects in the so-called quasilinear, reversible, material response range. Several possible generalizations are described, in the general format of hereditary-integral representations for non-equilibrium, stress-type, state variables, for both isotropic and anisotropic materials. In particular, thorough discussions are given of the important issues of thermodynamic admissibility requirements for such general descriptions, resulting in a set of explicit mathematical constraints on the associated kernel (relaxation and creep compliance) functions. In addition, a number of explicit, integrated forms are derived, under stress and strain control, to facilitate the parametric and qualitative response-characteristic studies reported here, as well as to help identify critical factors in the actual experimental characterizations from the test data that will be reported in Part II.

  10. A new heterogeneous asynchronous explicit-implicit time integrator for nonsmooth dynamics

    NASA Astrophysics Data System (ADS)

    Fekak, Fatima-Ezzahra; Brun, Michael; Gravouil, Anthony; Depale, Bruno

    2017-07-01

In computational structural dynamics, particularly in the presence of nonsmooth behavior, the choice of the time step and the time integrator has a critical impact on the feasibility of the simulation. Furthermore, in some cases, as for a bridge crane under seismic loading, multiple time scales coexist in the same problem, making multi-time-scale methods suitable. Here, we propose a new explicit-implicit heterogeneous asynchronous time integrator (HATI) for nonsmooth transient dynamics with frictionless unilateral contacts and impacts. We also present a new explicit time integrator for contact/impact problems in which the contact constraints are enforced using a Lagrange multiplier method. In other words, the aim of this paper is to use an explicit time integrator with a fine time scale in the contact area to reproduce high-frequency phenomena, while an implicit time integrator is adopted in the other parts to reproduce much lower-frequency phenomena and to optimize the CPU time. As a first step, the explicit time integrator is tested on a one-dimensional example and compared to Moreau-Jean's event-capturing schemes. The explicit algorithm is found to be very accurate: it generally has a higher order of convergence than Moreau-Jean's schemes and also exhibits excellent energy behavior. Then, the two-time-scale explicit-implicit HATI is applied to the numerical example of a bridge crane under seismic loading. The results are validated against a fine-scale, fully explicit computation. The energy dissipated at the implicit-explicit interface is well controlled, and the computational time is lower than that of a fully explicit simulation.
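The core idea, a fine explicit scale subcycled inside a coarse implicit scale, can be illustrated on a linear two-mass system. This is a generic staggered sketch with illustrative parameters, not the authors' HATI coupling with Lagrange multipliers and contact:

```python
# Hedged sketch of heterogeneous asynchronous time integration: mass 1 carries
# a stiff spring and is advanced with many small explicit (symplectic Euler)
# steps; mass 2 carries a soft spring and takes one implicit (backward Euler)
# step per coarse step.  The coupling spring uses the latest available states.
m1 = m2 = 1.0
k1, k2, kc = 1.0e4, 1.0, 10.0      # stiff, soft, coupling stiffness
DT, nsub = 0.01, 100               # coarse step; fine step = DT / nsub
dt = DT / nsub

x1, v1 = 0.1, 0.0
x2, v2 = 0.0, 0.0

for _ in range(200):               # 200 coarse steps
    # Fine explicit subcycling of mass 1, holding x2 frozen over the window.
    for _ in range(nsub):
        f1 = -k1 * x1 - kc * (x1 - x2)
        v1 += dt * f1 / m1         # symplectic Euler: velocity first
        x1 += dt * v1
    # One implicit (backward Euler) step for mass 2 against the updated x1:
    # solve v2' = v2 + DT*(-(k2+kc)*x2' + kc*x1)/m2 with x2' = x2 + DT*v2'.
    denom = 1.0 + DT * DT * (k2 + kc) / m2
    v2 = (v2 + DT * (-(k2 + kc) * x2 + kc * x1) / m2) / denom
    x2 = x2 + DT * v2

# Both subsystems remain bounded despite the stiff spring.
print(abs(x1) < 1.0 and abs(x2) < 1.0)
```

The fine explicit loop resolves the stiff (high-frequency) subsystem cheaply per step, while the implicit coarse step remains stable at a time step 100 times larger, which is the CPU-time trade-off the HATI exploits.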

  11. Irreducible Representations of Oscillatory and Swirling Flows in Active Soft Matter

    NASA Astrophysics Data System (ADS)

    Ghose, Somdeb; Adhikari, R.

    2014-03-01

    Recent experiments imaging fluid flow around swimming microorganisms have revealed complex time-dependent velocity fields that differ qualitatively from the stresslet flow commonly employed in theoretical descriptions of active matter. Here we obtain the most general flow around a finite sized active particle by expanding the surface stress in irreducible Cartesian tensors. This expansion, whose first term is the stresslet, must include, respectively, third-rank polar and axial tensors to minimally capture crucial features of the active oscillatory flow around translating Chlamydomonas and the active swirling flow around rotating Volvox. The representation provides explicit expressions for the irreducible symmetric, antisymmetric, and isotropic parts of the continuum active stress. Antisymmetric active stresses do not conserve orbital angular momentum and our work thus shows that spin angular momentum is necessary to restore angular momentum conservation in continuum hydrodynamic descriptions of active soft matter.

  12. An exponential time-integrator scheme for steady and unsteady inviscid flows

    NASA Astrophysics Data System (ADS)

    Li, Shu-Jie; Luo, Li-Shi; Wang, Z. J.; Ju, Lili

    2018-07-01

An exponential time-integrator scheme of second-order accuracy based on the predictor-corrector methodology, denoted PCEXP, is developed to solve multi-dimensional nonlinear partial differential equations pertaining to fluid dynamics. The effective and efficient implementation of PCEXP is realized by means of the Krylov method. The linear stability and truncation error are analyzed through a one-dimensional model equation. The proposed PCEXP scheme is applied to the Euler equations discretized with a discontinuous Galerkin method in both two and three dimensions. The effectiveness and efficiency of the PCEXP scheme are demonstrated for both steady and unsteady inviscid flows, and its accuracy is verified and validated through comparisons with the explicit third-order total variation diminishing Runge-Kutta scheme (TVDRK3), the implicit backward Euler (BE) scheme, and the implicit second-order backward difference formula (BDF2). For unsteady flows, the PCEXP scheme generates a temporal error much smaller than that of the BDF2 scheme while maintaining the expected acceleration. Moreover, the PCEXP scheme achieves computational efficiency comparable to that of the implicit schemes for steady flows.
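Exponential integrators of this family treat the linear part of the equation exactly via the exponential and correct the nonlinear part with a predictor-corrector pair. A minimal scalar sketch of a generic second-order exponential predictor-corrector (a Cox-Matthews ETD2RK-type scheme, not the authors' exact PCEXP formulation, which uses Krylov subspaces for large systems):

```python
import math

# Hedged sketch: second-order exponential predictor-corrector (ETD2RK type)
# for the scalar ODE u' = L*u + N(u).  The linear part L is integrated
# exactly; the nonlinearity is predicted with exponential Euler and corrected.
L = -2.0
N = lambda u: u * u                     # nonlinear part

def etd2rk(u0, h, steps):
    z = h * L
    E = math.exp(z)
    phi1 = (E - 1.0) / z                # (e^z - 1)/z
    phi2 = (E - 1.0 - z) / (z * z)      # (e^z - 1 - z)/z^2
    u = u0
    for _ in range(steps):
        a = E * u + h * phi1 * N(u)         # predictor (exponential Euler)
        u = a + h * phi2 * (N(a) - N(u))    # corrector
    return u

# u' = -2u + u^2, u(0) = 0.5 has the exact solution u(t) = 2/(1 + 3*e^{2t}).
exact = 2.0 / (1.0 + 3.0 * math.exp(2.0))
err_h  = abs(etd2rk(0.5, 0.1,  10) - exact)
err_h2 = abs(etd2rk(0.5, 0.05, 20) - exact)
print(err_h2 < err_h / 3.0)             # halving h cuts the error ~4x
```

For systems, E, phi1 and phi2 become matrix functions, and evaluating their action on a vector is exactly where the Krylov method enters in the PCEXP implementation.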

  13. The Development, Description and Appraisal of an Emergent Multimethod Research Design to Study Workforce Changes in Integrated Care Interventions.

    PubMed

    Busetto, Loraine; Luijkx, Katrien; Calciolari, Stefano; González-Ortiz, Laura G; Vrijhoef, Hubertus J M

    2017-03-08

    In this paper, we provide a detailed and explicit description of the processes and decisions underlying and shaping the emergent multimethod research design of our study on workforce changes in integrated chronic care. The study was originally planned as mixed method research consisting of a preliminary literature review and quantitative check of these findings via a Delphi panel. However, when the findings of the literature review were not appropriate for quantitative confirmation, we chose to continue our qualitative exploration of the topic via qualitative questionnaires and secondary analysis of two best practice case reports. The resulting research design is schematically described as an emergent and interactive multimethod design with multiphase combination timing. In doing so, we provide other researchers with a set of theory- and experience-based options to develop their own multimethod research and provide an example for more detailed and structured reporting of emergent designs. We argue that the terminology developed for the description of mixed methods designs should also be used for multimethod designs such as the one presented here.

  14. Efficient quantum walk on a quantum processor

    PubMed Central

    Qiang, Xiaogang; Loke, Thomas; Montanaro, Ashley; Aungskunsiri, Kanin; Zhou, Xiaoqi; O'Brien, Jeremy L.; Wang, Jingbo B.; Matthews, Jonathan C. F.

    2016-01-01

The random walk formalism is used across a wide range of applications, from modelling share prices to predicting population genetics. Likewise, quantum walks have shown much potential as a framework for developing new quantum algorithms. Here we present explicit efficient quantum circuits for implementing continuous-time quantum walks on the circulant class of graphs. These circuits allow us to sample from the output probability distributions of quantum walks on circulant graphs efficiently. We also show that solving the same sampling problem for arbitrary circulant quantum circuits is intractable for a classical computer, assuming conjectures from computational complexity theory. This is a new link between continuous-time quantum walks and computational complexity theory, and it indicates a family of tasks that could ultimately demonstrate quantum supremacy over classical computers. As a proof of principle, we experimentally implement the proposed quantum circuit on an example circulant graph using a two-qubit photonic quantum processor. PMID:27146471
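What makes circulant graphs special is that any circulant Hamiltonian is diagonalized by the discrete Fourier transform, so the walk unitary factorizes into a QFT, diagonal phases, and an inverse QFT. A classical NumPy sketch of that diagonalization trick on an illustrative 8-vertex cycle (the mathematics behind the circuits, not the photonic experiment):

```python
import numpy as np

# Hedged sketch: continuous-time quantum walk psi(t) = exp(-i*H*t) psi(0) on a
# circulant graph.  A circulant H is diagonalized by the DFT, so the evolution
# reduces to FFT -> phase multiplication -> inverse FFT.
n, t = 8, 1.3
c = np.zeros(n)
c[1] = c[-1] = 1.0                      # first column of H: the cycle graph C_8

eigvals = np.fft.fft(c)                 # eigenvalues of the circulant H
psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0                           # walker starts at vertex 0

psi_t = np.fft.ifft(np.exp(-1j * eigvals * t) * np.fft.fft(psi0))

print(np.isclose(np.linalg.norm(psi_t), 1.0))   # unitary evolution preserves norm
```

On a quantum processor the same structure gives an efficient circuit because the QFT itself is efficiently implementable, which is the content of the explicit constructions in the paper.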

  15. The Effect of Training Data Set Composition on the Performance of a Neural Image Caption Generator

    DTIC Science & Technology

    2017-09-01

    objects was compared using the Metric for Evaluation of Translation with Explicit Ordering (METEOR) and Consensus-Based Image Description Evaluation...using automated scoring systems. Many such systems exist, including Bilingual Evaluation Understudy (BLEU), Consensus-Based Image Description Evaluation...shown to be essential to automated scoring, which correlates highly with human precision.5 CIDEr uses a system of consensus among the captions and

  16. DPADL: An Action Language for Data Processing Domains

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper presents DPADL (Data Processing Action Description Language), a language for describing planning domains that involve data processing. DPADL is a declarative object-oriented language that supports constraints and embedded Java code, object creation and copying, explicit inputs and outputs for actions, and metadata descriptions of existing and desired data. DPADL is supported by the IMAGEbot system, which will provide automation for an ecosystem forecasting system called TOPS.

  17. Mean-variance portfolio selection for defined-contribution pension funds with stochastic salary.

    PubMed

    Zhang, Chubing

    2014-01-01

    This paper focuses on a continuous-time dynamic mean-variance portfolio selection problem of defined-contribution pension funds with stochastic salary, whose risk comes from both financial market and nonfinancial market. By constructing a special Riccati equation as a continuous (actually a viscosity) solution to the HJB equation, we obtain an explicit closed form solution for the optimal investment portfolio as well as the efficient frontier.

  18. A comparison of the Method of Lines to finite difference techniques in solving time-dependent partial differential equations. [with applications to Burger equation and stream function-vorticity problem

    NASA Technical Reports Server (NTRS)

    Kurtz, L. A.; Smith, R. E.; Parks, C. L.; Boney, L. R.

    1978-01-01

Steady-state solutions to two time-dependent partial differential systems have been obtained by the Method of Lines (MOL) and compared to those obtained by efficient standard finite difference methods: (1) Burgers' equation over a finite space domain by a forward-time central-space explicit method, and (2) the stream function - vorticity form of viscous incompressible fluid flow in a square cavity by an alternating direction implicit (ADI) method. The standard techniques were far more computationally efficient when applicable. In the second example, converged solutions at very high Reynolds numbers were obtained by MOL, whereas solution by ADI was either unattainable or impractical. With regard to 'set-up' time, solution by MOL is an attractive alternative to techniques with complicated algorithms, as much of the programming difficulty is eliminated.
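The Method of Lines discretizes only the spatial derivatives, turning the PDE into a system of ODEs that any time integrator can advance. A minimal sketch for the viscous Burgers equation (illustrative grid, viscosity, and RK4 integration; not the codes compared in the paper):

```python
# Hedged sketch: Method of Lines for the viscous Burgers equation
# u_t + u*u_x = nu*u_xx.  Space is discretized (upwind convection, central
# diffusion), giving ODEs du_i/dt = R_i(u) advanced here with classical RK4.
nu, nx = 0.05, 81
dx = 1.0 / (nx - 1)

def rhs(u):
    r = [0.0] * nx                      # Dirichlet ends held fixed
    for i in range(1, nx - 1):
        conv = u[i] * (u[i] - u[i - 1]) / dx              # upwind (u >= 0 here)
        diff = nu * (u[i + 1] - 2 * u[i] + u[i - 1]) / dx**2
        r[i] = -conv + diff
    return r

u = [1.0 - i * dx for i in range(nx)]   # ramp from u(0)=1 down to u(1)=0
dt = 0.2 * dx * dx / nu                 # diffusion-limited step for explicit RK4
for _ in range(2000):
    k1 = rhs(u)
    k2 = rhs([u[i] + 0.5 * dt * k1[i] for i in range(nx)])
    k3 = rhs([u[i] + 0.5 * dt * k2[i] for i in range(nx)])
    k4 = rhs([u[i] + dt * k3[i] for i in range(nx)])
    u = [u[i] + dt * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) / 6 for i in range(nx)]

# The ramp steepens into a viscous front while staying bounded in [0, 1].
print(max(u) <= 1.0 + 1e-6 and min(u) >= -1e-6)
```

The 'set-up' advantage noted in the abstract is visible here: changing the time integrator means swapping the RK4 loop, with the spatial discretization untouched.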

  19. Achieving Rigorous Accelerated Conformational Sampling in Explicit Solvent.

    PubMed

    Doshi, Urmi; Hamelberg, Donald

    2014-04-03

Molecular dynamics simulations can provide valuable atomistic insights into biomolecular function. However, the accuracy of molecular simulations on general-purpose computers depends on the time scale of the events of interest. Advanced simulation methods, such as accelerated molecular dynamics, have shown tremendous promise in sampling the conformational dynamics of biomolecules where standard molecular dynamics simulations are nonergodic. Here we present a sampling method based on accelerated molecular dynamics in which rotatable dihedral angles and nonbonded interactions are boosted separately. This method (RaMD-db) is a different implementation of the dual-boost accelerated molecular dynamics introduced earlier. Its advantage is that it speeds up sampling of the conformational space of biomolecules in explicit solvent, as the degrees of freedom most relevant for conformational transitions are accelerated. We tested RaMD-db on one of the most difficult sampling problems: protein folding. Starting from fully extended polypeptide chains, two fast-folding α-helical proteins (Trp-cage and the double mutant of the C-terminal fragment of the Villin headpiece) and a designed β-hairpin (Chignolin) were completely folded to their native structures in very short simulation times. Multiple folding/unfolding transitions could be observed in a single trajectory. Our results show that RaMD-db is a fast and efficient sampling method for conformational transitions in explicit solvent. RaMD-db thus opens new avenues for understanding biomolecular self-assembly and functional dynamics occurring on long time and length scales.
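The acceleration in methods of this family comes from adding a boost potential dV(V) = (E - V)^2 / (alpha + E - V) whenever the potential V falls below a threshold E, which raises the basins and shrinks effective barriers. A minimal sketch on a 1-D double well (the standard accelerated-MD boost formula with illustrative E and alpha; real RaMD-db boosts dihedral and nonbonded terms separately):

```python
# Hedged sketch: the accelerated-MD boost potential applied to a 1-D double
# well.  Below the threshold E the bias dV raises the basins, so the
# effective barrier on the boosted surface V + dV is smaller than on V.
E, alpha = 1.0, 0.2                    # illustrative threshold and smoothing

def V(x):                              # double well: minima at x = +/-1, barrier 1 at x = 0
    return (x * x - 1.0) ** 2

def dV(v):                             # aMD boost: zero above threshold
    return (E - v) ** 2 / (alpha + (E - v)) if v < E else 0.0

xs = [i * 0.01 for i in range(-150, 151)]
vb = [V(x) + dV(V(x)) for x in xs]     # boosted potential surface

barrier_orig = V(0.0) - V(1.0)                       # = 1.0
barrier_boost = (V(0.0) + dV(V(0.0))) - min(vb)

print(barrier_boost < barrier_orig)    # effective barrier is reduced
```

Because dV is a known function of V, each boosted frame can in principle be reweighted by exp(beta*dV) to recover canonical statistics, which is what makes this class of methods rigorous rather than merely heuristic.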

  20. Probing the free energy landscape of the FBP28WW domain using multiple techniques.

    PubMed

    Periole, Xavier; Allen, Lucy R; Tamiola, Kamil; Mark, Alan E; Paci, Emanuele

    2009-05-01

The free-energy landscape of a small protein, the FBP28 WW domain, has been explored using molecular dynamics (MD) simulations with alternative descriptions of the molecule. The molecular models used range from coarse-grained to all-atom, with either an implicit or explicit treatment of the solvent. Sampling of conformation space was performed using both conventional and temperature replica-exchange MD simulations. Experimental chemical shifts and NOEs were used to validate the simulations, and experimental phi values were used both for validation and as restraints. This combination of different approaches has provided insight into the free-energy landscape and barriers encountered by the protein during folding, and has enabled the characterization of native, denatured and transition states which are compatible with the available experimental data. All the molecular models used stabilize well-defined native and denatured basins; however, the degree of agreement with the available experimental data varies. While the most detailed, explicit-solvent model predicts the data reasonably accurately, it does not fold despite a simulation time 10 times the experimental folding time. The less detailed models performed poorly relative to the explicit-solvent model: an implicit-solvent model stabilizes a ground state which differs from the experimental native state, and a structure-based model underestimates the size of the barrier between the two states. The use of experimental phi values, both as restraints and to extract structures from unfolding simulations, results in conformations which, although not necessarily true transition states, appear to share the geometrical characteristics of transition-state structures. In addition to characterizing the native, transition and denatured states of this particular system, this work discusses the advantages and limitations of using varying levels of representation. © 2008 Wiley Periodicals, Inc.

  1. Event-by-Event Study of Space-Time Dynamics in Flux-Tube Fragmentation

    DOE PAGES

    Wong, Cheuk-Yin

    2017-05-25

In the semi-classical description of the flux-tube fragmentation process for hadron production and hadronization in high-energy $e^+e^-$ annihilations and $pp$ collisions, the rapidity-space-time ordering and the local conservation laws of charge, flavor, and momentum provide a set of powerful tools that may allow the reconstruction of the space-time dynamics of quarks and mesons in exclusive measurements of produced hadrons, on an event-by-event basis. We propose procedures to reconstruct the space-time dynamics from event-by-event exclusive hadron data to exhibit explicitly the ordered chain of hadrons produced in flux-tube fragmentation. As a supplementary tool, we infer the average space-time coordinates of the $q$-$\bar q$ pair production vertices from the $\pi^-$ rapidity distribution data obtained by the NA61/SHINE Collaboration in $pp$ collisions at $\sqrt{s}$ = 6.3 to 17.3 GeV.

  3. Flexible explicit but rigid implicit learning in a visuomotor adaptation task

    PubMed Central

    Bond, Krista M.

    2015-01-01

    There is mounting evidence for the idea that performance in a visuomotor rotation task can be supported by both implicit and explicit forms of learning. The implicit component of learning has been well characterized in previous experiments and is thought to arise from the adaptation of an internal model driven by sensorimotor prediction errors. However, the role of explicit learning is less clear, and previous investigations aimed at characterizing the explicit component have relied on indirect measures such as dual-task manipulations, posttests, and descriptive computational models. To address this problem, we developed a new method for directly assaying explicit learning by having participants verbally report their intended aiming direction on each trial. While our previous research employing this method has demonstrated the possibility of measuring explicit learning over the course of training, it was only tested over a limited scope of manipulations common to visuomotor rotation tasks. In the present study, we sought to better characterize explicit and implicit learning over a wider range of task conditions. We tested how explicit and implicit learning change as a function of the specific visual landmarks used to probe explicit learning, the number of training targets, and the size of the rotation. We found that explicit learning was remarkably flexible, responding appropriately to task demands. In contrast, implicit learning was strikingly rigid, with each task condition producing a similar degree of implicit learning. These results suggest that explicit learning is a fundamental component of motor learning and has been overlooked or conflated in previous visuomotor tasks. PMID:25855690

  4. Helicopter time-domain electromagnetic numerical simulation based on Leapfrog ADI-FDTD

    NASA Astrophysics Data System (ADS)

    Guan, S.; Ji, Y.; Li, D.; Wu, Y.; Wang, A.

    2017-12-01

We present a three-dimensional (3D) leapfrog Alternating Direction Implicit Finite-Difference Time-Domain (leapfrog ADI-FDTD) method for the simulation of helicopter time-domain electromagnetic (HTEM) detection. This method differs from both the traditional explicit FDTD and the conventional ADI-FDTD. Compared with explicit FDTD, the leapfrog ADI-FDTD algorithm is no longer limited by the Courant-Friedrichs-Lewy (CFL) condition, so a longer time step can be used. Compared with ADI-FDTD, the number of update equations is reduced from 12 to 6, and the leapfrog ADI-FDTD method is easier to apply in general simulations. First, we determine initial conditions, which are adopted from the existing method presented by Wang and Tripp (1993). Second, we derive the Maxwell equations in a new finite-difference form using the leapfrog ADI-FDTD method; the purpose is to eliminate the sub-time step while retaining unconditional stability. Third, we add the convolutional perfectly matched layer (CPML) absorbing boundary condition to the leapfrog ADI-FDTD simulation and study the absorbing effect of different parameters; since different absorbing parameters affect the absorbing ability, suitable parameters were found after many numerical experiments. Fourth, we compare the response with a 1-D numerical result for a homogeneous half-space to verify the correctness of our algorithm. For a model containing 107*107*53 grid points with a conductivity of 0.05 S/m, the results show that leapfrog ADI-FDTD needs less simulation time and computer storage space than ADI-FDTD: the calculation time decreases nearly fourfold, and memory occupation decreases by about 32.53%. Thus, this algorithm is more efficient than the conventional ADI-FDTD method for HTEM detection, and is more precise than explicit FDTD at late times.
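As a point of reference for the stability constraint mentioned in this record, the following sketch (our illustration, not from the paper; grid spacings and the wave speed are assumptions) shows the CFL time-step limit that a standard explicit 3D FDTD scheme must obey, and which an ADI-type scheme is free to exceed:

```python
import numpy as np

def cfl_limit_3d(dx, dy, dz, c=299792458.0):
    """Largest stable explicit-FDTD time step on a uniform 3D grid."""
    return 1.0 / (c * np.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

dt_max = cfl_limit_3d(1.0, 1.0, 1.0)  # 1 m cells
# an unconditionally stable ADI-type scheme could take a larger step, e.g.:
dt_adi = 4.0 * dt_max
```

The factor of 4 is purely illustrative; in practice the usable ADI step is limited by accuracy rather than stability.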

  5. Keeping speed and distance for aligned motion

    NASA Astrophysics Data System (ADS)

    Farkas, Illés J.; Kun, Jeromos; Jin, Yi; He, Gaoqi; Xu, Mingliang

    2015-01-01

    The cohesive collective motion (flocking, swarming) of autonomous agents is ubiquitously observed and exploited in both natural and man-made settings, thus, minimal models for its description are essential. In a model with continuous space and time we find that if two particles arrive symmetrically in a plane at a large angle, then (i) radial repulsion and (ii) linear self-propelling toward a fixed preferred speed are sufficient for them to depart at a smaller angle. For this local gain of momentum explicit velocity alignment is not necessary, nor are adhesion or attraction, inelasticity or anisotropy of the particles, or nonlinear drag. With many particles obeying these microscopic rules of motion we find that their spatial confinement to a square with periodic boundaries (which is an indirect form of attraction) leads to stable macroscopic ordering. As a function of the strength of added noise we see—at finite system sizes—a critical slowing down close to the order-disorder boundary and a discontinuous transition. After varying the density of particles at constant system size and varying the size of the system with constant particle density we predict that in the infinite system size (or density) limit the hysteresis loop disappears and the transition becomes continuous. We note that animals, humans, drones, etc., tend to move asynchronously and are often more responsive to motion than positions. Thus, for them velocity-based continuous models can provide higher precision than coordinate-based models. An additional characteristic and realistic feature of the model is that convergence to the ordered state is fastest at a finite density, which is in contrast to models applying (discontinuous) explicit velocity alignments and discretized time. To summarize, we find that the investigated model can provide a minimal description of flocking.
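The two microscopic rules described above can be sketched as follows (our illustration, not the authors' code; all parameter values are assumptions): (i) radial repulsion between nearby particles and (ii) linear relaxation of each particle's speed toward a preferred value, with periodic boundaries.

```python
import numpy as np

def step(pos, vel, dt=0.01, v0=1.0, alpha=1.0, k=5.0, r_rep=1.0, box=10.0):
    """One update of positions and velocities for n particles in 2D."""
    n = len(pos)
    force = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]
        d -= box * np.round(d / box)              # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        mask = (r > 0) & (r < r_rep)
        # (i) radial repulsion, growing as particles approach
        force[i] -= k * (d[mask].T * (r_rep - r[mask]) / r[mask]).T.sum(axis=0)
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    # (ii) self-propulsion: relax speed toward v0 along the current heading
    force += alpha * (v0 - speed) * vel / np.maximum(speed, 1e-12)
    vel = vel + dt * force
    pos = (pos + dt * vel) % box
    return pos, vel
```

Note there is no explicit velocity-alignment term; as the abstract emphasizes, ordering can emerge from repulsion, self-propulsion, and confinement alone.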

  7. Aeroelastic Model Structure Computation for Envelope Expansion

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.

    2007-01-01

Structure detection is a procedure for selecting a subset of candidate terms, from a full model description, that best describes the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modelling may be of critical importance in the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion which may save significant development time and costs. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of nonlinear aeroelastic systems. The LASSO minimises the residual sum of squares by the addition of an l1 penalty term on the parameter vector of the traditional l2 minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudolinear regression problems which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. Applicability of this technique for model structure computation for the F/A-18 Active Aeroelastic Wing using flight test data is shown for several flight conditions (Mach numbers) by identifying a parsimonious system description with a high percent fit for cross-validated data.
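The l1-penalised least-squares problem behind LASSO structure detection can be sketched with proximal gradient descent (ISTA). This is our illustration, not the paper's implementation; the problem sizes, the synthetic data, and the penalty weight `lam` are assumptions. Coefficients driven exactly to zero correspond to candidate terms pruned from the model.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=5000):
    """Minimise 0.5*||X w - y||^2 + lam*||w||_1 by soft-thresholded gradient steps."""
    L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the smooth gradient
    t = 1.0 / L
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ w - y)                # gradient of the least-squares term
        w = w - t * g
        w = np.sign(w) * np.maximum(np.abs(w) - t * lam, 0.0)  # soft-threshold
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))             # 8 candidate model terms
w_true = np.array([1.5, 0.0, 0.0, -2.0, 0.0, 0.0, 0.0, 0.0])
y = X @ w_true                               # only terms 0 and 3 are active
w_hat = lasso_ista(X, y, lam=1.0)
```

On this noise-free toy problem the irrelevant coefficients collapse to (near) zero, recovering the sparse structure.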

  8. Making the most of sparse clinical data by using a predictive-model-based analysis, illustrated with a stavudine pharmacokinetic study.

    PubMed

    Zhang, L; Price, R; Aweeka, F; Bellibas, S E; Sheiner, L B

    2001-02-01

    A small-scale clinical investigation was done to quantify the penetration of stavudine (D4T) into cerebrospinal fluid (CSF). A model-based analysis estimates the steady-state ratio of AUCs of CSF and plasma concentrations (R(AUC)) to be 0.270, and the mean residence time of drug in the CSF to be 7.04 h. The analysis illustrates the advantages of a causal (scientific, predictive) model-based approach to analysis over a noncausal (empirical, descriptive) approach when the data, as here, demonstrate certain problematic features commonly encountered in clinical data, namely (i) few subjects, (ii) sparse sampling, (iii) repeated measures, (iv) imbalance, and (v) individual design variation. These features generally require special attention in data analysis. The causal-model-based analysis deals with features (i) and (ii), both of which reduce efficiency, by combining data from different studies and adding subject-matter prior information. It deals with features (iii)--(v), all of which prevent 'averaging' individual data points directly, first, by adjusting in the model for interindividual data differences due to design differences, secondly, by explicitly differentiating between interpatient, interoccasion, and measurement error variation, and lastly, by defining a scientifically meaningful estimand (R(AUC)) that is independent of design.
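The two estimands named above (an AUC ratio and a mean residence time) can be illustrated non-compartmentally with the trapezoidal rule. This is a hypothetical worked example: the concentration values below are invented and are not the study's data.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal-rule area under y(x)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 12.0])        # h
plasma = np.array([0.0, 1.8, 1.2, 0.6, 0.15, 0.04])  # mg/L (invented)
csf = np.array([0.0, 0.2, 0.35, 0.3, 0.12, 0.05])    # mg/L (invented)

auc_plasma = trapz(plasma, t)
auc_csf = trapz(csf, t)
r_auc = auc_csf / auc_plasma                 # analogous to R(AUC) in the study
mrt_csf = trapz(csf * t, t) / auc_csf        # mean residence time, h
```

The model-based analysis in the study goes further than this descriptive calculation, pooling sparse data across subjects and separating the variance components.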

  9. Three-dimensional inverse modelling of damped elastic wave propagation in the Fourier domain

    NASA Astrophysics Data System (ADS)

    Petrov, Petr V.; Newman, Gregory A.

    2014-09-01

    3-D full waveform inversion (FWI) of seismic wavefields is routinely implemented with explicit time-stepping simulators. A clear advantage of explicit time stepping is the avoidance of solving large-scale implicit linear systems that arise with frequency domain formulations. However, FWI using explicit time stepping may require a very fine time step and (as a consequence) significant computational resources and run times. If the computational challenges of wavefield simulation can be effectively handled, an FWI scheme implemented within the frequency domain utilizing only a few frequencies, offers a cost effective alternative to FWI in the time domain. We have therefore implemented a 3-D FWI scheme for elastic wave propagation in the Fourier domain. To overcome the computational bottleneck in wavefield simulation, we have exploited an efficient Krylov iterative solver for the elastic wave equations approximated with second and fourth order finite differences. The solver does not exploit multilevel preconditioning for wavefield simulation, but is coupled efficiently to the inversion iteration workflow to reduce computational cost. The workflow is best described as a series of sequential inversion experiments, where in the case of seismic reflection acquisition geometries, the data has been laddered such that we first image highly damped data, followed by data where damping is systemically reduced. The key to our modelling approach is its ability to take advantage of solver efficiency when the elastic wavefields are damped. As the inversion experiment progresses, damping is significantly reduced, effectively simulating non-damped wavefields in the Fourier domain. While the cost of the forward simulation increases as damping is reduced, this is counterbalanced by the cost of the outer inversion iteration, which is reduced because of a better starting model obtained from the larger damped wavefield used in the previous inversion experiment. 
For cross-well data, it is also possible to launch a successful inversion experiment without laddering the damping constants. With this type of acquisition geometry, the solver is still quite effective using a small fixed damping constant. To avoid cycle skipping, we also employ a multiscale imaging approach, in which frequency content of the data is also laddered (with the data now including both reflection and cross-well data acquisition geometries). Thus the inversion process is launched using low frequency data to first recover the long spatial wavelength of the image. With this image as a new starting model, adding higher frequency data refines and enhances the resolution of the image. FWI using laddered frequencies with an efficient damping scheme enables reconstructing elastic attributes of the subsurface at a resolution that approaches half the smallest wavelength utilized to image the subsurface. We show the possibility of effectively carrying out such reconstructions using two to six frequencies, depending upon the application. Using the proposed FWI scheme, massively parallel computing resources are essential for reasonable execution times.
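The role of damping in the Fourier domain can be sketched on a scalar 1D analogue (our illustration with assumed parameters, not the paper's elastic solver): damping enters as a complex frequency s = omega + i*gamma, and the resulting wavefield decays away from the source, which is what makes the damped linear systems easier for iterative solvers.

```python
import numpy as np

def damped_wavefield_1d(n=200, h=10.0, c=2000.0, omega=2 * np.pi * 5.0, gamma=0.0):
    """Solve a 1D frequency-domain wave equation (d2/dx2 + (s/c)^2) u = f."""
    s = omega + 1j * gamma                 # complex frequency: damping = Im(s)
    k2 = (s / c) ** 2
    A = np.zeros((n, n), dtype=complex)
    idx = np.arange(n)
    A[idx, idx] = -2.0 / h**2 + k2         # second-difference diagonal
    A[idx[:-1], idx[:-1] + 1] = 1.0 / h**2
    A[idx[1:], idx[1:] - 1] = 1.0 / h**2
    b = np.zeros(n, dtype=complex)
    b[n // 2] = 1.0                        # point source in the middle
    return np.linalg.solve(A, b)

u0 = damped_wavefield_1d(gamma=0.0)        # undamped: wave fills the domain
u_damped = damped_wavefield_1d(gamma=20.0) # damped: decays away from source
```

In the laddered workflow described above, gamma would be reduced step by step, with each damped image seeding the next, less damped, inversion.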

  10. Navier-Stokes calculations for DFVLR F5-wing in wind tunnel using Runge-Kutta time-stepping scheme

    NASA Technical Reports Server (NTRS)

    Vatsa, V. N.; Wedan, B. W.

    1988-01-01

    A three-dimensional Navier-Stokes code using an explicit multistage Runge-Kutta type of time-stepping scheme is used for solving the transonic flow past a finite wing mounted inside a wind tunnel. Flow past the same wing in free air was also computed to assess the effect of wind-tunnel walls on such flows. Numerical efficiency is enhanced through vectorization of the computer code. A Cyber 205 computer with 32 million words of internal memory was used for these computations.

  11. A multistage time-stepping scheme for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Turkel, E.

    1985-01-01

    A class of explicit multistage time-stepping schemes is used to construct an algorithm for solving the compressible Navier-Stokes equations. Flexibility in treating arbitrary geometries is obtained with a finite-volume formulation. Numerical efficiency is achieved by employing techniques for accelerating convergence to steady state. Computer processing is enhanced through vectorization of the algorithm. The scheme is evaluated by solving laminar and turbulent flows over a flat plate and an NACA 0012 airfoil. Numerical results are compared with theoretical solutions or other numerical solutions and/or experimental data.
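The explicit multistage update used by this class of schemes can be sketched generically for du/dt = R(u) (our illustration; the four stage coefficients below are a common choice for steady-state flow solvers and are an assumption, not taken from this paper):

```python
import numpy as np

def multistage_step(u, residual, dt, alphas=(0.25, 0.333, 0.5, 1.0)):
    """One explicit multistage (Runge-Kutta type) step: each stage restarts
    from the step's initial state u0 with a scaled time increment."""
    u0 = u.copy()
    for a in alphas:
        u = u0 + a * dt * residual(u)
    return u

# usage: one step of exponential decay du/dt = -u from u = 1
u_new = multistage_step(np.array([1.0]), lambda u: -u, dt=0.1)
```

For steady-state calculations the stage coefficients are typically tuned for damping and stability margin rather than time accuracy, which suits the convergence-acceleration techniques mentioned in the abstract.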

  12. Some aspects of algorithm performance and modeling in transient analysis of structures

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Robinson, J. C.

    1981-01-01

The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit algorithms with variable time steps, known as the GEAR package, are described. Four test problems, used for evaluating and comparing various algorithms, were selected, and finite-element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system, and a model of the wing of the space shuttle orbiter. Results generally indicate a preference for implicit over explicit algorithms for solution of transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems such as insulated metal structures).
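The stiffness argument behind that preference can be shown on a toy problem (our illustration, not from the report): for du/dt = -lam*u with large lam, explicit Euler is unstable unless dt < 2/lam, while implicit (backward) Euler decays at any step size.

```python
lam, dt, n = 1000.0, 0.01, 50     # dt * lam = 10 >> 2: stiff regime
u_exp = u_imp = 1.0
for _ in range(n):
    u_exp = u_exp + dt * (-lam * u_exp)   # explicit Euler: amplifies each step
    u_imp = u_imp / (1.0 + dt * lam)      # implicit Euler: unconditionally decays
```

The explicit iterate oscillates and grows without bound, while the implicit one tracks the true decaying solution; this is the behavior that favors implicit algorithms for insulated metal structures.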

  13. High speed finite element simulations on the graphics card

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huthwaite, P.; Lowe, M. J. S.

A software package is developed to perform explicit time domain finite element simulations of ultrasonic propagation on the graphical processing unit, using Nvidia's CUDA. Of critical importance for this problem is the arrangement of nodes in memory, allowing data to be loaded efficiently and minimising communication between the independently executed blocks of threads. The initial stage of memory arrangement is partitioning the mesh; both a well established 'greedy' partitioner and a new, more efficient 'aligned' partitioner are investigated. A method is then developed to efficiently arrange the memory within each partition. The technique is compared to a commercial CPU equivalent, demonstrating an overall speedup of at least 100 for a non-destructive testing weld model.

  14. Enforcing the Courant–Friedrichs–Lewy condition in explicitly conservative local time stepping schemes

    DOE PAGES

    Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.

    2018-01-30

An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered "weak" in a certain sense and the corresponding numerical solution may be stable, such a calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a condition on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
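A neighbor condition of the kind described above can be sketched as a relaxation over patches (hypothetical sketch, not the authors' code; the maximum neighbor-to-neighbor step ratio of 2 is an assumption): each patch's locally constrained step is capped by a multiple of its neighbors' steps, and the caps are iterated to a fixed point.

```python
def constrain_steps(dt, neighbors, ratio=2.0):
    """Cap each patch's time step at ratio * (smallest neighboring step).

    dt: list of locally constrained time steps, one per patch.
    neighbors: adjacency lists, neighbors[i] = indices of patches touching i.
    """
    changed = True
    while changed:                       # iterate: caps propagate outward
        changed = False
        for i, nbrs in enumerate(neighbors):
            cap = min((ratio * dt[j] for j in nbrs), default=float("inf"))
            if dt[i] > cap:
                dt[i] = cap
                changed = True
    return dt

# a chain of three patches: the fast (small-step) patch on the right
# forces its neighbors' steps down gradually, one factor of 2 per hop
dt = constrain_steps([8.0, 8.0, 1.0], [[1], [0, 2], [1]])
```

The result is a graded step distribution in which no patch outruns the information arriving from its neighbors.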

  15. A parallel finite element procedure for contact-impact problems using edge-based smooth triangular element and GPU

    NASA Astrophysics Data System (ADS)

    Cai, Yong; Cui, Xiangyang; Li, Guangyao; Liu, Wenyang

    2018-04-01

    The edge-smooth finite element method (ES-FEM) can improve the computational accuracy of triangular shell elements and the mesh partition efficiency of complex models. In this paper, an approach is developed to perform explicit finite element simulations of contact-impact problems with a graphical processing unit (GPU) using a special edge-smooth triangular shell element based on ES-FEM. Of critical importance for this problem is achieving finer-grained parallelism to enable efficient data loading and to minimize communication between the device and host. Four kinds of parallel strategies are then developed to efficiently solve these ES-FEM based shell element formulas, and various optimization methods are adopted to ensure aligned memory access. Special focus is dedicated to developing an approach for the parallel construction of edge systems. A parallel hierarchy-territory contact-searching algorithm (HITA) and a parallel penalty function calculation method are embedded in this parallel explicit algorithm. Finally, the program flow is well designed, and a GPU-based simulation system is developed, using Nvidia's CUDA. Several numerical examples are presented to illustrate the high quality of the results obtained with the proposed methods. In addition, the GPU-based parallel computation is shown to significantly reduce the computing time.

  16. Interrelations between different canonical descriptions of dissipative systems

    NASA Astrophysics Data System (ADS)

    Schuch, D.; Guerrero, J.; López-Ruiz, F. F.; Aldaya, V.

    2015-04-01

    There are many approaches for the description of dissipative systems coupled to some kind of environment. This environment can be described in different ways; only effective models are being considered here. In the Bateman model, the environment is represented by one additional degree of freedom and the corresponding momentum. In two other canonical approaches, no environmental degree of freedom appears explicitly, but the canonical variables are connected with the physical ones via non-canonical transformations. The link between the Bateman approach and those without additional variables is achieved via comparison with a canonical approach using expanding coordinates, as, in this case, both Hamiltonians are constants of motion. This leads to constraints that allow for the elimination of the additional degree of freedom in the Bateman approach. These constraints are not unique. Several choices are studied explicitly, and the consequences for the physical interpretation of the additional variable in the Bateman model are discussed.
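For orientation, a commonly quoted form of the Bateman model mentioned above (our addition, not taken from this abstract), with damping constant $\gamma$ and frequency $\omega$, doubles the degrees of freedom with a mirror coordinate $y$:

```latex
L = \dot{x}\dot{y} + \frac{\gamma}{2}\left(x\dot{y} - \dot{x}y\right) - \omega^2 xy,
\qquad
H = p_x p_y + \frac{\gamma}{2}\left(y p_y - x p_x\right)
    + \left(\omega^2 - \frac{\gamma^2}{4}\right) xy
```

Here $x$ obeys the damped oscillator equation and $y$ the time-reversed (amplified) one, which is the additional environmental degree of freedom whose elimination via constraints is discussed in the abstract.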

  17. A superstring field theory for supergravity

    NASA Astrophysics Data System (ADS)

    Reid-Edwards, R. A.; Riccombeni, D. A.

    2017-09-01

A covariant closed superstring field theory, equivalent to classical ten-dimensional Type II supergravity, is presented. The defining conformal field theory is the ambitwistor string worldsheet theory of Mason and Skinner. This theory is known to reproduce the scattering amplitudes of Cachazo, He and Yuan, in which the scattering equations play an important role, and the string field theory naturally incorporates these results. We investigate the operator formalism description of the ambitwistor string and propose an action for the string field theory of the bosonic and supersymmetric theories. The correct linearised gauge symmetries and spacetime actions are explicitly reproduced and evidence is given that the action is correct to all orders. The focus is on the Neveu-Schwarz sector and the explicit description of tree-level perturbation theory about flat spacetime. Application of the string field theory to general supergravity backgrounds and the inclusion of the Ramond sector are briefly discussed.

  18. WScore: A Flexible and Accurate Treatment of Explicit Water Molecules in Ligand-Receptor Docking.

    PubMed

    Murphy, Robert B; Repasky, Matthew P; Greenwood, Jeremy R; Tubert-Brohman, Ivan; Jerome, Steven; Annabhimoju, Ramakrishna; Boyles, Nicholas A; Schmitz, Christopher D; Abel, Robert; Farid, Ramy; Friesner, Richard A

    2016-05-12

    We have developed a new methodology for protein-ligand docking and scoring, WScore, incorporating a flexible description of explicit water molecules. The locations and thermodynamics of the waters are derived from a WaterMap molecular dynamics simulation. The water structure is employed to provide an atomic level description of ligand and protein desolvation. WScore also contains a detailed model for localized ligand and protein strain energy and integrates an MM-GBSA scoring component with these terms to assess delocalized strain of the complex. Ensemble docking is used to take into account induced fit effects on the receptor conformation, and protein reorganization free energies are assigned via fitting to experimental data. The performance of the method is evaluated for pose prediction, rank ordering of self-docked complexes, and enrichment in virtual screening, using a large data set of PDB complexes and compared with the Glide SP and Glide XP models; significant improvements are obtained.

  19. Interoperability between phenotype and anatomy ontologies.

    PubMed

    Hoehndorf, Robert; Oellrich, Anika; Rebholz-Schuhmann, Dietrich

    2010-12-15

    Phenotypic information is important for the analysis of the molecular mechanisms underlying disease. A formal ontological representation of phenotypic information can help to identify, interpret and infer phenotypic traits based on experimental findings. The methods that are currently used to represent data and information about phenotypes fail to make the semantics of the phenotypic trait explicit and do not interoperate with ontologies of anatomy and other domains. Therefore, valuable resources for the analysis of phenotype studies remain unconnected and inaccessible to automated analysis and reasoning. We provide a framework to formalize phenotypic descriptions and make their semantics explicit. Based on this formalization, we provide the means to integrate phenotypic descriptions with ontologies of other domains, in particular anatomy and physiology. We demonstrate how our framework leads to the capability to represent disease phenotypes, perform powerful queries that were not possible before and infer additional knowledge. http://bioonto.de/pmwiki.php/Main/PheneOntology.

  20. Descriptive Characteristics of Surface Water Quality in Hong Kong by a Self-Organising Map

    PubMed Central

    An, Yan; Zou, Zhihong; Li, Ranran

    2016-01-01

    In this study, principal component analysis (PCA) and a self-organising map (SOM) were used to analyse a complex dataset obtained from the river water monitoring stations in the Tolo Harbor and Channel Water Control Zone (Hong Kong), covering the period of 2009–2011. PCA was initially applied to identify the principal components (PCs) among the nonlinear and complex surface water quality parameters. SOM followed PCA, and was implemented to analyze the complex relationships and behaviors of the parameters. The results reveal that PCA reduced the multidimensional parameters to four significant PCs which are combinations of the original ones. The positive and inverse relationships of the parameters were shown explicitly by pattern analysis in the component planes. It was found that PCA and SOM are efficient tools to capture and analyze the behavior of multivariable, complex, and nonlinear related surface water quality data. PMID:26761018
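The first stage described above (reducing correlated water-quality variables to a few principal components) can be sketched in a few lines (our illustration; the study's monitoring data and parameter choices are not reproduced, and the synthetic data below are assumptions):

```python
import numpy as np

def pca(data, n_components):
    """PCA via SVD. Rows are samples, columns are monitoring variables."""
    centered = data - data.mean(axis=0)
    # SVD of the centered data yields the PCs without forming a covariance matrix
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[:n_components].T      # sample coordinates on the PCs
    explained = (s**2) / np.sum(s**2)            # variance fraction per PC
    return scores, explained[:n_components]

# synthetic stand-in: six correlated "variables" driven by two hidden factors
rng = np.random.default_rng(1)
latent = rng.standard_normal((100, 2))
mixing = rng.standard_normal((2, 6))
data = latent @ mixing + 0.01 * rng.standard_normal((100, 6))
scores, explained = pca(data, n_components=2)
```

In the study's workflow, scores like these would then be fed to the SOM for pattern analysis in the component planes.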

  2. Implementation and application of a gradient enhanced crystal plasticity model

    NASA Astrophysics Data System (ADS)

    Soyarslan, C.; Perdahcıoǧlu, E. S.; Aşık, E. E.; van den Boogaard, A. H.; Bargmann, S.

    2017-10-01

    A rate-independent crystal plasticity model is implemented in which description of the hardening of the material is given as a function of the total dislocation density. The evolution of statistically stored dislocations (SSDs) is described using a saturating type evolution law. The evolution of geometrically necessary dislocations (GNDs) on the other hand is described using the gradient of the plastic strain tensor in a non-local manner. The gradient of the incremental plastic strain tensor is computed explicitly during an implicit FE simulation after each converged step. Using the plastic strain tensor stored as state variables at each integration point and an efficient numerical algorithm to find the gradients, the GND density is obtained. This results in a weak coupling of the equilibrium solution and the gradient enhancement. The algorithm is applied to an academic test problem which considers growth of a cylindrical void in a single crystal matrix.
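The non-local ingredient described above can be shown schematically (our illustration, not the authors' implementation): a GND density estimated from the spatial gradient of a converged plastic strain field, rho_gnd ~ |grad(eps_p)| / b. The field, grid spacing, and Burgers vector magnitude below are assumptions.

```python
import numpy as np

b = 2.5e-10                          # Burgers vector magnitude, m (assumed)
h = 1e-6                             # grid spacing, m (assumed)
x = np.linspace(0.0, 50e-6, 51)
eps_p = 0.02 * np.exp(-x / 10e-6)    # plastic strain localised near x = 0

# finite-difference gradient of the converged plastic strain field,
# evaluated after the implicit step as in the weak-coupling scheme above
grad_eps = np.gradient(eps_p, h)
rho_gnd = np.abs(grad_eps) / b       # GND density, 1/m^2
```

The GND density then enters the hardening law together with the SSD density on the next implicit increment, which is what makes the coupling weak.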

  3. Optimal mode transformations for linear-optical cluster-state generation

    DOE PAGES

    Uskov, Dmitry B.; Lougovski, Pavel; Alsing, Paul M.; ...

    2015-06-15

In this paper, we analyze the generation of linear-optical cluster states (LOCSs) via sequential addition of one and two qubits. Existing approaches employ the stochastic linear-optical two-qubit controlled-Z (CZ) gate with a success rate of 1/9 per operation. The question of the optimality of the CZ gate with respect to LOCS generation has remained open. We report that there are alternative schemes to the CZ gate that are exponentially more efficient and show that sequential LOCS growth is indeed globally optimal. We find that the optimal cluster growth operation is a state transformation on a subspace of the full Hilbert space. Finally, we show that the maximal success rate of postselected entangling of n photonic qubits or m Bell pairs into a cluster is (1/2)^(n-1) and (1/4)^(m-1), respectively, with no ancilla photons, and we give an explicit optical description of the optimal mode transformations.
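The exponential separation between the two rates quoted above is easy to check numerically (our arithmetic sketch; the assumption that an n-qubit linear cluster needs n-1 CZ links is ours):

```python
def cz_chain_rate(n):
    """Success rate of building an n-qubit chain with stochastic CZ gates,
    assuming n-1 links at 1/9 success each."""
    return (1.0 / 9.0) ** (n - 1)

def optimal_rate(n):
    """Optimal postselected rate (1/2)^(n-1) reported in the paper."""
    return (1.0 / 2.0) ** (n - 1)

advantage = optimal_rate(6) / cz_chain_rate(6)   # grows as (9/2)^(n-1)
```

Already at six qubits the optimal scheme is over three orders of magnitude more likely to succeed.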

  4. Estimating the Earthquake Source Time Function by Markov Chain Monte Carlo Sampling

    NASA Astrophysics Data System (ADS)

    Dȩbski, Wojciech

    2008-07-01

    Many aspects of earthquake source dynamics like dynamic stress drop, rupture velocity and directivity, etc. are currently inferred from the source time functions obtained by a deconvolution of the propagation and recording effects from seismograms. The question of the accuracy of the obtained results remains open. In this paper we address this issue by considering two aspects of the source time function deconvolution. First, we propose a new pseudo-spectral parameterization of the sought function which explicitly takes into account the physical constraints imposed on it. Such a parameterization automatically excludes non-physical solutions and so improves the stability and uniqueness of the deconvolution. Second, we demonstrate that the Bayesian approach to the inverse problem at hand, combined with an efficient Markov Chain Monte Carlo sampling technique, allows efficient estimation of the source time function uncertainties. The key point of the approach is the description of the solution of the inverse problem by the a posteriori probability density function constructed according to Bayesian (probabilistic) theory. Next, the Markov Chain Monte Carlo sampling technique is used to sample this function, so the statistical estimator of a posteriori errors can be easily obtained with minimal additional computational effort with respect to modern inversion (optimization) algorithms. The methodological considerations are illustrated by a case study of the mining-induced seismic event of magnitude M_L ≈ 3.1 that occurred at the Rudna (Poland) copper mine. The seismic P-wave records were inverted for the source time functions, using the proposed algorithm and the empirical Green function technique to approximate Green functions. The obtained solutions seem to suggest some complexity of the rupture process, with double pulses of energy release.
However, the error analysis shows that the hypothesis of source complexity is not justified at the 95% confidence level. On the basis of the analyzed event we also show that the separation of the source inversion into two steps introduces limitations on the completeness of the a posteriori error analysis.
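
The sampling step can be illustrated with a minimal Metropolis-Hastings sketch on a hypothetical one-dimensional posterior; the paper's actual target is the a posteriori density over the pseudo-spectral parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(theta):
    # Hypothetical 1D a posteriori density: a standard normal.
    return -0.5 * theta ** 2

theta, samples = 0.0, []
for _ in range(20000):
    proposal = theta + rng.normal()          # random-walk proposal
    # Metropolis acceptance test on the log scale.
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)

samples = np.asarray(samples[5000:])         # discard burn-in
# The spread of the samples is the a posteriori error estimate.
print(samples.mean(), samples.std())
```

As the abstract notes, these error estimates come almost for free once the chain has been run, unlike in pure optimization-based inversion.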

  5. Polynomial-time quantum algorithm for the simulation of chemical dynamics

    PubMed Central

    Kassal, Ivan; Jordan, Stephen P.; Love, Peter J.; Mohseni, Masoud; Aspuru-Guzik, Alán

    2008-01-01

    The computational cost of exact methods for quantum simulation using classical computers grows exponentially with system size. As a consequence, these techniques can be applied only to small systems. By contrast, we demonstrate that quantum computers could exactly simulate chemical reactions in polynomial time. Our algorithm uses the split-operator approach and explicitly simulates all electron-nuclear and interelectronic interactions in quadratic time. Surprisingly, this treatment is not only more accurate than the Born–Oppenheimer approximation but faster and more efficient as well, for all reactions with more than about four atoms. This is the case even though the entire electronic wave function is propagated on a grid with appropriately short time steps. Although the preparation and measurement of arbitrary states on a quantum computer is inefficient, here we demonstrate how to prepare states of chemical interest efficiently. We also show how to efficiently obtain chemically relevant observables, such as state-to-state transition probabilities and thermal reaction rates. Quantum computers using these techniques could outperform current classical computers with 100 qubits. PMID:19033207
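
The split-operator approach can be sketched classically on a small grid; the quantum algorithm applies the same kinetic/potential splitting to a state stored in qubits. The parameters below are illustrative (atomic units, harmonic potential):

```python
import numpy as np

n = 256
x = np.linspace(-10.0, 10.0, n, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)    # momentum grid
dt, steps = 0.01, 100
V = 0.5 * x ** 2                             # harmonic potential (illustrative)

psi = np.exp(-x ** 2).astype(complex)        # Gaussian initial wavepacket
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

for _ in range(steps):
    psi *= np.exp(-0.5j * V * dt)                                     # half potential kick
    psi = np.fft.ifft(np.exp(-0.5j * k ** 2 * dt) * np.fft.fft(psi))  # kinetic step
    psi *= np.exp(-0.5j * V * dt)                                     # half potential kick

norm = np.sum(np.abs(psi) ** 2) * dx         # unitary evolution preserves the norm
print(norm)
```

On a quantum computer the Fourier transforms become quantum Fourier transforms and the grid is encoded in the qubit register, which is where the exponential memory advantage comes from.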

  6. Newmark local time stepping on high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, yielding near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large-scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
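
The core idea, letting each element step at a near-optimal fraction of the coarse step, can be sketched as a power-of-two level assignment. The element sizes and wave speed below are hypothetical; the actual scheme couples this assignment with the Newmark update:

```python
import numpy as np

c = 3000.0                                  # wave speed [m/s], illustrative
h = np.array([10.0, 10.0, 5.0, 1.0, 0.1])   # hypothetical element sizes [m]
cfl = 0.5
dt_elem = cfl * h / c                       # per-element stable time steps

dt_global = dt_elem.min()                   # a global explicit scheme is stuck here
# LTS: each element advances with dt_max / 2**k, the coarsest power-of-two
# subdivision of the largest step that still satisfies its own CFL limit.
dt_max = dt_elem.max()
levels = np.ceil(np.log2(dt_max / dt_elem)).astype(int)
dt_lts = dt_max / 2.0 ** levels
print(levels, dt_lts / dt_global)
```

With a 100x size contrast the smallest element forces seven halvings, while the large elements keep taking steps two orders of magnitude bigger than the global CFL limit would allow.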

  7. Mean-Variance Portfolio Selection for Defined-Contribution Pension Funds with Stochastic Salary

    PubMed Central

    Zhang, Chubing

    2014-01-01

    This paper focuses on a continuous-time dynamic mean-variance portfolio selection problem of defined-contribution pension funds with stochastic salary, whose risk comes from both financial market and nonfinancial market. By constructing a special Riccati equation as a continuous (actually a viscosity) solution to the HJB equation, we obtain an explicit closed form solution for the optimal investment portfolio as well as the efficient frontier. PMID:24782667

  8. Cold fission description with constant and varying mass asymmetries

    NASA Astrophysics Data System (ADS)

    Duarte, S. B.; Rodríguez, O.; Tavares, O. A. P.; Gonçalves, M.; García, F.; Guzmán, F.

    1998-05-01

    Different descriptions for varying the mass asymmetry in the fragmentation process are used to calculate the cold fission barrier penetrability. The relevance of the appropriate choice of both the description of the prescission phase and the inertia coefficient to unifying alpha decay, cluster radioactivity, and spontaneous cold fission processes in the same theoretical framework is explicitly shown. We calculate the half-lives of all possible partition modes of nuclei with A > 200 following the most recent Mass Table by Audi and Wapstra. It is shown that if one uses the description in which the mass asymmetry is maintained constant during the fragmentation process, the experimental half-life values and mass yield of 234U cold fission are satisfactorily reproduced.
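
The barrier penetrability in such unified frameworks is typically evaluated in the WKB approximation. The generic form below is standard; the specific potential V, inertia coefficient μ, and turning points are the paper's modeling choices:

```latex
P = \exp\left\{ -\frac{2}{\hbar} \int_{\xi_i}^{\xi_f}
      \sqrt{2\,\mu(\xi)\left[V(\xi) - Q\right]} \,\mathrm{d}\xi \right\},
\qquad
T_{1/2} = \frac{\ln 2}{\nu_0\,P}
```

where Q is the decay energy, ξ_i and ξ_f are the classical turning points along the fragmentation coordinate ξ, and ν_0 is the assault frequency. The sensitivity of the half-life to μ(ξ) is why the inertia coefficient choice matters for unifying the three decay modes.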

  9. THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mignone, A.; Tzeferacos, P.; Zanni, C.

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
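
The generalized Lagrange multiplier cleaning mentioned above augments the induction equation with a scalar field ψ; schematically, in the mixed hyperbolic/parabolic form of Dedner et al. (shown here as a generic sketch of the technique, not a transcription of the PLUTO equations):

```latex
\frac{\partial \mathbf{B}}{\partial t}
  + \nabla\cdot\left(\mathbf{v}\,\mathbf{B} - \mathbf{B}\,\mathbf{v}\right)
  + \nabla\psi = 0,
\qquad
\frac{\partial \psi}{\partial t} + c_h^2\,\nabla\cdot\mathbf{B}
  = -\frac{c_h^2}{c_p^2}\,\psi
```

so divergence errors are advected away at the speed c_h and damped on the time scale c_p^2 / c_h^2, rather than accumulating at the cell where they were created.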

  10. Exshall: A Turkel-Zwas explicit large time-step FORTRAN program for solving the shallow-water equations in spherical coordinates

    NASA Astrophysics Data System (ADS)

    Navon, I. M.; Yu, Jian

    A FORTRAN computer program is presented and documented applying the Turkel-Zwas explicit large time-step scheme to a hemispheric barotropic model with constraint restoration of integral invariants of the shallow-water equations. We then detail the algorithms embodied in the code EXSHALL, particularly algorithms related to the efficiency and stability of the T-Z scheme and the quadratic constraint restoration method, which is based on a variational approach. In particular we provide details about the high-latitude filtering, Shapiro filtering, and Robert filtering algorithms used in the code. We explain in detail the various subroutines in the EXSHALL code with emphasis on the algorithms implemented and present flowcharts of some major subroutines. Finally, we provide a visual example illustrating a 4-day run using real initial data, along with a sample printout and graphic isoline contours of the height field and velocity fields.
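
The Robert filtering mentioned above is the Robert-Asselin time filter for leapfrog schemes. A generic sketch (not the EXSHALL FORTRAN code; the coefficient is a typical choice) is:

```python
nu = 0.06                     # typical Robert/Asselin filter coefficient

def robert_filter(u_prev, u_now, u_next, nu=nu):
    # Filtered replacement for the middle time level of a leapfrog triple.
    return u_now + nu * (u_next - 2.0 * u_now + u_prev)

# Leapfrog integration of du/dt = -u with the filter applied every step;
# without it, the leapfrog computational mode can grow without bound.
dt = 0.01
u_prev, u_now = 1.0, 1.0 - dt          # u(0) and an Euler start for u(dt)
for _ in range(500):
    u_next = u_prev - 2.0 * dt * u_now            # leapfrog step
    u_prev, u_now = robert_filter(u_prev, u_now, u_next), u_next
print(u_now)    # decays smoothly, tracking exp(-t)
```

The filter leaves a constant field unchanged but strongly damps the 2Δt oscillation of the computational mode.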

  11. Heavy-light mesons in chiral AdS/QCD

    NASA Astrophysics Data System (ADS)

    Liu, Yizhuang; Zahed, Ismail

    2017-06-01

    We discuss a minimal holographic model for the description of heavy-light and light mesons with chiral symmetry, defined in a slab of AdS space. The model consists of a pair of chiral Yang-Mills and tachyon fields with specific boundary conditions that break spontaneously chiral symmetry in the infrared. The heavy-light spectrum and decay constants are evaluated explicitly. In the heavy mass limit the model exhibits both heavy-quark and chiral symmetry and allows for the explicit derivation of the one-pion axial couplings to the heavy-light mesons.

  12. Test-Case Generation using an Explicit State Model Checker Final Report

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Gao, Jimin

    2003-01-01

    In the project 'Test-Case Generation using an Explicit State Model Checker' we have extended an existing tools infrastructure for formal modeling to export Java code so that we can use the NASA Ames tool Java Pathfinder (JPF) for test case generation. We have completed a translator from our source language RSML^-e to Java and conducted initial studies of how JPF can be used as a testing tool. In this final report, we provide a detailed description of the translation approach as implemented in our tools.

  13. An explicit asymptotic preserving low Froude scheme for the multilayer shallow water model with density stratification

    NASA Astrophysics Data System (ADS)

    Couderc, F.; Duran, A.; Vila, J.-P.

    2017-08-01

    We present an explicit scheme for a two-dimensional multilayer shallow water model with density stratification, for general meshes and collocated variables. The proposed strategy is based on a regularized model where the transport velocity in the advective fluxes is shifted proportionally to the pressure potential gradient. Using a similar strategy for the potential forces, we show the stability of the method in the sense of a discrete dissipation of the mechanical energy, in general multilayer and non-linear frames. These results are obtained at first order in space and time and extended using a second-order MUSCL extension in space and Heun's method in time. With the objective of minimizing the diffusive losses in realistic contexts, sufficient conditions are exhibited on the regularizing terms to ensure the scheme's linear stability at first and second order in time and space. The other main result is the consistency with respect to the asymptotics reached at small and large time scales in low Froude regimes, which govern large-scale oceanic circulation. Additionally, robustness and well-balanced results for motionless steady states are also ensured. These stability properties yield a very robust and efficient approach, easy to implement and particularly well suited for large-scale simulations. Some numerical experiments are proposed to highlight the scheme's efficiency: an experiment of fast gravitational modes, a smooth surface wave propagation, an initial propagating surface water elevation jump over a non-trivial topography, and a final experiment of slow Rossby modes simulating the displacement of a baroclinic vortex subject to the Coriolis force.
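
Heun's method, the second-order time integrator used here, is a predictor-corrector pair. A scalar sketch on du/dt = -u, standing in for the semi-discrete shallow water system, is:

```python
import math

def heun_step(f, u, dt):
    u_star = u + dt * f(u)                      # predictor: forward Euler
    return u + 0.5 * dt * (f(u) + f(u_star))    # corrector: trapezoidal average

u, dt = 1.0, 0.1
for _ in range(10):
    u = heun_step(lambda v: -v, u, dt)          # integrate to t = 1
print(u, math.exp(-1.0))                        # agree to second order
```

Paired with a MUSCL reconstruction in space, this gives the second-order scheme the abstract describes.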

  14. Comparison of Nonequilibrium Solution Algorithms Applied to Chemically Stiff Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Venkatapathy, Ethiraj

    1995-01-01

    Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel, are used to compute nonequilibrium flow around the Apollo 4 return capsule at the 62-km altitude point in its descent trajectory. The efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness by varying the Mach number. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15 and 30, the lower-upper symmetric Gauss-Seidel method produces an eight-order-of-magnitude drop in the energy residual in one-third to one-half the Cray C-90 computer time as compared to the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 30 and above. At Mach 40 the performance of the lower-upper symmetric Gauss-Seidel algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.

  15. Molecular simulation of water and hydration effects in different environments: challenges and developments for DFTB based models.

    PubMed

    Goyal, Puja; Qian, Hu-Jun; Irle, Stephan; Lu, Xiya; Roston, Daniel; Mori, Toshifumi; Elstner, Marcus; Cui, Qiang

    2014-09-25

    We discuss the description of water and hydration effects that employs an approximate density functional theory, DFTB3, in either a full QM or QM/MM framework. The goal is to explore, with the current formulation of DFTB3, the performance of this method for treating water in different chemical environments, the magnitude and nature of changes required to improve its performance, and factors that dictate its applicability to reactions in the condensed phase in a QM/MM framework. A relatively minor change (on the scale of k_B T) in the O-H repulsive potential is observed to substantially improve the structural properties of bulk water under ambient conditions; modest improvements are also seen in dynamic properties of bulk water. This simple change also improves the description of protonated water clusters, a solvated proton, and to a more limited degree, a solvated hydroxide. By comparing results from DFTB3 models that differ in the description of water, we confirm that proton transfer energetics are adequately described by the standard DFTB3/3OB model for meaningful mechanistic analyses. For QM/MM applications, a robust parametrization of QM-MM interactions requires an explicit consideration of condensed phase properties, for which an efficient sampling technique was developed recently and is reviewed here. The discussions help make clear the value and limitations of DFTB3 based simulations, as well as the developments needed to further improve the accuracy and transferability of the methodology.

  16. Representation of research hypotheses

    PubMed Central

    2011-01-01

    Background Hypotheses are now being automatically produced on an industrial scale by computers in biology, e.g. the annotation of a genome is essentially a large set of hypotheses generated by sequence similarity programs; and robot scientists enable the full automation of a scientific investigation, including generation and testing of research hypotheses. Results This paper proposes a logically defined way for recording automatically generated hypotheses in a machine-amenable way. The proposed formalism allows the description of complete hypotheses sets as specified input and output for scientific investigations. The formalism supports the decomposition of research hypotheses into more specialised hypotheses if that is required by an application. Hypotheses are represented in an operational way – it is possible to design an experiment to test them. The explicit formal description of research hypotheses promotes the explicit formal description of the results and conclusions of an investigation. The paper also proposes a framework for automated hypotheses generation. We demonstrate how the key components of the proposed framework are implemented in the Robot Scientist “Adam”. Conclusions A formal representation of automatically generated research hypotheses can help to improve the way humans produce, record, and validate research hypotheses. Availability http://www.aber.ac.uk/en/cs/research/cb/projects/robotscientist/results/ PMID:21624164

  17. Stochastic simulation of reaction-diffusion systems: A fluctuating-hydrodynamics approach

    NASA Astrophysics Data System (ADS)

    Kim, Changho; Nonaka, Andy; Bell, John B.; Garcia, Alejandro L.; Donev, Aleksandar

    2017-03-01

    We develop numerical methods for stochastic reaction-diffusion systems based on approaches used for fluctuating hydrodynamics (FHD). For hydrodynamic systems, the FHD formulation is formally described by stochastic partial differential equations (SPDEs). In the reaction-diffusion systems we consider, our model becomes similar to the reaction-diffusion master equation (RDME) description when our SPDEs are spatially discretized and reactions are modeled as a source term having Poisson fluctuations. However, unlike the RDME, which becomes prohibitively expensive for an increasing number of molecules, our FHD-based description naturally extends from the regime where fluctuations are strong, i.e., each mesoscopic cell has few (reactive) molecules, to regimes with moderate or weak fluctuations, and ultimately to the deterministic limit. By treating diffusion implicitly, we avoid the severe restriction on time step size that limits all methods based on explicit treatments of diffusion and construct numerical methods that are more efficient than RDME methods, without compromising accuracy. Guided by an analysis of the accuracy of the distribution of steady-state fluctuations for the linearized reaction-diffusion model, we construct several two-stage (predictor-corrector) schemes, where diffusion is treated using a stochastic Crank-Nicolson method, and reactions are handled by the stochastic simulation algorithm of Gillespie or a weakly second-order tau leaping method. We find that an implicit midpoint tau leaping scheme attains second-order weak accuracy in the linearized setting and gives an accurate and stable structure factor for a time step size of an order of magnitude larger than the hopping time scale of diffusing molecules. We study the numerical accuracy of our methods for the Schlögl reaction-diffusion model both in and out of thermodynamic equilibrium. 
We demonstrate and quantify the importance of thermodynamic fluctuations to the formation of a two-dimensional Turing-like pattern and examine the effect of fluctuations on three-dimensional chemical front propagation. By comparing stochastic simulations to deterministic reaction-diffusion simulations, we show that fluctuations accelerate pattern formation in spatially homogeneous systems and lead to a qualitatively different disordered pattern behind a traveling wave.
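
The Poisson treatment of reactions can be illustrated with a plain tau-leaping sketch for a single birth-death channel. This is illustrative only, with hypothetical rates; the paper's schemes are two-stage and weakly second order:

```python
import numpy as np

rng = np.random.default_rng(1)
k_birth, k_death = 50.0, 1.0      # hypothetical rates; mean population 50
n, tau = 0, 0.01                  # initial count and leap size

for _ in range(5000):             # integrate to t = 50
    births = rng.poisson(k_birth * tau)        # Poisson reaction fluctuations
    deaths = rng.poisson(k_death * n * tau)
    n = max(n + births - deaths, 0)            # keep the count non-negative
print(n)    # fluctuates around k_birth / k_death = 50
```

In the FHD setting each mesoscopic cell carries such Poisson-fluctuating source terms, while diffusion between cells is handled implicitly.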

  18. Stochastic simulation of reaction-diffusion systems: A fluctuating-hydrodynamics approach

    DOE PAGES

    Kim, Changho; Nonaka, Andy; Bell, John B.; ...

    2017-03-24

    Here, we develop numerical methods for stochastic reaction-diffusion systems based on approaches used for fluctuating hydrodynamics (FHD). For hydrodynamic systems, the FHD formulation is formally described by stochastic partial differential equations (SPDEs). In the reaction-diffusion systems we consider, our model becomes similar to the reaction-diffusion master equation (RDME) description when our SPDEs are spatially discretized and reactions are modeled as a source term having Poisson fluctuations. However, unlike the RDME, which becomes prohibitively expensive for an increasing number of molecules, our FHD-based description naturally extends from the regime where fluctuations are strong, i.e., each mesoscopic cell has few (reactive) molecules, to regimes with moderate or weak fluctuations, and ultimately to the deterministic limit. By treating diffusion implicitly, we avoid the severe restriction on time step size that limits all methods based on explicit treatments of diffusion and construct numerical methods that are more efficient than RDME methods, without compromising accuracy. Guided by an analysis of the accuracy of the distribution of steady-state fluctuations for the linearized reaction-diffusion model, we construct several two-stage (predictor-corrector) schemes, where diffusion is treated using a stochastic Crank-Nicolson method, and reactions are handled by the stochastic simulation algorithm of Gillespie or a weakly second-order tau leaping method. We find that an implicit midpoint tau leaping scheme attains second-order weak accuracy in the linearized setting and gives an accurate and stable structure factor for a time step size of an order of magnitude larger than the hopping time scale of diffusing molecules. We study the numerical accuracy of our methods for the Schlögl reaction-diffusion model both in and out of thermodynamic equilibrium. 
We demonstrate and quantify the importance of thermodynamic fluctuations to the formation of a two-dimensional Turing-like pattern and examine the effect of fluctuations on three-dimensional chemical front propagation. Furthermore, by comparing stochastic simulations to deterministic reaction-diffusion simulations, we show that fluctuations accelerate pattern formation in spatially homogeneous systems and lead to a qualitatively different disordered pattern behind a traveling wave.

  19. Aeroelastic Model Structure Computation for Envelope Expansion

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.

    2007-01-01

    Structure detection is a procedure for selecting a subset of candidate terms, from a full model description, that best describes the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modeling may be of critical importance in the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion that may save significant development time and costs. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of non-linear aeroelastic systems. The LASSO minimises the residual sum of squares with the addition of an l_1 penalty term on the parameter vector of the traditional l_2 minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudo-linear regression problems which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. Applicability of this technique for model structure computation for the F/A-18 (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) Active Aeroelastic Wing project using flight test data is shown for several flight conditions (Mach numbers) by identifying a parsimonious system description with a high percent fit for cross-validated data.
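
The LASSO problem described above can be sketched with a proximal-gradient (ISTA) iteration on synthetic data. The data and parameters below are hypothetical, not the F/A-18 flight-test records:

```python
import numpy as np

# Synthetic sparse regression: only the first two candidate terms are active.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[:2] = [3.0, -2.0]
y = X @ w_true + 0.01 * rng.normal(size=100)

# ISTA for min_w 0.5*||y - X w||^2 + lam*||w||_1.
lam = 1.0
step = 1.0 / np.linalg.norm(X, 2) ** 2        # 1/L with L the Lipschitz constant
w = np.zeros(10)
for _ in range(500):
    z = w - step * (X.T @ (X @ w - y))        # gradient step on the smooth part
    w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

print(np.nonzero(np.abs(w) > 1e-6)[0])        # the detected model structure
```

The soft-thresholding step is what drives inactive parameters exactly to zero, which is precisely the structure-detection property exploited in the study.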

  20. Noncommutative gerbes and deformation quantization

    NASA Astrophysics Data System (ADS)

    Aschieri, Paolo; Baković, Igor; Jurčo, Branislav; Schupp, Peter

    2010-11-01

    We define noncommutative gerbes using the language of star products. Quantized twisted Poisson structures are discussed as an explicit realization in the sense of deformation quantization. Our motivation is the noncommutative description of D-branes in the presence of topologically non-trivial background fields.

  1. Charge Carriers Modulate the Bonding of Semiconductor Nanoparticle Dopants As Revealed by Time-Resolved X-ray Spectroscopy

    DOE PAGES

    Hassan, Asra; Zhang, Xiaoyi; Liu, Xiaohan; ...

    2017-08-28

    Understanding the electronic structure of doped semiconductors is essential to realize advancements in electronics and in the rational design of nanoscale devices. Here, we report the results of time-resolved X-ray absorption studies on copper-doped cadmium sulfide nanoparticles that provide an explicit description of the electronic dynamics of the dopants. The interaction of a dopant ion and an excess charge carrier is unambiguously observed via monitoring the oxidation state. The experimental data combined with DFT calculations demonstrate that dopant bonding to the host matrix is modulated by its interaction with charge carriers. Additionally, the transient photoluminescence and the kinetics of dopant oxidation reveal the presence of two types of surface-bound ions that create mid-gap states.

  2. Charge Carriers Modulate the Bonding of Semiconductor Nanoparticle Dopants As Revealed by Time-Resolved X-ray Spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hassan, Asra; Zhang, Xiaoyi; Liu, Xiaohan

    Understanding the electronic structure of doped semiconductors is essential to realize advancements in electronics and in the rational design of nanoscale devices. Here, we report the results of time-resolved X-ray absorption studies on copper-doped cadmium sulfide nanoparticles that provide an explicit description of the electronic dynamics of the dopants. The interaction of a dopant ion and an excess charge carrier is unambiguously observed via monitoring the oxidation state. The experimental data combined with DFT calculations demonstrate that dopant bonding to the host matrix is modulated by its interaction with charge carriers. Additionally, the transient photoluminescence and the kinetics of dopant oxidation reveal the presence of two types of surface-bound ions that create mid-gap states.

  3. Electrostatic Origin of Salt-Induced Nucleosome Array Compaction

    PubMed Central

    Korolev, Nikolay; Allahverdi, Abdollah; Yang, Ye; Fan, Yanping; Lyubartsev, Alexander P.; Nordenskiöld, Lars

    2010-01-01

    The physical mechanism of the folding and unfolding of chromatin is fundamentally related to transcription but is incompletely characterized and not fully understood. We experimentally and theoretically studied chromatin compaction by investigating the salt-mediated folding of an array made of 12 positioning nucleosomes with 177 bp repeat length. Sedimentation velocity measurements were performed to monitor the folding provoked by addition of cations Na^+, K^+, Mg^2+, Ca^2+, spermidine^3+, Co(NH3)6^3+, and spermine^4+. We found typical polyelectrolyte behavior, with the critical concentration of cation needed to bring about maximal folding covering a range of almost five orders of magnitude (from 2 μM for spermine^4+ to 100 mM for Na^+). A coarse-grained model of the nucleosome array based on a continuum dielectric description and including the explicit presence of mobile ions and charged flexible histone tails was used in computer simulations to investigate the cation-mediated compaction. The results of the simulations with explicit ions are in general agreement with the experimental data, whereas simple Debye-Hückel models are intrinsically incapable of describing chromatin array folding by multivalent cations. We conclude that the theoretical description of the salt-induced chromatin folding must incorporate explicit mobile ions that include ion correlation and ion competition effects. PMID:20858435

  4. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Hixon, Duane

    1991-01-01

    Efficient iterative solution methods are being developed for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. Moreover, the extra work required by iterative schemes can be designed to perform efficiently on current and future generation scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, the number of Newton iterations needed per step to solve the discretized system of equations can, however, vary dramatically from a few to several hundred. Another popular approach, based on the classical conjugate gradient method and known as the GMRES (Generalized Minimum Residual) algorithm, is investigated. The GMRES algorithm was used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arise in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors.
By choosing the number of vectors to a reasonably small value N (between 5 and 20) the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a noniterative scheme. Many of the operations required by the GMRES algorithm such as matrix-vector multiplies, matrix additions and subtractions can all be vectorized and parallelized efficiently.
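
A minimal GMRES sketch (no restarts or preconditioning) makes the subspace minimization concrete. The dense test matrix below is illustrative, not a Navier-Stokes Jacobian:

```python
import numpy as np

def gmres(A, b, N=30):
    """Return x minimizing ||b - A x|| over the N-dimensional Krylov subspace."""
    m = b.size
    Q = np.zeros((m, N + 1))
    H = np.zeros((N + 1, N))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(N):                         # Arnoldi orthogonalization
        v = A @ Q[:, j]
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-14:                # happy breakdown: exact solution found
            N = j + 1
            break
        Q[:, j + 1] = v / H[j + 1, j]
    e1 = np.zeros(N + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:N + 1, :N], e1, rcond=None)  # small least-squares solve
    return Q[:, :N] @ y

# Illustrative nonsymmetric system (not a Navier-Stokes Jacobian).
A = np.diag(np.arange(1.0, 51.0)) + 0.1 * np.eye(50, k=1)
b = np.ones(50)
x = gmres(A, b, N=30)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

The N orthogonal Arnoldi vectors are exactly the "set of orthogonal vectors" referred to above, and the cost per time step scales with N rather than with full convergence at every node.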

  5. High incorrect use of the standard error of the mean (SEM) in original articles in three cardiovascular journals evaluated for 2012.

    PubMed

    Wullschleger, Marcel; Aghlmandi, Soheila; Egger, Marcel; Zwahlen, Marcel

    2014-01-01

    In biomedical journals authors sometimes use the standard error of the mean (SEM) for data description, which has been called inappropriate or incorrect. Our aim was to assess the frequency of incorrect use of the SEM in articles in three selected cardiovascular journals. All original journal articles published in 2012 in Cardiovascular Research, Circulation: Heart Failure and Circulation Research were assessed by two assessors for inappropriate use of the SEM when providing descriptive information of empirical data. We also assessed whether the authors stated in the methods section that the SEM would be used for data description. Of 441 articles included in this survey, 64% (282 articles) contained at least one instance of incorrect use of the SEM, with two journals having a prevalence above 70% and "Circulation: Heart Failure" having the lowest value (27%). In 81% of articles with incorrect use of the SEM, the authors had explicitly stated that they used the SEM for data description, and in 89% SEM bars were also used instead of 95% confidence intervals. Basic science studies had a 7.4-fold higher level of inappropriate SEM use (74%) than clinical studies (10%). The selection of the three cardiovascular journals was based on a subjective initial impression of observing inappropriate SEM use, so the observed results are not representative of all cardiovascular journals. In three selected cardiovascular journals we found a high level of inappropriate SEM use and explicit methods statements to use it for data description, especially in basic science studies. To improve on this situation, these and other journals should provide clear instructions to authors on how to report descriptive information of empirical data.
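
The distinction at issue can be made concrete in a few lines (illustrative sample; 1.96 is the normal approximation to the t quantile):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=120.0, scale=15.0, size=25)   # hypothetical measurements

sd = x.std(ddof=1)                # spread of the data; does not shrink with n
sem = sd / np.sqrt(x.size)        # precision of the mean; shrinks as 1/sqrt(n)
ci = (x.mean() - 1.96 * sem, x.mean() + 1.96 * sem)   # approximate 95% CI
print(sd, sem, ci)
```

Reporting the SEM where the SD is meant understates the variability of the data by a factor of sqrt(n), which is why descriptive summaries should use the SD (or a confidence interval for inference).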

  6. A time-domain method for prediction of noise radiated from supersonic rotating sources in a moving medium

    NASA Astrophysics Data System (ADS)

    Huang, Zhongjie; Siozos-Rousoulis, Leonidas; De Troyer, Tim; Ghorbaniasl, Ghader

    2018-02-01

This paper presents a time-domain method for noise prediction of supersonic rotating sources in a moving medium. The proposed approach can be interpreted as an extended time-domain solution of the convected permeable Ffowcs Williams and Hawkings equation that avoids the Doppler singularity. The solution requires special treatment for construction of the emission surface. The derived formula can explicitly and efficiently account for the effects of a uniform subsonic mean flow on the radiated noise. The methodology is demonstrated on the Isom thickness noise case and on high-speed impulsive noise prediction for helicopter rotors.

  7. Asynchronous variational integration using continuous assumed gradient elements.

    PubMed

    Wolff, Sebastian; Bucher, Christian

    2013-03-01

Asynchronous variational integration (AVI) is a tool which improves the numerical efficiency of explicit time stepping schemes when applied to finite element meshes with local spatial refinement. This is achieved by associating an individual time step length with each spatial domain. Furthermore, long-term stability is ensured by its variational structure. This article presents AVI in the context of finite elements based on a weakened weak form (W2; Liu, 2009 [1]), exemplified by continuous assumed gradient elements (Wolff and Bucher, 2011 [2]). The article presents the main ideas of the modified AVI, gives implementation notes, and provides a recipe for estimating the critical time step.
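    The critical time step of an explicit scheme is set by a CFL-type condition. The article's recipe is specific to W2/CAG elements, but a generic 1D sketch (element length over elastic wave speed, with an assumed safety factor) conveys the idea, including why AVI helps: each element has its own bound, and a single small element need not throttle the whole mesh.

```python
import math

def elastic_wave_speed(E, rho):
    """1D bar wave speed c = sqrt(E/rho)."""
    return math.sqrt(E / rho)

def critical_time_step(element_lengths, E, rho, safety=0.9):
    """CFL-type bound: element transit time L/c scaled by a safety factor.
    A synchronous explicit scheme must use the global minimum; AVI assigns
    each element its own step, so we return both."""
    c = elastic_wave_speed(E, rho)
    per_element = [safety * L / c for L in element_lengths]
    return min(per_element), per_element

# Steel bar with local refinement: the small element sets the global bound.
dt_global, dt_local = critical_time_step([0.02, 0.005], E=210e9, rho=7850.0)
```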

  8. Awareness-based game-theoretic space resource management

    NASA Astrophysics Data System (ADS)

    Chen, Genshe; Chen, Huimin; Pham, Khanh; Blasch, Erik; Cruz, Jose B., Jr.

    2009-05-01

Over recent decades, the space environment has become more complex, with a significant increase in space debris and a greater density of spacecraft, which poses great difficulties for efficient and reliable space operations. In this paper we present a Hierarchical Sensor Management (HSM) method for space operations that (a) accommodates awareness modeling and updating and (b) supports collaborative search and tracking of space objects. The basic approach is as follows. First, partition the relevant region of interest into distinct cells. Second, initialize and model the dynamics of each cell with awareness and object covariance according to prior information. Third, explicitly assign sensing resources to objects with user-specified requirements. Note that when an object responds intelligently to the sensing event, the sensor assigned to observe it may switch from time to time between a strong, active signal mode and a passive mode to maximize the total amount of information obtained over a multi-step time horizon while avoiding risks. Fourth, if all explicitly specified requirements are satisfied and more sensing resources remain available, we assign the additional sensing resources to objects without explicitly specified requirements via an information-based approach. Finally, sensor scheduling is applied to each sensor-object or sensor-cell pair according to the object type. We demonstrate our method on a realistic space resource management scenario using NASA's General Mission Analysis Tool (GMAT) for space object search and track with multiple space-borne observers.

  9. Efficient physics-based tracking of heart surface motion for beating heart surgery robotic systems.

    PubMed

    Bogatyrenko, Evgeniya; Pompey, Pascal; Hanebeck, Uwe D

    2011-05-01

Tracking of beating heart motion in a robotic surgery system is required for complex cardiovascular interventions. A heart surface motion tracking method is developed, including a stochastic physics-based heart surface model and an efficient reconstruction algorithm. The algorithm uses the constraints provided by the model, which exploits the physical characteristics of the heart. The main advantage of the model is that it is more realistic than most standard heart models. Additionally, no explicit matching between the measurements and the model is required. The application of meshless methods significantly reduces the complexity of physics-based tracking. Based on the stochastic physical model of the heart surface, this approach considers the motion of the intervention area and is robust to occlusions and reflections. The tracking algorithm is evaluated in simulations and in experiments on an artificial heart. Providing higher accuracy than standard model-based methods, it successfully copes with occlusions and maintains high performance even when not all measurements are available. Combining the physical and stochastic descriptions of the heart surface motion ensures physically correct and accurate prediction. Automatic initialization of the physics-based cardiac motion tracking enables system evaluation in a clinical environment.

  10. Implicit unified gas-kinetic scheme for steady state solutions in all flow regimes

    NASA Astrophysics Data System (ADS)

    Zhu, Yajun; Zhong, Chengwen; Xu, Kun

    2016-06-01

This paper presents an implicit unified gas-kinetic scheme (UGKS) for non-equilibrium steady state flow computation. The UGKS is a direct modeling method for flow simulation in all regimes with the updates of both macroscopic flow variables and the microscopic gas distribution function. By solving the macroscopic equations implicitly, a predicted equilibrium state can be obtained first through iterations. With the newly predicted equilibrium state, the evolution equation of the gas distribution function and the corresponding collision term can be discretized in a fully implicit way for fast convergence through iterations as well. The lower-upper symmetric Gauss-Seidel (LU-SGS) factorization method is implemented to solve both macroscopic and microscopic equations, which improves the efficiency of the scheme. Since the UGKS is a direct modeling method and its physical solution depends on the mesh resolution and the local time step, a physical time step needs to be fixed before using an implicit iterative technique with a pseudo-time marching step. Therefore, the physical time step in the current implicit scheme is determined in the same way as in the explicit UGKS for capturing the physical solution in all flow regimes, but the convergence to a steady state is accelerated through the adoption of a numerical time step with a large CFL number. Many numerical test cases in different flow regimes, from low-speed to hypersonic, such as Couette flow, cavity flow, and flow past a cylinder, are computed to validate the current implicit method. The overall efficiency of the implicit UGKS can be improved by one or two orders of magnitude in comparison with the explicit one.
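    The acceleration idea (keep the physical time step fixed, but let an implicit pseudo-time iteration with a large CFL number drive the residual to zero) can be illustrated on a scalar model problem. This is only a schematic analogue of the UGKS iteration, not the scheme itself:

```python
def solve_steady_implicit(residual, dresidual, u0, dtau, iters=50):
    """Drive R(u) -> 0 by implicit pseudo-time marching:
    (1/dtau + dR/du) * du = -R(u).
    Small dtau mimics a CFL-limited explicit march; large dtau approaches
    Newton's method and converges in a handful of iterations."""
    u = u0
    for _ in range(iters):
        du = -residual(u) / (1.0 / dtau + dresidual(u))
        u += du
    return u

# Model steady problem R(u) = u**3 + u - 2 = 0, with root u = 1.
R = lambda u: u**3 + u - 2.0
dR = lambda u: 3.0 * u**2 + 1.0
u_large = solve_steady_implicit(R, dR, u0=0.0, dtau=1e6)  # Newton-like, fast
u_small = solve_steady_implicit(R, dR, u0=0.0, dtau=0.1)  # damped, slower
```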

  11. 3D transient electromagnetic simulation using a modified correspondence principle for wave and diffusion fields

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Ji, Y.; Egbert, G. D.

    2015-12-01

The fictitious time domain (FTD) method, based on the correspondence principle for wave and diffusion fields, has been developed and used over the past few years primarily for marine electromagnetic (EM) modeling. Here we present results of our efforts to apply the FTD approach to land and airborne TEM problems, which can reduce the computation time by several orders of magnitude while preserving high accuracy. In contrast to the marine case, where sources are in the conductive sea water, we must model the EM fields in the air; to allow for topography, air layers must be explicitly included in the computational domain. Furthermore, because sources for most TEM applications generally must be modeled as finite loops, it is useful to solve directly for the impulse response appropriate to the problem geometry, instead of the point-source Green functions typically used for marine problems. Our approach can be summarized as follows: (1) The EM diffusion equation is transformed to a fictitious wave equation. (2) The FTD wave equation is solved with an explicit finite difference time-stepping scheme, with convolutional PML (CPML) boundary conditions for the whole computational domain including the air and earth, and with an FTD-domain source corresponding to the actual transmitter geometry. The resistivity of the air layers is kept as low as possible, to compromise between efficiency (longer fictitious time step) and accuracy; we have generally found a host/air resistivity contrast of 10^-3 to be sufficient. (3) A "modified" Fourier transform (MFT) allows us to recover the system's impulse response from the fictitious time domain in the diffusion (frequency) domain. (4) The result is multiplied by the Fourier transform (FT) of the real source current, avoiding time-consuming convolutions in the time domain. (5) The inverse FT is employed to obtain the final full-waveform, full-time response of the system in the time domain. In general, this method can be used to efficiently solve most time-domain EM simulation problems for non-point sources.

  12. Spin Nernst effect and intrinsic magnetization in two-dimensional Dirac materials

    NASA Astrophysics Data System (ADS)

    Gusynin, V. P.; Sharapov, S. G.; Varlamov, A. A.

    2015-05-01

We begin with a brief description of the role of the Nernst-Ettingshausen (NE) effect in studies of high-temperature superconductors and Dirac materials such as graphene. The theoretical analysis of the NE effect is involved because the standard Kubo formalism has to be modified by the presence of magnetization currents in order to satisfy the third law of thermodynamics. A new generation of low-buckled Dirac materials is expected to exhibit a strong spin Nernst effect, the spintronics analog of the NE effect. These Dirac materials can be considered as made of two independent electron subsystems of two-component gapped Dirac fermions. For each subsystem the gap breaks time-reversal symmetry and thus plays the role of an effective magnetic field. We explicitly demonstrate how the correct thermoelectric coefficient emerges, both by explicit calculation of the magnetization and by a formal cancellation in the modified Kubo formula. We conclude by showing that nontrivial dependences of the spin Nernst signal on the carrier concentration and the applied electric field are expected in silicene and other low-buckled Dirac materials.

  13. Molecular Dynamics based on a Generalized Born solvation model: application to protein folding

    NASA Astrophysics Data System (ADS)

    Onufriev, Alexey

    2004-03-01

An accurate description of the aqueous environment is essential for realistic biomolecular simulations, but may become very expensive computationally. We have developed a version of the Generalized Born model suitable for describing large conformational changes in macromolecules. The model represents the solvent implicitly as a continuum with the dielectric properties of water, and includes the charge-screening effects of salt. The computational cost associated with the use of this model in Molecular Dynamics simulations is generally considerably smaller than the cost of representing water explicitly. Also, compared to traditional Molecular Dynamics simulations based on an explicit water representation, conformational changes occur much faster in the implicit solvation environment due to the absence of viscosity. The combined speed-up allows one to probe conformational changes that occur on much longer effective time-scales. We apply the model to the folding of a 46-residue three-helix-bundle protein (residues 10-55 of protein A, PDB ID 1BDD). Starting from an unfolded structure at 450 K, the protein folds to the lowest-energy state in 6 ns of simulation time, which takes about a day on a 16-processor SGI machine. The predicted structure differs from the native one by 2.4 A (backbone RMSD). Analysis of the structures seen on the folding pathway reveals details of the folding process unavailable from experiment.
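    For readers unfamiliar with the model class, a minimal sketch of the common Still et al. Generalized Born functional form follows. This is the generic textbook expression, not the specific variant developed in the abstract, and it uses units in which the Coulomb constant is 1:

```python
import math

def gb_energy(charges, radii, dist, eps_solvent=78.5, eps_in=1.0):
    """Generalized Born polarization energy (Still et al. form):
    dG = -0.5 * (1/eps_in - 1/eps_solvent) * sum_ij q_i q_j / f_ij,
    f_ij = sqrt(r_ij^2 + R_i R_j * exp(-r_ij^2 / (4 R_i R_j))).
    dist[i][j] is the interatomic distance; dist[i][i] = 0 gives f_ii = R_i,
    so the diagonal terms reduce to Born self-energies."""
    pref = -0.5 * (1.0 / eps_in - 1.0 / eps_solvent)
    e = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(n):
            r2 = dist[i][j] ** 2
            rr = radii[i] * radii[j]
            f = math.sqrt(r2 + rr * math.exp(-r2 / (4.0 * rr)))
            e += charges[i] * charges[j] / f
    return pref * e

# A single unit charge with Born radius 1 recovers the Born self-energy:
dg = gb_energy([1.0], [1.0], [[0.0]], eps_solvent=80.0)
```

    The pairwise function f_ij smoothly interpolates between the Coulomb limit at large separation and the Born self-energy limit, which is what makes GB an inexpensive stand-in for solving the Poisson equation at every MD step.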

  14. Multiscale Simulations of Protein Landscapes: Using Coarse Grained Models as Reference Potentials to Full Explicit Models

    PubMed Central

    Messer, Benjamin M.; Roca, Maite; Chu, Zhen T.; Vicatos, Spyridon; Kilshtain, Alexandra Vardi; Warshel, Arieh

    2009-01-01

Evaluating the free energy landscape of proteins and the corresponding functional aspects presents a major challenge for computer simulation approaches. This challenge is due to the complexity of the landscape and the enormous computer time needed for converging simulations. The use of simplified coarse-grained (CG) folding models offers an effective way of sampling the landscape; such a treatment, however, may not give the correct description of the effect of the actual protein residues. A general way around this problem, put forward in our early work (Fan et al., Theor Chem Acc (1999) 103:77-80), uses the CG model as a reference potential for free energy calculations of different properties of the explicit model. This method is refined and extended here, focusing on improving the electrostatic treatment and on demonstrating key applications. These applications include: evaluation of changes in folding energy upon mutations, calculation of transition-state binding free energies (which are crucial for rational enzyme design), evaluation of the catalytic landscape, and simulation of the time-dependent responses to pH changes. Furthermore, the general potential of our approach for overcoming major challenges in studies of structure-function correlations in proteins is discussed. PMID:20052756

  15. Theory of particle detection and multiplicity counting with dead time effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pal, L.; Pazsit, I.

The subject of this paper is the investigation of the effect of the dead time on the statistics of the particle detection process. A theoretical treatment is provided with the application of the methods of renewal theory. The detector efficiency and various types of the dead time are accounted for. Exact analytical results are derived for the probability distribution functions, the expectations and the variances of the number of detected particles. Explicit solutions are given for a few representative cases. The results should serve for the evaluation of the measurements in view of the dead time correction effects for the higher moments of the detector counts. (authors)
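    The paper treats full distributions and higher moments; for orientation, the familiar first-moment relation for the simplest (non-paralyzable) dead time model, relating true rate n and measured rate m, can be sketched as:

```python
def measured_rate(true_rate, dead_time):
    """Non-paralyzable detector: each recorded event blocks the detector
    for tau seconds, so m = n / (1 + n * tau)."""
    return true_rate / (1.0 + true_rate * dead_time)

def corrected_rate(meas_rate, dead_time):
    """Invert the relation to recover the true rate: n = m / (1 - m * tau)."""
    return meas_rate / (1.0 - meas_rate * dead_time)

tau = 2e-6       # 2 microsecond dead time
n_true = 1e5     # true event rate, counts/s
m = measured_rate(n_true, tau)   # the detector undercounts by ~17% here
n_rec = corrected_rate(m, tau)   # the correction recovers the true rate
```

    This first-moment correction is exactly what the higher-moment results of the paper generalize: multiplicity counting needs the variance (and beyond) of the detected counts, not just the mean.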

  16. Age differences in experiential and deliberative processes in unambiguous and ambiguous decision making.

    PubMed

    Huang, Yumi H; Wood, Stacey; Berger, Dale E; Hanoch, Yaniv

    2015-09-01

Older adults experience declines in deliberative decisional capacities, while their affective or experiential abilities tend to remain intact (Peters & Bruine de Bruin, 2012). The current study used this framework to investigate age differences in description-based and experience-based decision-making tasks. Description-based tasks emphasize deliberative processing by allowing decision makers to analyze explicit descriptions of choice-reward information. Experience-based tasks emphasize affective or experiential processing because they lack the explicit choice-reward information, forcing decision makers to rely on feelings and information derived from past experiences. This study used the Columbia Card Task (CCT) as a description-based task, where probability information is provided, and the Iowa Gambling Task (IGT) as an experience-based task, where it is not. As predicted, compared to younger adults (N = 65), older adults (N = 65) performed more poorly on the CCT but performed similarly on the IGT. Deliberative capacities (i.e., executive control and numeracy abilities) explained the relationship between age and performance on the CCT, suggesting that age-related differences in description-based decision-making tasks are related to declines in deliberative capacities. However, deliberative capacities were not associated with performance on the IGT for either older or younger adults. Nevertheless, on the IGT, older adults reported more use of affect-based strategies versus deliberative strategies, whereas younger adults reported similar use of these strategies. This finding offers partial support for the idea that decision-making tasks that rely on deliberate processing are more likely to demonstrate age effects than those that are more experiential. (c) 2015 APA, all rights reserved.

  17. The Development, Description and Appraisal of an Emergent Multimethod Research Design to Study Workforce Changes in Integrated Care Interventions

    PubMed Central

    Luijkx, Katrien; Calciolari, Stefano; González-Ortiz, Laura G.

    2017-01-01

    Introduction: In this paper, we provide a detailed and explicit description of the processes and decisions underlying and shaping the emergent multimethod research design of our study on workforce changes in integrated chronic care. Theory and methods: The study was originally planned as mixed method research consisting of a preliminary literature review and quantitative check of these findings via a Delphi panel. However, when the findings of the literature review were not appropriate for quantitative confirmation, we chose to continue our qualitative exploration of the topic via qualitative questionnaires and secondary analysis of two best practice case reports. Results: The resulting research design is schematically described as an emergent and interactive multimethod design with multiphase combination timing. In doing so, we provide other researchers with a set of theory- and experience-based options to develop their own multimethod research and provide an example for more detailed and structured reporting of emergent designs. Conclusion and discussion: We argue that the terminology developed for the description of mixed methods designs should also be used for multimethod designs such as the one presented here. PMID:29042843

  18. A spatially explicit model for estimating risks of pesticide exposure on bird populations

    EPA Science Inventory

    Product Description (FY17 Key Product): Current ecological risk assessment for pesticides under FIFRA relies on risk quotients (RQs), which suffer from significant methodological shortcomings. For example, RQs do not integrate adverse effects arising from multiple demographic pr...

  19. Modular operads and the quantum open-closed homotopy algebra

    NASA Astrophysics Data System (ADS)

    Doubek, Martin; Jurčo, Branislav; Münster, Korbinian

    2015-12-01

We verify that certain algebras appearing in string field theory are algebras over the Feynman transform of modular operads, which we describe explicitly. An equivalent description in terms of solutions of generalized BV master equations is explained from the operadic point of view.

  20. Efficient and accurate numerical schemes for a hydro-dynamically coupled phase field diblock copolymer model

    NASA Astrophysics Data System (ADS)

    Cheng, Qing; Yang, Xiaofeng; Shen, Jie

    2017-07-01

    In this paper, we consider numerical approximations of a hydro-dynamically coupled phase field diblock copolymer model, in which the free energy contains a kinetic potential, a gradient entropy, a Ginzburg-Landau double well potential, and a long range nonlocal type potential. We develop a set of second order time marching schemes for this system using the "Invariant Energy Quadratization" approach for the double well potential, the projection method for the Navier-Stokes equation, and a subtle implicit-explicit treatment for the stress and convective term. The resulting schemes are linear and lead to symmetric positive definite systems at each time step, thus they can be efficiently solved. We further prove that these schemes are unconditionally energy stable. Various numerical experiments are performed to validate the accuracy and energy stability of the proposed schemes.
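    The energy-stability property at stake can be seen in a much simpler setting. The sketch below is a first-order stabilized semi-implicit step for the scalar double-well gradient flow, not the paper's second-order IEQ/projection schemes; the stabilization term is treated implicitly, the nonlinearity explicitly, so each step is linear yet the discrete energy still decays:

```python
def energy(u):
    """Ginzburg-Landau double-well energy F(u) = (u^2 - 1)^2 / 4."""
    return (u * u - 1.0) ** 2 / 4.0

def step(u, dt, S=2.0):
    """First-order stabilized semi-implicit step for u_t = -(u^3 - u):
    treat the stabilizing term S*(u_new - u_old) implicitly and the
    nonlinearity explicitly.  The update is linear in u_new, so no
    nonlinear solve is needed."""
    return u - dt * (u**3 - u) / (1.0 + dt * S)

u, dt = 0.5, 0.5
energies = [energy(u)]
for _ in range(200):
    u = step(u, dt)
    energies.append(energy(u))
# u relaxes to the well at +1 and the discrete energy decays monotonically.
```

    The IEQ approach of the paper achieves the same unconditional energy decay at second order, and with linear symmetric positive definite systems per step, by quadratizing the double-well potential with an auxiliary variable.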

  1. Mean first passage time for random walk on dual structure of dendrimer

    NASA Astrophysics Data System (ADS)

    Li, Ling; Guan, Jihong; Zhou, Shuigeng

    2014-12-01

The random walk approach has recently been widely employed to study the relations between the underlying structure and the dynamics of complex systems. The mean first-passage time (MFPT) for random walks is a key index for evaluating the transport efficiency in a given system. In this paper we study analytically the MFPT on a dual structure of the dendrimer network, the Husimi cactus, which has a different application background and a different structure (it contains loops) from the dendrimer. By making use of the iterative construction, we explicitly determine both the partial mean first-passage time (PMFPT, the average of MFPTs to a given target) and the global mean first-passage time (GMFPT, the average of MFPTs over all pairs of nodes) on the Husimi cactus. The obtained closed-form results show that the PMFPT and GMFPT follow different scalings with the network order, suggesting that the target location has an essential influence on the transport efficiency. Finally, the impact that the loop structure brings is analyzed and discussed.
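    For small graphs the MFPTs the paper computes in closed form can be obtained numerically from the standard linear system: for an unbiased discrete-time walk, T(v) = 1 + (1/deg(v)) * sum of T over the neighbors of v, with T(target) = 0. A pure-Python sketch on a path graph (illustrative only; not the Husimi cactus of the paper):

```python
def mfpt_to_target(adj, target):
    """Solve T(v) = 1 + (1/deg(v)) * sum_{w ~ v} T(w), T(target) = 0,
    by Gaussian elimination on the dense system (I - P)x = 1 restricted
    to the non-target nodes."""
    nodes = [v for v in adj if v != target]
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    A = [[0.0] * n for _ in range(n)]
    b = [1.0] * n
    for v in nodes:
        i = idx[v]
        A[i][i] = 1.0
        for w in adj[v]:
            if w != target:
                A[i][idx[w]] -= 1.0 / len(adj[v])
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (b[r] - s) / A[r][r]
    return {v: x[idx[v]] for v in nodes}

# Path graph 0 - 1 - 2 with target node 2: T(1) = 3, T(0) = 4.
path = {0: [1], 1: [0, 2], 2: [1]}
times = mfpt_to_target(path, target=2)
```

    Averaging these T(v) over starting nodes v gives the PMFPT for that target; averaging over all source-target pairs gives the GMFPT.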

  2. A multi-band, multi-level, multi-electron model for efficient FDTD simulations of electromagnetic interactions with semiconductor quantum wells

    NASA Astrophysics Data System (ADS)

    Ravi, Koustuban; Wang, Qian; Ho, Seng-Tiong

    2015-08-01

    We report a new computational model for simulations of electromagnetic interactions with semiconductor quantum well(s) (SQW) in complex electromagnetic geometries using the finite-difference time-domain method. The presented model is based on an approach of spanning a large number of electron transverse momentum states in each SQW sub-band (multi-band) with a small number of discrete multi-electron states (multi-level, multi-electron). This enables accurate and efficient two-dimensional (2-D) and three-dimensional (3-D) simulations of nanophotonic devices with SQW active media. The model includes the following features: (1) Optically induced interband transitions between various SQW conduction and heavy-hole or light-hole sub-bands are considered. (2) Novel intra sub-band and inter sub-band transition terms are derived to thermalize the electron and hole occupational distributions to the correct Fermi-Dirac distributions. (3) The terms in (2) result in an explicit update scheme which circumvents numerically cumbersome iterative procedures. This significantly augments computational efficiency. (4) Explicit update terms to account for carrier leakage to unconfined states are derived, which thermalize the bulk and SQW populations to a common quasi-equilibrium Fermi-Dirac distribution. (5) Auger recombination and intervalence band absorption are included. The model is validated by comparisons to analytic band-filling calculations, simulations of SQW optical gain spectra, and photonic crystal lasers.

  3. Cohesive phase-field fracture and a PDE constrained optimization approach to fracture inverse problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tupek, Michael R.

    2016-06-30

In recent years there has been a proliferation of modeling techniques for forward predictions of crack propagation in brittle materials, including: phase-field/gradient damage models, peridynamics, cohesive-zone models, and G/XFEM enrichment techniques. However, progress on the corresponding inverse problems has been relatively lacking. Taking advantage of key features of existing modeling approaches, we propose a parabolic regularization of Barenblatt cohesive models which borrows extensively from previous phase-field and gradient damage formulations. An efficient explicit time integration strategy for this type of nonlocal fracture model is then proposed and justified. In addition, we present a C++ computational framework for computing input parameter sensitivities efficiently for explicit dynamic problems using the adjoint method. This capability allows for solving inverse problems involving crack propagation to answer interesting engineering questions such as: 1) what is the optimal design topology and material placement for a heterogeneous structure to maximize fracture resistance, 2) what loads must have been applied to a structure for it to have failed in an observed way, 3) what are the existing cracks in a structure given various experimental observations, etc. In this work, we focus on the first of these engineering questions and demonstrate a capability to automatically and efficiently compute optimal designs intended to minimize crack propagation in structures.

  4. Proteus two-dimensional Navier-Stokes computer code, version 2.0. Volume 1: Analysis description

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Bui, Trong T.

    1993-01-01

    A computer code called Proteus 2D was developed to solve the two-dimensional planar or axisymmetric, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This is the Analysis Description, and presents the equations and solution procedure. The governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models are described in detail.
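    Each ADI sweep reduces the multidimensional implicit problem to chains of one-dimensional tridiagonal solves along grid lines. A minimal Thomas-algorithm sketch (the generic scalar version, not Proteus's block-coupled solver) shows the O(n) kernel involved:

```python
def thomas_solve(lower, diag, upper, rhs):
    """Solve a tridiagonal system in O(n) by forward elimination and back
    substitution.  lower[0] and upper[-1] are unused.  ADI schemes call a
    solver like this once per grid line, per sweep direction."""
    n = len(diag)
    c = [0.0] * n   # modified upper diagonal
    d = [0.0] * n   # modified right-hand side
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# A 1D implicit diffusion line, (I - r*D2) u_new = u_old with r = 1:
x = thomas_solve(lower=[0, -1, -1], diag=[3, 3, 3], upper=[-1, -1, 0], rhs=[1, 1, 1])
```

    Because every line solve is independent, the sweeps vectorize and parallelize naturally, which is part of why ADI remains attractive for structured-grid codes like Proteus.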

  5. Proteus three-dimensional Navier-Stokes computer code, version 1.0. Volume 1: Analysis description

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Bui, Trong T.

    1993-01-01

    A computer code called Proteus 3D has been developed to solve the three dimensional, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort has been to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation have been emphasized. The governing equations are solved in generalized non-orthogonal body-fitted coordinates by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This is the Analysis Description, and presents the equations and solution procedure. It describes in detail the governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models.

  6. How does subsurface retain and release stored water? An explicit estimation of young water fraction and mean transit time

    NASA Astrophysics Data System (ADS)

    Ameli, Ali; McDonnell, Jeffrey; Laudon, Hjalmar; Bishop, Kevin

    2017-04-01

The stable isotopes of water have served science well as hydrological tracers and have demonstrated that there is often a large component of "old" water in stream runoff. It has been more problematic to define the full transit time distribution of that stream water. Non-linear mixing of previous precipitation signals that are stored for extended periods and travel slowly through the subsurface before reaching the stream results in a large range of possible transit times. It is difficult to find tracers that can represent this, especially if all one has is data on the precipitation input and the stream runoff. In this paper, we explicitly characterize this "old water" displacement using a novel quasi-steady physically-based flow and transport model on the well-studied S-Transect hillslope in Sweden, where the concentrations of hydrological tracers in the subsurface and stream have been measured. We explore how the subsurface conductivity profile affects the characteristics of old water displacement, and then test these scenarios against the observed dynamics of conservative hydrological tracers in both the stream and the subsurface. This work explores the efficiency of convolution-based approaches in estimating the stream "young water" fraction and time-variant mean transit times. We also suggest how celerity and velocity differ with landscape structure.
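    The convolution-based approach mentioned above represents the stream tracer signal as the precipitation input convolved with a transit time distribution (TTD). A discrete sketch with an assumed exponential TTD (the mean transit time and threshold age below are illustrative values, not taken from the study):

```python
import math

def exponential_ttd(mean_tt, n_steps, dt=1.0):
    """Discrete exponential transit time distribution, normalized to sum to 1."""
    w = [math.exp(-i * dt / mean_tt) for i in range(n_steps)]
    total = sum(w)
    return [wi / total for wi in w]

def stream_signal(inputs, ttd):
    """Stream concentration = discrete convolution of the input series with the TTD."""
    out = []
    for t in range(len(inputs)):
        out.append(sum(ttd[k] * inputs[t - k] for k in range(min(t + 1, len(ttd)))))
    return out

ttd = exponential_ttd(mean_tt=10.0, n_steps=100)
# The young water fraction is the TTD mass below a threshold age
# (typically on the order of 2-3 months in the literature):
young_fraction = sum(ttd[:3])
```

    The paper's point is that the physically based model provides the TTD directly from the flow field, rather than assuming its shape as this sketch does.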

  7. A Process Model of Principal Selection.

    ERIC Educational Resources Information Center

    Flanigan, J. L.; And Others

    A process model to assist school district superintendents in the selection of principals is presented in this paper. Components of the process are described, which include developing an action plan, formulating an explicit job description, advertising, assessing candidates' philosophy, conducting interview analyses, evaluating response to stress,…

  8. [Reception of research in the natural sciences in middle Germany at the Padua University between 1770 and 1820].

    PubMed

    Breidbach, Olaf; Frigo, Gian Franco

    2004-01-01

    The German literature on natural sciences that was present in the public libraries in Padua between 1770 and 1820 is described. The citations of German authors in the publications of Paduan naturalists of that time and of the textbooks used in Padua University are outlined. German journals on natural sciences available in Venice and Padua and Italian translations of German monographs of that time are also documented. With the foundation of the Italian Empire by Napoleon, the organization of lectures and research in the University of Padua changed drastically. In consequence, the reception of chemistry and physics was exclusively directed to France. In the descriptive natural sciences the earlier German traditions prevailed. Therein, however, Paduan sciences adopted the earlier descriptive traditions that already existed at the end of the 18th century and did not respond to the new developments in German functional morphology and physiology. Jenensian naturalists, botanists and physicists who received attention in Padua around 1800 are described as part of the empiric tradition of Central Germany and not as followers of the speculative "Naturphilosophie". There is no explicit reference to romantic sciences.

  9. Underdamped scaled Brownian motion: (non-)existence of the overdamped limit in anomalous diffusion.

    PubMed

    Bodrova, Anna S; Chechkin, Aleksei V; Cherstvy, Andrey G; Safdari, Hadiseh; Sokolov, Igor M; Metzler, Ralf

    2016-07-27

    It is quite generally assumed that the overdamped Langevin equation provides a quantitative description of the dynamics of a classical Brownian particle in the long time limit. We establish and investigate a paradigm anomalous diffusion process governed by an underdamped Langevin equation with an explicit time dependence of the system temperature and thus the diffusion and damping coefficients. We show that for this underdamped scaled Brownian motion (UDSBM) the overdamped limit fails to describe the long time behaviour of the system and, for a certain range of parameter values, may practically not exist at all. Thus persistent inertial effects play a non-negligible role even at significantly long times. From this study a general question arises on the applicability of the overdamped limit to describe the long time motion of an anomalously diffusing particle, with profound consequences for the relevance of overdamped anomalous diffusion models. We elucidate our results in view of analytical and simulation results for the anomalous diffusion of particles in free cooling granular gases.
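
    The process described above can be sketched numerically. The following minimal Euler-Maruyama integrator is an illustrative assumption of this note, not the authors' code: it uses a power-law time-dependent temperature T(t) = T0 * t**(alpha - 1) with m = k_B = 1, and all parameter names are hypothetical.

```python
import numpy as np

def simulate_udsbm(alpha=0.5, gamma=1.0, T0=1.0, dt=1e-3, n_steps=10_000, seed=0):
    """Euler-Maruyama sketch of an underdamped Langevin equation with an
    explicitly time-dependent temperature T(t) = T0 * t**(alpha - 1), so the
    diffusion coefficient scales as in scaled Brownian motion (m = k_B = 1).
    Illustrative parameters only, not the paper's notation."""
    rng = np.random.default_rng(seed)
    x, v = 0.0, 0.0
    xs = np.empty(n_steps)
    for i in range(1, n_steps + 1):
        t = i * dt                        # start at t = dt to avoid t = 0
        T = T0 * t ** (alpha - 1.0)       # time-dependent temperature
        # fluctuation-dissipation: noise strength tied to gamma and T(t)
        v += -gamma * v * dt + np.sqrt(2.0 * gamma * T * dt) * rng.standard_normal()
        x += v * dt
        xs[i - 1] = x
    return xs
```

    Averaging x(t)**2 over many such trajectories would estimate the mean-squared displacement whose long-time behaviour the paper analyses; inertial effects enter through the explicit velocity variable v.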

  10. Integration of orthographic, conceptual, and episodic information on implicit and explicit tests.

    PubMed

    Weldon, M S; Massaro, D W

    1996-03-01

    An experiment was conducted to determine how orthographic and conceptual information are integrated during incidental and intentional retrieval. Subjects studied word lists with either a shallow (counting vowels) or deep (rating pleasantness) processing task, then received either an implicit or explicit word fragment completion (WFC) test. At test, word fragments contained 0, 1, 2, or 4 letters, and were accompanied by 0, 1, 2, or 3 semantically related words. On both the implicit and explicit tests, performance improved with increases in the numbers of letters and words. When semantic cues were presented with the word fragments, the implicit test became more conceptually driven. Still, conceptual processing had a larger effect in intentional than in incidental retrieval. The Fuzzy Logical Model of Perception (FLMP) provided a good description of how orthographic, semantic, and episodic information were combined during retrieval.

  11. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

    NASA Astrophysics Data System (ADS)

    Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.

    2010-10-01

    Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
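
    The numerical pitfall discussed above can be reproduced on a toy problem. The sketch below uses a hypothetical single linear reservoir, dS/dt = P - k*S, not the study's actual model, and contrasts a first-order explicit fixed-step scheme with an adaptive-step second-order (Heun) scheme of the kind the authors recommend.

```python
import numpy as np

def reservoir_rhs(S, P, k):
    """Toy linear-reservoir water balance dS/dt = P - k*S (hypothetical model)."""
    return P - k * S

def euler_fixed(S0, P, k, dt, t_end):
    """First-order explicit fixed-step integration, as traditionally used in
    conceptual rainfall-runoff codes."""
    S, t = S0, 0.0
    while t < t_end - 1e-12:
        S += dt * reservoir_rhs(S, P, k)
        t += dt
    return S

def heun_adaptive(S0, P, k, t_end, tol=1e-6):
    """Adaptive-step second-order (Heun) integration with a simple local-error
    controller, in the spirit of the schemes the study recommends."""
    S, t, dt = S0, 0.0, 0.1
    while t_end - t > 1e-12:
        dt = min(dt, t_end - t)
        k1 = reservoir_rhs(S, P, k)
        k2 = reservoir_rhs(S + dt * k1, P, k)
        err = 0.5 * dt * abs(k2 - k1)     # Euler vs Heun discrepancy
        if err <= tol:                    # accept step
            S += 0.5 * dt * (k1 + k2)
            t += dt
        # grow or shrink the step toward the error target
        dt *= min(2.0, max(0.2, 0.9 * np.sqrt(tol / max(err, 1e-16))))
    return S

def S_exact(t, S0, P, k):
    """Closed-form solution of the toy reservoir, used as a reference."""
    return P / k + (S0 - P / k) * np.exp(-k * t)
```

    With a coarse fixed step (k*dt near the stability limit) the explicit scheme oscillates and carries a large error, while the adaptive second-order scheme tracks the exact solution; this is the kind of numerical artifact that distorts the posterior in the MCMC experiments above.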

  12. Overcoming Geometry-Induced Stiffness with IMplicit-Explicit (IMEX) Runge-Kutta Algorithms on Unstructured Grids with Applications to CEM, CFD, and CAA

    NASA Technical Reports Server (NTRS)

    Kanevsky, Alex

    2004-01-01

    My goal is to develop and implement efficient, accurate, and robust Implicit-Explicit Runge-Kutta (IMEX RK) methods [9] for overcoming geometry-induced stiffness with applications to computational electromagnetics (CEM), computational fluid dynamics (CFD) and computational aeroacoustics (CAA). IMEX algorithms solve the non-stiff portions of the domain using explicit methods, and isolate and solve the more expensive stiff portions using implicit methods. Current algorithms in CEM can only simulate purely harmonic (up to 10 GHz plane wave) EM scattering by fighter aircraft, which are assumed to be pure metallic shells, and cannot handle the inclusion of coatings, penetration into, and radiation out of the aircraft. Efficient IMEX RK methods could potentially increase current CEM capabilities by 1-2 orders of magnitude, allowing scientists and engineers to attack more challenging and realistic problems.
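
    The IMEX idea can be illustrated on a scalar stiff ODE. The sketch below is a first-order IMEX (implicit-explicit Euler) split, not the higher-order IMEX RK schemes of the report: the non-stiff forcing cos(t) is advanced explicitly, while the stiff decay term -lam*y is treated implicitly, so the scheme stays stable even when dt*lam is large. The test problem and parameter names are assumptions for illustration.

```python
import numpy as np

def imex_euler(y0, lam, t_end, dt):
    """First-order IMEX splitting for y' = cos(t) - lam*y (toy stiff problem).
    Explicit in the non-stiff forcing, implicit in the stiff linear term:
        y_new = y + dt*cos(t) - dt*lam*y_new
    which solves in closed form as below."""
    y, t = y0, 0.0
    for _ in range(int(round(t_end / dt))):
        y = (y + dt * np.cos(t)) / (1.0 + dt * lam)   # implicit solve for stiff part
        t += dt
    return y
```

    A fully explicit Euler step would multiply the error by roughly (1 - dt*lam) each step and blow up for dt*lam > 2; the IMEX update instead divides by (1 + dt*lam) and remains bounded, which is the mechanism that lets IMEX methods isolate geometry-induced stiffness.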

  13. Denial of illness in schizophrenia as a disturbance of self-reflection, self-perception and insight.

    PubMed

    Bedford, Nicholas J; David, Anthony S

    2014-01-01

    A substantial proportion of schizophrenia patients deny aspects of their illness to others, which may indicate a deeper disturbance of 'insight' and a self-reflection deficit. The present study used a 'levels-of-processing' mnemonic paradigm to examine whether such patients engage in particularly brief and shallow self-reflection during mental illness-related self-evaluation. 26 schizophrenia patients with either an overall acceptance or denial of their illness and 25 healthy controls made timed decisions about the self-descriptiveness, other-person-descriptiveness and phonological properties of mental illness traits, negative traits and positive traits, before completing surprise tests of retrieval for these traits. The acceptance patients and denial patients were particularly slow in their mental illness-related self-evaluation, indicating that they both found this exercise particularly difficult. Both patient groups displayed intact recognition but particularly reduced recall for self-evaluated traits in general, possibly indicating poor organisational processing during self-reflection. Lower recall for self-evaluated mental illness traits significantly correlated with higher denial of illness and higher illness-severity. Whilst explicit and implicit measures of self-perception corresponded in the healthy controls (who displayed an intact positive>negative 'self-positivity bias') and acceptance patients (who displayed a reduced self-positivity bias), the denial patients' self-positivity bias was explicitly intact but implicitly reduced. Schizophrenia patients, regardless of their illness-attitudes, have a particular deficit in recalling new self-related information that worsens with increasing denial of illness. This deficit may contribute towards rigid self-perception and disturbed self-awareness and insight in patients with denial of illness. © 2013.

  14. The neural correlates of implicit self-relevant processing in low self-esteem: an ERP study.

    PubMed

    Yang, Juan; Guan, Lili; Dedovic, Katarina; Qi, Mingming; Zhang, Qinglin

    2012-08-30

    Previous neuroimaging studies have shown that implicit and explicit processing of self-relevant (schematic) material elicit activity in many of the same brain regions. Electrophysiological studies on the neural processing of explicit self-relevant cues have generally supported the view that P300 is an index of attention to self-relevant stimuli; however, there has been no study to date investigating the temporal course of implicit self-relevant processing. The current study seeks to investigate the time course involved in implicit self-processing by comparing processing of self-relevant with non-self-relevant words while subjects are making a judgment about the color of the words in an implicit attention task. Sixteen low self-esteem participants were examined using event-related potentials (ERP) technology. We hypothesized that this implicit attention task would involve the P2 component rather than the P300 component. Indeed, the P2 component has been associated with perceptual analysis and attentional allocation and may be more likely to occur in unconscious conditions such as this task. Results showed that the latency of the P2 component, which indexes the time required for perceptual analysis, was more prolonged in processing self-relevant words compared to processing non-self-relevant words. Our results suggested that the judgment of the color of the word interfered with automatic processing of self-relevant information and resulted in less efficient processing of self-relevant words. Together with previous ERP studies examining processing of explicit self-relevant cues, these findings suggest that the explicit and the implicit processing of self-relevant information would not elicit the same ERP components. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE's) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and of the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out. The DAE's and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.

  16. Preliminary study of the use of the STAR-100 computer for transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Keller, J. D.; Jameson, A.

    1977-01-01

    An explicit method for solving the transonic small-disturbance potential equation is presented. This algorithm, which is suitable for the new vector-processor computers such as the CDC STAR-100, is compared to successive line over-relaxation (SLOR) on a simple test problem. The convergence rate of the explicit scheme is slower than that of SLOR; however, the efficiency of the explicit scheme on the STAR-100 computer is sufficient to overcome the slower convergence rate and to allow an overall speedup compared to SLOR on the CYBER 175 computer.
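
    The trade-off described above, a fully explicit relaxation sweep that vectorizes trivially versus a faster-converging implicit line sweep, can be illustrated with a Jacobi iteration for the 2-D Laplace equation. This is a stand-in problem of my choosing, not the paper's transonic small-disturbance equation, and the grid setup is hypothetical.

```python
import numpy as np

def jacobi(n=32, tol=1e-6, max_iter=100_000):
    """Fully explicit Jacobi relaxation for the 2-D Laplace equation on an
    n-by-n grid, u = 1 on the top boundary and 0 elsewhere. Every interior
    point is updated from the previous iterate, so one sweep is a single
    vectorized array expression, the property that favored explicit schemes
    on vector processors like the STAR-100."""
    u = np.zeros((n, n))
    u[0, :] = 1.0                          # top boundary condition
    for it in range(1, max_iter + 1):
        unew = u.copy()
        unew[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                   u[1:-1, :-2] + u[1:-1, 2:])
        if np.max(np.abs(unew - u)) < tol:
            return unew, it
        u = unew
    return u, max_iter
```

    An SLOR sweep would converge in far fewer iterations, but each line solve is a recursive tridiagonal elimination that vectorizes poorly, which is exactly the balance the paper measures between the STAR-100 and the CYBER 175.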

  17. High-order noise filtering in nontrivial quantum logic gates.

    PubMed

    Green, Todd; Uys, Hermann; Biercuk, Michael J

    2012-07-13

    Treating the effects of a time-dependent classical dephasing environment during quantum logic operations poses a theoretical challenge, as the application of noncommuting control operations gives rise to both dephasing and depolarization errors that must be accounted for in order to understand total average error rates. We develop a treatment based on effective Hamiltonian theory that allows us to efficiently model the effect of classical noise on nontrivial single-bit quantum logic operations composed of arbitrary control sequences. We present a general method to calculate the ensemble-averaged entanglement fidelity to arbitrary order in terms of noise filter functions, and provide explicit expressions to fourth order in the noise strength. In the weak noise limit we derive explicit filter functions for a broad class of piecewise-constant control sequences, and use them to study the performance of dynamically corrected gates, yielding good agreement with brute-force numerics.

  18. A vectorized code for calculating laminar and turbulent hypersonic flows about blunt axisymmetric bodies at zero and small angles of attack

    NASA Technical Reports Server (NTRS)

    Kumar, A.; Graves, R. A., Jr.

    1980-01-01

    A user's guide is provided for a computer code which calculates the laminar and turbulent hypersonic flows about blunt axisymmetric bodies, such as spherically blunted cones, hyperboloids, etc., at zero and small angles of attack. The code is written in STAR FORTRAN language for the CDC-STAR-100 computer. Time-dependent, viscous-shock-layer-type equations are used to describe the flow field. These equations are solved by an explicit, two-step, time asymptotic, finite-difference method. For the turbulent flow, a two-layer, eddy-viscosity model is used. The code provides complete flow-field properties including shock location, surface pressure distribution, surface heating rates, and skin-friction coefficients. This report contains descriptions of the input and output, the listing of the program, and a sample flow-field solution.

  19. Toward a Generative Model of the Teaching-Learning Process.

    ERIC Educational Resources Information Center

    McMullen, David W.

    Until the rise of cognitive psychology, models of the teaching-learning process (TLP) stressed external rather than internal variables. Models remained general descriptions until control theory introduced explicit system analyses. Cybernetic models emphasize feedback and adaptivity but give little attention to creativity. Research on artificial…

  20. Reconnections of Wave Vortex Lines

    ERIC Educational Resources Information Center

    Berry, M. V.; Dennis, M. R.

    2012-01-01

    When wave vortices, that is nodal lines of a complex scalar wavefunction in space, approach transversely, their typical crossing and reconnection is a two-stage process incorporating two well-understood elementary events in which locally coplanar hyperbolas switch branches. The explicit description of this reconnection is a pedagogically useful…

  1. A Domain Description Language for Data Processing

    NASA Technical Reports Server (NTRS)

    Golden, Keith

    2003-01-01

    We discuss an application of planning to data processing, a planning problem which poses unique challenges for domain description languages. We discuss these challenges and why the current PDDL standard does not meet them. We discuss DPADL (Data Processing Action Description Language), a language for describing planning domains that involve data processing. DPADL is a declarative, object-oriented language that supports constraints and embedded Java code, object creation and copying, explicit inputs and outputs for actions, and metadata descriptions of existing and desired data. DPADL is supported by the IMAGEbot system, which we are using to provide automation for an ecological forecasting application. We compare DPADL to PDDL and discuss changes that could be made to PDDL to make it more suitable for representing planning domains that involve data processing actions.

  2. A Semi-implicit Treatment of Porous Media in Steady-State CFD.

    PubMed

    Domaingo, Andreas; Langmayr, Daniel; Somogyi, Bence; Almbauer, Raimund

    There are many situations in computational fluid dynamics which require the definition of source terms in the Navier-Stokes equations. These source terms not only allow the physics of interest to be modelled but also have a strong impact on the reliability, stability, and convergence of the numerics involved. Therefore, sophisticated numerical approaches exist for the description of such source terms. In this paper, we focus on the source terms present in the Navier-Stokes or Euler equations due to porous media, in particular the Darcy-Forchheimer equation. We introduce a method for the numerical treatment of the source term which is independent of the spatial discretization and based on linearization. In this description, the source term is treated in a fully implicit way whereas the other flow variables can be computed in an implicit or explicit manner. This leads to a more robust description in comparison with a fully explicit approach. The method is well suited to be combined with coarse-grid CFD on Cartesian grids, which makes it especially favorable for accelerated solution of coupled 1D-3D problems. To demonstrate the applicability and robustness of the proposed method, a proof-of-concept example in 1D, as well as more complex examples in 2D and 3D, is presented.
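
    The linearized implicit source treatment can be sketched for a single velocity component. In the toy update below, the Darcy-Forchheimer-type drag a*u + b*u*|u| is frozen at the current state to give a linear implicit equation that solves in closed form; the coefficient names a, b, f_ext are illustrative, not the paper's notation, and the scalar ODE stands in for the full momentum equation.

```python
def step_semi_implicit(u, dt, a, b, f_ext=0.0):
    """One step for du/dt = f_ext - (a*u + b*u*|u|), a Darcy-Forchheimer-type
    drag written as a source term. The drag coefficient is linearized about
    the current state and the source is treated implicitly:
        u_new = (u + dt*f_ext) / (1 + dt*(a + b*|u|))."""
    coeff = a + b * abs(u)               # linearized drag coefficient
    return (u + dt * f_ext) / (1.0 + dt * coeff)

def step_explicit(u, dt, a, b, f_ext=0.0):
    """Fully explicit update, which loses stability when dt*(a + b*|u|)
    becomes large (e.g. for low-permeability porous zones)."""
    return u + dt * (f_ext - (a + b * abs(u)) * u)
```

    With a stiff drag coefficient the explicit update overshoots and diverges, while the semi-implicit update decays monotonically at the same time step, which is the robustness gain the paper exploits.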

  3. Construction of optimal resources for concatenated quantum protocols

    NASA Astrophysics Data System (ADS)

    Pirker, A.; Wallnöfer, J.; Briegel, H. J.; Dür, W.

    2017-06-01

    We consider the explicit construction of resource states for measurement-based quantum information processing. We concentrate on special-purpose resource states that are capable of performing a certain operation or task, where we consider unitary Clifford circuits as well as non-trace-preserving completely positive maps, more specifically probabilistic operations including Clifford operations and Pauli measurements. We concentrate on 1→m and m→1 operations, i.e., operations that map one input qubit to m output qubits or vice versa. Examples of such operations include encoding and decoding in quantum error correction, entanglement purification, or entanglement swapping. We provide a general framework to construct optimal resource states for complex tasks that are combinations of these elementary building blocks. All resource states only contain input and output qubits, and are hence of minimal size. We obtain a stabilizer description of the resulting resource states, which we also translate into a circuit pattern to experimentally generate these states. In particular, we derive recurrence relations at the level of stabilizers as a key analytical tool to generate explicit (graph) descriptions of families of resource states. This allows us to explicitly construct resource states for encoding, decoding, and syndrome readout for concatenated quantum error correction codes, code switchers, multiple rounds of entanglement purification, quantum repeaters, and combinations thereof (such as resource states for entanglement purification of encoded states).

  4. Modified symplectic schemes with nearly-analytic discrete operators for acoustic wave simulations

    NASA Astrophysics Data System (ADS)

    Liu, Shaolin; Yang, Dinghui; Lang, Chao; Wang, Wenshuai; Pan, Zhide

    2017-04-01

    Using a structure-preserving algorithm significantly increases the computational efficiency of solving wave equations. However, only a few explicit symplectic schemes are available in the literature, and the capabilities of these symplectic schemes have not been sufficiently exploited. Here, we propose a modified strategy to construct explicit symplectic schemes for time advance. The acoustic wave equation is transformed into a Hamiltonian system. The classical symplectic partitioned Runge-Kutta (PRK) method is used for the temporal discretization. Additional spatial differential terms are added to the PRK schemes to form the modified symplectic methods, and two modified time-advancing symplectic methods, both with all-positive symplectic coefficients, are constructed. The spatial differential operators are approximated by nearly-analytic discrete (NAD) operators, and we call the fully discretized scheme the modified symplectic nearly analytic discrete (MSNAD) method. Theoretical analyses show that the MSNAD methods exhibit less numerical dispersion and higher stability limits than conventional methods. Three numerical experiments are conducted to verify the advantages of the MSNAD methods, such as their numerical accuracy, computational cost, stability, and long-term calculation capability.
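
    The structure-preserving idea can be illustrated with the simplest symplectic partitioned Runge-Kutta scheme, Störmer-Verlet leapfrog, applied to a harmonic oscillator. This is a one-degree-of-freedom stand-in for the semi-discretized wave equation's Hamiltonian structure, not the paper's MSNAD scheme.

```python
def verlet(q0, p0, omega, dt, n_steps):
    """Stormer-Verlet leapfrog (a symplectic partitioned Runge-Kutta scheme)
    for the harmonic oscillator H = p**2/2 + omega**2 * q**2 / 2:
    half kick, full drift, half kick."""
    q, p = q0, p0
    for _ in range(n_steps):
        p -= 0.5 * dt * omega**2 * q     # half kick
        q += dt * p                      # drift
        p -= 0.5 * dt * omega**2 * q     # half kick
    return q, p

def energy(q, p, omega):
    """Hamiltonian, which a symplectic scheme keeps bounded over long runs."""
    return 0.5 * p**2 + 0.5 * omega**2 * q**2
```

    Over very long runs the energy of the Verlet trajectory only oscillates within an O(dt**2) band instead of drifting, which is the long-term calculation capability the abstract refers to.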

  5. A simple method for finding explicit analytic transition densities of diffusion processes with general diploid selection.

    PubMed

    Song, Yun S; Steinrücken, Matthias

    2012-03-01

    The transition density function of the Wright-Fisher diffusion describes the evolution of population-wide allele frequencies over time. This function has important practical applications in population genetics, but finding an explicit formula under a general diploid selection model has remained a difficult open problem. In this article, we develop a new computational method to tackle this classic problem. Specifically, our method explicitly finds the eigenvalues and eigenfunctions of the diffusion generator associated with the Wright-Fisher diffusion with recurrent mutation and arbitrary diploid selection, thus allowing one to obtain an accurate spectral representation of the transition density function. Simplicity is one of the appealing features of our approach. Although our derivation involves somewhat advanced mathematical concepts, the resulting algorithm is quite simple and efficient, only involving standard linear algebra. Furthermore, unlike previous approaches based on perturbation, which is applicable only when the population-scaled selection coefficient is small, our method is nonperturbative and is valid for a broad range of parameter values. As a by-product of our work, we obtain the rate of convergence to the stationary distribution under mutation-selection balance.
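
    The spectral idea (eigendecompose the generator, then sum exponentially damped modes) can be sketched in a finite-state setting. The function below assumes a generic reversible generator matrix Q with stationary distribution pi, not the Wright-Fisher operator itself, and obtains the transition kernel from a symmetric eigendecomposition, using only standard linear algebra as the abstract emphasizes.

```python
import numpy as np

def transition_density_spectral(Q, pi, t):
    """Spectral representation of the transition kernel P(t) = expm(t*Q) for a
    reversible finite-state generator Q with stationary distribution pi.
    Symmetrize S = D^{1/2} Q D^{-1/2} (D = diag(pi)), eigendecompose, and sum:
        P_t(x, y) = sum_n exp(lam_n * t) * psi_n(x) * psi_n(y) * pi(y).
    Illustrative finite-state analogue of the paper's approach."""
    d = np.sqrt(pi)
    S = d[:, None] * Q / d[None, :]       # symmetric by detailed balance
    lam, V = np.linalg.eigh(S)            # real eigenvalues, orthonormal V
    psi = V / d[:, None]                  # eigenfunctions of Q
    return ((psi * np.exp(lam * t)) @ psi.T) * pi[None, :]
```

    For large t only the zero eigenvalue survives and every row of P(t) collapses to pi, giving the convergence to the stationary distribution mentioned as a by-product above.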

  6. A Simple Method for Finding Explicit Analytic Transition Densities of Diffusion Processes with General Diploid Selection

    PubMed Central

    Song, Yun S.; Steinrücken, Matthias

    2012-01-01

    The transition density function of the Wright–Fisher diffusion describes the evolution of population-wide allele frequencies over time. This function has important practical applications in population genetics, but finding an explicit formula under a general diploid selection model has remained a difficult open problem. In this article, we develop a new computational method to tackle this classic problem. Specifically, our method explicitly finds the eigenvalues and eigenfunctions of the diffusion generator associated with the Wright–Fisher diffusion with recurrent mutation and arbitrary diploid selection, thus allowing one to obtain an accurate spectral representation of the transition density function. Simplicity is one of the appealing features of our approach. Although our derivation involves somewhat advanced mathematical concepts, the resulting algorithm is quite simple and efficient, only involving standard linear algebra. Furthermore, unlike previous approaches based on perturbation, which is applicable only when the population-scaled selection coefficient is small, our method is nonperturbative and is valid for a broad range of parameter values. As a by-product of our work, we obtain the rate of convergence to the stationary distribution under mutation–selection balance. PMID:22209899

  7. D-brane instantons and the effective field theory of flux compactifications

    NASA Astrophysics Data System (ADS)

    Uranga, Angel M.

    2009-01-01

    We provide a description of the effects of fluxes on euclidean D-brane instantons purely in terms of the 4d effective action. The effect corresponds to the dressing of the effective non-perturbative 4d effective vertex with 4d flux superpotential interactions, generated when the moduli fields made massive by the flux are integrated out. The description in terms of effective field theory allows a unified description of non-perturbative effects in all flux compactifications of a given underlying fluxless model, globally in the moduli space of the latter. It also allows us to describe explicitly the effects on D-brane instantons of fluxes with no microscopic description, like non-geometric fluxes. At the more formal level, the description has interesting connections with the bulk-boundary map of open-closed two-dimensional topological string theory, and with the 𝒩 = 1 special geometry.

  8. Development and Implementation of a Transport Method for the Transport and Reaction Simulation Engine (TaRSE) based on the Godunov-Mixed Finite Element Method

    USGS Publications Warehouse

    James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael

    2009-01-01

    A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique where the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed-finite element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential exists to develop algorithms that run several advective steps, and then one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is computationally most expensive, schemes can be implemented that are more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation to the solution and with virtually no artificial diffusion.
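
    The time-splitting strategy can be sketched in one dimension: an explicit upwind advection step followed by a backward-Euler (implicit) diffusion step on a periodic grid. This is a deliberately simplified stand-in using finite differences rather than the engine's finite-volume Godunov and mixed-finite-element discretizations, and the dense linear solve below stands in for an efficient tridiagonal one.

```python
import numpy as np

def split_step(u, dx, dt, vel, D):
    """One time-split step for u_t + vel*u_x = D*u_xx on a periodic 1-D grid:
    explicit upwind advection (assumes vel >= 0 and CFL = vel*dt/dx <= 1),
    then implicit backward-Euler diffusion, (I - dt*D*L) u_new = u."""
    n = u.size
    # explicit advection step
    u = u - vel * dt / dx * (u - np.roll(u, 1))
    # implicit diffusion step: assemble periodic tridiagonal system
    r = D * dt / dx**2
    A = np.eye(n) * (1.0 + 2.0 * r)
    idx = np.arange(n)
    A[idx, (idx + 1) % n] = -r
    A[idx, (idx - 1) % n] = -r
    return np.linalg.solve(A, u)
```

    Because the diffusion step is unconditionally stable, the splitting lets the advective sub-step dictate the time step, matching the scheme's motivation of taking several cheap explicit advective steps per implicit dispersive step.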

  9. Design of price incentives for adjunct policy goals in formula funding for hospitals and health services

    PubMed Central

    Duckett, Stephen J

    2008-01-01

    Background Hospital policy involves multiple objectives: efficiency of service delivery, pursuit of high quality care, promoting access. Funding policy based on hospital casemix has traditionally been considered to be only about promoting efficiency. Discussion Formula-based funding policy can be (and has been) used to pursue a range of policy objectives, not only efficiency. These are termed 'adjunct' goals. Strategies to incorporate adjunct goals into funding design must, implicitly or explicitly, address key decision choices outlined in this paper. Summary Policy must be clear and explicit about the behaviour to be rewarded; incentives must be designed so that all facilities with an opportunity to improve have an opportunity to benefit; the reward structure must be stable and meaningful; and the funder must monitor performance and gaming. PMID:18384694

  10. Brain substrates of implicit and explicit memory: the importance of concurrently acquired neural signals of both memory types.

    PubMed

    Voss, Joel L; Paller, Ken A

    2008-11-01

    A comprehensive understanding of human memory requires cognitive and neural descriptions of memory processes along with a conception of how memory processing drives behavioral responses and subjective experiences. One serious challenge to this endeavor is that an individual memory process is typically operative within a mix of other contemporaneous memory processes. This challenge is particularly disquieting in the context of implicit memory, which, unlike explicit memory, transpires without the subject necessarily being aware of memory retrieval. Neural correlates of implicit memory and neural correlates of explicit memory are often investigated in different experiments using very different memory tests and procedures. This strategy poses difficulties for elucidating the interactions between the two types of memory process that may result in explicit remembering, and for determining the extent to which certain neural processing events uniquely contribute to only one type of memory. We review recent studies that have succeeded in separately assessing neural correlates of both implicit memory and explicit memory within the same paradigm using event-related brain potentials (ERPs) and functional magnetic resonance imaging (fMRI), with an emphasis on studies from our laboratory. The strategies we describe provide a methodological framework for achieving valid assessments of memory processing, and the findings support an emerging conceptualization of the distinct neurocognitive events responsible for implicit and explicit memory.

  11. Generalization of the Bogoliubov-Zubarev Theorem for Dynamic Pressure to the Case of Compressibility

    NASA Astrophysics Data System (ADS)

    Rudoi, Yu. G.

    2018-01-01

    We present the motivation, formulation, and modified proof of the Bogoliubov-Zubarev theorem connecting the pressure of a dynamical object with its energy within the framework of a classical description, and we obtain a generalization of this theorem to the case of dynamical compressibility. In both cases, we introduce the volume of the object into consideration using a singular addition to the Hamiltonian function of the physical object, which allows the concept of the Bogoliubov quasiaverage to be used explicitly already at the dynamical level of description. We also discuss the relation to the same result known as the Hellmann-Feynman theorem in the framework of the quantum description of a physical object.

  12. Cooperative and Integrated Vehicle and Intersection Control for Energy Efficiency (CIVIC-E²)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Yunfei; Seliman, Salaheldeen M. S.; Wang, Enshu

    Recent advances in connected vehicle technologies enable vehicles and signal controllers to cooperate and improve the traffic management at intersections. This paper explores the opportunity for cooperative and integrated vehicle and intersection control for energy efficiency (CIVIC-E 2) to contribute to a more sustainable transportation system. We propose a two-level approach that jointly optimizes the traffic signal timing and vehicles' approach speed, with the objective being to minimize total energy consumption for all vehicles passing through an isolated intersection. More specifically, at the intersection level, a dynamic programming algorithm is designed to find the optimal signal timing by explicitly considering the arrival time and energy profile of each vehicle. At the vehicle level, a model predictive control strategy is adopted to ensure that vehicles pass through the intersection in a timely fashion. Our simulation study has shown that the proposed CIVIC-E 2 system can significantly improve intersection performance under various traffic conditions. Compared with conventional fixed-time and actuated signal control strategies, the proposed algorithm can reduce energy consumption and queue length by up to 31% and 95%, respectively.

  13. Cooperative and Integrated Vehicle and Intersection Control for Energy Efficiency (CIVIC-E²)

    DOE PAGES

    Hou, Yunfei; Seliman, Salaheldeen M. S.; Wang, Enshu; ...

    2018-02-15

    Recent advances in connected vehicle technologies enable vehicles and signal controllers to cooperate and improve the traffic management at intersections. This paper explores the opportunity for cooperative and integrated vehicle and intersection control for energy efficiency (CIVIC-E²) to contribute to a more sustainable transportation system. We propose a two-level approach that jointly optimizes the traffic signal timing and vehicles' approach speed, with the objective being to minimize total energy consumption for all vehicles passing through an isolated intersection. More specifically, at the intersection level, a dynamic programming algorithm is designed to find the optimal signal timing by explicitly considering the arrival time and energy profile of each vehicle. At the vehicle level, a model predictive control strategy is adopted to ensure that vehicles pass through the intersection in a timely fashion. Our simulation study has shown that the proposed CIVIC-E² system can significantly improve intersection performance under various traffic conditions. Compared with conventional fixed-time and actuated signal control strategies, the proposed algorithm can reduce energy consumption and queue length by up to 31% and 95%, respectively.

  14. Scoping review of medical assessment units and older people with complex health needs.

    PubMed

    Rushton, Carole; Crilly, Julia; Adeleye, Adeniyi; Grealish, Laurie; Beylacq, Mandy; Forbes, Mark

    2017-03-01

    To explore current knowledge of medical assessment units (MAUs) with specific reference to older people with complex needs and to stimulate new topics and questions for future policy, research and practice. A scoping review was conducted using an integrated-latent thematic approach. This review provides a unique perspective on MAUs and older people which is framed using four themes: efficiency, effectiveness, equity and time. Eighteen articles were reviewed. Most (14) articles reported on efficiency and effectiveness while none reported explicitly on equity. Time was identified as a fourth, latent theme within the literature. Findings from this review indicate that future policy, research and practice relating to MAUs should focus on older people with complex needs, patient-centred metrics and those MAU characteristics most likely to deliver positive health outcomes to this particular cohort of patients. © 2016 AJA Inc.

  15. Making the Grade: Describing Inherent Requirements for the Initial Teacher Education Practicum

    ERIC Educational Resources Information Center

    Sharplin, Elaine; Peden, Sanna; Marais, Ida

    2016-01-01

    This study explores the development, description, and illustration of inherent requirement (IR) statements to make explicit the requirements for performance on an initial teacher education (ITE) practicum. Through consultative group processes with stakeholders involved in ITE, seven IR domains were identified. From interviews with academics,…

  16. Task Models in the Digital Ocean

    ERIC Educational Resources Information Center

    DiCerbo, Kristen E.

    2014-01-01

    The Task Model is a description of each task in a workflow. It defines attributes associated with that task. The creation of task models becomes increasingly important as the assessment tasks become more complex. Explicitly delineating the impact of task variables on the ability to collect evidence and make inferences demands thoughtfulness from…

  17. String theory embeddings of nonrelativistic field theories and their holographic Hořava gravity duals.

    PubMed

    Janiszewski, Stefan; Karch, Andreas

    2013-02-22

    We argue that generic nonrelativistic quantum field theories with a holographic description are dual to Hořava gravity. We construct explicit examples of this duality embedded in string theory by starting with relativistic dual pairs and taking a nonrelativistic scaling limit.

  18. Teacher Role Breadth and its Relationship to Student-Reported Teacher Support

    ERIC Educational Resources Information Center

    Phillippo, Kate L.; Stone, Susan

    2013-01-01

    This study capitalizes on a unique, nested data set comprised of students ("n" = 531) and teachers ("n" = 45) in three high schools that explicitly incorporated student support roles into teachers' job descriptions. Drawing from research on student-teacher relationships, teacher effects on student outcomes, and role theory,…

  19. Education for Sustainable Development: An Exploratory Study in a Portuguese University

    ERIC Educational Resources Information Center

    Torres, Ricardo; Vieira, Rui Marques; Rodrigues, Ana V.; Sá, Patrícia; Moreira, Gillian

    2017-01-01

    Purpose: The research aims to evaluate whether this educational approach is being implemented in a Portuguese public university and looking for explicit references to education for sustainable development (ESD) in the online descriptions of course units (CU). Design/methodology/approach: The research design adopted for this qualitative research…

  20. Discourse Integration Guided by the "Question under Discussion"

    ERIC Educational Resources Information Center

    Clifton, Charles, Jr.; Frazier, Lyn

    2012-01-01

    What makes a discourse coherent? One potential factor has been discussed in the linguistic literature in terms of a Question under Discussion (QUD). This approach claims that discourse proceeds by continually raising explicit or implicit questions, viewed as sets of alternatives, or competing descriptions of the world. If the interlocutor accepts…

  1. Counterfactual Thinking as a Mechanism in Narrative Persuasion

    ERIC Educational Resources Information Center

    Tal-Or, Nurit; Boninger, David S.; Poran, Amir; Gleicher, Faith

    2004-01-01

    Two experiments examined the impact of counterfactual thinking on persuasion. Participants in both experiments were exposed to short video clips in which an actor described a car accident that resulted in serious injury. In the narrative description, the salience of a counterfactual was manipulated by either explicitly including the counterfactual…

  2. The Futility of Attempting to Codify Academic Achievement Standards

    ERIC Educational Resources Information Center

    Sadler, D. Royce

    2014-01-01

    Internationally, attempts at developing explicit descriptions of academic achievement standards have been steadily intensifying. The aim has been to capture the essence of the standards in words, symbols or diagrams (collectively referred to as codifications) so that standards can be: set and maintained at appropriate levels; made broadly…

  3. Non-stoquastic Hamiltonians in quantum annealing via geometric phases

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel A.

    2017-09-01

    We argue that a complete description of quantum annealing implemented with continuous variables must take into account the non-adiabatic Aharonov-Anandan geometric phase that arises when the system Hamiltonian changes during the anneal. We show that this geometric effect leads to the appearance of non-stoquasticity in the effective quantum Ising Hamiltonians that are typically used to describe quantum annealing with flux qubits. We explicitly demonstrate the effect of this geometric non-stoquasticity when quantum annealing is performed with a system of one and two coupled flux qubits. The realization of non-stoquastic Hamiltonians has important implications from a computational complexity perspective, since it is believed that in many cases quantum annealing with stoquastic Hamiltonians can be efficiently simulated via classical algorithms such as Quantum Monte Carlo. It is well known that the direct implementation of non-stoquastic Hamiltonians with flux qubits is particularly challenging. Our results suggest an alternative path for the implementation of non-stoquasticity via geometric phases that can be exploited for computational purposes.

  4. An ontology of scientific experiments

    PubMed Central

    Soldatova, Larisa N; King, Ross D

    2006-01-01

    The formal description of experiments for efficient analysis, annotation and sharing of results is a fundamental part of the practice of science. Ontologies are required to achieve this objective. A few subject-specific ontologies of experiments currently exist. However, despite the unity of scientific experimentation, no general ontology of experiments exists. We propose the ontology EXPO to meet this need. EXPO links the SUMO (the Suggested Upper Merged Ontology) with subject-specific ontologies of experiments by formalizing the generic concepts of experimental design, methodology and results representation. EXPO is expressed in the W3C standard ontology language OWL-DL. We demonstrate the utility of EXPO and its ability to describe different experimental domains, by applying it to two experiments: one in high-energy physics and the other in phylogenetics. The use of EXPO made the goals and structure of these experiments more explicit, revealed ambiguities, and highlighted an unexpected similarity. We conclude that EXPO is of general value in describing experiments and is a step towards the formalization of science. PMID:17015305

  5. A streamlined artificial variable free version of simplex method.

    PubMed

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

    This paper proposes a streamlined form of the simplex method that offers several benefits over the traditional simplex method: it needs no artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. The method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. Later in the paper, a dual version of the new method is presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem whose initial basis is both primal and dual infeasible, our methods give the user full freedom to start with either the primal or the dual artificial-free version without reformulating the LP. Finally, the method provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.

  6. A Streamlined Artificial Variable Free Version of Simplex Method

    PubMed Central

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

    This paper proposes a streamlined form of the simplex method that offers several benefits over the traditional simplex method: it needs no artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. The method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. Later in the paper, a dual version of the new method is presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem whose initial basis is both primal and dual infeasible, our methods give the user full freedom to start with either the primal or the dual artificial-free version without reformulating the LP. Finally, the method provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement. PMID:25767883

  7. Kinetic description of large-scale low pressure glow discharges

    NASA Astrophysics Data System (ADS)

    Kortshagen, Uwe; Heil, Brian

    1997-10-01

    In recent years the so-called "nonlocal approximation" to the solution of the electron Boltzmann equation has attracted considerable attention as an extremely efficient method for the kinetic modeling of low pressure discharges. However, it appears that modern discharges, which are optimized to provide large-scale plasma uniformity, are explicitly designed to work in a regime in which the nonlocal approximation is no longer strictly valid. In the presentation we discuss results of a hybrid model, which is based on the natural division of the electron distribution function into a nonlocal body, which is determined by elastic collisions only, and a high energy part which requires a more complete treatment due to the action of inelastic collisions and wall losses of electrons. The method is applied to an inductively coupled low pressure discharge. We discuss the transition from plasma density profiles maximal on the discharge axis to plasma density profiles with off-center maxima, which has been observed in experiments. A positive feedback mechanism involved in this transition is pointed out.

  8. Simulating the control of molecular reactions via modulated light fields: from gas phase to solution

    NASA Astrophysics Data System (ADS)

    Thallmair, Sebastian; Keefer, Daniel; Rott, Florian; de Vivie-Riedle, Regina

    2017-04-01

    Over the past few years quantum control has proven to be very successful in steering molecular processes. By combining theory with experiment, even highly complex control aims were realized in the gas phase. In this topical review, we illustrate the past achievements on several examples in the molecular context. The next step for the quantum control of chemical processes is to translate the fruitful interplay between theory and experiment to the condensed phase and thus to the regime where chemical synthesis can be supported. On the theory side, increased efforts to include solvent effects in quantum control simulations were made recently. We discuss two major concepts, namely an implicit description of the environment via the density matrix algorithm and an explicit inclusion of solvent molecules. By application to chemical reactions, both concepts conclude that despite environmental perturbations leading to more complex control tasks, efficient quantum control in the condensed phase is still feasible.

  9. The connection-set algebra--a novel formalism for the representation of connectivity structure in neuronal network models.

    PubMed

    Djurfeldt, Mikael

    2012-07-01

    The connection-set algebra (CSA) is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. The algebra provides operators to form more complex sets of connections from simpler ones and also provides parameterization of such sets. CSA is expressive enough to describe a wide range of connection patterns, including multiple types of random and/or geometrically dependent connectivity, and can serve as a concise notation for network structure in scientific writing. CSA implementations allow for scalable and efficient representation of connectivity in parallel neuronal network simulators and could even allow for avoiding explicit representation of connections in computer memory. The expressiveness of CSA makes prototyping of network structure easy. A C++ version of the algebra has been implemented and used in a large-scale neuronal network simulation (Djurfeldt et al., IBM J Res Dev 52(1/2):31-42, 2008b) and an implementation in Python has been publicly released.
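The flavor of composing connectivity from simpler sets can be sketched with ordinary Python set algebra; this is a minimal illustration only, and the real CSA library is far richer (lazy evaluation, infinite sets, value parameterization):

```python
from itertools import product

def all_to_all(n_src, n_tgt):
    """Full connectivity: every (source, target) index pair."""
    return set(product(range(n_src), range(n_tgt)))

def one_to_one(n):
    """Diagonal connectivity: source i connects to target i."""
    return {(i, i) for i in range(n)}

# Operators compose by ordinary set algebra, e.g. full connectivity
# minus self-connections:
full_no_self = all_to_all(4, 4) - one_to_one(4)
```

In the same spirit, intersections and unions of such sets express geometrically or randomly constrained patterns without ever materializing a connection matrix.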

  10. Optimal protocols for slowly driven quantum systems.

    PubMed

    Zulkowski, Patrick R; DeWeese, Michael R

    2015-09-01

    The design of efficient quantum information processing will rely on optimal nonequilibrium transitions of driven quantum systems. Building on a recently developed geometric framework for computing optimal protocols for classical systems driven in finite time, we construct a general framework for optimizing the average information entropy for driven quantum systems. Geodesics on the parameter manifold endowed with a positive semidefinite metric correspond to protocols that minimize the average information entropy production in finite time. We use this framework to explicitly compute the optimal entropy production for a simple two-state quantum system coupled to a heat bath of bosonic oscillators, which has applications to quantum annealing.

  11. On the generalized VIP time integral methodology for transient thermal problems

    NASA Technical Reports Server (NTRS)

    Mei, Youping; Chen, Xiaoqin; Tamma, Kumar K.; Sha, Desong

    1993-01-01

    The paper describes the development and applicability of a generalized VIrtual-Pulse (VIP) time integral method of computation for thermal problems. With the advent of high-speed computing technology and the importance of parallel computations for the efficient use of computing environments, a major motivation for the developments described in this paper, unlike past approaches to general heat transfer computations, is the need for explicit computational procedures with improved accuracy and stability characteristics. As a consequence, a new and effective VIP methodology is described which inherits these improved characteristics. Numerical illustrative examples are provided to demonstrate the developments and validate the results obtained for thermal problems.
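The VIP formulation itself is not reproduced in this record. For orientation, a minimal sketch of the kind of baseline explicit transient thermal update such methods aim to improve upon (plain forward Euler for the 1D heat equation; all names are illustrative):

```python
def heat_step(u, alpha, dx, dt):
    """One explicit (forward Euler) update of the 1D heat equation
    u_t = alpha * u_xx with fixed boundary values; conditionally
    stable, requiring dt <= dx**2 / (2 * alpha)."""
    un = u[:]
    for i in range(1, len(u) - 1):
        un[i] = u[i] + alpha * dt / dx**2 * (u[i+1] - 2.0*u[i] + u[i-1])
    return un

# One step spreads an initial unit spike to its neighbors:
u = heat_step([0.0, 0.0, 1.0, 0.0, 0.0], 1.0, 1.0, 0.25)
```

The conditional stability of such schemes is exactly the limitation that motivates explicit methods with improved stability characteristics like the VIP approach.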

  12. Quantization and fractional quantization of currents in periodically driven stochastic systems. I. Average currents

    NASA Astrophysics Data System (ADS)

    Chernyak, Vladimir Y.; Klein, John R.; Sinitsyn, Nikolai A.

    2012-04-01

    This article studies Markovian stochastic motion of a particle on a graph with finite number of nodes and periodically time-dependent transition rates that satisfy the detailed balance condition at any time. We show that under general conditions, the currents in the system on average become quantized or fractionally quantized for adiabatic driving at sufficiently low temperature. We develop the quantitative theory of this quantization and interpret it in terms of topological invariants. By applying the celebrated Kirchhoff theorem we derive a general and explicit formula for the average generated current that serves as an efficient tool for treating the current quantization effects.

  13. Transition probabilities for general birth-death processes with applications in ecology, genetics, and evolution

    PubMed Central

    Crawford, Forrest W.; Suchard, Marc A.

    2011-01-01

    A birth-death process is a continuous-time Markov chain that counts the number of particles in a system over time. In the general process with n current particles, a new particle is born with instantaneous rate λ_n and a particle dies with instantaneous rate μ_n. Currently no robust and efficient method exists to evaluate the finite-time transition probabilities in a general birth-death process with arbitrary birth and death rates. In this paper, we first revisit the theory of continued fractions to obtain expressions for the Laplace transforms of these transition probabilities and make explicit an important derivation connecting transition probabilities and continued fractions. We then develop an efficient algorithm for computing these probabilities that analyzes the error associated with approximations in the method. We demonstrate that this error-controlled method agrees with known solutions and outperforms previous approaches to computing these probabilities. Finally, we apply our novel method to several important problems in ecology, evolution, and genetics. PMID:21984359
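The continued-fraction method of the paper is designed for robustness with arbitrary rates; as a point of reference only, here is the naive alternative it improves upon: truncate the state space and integrate the forward (master) equation p' = pQ directly. The function names and the explicit-Euler integrator are illustrative assumptions:

```python
def bd_transition_probs(lam, mu, n0, t, n_max=50, steps=2000):
    """Finite-time probabilities P(X_t = n | X_0 = n0) for a birth-death
    chain with birth rates lam(n) and death rates mu(n), computed by
    truncating the state space at n_max and taking explicit Euler steps
    on the master equation. Births out of n_max are suppressed so that
    probability mass is conserved within the truncation."""
    p = [0.0] * (n_max + 1)
    p[n0] = 1.0
    dt = t / steps
    for _ in range(steps):
        q = p[:]
        for n in range(n_max + 1):
            out = (lam(n) if n < n_max else 0.0) + mu(n)
            q[n] -= dt * out * p[n]                    # flow out of n
            if n > 0:
                q[n] += dt * lam(n - 1) * p[n - 1]     # birth into n
            if n < n_max:
                q[n] += dt * mu(n + 1) * p[n + 1]      # death into n
        p = q
    return p
```

For a pure linear death process (λ_n = 0, μ_n = n) started at one particle, this recovers P(X_t = 1) ≈ e^(−t), matching the known solution.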

  14. Efficient and stable exponential time differencing Runge-Kutta methods for phase field elastic bending energy models

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoqiang; Ju, Lili; Du, Qiang

    2016-07-01

    The Willmore flow formulated by phase field dynamics based on the elastic bending energy model has been widely used to describe the shape transformation of biological lipid vesicles. In this paper, we develop and investigate some efficient and stable numerical methods for simulating the unconstrained phase field Willmore dynamics and the phase field Willmore dynamics with fixed volume and surface area constraints. The proposed methods can be high-order accurate and are completely explicit in nature, by combining exponential time differencing Runge-Kutta approximations for time integration with spectral discretizations for spatial operators on regular meshes. We also incorporate novel linear operator splitting techniques into the numerical schemes to improve the discrete energy stability. In order to avoid extra numerical instability brought by use of large penalty parameters in solving the constrained phase field Willmore dynamics problem, a modified augmented Lagrange multiplier approach is proposed and adopted. Various numerical experiments are performed to demonstrate accuracy and stability of the proposed methods.
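The exponential time differencing idea is easiest to see in the scalar first-order (ETD1) case, far simpler than the Runge-Kutta variants on spectral grids used in the paper; this sketch, with illustrative names, shows the core mechanism of integrating the stiff linear part exactly:

```python
import math

def etd1_step(u, c, N, dt):
    """One first-order exponential time differencing (ETD1) step for
    u' = c*u + N(u): the linear part c*u is integrated exactly via
    exp(c*dt), while the nonlinearity N is held constant over the step."""
    e = math.exp(c * dt)
    return e * u + (e - 1.0) / c * N(u)
```

When N ≡ 0 the step is exact for any dt, which is why ETD schemes avoid the severe stability restriction of plain explicit methods on stiff linear operators.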

  15. The `What is a system' reflection interview as a knowledge integration activity for high school students' understanding of complex systems in human biology

    NASA Astrophysics Data System (ADS)

    Tripto, Jaklin; Ben-Zvi Assaraf, Orit; Snapir, Zohar; Amit, Miriam

    2016-03-01

    This study examined the reflection interview as a tool for assessing and facilitating the use of 'systems language' amongst 11th grade students who have recently completed their first year of high school biology. Eighty-three students composed two concept maps in the 10th grade-one at the beginning of the school year and one at its end. The first part of the interview is dedicated to guiding the students through comparing their two concept maps and by means of both explicit and non-explicit teaching. Our study showed that the explicit guidance in comparing the two concept maps was more effective than the non-explicit, eliciting a variety of different, more specific, types of interactions and patterns (e.g. 'hierarchy', 'dynamism', 'homeostasis') in the students' descriptions of the human body system. The reflection interview as a knowledge integration activity was found to be an effective tool for assessing the subjects' conceptual models of 'system complexity', and for identifying those aspects of a system that are most commonly misunderstood.

  16. A methodology to migrate the gene ontology to a description logic environment using DAML+OIL.

    PubMed

    Wroe, C J; Stevens, R; Goble, C A; Ashburner, M

    2003-01-01

    The Gene Ontology Next Generation Project (GONG) is developing a staged methodology to evolve the current representation of the Gene Ontology into DAML+OIL in order to take advantage of the richer formal expressiveness and the reasoning capabilities of the underlying description logic. Each stage provides a step level increase in formal explicit semantic content with a view to supporting validation, extension and multiple classification of the Gene Ontology. The paper introduces DAML+OIL and demonstrates the activity within each stage of the methodology and the functionality gained.

  17. Terror management theory and self-esteem revisited: the roles of implicit and explicit self-esteem in mortality salience effects.

    PubMed

    Schmeichel, Brandon J; Gailliot, Matthew T; Filardo, Emily-Ana; McGregor, Ian; Gitter, Seth; Baumeister, Roy F

    2009-05-01

    Three studies tested the roles of implicit and/or explicit self-esteem in reactions to mortality salience. In Study 1, writing about death versus a control topic increased worldview defense among participants low in implicit self-esteem but not among those high in implicit self-esteem. In Study 2, a manipulation to boost implicit self-esteem reduced the effect of mortality salience on worldview defense. In Study 3, mortality salience increased the endorsement of positive personality descriptions but only among participants with the combination of low implicit and high explicit self-esteem. These findings indicate that high implicit self-esteem confers resilience against the psychological threat of death, and therefore the findings provide direct support for a fundamental tenet of terror management theory regarding the anxiety-buffering role of self-esteem. Copyright (c) 2009 APA, all rights reserved.

  18. A Technique for Showing Causal Arguments in Accident Reports

    NASA Technical Reports Server (NTRS)

    Holloway, C. M.; Johnson, C. W.

    2005-01-01

    In the prototypical accident report, specific findings, particularly those related to causes and contributing factors, are usually written out explicitly and clearly. Also, the evidence upon which these findings are based is typically explained in detail. Often lacking, however, is any explicit discussion, description, or depiction of the arguments that connect the findings and the evidence. That is, the reports do not make clear why the investigators believe that the specific evidence they found necessarily leads to the particular findings they enumerated. This paper shows how graphical techniques can be used to depict relevant arguments supporting alternate positions on the causes of a complex road-traffic accident.

  19. A Computational Approach to Increase Time Scales in Brownian Dynamics–Based Reaction-Diffusion Modeling

    PubMed Central

    Frazier, Zachary

    2012-01-01

    Particle-based Brownian dynamics simulations offer the opportunity to simulate not only the diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle-based reaction-diffusion methods are often hampered by the relatively small time step needed for an accurate description of the reaction-diffusion framework. Such small time steps often prevent simulation times that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm that detects potential particle collisions prior to a BD-based particle displacement and at the same time rigorously obeys the detailed balance rule of equilibrium reactions. We can show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction-diffusion modelling. PMID:22697237
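The pre-displacement collision test can be sketched in one dimension for a single particle pair; the function below is an illustrative simplification (the paper's algorithm additionally enforces detailed balance for the reaction step, which is omitted here):

```python
import random

def bd_step_with_collision_check(pos_a, pos_b, D, dt, react_radius):
    """One Brownian-dynamics displacement in 1D with a pre-displacement
    collision test: if the proposed Gaussian moves would bring the two
    particles within the reaction radius, the pair is flagged for
    reaction handling instead of being blindly displaced."""
    sigma = (2.0 * D * dt) ** 0.5          # std. dev. of a BD step
    new_a = pos_a + random.gauss(0.0, sigma)
    new_b = pos_b + random.gauss(0.0, sigma)
    if abs(new_a - new_b) < react_radius:
        return pos_a, pos_b, True          # collision: defer to reaction step
    return new_a, new_b, False
```

Detecting the collision before committing the displacement is what allows the time step to grow without particles silently passing through each other's reaction volumes.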

  20. Efficient method to design RF pulses for parallel excitation MRI using gridding and conjugate gradient

    PubMed Central

    Feng, Shuo

    2014-01-01

    Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high field MRI imaging to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method with Fourier domain gridding and a conjugate gradient method. Simulation results of the proposed method show that the proposed method can design pTx pulses at an efficiency 10 times higher than that of the conventional conjugate-gradient based method, without reducing the accuracy of the desirable excitation patterns. PMID:24834420

  1. Efficient method to design RF pulses for parallel excitation MRI using gridding and conjugate gradient.

    PubMed

    Feng, Shuo; Ji, Jim

    2014-04-01

    Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high field MRI imaging to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method with Fourier domain gridding and a conjugate gradient method. Simulation results of the proposed method show that the proposed method can design pTx pulses at an efficiency 10 times higher than that of the conventional conjugate-gradient based method, without reducing the accuracy of the desirable excitation patterns.
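The conjugate gradient half of the design method is standard; a minimal dense sketch is below, with the Fourier-domain gridding step omitted. The dense-list representation is purely illustrative (a real pTx design would operate on large complex system matrices):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Plain conjugate gradient for A x = b, A symmetric positive
    definite, using dense Python lists. Returns the approximate
    solution once the squared residual norm drops below tol."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x, with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x
```

For an n-dimensional SPD system, CG converges in at most n iterations in exact arithmetic, which is part of why it is attractive for iterative pulse design.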

  2. Application of an efficient hybrid scheme for aeroelastic analysis of advanced propellers

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Sankar, N. L.; Reddy, T. S. R.; Huff, D. L.

    1989-01-01

    An efficient 3-D hybrid scheme is applied for solving Euler equations to analyze advanced propellers. The scheme treats the spanwise direction semi-explicitly and the other two directions implicitly, without affecting the accuracy, as compared to a fully implicit scheme. This leads to a reduction in computer time and memory requirement. The calculated power coefficients for two advanced propellers, SR3 and SR7L, and various advance ratios showed good correlation with experiment. Spanwise distribution of elemental power coefficient and steady pressure coefficient differences also showed good agreement with experiment. A study of the effect of structural flexibility on the performance of the advanced propellers showed that structural deformation due to centrifugal and aero loading should be included for better correlation.

  3. CERENA: ChEmical REaction Network Analyzer--A Toolbox for the Simulation and Analysis of Stochastic Chemical Kinetics.

    PubMed

    Kazeroonian, Atefeh; Fröhlich, Fabian; Raue, Andreas; Theis, Fabian J; Hasenauer, Jan

    2016-01-01

    Gene expression, signal transduction and many other cellular processes are subject to stochastic fluctuations. The analysis of these stochastic chemical kinetics is important for understanding cell-to-cell variability and its functional implications, but it is also challenging. A multitude of exact and approximate descriptions of stochastic chemical kinetics have been developed; however, tools to automatically generate the descriptions and compare their accuracy and computational efficiency are missing. In this manuscript we introduce CERENA, a toolbox for the analysis of stochastic chemical kinetics using approximations of the chemical master equation solution statistics. CERENA implements stochastic simulation algorithms and the finite state projection for microscopic descriptions of processes, the system size expansion and moment equations for meso- and macroscopic descriptions, as well as the novel conditional moment equations for a hybrid description. This unique collection of descriptions in a single toolbox facilitates the selection of appropriate modeling approaches. Unlike other software packages, the implementation of CERENA is completely general and allows, e.g., for time-dependent propensities and non-mass action kinetics. By providing SBML import, symbolic model generation and simulation using MEX-files, CERENA is user-friendly and computationally efficient. The availability of forward and adjoint sensitivity analyses allows for further studies such as parameter estimation and uncertainty analysis. The MATLAB code implementing CERENA is freely available from http://cerenadevelopers.github.io/CERENA/.
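Among the solver families listed, the stochastic simulation algorithm is the simplest to sketch; below is a minimal direct-method SSA in Python. The interface (a propensities callback and an update callback) is an illustrative assumption, not CERENA's actual MATLAB API:

```python
import random

def gillespie(propensities, update, state, t_end):
    """Minimal direct-method stochastic simulation algorithm (SSA).
    `propensities(state)` returns the list of reaction rates and
    `update(state, j)` returns the state after reaction j fires."""
    t = 0.0
    while True:
        a = propensities(state)
        a0 = sum(a)
        if a0 == 0.0:                 # no reaction can fire: absorbing state
            return state
        t += random.expovariate(a0)   # exponential waiting time to next event
        if t >= t_end:
            return state
        # pick reaction j with probability a[j] / a0
        u, j, acc = random.random() * a0, 0, a[0]
        while acc < u:
            j += 1
            acc += a[j]
        state = update(state, j)
```

For example, a pure degradation network (one reaction, propensity k·n) started at n molecules eventually absorbs at the empty state.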

  4. CERENA: ChEmical REaction Network Analyzer—A Toolbox for the Simulation and Analysis of Stochastic Chemical Kinetics

    PubMed Central

    Kazeroonian, Atefeh; Fröhlich, Fabian; Raue, Andreas; Theis, Fabian J.; Hasenauer, Jan

    2016-01-01

    Gene expression, signal transduction and many other cellular processes are subject to stochastic fluctuations. The analysis of these stochastic chemical kinetics is important for understanding cell-to-cell variability and its functional implications, but it is also challenging. A multitude of exact and approximate descriptions of stochastic chemical kinetics have been developed; however, tools to automatically generate the descriptions and compare their accuracy and computational efficiency are missing. In this manuscript we introduce CERENA, a toolbox for the analysis of stochastic chemical kinetics using approximations of the chemical master equation solution statistics. CERENA implements stochastic simulation algorithms and the finite state projection for microscopic descriptions of processes, the system size expansion and moment equations for meso- and macroscopic descriptions, as well as the novel conditional moment equations for a hybrid description. This unique collection of descriptions in a single toolbox facilitates the selection of appropriate modeling approaches. Unlike other software packages, the implementation of CERENA is completely general and allows, e.g., for time-dependent propensities and non-mass action kinetics. By providing SBML import, symbolic model generation and simulation using MEX-files, CERENA is user-friendly and computationally efficient. The availability of forward and adjoint sensitivity analyses allows for further studies such as parameter estimation and uncertainty analysis. The MATLAB code implementing CERENA is freely available from http://cerenadevelopers.github.io/CERENA/. PMID:26807911

  5. The Efficiency and the Scalability of an Explicit Operator on an IBM POWER4 System

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We present an evaluation of the efficiency and the scalability of an explicit CFD operator on an IBM POWER4 system. The POWER4 architecture exhibits a common trend in HPC architectures: boosting CPU processing power by increasing the number of functional units, while hiding the latency of memory access by increasing the depth of the memory hierarchy. The overall machine performance depends on the ability of the caches-buses-fabric-memory subsystem to feed the functional units with the data to be processed. The operator under study performs computations at the points of a Cartesian grid and involves a few dozen floating point numbers and on the order of 100 floating point operations per grid point; the computations at all grid points are independent. Specifically, we estimate the efficiency of the RHS operator (from the SP benchmark of the NAS Parallel Benchmarks) on a single processor as the observed/peak performance ratio. We then estimate the scalability of the operator on a single chip (2 CPUs), a single MCM (8 CPUs), 16 CPUs, and the whole machine (32 CPUs), and repeat the same measurements for a cache-optimized version of the RHS operator. For our measurements we use the HPM (Hardware Performance Monitor) counters available on the POWER4, which allow us to analyze the obtained performance results.
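
    The single-processor efficiency metric used here is simply the ratio of observed to peak floating-point rate. A minimal sketch (all numbers below are hypothetical stand-ins, not measurements from the paper):

```python
def efficiency(flop_count, elapsed_s, peak_flops):
    """Observed/peak performance ratio used to gauge how well the
    memory hierarchy feeds the functional units."""
    observed = flop_count / elapsed_s
    return observed / peak_flops

# Hypothetical numbers: ~100 flops per grid point on a 64^3 grid,
# 0.01 s per sweep, against an assumed 5.2 Gflop/s peak per CPU.
grid_points = 64 ** 3
eff = efficiency(flop_count=100 * grid_points, elapsed_s=0.01,
                 peak_flops=5.2e9)
```

    In practice the flop count and elapsed time would come from the HPM counters rather than being assumed.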

  6. Rigorous RG Algorithms and Area Laws for Low Energy Eigenstates in 1D

    NASA Astrophysics Data System (ADS)

    Arad, Itai; Landau, Zeph; Vazirani, Umesh; Vidick, Thomas

    2017-11-01

    One of the central challenges in the study of quantum many-body systems is the complexity of simulating them on a classical computer. A recent advance (Landau et al. in Nat Phys, 2015) gave a polynomial time algorithm to compute a succinct classical description for unique ground states of gapped 1D quantum systems. Despite this progress many questions remained unsolved, including whether there exist efficient algorithms when the ground space is degenerate (and of polynomial dimension in the system size), or for the polynomially many lowest energy states, or even whether such states admit succinct classical descriptions or area laws. In this paper we give a new algorithm, based on a rigorously justified RG type transformation, for finding low energy states for 1D Hamiltonians acting on a chain of n particles. In the process we resolve some of the aforementioned open questions, including giving a polynomial time algorithm for poly(n) degenerate ground spaces and an n^{O(log n)} algorithm for the poly(n) lowest energy states (under a mild density condition). For these classes of systems the existence of a succinct classical description and area laws were not rigorously proved before this work. The algorithms are natural and efficient, and for the case of finding unique ground states for frustration-free Hamiltonians the running time is Õ(n·M(n)), where M(n) is the time required to multiply two n × n matrices.

  7. Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brabec, Jiri; Lin, Lin; Shao, Meiyue

    We present two iterative algorithms for approximating the absorption spectrum of molecules within the linear-response time-dependent density functional theory (TDDFT) framework. These methods do not attempt to compute eigenvalues or eigenvectors of the linear response matrix; they are designed to approximate the absorption spectrum as a function directly, taking advantage of the special structure of the linear response matrix. Neither method requires the linear response matrix to be constructed explicitly; they only require a procedure that performs the multiplication of the linear response matrix with a vector. These methods can also be easily modified to efficiently estimate the density of states (DOS) of the linear response matrix without computing its eigenvalues. We show by computational experiments that the methods proposed in this paper can be much more efficient than methods based on the exact diagonalization of the linear response matrix, and that they can also be more efficient than real-time TDDFT simulations. We compare the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost.
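
    The structural requirement here, access to the matrix only through matrix-vector products, is shared by classic iterative eigensolvers. A toy power iteration in Python makes the pattern concrete (the 2x2 matrix is a stand-in for illustration, not a TDDFT response matrix):

```python
def matvec(v):
    """Matrix-vector product with A = [[2, 1], [1, 2]], supplied as a
    procedure so the matrix is never formed explicitly."""
    return [2 * v[0] + 1 * v[1], 1 * v[0] + 2 * v[1]]

def power_iteration(apply_a, v, iters=200):
    """Estimate the dominant eigenvalue using only matvec calls."""
    for _ in range(iters):
        w = apply_a(v)
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    w = apply_a(v)
    # Rayleigh quotient v^T A v / v^T v
    num = sum(a * b for a, b in zip(v, w))
    den = sum(a * a for a in v)
    return num / den

lam = power_iteration(matvec, [1.0, 0.0])
```

    The algorithms in the paper are more sophisticated (they target the spectrum as a function, not single eigenvalues), but they consume the matrix through exactly this kind of interface.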

  8. A qualitative analysis of health professionals' job descriptions for surgical service delivery in Uganda.

    PubMed

    Buwembo, William; Munabi, Ian G; Galukande, Moses; Kituuka, Olivia; Luboga, Samuel A

    2014-01-01

    The ever increasing demand for surgical services in sub-Saharan Africa is creating a need to increase the number of health workers able to provide surgical care. This calls for the optimisation of all available human resources to provide universal access to essential and emergency surgical services. One way of optimising already scarce human resources for health is by clarifying job descriptions to guide the scope of practice, the rewards/benefits for the health workers providing surgical care, and the education and training of health professionals. This study set out to determine the scope of the mandate to perform surgical procedures in current job descriptions of surgical care health professionals in Uganda. A document review was conducted of job descriptions for the health professionals responsible for surgical service delivery in the Ugandan health care system. The job descriptions were extracted and subjected to a qualitative content analysis approach using the text-based RQDA package of the open source R statistical computing software. It was observed that there was no explicit mention of assignment of delivery of surgical services to a particular cadre. Instead the bulk of direct patient-related care, including surgical attention, was assigned to the lower cadres, in particular the medical officer. Senior cadres were assigned predominantly advisory and managerial roles in the health care system. In addition, a no-cost opportunity to task-shift surgical service delivery to the senior clinical officers was identified. There is a need to specifically assign the mandate to provide surgical care tasks, according to degree of complexity, to adequately trained cadres of health workers. Such deliberate assignment of mandates would provide a means of increasing surgical service delivery through further optimisation of the available human resources for health. Health professionals' current job descriptions are not explicit, and therefore do not adequately support proper training, deployment, a defined scope of practice, and remuneration for equitable surgical service delivery in Uganda.

  9. Are mixed explicit/implicit solvation models reliable for studying phosphate hydrolysis? A comparative study of continuum, explicit and mixed solvation models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamerlin, Shina C. L.; Haranczyk, Maciej; Warshel, Arieh

    2009-05-01

    Phosphate hydrolysis is ubiquitous in biology. However, despite intensive research on this class of reactions, the precise nature of the reaction mechanism remains controversial. In this work, we have examined the hydrolysis of three homologous phosphate diesters. The solvation free energy was simulated by means of either an implicit solvation model (COSMO), hybrid quantum mechanical/molecular mechanical free energy perturbation (QM/MM-FEP), or a mixed solvation model in which N water molecules were explicitly included in the ab initio description of the reacting system (where N = 1-3), with the remainder of the solvent being modelled implicitly as a continuum. Here, both COSMO and QM/MM-FEP reproduce ΔG_obs within an error of about 2 kcal/mol. However, we demonstrate that in order to obtain any form of reliable results from a mixed model, it is essential to carefully select the explicit water molecules from short QM/MM runs that act as a model for the true infinite system. Additionally, the mixed models tend to become increasingly inaccurate the more explicit water molecules are placed into the system. Thus, our analysis indicates that this approach provides an unreliable way of modelling phosphate hydrolysis in solution.

  10. Representing Medical Knowledge in the Form of Structured Text: The Development of Current Disease Descriptions*

    PubMed Central

    Nelson, Stuart J.; Sherertz, David D.; Erlbaum, Mark S.; Tuttle, Mark S.

    1989-01-01

    As part of the Unified Medical Language System (UMLS) initiative, some 900 diseases have been described using “structured text”: words and short phrases entered under labelled contexts. Vocabulary is not controlled. The contexts comprise a template for the disease description. The structured text is both manipulable by machine and readable by humans. Use of the template was natural, and only a few problems arose in its use. Instructions to disease description composers must be explicit in defining the contexts. Diseases to be described are chosen, after clustering related diseases, according to the distinctions that physicians practicing in the area in question believe are important. Limiting disease descriptions to primitive observations and to entities otherwise described within the corpus appears to be both feasible and desirable.

  11. Dependence of Hurricane intensity and structures on vertical resolution and time-step size

    NASA Astrophysics Data System (ADS)

    Zhang, Da-Lin; Wang, Xiaoxue

    2003-09-01

    In view of the growing interest in the explicit modeling of clouds and precipitation, the effects of varying vertical resolution and time-step size on the 72-h explicit simulation of Hurricane Andrew (1992) are studied using the Pennsylvania State University/National Center for Atmospheric Research (PSU/NCAR) mesoscale model (i.e., MM5) with a finest grid size of 6 km. It is shown that changing vertical resolution and time-step size has significant effects on hurricane intensity and inner-core clouds/precipitation, but little impact on the hurricane track. In general, increasing vertical resolution tends to produce a deeper storm with lower central pressure, stronger three-dimensional winds, and more precipitation. Similar effects, but to a lesser extent, occur when the time-step size is reduced. It is found that increasing the low-level vertical resolution is more efficient in intensifying a hurricane, whereas changing the upper-level vertical resolution has little impact on the hurricane intensity. Moreover, the use of a thicker surface layer tends to produce higher maximum surface winds. It is concluded that the use of higher vertical resolution, a thin surface layer, and smaller time-step sizes, along with higher horizontal resolution, is desirable for modeling more realistically the intensity, inner-core structures, and evolution of tropical storms as well as other convectively driven weather systems.

  12. BSIFT: toward data-independent codebook for large scale image search.

    PubMed

    Zhou, Wengang; Li, Houqiang; Hong, Richang; Lu, Yijuan; Tian, Qi

    2015-03-01

    The Bag-of-Words (BoW) model based on the Scale Invariant Feature Transform (SIFT) has been widely used in large-scale image retrieval applications. Feature quantization by vector quantization plays a crucial role in the BoW model: it generates visual words from the high-dimensional SIFT features, so as to adapt to the inverted file structure for scalable retrieval. Traditional feature quantization approaches suffer from several issues, such as the necessity of visual codebook training, limited reliability, and update inefficiency. To avoid these problems, in this paper, a novel feature quantization scheme is proposed to efficiently quantize each SIFT descriptor to a descriptive and discriminative bit-vector, called binary SIFT (BSIFT). Our quantizer is independent of image collections. In addition, by taking the first 32 bits of BSIFT as the code word, the generated BSIFT naturally lends itself to the classic inverted file structure for image indexing. Moreover, the quantization error is reduced by feature filtering, code word expansion, and query-sensitive mask shielding. Without any explicit codebook for quantization, our approach can be readily applied to image search in resource-limited scenarios. We evaluate the proposed algorithm for large-scale image search on two public image data sets. Experimental results demonstrate the index efficiency and retrieval accuracy of our approach.
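
    The flavor of the scheme, thresholding a real-valued descriptor into bits and using a prefix of the bits as an index key, can be sketched as follows (the median threshold and the toy descriptor are illustrative assumptions; the paper's exact binarization rule may differ):

```python
def binarize_descriptor(desc):
    """Quantize a real-valued descriptor to a bit-vector by thresholding
    each dimension against the descriptor's own median (a simplified
    stand-in for the BSIFT binarization)."""
    med = sorted(desc)[len(desc) // 2]
    return [1 if v > med else 0 for v in desc]

def code_word(bits, n=32):
    """Pack the first n bits into an integer code word for indexing."""
    word = 0
    for b in bits[:n]:
        word = (word << 1) | b
    return word

desc = [0.1 * ((7 * i) % 13) for i in range(128)]  # toy 128-D descriptor
bits = binarize_descriptor(desc)
word = code_word(bits)
```

    Because the bit-vector is computed per descriptor, no codebook training over an image collection is needed, which is the data-independence the title refers to.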

  13. Integrating spatially explicit representations of landscape perceptions into land change research

    USGS Publications Warehouse

    Dorning, Monica; Van Berkel, Derek B.; Semmens, Darius J.

    2017-01-01

    Purpose of Review: Human perceptions of the landscape can influence land-use and land-management decisions. Recognizing the diversity of landscape perceptions across space and time is essential to understanding land change processes and emergent landscape patterns. We summarize the role of landscape perceptions in the land change process, demonstrate advances in quantifying and mapping landscape perceptions, and describe how these spatially explicit techniques have benefited and may benefit land change research. Recent Findings: Mapping landscape perceptions is becoming increasingly common, particularly in research focused on quantifying ecosystem services provision. Spatial representations of landscape perceptions, often measured in terms of landscape values and functions, provide an avenue for matching social and environmental data in land change studies. Integrating these data can provide new insights into land change processes, contribute to landscape planning strategies, and guide the design and implementation of land change models. Summary: Challenges remain in creating spatial representations of human perceptions. Maps must be accompanied by descriptions of whose perceptions are being represented and the validity and uncertainty of those representations across space. With these considerations, rapid advancements in mapping landscape perceptions hold great promise for improving representation of human dimensions in landscape ecology and land change research.

  14. Scalable Preconditioners for Structure Preserving Discretizations of Maxwell Equations in First Order Form

    DOE PAGES

    Phillips, Edward Geoffrey; Shadid, John N.; Cyr, Eric C.

    2018-05-01

    Here, we report that multiple physical time-scales can arise in electromagnetic simulations when dissipative effects are introduced through boundary conditions, when currents follow external time-scales, and when material parameters vary spatially. In such scenarios, the time-scales of interest may be much slower than the fastest time-scales supported by the Maxwell equations, making implicit time integration an efficient approach. The use of implicit temporal discretizations results in linear systems in which fast time-scales, which severely constrain the stability of an explicit method, can manifest as so-called stiff modes. This study proposes a new block preconditioner for structure preserving (also termed physics compatible) discretizations of the Maxwell equations in first order form. The intent of the preconditioner is to enable the efficient solution of multiple-time-scale Maxwell type systems. An additional benefit of the developed preconditioner is that it requires only a traditional multigrid method for its subsolves, and it compares well against alternative approaches that rely on specialized edge-based multigrid routines that may not be readily available. Lastly, results demonstrate parallel scalability at large electromagnetic wave CFL numbers on a variety of test problems.
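
    The motivation for implicit time integration in the presence of stiff modes can be seen on a scalar model problem: forward Euler blows up once the step exceeds its stability limit, while backward Euler damps the mode for any step size. A minimal sketch (the scalar ODE is a stand-in for the stiff Maxwell modes discussed above):

```python
def explicit_euler(lmbda, y0, dt, steps):
    """Forward Euler: y_{n+1} = (1 + dt*lmbda) * y_n; unstable once
    |1 + dt*lmbda| > 1."""
    y = y0
    for _ in range(steps):
        y = (1.0 + dt * lmbda) * y
    return y

def implicit_euler(lmbda, y0, dt, steps):
    """Backward Euler: y_{n+1} = y_n / (1 - dt*lmbda); stable for any
    dt when Re(lmbda) < 0."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 - dt * lmbda)
    return y

# A stiff mode y' = -1000 y, integrated with a time step far beyond
# the explicit stability limit dt < 2/1000.
y_exp = explicit_euler(-1000.0, 1.0, dt=0.01, steps=50)
y_imp = implicit_euler(-1000.0, 1.0, dt=0.01, steps=50)
```

    The price of the implicit update is a (here trivial, in general large) linear solve per step, which is exactly what the proposed preconditioner targets.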

  16. A family of compact high order coupled time-space unconditionally stable vertical advection schemes

    NASA Astrophysics Data System (ADS)

    Lemarié, Florian; Debreu, Laurent

    2016-04-01

    Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time-step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e., most of the grid points are integrated with Courant numbers small compared to the Courant-Friedrichs-Lewy (CFL) condition, except for just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while remaining accurate over a wide range of Courant numbers. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e., mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical CFL restriction, while avoiding the numerical inaccuracies associated with standard implicit advection schemes (i.e., large sensitivity of the solution to the Courant number, large phase delay, and possibly excess numerical damping with unphysical orientation). Most regional oceanic models have been successfully using fourth order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers, at a very reasonable computational cost.
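
    The unconditional stability that implicit vertical advection buys can be illustrated with the simplest member of the family, a backward-Euler upwind scheme (first order only, unlike the high-order compact schemes of the talk; the grid, inflow profile, and Courant number are illustrative):

```python
def implicit_upwind_step(u, c):
    """One backward-Euler upwind step for u_t + a u_x = 0 with Courant
    number c = a*dt/dx > 0 and the inflow value u[0] held fixed.
    Solves (1 + c) u_new[i] - c u_new[i-1] = u[i] by a forward sweep;
    each new value is a convex combination of old and upstream values,
    so the scheme is unconditionally stable (though diffusive)."""
    new = u[:]
    for i in range(1, len(u)):
        new[i] = (u[i] + c * new[i - 1]) / (1.0 + c)
    return new

# Courant number 5: far beyond the explicit CFL limit of 1, yet the
# solution stays bounded between the initial extremes.
u = [1.0] + [0.0] * 49
for _ in range(5):
    u = implicit_upwind_step(u, c=5.0)
```

    The accuracy loss of such low-order implicit schemes at large Courant numbers (phase delay, damping) is precisely what the coupled time-space compact schemes are designed to avoid.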

  17. Individual Differences at High Perceptual Load: The Relation between Trait Anxiety and Selective Attention

    PubMed Central

    Sadeh, Naomi; Bredemeier, Keith

    2010-01-01

    Attentional control theory (Eysenck et al., 2007) posits that taxing attentional resources impairs performance efficiency in anxious individuals. This theory, however, does not explicitly address if or how the relation between anxiety and attentional control depends upon the perceptual demands of the task at hand. Consequently, the present study examined the relation between trait anxiety and task performance using a perceptual load task (Maylor & Lavie, 1998). Sixty-eight male college students completed a visual search task that indexed processing of irrelevant distractors systematically across four levels of perceptual load. Results indicated that anxiety was related to difficulty suppressing the behavioral effects of irrelevant distractors (i.e., decreased reaction time efficiency) under high, but not low, perceptual loads. In contrast, anxiety was not associated with error rates on the task. These findings are consistent with the prediction that anxiety is associated with impairments in performance efficiency under conditions that tax attentional resources. PMID:21547776

  18. Individual differences at high perceptual load: the relation between trait anxiety and selective attention.

    PubMed

    Sadeh, Naomi; Bredemeier, Keith

    2011-06-01

    Attentional control theory (Eysenck et al., 2007) posits that taxing attentional resources impairs performance efficiency in anxious individuals. This theory, however, does not explicitly address if or how the relation between anxiety and attentional control depends upon the perceptual demands of the task at hand. Consequently, the present study examined the relation between trait anxiety and task performance using a perceptual load task (Maylor & Lavie, 1998). Sixty-eight male college students completed a visual search task that indexed processing of irrelevant distractors systematically across four levels of perceptual load. Results indicated that anxiety was related to difficulty suppressing the behavioural effects of irrelevant distractors (i.e., decreased reaction time efficiency) under high, but not low, perceptual loads. In contrast, anxiety was not associated with error rates on the task. These findings are consistent with the prediction that anxiety is associated with impairments in performance efficiency under conditions that tax attentional resources.

  19. Nonrelativistic fluids on scale covariant Newton-Cartan backgrounds

    NASA Astrophysics Data System (ADS)

    Mitra, Arpita

    2017-12-01

    The nonrelativistic covariant framework for fields is extended to investigate fields and fluids on scale covariant curved backgrounds. The scale covariant Newton-Cartan background is constructed using the localization of space-time symmetries of nonrelativistic fields in flat space. Following this, we provide a Weyl covariant formalism which can be used to study scale invariant fluids. By considering ideal fluids as an example, we describe its thermodynamic and hydrodynamic properties and explicitly demonstrate that it satisfies the local second law of thermodynamics. As a further application, we consider the low energy description of Hall fluids. Specifically, we find that the gauge fields for scale transformations lead to corrections of the Wen-Zee and Berry phase terms contained in the effective action.

  20. Physical retrieval of precipitation water contents from Special Sensor Microwave/Imager (SSM/I) data. Part 1: A cloud ensemble/radiative parameterization for sensor response (report version)

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Raymond, William H.

    1990-01-01

    The physical retrieval of geophysical parameters based upon remotely sensed data requires a sensor response model which relates the upwelling radiances that the sensor observes to the parameters to be retrieved. In the retrieval of precipitation water contents from satellite passive microwave observations, the sensor response model has two basic components. First, a description of the radiative transfer of microwaves through a precipitating atmosphere must be considered, because it is necessary to establish the physical relationship between precipitation water content and upwelling microwave brightness temperature. Also, the spatial response of the satellite microwave sensor (or antenna pattern) must be included in the description of sensor response, since precipitation and the associated brightness temperature field can vary over a typical microwave sensor resolution footprint. A 'population' of convective cells, as well as stratiform clouds, is simulated using a computationally efficient multi-cylinder cloud model. Ensembles of clouds selected at random from the population, distributed over a 25 km x 25 km model domain, serve as the basis for radiative transfer calculations of upwelling brightness temperatures at the SSM/I frequencies. Sensor spatial response is treated explicitly by convolving the upwelling brightness temperature with the domain-integrated SSM/I antenna patterns. The sensor response model is utilized in precipitation water content retrievals.
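
    The convolution step can be sketched in one dimension: smooth a synthetic brightness-temperature field with a normalized antenna gain (the Gaussian beam shape, beam width, and temperatures below are illustrative assumptions, not SSM/I values):

```python
import math

def gaussian_antenna_pattern(half_width_km, radius_km, dx_km):
    """Discrete 1-D Gaussian antenna gain, normalized to unit sum.
    half_width_km is the half-power half-width (hypothetical value)."""
    sigma = half_width_km / math.sqrt(2.0 * math.log(2.0))
    n = int(radius_km / dx_km)
    xs = [i * dx_km for i in range(-n, n + 1)]
    g = [math.exp(-0.5 * (x / sigma) ** 2) for x in xs]
    s = sum(g)
    return [v / s for v in g]

def convolve_same(field, kernel):
    """Brightness temperatures smoothed by the antenna pattern
    (edges handled by renormalizing over the available overlap)."""
    half = len(kernel) // 2
    out = []
    for i in range(len(field)):
        acc, wsum = 0.0, 0.0
        for k, w in enumerate(kernel):
            j = i + k - half
            if 0 <= j < len(field):
                acc += w * field[j]
                wsum += w
        out.append(acc / wsum)
    return out

# A 25-km rain cell (280 K) in a 260 K background, on a 1-km grid.
tb = [260.0] * 40 + [280.0] * 25 + [260.0] * 40
kernel = gaussian_antenna_pattern(half_width_km=15.0, radius_km=30.0, dx_km=1.0)
tb_sensed = convolve_same(tb, kernel)
```

    Because the footprint is wider than the rain cell, the sensed peak temperature is lower than the true 280 K, which is exactly the beam-filling effect the retrieval must account for.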

  1. Efficiency in nonequilibrium molecular dynamics Monte Carlo simulations

    DOE PAGES

    Radak, Brian K.; Roux, Benoît

    2016-10-07

    Hybrid algorithms combining nonequilibrium molecular dynamics and Monte Carlo (neMD/MC) offer a powerful avenue for improving the sampling efficiency of computer simulations of complex systems. These neMD/MC algorithms are also increasingly finding use in applications where conventional approaches are impractical, such as constant-pH simulations with explicit solvent. However, selecting an optimal nonequilibrium protocol for maximum efficiency often represents a non-trivial challenge. This work evaluates the efficiency of a broad class of neMD/MC algorithms and protocols within the theoretical framework of linear response theory. The approximations are validated against constant-pH MD simulations and shown to provide accurate predictions of neMD/MC performance. An assessment of a large set of protocols confirms (both theoretically and empirically) that a linear work protocol gives the best neMD/MC performance. Lastly, a well-defined criterion for optimizing the time parameters of the protocol is proposed and demonstrated with an adaptive algorithm that improves the performance on-the-fly with minimal cost.
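
    At its core, an neMD/MC move is accepted or rejected with a Metropolis criterion applied to the nonequilibrium work of the switching protocol. A simplified sketch (symmetric-protocol form; the full algorithm involves momentum-reversal bookkeeping not shown here):

```python
import math
import random

def nemdmc_accept(work, beta, rng):
    """Metropolis-style acceptance of a nonequilibrium MD/MC move:
    accept with probability min(1, exp(-beta * W)), where W is the
    work accumulated over the switching protocol."""
    return rng.random() < min(1.0, math.exp(-beta * work))

rng = random.Random(0)
# Downhill moves (W <= 0) are always accepted; strongly uphill moves
# are essentially never accepted.
always = all(nemdmc_accept(-1.0, 1.0, rng) for _ in range(100))
uphill_accepted = sum(1 for _ in range(100) if nemdmc_accept(50.0, 1.0, rng))
```

    The protocol optimization question in the paper is then: how should the work W be scheduled in time so that the average acceptance, per unit simulation cost, is maximized.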

  2. Unified cosmic history in modified gravity: From F(R) theory to Lorentz non-invariant models

    NASA Astrophysics Data System (ADS)

    Nojiri, Shin'Ichi; Odintsov, Sergei D.

    2011-08-01

    The classical generalization of general relativity is considered as the gravitational alternative for a unified description of early-time inflation with late-time cosmic acceleration. The structure and cosmological properties of a number of modified theories, including traditional F(R) and Hořava-Lifshitz F(R) gravity, scalar-tensor theory, string-inspired and Gauss-Bonnet theory, non-local gravity, non-minimally coupled models, and power-counting renormalizable covariant gravity, are discussed. Different representations of and relations between such theories are investigated. It is shown that some versions of the above theories may be consistent with local tests and may provide a qualitatively reasonable unified description of inflation with the dark energy epoch. The cosmological reconstruction of different modified gravities is provided in great detail. It is demonstrated that essentially any given universe evolution may be reconstructed for the theories under consideration, and the explicit reconstruction is applied to an accelerating spatially flat Friedmann-Robertson-Walker (FRW) universe. Special attention is paid to Lagrange-multiplier-constrained and conventional F(R) gravities; for the latter, the effective ΛCDM era and phantom divide crossing acceleration are obtained. The occurrences of the Big Rip and other finite-time future singularities in modified gravity are reviewed along with their solutions via the addition of higher-derivative gravitational invariants.

  3. Multidimensional, fully implicit, exactly conserving electromagnetic particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Chacon, Luis

    2015-09-01

    We discuss a new, conservative, fully implicit 2D-3V particle-in-cell algorithm for non-radiative, electromagnetic kinetic plasma simulations, based on the Vlasov-Darwin model. Unlike earlier linearly implicit PIC schemes and standard explicit PIC schemes, fully implicit PIC algorithms are unconditionally stable and allow exact discrete energy and charge conservation. This has been demonstrated in 1D electrostatic and electromagnetic contexts. In this study, we build on these recent algorithms to develop an implicit, orbit-averaged, time-space-centered finite difference scheme for the Darwin field and particle orbit equations for multiple species in multiple dimensions. The Vlasov-Darwin model is very attractive for PIC simulations because it avoids radiative noise issues in non-radiative electromagnetic regimes. The algorithm conserves global energy, local charge, and particle canonical momentum exactly, even with grid packing. The nonlinear iteration is effectively accelerated with a fluid preconditioner, which allows efficient use of large timesteps, O(√(m_i/m_e) c/v_Te) larger than the explicit CFL limit. In this presentation, we will introduce the main algorithmic components of the approach, and demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 1D and 2D. Support from the LANL LDRD program and the DOE-SC ASCR office.
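
    Why the choice of time integrator matters for energy conservation can be seen on a harmonic oscillator: plain forward Euler gains energy every step, while a scheme that respects the problem's structure keeps the energy bounded. The sketch below uses semi-implicit (symplectic) Euler merely as a simple stand-in; it is not the exactly conserving fully implicit scheme of the abstract:

```python
def explicit_euler_oscillator(x, v, dt, steps):
    """Forward Euler for x'' = -x; the energy grows by (1 + dt^2)
    every step, so it diverges."""
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x
    return x, v

def semi_implicit_euler_oscillator(x, v, dt, steps):
    """Semi-implicit (symplectic) Euler: update v first, then use the
    new v for x; the energy stays bounded for all time."""
    for _ in range(steps):
        v = v - dt * x
        x = x + dt * v
    return x, v

def energy(x, v):
    return 0.5 * (x * x + v * v)

x1, v1 = explicit_euler_oscillator(1.0, 0.0, dt=0.1, steps=1000)
x2, v2 = semi_implicit_euler_oscillator(1.0, 0.0, dt=0.1, steps=1000)
```

    Fully implicit, time-centered discretizations of the kind described above go further: with a suitable particle push they conserve the discrete energy exactly rather than merely keeping it bounded.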

  4. Spatially-explicit life cycle assessment of sun-to-wheels transportation pathways in the U.S.

    PubMed

    Geyer, Roland; Stoms, David; Kallaos, James

    2013-01-15

    Growth in biofuel production, which is meant to reduce greenhouse gas (GHG) emissions and fossil energy demand, is increasingly seen as a threat to food supply and natural habitats. Using photovoltaics (PV) to directly convert solar radiation into electricity for battery electric vehicles (BEVs) is an alternative to photosynthesis, which suffers from a very low energy conversion efficiency. Assessments need to be spatially explicit, since solar insolation and crop yields vary widely between locations. This paper therefore compares direct land use, life cycle GHG emissions and fossil fuel requirements of five different sun-to-wheels conversion pathways for every county in the contiguous U.S.: Ethanol from corn or switchgrass for internal combustion vehicles (ICVs), electricity from corn or switchgrass for BEVs, and PV electricity for BEVs. Even the most land-use efficient biomass-based pathway (i.e., switchgrass bioelectricity in U.S. counties with hypothetical crop yields of over 24 tonnes/ha) requires 29 times more land than the PV-based alternative in the same locations. PV BEV systems also have the lowest life cycle GHG emissions throughout the U.S. and the lowest fossil fuel inputs, except for locations with hypothetical switchgrass yields of 16 or more tonnes/ha. Including indirect land use effects further strengthens the case for PV.

  5. Inverse sequential procedures for the monitoring of time series

    NASA Technical Reports Server (NTRS)

    Radok, Uwe; Brown, Timothy J.

    1995-01-01

    When one or more new values are added to a developing time series, they change its descriptive parameters (mean, variance, trend, coherence). A 'change index' (CI) is developed as a quantitative indicator of whether the changed parameters remain compatible with the existing 'base' data. CI formulae are derived, in terms of normalized likelihood ratios, for small samples from Poisson, Gaussian, and Chi-Square distributions, and for regression coefficients measuring linear or exponential trends. A substantial parameter change creates a rapid or abrupt CI decrease which persists when the length of the base is changed. Except for a special Gaussian case, the CI has no simple explicit regions for tests of hypotheses. However, its design ensures that the series sampled need not conform strictly to the distribution form assumed for the parameter estimates. The use of the CI is illustrated with both constructed and observed data samples, processed with a Fortran code 'Sequitor'.
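
    For the Gaussian case with known variance, a normalized log-likelihood ratio of this kind has a simple closed form. The sketch below is a simplified reconstruction for illustration, not the paper's exact CI definition (the normalization and the pair of hypotheses are assumptions):

```python
import math

def gaussian_loglik(sample, mu, var):
    """Log-likelihood of a sample under N(mu, var)."""
    n = len(sample)
    return (-0.5 * n * math.log(2.0 * math.pi * var)
            - sum((x - mu) ** 2 for x in sample) / (2.0 * var))

def change_index(base, new, var):
    """Per-point log-likelihood ratio comparing 'new values share the
    base mean' against 'new values have their own mean'. Values near 0
    mean compatible; large negative values signal a parameter change."""
    mu_base = sum(base) / len(base)
    mu_new = sum(new) / len(new)
    ll_same = gaussian_loglik(new, mu_base, var)
    ll_own = gaussian_loglik(new, mu_new, var)
    return (ll_same - ll_own) / len(new)

base = [0.1, -0.2, 0.0, 0.3, -0.1]
ci_ok = change_index(base, [0.0, 0.1, -0.1], var=1.0)      # compatible
ci_shift = change_index(base, [3.0, 3.1, 2.9], var=1.0)    # mean shift
```

    Algebraically this CI reduces to -(Δμ)²/(2·var), so it is near zero for compatible data and drops sharply when the mean shifts, mirroring the abrupt CI decrease described above.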

  6. Matrix models for the black hole information paradox

    NASA Astrophysics Data System (ADS)

    Iizuka, Norihiro; Okuda, Takuya; Polchinski, Joseph

    2010-02-01

    We study various matrix models with a charge-charge interaction as toy models of the gauge dual of the AdS black hole. These models show a continuous spectrum and power-law decay of correlators at late time and infinite N, implying information loss in this limit. At finite N, the spectrum is discrete and correlators have recurrences, so there is no information loss. We study these models by a variety of techniques, such as Feynman graph expansion, loop equations, and sum over Young tableaux, and we obtain explicitly the leading 1/N² corrections for the spectrum and correlators. These techniques are suggestive of possible dual bulk descriptions. At fixed order in 1/N² the spectrum remains continuous and no recurrence occurs, so information loss persists. However, the interchange of the long-time and large-N limits is subtle and requires further study.

  7. Dynamic earthquake rupture simulation on nonplanar faults embedded in 3D geometrically complex, heterogeneous Earth models

    NASA Astrophysics Data System (ADS)

    Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.

    2014-12-01

    Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated, far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a numerical method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along rough faults; c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th order accurate in the interior and 3rd order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.
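
    The weak (penalty) enforcement of boundary conditions with summation-by-parts operators can be illustrated in one dimension. The sketch below assumes a simple 2nd-order SBP operator and the scalar advection equation, not the authors' 3D elastic solver: the inflow condition is imposed through an SAT penalty scaled by the inverse of the SBP norm matrix, and time stepping is classical explicit RK4.

```python
import numpy as np

def sbp_sat_advection(n=201, a=1.0, cfl=0.4, t_end=0.5):
    """1D advection u_t + a u_x = 0 on [0, 1] with a 2nd-order SBP operator
    and an SAT penalty enforcing the inflow condition u(0, t) = 0 weakly."""
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)

    # 2nd-order SBP first derivative: central interior, one-sided at boundaries
    D = np.zeros((n, n))
    for i in range(1, n - 1):
        D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h
    D[0, 0], D[0, 1] = -1.0 / h, 1.0 / h
    D[-1, -2], D[-1, -1] = -1.0 / h, 1.0 / h

    # diagonal SBP norm matrix H; H^{-1} scales the boundary penalty
    Hinv = np.full(n, 1.0 / h)
    Hinv[0] = Hinv[-1] = 2.0 / h

    u = np.exp(-200.0 * (x - 0.25) ** 2)  # initial pulse
    tau = a                                # penalty strength, energy stable for a > 0

    def rhs(u):
        du = -a * (D @ u)
        du[0] += tau * Hinv[0] * (0.0 - u[0])  # weak inflow condition
        return du

    dt = cfl * h / a
    t = 0.0
    while t < t_end:
        dt_ = min(dt, t_end - t)
        # classical explicit RK4 stages
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * dt_ * k1)
        k3 = rhs(u + 0.5 * dt_ * k2)
        k4 = rhs(u + dt_ * k3)
        u = u + dt_ / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt_
    return x, u
```

    Because the boundary condition enters only through the penalty term, the semi-discrete energy estimate mirrors the continuous one, which is the mechanism behind the provable stability claimed above.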

  8. Efficient adaptive pseudo-symplectic numerical integration techniques for Landau-Lifshitz dynamics

    NASA Astrophysics Data System (ADS)

    d'Aquino, M.; Capuano, F.; Coppola, G.; Serpico, C.; Mayergoyz, I. D.

    2018-05-01

    Numerical time integration schemes for Landau-Lifshitz magnetization dynamics are considered. Such dynamics preserves the magnetization amplitude and, in the absence of dissipation, also implies the conservation of the free energy. This property is generally lost when time discretization is performed for the numerical solution. In this work, explicit numerical schemes based on Runge-Kutta methods are introduced. The schemes are termed pseudo-symplectic in that they are accurate to order p, but preserve magnetization amplitude and free energy to order q > p. An effective strategy for adaptive time-stepping control is discussed for schemes of this class. Numerical tests against analytical solutions for the simulation of fast precessional dynamics are performed in order to point out the effectiveness of the proposed methods.
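
    The amplitude drift that motivates such schemes can be observed with a plain explicit integrator. The sketch below assumes a dimensionless single-spin Landau-Lifshitz equation with a constant effective field (it is not one of the pseudo-symplectic schemes of the paper) and measures how far |m| departs from unity under standard RK4.

```python
import numpy as np

def llg_rhs(m, h, alpha=0.05):
    """Landau-Lifshitz right-hand side: precession plus damping (dimensionless)."""
    mxh = np.cross(m, h)
    return -mxh - alpha * np.cross(m, mxh)

def integrate_rk4(m0, h, dt=0.01, steps=2000, alpha=0.05):
    """Plain explicit RK4. |m| is conserved only up to the scheme's accuracy;
    this residual drift is what pseudo-symplectic schemes suppress to order q > p."""
    m = np.array(m0, dtype=float)
    max_drift = 0.0
    for _ in range(steps):
        k1 = llg_rhs(m, h, alpha)
        k2 = llg_rhs(m + 0.5 * dt * k1, h, alpha)
        k3 = llg_rhs(m + 0.5 * dt * k2, h, alpha)
        k4 = llg_rhs(m + dt * k3, h, alpha)
        m = m + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        max_drift = max(max_drift, abs(np.linalg.norm(m) - 1.0))
    return m, max_drift
```

    With damping, the magnetization relaxes toward the field direction while the norm drifts only at the order of the truncation error, illustrating why amplitude (and, without damping, energy) conservation must be built into the scheme rather than assumed.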

  9. The Effects of Normative and Situational Consensus Information on Causal Attributions for Prosocial and Antisocial Behaviors.

    ERIC Educational Resources Information Center

    Mower, Judith C.

    The interactive effects of implicit normative and explicit situational consensus information were examined regarding the processes of causal attribution and evaluation. Stimulus items were single sentence descriptions of antisocial and prosocial behaviors representing the extremes of high and low normative consensus in each behavior category, as…

  10. The Moral Vacuum in Teacher Education Research and Practice

    ERIC Educational Resources Information Center

    Sanger, Matthew; Osguthorpe, Richard

    2013-01-01

    This chapter examines the gap between the widespread acknowledgment that teaching is a moral endeavor, on the one hand, and the lack of explicit, systematic teacher education research and practice to support preparing teachers for the moral aspects of teaching, on the other. After providing an initial description of the aforementioned gap, the chapter surveys…

  11. The Use of Modeling-Based Text to Improve Students' Modeling Competencies

    ERIC Educational Resources Information Center

    Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan

    2015-01-01

    This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…

  12. Mental Layouts of Concealed Objects as a Function of Bizarre Imagery and Retention Interval.

    ERIC Educational Resources Information Center

    Iaccino, James; And Others

    To determine whether concealed imagery was an effective mnemonic aid in the recall of paired objects, two studies were conducted with explicitly worded instructions to conceal targets and with variable image formation periods. In each study, 40 subjects were presented with counterbalanced verbal descriptions of Concealed, Pictorial, and Separate…

  13. The Emergence of Objects from Mathematical Practices

    ERIC Educational Resources Information Center

    Font, Vicenc; Godino, Juan D.; Gallardo, Jesus

    2013-01-01

    The nature of mathematical objects, their various types, the way in which they are formed, and how they participate in mathematical activity are all questions of interest for philosophy and mathematics education. Teaching in schools is usually based, implicitly or explicitly, on a descriptive/realist view of mathematics, an approach which is not…

  14. An Analysis of the Cape Verdean Status Quo: Outgrowths of a Critical Environment.

    ERIC Educational Resources Information Center

    Brown, Christopher

    Utilizing an anthropological approach, this paper provides an intense and unified description of the dominant geographic, economic, political, historic, and social trends prevalent in Cape Verde. It serves as a quasi-explicit and exceptionally objective emphasis of the island's background, and the outgrowths evident in the status quo. The…

  15. High order volume-preserving algorithms for relativistic charged particles in general electromagnetic fields

    NASA Astrophysics Data System (ADS)

    He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong

    2016-09-01

    We construct high order symmetric volume-preserving methods for the relativistic dynamics of a charged particle by the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservative properties over long time of simulation. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods obtain high-order accuracy and are more efficient than the methods derived from standard compositions. The results are verified by the numerical experiments. Linear stability analysis of the methods shows that the high order processed method allows larger time step size in numerical integrations.
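
    For intuition about volume-preserving splitting, the classic nonrelativistic Boris scheme is a convenient sketch: each substep (two half electric kicks and an exact magnetic rotation) is a volume-preserving map, so their composition is too. This is illustrative only and is not the high-order processed methods constructed in the paper.

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """One Boris step. The E half-kicks are shears and the B step is an exact
    rotation of v, so every substep preserves phase-space volume."""
    v_minus = v + 0.5 * q_m * dt * E
    t = 0.5 * q_m * dt * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_m * dt * E
    return x + dt * v_new, v_new
```

    In a uniform magnetic field with no electric field, the rotation step conserves the speed exactly (up to round-off), which is the kind of long-time conservative behaviour the splitting methods above generalize to high order and to relativistic, time-dependent fields.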

  16. Bessel smoothing filter for spectral-element mesh

    NASA Astrophysics Data System (ADS)

    Trinh, P. T.; Brossier, R.; Métivier, L.; Virieux, J.; Wellington, P.

    2017-06-01

    Smoothing filters are extremely important tools in seismic imaging and inversion, such as for traveltime tomography, migration and waveform inversion. For efficiency, and as they can be used a number of times during inversion, it is important that these filters can easily incorporate prior information on the geological structure of the investigated medium, through variable coherent lengths and orientation. In this study, we promote the use of the Bessel filter to achieve these purposes. Instead of considering the direct application of the filter, we demonstrate that we can rely on the equation associated with its inverse filter, which amounts to the solution of an elliptic partial differential equation. This enhances the efficiency of the filter application, and also its flexibility. We apply this strategy within a spectral-element-based elastic full waveform inversion framework. Taking advantage of this formulation, we apply the Bessel filter by solving the associated partial differential equation directly on the spectral-element mesh through the standard weak formulation. This avoids cumbersome projection operators between the spectral-element mesh and a regular Cartesian grid, or expensive explicit windowed convolution on the finite-element mesh, which is often used for applying smoothing operators. The associated linear system is solved efficiently through a parallel conjugate gradient algorithm, in which the matrix vector product is factorized and highly optimized with vectorized computation. Significant scaling behaviour is obtained when comparing this strategy with the explicit convolution method. The theoretical numerical complexity of this approach increases linearly with the coherent length, whereas a sublinear relationship is observed practically. Numerical illustrations are provided here for schematic examples, and for a more realistic elastic full waveform inversion gradient smoothing on the SEAM II benchmark model. 
These examples illustrate well the efficiency and flexibility of the approach proposed.
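
    The inverse-filter idea, applying a smoothing operator by solving the elliptic PDE associated with its inverse, can be sketched on a regular Cartesian grid (the paper works directly on the spectral-element mesh and supports variable, oriented coherent lengths). Assuming a single coherent length L and zero Dirichlet boundaries, a matrix-free conjugate gradient solve of (I - L²∇²)u = f yields the smoothed field:

```python
import numpy as np

def smooth_inverse_filter(f, L, h=1.0, tol=1e-10, maxiter=500):
    """Smooth a 2D field by solving (I - L^2 * Laplacian) u = f with a
    matrix-free conjugate gradient; the operator is SPD with zero Dirichlet
    boundaries, so plain CG applies."""
    def A(u):
        up = np.pad(u, 1)  # zero Dirichlet padding
        lap = (up[2:, 1:-1] + up[:-2, 1:-1]
               + up[1:-1, 2:] + up[1:-1, :-2] - 4.0 * u) / h**2
        return u - L**2 * lap

    u = np.zeros_like(f)
    r = f - A(u)
    p = r.copy()
    rs = np.vdot(r, r)
    for _ in range(maxiter):
        Ap = A(p)
        alpha = rs / np.vdot(p, Ap)
        u += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r)
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return u
```

    The matrix-vector product is just a stencil application, which is why the strategy vectorizes well and avoids both explicit windowed convolution and projection onto an auxiliary Cartesian grid.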

  17. Stochastic effects in a thermochemical system with Newtonian heat exchange.

    PubMed

    Nowakowski, B; Lemarchand, A

    2001-12-01

    We develop a mesoscopic description of stochastic effects in the Newtonian heat exchange between a diluted gas system and a thermostat. We explicitly study the homogeneous Semenov model involving a thermochemical reaction and neglecting consumption of reactants. The master equation includes a transition rate for the thermal transfer process, which is derived on the basis of the statistics for inelastic collisions between gas particles and walls of the thermostat. The main assumption is that the perturbation of the Maxwellian particle velocity distribution can be neglected. The transition function for the thermal process admits a continuous spectrum of temperature changes, and consequently, the master equation has a complicated integro-differential form. We perform Monte Carlo simulations based on this equation to study the stochastic effects in the Semenov system in the explosive regime. The dispersion of ignition times is calculated as a function of system size. For sufficiently small systems, the probability distribution of temperature displays transient bimodality during the ignition period. The results of the stochastic description are successfully compared with those of direct simulations of microscopic particle dynamics.

  18. Defects at grain boundaries: A coarse-grained, three-dimensional description by the amplitude expansion of the phase-field crystal model

    NASA Astrophysics Data System (ADS)

    Salvalaglio, Marco; Backofen, Rainer; Elder, K. R.; Voigt, Axel

    2018-05-01

    We address a three-dimensional, coarse-grained description of dislocation networks at grain boundaries between rotated crystals. The so-called amplitude expansion of the phase-field crystal model is exploited with the aid of finite element method calculations. This approach allows for the description of microscopic features, such as dislocations, while simultaneously being able to describe length scales that are orders of magnitude larger than the lattice spacing. Moreover, it allows for the direct description of extended defects by means of a scalar order parameter. The versatility of this framework is shown by considering both fcc and bcc lattice symmetries and different rotation axes. First, the specific case of planar, twist grain boundaries is illustrated. The details of the method are reported and the consistency of the results with literature is discussed. Then, the dislocation networks forming at the interface between a spherical, rotated crystal and the unrotated crystalline structure in which it is embedded are shown. Although the model explicitly accounts for dislocations, which lead to an anisotropic shrinkage of the rotated grain, the extension of the spherical grain boundary is found to decrease linearly over time in agreement with the classical theory of grain growth and recent atomistic investigations. It is shown that the results obtained for a system with bcc symmetry agree very well with existing results, validating the methodology. Furthermore, fully original results are shown for fcc lattice symmetry, revealing the generality of the reported observations.

  19. On the complementary relationship between marginal nitrogen and water-use efficiencies among Pinus taeda leaves grown under ambient and CO2-enriched environments

    PubMed Central

    Palmroth, Sari; Katul, Gabriel G.; Maier, Chris A.; Ward, Eric; Manzoni, Stefano; Vico, Giulia

    2013-01-01

    Background and Aims Water and nitrogen (N) are two limiting resources for biomass production of terrestrial vegetation. Water losses in transpiration (E) can be decreased by reducing leaf stomatal conductance (gs) at the expense of lowering CO2 uptake (A), resulting in increased water-use efficiency. However, with more N available, higher allocation of N to photosynthetic proteins improves A so that N-use efficiency is reduced when gs declines. Hence, a trade-off is expected between these two resource-use efficiencies. In this study it is hypothesized that when foliar concentration (N) varies on time scales much longer than gs, an explicit complementary relationship between the marginal water- and N-use efficiency emerges. Furthermore, a shift in this relationship is anticipated with increasing atmospheric CO2 concentration (ca). Methods Optimization theory is employed to quantify interactions between resource-use efficiencies under elevated ca and soil N amendments. The analyses are based on marginal water- and N-use efficiencies, λ = (∂A/∂gs)/(∂E/∂gs) and η = ∂A/∂N, respectively. The relationship between the two efficiencies and related variation in intercellular CO2 concentration (ci) were examined using A/ci curves and foliar N measured on Pinus taeda needles collected at various canopy locations at the Duke Forest Free Air CO2 Enrichment experiment (North Carolina, USA). Key Results Optimality theory allowed the definition of a novel, explicit relationship between two intrinsic leaf-scale properties where η is complementary to the square-root of λ. The data support the model predictions that elevated ca increased η and λ, and at given ca and needle age-class, the two quantities varied among needles in an approximately complementary manner. 
Conclusions The derived analytical expressions can be employed in scaling-up carbon, water and N fluxes from leaf to ecosystem, but also to derive transpiration estimates from those of η, and assist in predicting how increasing ca influences ecosystem water use. PMID:23299995

  20. Kinetics of binary nucleation of vapors in size and composition space.

    PubMed

    Fisenko, Sergey P; Wilemski, Gerald

    2004-11-01

    We reformulate the kinetic description of binary nucleation in the gas phase using two natural independent variables: the total number of molecules g and the molar composition x of the cluster. The resulting kinetic equation can be viewed as a two-dimensional Fokker-Planck equation describing the simultaneous Brownian motion of the clusters in size and composition space. Explicit expressions for the Brownian diffusion coefficients in cluster size and composition space are obtained. For characterization of binary nucleation in gases three criteria are established. These criteria establish the relative importance of the rate processes in cluster size and composition space for different gas phase conditions and types of liquid mixtures. The equilibrium distribution function of the clusters is determined in terms of the variables g and x. We obtain an approximate analytical solution for the steady-state binary nucleation rate that has the correct limit in the transition to unary nucleation. To further illustrate our description, the nonequilibrium steady-state cluster concentrations are found by numerically solving the reformulated kinetic equation. For the reformulated transient problem, the relaxation or induction time for binary nucleation was calculated using Galerkin's method. This relaxation time is affected by processes in both size and composition space, but the contributions from each process can be separated only approximately.

  1. [Analysis of cost and efficiency of a medical nursing unit using time-driven activity-based costing].

    PubMed

    Lim, Ji Young; Kim, Mi Ja; Park, Chang Gi

    2011-08-01

    Time-driven activity-based costing was applied to analyze the nursing activity cost and efficiency of a medical unit. Data were collected at a medical unit of a general hospital. Nursing activities were measured using a nursing activities inventory and classified into 6 domains using the Easley-Storfjell Instrument. Descriptive statistics were used to identify general characteristics of the unit, nursing activities and activity time, and a stochastic frontier model was adopted to estimate true activity time. The average efficiency of the medical unit based on theoretical resource capacity was 77%; based on practical resource capacity it was 96%. Accordingly, the portion of non-value-added time was estimated at 23% and 4%, respectively. Total nursing activity costs were estimated at 109,860,977 won under traditional activity-based costing and 84,427,126 won under time-driven activity-based costing, a difference of 25,433,851 won. These results indicate that time-driven activity-based costing provides useful and more realistic information about the efficiency of unit operation than traditional activity-based costing, and it is therefore recommended as a performance evaluation framework for nursing departments based on cost management.
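
    The mechanics of time-driven activity-based costing reduce to a single capacity cost rate (resource cost per minute of practical capacity) multiplied by measured activity times, with unused capacity costed separately. A minimal sketch with hypothetical numbers, not the unit's actual data:

```python
def tdabc_cost(total_resource_cost, practical_capacity_min, activity_minutes):
    """Time-driven ABC: cost each activity as rate * time, where the rate is
    resource cost per minute of practical capacity; leftover minutes become
    an explicit unused-capacity cost."""
    rate = total_resource_cost / practical_capacity_min
    used = sum(activity_minutes.values())
    costs = {a: rate * t for a, t in activity_minutes.items()}
    unused_capacity_cost = rate * (practical_capacity_min - used)
    efficiency = used / practical_capacity_min
    return costs, unused_capacity_cost, efficiency
```

    Making unused capacity visible is exactly what lets the time-driven variant report a more realistic picture of unit efficiency than traditional activity-based costing, which spreads the full resource cost over performed activities.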

  2. Multifocus microscopy with precise color multi-phase diffractive optics applied in functional neuronal imaging.

    PubMed

    Abrahamsson, Sara; Ilic, Rob; Wisniewski, Jan; Mehl, Brian; Yu, Liya; Chen, Lei; Davanco, Marcelo; Oudjedi, Laura; Fiche, Jean-Bernard; Hajj, Bassam; Jin, Xin; Pulupa, Joan; Cho, Christine; Mir, Mustafa; El Beheiry, Mohamed; Darzacq, Xavier; Nollmann, Marcelo; Dahan, Maxime; Wu, Carl; Lionnet, Timothée; Liddle, J Alexander; Bargmann, Cornelia I

    2016-03-01

    Multifocus microscopy (MFM) allows high-resolution instantaneous three-dimensional (3D) imaging and has been applied to study biological specimens ranging from single molecules inside cell nuclei to entire embryos. We here describe pattern designs and nanofabrication methods for diffractive optics that optimize the light-efficiency of the central optical component of MFM: the diffractive multifocus grating (MFG). We also implement a "precise color" MFM layout with MFGs tailored to individual fluorophores in separate optical arms. The reported advancements enable faster and brighter volumetric time-lapse imaging of biological samples. In live microscopy applications, photon budget is a critical parameter and light-efficiency must be optimized to obtain the fastest possible frame rate while minimizing photodamage. We provide comprehensive descriptions and code for designing diffractive optical devices, and a detailed methods description for nanofabrication of devices. The theoretical efficiency of the reported designs is ≈90%, and we have obtained efficiencies of >80% in MFGs of our own manufacture. We demonstrate the performance of a multi-phase MFG in 3D functional neuronal imaging in living C. elegans.

  3. Nurses' reported thinking during medication administration.

    PubMed

    Eisenhauer, Laurel A; Hurley, Ann C; Dolan, Nancy

    2007-01-01

    To document nurses' reported thinking processes during medication administration before and after implementation of point-of-care technology. Semistructured interviews and real-time tape recordings were used to document the thinking processes of 40 nurses practicing in inpatient care units in a large tertiary care teaching hospital in the northeastern US. Content analysis resulted in identification of 10 descriptive categories of nurses' thinking: communication, dose-time, checking, assessment, evaluation, teaching, side effects, work arounds, anticipating problem solving, and drug administration. Situations requiring judgment in dosage, timing, or selection of specific medications (e.g., pain management, titration of antihypertensives) provided the most explicit data about nurses' use of critical thinking and clinical judgment. A key element was nurses' constant professional vigilance to ensure that patients received their appropriate medications. Nurses' thinking processes extended beyond rules and procedures and were based on patient data and interdisciplinary professional knowledge to provide safe and effective care. Identification of thinking processes can help nurses to explain the professional expertise inherent in medication administration beyond the technical application of the "5 rights."

  4. Implicit and explicit motor sequence learning in children born very preterm.

    PubMed

    Jongbloed-Pereboom, Marjolein; Janssen, Anjo J W M; Steiner, K; Steenbergen, Bert; Nijhuis-van der Sanden, Maria W G

    2017-01-01

    Motor skills can be learned explicitly (dependent on working memory (WM)) or implicitly (relatively independent of WM). Children born very preterm (VPT) often have working memory deficits. Explicit learning may be compromised in these children. This study investigated implicit and explicit motor learning and the role of working memory in VPT children and controls. Three groups (6-9 years) participated: 20 VPT children with motor problems, 20 VPT children without motor problems, and 20 controls. A nine button sequence was learned implicitly (pressing the lighted button as quickly as possible) and explicitly (discovering the sequence via trial-and-error). Children learned implicitly and explicitly, evidenced by decreased movement duration of the sequence over time. In the explicit condition, children also reduced the number of errors over time. Controls made more errors than VPT children without motor problems. Visual WM had positive effects on both explicit and implicit performance. VPT birth and low motor proficiency did not negatively affect implicit or explicit learning. Visual WM was positively related to both implicit and explicit performance, but did not influence learning curves. These findings question the theoretical difference between implicit and explicit learning and the proposed role of visual WM therein. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Neurocognitive mechanisms underlying the experience of flow.

    PubMed

    Dietrich, Arne

    2004-12-01

    Recent theoretical and empirical work in cognitive science and neuroscience is brought into contact with the concept of the flow experience. After a brief exposition of brain function, the explicit-implicit distinction is applied to the effortless information processing that is so characteristic of the flow state. The explicit system is associated with the higher cognitive functions of the frontal lobe and medial temporal lobe structures and has evolved to increase cognitive flexibility. In contrast, the implicit system is associated with the skill-based knowledge supported primarily by the basal ganglia and has the advantage of being more efficient. From the analysis of this flexibility/efficiency trade-off emerges a thesis that identifies the flow state as a period during which a highly practiced skill that is represented in the implicit system's knowledge base is implemented without interference from the explicit system. It is proposed that a necessary prerequisite to the experience of flow is a state of transient hypofrontality that enables the temporary suppression of the analytical and meta-conscious capacities of the explicit system. Examining sensory-motor integration skills that seem to typify flow such as athletic performance, writing, and free-jazz improvisation, the new framework clarifies how this concept relates to creativity and opens new avenues of research.

  6. Underdamped scaled Brownian motion: (non-)existence of the overdamped limit in anomalous diffusion

    PubMed Central

    Bodrova, Anna S.; Chechkin, Aleksei V.; Cherstvy, Andrey G.; Safdari, Hadiseh; Sokolov, Igor M.; Metzler, Ralf

    2016-01-01

    It is quite generally assumed that the overdamped Langevin equation provides a quantitative description of the dynamics of a classical Brownian particle in the long time limit. We establish and investigate a paradigm anomalous diffusion process governed by an underdamped Langevin equation with an explicit time dependence of the system temperature and thus of the diffusion and damping coefficients. We show that for this underdamped scaled Brownian motion (UDSBM) the overdamped limit fails to describe the long time behaviour of the system and may, for a certain range of parameter values, not exist at all in practice. Thus persistent inertial effects play a non-negligible role even at very long times. From this study a general question arises on the applicability of the overdamped limit to describe the long time motion of an anomalously diffusing particle, with profound consequences for the relevance of overdamped anomalous diffusion models. We elucidate our results in view of analytical and simulation results for the anomalous diffusion of particles in free cooling granular gases. PMID:27462008
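
    The role of an explicitly time-dependent temperature can be sketched with a bare Euler-Maruyama integration of an underdamped Langevin equation. The power-law temperature protocol below is a simplified stand-in for the paper's UDSBM (which also rescales the damping coefficient), keeping the inertial term that the overdamped limit discards:

```python
import numpy as np

def udsbm_msd(alpha=0.5, gamma=1.0, T0=1.0, t0=1.0, dt=0.01, steps=4000,
              n_traj=400, seed=1):
    """Euler-Maruyama for dv = -gamma*v*dt + sqrt(2*gamma*T(t))*dW, dx = v*dt,
    with T(t) = T0*(1 + t/t0)**(alpha - 1); returns the ensemble MSD of x."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_traj)
    v = np.zeros(n_traj)
    msd = np.empty(steps)
    for k in range(steps):
        t = k * dt
        T = T0 * (1.0 + t / t0) ** (alpha - 1.0)
        noise = np.sqrt(2.0 * gamma * T * dt) * rng.standard_normal(n_traj)
        v += -gamma * v * dt + noise
        x += v * dt
        msd[k] = np.mean(x * x)
    return msd
```

    For alpha < 1 the bath cools over time, so the late-time spreading is subdiffusive; comparing such trajectories against the corresponding overdamped integration is one way to probe where the overdamped limit breaks down.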

  7. Time Series Expression Analyses Using RNA-seq: A Statistical Approach

    PubMed Central

    Oh, Sunghee; Song, Seongho; Grabowski, Gregory; Zhao, Hongyu; Noonan, James P.

    2013-01-01

    RNA-seq is becoming the de facto standard approach for transcriptome analysis with ever-reducing cost. It has considerable advantages over conventional technologies (microarrays) because it allows for direct identification and quantification of transcripts. Many time series RNA-seq datasets have been collected to study the dynamic regulations of transcripts. However, statistically rigorous and computationally efficient methods are needed to explore the time-dependent changes of gene expression in biological systems. These methods should explicitly account for the dependencies of expression patterns across time points. Here, we discuss several methods that can be applied to model timecourse RNA-seq data, including statistical evolutionary trajectory index (SETI), autoregressive time-lagged regression (AR(1)), and hidden Markov model (HMM) approaches. We use three real datasets and simulation studies to demonstrate the utility of these dynamic methods in temporal analysis. PMID:23586021
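
    Of the methods listed, the time-lagged AR(1) regression is the simplest to sketch. The version below assumes an ordinary least-squares fit of y_t on y_{t-1}, a simplification of what would be done with raw counts, which are normally transformed or modeled with an appropriate count likelihood first:

```python
import numpy as np

def fit_ar1(series):
    """Least-squares fit of the time-lagged regression
    y_t = a + b * y_{t-1} + e_t; b measures dependence across time points."""
    y = np.asarray(series, dtype=float)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    a, b = coef
    resid = y[1:] - X @ coef
    return a, b, resid
```

    A per-gene b significantly different from zero flags expression profiles whose successive time points are dependent, which is the temporal structure these dynamic methods are designed to exploit.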

  9. The Beginning and the End

    NASA Astrophysics Data System (ADS)

    Vidal, Clément

    We introduce six dimensions of philosophy. The first three deal with first-order knowledge about reality (descriptive, normative, and practical), the next two deal with second-order knowledge about knowledge (critical and dialectical), and the sixth dimension (synthetic) integrates the other five. We describe and illustrate the dimensions with Leo Apostel's worldview program. Then we argue that we all need a worldview to interact with our world and to give a meaning to our lives. Such a worldview can be more or less explicit, and we argue that for rational discourse it is essential to make it as explicit as possible. We illustrate the dynamic interrelation of the different worldview components with a cybernetic diagram.

  10. Relativistic Newtonian Dynamics under a central force

    NASA Astrophysics Data System (ADS)

    Friedman, Yaakov

    2016-10-01

    Planck's formula and General Relativity indicate that potential energy influences spacetime. Using Einstein's Equivalence Principle and an extension of his Clock Hypothesis, an explicit description of this influence is derived. We present a new relativity model by incorporating the influence of the potential energy on spacetime in Newton's dynamics for motion under a central force. This model extends the model used by Friedman and Steiner (EPL, 113 (2016) 39001) to obtain the exact precession of Mercury without curving spacetime. We also present a solution of this model for a hydrogen-like atom, which explains the reason for a probabilistic description.

  11. Efficient Multi-Stage Time Marching for Viscous Flows via Local Preconditioning

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Wood, William A.; vanLeer, Bram

    1999-01-01

    A new method has been developed to accelerate the convergence of explicit time-marching, laminar, Navier-Stokes codes through the combination of local preconditioning and multi-stage time marching optimization. Local preconditioning is a technique to modify the time-dependent equations so that all information moves or decays at nearly the same rate, thus relieving the stiffness for a system of equations. Multi-stage time marching can be optimized by modifying its coefficients to account for the presence of viscous terms, allowing larger time steps. We show it is possible to optimize the time marching scheme for a wide range of cell Reynolds numbers for the scalar advection-diffusion equation, and local preconditioning allows this optimization to be applied to the Navier-Stokes equations. Convergence acceleration of the new method is demonstrated through numerical experiments with circular advection and laminar boundary-layer flow over a flat plate.

  12. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted

    1990-01-01

    Techniques are discussed for the implementation and improvement of vectorization and concurrency in nonlinear explicit structural finite element codes. In explicit integration methods, the computation of the element internal force vector consumes the bulk of the computer time. The program can be efficiently vectorized by subdividing the elements into blocks and executing all computations in vector mode. The structuring of elements into blocks also provides a convenient way to implement concurrency by creating tasks which can be assigned to available processors for evaluation. The techniques were implemented in a 3-D nonlinear program with one-point quadrature shell elements. Concurrency and vectorization were first implemented in a single time step version of the program. Techniques were developed to minimize processor idle time and to select the optimal vector length. A comparison of run times between the program executed in scalar, serial mode and the fully vectorized code executed concurrently using eight processors shows speed-ups of over 25. Conjugate gradient methods for solving nonlinear algebraic equations are also readily adapted to a parallel environment. A new technique for improving convergence properties of conjugate gradients in nonlinear problems is developed in conjunction with other techniques such as diagonal scaling. A significant reduction in the number of iterations required for convergence is shown for a statically loaded rigid bar suspended by three equally spaced springs.
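
    The block-vectorization idea can be sketched for the simplest possible case, 1-D two-node bar elements (the block size and the linear element law are illustrative; the program described above uses 3-D shell elements with one-point quadrature, and each block could equally be dispatched to a separate processor):

```python
import numpy as np

def internal_force_blocked(coords, conn, disp, k=1.0, block=512):
    """Assemble the internal force vector for 1-D two-node bar elements,
    processing elements in contiguous blocks so that each block is
    evaluated in vector mode."""
    f = np.zeros_like(disp)
    n_elem = conn.shape[0]
    for start in range(0, n_elem, block):
        c = conn[start:start + block]          # (b, 2) node indices
        length = coords[c[:, 1]] - coords[c[:, 0]]
        strain = (disp[c[:, 1]] - disp[c[:, 0]]) / length
        axial = k * strain                     # linear-elastic element force
        # scatter-add equal and opposite nodal forces
        np.add.at(f, c[:, 0], axial)
        np.add.at(f, c[:, 1], -axial)
    return f

# usage: a small chain of elements under uniform stretch
n_nodes = 11
coords = np.linspace(0.0, 1.0, n_nodes)
conn = np.column_stack([np.arange(n_nodes - 1), np.arange(1, n_nodes)])
disp = 0.01 * coords
f = internal_force_blocked(coords, conn, disp)
```

    Interior nodal forces cancel under uniform stretch, leaving equal and opposite forces only at the two ends.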

  13. Efficient scheme for parametric fitting of data in arbitrary dimensions.

    PubMed

    Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching

    2008-07-01

    We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact, and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for fitting large amounts of data. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.
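
    The idea of expanding data in Legendre polynomials can be illustrated with NumPy's built-in Legendre least-squares routines (the data and degree below are made up; the paper derives its own explicit expressions rather than calling a library fit):

```python
import numpy as np
from numpy.polynomial import legendre as L

# noisy samples of a smooth trend (illustrative, not the paper's data)
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
y_true = 0.5 - x + 2.0 * x**3
y = y_true + 0.01 * rng.normal(size=x.size)

# expand the data in Legendre polynomials up to degree 3
coef = L.legfit(x, y, deg=3)
y_fit = L.legval(x, coef)
rms = np.sqrt(np.mean((y_fit - y_true) ** 2))
```

    Since x**3 = (2 P3 + 3 P1) / 5, the exact expansion of the trend is (0.5, 0.2, 0, 0.8) in the Legendre basis, which the fit recovers up to the noise level.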

  14. An L-stable method for solving stiff hydrodynamics

    NASA Astrophysics Data System (ADS)

    Li, Shengtai

    2017-07-01

    We develop a new method for simulating the coupled dynamics of gas and multi-species dust grains. The dust grains are treated as pressureless fluids whose coupling with the gas occurs through stiff drag terms. If an explicit method is used, the numerical time step is limited by the stopping time of the dust particles, which can become extremely small for small grains. A previous semi-implicit method [1] applies the second-order trapezoidal rule (TR) to the stiff drag terms and works only for moderately small dust grains, because the TR method is A-stable but not L-stable. In this work, we use the TR-BDF2 method [2] for the stiff terms in the coupled hydrodynamic equations. The L-stability of TR-BDF2 proves essential when treating a number of dust species. Combining TR-BDF2 with an explicit discretization of the remaining hydrodynamic terms solves a wide variety of stiff hydrodynamics equations accurately and efficiently. We have implemented our method in our LA-COMPASS (Los Alamos Computational Astrophysics Suite) package and applied the code to simulate dusty proto-planetary disks, obtaining very good agreement with astronomical observations.
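
    A minimal sketch of one TR-BDF2 step for a single stiff linear drag law dv/dt = (u - v)/t_stop (the implicit stages admit closed-form solves because the drag is linear in v; the actual LA-COMPASS implementation couples many dust species to the full hydrodynamics, and all values below are illustrative):

```python
import numpy as np

def tr_bdf2_drag(v, u, dt, t_stop):
    """One TR-BDF2 step for dv/dt = (u - v)/t_stop.

    TR-BDF2 is L-stable, so the update stays well behaved even when
    dt >> t_stop, where plain TR would oscillate."""
    gamma = 2.0 - np.sqrt(2.0)
    lam = 1.0 / t_stop

    # trapezoidal stage over gamma*dt (implicit, closed-form for linear drag)
    h = gamma * dt / 2.0
    v_star = (v + h * lam * (u - v) + h * lam * u) / (1.0 + h * lam)

    # BDF2 stage over the remaining (1 - gamma)*dt
    c1 = 1.0 / (gamma * (2.0 - gamma))
    c2 = (1.0 - gamma) ** 2 / (gamma * (2.0 - gamma))
    c3 = (1.0 - gamma) / (2.0 - gamma)
    return (c1 * v_star - c2 * v + c3 * dt * lam * u) / (1.0 + c3 * dt * lam)

# dust initially at rest relaxing toward gas velocity u = 1,
# with a time step a million times larger than the stopping time
v, u, t_stop = 0.0, 1.0, 1e-6
v = tr_bdf2_drag(v, u, dt=1.0, t_stop=t_stop)
```

    With dt/t_stop = 1e6, the L-stable step lands essentially on the terminal velocity in a single update instead of oscillating.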

  15. Concurrent processing simulation of the space station

    NASA Technical Reports Server (NTRS)

    Gluck, R.; Hale, A. L.; Sunkel, John W.

    1989-01-01

    The development of a new capability for the time-domain simulation of multibody dynamic systems and its application to the study of large-angle rotational maneuvers of the Space Station is described. The effort was divided into three sequential tasks, each requiring significant advancement of the state of the art: (1) the development, via symbol manipulation, of an explicit mathematical model of a flexible, multibody dynamic system; (2) the development of a methodology for balancing the computational load of an explicit mathematical model for concurrent processing; and (3) the implementation and successful simulation of the above on a prototype Custom Architectured Parallel Processing System (CAPPS) containing eight processors. The throughput rate achieved by the CAPPS, operating at only 70 percent efficiency, was 3.9 times greater than that obtained sequentially by the IBM 3090 supercomputer simulating the same problem. More significantly, analysis of the results leads to the conclusion that the relative cost effectiveness of concurrent versus sequential digital computation will grow substantially as the computational load increases. This is a welcome development in an era when very complex and cumbersome mathematical models of large space vehicles must substitute for full-scale testing, which has become impractical.

  16. A functional-dynamic reflection on participatory processes in modeling projects.

    PubMed

    Seidl, Roman

    2015-12-01

    The participation of nonscientists in modeling projects/studies is increasingly employed to fulfill different functions. However, whether and how explicitly these functions and the dynamics of a participatory process are reflected upon by modeling projects in particular has not been well investigated. In this review study, I explore participatory modeling projects from a functional-dynamic process perspective. The main differences among projects relate to the functions of participation (most often, more than one per project can be identified) and to the degree of explicit reflection (i.e., awareness and anticipation) on the dynamic process perspective. Moreover, two main approaches are revealed: participatory modeling, covering diverse approaches, and companion modeling. It becomes apparent that the degree of reflection on the participatory process itself is not always explicit and clearly visible in the descriptions of the modeling projects. Thus, the use of common protocols or templates is discussed to facilitate project planning, as well as the publication of project results. A generic template may help, not in providing details of a project or model development, but in explicitly reflecting on the participatory process. It can serve to systematize the particular project's approach to stakeholder collaboration, and thus support quality management.

  17. Action recognition using mined hierarchical compound features.

    PubMed

    Gilbert, Andrew; Illingworth, John; Bowden, Richard

    2011-05-01

    The field of action recognition has seen a large increase in activity in recent years. Much of the progress has come from incorporating ideas from single-frame object recognition and adapting them for temporal-based action recognition. Inspired by the success of interest points in the 2D spatial domain, their 3D (space-time) counterparts typically form the basic components used to describe actions, and in action recognition the features used are often engineered to fire sparsely. This keeps the problem tractable, but it can sacrifice recognition accuracy, as it cannot be assumed that the features optimal for class discrimination are obtained from this approach. In contrast, we propose to initially use an overcomplete set of simple 2D corners in both space and time. These are grouped spatially and temporally using a hierarchical process with an increasing search area. At each stage of the hierarchy, the most distinctive and descriptive features are learned efficiently through data mining, which allows large amounts of data to be searched for frequently reoccurring patterns of features. At each level of the hierarchy, the mined compound features become more complex, discriminative, and sparse. This results in fast, accurate recognition with real-time performance on high-resolution video. Because the compound features are constructed and selected based upon their ability to discriminate, their speed and accuracy increase at each level of the hierarchy. The approach is tested on four state-of-the-art data sets: the popular KTH data set, to provide a comparison with other state-of-the-art approaches; the Multi-KTH data set, to illustrate performance at simultaneous multiaction classification, even though no explicit localization information is provided during training; and the recent Hollywood and Hollywood2 data sets, which provide challenging complex actions taken from commercial movie sequences.
For all four data sets, the proposed hierarchical approach outperforms all other methods reported thus far in the literature and can achieve real-time operation.

  18. Numerical simulation of the kinetic effects in the solar wind

    NASA Astrophysics Data System (ADS)

    Sokolov, I.; Toth, G.; Gombosi, T. I.

    2017-12-01

    Global numerical simulations of the solar wind are usually based on the ideal or resistive magnetohydrodynamics (MHD) equations. Within the framework of MHD, the electric field is assumed to vanish in the co-moving frame of reference (ideal MHD) or to obey a simple and non-physical scalar Ohm's law (resistive MHD). Maxwellian distribution functions are assumed, although the electron and ion temperatures may differ. Non-dispersive MHD waves can be present in this numerical model. The averaged equations for MHD turbulence may be included, as well as the energy and momentum exchange between the turbulent and regular motion. With the use of an explicit numerical scheme, the time step is controlled by the MHD wave propagation time across the numerical cell (the CFL condition). A more refined approach includes the Hall effect via the generalized Ohm's law. The Lorentz force acting on the light electrons is assumed to vanish, which gives an expression for the local electric field in terms of the total electric current, the ion current, the electron pressure gradient, and the magnetic field. The waves (whistlers, ion-cyclotron waves, etc.) acquire dispersion, and the short-wavelength perturbations propagate with elevated speed, thus tightening the CFL condition. If the grid size is small enough to resolve the ion skin-depth scale, then the time step is much shorter than the ion gyration period. The next natural step is to use a hybrid code to resolve the ion kinetic effects. The hybrid numerical scheme employs the same generalized Ohm's law as Hall MHD and suffers from the same constraint on the time step while solving the evolution of the electromagnetic field. The important distinction, however, is that by solving the particle motion for ions we can achieve a more detailed description of the kinetic effects without a significant loss of computational efficiency, because the time step is sufficient to resolve the particle gyration.
We present the first numerical results from the coupled BATS-R-US+ALTOR code as applied to kinetic simulations of the solar wind.
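
    The time-step penalty of adding the Hall term can be sketched as a simple scaling estimate: once the grid resolves the ion skin depth, whistler dispersion makes the fastest resolved wave speed grow like 1/dx, so the CFL-limited step shrinks like dx**2 (the functional form and all values below are an illustrative scaling, not output of any of the codes named above):

```python
def explicit_time_steps(dx, v_fast=1.0, d_i=0.1, cfl=0.5):
    """Compare the CFL-limited time step of ideal MHD with a Hall-MHD
    estimate in which the fastest resolved whistler speed scales like
    v_fast * d_i / dx once dx < d_i (d_i is the ion skin depth;
    prefactors of order pi are omitted)."""
    dt_mhd = cfl * dx / v_fast
    v_whistler = v_fast * max(1.0, d_i / dx)  # elevated short-wave speed
    dt_hall = cfl * dx / v_whistler
    return dt_mhd, dt_hall

dt_mhd_coarse, dt_hall_coarse = explicit_time_steps(dx=1.0)   # dx >> d_i
dt_mhd_fine, dt_hall_fine = explicit_time_steps(dx=0.01)      # dx << d_i
```

    On coarse grids the two limits coincide; on grids resolving d_i the Hall step is smaller by the factor dx/d_i, which is the constraint a hybrid code inherits.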

  19. The free energy landscape for beta hairpin folding in explicit water.

    PubMed

    Zhou, R; Berne, B J; Germain, R

    2001-12-18

    The folding free energy landscape of the C-terminal beta hairpin of protein G has been explored in this study with explicit solvent under periodic boundary conditions and the OPLSAA force field. A highly parallel replica exchange method that combines molecular dynamics trajectories with a temperature-exchange Monte Carlo process is used for sampling, with the help of a new efficient algorithm, P3ME/RESPA. The simulation results show that the hydrophobic core and the beta-strand hydrogen bonds form at roughly the same time. The free energy landscape with respect to various reaction coordinates is found to be rugged at low temperatures and becomes a smooth, funnel-like landscape at about 360 K. In contrast to some very recent studies, no significant helical content has been found in our simulations at any temperature studied. The beta-hairpin population and hydrogen-bond probability are in reasonable agreement with experiment at biological temperature, but both decay with temperature more slowly than in experiment.
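
    The temperature-exchange Monte Carlo move at the heart of replica exchange accepts a swap between two replicas with the standard Metropolis criterion; a minimal sketch (the units, energies, and temperatures below are illustrative placeholders, not values from this study):

```python
import math

def exchange_probability(beta_i, beta_j, E_i, E_j):
    """Metropolis acceptance probability for swapping the configurations
    of two replicas held at inverse temperatures beta_i and beta_j."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    return min(1.0, math.exp(delta))

kB = 0.0019872  # Boltzmann constant in kcal/(mol K)
beta_300 = 1.0 / (kB * 300.0)  # biological temperature
beta_360 = 1.0 / (kB * 360.0)  # near the landscape-smoothing temperature

# the hotter replica found a lower-energy state: the swap is always accepted
p_accept = exchange_probability(beta_300, beta_360, E_i=-120.0, E_j=-125.0)
```

    Swaps that carry low-energy configurations toward low temperature are accepted with probability one, which is how the hot replicas help the cold ones escape the rugged low-temperature landscape.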

  20. The free energy landscape for hairpin folding in explicit water

    NASA Astrophysics Data System (ADS)

    Zhou, Ruhong; Berne, Bruce J.; Germain, Robert

    2001-12-01

    The folding free energy landscape of the C-terminal hairpin of protein G has been explored in this study with explicit solvent under periodic boundary conditions and the OPLSAA force field. A highly parallel replica exchange method that combines molecular dynamics trajectories with a temperature-exchange Monte Carlo process is used for sampling, with the help of a new efficient algorithm, P3ME/RESPA. The simulation results show that the hydrophobic core and the strand hydrogen bonds form at roughly the same time. The free energy landscape with respect to various reaction coordinates is found to be rugged at low temperatures and becomes a smooth, funnel-like landscape at about 360 K. In contrast to some very recent studies, no significant helical content has been found in our simulations at any temperature studied. The β-hairpin population and hydrogen-bond probability are in reasonable agreement with experiment at biological temperature, but both decay with temperature more slowly than in experiment.

  1. The free energy landscape for β hairpin folding in explicit water

    PubMed Central

    Zhou, Ruhong; Berne, Bruce J.; Germain, Robert

    2001-01-01

    The folding free energy landscape of the C-terminal β hairpin of protein G has been explored in this study with explicit solvent under periodic boundary conditions and the OPLSAA force field. A highly parallel replica exchange method that combines molecular dynamics trajectories with a temperature-exchange Monte Carlo process is used for sampling, with the help of a new efficient algorithm, P3ME/RESPA. The simulation results show that the hydrophobic core and the β-strand hydrogen bonds form at roughly the same time. The free energy landscape with respect to various reaction coordinates is found to be rugged at low temperatures and becomes a smooth, funnel-like landscape at about 360 K. In contrast to some very recent studies, no significant helical content has been found in our simulations at any temperature studied. The β-hairpin population and hydrogen-bond probability are in reasonable agreement with experiment at biological temperature, but both decay with temperature more slowly than in experiment. PMID:11752441

  2. The role of assessment packages for diagnostic consultations: A conversation analytic perspective.

    PubMed

    Rossen, Camilla B; Buus, Niels; Stenager, Egon; Stenager, Elsebeth

    2015-05-01

    This article reports a conversation analysis of assessment package consultations. Healthcare delivery packages belong to a highly structured mode of healthcare delivery, in which specific courses of healthcare interventions related to assessment and treatment are predefined, both as to timing and content. Assessment packages are widely used in an increasing number of medical specialities; however, there is a lack of knowledge about how packaged assessment influences the interaction between doctor and patient. In this study, we investigate the final consultation in assessment packages, which is when the final clarification of the patient's symptoms takes place. The primary data of the study were eight audio recordings of consultations, and the secondary data were ethnographic field descriptions. In most consultations, packaged assessment was a resource as it provided fast and efficient clarification. In most cases, clarification was treated as good news since it either confirmed the absence of a serious disease or resulted in a diagnosis leading to relevant treatment offers. However, in some cases, clarification was not perceived as good news. This was the case in consultations with patients whose goal was to leave the consultation with clarification in the form of a definite diagnosis, but who were not offered such clarification. These patients negotiated the outcome of the consultation by applying implicit and explicit pressure, which induced the doctors to disregard the boundaries of the package and offer the patient more tests. The study highlights some of the problems related to introducing narrow, specialized package assessment. © The Author(s) 2014.

  3. The Real World of the Ivory Tower: Linking Classroom and Practice via Pedagogical Modeling

    ERIC Educational Resources Information Center

    Campbell, Carolyn; Scott-Lincourt, Rose; Brennan, Kimberley

    2008-01-01

    The authors explore the pedagogical principles of congruency, modeling, and transfer of learning through the description and analysis of a course entitled "The Theory and Practice of Anti-oppressive Social Work." Initially reviewing the literature related to the above concepts, they describe an instructor's attempt to explicitly model, via a range…

  4. Hatching Plans: Pedagogy and Discourse within an El Sistema-Inspired Music Program

    ERIC Educational Resources Information Center

    Dobson, Nicolas

    2016-01-01

    In this article, I draw on my experience as an instrumental tutor with a music program inspired by and explicitly linked to El Sistema, to explore new perspectives on Sistema-based pedagogy and management. Detailed ethnographic description of an orchestral session provides a first-hand account of the program's pedagogy, which I then contextualize…

  5. Inferring heuristic classification hierarchies from natural language input

    NASA Technical Reports Server (NTRS)

    Hull, Richard; Gomez, Fernando

    1993-01-01

    A methodology for inferring hierarchies representing heuristic knowledge about the check out, control, and monitoring sub-system (CCMS) of the space shuttle launch processing system from natural language input is explained. Our method identifies failures explicitly and implicitly described in natural language by domain experts and uses those descriptions to recommend classifications for inclusion in the experts' heuristic hierarchies.

  6. Technical Meeting Avionics Section Air Armament Division Held at Nellis Air Force Base, Nevada on December 1, 2 and 3 1982. Declassified Extended Abstracts.

    DTIC Science & Technology

    1982-01-01

    the FAETS Operational Scenario, followed by the FAETS Description and Operation. FAETS Specifications will be given, as well as the definition of the...aircraft, expanded basing, new or improved avionics and new or improved armament. Furthermore, explicit quantitative interdependence between

  7. Modeling fuels and fire effects in 3D: Model description and applications

    Treesearch

    Francois Pimont; Russell Parsons; Eric Rigolot; Francois de Coligny; Jean-Luc Dupuy; Philippe Dreyfus; Rodman R. Linn

    2016-01-01

    Scientists and managers critically need ways to assess how fuel treatments alter fire behavior, yet few tools currently exist for this purpose. We present a spatially explicit fuel-modeling system, FuelManager, which models fuels, vegetation growth, fire behavior (using a physics-based model, FIRETEC), and fire effects. FuelManager's flexible approach facilitates...

  8. A new method for recognizing quadric surfaces from range data and its application to telerobotics and automation

    NASA Technical Reports Server (NTRS)

    Alvertos, Nicolas; Dcunha, Ivan

    1993-01-01

    The problem of recognizing and positioning objects in three-dimensional space is important for robotics and navigation applications. In recent years, digital range data, also referred to as range images or depth maps, have become available for the analysis of three-dimensional objects owing to the development of several active range-finding techniques. The distinct advantage of range images is the explicitness of the surface information available. Many industrial and navigational robotics tasks will be more easily accomplished if such explicit information can be efficiently interpreted. In this research, a new technique based on analytic geometry for the recognition and description of three-dimensional quadric surfaces from range images is presented. Beginning with the explicit representation of quadrics, a set of ten coefficients is determined for various three-dimensional surfaces. For each quadric surface, a unique set of two-dimensional curves, which serves as a feature set, is obtained from the various angles at which the object is intersected with a plane. Based on a discriminant method, each of the curves is classified as a parabola, circle, ellipse, hyperbola, or line. Each quadric surface is shown to be uniquely characterized by its set of two-dimensional curves, allowing discrimination from the others. Before the recognition process can be implemented, the range data have to undergo a set of pre-processing operations, making them more amenable to classification algorithms. One such pre-processing step is to study the effect of median filtering on raw range images. Utilizing a variety of surface curvature techniques, reliable sets of image data that approximate the shape of a quadric surface are determined. Since the initial orientation of the surfaces is unknown, a new technique is developed wherein all the rotation parameters are determined and subsequently eliminated.
This approach enables us to position the quadric surfaces in a desired coordinate system. Experiments were conducted on raw range images of spheres, cylinders, and cones. Experiments were also performed on simulated data for surfaces such as hyperboloids of one and two sheets, elliptical and hyperbolic paraboloids, elliptical and hyperbolic cylinders, ellipsoids, and quadric cones. Both the real and simulated data yielded excellent results. Our approach is found to be more accurate and less computationally expensive than traditional approaches, such as the three-dimensional discriminant approach, which involves evaluating the rank of a matrix. Finally, we have proposed one other new approach, which involves formulating a mapping between the explicit and implicit forms of representing quadric surfaces. This approach, when fully realized, will yield a three-dimensional discriminant that recognizes quadric surfaces based upon their component surface patches. It is faster than prior approaches and at the same time invariant to the pose and orientation of the surfaces in three-dimensional space.
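
    The discriminant-based classification of the two-dimensional intersection curves can be sketched as follows (degenerate and line cases, which the full method also handles, are omitted from this sketch):

```python
def classify_conic(A, B, C, tol=1e-12):
    """Classify the planar curve A x^2 + B xy + C y^2 + D x + E y + F = 0
    by the sign of its discriminant B^2 - 4AC; the linear coefficients do
    not affect the curve type in the non-degenerate case."""
    disc = B * B - 4.0 * A * C
    if disc < -tol:
        return "circle" if abs(A - C) < tol and abs(B) < tol else "ellipse"
    if disc > tol:
        return "hyperbola"
    return "parabola"

# slicing a quadric with planes of varying tilt changes the curve type,
# e.g. an ellipse-type cross section:
kind = classify_conic(1.0, 0.0, 2.0)
```

    Collecting the curve types over several cutting angles yields the per-surface signature the recognition step compares against.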

  9. A new method for recognizing quadric surfaces from range data and its application to telerobotics and automation

    NASA Astrophysics Data System (ADS)

    Alvertos, Nicolas; Dcunha, Ivan

    1993-03-01

    The problem of recognizing and positioning objects in three-dimensional space is important for robotics and navigation applications. In recent years, digital range data, also referred to as range images or depth maps, have become available for the analysis of three-dimensional objects owing to the development of several active range-finding techniques. The distinct advantage of range images is the explicitness of the surface information available. Many industrial and navigational robotics tasks will be more easily accomplished if such explicit information can be efficiently interpreted. In this research, a new technique based on analytic geometry for the recognition and description of three-dimensional quadric surfaces from range images is presented. Beginning with the explicit representation of quadrics, a set of ten coefficients is determined for various three-dimensional surfaces. For each quadric surface, a unique set of two-dimensional curves, which serves as a feature set, is obtained from the various angles at which the object is intersected with a plane. Based on a discriminant method, each of the curves is classified as a parabola, circle, ellipse, hyperbola, or line. Each quadric surface is shown to be uniquely characterized by its set of two-dimensional curves, allowing discrimination from the others. Before the recognition process can be implemented, the range data have to undergo a set of pre-processing operations, making them more amenable to classification algorithms. One such pre-processing step is to study the effect of median filtering on raw range images. Utilizing a variety of surface curvature techniques, reliable sets of image data that approximate the shape of a quadric surface are determined. Since the initial orientation of the surfaces is unknown, a new technique is developed wherein all the rotation parameters are determined and subsequently eliminated.
This approach enables us to position the quadric surfaces in a desired coordinate system. Experiments were conducted on raw range images of spheres, cylinders, and cones. Experiments were also performed on simulated data for surfaces such as hyperboloids of one and two sheets, elliptical and hyperbolic paraboloids, elliptical and hyperbolic cylinders, ellipsoids, and quadric cones. Both the real and simulated data yielded excellent results. Our approach is found to be more accurate and less computationally expensive than traditional approaches, such as the three-dimensional discriminant approach, which involves evaluating the rank of a matrix.

  10. TTLEM - an implicit-explicit (IMEX) scheme for modelling landscape evolution in MATLAB

    NASA Astrophysics Data System (ADS)

    Campforts, Benjamin; Schwanghart, Wolfgang

    2016-04-01

    Landscape evolution models (LEMs) are essential for unraveling interdependent earth surface processes. They have proven very useful for bridging several temporal and spatial scales and have been successfully used to integrate existing empirical datasets. There is a growing consensus that landscapes evolve at least as much in the horizontal as in the vertical direction, calling for an efficient implementation of dynamic drainage networks. Here we present a spatially explicit LEM based on the object-oriented function library TopoToolbox 2 (Schwanghart and Scherler, 2014). As in other LEMs, rivers are considered the main drivers of simulated landscape evolution, since they transmit pulses of tectonic perturbation and set the base level of the surrounding hillslopes. Highly performant graph algorithms facilitate efficient updates of the flow directions, to account for planform changes in the river network, and the calculation of flow-related terrain attributes. We implement the model using an implicit-explicit (IMEX) scheme, i.e., different integrators are used for different terms in the diffusion-incision equation. While linear diffusion is solved using an implicit scheme, we calculate incision explicitly. Contrary to previously published LEMs, however, river incision is solved using a total-volume method that is total variation diminishing, in order to prevent numerical diffusion when solving the stream power law (Campforts and Govers, 2015). We show that the use of this updated numerical scheme alters both landscape topography and catchment-wide erosion rates on geological time scales. Finally, the availability of a graphical user interface facilitates user interaction, making the tool very useful for both research and didactic purposes. References: Campforts, B., Govers, G., 2015. Keeping the edge: A numerical method that avoids knickpoint smearing when solving the stream power law. J. Geophys. Res. Earth Surf. 120, 1189-1205. doi:10.1002/2014JF003376. Schwanghart, W., Scherler, D., 2014. TopoToolbox 2 - MATLAB-based software for topographic analysis and modeling in Earth surface sciences. Earth Surf. Dyn. 2, 1-7. doi:10.5194/esurf-2-1-2014.
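
    The IMEX split can be sketched in one spatial dimension: linear hillslope diffusion is advanced implicitly while stream-power incision is advanced explicitly (backward Euler, the drainage-area proxy, the upwind slope, and all parameter values below are illustrative simplifications, not TTLEM's actual TVD solver):

```python
import numpy as np

def imex_step(z, dt, dx, kappa=0.01, K=1e-4, m=0.5, n=1.0, uplift=1e-3):
    """One IMEX update of a 1-D elevation profile z with base level at z[0]:
    stream-power incision E = K * A**m * S**n is explicit, linear hillslope
    diffusion is implicit (backward Euler)."""
    N = z.size
    area = np.arange(N, 0, -1) * dx  # crude headwater-to-outlet area proxy

    # explicit part: uplift minus incision, slope taken toward the outlet
    slope = np.zeros(N)
    slope[1:] = np.maximum(0.0, (z[1:] - z[:-1]) / dx)
    z = z + dt * (uplift - K * area**m * slope**n)
    z[0] = 0.0  # fixed base level

    # implicit part: solve (I - dt*kappa*Laplacian) z_new = z
    r = dt * kappa / dx**2
    A_mat = np.diag(np.full(N, 1.0 + 2.0 * r))
    A_mat += np.diag(np.full(N - 1, -r), 1) + np.diag(np.full(N - 1, -r), -1)
    A_mat[0, :] = 0.0
    A_mat[0, 0] = 1.0            # keep the base level fixed
    A_mat[-1, -1] = 1.0 + r      # no-flux upper boundary
    return np.linalg.solve(A_mat, z)

z = np.zeros(50)
for _ in range(20):
    z = imex_step(z, dt=1000.0, dx=100.0)
```

    The implicit diffusion solve places no stability restriction on dt, so the time step is limited only by the explicit incision term.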

  11. Time dependent density functional calculation of plasmon response in clusters

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Zhang, Feng-Shou; Eric, Suraud

    2003-02-01

    We have introduced a theoretical scheme for the efficient description of the optical response of a cluster based on the time-dependent density functional theory. The practical implementation is done by means of the fully fledged time-dependent local density approximation scheme, which is solved directly in the time domain without any linearization. As an example we consider the simple Na2 cluster and compute its surface plasmon photoabsorption cross section, which is in good agreement with the experiments.
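
    The last step of such a real-time calculation, recovering the absorption profile from the time-dependent dipole signal after a weak instantaneous kick, can be sketched as follows (prefactors are omitted and the synthetic damped-oscillator signal is illustrative, standing in for the dipole moment a real-time TDLDA propagation would produce):

```python
import numpy as np

def absorption_spectrum(dipole, dt, kick=1e-3):
    """Absorption strength from a real-time dipole signal after a weak
    kick of strength `kick`: S(omega) ~ omega * Im d(omega) / kick."""
    n = dipole.size
    omega = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)
    d_w = np.fft.rfft(dipole) * dt  # discrete approximation of the transform
    return omega, omega * d_w.imag / kick

# a single damped oscillation at omega0 produces one absorption peak
t = np.arange(0.0, 200.0, 0.05)
omega0 = 2.0
signal = 1e-3 * np.sin(omega0 * t) * np.exp(-t / 50.0)
w, S = absorption_spectrum(signal, dt=0.05)
```

    Because the propagation is done directly in the time domain, a single run yields the response at all frequencies at once, which is the practical appeal of the real-time approach.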

  12. Slave finite element for non-linear analysis of engine structures. Volume 2: Programmer's manual and user's manual

    NASA Technical Reports Server (NTRS)

    Witkop, D. L.; Dale, B. J.; Gellin, S.

    1991-01-01

    The programming aspects of SFENES are described in the User's Manual. The information presented is provided for the installation programmer and is sufficient to fully describe the general program logic and required peripheral storage. All element-generated data are stored externally to reduce required memory allocation. A separate section is devoted to the description of these files, thereby permitting the optimization of input/output (I/O) time through efficient buffer descriptions. Individual subroutine descriptions are presented along with the complete Fortran source listings. A short description of the major control, computation, and I/O phases is included to aid in obtaining an overall familiarity with the program's components. Finally, a discussion of the suggested overlay structure, which allows the program to execute with a reasonable amount of memory allocation, is presented.

  13. Constant pH Molecular Dynamics in Explicit Solvent with Enveloping Distribution Sampling and Hamiltonian Exchange

    PubMed Central

    2015-01-01

    We present a new computational approach for constant pH simulations in explicit solvent based on the combination of the enveloping distribution sampling (EDS) and Hamiltonian replica exchange (HREX) methods. Unlike constant pH methods based on variable and continuous charge models, our method is based on discrete protonation states. EDS generates a hybrid Hamiltonian of different protonation states. A smoothness parameter s is used to control the heights of energy barriers of the hybrid-state energy landscape. A small s value facilitates state transitions by lowering energy barriers. Replica exchange between EDS potentials with different s values allows us to readily obtain a thermodynamically accurate ensemble of multiple protonation states with frequent state transitions. The analysis is performed with an ensemble obtained from an EDS Hamiltonian without smoothing, s = ∞, which strictly follows the minimum energy surface of the end states. The accuracy and efficiency of this method is tested on aspartic acid, lysine, and glutamic acid, which have two protonation states, a histidine with three states, a four-residue peptide with four states, and snake cardiotoxin with eight states. The pKa values estimated with the EDS-HREX method agree well with the experimental pKa values. The mean absolute errors of small benchmark systems range from 0.03 to 0.17 pKa units, and those of three titratable groups of snake cardiotoxin range from 0.2 to 1.6 pKa units. This study demonstrates that EDS-HREX is a potent theoretical framework, which gives the correct description of multiple protonation states and good calculated pKa values. PMID:25061443
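
    The hybrid EDS Hamiltonian that envelopes the discrete protonation states can be sketched as a smoothed minimum over the end-state energies (a generic sketch of the EDS formula; the energies, offsets, and units below are placeholders, not values from this study):

```python
import numpy as np

def eds_energy(end_state_energies, beta, s, offsets=None):
    """Enveloping-distribution-sampling reference energy
        H_EDS = -1/(beta*s) * ln( sum_i exp(-beta*s*(H_i - E_i)) ),
    evaluated with a log-sum-exp shift for numerical stability.  A small
    smoothness parameter s lowers the barriers between end states; as
    s -> infinity, H_EDS follows the minimum-energy end state."""
    E = np.asarray(end_state_energies, dtype=float)
    if offsets is not None:
        E = E - np.asarray(offsets, dtype=float)
    x = -beta * s * E
    x_max = x.max()
    return -(x_max + np.log(np.exp(x - x_max).sum())) / (beta * s)

# with a large s the envelope tracks the lower of two protonation states
h = eds_energy([0.0, 5.0], beta=1.0, s=100.0)
```

    Running replicas at several s values and exchanging between them is what lets the simulation cross between protonation states on the smoothed surfaces while the s = infinity replica supplies the thermodynamically correct ensemble.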

  14. Development of a Meso-Scale Material Model for Ballistic Fabric and Its Use in Flexible-Armor Protection Systems

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Bell, W. C.; Arakere, G.; He, T.; Xie, X.; Cheeseman, B. A.

    2010-02-01

    A meso-scale ballistic material model for a prototypical plain-woven single-ply flexible armor is developed and implemented in a material user subroutine for use in commercial explicit finite element programs. The main intent of the model is to attain computational efficiency when calculating the mechanical response of the multi-ply fabric-based flexible-armor material during its impact with various projectiles without significantly sacrificing the key physical aspects of the fabric microstructure, architecture, and behavior. To validate the new model, a comparative finite element method analysis is carried out in which: (a) the plain-woven single-ply fabric is modeled using conventional shell elements and weaving is done in an explicit manner by snaking the yarns through the fabric and (b) the fabric is treated as a planar continuum surface composed of conventional shell elements to which the new meso-scale unit-cell based material model is assigned. The results obtained show that the material model provides a reasonably good description of the fabric deformation and fracture behavior under different combinations of fixed and free boundary conditions. Finally, the model is used in an investigation of the ability of a multi-ply soft-body armor vest to protect the wearer from impact by a 9-mm round nose projectile. The effects of inter-ply friction, projectile/yarn friction, and the far-field boundary conditions are revealed and the results explained using simple wave mechanics principles, high-deformation rate material behavior, and the role of various energy-absorbing mechanisms in the fabric-based armor systems.

  15. Assessing implicit models for nonpolar mean solvation forces: The importance of dispersion and volume terms

    PubMed Central

    Wagoner, Jason A.; Baker, Nathan A.

    2006-01-01

    Continuum solvation models provide appealing alternatives to explicit solvent methods because of their ability to reproduce solvation effects while alleviating the need for expensive sampling. Our previous work has demonstrated that Poisson-Boltzmann methods are capable of faithfully reproducing polar explicit solvent forces for dilute protein systems; however, the popular solvent-accessible surface area model was shown to be incapable of accurately describing nonpolar solvation forces at atomic-length scales. Therefore, alternate continuum methods are needed to reproduce nonpolar interactions at the atomic scale. In the present work, we address this issue by supplementing the solvent-accessible surface area model with additional volume and dispersion integral terms suggested by scaled particle models and Weeks–Chandler–Andersen theory, respectively. This more complete nonpolar implicit solvent model shows very good agreement with explicit solvent results and suggests that, although often overlooked, the inclusion of appropriate dispersion and volume terms are essential for an accurate implicit solvent description of atomic-scale nonpolar forces. PMID:16709675
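    For a single spherical solute, the three ingredients of such a nonpolar model (a surface-area term, a scaled-particle volume term, and a Weeks–Chandler–Andersen-style attractive dispersion integral) can be written in closed form. The sketch below is illustrative only: the function name and all coefficient values are placeholders, not the fitted parameters of the paper.

```python
import math

def nonpolar_solvation_energy(radius, gamma=0.05, pressure=0.035,
                              rho=0.033, c6=1.0, probe=1.4):
    """Toy nonpolar solvation energy for one spherical solute.

    G_np = gamma*SASA + p*V_sav + rho * (dispersion integral),
    with a WCA-style attractive term -c6/r^6 integrated analytically
    over solvent beyond the solvent-accessible radius R = radius + probe.
    All parameter values are illustrative, not fitted.
    """
    R = radius + probe
    sasa = 4.0 * math.pi * R**2              # solvent-accessible surface area
    volume = (4.0 / 3.0) * math.pi * R**3    # solvent-accessible volume
    # rho * int_R^inf (-c6/r^6) 4*pi*r^2 dr = -4*pi*rho*c6 / (3*R^3)
    dispersion = -4.0 * math.pi * rho * c6 / (3.0 * R**3)
    return gamma * sasa + pressure * volume + dispersion
```

    The dispersion integral is negative, partially offsetting the repulsive surface and volume work; that balance is the point the abstract argues is essential for atomic-scale accuracy.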

  16. Depth rotation and mirror-image reflection reduce affective preference as well as recognition memory for pictures of novel objects.

    PubMed

    Lawson, Rebecca

    2004-10-01

    In two experiments, the identification of novel 3-D objects was worse for depth-rotated and mirror-reflected views, compared with the study view in an implicit affective preference memory task, as well as in an explicit recognition memory task. In Experiment 1, recognition was worse and preference was lower when depth-rotated views of an object were paired with an unstudied object relative to trials when the study view of that object was shown. There was a similar trend for mirror-reflected views. In Experiment 2, the study view of an object was both recognized and preferred above chance when it was paired with either depth-rotated or mirror-reflected views of that object. These results suggest that view-sensitive representations of objects mediate performance in implicit, as well as explicit, memory tasks. The findings do not support the claim that separate episodic and structural description representations underlie performance in implicit and explicit memory tasks, respectively.

  17. Effects of menstrual cycle phase on ratings of implicitly erotic art.

    PubMed

    Rudski, Jeffrey M; Bernstein, Lauren R; Mitchell, Joy E

    2011-08-01

    Women's perceptions of and responses to explicitly erotic stimuli have been shown to vary across the menstrual cycle. The present study examined responses to implicit eroticism. A total of 83 women provided reactions to paintings by Georgia O'Keeffe in 6 day intervals over the course of 1 month. Among freely cycling women (n = 37), 31% of their descriptions included sexual themes during the first half of their cycle, dropping to 9% of descriptions in the second half. In women using oral contraceptives (n = 46), there was no significant difference in descriptions across the cycle (13% in the first half vs. 17% in the second half). Results were discussed in terms of evolutionary psychology and social-cognitive perspectives on the relationships between hormonal fluctuations and sexuality.

  18. Advances in the treatment of explicit water molecules in docking and binding free energy calculations.

    PubMed

    Hu, Xiao; Maffucci, Irene; Contini, Alessandro

    2018-05-13

    The inclusion of direct effects mediated by water during the ligand-receptor recognition is a hot-topic of modern computational chemistry applied to drug discovery and development. Docking or virtual screening with explicit hydration is still debatable, despite the successful cases that have been presented in the last years. Indeed, how to select the water molecules that will be included in the docking process or how the included waters should be treated remain open questions. In this review, we will discuss some of the most recent methods that can be used in computational drug discovery and drug development when the effect of a single water, or of a small network of interacting waters, needs to be explicitly considered. Here, we analyse software to aid the selection, or to predict the position, of water molecules that are going to be explicitly considered in later docking studies. We also present software and protocols able to efficiently treat flexible water molecules during docking, including examples of applications. Finally, we discuss methods based on molecular dynamics simulations that can be used to integrate docking studies or to reliably and efficiently compute binding energies of ligands in presence of interfacial or bridging water molecules. Software applications aiding the design of new drugs that exploit water molecules, either as displaceable residues or as bridges to the receptor, are constantly being developed. Although further validation is needed, workflows that explicitly consider water will probably become a standard for computational drug discovery soon. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  19. Explicit and implicit springback simulation in sheet metal forming using fully coupled ductile damage and distortional hardening model

    NASA Astrophysics Data System (ADS)

    Yetna n'jock, M.; Houssem, B.; Labergere, C.; Saanouni, K.; Zhenming, Y.

    2018-05-01

    Springback is an important phenomenon which accompanies the forming of metallic sheets, especially for high-strength materials. A quantitative prediction of springback becomes very important for newly developed materials with high mechanical characteristics. In this work, a numerical methodology is developed to quantify this undesirable phenomenon. This methodology is based on the use of both the explicit and implicit finite element solvers of Abaqus®. The most important ingredient of this methodology is the use of a highly predictive mechanical model. A thermodynamically-consistent, non-associative and fully anisotropic elastoplastic constitutive model, strongly coupled with isotropic ductile damage and accounting for distortional hardening, is then used. An algorithm for local integration of the complete set of constitutive equations is developed. This algorithm considers the rotated frame formulation (RFF) to ensure the incremental objectivity of the model in the framework of finite strains. It is implemented in both the explicit (Abaqus/Explicit®) and implicit (Abaqus/Standard®) solvers of Abaqus® through the user routines VUMAT and UMAT, respectively. The implicit solver of Abaqus® has been used to study springback, as it is generally a quasi-static unloading. In order to compare the efficiency of the methods, the explicit method (Dynamic Relaxation Method) proposed by Rayleigh has also been used for springback prediction. The results obtained within the U draw/bending benchmark are studied, discussed and compared with experimental results as reference. Finally, the purpose of this work is to evaluate the reliability of the different methods in efficiently predicting springback in sheet metal forming.
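    The Dynamic Relaxation Method mentioned above reaches a static equilibrium by integrating an artificially damped dynamic system with an explicit scheme until the out-of-balance forces vanish. A minimal linear sketch (a toy two-spring system with assumed mass, damping, and step size, not the Abaqus® implementation):

```python
import numpy as np

def dynamic_relaxation(K, f, mass=1.0, damping=1.0, dt=0.5,
                       tol=1e-8, max_steps=20000):
    """Dynamic Relaxation: solve the static problem K u = f by damped
    explicit pseudo-time stepping, as used for quasi-static unloading
    (springback) in explicit codes.  Parameters are illustrative; dt
    must respect the explicit stability limit of the damped system."""
    u = np.zeros(len(f))
    v = np.zeros(len(f))
    for _ in range(max_steps):
        r = f - K @ u                     # out-of-balance (residual) force
        if np.linalg.norm(r) < tol:
            break
        a = (r - damping * v) / mass      # damped explicit acceleration
        v = v + dt * a                    # semi-implicit Euler update
        u = u + dt * v
    return u
```

    For convergence, dt must stay below roughly 2/ω_max of the damped system; in practice mass scaling and damping are tuned per model.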

  20. V and V of Lexical, Syntactic and Semantic Properties for Interactive Systems Through Model Checking of Formal Description of Dialog

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume P.; Martinie, Celia; Palanque, Philippe

    2013-01-01

    During early phases of the development of an interactive system, future system properties are identified (through interaction with end users in the brainstorming and prototyping phase of the application, or by other stakeholders), imposing requirements on the final system. They can be specific to the application under development or generic to all applications, such as usability principles. Instances of specific properties include visibility of the aircraft altitude, speed… in the cockpit and the continuous possibility of disengaging the autopilot in whatever state the aircraft is. Instances of generic properties include availability of undo (for undoable functions) and availability of a progression bar for functions lasting more than four seconds. While behavioral models of interactive systems using formal description techniques provide complete and unambiguous descriptions of states and state changes, they do not provide an explicit representation of the absence or presence of properties. Assessing that the system that has been built is the right system remains a challenge usually met through extensive use and acceptance tests. By the explicit representation of properties and the availability of tools to support checking these properties, it becomes possible to provide developers with means for systematic exploration of the behavioral models and assessment of the presence or absence of these properties. This paper proposes the synergistic use of two tools for checking both generic and specific properties of interactive applications: Petshop and Java PathFinder. Petshop is dedicated to the description of interactive system behavior. Java PathFinder is dedicated to the runtime verification of Java applications and has an extension dedicated to user interfaces. This approach is exemplified on a safety-critical application in the area of interactive cockpits for large civil aircraft.

  1. Simulation evaluation of TIMER, a time-based, terminal air traffic, flow-management concept

    NASA Technical Reports Server (NTRS)

    Credeur, Leonard; Capron, William R.

    1989-01-01

    A description of a time-based, extended terminal area ATC concept called Traffic Intelligence for the Management of Efficient Runway scheduling (TIMER) and the results of a fast-time evaluation are presented. The TIMER concept is intended to bridge the gap between today's ATC system and a future automated time-based ATC system. The TIMER concept integrates en route metering, fuel-efficient cruise and profile descents, terminal time-based sequencing and spacing together with computer-generated controller aids, to improve delivery precision for fuller use of runway capacity. Simulation results identify and show the effects and interactions of such key variables as horizon of control location, delivery time error at both the metering fix and runway threshold, aircraft separation requirements, delay discounting, wind, aircraft heading and speed errors, and knowledge of final approach speed.

  2. Probabilistic seismic loss estimation via endurance time method

    NASA Astrophysics Data System (ADS)

    Tafakori, Ehsan; Pourzeynali, Saeid; Estekanchi, Homayoon E.

    2017-01-01

    Probabilistic Seismic Loss Estimation is a methodology used as a quantitative and explicit expression of the performance of buildings using terms that address the interests of both owners and insurance companies. Applying the ATC 58 approach for seismic loss assessment of buildings requires using Incremental Dynamic Analysis (IDA), which needs hundreds of time-consuming analyses and in turn hinders its wide application. The Endurance Time Method (ETM) is proposed herein as part of a demand propagation prediction procedure and is shown to be an economical alternative to IDA. Various scenarios were considered to achieve this purpose and their appropriateness has been evaluated using statistical methods. The most precise and efficient scenario was validated through comparison against IDA-driven response predictions of 34 code-conforming benchmark structures and was proven to be sufficiently precise while offering a great deal of efficiency. The loss values were estimated by replacing IDA with the proposed ETM-based procedure in the ATC 58 procedure and it was found that these values suffer from varying inaccuracies, which were attributed to the discretized nature of the damage and loss prediction functions provided by ATC 58.

  3. Efficient sensitivity analysis method for chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao

    2016-05-01

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which depends on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers improves both convergence and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
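    The central trick above, recasting the time-averaged quantity as an extra ODE state, can be sketched on a stable (non-chaotic) toy model where plain direct differentiation converges; for chaotic systems this tangent integration diverges, which is what the shadowing formulation addresses. All names and parameter values below are illustrative.

```python
def averaged_sensitivity(s, T=200.0, dt=0.01):
    """Direct-differentiation sensitivity of a time-averaged quantity.

    For dx/dt = -x + s, the time average Jbar = (1/T) int_0^T x^2 dt is
    recast as an extra ODE state, and the tangent equation dv/dt = -v + 1
    (v = dx/ds) is integrated alongside to give dJbar/ds.  For this
    stable toy model the long-time limits are Jbar -> s^2, dJbar/ds -> 2s.
    """
    x, v = 0.0, 0.0          # state and its sensitivity dx/ds
    q, qs = 0.0, 0.0         # running integrals of x^2 and d(x^2)/ds
    for _ in range(int(T / dt)):
        # forward-Euler step of the augmented (state + tangent) system
        x, v = x + dt * (-x + s), v + dt * (-v + 1.0)
        q += dt * x * x
        qs += dt * 2.0 * x * v
    return q / T, qs / T     # (Jbar, dJbar/ds)
```

    The finite averaging window leaves an O(1/T) transient bias in both outputs, which is why the paper treats the averaged variable as part of the augmented equations rather than post-processing it.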

  4. Effects of Neutron-Star Dynamic Tides on Gravitational Waveforms within the Effective-One-Body Approach

    NASA Astrophysics Data System (ADS)

    Hinderer, Tanja; Taracchini, Andrea; Foucart, Francois; Buonanno, Alessandra; Steinhoff, Jan; Duez, Matthew; Kidder, Lawrence E.; Pfeiffer, Harald P.; Scheel, Mark A.; Szilagyi, Bela; Hotokezaka, Kenta; Kyutoku, Koutarou; Shibata, Masaru; Carpenter, Cory W.

    2016-05-01

    Extracting the unique information on ultradense nuclear matter from the gravitational waves emitted by merging neutron-star binaries requires robust theoretical models of the signal. We develop a novel effective-one-body waveform model that includes, for the first time, dynamic (instead of only adiabatic) tides of the neutron star as well as the merger signal for neutron-star-black-hole binaries. We demonstrate the importance of the dynamic tides by comparing our model against new numerical-relativity simulations of nonspinning neutron-star-black-hole binaries spanning more than 24 gravitational-wave cycles, and to other existing numerical simulations for double neutron-star systems. Furthermore, we derive an effective description that makes explicit the dependence of matter effects on two key parameters: tidal deformability and fundamental oscillation frequency.

  5. Effects of Neutron-Star Dynamic Tides on Gravitational Waveforms within the Effective-One-Body Approach.

    PubMed

    Hinderer, Tanja; Taracchini, Andrea; Foucart, Francois; Buonanno, Alessandra; Steinhoff, Jan; Duez, Matthew; Kidder, Lawrence E; Pfeiffer, Harald P; Scheel, Mark A; Szilagyi, Bela; Hotokezaka, Kenta; Kyutoku, Koutarou; Shibata, Masaru; Carpenter, Cory W

    2016-05-06

    Extracting the unique information on ultradense nuclear matter from the gravitational waves emitted by merging neutron-star binaries requires robust theoretical models of the signal. We develop a novel effective-one-body waveform model that includes, for the first time, dynamic (instead of only adiabatic) tides of the neutron star as well as the merger signal for neutron-star-black-hole binaries. We demonstrate the importance of the dynamic tides by comparing our model against new numerical-relativity simulations of nonspinning neutron-star-black-hole binaries spanning more than 24 gravitational-wave cycles, and to other existing numerical simulations for double neutron-star systems. Furthermore, we derive an effective description that makes explicit the dependence of matter effects on two key parameters: tidal deformability and fundamental oscillation frequency.

  6. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duru, Kenneth, E-mail: kduru@stanford.edu; Dunham, Eric M.; Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA

    Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge–Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.

  7. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    NASA Astrophysics Data System (ADS)

    Duru, Kenneth; Dunham, Eric M.

    2016-01-01

    Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. 
We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.
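    The summation-by-parts structure behind both records above can be illustrated with the classical second-order operator (the papers use sixth-order interior stencils; this low-order pair is only a sketch): D = H⁻¹Q with diagonal positive H and Q + Qᵀ = diag(-1, 0, …, 0, 1), the discrete counterpart of integration by parts that enables the energy estimates.

```python
import numpy as np

def sbp_d1(n, h):
    """Second-order-accurate SBP first-derivative operator D = H^{-1} Q.

    H is a diagonal positive norm (trapezoidal quadrature) and
    Q + Q^T = B = diag(-1, 0, ..., 0, 1), the discrete analogue of
    integration by parts used in SBP energy-stability proofs."""
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = h / 2.0
    Q = np.zeros((n, n))
    for i in range(n - 1):            # skew-symmetric interior part
        Q[i, i + 1] = 0.5
        Q[i + 1, i] = -0.5
    Q[0, 0] = -0.5                    # boundary closure giving Q + Q^T = B
    Q[-1, -1] = 0.5
    return np.linalg.inv(H) @ Q, H, Q
```

    The interior rows of D reproduce the central difference (u_{i+1} - u_{i-1})/(2h), while the boundary rows fall back to one-sided differences, exactly the interior/boundary accuracy split the abstracts describe (here 2nd/1st order instead of 6th/3rd).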

  8. A multigrid nonoscillatory method for computing high speed flows

    NASA Technical Reports Server (NTRS)

    Li, C. P.; Shieh, T. H.

    1993-01-01

    A multigrid method using different smoothers has been developed to solve the Euler equations discretized by a nonoscillatory scheme up to fourth order accuracy. The best smoothing property is provided by a five-stage Runge-Kutta technique with optimized coefficients, yet the most efficient smoother is a backward Euler technique in factored and diagonalized form. The single-grid solution for a hypersonic, viscous conic flow is in excellent agreement with the solution obtained by the third order MUSCL and Roe's method. Mach 8 inviscid flow computations for a complete entry probe have shown that the accuracy is at least as good as the symmetric TVD scheme of Yee and Harten. The implicit multigrid method is four times more efficient than the explicit multigrid technique and 3.5 times faster than the single-grid implicit technique. For a Mach 8.7 inviscid flow over a blunt delta wing at 30 deg incidence, the CPU reduction factor from the three-level multigrid computation is 2.2 on a grid of 37 x 41 x 73 nodes.
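    The interplay of smoothing and coarse-grid correction in such a multigrid solver can be sketched on the simplest possible target, the 1D Poisson equation with a weighted-Jacobi smoother and one coarse level (an illustrative two-grid cycle, not the Runge-Kutta or backward-Euler smoothers of the paper):

```python
import numpy as np

def apply_A(u, h):
    """Matrix-free 1D Poisson operator -u'' with zero Dirichlet boundaries."""
    up = np.pad(u, 1)
    return (2.0 * u - up[:-2] - up[2:]) / h**2

def jacobi(u, f, h, sweeps=3, omega=2.0 / 3.0):
    """Weighted-Jacobi smoother (damps the high-frequency error modes)."""
    for _ in range(sweeps):
        up = np.pad(u, 1)
        u = (1.0 - omega) * u + omega * 0.5 * (h * h * f + up[:-2] + up[2:])
    return u

def two_grid_cycle(u, f, h):
    """One V-cycle with a single coarse level (n must be odd)."""
    u = jacobi(u, f, h)                                        # pre-smooth
    r = f - apply_A(u, h)
    rc = 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]   # full weighting
    nc, hc = len(rc), 2.0 * h
    Ac = (2.0 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / hc**2
    ec = np.linalg.solve(Ac, rc)                               # exact coarse solve
    e = np.zeros_like(u)                                       # linear prolongation
    e[1::2] = ec
    ecp = np.pad(ec, 1)
    e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])
    return jacobi(u + e, f, h)                                 # post-smooth
```

    The smoother removes oscillatory error while the coarse-grid correction removes the smooth error the smoother barely touches; the same division of labor drives the Euler-equation solver above.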

  9. NASA Dryden Status: Aerospace Control and Guidance Sub-Committee Meeting 109

    NASA Technical Reports Server (NTRS)

    Jacobson, Steven R.

    2012-01-01

    NASA Dryden has been engaging in some exciting work that will enable lighter weight and more fuel-efficient vehicles through advanced control and dynamics technologies. The main areas of emphasis are Enabling Light-weight Flexible Structures, real-time control surface optimization for fuel efficiency, and autonomous formation flight. This presentation provides a description of the current and upcoming work in these areas. Additionally, status is given for the Dreamchaser pilot training activity and KQ-X autonomous aerial refueling.

  10. A time-domain finite element boundary integral approach for elastic wave scattering

    NASA Astrophysics Data System (ADS)

    Shi, F.; Lowe, M. J. S.; Skelton, E. A.; Craster, R. V.

    2018-04-01

    The response of complex scatterers, such as rough or branched cracks, to incident elastic waves is required in many areas of industrial importance, such as non-destructive evaluation and related fields; we develop an approach to generate accurate and rapid simulations. To achieve this we develop, in the time domain, an implementation to efficiently couple the finite element (FE) method within a small local region and the boundary integral (BI) globally. The FE explicit scheme is run in a local box to compute the surface displacement of the scatterer, by giving forcing signals to excitation nodes, which can lie on the scatterer itself. The required input forces on the excitation nodes are obtained with a reformulated FE equation, according to the incident displacement field. The surface displacements computed by the local FE are then projected, through time-domain BI formulae, to calculate the scattering signals with different modes. This new method yields large improvements in the efficiency of FE simulations for scattering from complex scatterers. We present results using different shapes and boundary conditions, all simulated using this approach in both 2D and 3D, and then compare with full FE models and theoretical solutions to demonstrate the efficiency and accuracy of this numerical approach.

  11. Deterministic generation of remote entanglement with active quantum feedback

    DOE PAGES

    Martin, Leigh; Motzoi, Felix; Li, Hanhan; ...

    2015-12-10

    We develop and study protocols for deterministic remote entanglement generation using quantum feedback, without relying on an entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Lastly, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.

  12. How to calculate H3 better.

    PubMed

    Pavanello, Michele; Tung, Wei-Cheng; Adamowicz, Ludwik

    2009-11-14

    Efficient optimization of the basis set is key to achieving a very high accuracy in variational calculations of molecular systems employing basis functions that are explicitly dependent on the interelectron distances. In this work we present a method for a systematic enlargement of basis sets of explicitly correlated functions based on the iterative-complement-interaction approach developed by Nakatsuji [Phys. Rev. Lett. 93, 030403 (2004)]. We illustrate the performance of the method in the variational calculations of H3 where we use explicitly correlated Gaussian functions with shifted centers. The total variational energy (-1.674 547 421 Hartree) and the binding energy (-15.74 cm⁻¹) obtained in the calculation with 1000 Gaussians are the most accurate results to date.
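    The variational principle exploited above can be sketched on a toy problem: the 1D harmonic oscillator in a basis of uncorrelated Gaussians exp(-α_k x²), where every matrix element is analytic. Enlarging the basis monotonically lowers the energy toward the exact 0.5 hartree (the α_k values below are arbitrary; the actual H3 calculations use explicitly correlated Gaussians with shifted centers).

```python
import numpy as np

def ho_ground_energy(alphas):
    """Variational ground-state energy of the 1D harmonic oscillator
    (H = -1/2 d^2/dx^2 + 1/2 x^2, hbar = m = omega = 1) in a basis of
    Gaussians exp(-alpha_k x^2), using analytic matrix elements."""
    a = np.asarray(alphas, dtype=float)
    s = a[:, None] + a[None, :]
    S = np.sqrt(np.pi / s)                 # overlap integrals
    T = (a[:, None] * a[None, :] / s) * S  # kinetic energy integrals
    V = S / (4.0 * s)                      # 1/2 x^2 potential integrals
    H = T + V
    # generalized eigenproblem H c = E S c via Loewdin orthogonalization
    w, U = np.linalg.eigh(S)
    X = U / np.sqrt(w)                     # columns satisfy X^T S X = I
    return np.linalg.eigvalsh(X.T @ H @ X)[0]
```

    Each added Gaussian can only lower (or keep) the energy, which is the monotonic-improvement property that systematic basis enlargement schemes such as the one in the abstract rely on.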

  13. Detection of Unknown LEO Satellite Using Radar Measurements

    NASA Astrophysics Data System (ADS)

    Kamensky, S.; Samotokhin, A.; Khutorovsky, Z.; Alfriend, T.

    While processing the radar information aimed at satellite catalog maintenance, some measurements do not correlate with cataloged and tracked satellites. These non-correlated measurements participate in the detection (primary orbit determination) of new (not cataloged) satellites. The satellite is considered newly detected when it is missing from the catalog and the primary orbit determination on the basis of the non-correlated measurements provides accuracy sufficient for reliable correlation of future measurements. We will call this the detection condition. One non-correlated measurement in real conditions does not have enough accuracy and thus does not satisfy the detection condition. Two measurements separated by a revolution or more normally provide an orbit determination with accuracy sufficient for the selection of other measurements. However, it is not always possible to say with high probability (close to 1) that two measurements belong to one satellite. Three measurements from different revolutions, which fit one orbit, have significantly higher chances of belonging to one satellite. Thus the suggested detection (primary orbit determination) algorithm looks for three uncorrelated measurements in different revolutions for which we can determine an orbit inscribing them. The detection procedure based on the search for triplets is rather laborious; thus only relatively high efficiency can justify its practical implementation. The work presents a detailed description of the suggested detection procedure based on the search for triplets of uncorrelated radar measurements. The break-ups of tracked satellites provide the most difficult conditions for the operation of the detection algorithm and reveal its characteristics explicitly. The characteristics of time efficiency and reliability of the detected orbits are of maximum interest.
    Within this work we suggest determining these characteristics using simulation of break-ups with subsequent acquisition of measurements generated by the fragments. In particular, using simulation we can not only evaluate the characteristics of the algorithm but also adjust its parameters for certain conditions: the orbit of the fragmented satellite, the features of the break-up, the capabilities of detection radars, etc. We describe the algorithm performing the simulation of radar measurements produced by the fragments of the parent satellite. This algorithm accounts for the basic factors affecting the characteristics of time efficiency and reliability of the detection. The catalog maintenance algorithm includes two major components: detection and tracking. These are two processes permanently interacting with each other, as is the case when processing real radar data. The simulation must take this into account, since one cannot obtain reliable characteristics of the detection procedure by simulating only that process. Thus we simulated both processes in their interaction. The work presents the results of simulation for the simplest case of a break-up in a near-circular orbit with insignificant atmospheric drag. The simulations show rather high efficiency. We demonstrate as well that the characteristics of time efficiency and reliability of determined orbits essentially depend on the density of the observed break-up fragments.

  14. Resolved simulations of a granular-fluid flow through a check dam with a SPH-DCDEM model

    NASA Astrophysics Data System (ADS)

    Birjukovs Canelas, Ricardo; Domínguez, Jose; Crespo, Alejandro; Gómez-Gesteira, Moncho; Ferreira, Rui M. L.

    2017-04-01

    Debris flows represent some of the most relevant phenomena in geomorphological events. Due to the potential destructiveness of such flows, they are the target of a vast amount of research. Experimental research in laboratory facilities or in the field is fundamental to characterize the rheological properties of these flows and to provide insights into their structure. However, characterizing interparticle contacts and the structure of the motion of the granular phase is difficult, even under controlled laboratory conditions, and possible only for simple geometries. This work addresses the need for a numerical simulation tool applicable to granular-fluid mixtures featuring high spatial and temporal resolution, capable of resolving the motion of individual particles, including all interparticle contacts, and thus suitable to complement laboratory research. The DualSPHysics meshless numerical implementation, based on Smoothed Particle Hydrodynamics (SPH), is expanded with a Distributed Contact Discrete Element Method (DCDEM) in order to explicitly solve the fluid and solid phases. The specific objective is to test the SPH-DCDEM approach by comparing its results with experimental data. An experimental set-up for stony debris flows in a slit check dam is reproduced numerically, where solid material is introduced through a hopper ensuring a constant solid discharge for the considered time interval. With each sediment particle possibly undergoing several simultaneous contacts, thousands of time-evolving interactions are treated efficiently thanks to the model's algorithmic structure and the HPC implementation of DualSPHysics. The results, comprising mainly retention curves, are in good agreement with the measurements, correctly reproducing the changes in efficiency with slit spacing and density.
The encouraging results, coupled with the prospect of so far unique insights into the internal dynamics of a debris flow, show the potential of high-performance resolved approaches for describing the flow and studying its mitigation strategies. This research was partially supported by Portuguese and European funds, within programs COMPETE2020 and PORL-FEDER, through project PTDC/ECM-HID/6387/2014 granted by the National Foundation for Science and Technology (FCT).

  15. An implicit higher-order spatially accurate scheme for solving time dependent flows on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Tomaro, Robert F.

    1998-07-01

    The present research is aimed at developing a higher-order, spatially accurate scheme for both steady and unsteady flow simulations using unstructured meshes. The resulting scheme must work on a variety of general problems to ensure the creation of a flexible, reliable and accurate aerodynamic analysis tool. To calculate the flow around complex configurations, unstructured grids and the associated flow solvers have been developed. Efficient simulation requires minimal use of computer memory and computational time. Unstructured flow solvers typically require more computer memory than structured flow solvers due to the indirect addressing of the cells. The approach taken in the present research was to modify an existing three-dimensional unstructured flow solver, first to decrease the computational time required for a solution and then to increase the spatial accuracy. The terms required to simulate flow involving non-stationary grids were also implemented. First, an implicit solution algorithm was implemented to replace the existing explicit procedure. Several test cases, including internal and external, inviscid and viscous, two-dimensional, three-dimensional and axisymmetric problems, were simulated to compare the explicit and implicit solution procedures. The increased efficiency and robustness of the modified code due to the implicit algorithm were demonstrated. Two unsteady test cases, a plunging airfoil and a wing undergoing bending and torsion, were simulated using the implicit algorithm modified to include the terms required for a moving and/or deforming grid. Second, a higher than second-order spatially accurate scheme was developed and implemented into the baseline code. Third- and fourth-order spatially accurate schemes were implemented and tested. The original dissipation was modified to include higher-order terms and was further modified near shock waves to limit pre- and post-shock oscillations.
The unsteady cases were repeated using the higher-order spatially accurate code, and the new solutions were compared with those obtained using the second-order spatially accurate scheme. Finally, the increased efficiency of an implicit solution algorithm in a production Computational Fluid Dynamics flow solver was demonstrated for steady and unsteady flows. Third- and fourth-order spatially accurate schemes have been implemented, creating a basis for a state-of-the-art aerodynamic analysis tool.
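    The efficiency argument for replacing an explicit time-marching procedure with an implicit one rests on the stability limit of the explicit scheme. A minimal sketch on the stiff model problem y' = -lambda*y (forward vs. backward Euler, illustrative only, not the flow solver's actual scheme):

```python
def explicit_euler(lam, dt, steps, y0=1.0):
    """Forward Euler for y' = -lam*y: stable only when dt < 2/lam."""
    y = y0
    for _ in range(steps):
        y += dt * (-lam * y)      # blows up once lam*dt exceeds 2
    return y

def implicit_euler(lam, dt, steps, y0=1.0):
    """Backward Euler for y' = -lam*y: unconditionally stable."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 + lam * dt)  # decays for any dt > 0
    return y
```

    With lam = 1000 and dt = 0.01 the explicit update diverges while the implicit one decays, which is why an implicit solver can take far larger time steps on stiff problems.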

  16. Explicit Cloud Nucleation from Arbitrary Mixtures of Aerosol Types and Sizes Using an Ultra-Efficient In-Line Aerosol Bin Model in High-Resolution Simulations of Hurricanes

    NASA Astrophysics Data System (ADS)

    Walko, R. L.; Ashby, T.; Cotton, W. R.

    2017-12-01

    The fundamental role of atmospheric aerosols in the process of cloud droplet nucleation is well known, and there is ample evidence that the concentration, size, and chemistry of aerosols can strongly influence microphysical, thermodynamic, and ultimately dynamic properties and evolution of clouds and convective systems. With the increasing availability of observation- and model-based environmental representations of different types of anthropogenic and natural aerosols, there is increasing need for models to be able to represent which aerosols nucleate and which do not in supersaturated conditions. However, this is a very complex process that involves competition for water vapor between multiple aerosol species (chemistries) and different aerosol sizes within each species. Attempts have been made to parameterize the nucleation properties of mixtures of different aerosol species, but it is very difficult or impossible to represent all possible mixtures that may occur in practice. As part of a modeling study of the impact of anthropogenic and natural aerosols on hurricanes, we developed an ultra-efficient aerosol bin model to represent nucleation in a high-resolution atmospheric model that explicitly represents cloud- and subcloud-scale vertical motion. The bin model is activated at any time and location in a simulation where supersaturation occurs and is potentially capable of activating new cloud droplets. The bins are populated from the aerosol species that are present at the given time and location and by multiple sizes from each aerosol species according to a characteristic size distribution, and the chemistry of each species is represented by its absorption or adsorption characteristics. The bin model is integrated in time increments that are smaller than that of the atmospheric model in order to temporally resolve the peak supersaturation, which determines the total nucleated number. 
Even though on the order of 100 bins are typically utilized, this leads to only a 10-20% increase in overall computational cost, owing to the efficiency of the bin model. The method is highly versatile in that it automatically accommodates any number and mixture of different aerosol species. Applications of this model to simulations of Typhoon Nuri will be presented.
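    As a rough sketch of the bin-model idea (the closures and all coefficients below are illustrative assumptions, not the authors' formulation): each bin carries a Koehler-like critical supersaturation, and sub-stepping the supersaturation equation inside the host model's time step resolves the peak value that sets the activated number.

```python
def activated_fraction(radii_um, weights, s_peak, A=1.0e-4):
    """Weighted fraction of aerosol bins whose critical supersaturation
    (toy Koehler-like law s_c = A * r**-1.5, illustrative constant A)
    lies below the resolved peak supersaturation."""
    total = sum(weights)
    act = sum(w for r, w in zip(radii_um, weights) if A * r ** -1.5 < s_peak)
    return act / total

def peak_supersaturation(updraft, dt_host, nsub):
    """Sub-step a toy supersaturation ODE ds/dt = a*w - b*N(s)*s, where the
    condensation sink N grows as droplets activate (here a crude running-max
    closure); the maximum over substeps is the peak the host model's coarser
    step would miss."""
    a, b = 1.0e-4, 5.0                 # illustrative production/sink coefficients
    s, s_peak, n_act, dt = 0.0, 0.0, 0.0, dt_host / nsub
    for _ in range(nsub):
        n_act = max(n_act, s)          # activated number ~ running max of s (toy)
        s += dt * (a * updraft - b * n_act * s)
        s_peak = max(s_peak, s)
    return s_peak
```

    The essential point the sketch preserves is that the nucleated number depends on the transient supersaturation maximum, so the bin model must be integrated on a finer time axis than the dynamics.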

  17. Mergers of black-hole binaries with aligned spins: Waveform characteristics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, Bernard J.; Department of Physics, University of Maryland, Baltimore County, 1000 Hilltop Circle, Baltimore, Maryland 21250; Baker, John G.

    2011-10-15

    We conduct a descriptive analysis of the multipolar structure of gravitational-radiation waveforms from equal-mass aligned-spin mergers, following an approach first presented in the complementary context of nonspinning black holes of varying mass ratio [J. G. Baker et al., Phys. Rev. D 78, 044046 (2008)]. We find that, as with the nonspinning mergers, the dominant waveform mode phases evolve together in lock-step through inspiral and merger, supporting the previous waveform description in terms of an adiabatically rigid rotator driving gravitational-wave emission, an implicit rotating source. We further apply the late-time merger-ringdown model for the rotational frequency introduced in [J. G. Baker et al., Phys. Rev. D 78, 044046 (2008)], along with an improved amplitude model appropriate for the dominant (2, ±2) modes. This provides a quantitative description of the merger-ringdown waveforms, and suggests that the major features of these waveforms can be described with reference only to the intrinsic parameters associated with the state of the final black hole formed in the merger. We provide an explicit model for the merger-ringdown radiation, and demonstrate that this model agrees to fitting factors better than 95% with the original numerical waveforms for system masses above ≈150 solar masses. This model may be directly applicable to gravitational-wave detection of intermediate-mass black-hole mergers.

  18. Strategies to assist uptake of pelvic floor muscle training for people with urinary incontinence: A clinician viewpoint.

    PubMed

    Slade, Susan C; Hay-Smith, Jean; Mastwyk, Sally; Morris, Meg E; Frawley, Helena

    2018-05-24

    The experiences and information needs of clinicians who use pelvic floor muscle training to manage urinary incontinence were explored. Qualitative methods were used to conduct thematic analysis of data collected from clinician focus groups and interviews. Participants were registered physiotherapists and continence nurses in Melbourne, Australia. Recruitment was through a combination of purposive and "snowball" sampling and continued until data adequacy was reached. Twenty-eight physiotherapists and one continence nurse participated in seven focus groups and one interview. The main finding was that pelvic floor muscle training requires comprehensive descriptions of program details in order for clinicians to implement evidence-based interventions. The following themes were identified: (1) pelvic floor muscle training tailored to the needs of each individual is essential; (2) training-specific cues and verbal prompts assist patients to learn and engage with exercises; and (3) clinicians can benefit from research summaries and reports that provide explicit and comprehensive descriptions and decision rules about intervention content and progression. The data indicated that some clinicians have difficulty interpreting and applying research findings because interventions are not always well reported. Clinicians who use pelvic floor muscle training to treat urinary incontinence can benefit from accessing explicit details of interventions tested in research and reported as effective. They viewed tailoring therapy to individual goals and the use of verbal prompts and visualization cues as important engagement strategies for effective exercise performance. Explicit reporting could be facilitated by using an exercise guideline template, such as the Consensus on Exercise Reporting Template (CERT). © 2018 Wiley Periodicals, Inc.

  19. A direct method for nonlinear ill-posed problems

    NASA Astrophysics Data System (ADS)

    Lakhal, A.

    2018-02-01

    We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula that we compute explicitly by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.

  20. The fourth dimension of life: fractal geometry and allometric scaling of organisms.

    PubMed

    West, G B; Brown, J H; Enquist, B J

    1999-06-04

    Fractal-like networks effectively endow life with an additional fourth spatial dimension. This is the origin of quarter-power scaling that is so pervasive in biology. Organisms have evolved hierarchical branching networks that terminate in size-invariant units, such as capillaries, leaves, mitochondria, and oxidase molecules. Natural selection has tended to maximize both metabolic capacity, by maximizing the scaling of exchange surface areas, and internal efficiency, by minimizing the scaling of transport distances and times. These design principles are independent of detailed dynamics and explicit models and should apply to virtually all organisms.

  1. Effects of time delays on stability and Hopf bifurcation in a fractional ring-structured network with arbitrary neurons

    NASA Astrophysics Data System (ADS)

    Huang, Chengdai; Cao, Jinde; Xiao, Min; Alsaedi, Ahmed; Hayat, Tasawar

    2018-04-01

    This paper is comprehensively concerned with the dynamics of a class of high-dimensional fractional ring-structured neural networks with multiple time delays. Based on the associated characteristic equation, the sum of the time delays is taken as the bifurcation parameter, and explicit conditions describing delay-dependent stability and the emergence of Hopf bifurcation in such networks are derived. It is revealed that the stability and bifurcation rely heavily on the sum of the time delays, and that the stability performance of such networks can be markedly improved by selecting this sum carefully. Moreover, it is shown that both the fractional order and the number of neurons can strongly influence the stability and bifurcation of such networks. The obtained criteria substantially generalize and improve the existing work. Finally, numerical examples are presented to verify the efficiency of the theoretical results.

  2. Efficient genetic algorithms using discretization scheduling.

    PubMed

    McLay, Laura A; Goldberg, David E

    2005-01-01

    In many applications of genetic algorithms, there is a tradeoff between speed and accuracy in fitness evaluations when evaluations use numerical methods with varying discretization. In these applications, the cost and accuracy of a fitness evaluation depend on the discretization error incurred when implicit or explicit quadrature is used to estimate function evaluations. This paper examines discretization scheduling, or how to vary the discretization within the genetic algorithm in order to use the least computation time for a solution of a desired quality. The effectiveness of discretization scheduling can be determined by comparing its computation time to that of a GA using a constant discretization. There are three ingredients in discretization scheduling: population sizing, the estimated time for each function evaluation, and predicted convergence time analysis. Idealized one- and two-dimensional experiments and an inverse groundwater application illustrate the computational savings to be achieved by using discretization scheduling.
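    The idea can be sketched on a toy problem: maximize an objective that is itself a quadrature (here the trapezoidal estimate of the integral of sin on [0, x], maximal at x = pi), starting with a coarse discretization and refining it as the population converges. The schedule, operators, and all constants below are illustrative choices, not the paper's method.

```python
import math
import random

def fitness(x, m):
    """Objective approximated by m-point trapezoidal quadrature of sin on
    [0, x]; approximates 1 - cos(x), which is maximal at x = pi."""
    h = x / (m - 1)
    s = 0.5 * (math.sin(0.0) + math.sin(x)) + sum(math.sin(i * h) for i in range(1, m - 1))
    return s * h

def ga_with_schedule(gens=60, pop_size=30, seed=1):
    """Toy GA with a discretization schedule: early generations use cheap,
    coarse quadrature (m = 4 points); later generations refine to m = 64."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 2 * math.pi) for _ in range(pop_size)]
    for g in range(gens):
        m = 4 + 4 * (g * 16 // gens)            # coarse -> fine schedule
        scored = sorted(pop, key=lambda x: fitness(x, m), reverse=True)
        elite = scored[:pop_size // 3]          # truncation selection
        pop = elite + [min(2 * math.pi, max(0.0, rng.choice(elite) + rng.gauss(0, 0.2)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=lambda x: fitness(x, 64))
```

    Most fitness evaluations are spent while m is small, so the total quadrature cost is far below that of running every generation at the finest discretization, which is the point of the scheduling.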

  3. A local time stepping algorithm for GPU-accelerated 2D shallow water models

    NASA Astrophysics Data System (ADS)

    Dazzi, Susanna; Vacondio, Renato; Dal Palù, Alessandro; Mignosa, Paolo

    2018-01-01

    In the simulation of flooding events, mesh refinement is often required to capture local bathymetric features and/or to detail areas of interest; however, if an explicit finite volume scheme is adopted, the presence of small cells in the domain can restrict the allowable time step due to the stability condition, thus reducing the computational efficiency. With the aim of overcoming this problem, the paper proposes the application of a Local Time Stepping (LTS) strategy to a GPU-accelerated 2D shallow water numerical model able to handle non-uniform structured meshes. The algorithm is specifically designed to exploit the computational capability of GPUs, minimizing the overheads associated with the LTS implementation. The results of theoretical and field-scale test cases show that the LTS model guarantees appreciable reductions in the execution time compared to the traditional Global Time Stepping strategy, without compromising the solution accuracy.
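    The bookkeeping behind an LTS scheme can be sketched as follows (a generic illustration of power-of-two local time stepping, not the paper's GPU implementation): each cell gets a level so that it advances with dt_min * 2**level, and the saving is the reduced number of cell updates per global step.

```python
def lts_levels(dx, wave_speed=1.0, cfl=0.9, max_level=4):
    """Assign each cell a power-of-two time-step level: cell i advances with
    dt_i = dt_min * 2**level_i, where dt_min is set by the smallest cell's
    CFL condition (illustrative 1D setting, constant wave speed)."""
    dt_allow = [cfl * d / wave_speed for d in dx]
    dt_min = min(dt_allow)
    levels = []
    for dt in dt_allow:
        lvl = 0
        while lvl < max_level and dt_min * 2 ** (lvl + 1) <= dt:
            lvl += 1
        levels.append(lvl)
    return levels, dt_min

def update_ratio(levels):
    """Cell updates per global step with LTS, relative to a Global Time
    Stepping run that advances every cell at dt_min."""
    n = len(levels)
    L = max(levels)
    lts = sum(2 ** (L - l) for l in levels)   # a level-l cell updates 2**(L-l) times
    return lts / (n * 2 ** L)
```

    For a mesh that is half fine cells and half cells four times larger, the ratio comes out below one, which is the source of the reported speed-up; the actual scheme must additionally synchronize fluxes at level interfaces, which this sketch omits.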

  4. Special solutions to Chazy equation

    NASA Astrophysics Data System (ADS)

    Varin, V. P.

    2017-02-01

    We consider the classical Chazy equation, which is known to be integrable in hypergeometric functions. This solution, however, has remained purely existential and was never used numerically. We give explicit formulas for the hypergeometric solutions in terms of initial data. A special solution was found in the upper half plane H with the same tessellation of H as that of the modular group. This allowed us to derive some new identities for the Eisenstein series. We constructed a special solution in the unit disk and gave an explicit description of the singularities on its natural boundary. A global solution to the Chazy equation in elliptic and theta functions was found that allows parametrization of an arbitrary solution to the Chazy equation. The results have applications to analytic number theory.

  5. Symbolic interactionism in grounded theory studies: women surviving with HIV/AIDS in rural northern Thailand.

    PubMed

    Klunklin, Areewan; Greenwood, Jennifer

    2006-01-01

    Although it is generally acknowledged that symbolic interactionism and grounded theory are connected, the precise nature of their connection remains implicit and unexplained. As a result, many grounded theory studies are undertaken without an explanatory framework, which in turn results in the description rather than the explanation of the data obtained. In this report, the authors make explicit and explain the nature of the connections between symbolic interactionism and grounded theory research. Specifically, they make explicit the connection between Blumer's methodological principles and processes and grounded theory methodology. In addition, the authors illustrate the explanatory power of symbolic interactionism in grounded theory using data from a study of the HIV/AIDS experiences of married and widowed Thai women.

  6. Children's science learning: A core skills approach.

    PubMed

    Tolmie, Andrew K; Ghazali, Zayba; Morris, Suzanne

    2016-09-01

    Research has identified the core skills that predict success during primary school in reading and arithmetic, and this knowledge increasingly informs teaching. However, there has been no comparable work that pinpoints the core skills that underlie success in science. The present paper attempts to redress this by examining candidate skills and considering what is known about the way in which they emerge, how they relate to each other and to other abilities, how they change with age, and how their growth may vary between topic areas. There is growing evidence that early-emerging tacit awareness of causal associations is initially separated from language-based causal knowledge, which is acquired in part from everyday conversation and shows inaccuracies not evident in tacit knowledge. Mapping of descriptive and explanatory language onto causal awareness appears therefore to be a key development, which promotes unified conceptual and procedural understanding. This account suggests that the core components of initial science learning are (1) accurate observation, (2) the ability to extract and reason explicitly about causal connections, and (3) knowledge of mechanisms that explain these connections. Observational ability is educationally inaccessible until integrated with verbal description and explanation, for instance, via collaborative group work tasks that require explicit reasoning with respect to joint observations. Descriptive ability and explanatory ability are further promoted by managed exposure to scientific vocabulary and use of scientific language. Scientific reasoning and hypothesis testing are later acquisitions that depend on this integration of systems and improved executive control. © 2016 The British Psychological Society.

  7. The explicit computation of integration algorithms and first integrals for ordinary differential equations with polynomials coefficients using trees

    NASA Technical Reports Server (NTRS)

    Crouch, P. E.; Grossman, Robert

    1992-01-01

    This note is concerned with the explicit symbolic computation of expressions involving differential operators and their actions on functions. The derivation of specialized numerical algorithms, the explicit symbolic computation of integrals of motion, and the explicit computation of normal forms for nonlinear systems all require such computations. More precisely, if R = k(x_1, ..., x_N), where k = R or C, F denotes a differential operator with coefficients from R, and g is a member of R, we describe data structures and algorithms for efficiently computing the action of F on g. The basic idea is to impose a multiplicative structure on the vector space whose basis is the set of finite rooted trees with nodes labeled by the coefficients of the differential operators. Cancellation of two trees with r + 1 nodes translates into cancellation of O(N^r) expressions involving the coefficient functions and their derivatives.
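    For concreteness, the action of such an operator can be computed directly in the simplest univariate polynomial setting, without the tree-based data structures the note develops (this sketch is a naive baseline, not the note's algorithm): a polynomial is a coefficient list, and the operator is a list of (coefficient polynomial, derivative order) terms.

```python
def p_add(a, b):
    """Add polynomials given as coefficient lists (index = power of x)."""
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) for i in range(n)]

def p_mul(a, b):
    """Multiply two coefficient-list polynomials."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def p_diff(a, k=1):
    """k-th derivative of a coefficient-list polynomial."""
    for _ in range(k):
        a = [i * a[i] for i in range(1, len(a))] or [0]
    return a

def apply_operator(terms, g):
    """Apply F = sum_k c_k(x) d^k/dx^k, given as terms = [(c_k, k), ...],
    to the polynomial g."""
    out = [0]
    for c, k in terms:
        out = p_add(out, p_mul(c, p_diff(list(g), k)))
    return out
```

    For example, F = x d/dx applied to x^2 gives 2x^2. In several variables the number of such coefficient terms grows combinatorially, which is what motivates the tree-based organization of the computation.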

  8. Efficient dynamic modeling of manipulators containing closed kinematic loops

    NASA Astrophysics Data System (ADS)

    Ferretti, Gianni; Rocco, Paolo

    An approach to efficiently solve the forward dynamics problem for manipulators containing closed chains is proposed. The two main distinctive features of this approach are: (1) the dynamics of the equivalent open-loop tree structures (any closed loop can in general be modeled by imposing additional kinematic constraints on a suitable tree structure) are computed through an efficient Newton-Euler formulation; (2) the constraint equations for the closed chains most commonly adopted in industrial manipulators are solved explicitly, thus overcoming the redundancy of Lagrange's multiplier method while avoiding the inefficiency of a numerical solution of the implicit constraint equations. The constraint equations considered for explicit solution are those imposed by articulated gear mechanisms and planar closed chains (pantograph-type structures). Articulated gear mechanisms are used in virtually all industrial robots to transmit motion from actuators to links, while planar closed chains are usefully employed to increase the stiffness of manipulators and their load capacity, as well as to reduce the kinematic coupling of joint axes. The accuracy and efficiency of the proposed approach are shown through a simulation test.
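    The benefit of solving a constraint explicitly rather than via multipliers can be seen on the smallest possible gear example (a one-DOF toy, assumed for illustration, not the paper's manipulator model): two inertias J1, J2 coupled by a gear ratio n with constraint q2 = n*q1.

```python
def accel_reduced(J1, J2, n, tau):
    """Explicitly eliminating the gear constraint q2 = n*q1 folds the
    second inertia into the first: (J1 + n**2 * J2) * q1dd = tau."""
    return tau / (J1 + n ** 2 * J2)

def accel_multiplier(J1, J2, n, tau):
    """Same system with the constraint kept implicit and enforced by a
    Lagrange multiplier lam (the gear contact torque):
        J1*q1dd = tau - n*lam,   J2*q2dd = lam,   q2dd = n*q1dd.
    Substituting the constraint reproduces the reduced equation."""
    q1dd = tau / (J1 + n ** 2 * J2)   # identical to the eliminated form
    lam = J2 * n * q1dd               # recovered multiplier / contact torque
    return q1dd, lam
```

    Both routes give the same acceleration, but the multiplier route carries an extra unknown per constraint; explicit elimination removes those unknowns before any numerical work, which is the efficiency argument made in the abstract.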

  9. 10 CFR 433.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... stage when the energy efficiency and sustainability details (such as insulation levels, HVAC systems, water-using systems, etc.) are either explicitly determined or implicitly included in a project cost...

  10. Extended Quantum Field Theory, Index Theory, and the Parity Anomaly

    NASA Astrophysics Data System (ADS)

    Müller, Lukas; Szabo, Richard J.

    2018-06-01

    We use techniques from functorial quantum field theory to provide a geometric description of the parity anomaly in fermionic systems coupled to background gauge and gravitational fields on odd-dimensional spacetimes. We give an explicit construction of a geometric cobordism bicategory which incorporates general background fields in a stack, and together with the theory of symmetric monoidal bicategories we use it to provide the concrete forms of invertible extended quantum field theories which capture anomalies in both the path integral and Hamiltonian frameworks. Specialising this situation by using the extension of the Atiyah-Patodi-Singer index theorem to manifolds with corners due to Loya and Melrose, we obtain a new Hamiltonian perspective on the parity anomaly. We compute explicitly the 2-cocycle of the projective representation of the gauge symmetry on the quantum state space, which is defined in a parity-symmetric way by suitably augmenting the standard chiral fermionic Fock spaces with Lagrangian subspaces of zero modes of the Dirac Hamiltonian that naturally appear in the index theorem. We describe the significance of our constructions for the bulk-boundary correspondence in a large class of time-reversal invariant gauge-gravity symmetry-protected topological phases of quantum matter with gapless charged boundary fermions, including the standard topological insulator in 3 + 1 dimensions.

  11. Using implicit attitudes of exercise importance to predict explicit exercise dependence symptoms and exercise behaviors.

    PubMed

    Forrest, Lauren N; Smith, April R; Fussner, Lauren M; Dodd, Dorian R; Clerkin, Elise M

    2016-01-01

    "Fast" (i.e., implicit) processing is relatively automatic; "slow" (i.e., explicit) processing is relatively controlled and can override automatic processing. These different processing types often produce different responses that uniquely predict behaviors. In the present study, we tested if explicit, self-reported symptoms of exercise dependence and an implicit association of exercise as important predicted exercise behaviors and change in problematic exercise attitudes. We assessed implicit attitudes of exercise importance and self-reported symptoms of exercise dependence at Time 1. Participants reported daily exercise behaviors for approximately one month, and then completed a Time 2 assessment of self-reported exercise dependence symptoms. Undergraduate males and females (Time 1, N = 93; Time 2, N = 74) tracked daily exercise behaviors for one month and completed an Implicit Association Test assessing implicit exercise importance and subscales of the Exercise Dependence Questionnaire (EDQ) assessing exercise dependence symptoms. Implicit attitudes of exercise importance and Time 1 EDQ scores predicted Time 2 EDQ scores. Further, implicit exercise importance and Time 1 EDQ scores predicted daily exercise intensity while Time 1 EDQ scores predicted the amount of days exercised. Implicit and explicit processing appear to uniquely predict exercise behaviors and attitudes. Given that different implicit and explicit processes may drive certain exercise factors (e.g., intensity and frequency, respectively), these behaviors may contribute to different aspects of exercise dependence.

  12. Using implicit attitudes of exercise importance to predict explicit exercise dependence symptoms and exercise behaviors

    PubMed Central

    Forrest, Lauren N.; Smith, April R.; Fussner, Lauren M.; Dodd, Dorian R.; Clerkin, Elise M.

    2015-01-01

    Objectives “Fast” (i.e., implicit) processing is relatively automatic; “slow” (i.e., explicit) processing is relatively controlled and can override automatic processing. These different processing types often produce different responses that uniquely predict behaviors. In the present study, we tested whether explicit, self-reported symptoms of exercise dependence and an implicit association of exercise as important predicted exercise behaviors and change in problematic exercise attitudes. Design We assessed implicit attitudes of exercise importance and self-reported symptoms of exercise dependence at Time 1. Participants reported daily exercise behaviors for approximately one month, and then completed a Time 2 assessment of self-reported exercise dependence symptoms. Method Undergraduate males and females (Time 1, N = 93; Time 2, N = 74) tracked daily exercise behaviors for one month and completed an Implicit Association Test assessing implicit exercise importance and subscales of the Exercise Dependence Questionnaire (EDQ) assessing exercise dependence symptoms. Results Implicit attitudes of exercise importance and Time 1 EDQ scores predicted Time 2 EDQ scores. Further, implicit exercise importance and Time 1 EDQ scores predicted daily exercise intensity, while Time 1 EDQ scores predicted the number of days exercised. Conclusion Implicit and explicit processing appear to uniquely predict exercise behaviors and attitudes. Given that different implicit and explicit processes may drive certain exercise factors (e.g., intensity and frequency, respectively), these behaviors may contribute to different aspects of exercise dependence. PMID:26195916

  13. 76 FR 35257 - Self-Regulatory Organizations; Chicago Stock Exchange, Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-16

    ... Change To Add a Rule Concerning the CHX Book Feed June 10, 2011. Pursuant to Section 19(b)(1) of the... Terms of Substance of the Proposed Rule Change CHX proposes to add Article 4, Rule 1 (Book Feed) to include an explicit description of the Exchange's Book Feed information service. The text of this proposed...

  14. A Force Balanced Fragmentation Method for ab Initio Molecular Dynamic Simulation of Protein.

    PubMed

    Xu, Mingyuan; Zhu, Tong; Zhang, John Z H

    2018-01-01

    A force-balanced generalized molecular fractionation with conjugate caps (FB-GMFCC) method is proposed for ab initio molecular dynamics simulation of proteins. In this approach, the energy of the protein is computed by a linear combination of the QM energies of individual residues and of molecular fragments that account for the two-body hydrogen-bond interaction between backbone peptides. The atomic forces on the capping H atoms are corrected to conserve the total force on the protein. Using this approach, an ab initio molecular dynamics simulation of an Ace-(Ala)9-NME linear peptide showed conservation of the total energy of the system throughout the simulation. Furthermore, a more robust 110 ps ab initio molecular dynamics simulation was performed for a protein with 56 residues and 862 atoms in explicit water. Compared with a classical force field, the ab initio molecular dynamics simulations gave a better description of the geometry of peptide bonds. Although further development is still needed, the current approach is highly efficient, trivially parallel, and can be applied to ab initio molecular dynamics simulation studies of large proteins.
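    The force-balancing idea, that fragment-wise forces should sum to the correct total so the dynamics conserves momentum, can be sketched generically (this is a minimal momentum-conserving correction, assumed for illustration, not the FB-GMFCC update itself):

```python
def balance_forces(forces):
    """Remove the spurious net force left over by fragment caps by
    distributing the residual equally over all atoms, so the total force
    on the system is exactly zero (generic momentum-conserving sketch)."""
    n = len(forces)
    net = [sum(f[d] for f in forces) / n for d in range(3)]
    return [[f[d] - net[d] for d in range(3)] for f in forces]
```

    Any per-atom force set run through this correction sums to zero in each Cartesian component, which prevents the center-of-mass drift that unbalanced cap forces would otherwise introduce over a long trajectory.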

  15. Levodopa enhances explicit new-word learning in healthy adults: a preliminary study.

    PubMed

    Shellshear, Leanne; MacDonald, Anna D; Mahoney, Jeffrey; Finch, Emma; McMahon, Katie; Silburn, Peter; Nathan, Pradeep J; Copland, David A

    2015-09-01

    While the role of dopamine in modulating executive function, working memory and associative learning has been established, its role in word learning and language processing more generally is not clear. This preliminary study investigated the impact of increased synaptic dopamine levels on new-word learning ability in healthy young adults using an explicit learning paradigm. A double-blind, placebo-controlled, between-groups design was used. Participants completed five learning sessions over 1 week, with levodopa or placebo administered at each session (five doses, 100 mg). Each session involved a study phase followed by a test phase. Test phases involved recall and recognition tests of the new (non-word) names previously paired with unfamiliar objects (half with semantic descriptions) during the study phase. The levodopa group showed superior recall accuracy for new words over the five learning sessions compared with the placebo group, and better recognition accuracy at a 1-month follow-up for words learnt with a semantic description. These findings suggest that dopamine boosts initial lexical acquisition and enhances longer-term consolidation of words learnt with semantic information, consistent with dopaminergic enhancement of semantic salience. Copyright © 2015 John Wiley & Sons, Ltd.

  16. COMOC: Three dimensional boundary region variant, programmer's manual

    NASA Technical Reports Server (NTRS)

    Orzechowski, J. A.; Baker, A. J.

    1974-01-01

    The three-dimensional boundary region variant of the COMOC computer program system solves the partial differential equation system governing certain three-dimensional flows of a viscous, heat conducting, multiple-species, compressible fluid including combustion. The solution is established in physical variables, using a finite element algorithm for the boundary value portion of the problem description in combination with an explicit marching technique for the initial value character. The computational lattice may be arbitrarily nonregular, and boundary condition constraints are readily applied. The theoretical foundation of the algorithm, a detailed description of the construction and operation of the program, and instructions on utilization of the many features of the code are presented.

  17. Trigeminal Neuralgia and Multiple Sclerosis: A Historical Perspective.

    PubMed

    Burkholder, David B; Koehler, Peter J; Boes, Christopher J

    2017-09-01

    Trigeminal neuralgia (TN) associated with multiple sclerosis (MS) was first described in Lehrbuch der Nervenkrankheiten für Ärzte und Studirende in 1894 by Hermann Oppenheim, including a pathologic description of trigeminal root entry zone demyelination. Early English-language translations in 1900 and 1904 did not state this association as explicitly as the German editions. The 1911 English-language translation described a more direct association. Other later descriptions were clinical, with few pathologic reports, often referencing Oppenheim but citing the 1905 German or 1911 English editions of Lehrbuch. This discrepancy may in part be due to differences in translation of the original text.

  18. Stochastic maps, continuous approximation, and stable distribution

    NASA Astrophysics Data System (ADS)

    Kessler, David A.; Burov, Stanislav

    2017-10-01

    A continuous approximation framework for general nonlinear stochastic as well as deterministic discrete maps is developed. For the stochastic map with uncorrelated Gaussian noise, by successively applying the Itô lemma, we obtain a Langevin-type equation. Specifically, we show how nonlinear maps give rise to a Langevin description that involves multiplicative noise. The multiplicative nature of the noise induces an additional effective force, not present in the absence of noise. We further exploit the continuum description and provide an explicit formula for the stable distribution of the stochastic map and conditions for its existence. Our results are in good agreement with numerical simulations of several maps.
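    The continuous-approximation idea can be illustrated numerically. The sketch below is not the authors' code: the map, noise level, and parameters are invented for illustration. A noisy nonlinear map is iterated, and its trajectory settles into a stationary distribution, as the Langevin picture predicts.

    ```python
    import random

    def iterate_map(f, x0, sigma, n_steps, seed=0):
        """Iterate the stochastic map x_{n+1} = f(x_n) + sigma * xi_n
        with i.i.d. Gaussian noise xi_n, returning the trajectory."""
        rng = random.Random(seed)
        x, traj = x0, []
        for _ in range(n_steps):
            x = f(x) + sigma * rng.gauss(0.0, 1.0)
            traj.append(x)
        return traj

    # Toy nonlinear map x -> x - a*x^3: deterministically it creeps toward the
    # fixed point at 0; with noise it settles into a stationary distribution
    # whose width balances the noise against the cubic restoring term.
    a, sigma = 0.1, 0.05
    traj = iterate_map(lambda x: x - a * x ** 3, 1.0, sigma, 40000)
    tail = traj[20000:]
    mean = sum(tail) / len(tail)
    var = sum((v - mean) ** 2 for v in tail) / len(tail)
    ```

    The empirical mean sits near the fixed point and the variance is finite, consistent with a stationary distribution of the associated Langevin equation in an effective quartic potential.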

  19. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    PubMed

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases relevant to enzymes. © 2016 Elsevier Inc. All rights reserved.
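    The Bennett acceptance ratio (BAR) at the core of the QM non-Boltzmann Bennett method can be sketched in a few lines. The following is a generic, classical BAR estimator run on synthetic Gaussian work data, not the authors' QM/MM implementation; sample sizes, the random seed, and the bisection bracket are arbitrary choices.

    ```python
    import math
    import random

    def bar_delta_f(w_fwd, w_rev, beta=1.0, tol=1e-6):
        """Bennett acceptance ratio with equal sample sizes: find dF such that
        sum_F fermi(beta*(w - dF)) = sum_R fermi(beta*(w + dF)), by bisection
        (the imbalance is monotonically increasing in dF)."""
        def fermi(x):
            return 1.0 / (1.0 + math.exp(x))
        def imbalance(dF):
            return (sum(fermi(beta * (w - dF)) for w in w_fwd)
                    - sum(fermi(beta * (w + dF)) for w in w_rev))
        lo, hi = -50.0, 50.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if imbalance(mid) < 0.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Synthetic forward/reverse work distributions chosen to obey the Crooks
    # relation exactly, with a known free-energy difference dF_true = 2 (beta = 1).
    rng = random.Random(42)
    dF_true, s = 2.0, 1.0
    w_fwd = [rng.gauss(dF_true + s, math.sqrt(2 * s)) for _ in range(10000)]
    w_rev = [rng.gauss(-dF_true + s, math.sqrt(2 * s)) for _ in range(10000)]
    dF = bar_delta_f(w_fwd, w_rev)
    ```

    Because the synthetic distributions satisfy Crooks' fluctuation theorem by construction, the estimate converges to the known answer as the sample count grows.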

  20. Real-Time Extended Interface Automata for Software Testing Cases Generation

    PubMed Central

    Yang, Shunkun; Xu, Jiaqi; Man, Tianlong; Liu, Bin

    2014-01-01

    Testing and verification of the interface between software components are particularly important due to the large number of complex interactions, which requires traditional modeling languages to overcome their shortcomings in describing temporal information and controlling software testing inputs. This paper presents the real-time extended interface automata (RTEIA), which add clearer and more detailed temporal information description through the application of time words. We also establish an input interface automaton for every input in order to solve the problems of input control and interface coverage flexibly when applied in the software testing field. Detailed definitions of the RTEIA and the testing case generation algorithm are provided in this paper. The feasibility and efficiency of this method have been verified in the testing of one real aircraft braking system. PMID:24892080

  1. Scalable and expressive medical terminologies.

    PubMed

    Mays, E; Weida, R; Dionne, R; Laker, M; White, B; Liang, C; Oles, F J

    1996-01-01

    The K-Rep system, based on description logic, is used to represent and reason with large and expressive controlled medical terminologies. Expressive concept descriptions incorporate semantically precise definitions composed using logical operators, together with important non-semantic information such as synonyms and codes. Examples are drawn from our experience with K-Rep in modeling the InterMed laboratory terminology and also developing a large clinical terminology now in production use at Kaiser-Permanente. System-level scalability of performance is achieved through an object-oriented database system which efficiently maps persistent memory to virtual memory. Equally important is conceptual scalability: the ability to support collaborative development, organization, and visualization of a substantial terminology as it evolves over time. K-Rep addresses this need by logically completing concept definitions and automatically classifying concepts in a taxonomy via subsumption inferences. The K-Rep system includes a general-purpose GUI environment for terminology development and browsing, a custom interface for formulary term maintenance, a C++ application program interface, and a distributed client-server mode which provides lightweight clients with efficient run-time access to K-Rep by means of a scripting language.
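    Subsumption-based classification, which K-Rep performs over a full description logic, can be illustrated with a toy model in which concepts are plain feature sets. The terms and helper names below are hypothetical and vastly simpler than K-Rep's reasoning.

    ```python
    def subsumes(general, specific):
        """In this toy model a concept is just a set of primitive features, and
        concept A subsumes concept B when A's features are a subset of B's."""
        return general <= specific

    def classify(definitions):
        """Automatic classification: attach each concept to its most specific
        subsumers, yielding the edges of the taxonomy."""
        parents = {}
        for name, feats in definitions.items():
            subs = [o for o, of in definitions.items()
                    if o != name and subsumes(of, feats)]
            # Keep only subsumers not themselves subsumed-by another subsumer.
            parents[name] = sorted(
                s for s in subs
                if not any(t != s and subsumes(definitions[s], definitions[t])
                           for t in subs)
            )
        return parents

    # A hypothetical three-term formulary fragment:
    terms = {
        "Drug": {"substance"},
        "Antibiotic": {"substance", "antimicrobial"},
        "Penicillin": {"substance", "antimicrobial", "beta-lactam"},
    }
    taxonomy = classify(terms)
    ```

    On this fragment the classifier places Penicillin under Antibiotic rather than directly under Drug, which is the behavior that makes a terminology self-organizing as it grows.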

  2. Dynamical Origin of Highly Efficient Energy Dissipation in Soft Magnetic Nanoparticles for Magnetic Hyperthermia Applications

    NASA Astrophysics Data System (ADS)

    Kim, Min-Kwan; Sim, Jaegun; Lee, Jae-Hyeok; Kim, Miyoung; Kim, Sang-Koog

    2018-05-01

    We explore robust magnetization-dynamic behaviors in soft magnetic nanoparticles in single-domain states and find their related high-efficiency energy-dissipation mechanism using finite-element micromagnetic simulations. We also make analytical derivations that provide deeper physical insights into the magnetization dynamics associated with Gilbert damping parameters under applications of time-varying rotating magnetic fields of different strengths and frequencies and static magnetic fields. Furthermore, we find that the mass-specific energy-dissipation rate at resonance in the steady-state regime changes remarkably with the strength of rotating fields and static fields for given damping constants. The associated magnetization dynamics are well interpreted with the help of the numerical calculation of analytically derived explicit forms. The high-efficiency energy-loss power can be obtained using soft magnetic nanoparticles in the single-domain state by tuning the frequency of rotating fields to the resonance frequency; what is more, it is controllable via the rotating and static field strengths for a given intrinsic damping constant. We provide a better and more efficient means of achieving specific loss power that can be implemented in magnetic hyperthermia applications.
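    The magnetization dynamics described in this record are governed by the Landau-Lifshitz-Gilbert (LLG) equation. A single-macrospin sketch under a rotating in-plane field is shown below (explicit Euler with renormalization; the field strengths, damping constant, and units are illustrative stand-ins, not the paper's finite-element setup).

    ```python
    import math

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def llg_step(m, H, alpha, gamma, dt):
        """One explicit step of the LLG equation for a unit macrospin,
        dm/dt = -gamma/(1+alpha^2) * (m x H + alpha * m x (m x H)),
        renormalized to preserve |m| = 1."""
        pre = -gamma / (1.0 + alpha * alpha)
        mxH = cross(m, H)
        mxmxH = cross(m, mxH)
        mn = [m[k] + dt * pre * (mxH[k] + alpha * mxmxH[k]) for k in range(3)]
        norm = math.sqrt(sum(v * v for v in mn))
        return tuple(v / norm for v in mn)

    # Circularly rotating in-plane field; the Gilbert term dissipates energy
    # at a rate proportional to alpha * |m x H|^2 (a standard LLG identity),
    # which is the quantity the paper tracks as specific loss power.
    alpha, gamma, dt, h, omega = 0.1, 1.0, 1e-3, 0.5, 0.3
    m = (0.0, 0.0, 1.0)
    dissipated = 0.0
    for n in range(50000):
        t = n * dt
        H = (h * math.cos(omega * t), h * math.sin(omega * t), 0.0)
        mxH = cross(m, H)
        dissipated += dt * alpha * gamma / (1 + alpha ** 2) * sum(v * v for v in mxH)
        m = llg_step(m, H, alpha, gamma, dt)
    ```

    The accumulated `dissipated` is strictly positive whenever the moment lags the rotating field, and sweeping `omega` toward the resonance frequency is what maximizes it in the hyperthermia context.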

  3. A fast multipole method combined with a reaction field for long-range electrostatics in molecular dynamics simulations: The effects of truncation on the properties of water

    NASA Astrophysics Data System (ADS)

    Mathias, Gerald; Egwolf, Bernhard; Nonella, Marco; Tavan, Paul

    2003-06-01

    We present a combination of the structure adapted multipole method with a reaction field (RF) correction for the efficient evaluation of electrostatic interactions in molecular dynamics simulations under periodic boundary conditions. The algorithm switches from an explicit electrostatics evaluation to a continuum description at the maximal distance that is consistent with the minimum image convention, and, thus, avoids the use of a periodic electrostatic potential. A physically motivated switching function enables charge clusters interacting with a given charge to smoothly move into the solvent continuum by passing through the spherical dielectric boundary surrounding this charge. This transition is complete as soon as the cluster has reached the so-called truncation radius Rc. The algorithm is used to examine the dependence of thermodynamic properties and correlation functions on Rc in the three point transferable intermolecular potential water model. Our test simulations on pure liquid water used either the RF correction or a straight cutoff and values of Rc ranging from 14 Å to 40 Å. In the RF setting, the thermodynamic properties and the correlation functions show convergence for Rc increasing towards 40 Å. In the straight cutoff case no such convergence is found. Here, in particular, the dipole-dipole correlation functions become completely artificial. The RF description of the long-range electrostatics is verified by comparison with the results of a particle-mesh Ewald simulation at identical conditions.

  4. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems.
BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes. Depending on the application, we find that different time stepping methods are optimal. Several of the time integration schemes exploit the block-based granularity of the grid structure. The framework and the adaptive algorithms enable physics based space weather modeling and even forecasting.

  5. A New Family of Compact High Order Coupled Time-Space Unconditionally Stable Vertical Advection Schemes

    NASA Astrophysics Data System (ADS)

    Lemarié, F.; Debreu, L.

    2016-02-01

    Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time-step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e. most of the grid points are integrated with small Courant numbers, compared to the Courant-Friedrichs-Lewy (CFL) condition, except at just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while being robust to changes in Courant number in terms of accuracy. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e. mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical CFL restriction, while avoiding numerical inaccuracies associated with standard implicit advection schemes (i.e. large sensitivity of the solution to Courant number, large phase delay, and possibly excess numerical damping with unphysical orientation). Most regional oceanic models have been successfully using fourth order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers while having a very reasonable computational cost. To our knowledge no unconditionally stable scheme with such high order accuracy in time and space has been presented so far in the literature. 
Furthermore, we show how those schemes can be made monotonic without compromising their stability properties.

  6. Momentum-Based Dynamics for Spacecraft with Chained Revolute Appendages

    NASA Technical Reports Server (NTRS)

    Queen, Steven; London, Ken; Gonzalez, Marcelo

    2005-01-01

    An efficient formulation is presented for a sub-class of multi-body dynamics problems that involve a six degree-of-freedom base body and a chain of N rigid linkages connected in series by single degree-of-freedom revolute joints. This general method is particularly well suited for simulations of spacecraft dynamics and control that include the modeling of an orbiting platform with or without internal degrees of freedom such as reaction wheels, dampers, and/or booms. In the present work, particular emphasis is placed on dynamic simulation of multi-linkage robotic manipulators. The differential equations of motion are explicitly given in terms of linear and angular momentum states, which can be evaluated recursively along a serial chain of linkages for an efficient real-time solution on par with the best of the O(N) methods.

  7. Neuronal couplings between retinal ganglion cells inferred by efficient inverse statistical physics methods

    PubMed Central

    Cocco, Simona; Leibler, Stanislas; Monasson, Rémi

    2009-01-01

    Complexity of neural systems often makes impracticable explicit measurements of all interactions between their constituents. Inverse statistical physics approaches, which infer effective couplings between neurons from their spiking activity, have been so far hindered by their computational complexity. Here, we present 2 complementary, computationally efficient inverse algorithms based on the Ising and “leaky integrate-and-fire” models. We apply those algorithms to reanalyze multielectrode recordings in the salamander retina in darkness and under random visual stimulus. We find strong positive couplings between nearby ganglion cells common to both stimuli, whereas long-range couplings appear under random stimulus only. The uncertainty on the inferred couplings due to limitations in the recordings (duration, small area covered on the retina) is discussed. Our methods will allow real-time evaluation of couplings for large assemblies of neurons. PMID:19666487
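    The inverse idea, inferring effective couplings from recorded correlations, can be shown with a deliberately minimal two-neuron, naive mean-field version. Everything here (synthetic data, closed-form 2x2 inverse) is a stand-in; the paper's Ising and integrate-and-fire algorithms are far more sophisticated.

    ```python
    import random

    def pairwise_correlations(spins):
        """Means and connected correlations from binary (+/-1) spike patterns."""
        n, T = len(spins[0]), len(spins)
        m = [sum(s[i] for s in spins) / T for i in range(n)]
        C = [[sum(s[i] * s[j] for s in spins) / T - m[i] * m[j]
              for j in range(n)] for i in range(n)]
        return m, C

    def mean_field_coupling(C):
        """Naive mean-field inverse Ising for two neurons: J = -(C^{-1})_{01},
        using the closed-form inverse of a 2x2 covariance matrix."""
        det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
        return C[0][1] / det

    # Synthetic data: two neurons that tend to fire together (positive coupling).
    rng = random.Random(1)
    data = []
    for _ in range(5000):
        a = 1 if rng.random() < 0.5 else -1
        b = a if rng.random() < 0.8 else -a
        data.append((a, b))
    m, C = pairwise_correlations(data)
    J = mean_field_coupling(C)
    ```

    The inferred coupling comes out positive, matching how the data were generated; the same correlation-inversion logic, suitably regularized, scales to the multielectrode recordings discussed in the abstract.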

  8. A solid reactor core thermal model for nuclear thermal rockets

    NASA Astrophysics Data System (ADS)

    Rider, William J.; Cappiello, Michael W.; Liles, Dennis R.

    1991-01-01

    A Helium/Hydrogen Cooled Reactor Analysis (HERA) computer code has been developed. HERA has the ability to model arbitrary geometries in three dimensions, which allows the user to easily analyze reactor cores constructed of prismatic graphite elements. The code accounts for heat generation in the fuel, control rods, and other structures; conduction and radiation across gaps; convection to the coolant; and a variety of boundary conditions. The numerical solution scheme has been optimized for vector computers, making long transient analyses economical. Time integration is either explicit or implicit, which allows the use of the model to accurately calculate both short- or long-term transients with an efficient use of computer time. Both the basic spatial and temporal integration schemes have been benchmarked against analytical solutions.
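    The explicit/implicit trade-off mentioned here is easy to demonstrate on a 1D conduction problem: forward Euler is cheap but stable only for small time steps, while backward Euler (a tridiagonal solve) stays stable at the large steps needed for long transients. This is a self-contained sketch, not HERA code; the grid size and step ratios are arbitrary.

    ```python
    def step_explicit(T, r):
        """Forward-Euler step of 1D diffusion (r = alpha*dt/dx^2), fixed ends.
        Stable only for r <= 0.5."""
        n = len(T)
        Tn = T[:]
        for i in range(1, n - 1):
            Tn[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
        return Tn

    def step_implicit(T, r):
        """Backward-Euler step via the Thomas tridiagonal algorithm.
        Unconditionally stable, so r may be large."""
        n = len(T)
        # Interior unknowns: (1+2r)*T_i - r*T_{i-1} - r*T_{i+1} = T_i_old
        a = [-r] * (n - 2)              # sub-diagonal
        b = [1.0 + 2.0 * r] * (n - 2)   # diagonal
        c = [-r] * (n - 2)              # super-diagonal
        d = list(T[1:-1])
        d[0] += r * T[0]                # fold in fixed boundary values
        d[-1] += r * T[-1]
        for i in range(1, n - 2):       # forward elimination
            m = a[i] / b[i - 1]
            b[i] -= m * c[i - 1]
            d[i] -= m * d[i - 1]
        x = [0.0] * (n - 2)             # back substitution
        x[-1] = d[-1] / b[-1]
        for i in range(n - 4, -1, -1):
            x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
        return [T[0]] + x + [T[-1]]

    # Hot spike in the middle of a rod with cold, fixed ends.
    T0 = [0.0] * 21
    T0[10] = 1.0
    Te = T0
    for _ in range(100):
        Te = step_explicit(Te, 0.4)    # within the explicit stability limit
    Ti = T0
    for _ in range(10):
        Ti = step_implicit(Ti, 5.0)    # much larger step, still stable
    ```

    Both runs decay the spike without oscillation, but the implicit run reaches the same physical time with far fewer (albeit costlier) steps, which is exactly the economy HERA exploits for long transients.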

  9. Cooperative scattering and radiation pressure force in dense atomic clouds

    NASA Astrophysics Data System (ADS)

    Bachelard, R.; Piovella, N.; Courteille, Ph. W.

    2011-07-01

    Atomic clouds prepared in “timed Dicke” states, i.e. states where the phase of the oscillating atomic dipole moments linearly varies along one direction of space, are efficient sources of superradiant light emission [Scully et al., Phys. Rev. Lett. 96, 010501 (2006)]. Here, we show that, in contrast to previous assertions, timed Dicke states are not the states automatically generated by incident laser light. In reality, the atoms act back on the driving field because of the finite refraction of the cloud. This leads to nonuniform phase shifts, which, at higher optical densities, dramatically alter the cooperative scattering properties, as we show by explicit calculation of macroscopic observables, such as the radiation pressure force.

  10. Dynamic modeling of parallel robots for computed-torque control implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Codourey, A.

    1998-12-01

    In recent years, increased interest in parallel robots has been observed. Their control with modern theory, such as the computed-torque method, has, however, been restrained, essentially due to the difficulty in establishing a simple dynamic model that can be calculated in real time. In this paper, a simple method based on the virtual work principle is proposed for modeling parallel robots. The mass matrix of the robot, needed for decoupling control strategies, does not explicitly appear in the formulation; however, it can be computed separately, based on kinetic energy considerations. The method is applied to the DELTA parallel robot, leading to a very efficient model that has been implemented in a real-time computed-torque control algorithm.

  11. Working memory moderates the effect of the integrative process of implicit and explicit autonomous motivation on academic achievement.

    PubMed

    Gareau, Alexandre; Gaudreau, Patrick

    2017-11-01

    In previous research, autonomous motivation (AM) has been found to be associated with school achievement, but the relation has been largely heterogeneous across studies. AM has typically been assessed with explicit measures such as self-report questionnaires. Recent self-determination theory (SDT) research has suggested that converging implicit and explicit measures can be taken to characterize the integrative process in SDT. Drawing from dual-process theories, we contended that explicit AM is likely to promote school achievement when it is part of an integrated cognitive system that combines easily accessible mental representations (i.e., implicit AM) and efficient executive functioning. A sample of 272 university students completed a questionnaire and a lexical decision task to assess their explicit and implicit AM, respectively, and they also completed working memory capacity measures. Grades were obtained at the end of the semester to examine the short-term prospective effect of implicit and explicit AM, working memory, and their interaction. Results of moderation analyses provided support for a synergistic interaction in which the association between explicit AM and academic achievement was positive and significant only for individuals with a high level of implicit AM. Moreover, working memory moderated the synergistic effect of explicit and implicit AM. Explicit AM was positively associated with academic achievement for students with average-to-high levels of working memory capacity, but only if their motivation operated synergistically with high implicit AM. The integrative process thus seems to hold better properties for achievement than the sole effect of explicit AM. Implications for SDT are outlined. © 2017 The British Psychological Society.

  12. Explicit and implicit learning: The case of computer programming

    NASA Astrophysics Data System (ADS)

    Mancy, Rebecca

    The central question of this thesis concerns the role of explicit and implicit learning in the acquisition of a complex skill, namely computer programming. This issue is explored with reference to information processing models of memory drawn from cognitive science. These models indicate that conscious information processing occurs in working memory where information is stored and manipulated online, but that this mode of processing shows serious limitations in terms of capacity or resources. Some information processing models also indicate information processing in the absence of conscious awareness through automation and implicit learning. It was hypothesised that students would demonstrate implicit and explicit knowledge and that both would contribute to their performance in programming. This hypothesis was investigated via two empirical studies. The first concentrated on temporary storage and online processing in working memory and the second on implicit and explicit knowledge. Storage and processing were tested using two tools: temporary storage capacity was measured using a digit span test; processing was investigated with a disembedding test. The results were used to calculate correlation coefficients with performance on programming examinations. Individual differences in temporary storage had only a small role in predicting programming performance and this factor was not a major determinant of success. Individual differences in disembedding were more strongly related to programming achievement. The second study used interviews to investigate the use of implicit and explicit knowledge. Data were analysed according to a grounded theory paradigm. The results indicated that students possessed implicit and explicit knowledge, but that the balance between the two varied between students and that the most successful students did not necessarily possess greater explicit knowledge. 
    The ways in which students described their knowledge led to the development of a framework which extends beyond the implicit-explicit dichotomy to four descriptive categories of knowledge along this dimension. Overall, the results demonstrated that explicit and implicit knowledge both contribute to the acquisition of programming skills. Suggestions are made for further research, and the results are discussed in the context of their implications for education.

  13. The Inverse Optimal Control Problem for a Three-Loop Missile Autopilot

    NASA Astrophysics Data System (ADS)

    Hwang, Donghyeok; Tahk, Min-Jea

    2018-04-01

    The performance characteristics of the autopilot must include a fast response to intercept a maneuvering target and reasonable robustness for system stability under the effects of un-modeled dynamics and noise. In the conventional approach, the three-loop autopilot design is handled through time constant, damping factor and open-loop crossover frequency to achieve the desired performance requirements. Note that general optimal control theory can also be used to obtain the same gains as the conventional approach. The key idea of using optimal control techniques for feedback gain design revolves around appropriate selection and interpretation of the performance index for which the control is optimal. This paper derives an explicit expression relating the weight parameters appearing in the quadratic performance index to design parameters such as open-loop crossover frequency, phase margin, damping factor, or time constant. Since not every choice of design parameters guarantees the existence of an optimal control law, explicit inequalities, named the optimality criteria for the three-loop autopilot (OC3L), are derived to identify all sets of design parameters for which the control law is optimal. Finally, based on OC3L, an efficient gain selection procedure is developed, in which the time constant is set as the design objective and open-loop crossover frequency and phase margin as design constraints. The effectiveness of the proposed technique is illustrated through numerical simulations.
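    The inverse-optimal-control idea, recovering performance-index weights from a chosen gain subject to an optimality criterion, can be reproduced exactly in the scalar case. This one-state toy is only a stand-in for the three-loop autopilot, and the OC3L inequalities themselves are not derived here.

    ```python
    import math

    def lqr_gain(a, b, q, r):
        """Scalar continuous-time LQR gain minimizing J = integral(q*x^2 + r*u^2)
        for the plant x' = a*x + b*u, via the scalar algebraic Riccati equation."""
        p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
        return b * p / r

    def inverse_lqr_weight(a, b, k, r=1.0):
        """Inverse problem: the state weight q for which a given gain k is optimal.
        Substituting p = r*k/b into the Riccati equation yields q = r*k*(k - 2a/b);
        q >= 0 (optimality for some admissible index) requires k >= 2a/b, an
        inequality analogous in spirit to the paper's OC3L criteria."""
        return r * k * (k - 2.0 * a / b)

    # Pick a stable plant and a desired gain, recover the weight, round-trip it.
    a, b, r = -1.0, 2.0, 1.0
    k = 3.0
    q = inverse_lqr_weight(a, b, k, r)   # = 3 * (3 + 1) = 12
    k_check = lqr_gain(a, b, q, r)
    ```

    The round trip (gain to weight to gain) closes exactly, which is the scalar analogue of interpreting a classically tuned gain set as the optimum of a quadratic index.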

  14. PRESEE: An MDL/MML Algorithm to Time-Series Stream Segmenting

    PubMed Central

    Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series stream is one of the most common data types in the data mining field. It is prevalent in fields such as stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous algorithms for segmenting mainly focused on the issue of ameliorating precision instead of paying much attention to the efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for the users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which could segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets by improving segmenting speed nearly ten times. The novelty of this algorithm is further demonstrated by the application of PRESEE in segmenting real-time stream datasets from ChinaFLUX sensor networks data stream. PMID:23956693

  15. PRESEE: an MDL/MML algorithm to time-series stream segmenting.

    PubMed

    Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series stream is one of the most common data types in the data mining field. It is prevalent in fields such as stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous algorithms for segmenting mainly focused on the issue of ameliorating precision instead of paying much attention to the efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for the users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which could segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets by improving segmenting speed nearly ten times. The novelty of this algorithm is further demonstrated by the application of PRESEE in segmenting real-time stream datasets from ChinaFLUX sensor networks data stream.
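    The MDL side of such segmenters can be sketched with a constant-mean segment model and dynamic programming over total description length. This is a generic batch MDL segmenter, not the PRESEE algorithm, which is parameter-free, streaming, and far more efficient than this O(n^3) toy.

    ```python
    import math

    def segment_cost(xs, i, j, pen):
        """Description length of segment xs[i:j] under a constant-mean model:
        a fixed per-segment penalty (breakpoint + parameter bits) plus the
        residual code length, ~ (n/2) * log2(residual variance)."""
        n = j - i
        mean = sum(xs[i:j]) / n
        var = sum((v - mean) ** 2 for v in xs[i:j]) / n + 1e-12
        return pen + 0.5 * n * math.log2(var)

    def mdl_segment(xs):
        """Breakpoints minimizing total description length, by dynamic programming."""
        n = len(xs)
        pen = math.log2(n)
        best = [0.0] + [float("inf")] * n
        back = [0] * (n + 1)
        for j in range(1, n + 1):
            for i in range(j):
                c = best[i] + segment_cost(xs, i, j, pen)
                if c < best[j]:
                    best[j], back[j] = c, i
        cuts, j = [], n
        while j > 0:
            cuts.append(back[j])
            j = back[j]
        return sorted(cuts)

    series = [0.0] * 20 + [5.0] * 20   # one obvious level shift at index 20
    cuts = mdl_segment(series)
    ```

    With no tuning parameters beyond the coding model itself, the description-length criterion places a single breakpoint exactly at the level shift; adding further cuts only increases the penalty term without shortening the residual code.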

  16. Comparing probabilistic and descriptive analyses of time–dose–toxicity relationship for determining no-observed-adverse-effect level in drug development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glatard, Anaïs; Berges, Aliénor; Sahota, Tarjinder

    The no-observed-adverse-effect level (NOAEL) of a drug defined from animal studies is important for inferring a maximal safe dose in human. However, several issues are associated with its concept, determination and application. It is confined to the actual doses used in the study; becomes lower with increasing sample size or dose levels; and reflects the risk level seen in the experiment rather than what may be relevant for human. We explored a pharmacometric approach in an attempt to address these issues. We first used simulation to examine the behaviour of the NOAEL values as determined by current common practice; and then fitted the probability of toxicity as a function of treatment duration and dose to data collected from all applicable toxicology studies of a test compound. Our investigation was in the context of an irreversible toxicity that is detected at the end of the study. Simulations illustrated NOAEL's dependency on experimental factors such as dose and sample size, as well as the underlying uncertainty. Modelling the probability as a continuous function of treatment duration and dose simultaneously to data from multiple studies allowed the estimation of the dose, along with its confidence interval, for a maximal risk level that might be deemed as acceptable for human. The model-based data integration also reconciled between-study inconsistency and explicitly provided maximised estimation confidence. Such alternative NOAEL determination method should be explored for its more efficient data use, more quantifiable insight to toxic doses, and the potential for more relevant animal-to-human translation. - Highlights: • Simulations revealed issues with NOAEL concept, determination and application. • Probabilistic modelling was used to address these issues. • The model integrated time-dose-toxicity data from multiple studies. • The approach uses data efficiently and may allow more meaningful human translation.
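    The modelling alternative advocated here, treating toxicity probability as a continuous function of dose and duration and then inverting it at an acceptable risk level, can be sketched with a logistic form. The coefficients below are invented for illustration and are not fitted to any study.

    ```python
    import math

    def tox_prob(dose, days, b0, b1, b2):
        """Probability of an irreversible toxicity as a continuous (logistic)
        function of dose and treatment duration."""
        z = b0 + b1 * math.log(dose) + b2 * math.log(days)
        return 1.0 / (1.0 + math.exp(-z))

    def dose_for_risk(p_max, days, b0, b1, b2):
        """Invert the fitted surface: the dose whose predicted risk equals p_max
        at a given treatment duration -- a model-based analogue of a NOAEL."""
        z = math.log(p_max / (1.0 - p_max))
        return math.exp((z - b0 - b2 * math.log(days)) / b1)

    # Hypothetical coefficients: risk increases with both dose and duration.
    b0, b1, b2 = -9.0, 1.5, 0.8
    dose_28d = dose_for_risk(0.01, 28, b0, b1, b2)   # 1% risk over 28 days
    dose_90d = dose_for_risk(0.01, 90, b0, b1, b2)   # 1% risk over 90 days
    ```

    Unlike a NOAEL read off the tested dose grid, the inverted dose is continuous in both dose and duration, and (as here) automatically decreases for longer treatment, with a confidence interval obtainable from the fitted coefficients' uncertainty.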

  17. Efficient and Unbiased Sampling of Biomolecular Systems in the Canonical Ensemble: A Review of Self-Guided Langevin Dynamics

    PubMed Central

    Wu, Xiongwu; Damjanovic, Ana; Brooks, Bernard R.

    2013-01-01

    This review provides a comprehensive description of the self-guided Langevin dynamics (SGLD) and the self-guided molecular dynamics (SGMD) methods and their applications. Example systems are included to provide guidance on optimal application of these methods in simulation studies. SGMD/SGLD has an enhanced ability to overcome energy barriers and accelerate rare events to affordable time scales. It has been demonstrated that, with moderate parameters, SGLD can routinely cross energy barriers of 20 kT at the rate at which molecular dynamics (MD) or Langevin dynamics (LD) crosses 10 kT barriers. The core of these methods is the use of local averages of forces and momenta in a direct manner that can preserve the canonical ensemble. The use of such local averages results in methods where low-frequency motion “borrows” energy from high-frequency degrees of freedom when a barrier is approached and then returns that excess energy after the barrier is crossed. This self-guiding effect also results in accelerated diffusion that enhances conformational sampling efficiency. The resulting SGLD ensemble deviates slightly from the canonical ensemble, and that deviation can be corrected with either an on-the-fly or a post-processing reweighting procedure that provides an excellent canonical ensemble for systems with a limited number of accelerated degrees of freedom. Since reweighting procedures are generally not size extensive, a newer method, SGLDfp, uses local averages of both momenta and forces to preserve the ensemble without reweighting. The SGLDfp approach is size extensive and can be used to accelerate low-frequency motion in large systems, or in systems with explicit solvent where solvent diffusion is also to be enhanced. Since these methods are direct and straightforward, they can be used in conjunction with many other sampling or free energy methods by simply replacing the integration of degrees of freedom that are normally sampled by MD or LD. PMID:23913991
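
    The self-guiding idea — adding a force proportional to a local time average of the momentum — can be sketched for a single particle in a 1D double well. This is a schematic illustration of the SGLD principle only, not the published CHARMM implementation; the guiding factor `lam` and averaging time `tL` are illustrative parameters.

```python
import math
import random

random.seed(7)

def force(x):
    # Double-well potential U(x) = (x^2 - 1)^2 with barrier height 1; F = -dU/dx
    return -4.0 * x * (x * x - 1.0)

def sgld_step(x, p, p_avg, dt, gamma, kT, tL, lam):
    """One schematic SGLD step: ordinary Langevin dynamics plus a guiding
    force proportional to the local time average of the momentum."""
    # update the local momentum average with memory time tL
    p_avg = (1.0 - dt / tL) * p_avg + (dt / tL) * p
    # Langevin force: deterministic + friction + thermal noise + guiding term
    noise = math.sqrt(2.0 * gamma * kT / dt) * random.gauss(0.0, 1.0)
    f = force(x) - gamma * p + noise + lam * gamma * p_avg
    p = p + dt * f
    x = x + dt * p
    return x, p, p_avg

# count barrier crossings of a 5 kT barrier (kT = 0.2, barrier height 1)
x, p, p_avg = -1.0, 0.0, 0.0
crossings = 0
prev_side = x > 0
for _ in range(200000):
    x, p, p_avg = sgld_step(x, p, p_avg, dt=0.01, gamma=1.0,
                            kT=0.2, tL=1.0, lam=1.0)
    side = x > 0
    if side != prev_side:
        crossings += 1
        prev_side = side
```

    Because the guiding force tracks only the slowly varying component of the momentum, it boosts low-frequency barrier-crossing motion while leaving fast thermal fluctuations (and hence the ensemble, to leading order) largely intact.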

  18. Sexting by high school students: an exploratory and descriptive study.

    PubMed

    Strassberg, Donald S; McKinnon, Ryan K; Sustaíta, Michael A; Rullo, Jordan

    2013-01-01

    Recently, a phenomenon known as sexting, defined here as the transfer of sexually explicit photos via cell phone, has received substantial attention in the U.S. national media. To determine the current and potential future impact of sexting, more information about the behavior and the attitudes and beliefs surrounding it must be gathered, particularly as it relates to sexting by minors. The present study was designed to provide preliminary information about this phenomenon. Participants were 606 high school students (representing 98% of the available student body) recruited from a single private high school in the southwestern U.S. Nearly 20% of all participants reported they had ever sent a sexually explicit image of themselves via cell phone while almost twice as many reported that they had ever received a sexually explicit picture via cell phone and, of these, over 25% indicated that they had forwarded such a picture to others. Of those reporting having sent a sexually explicit cell phone picture, over a third did so despite believing that there could be serious legal and other consequences attached to the behavior. Given the potential legal and psychological risks associated with sexting, it is important for adolescents, parents, school administrators, and even legislators and law enforcement to understand this behavior.

  19. A computationally efficient description of heterogeneous freezing: A simplified version of the Soccer ball model

    NASA Astrophysics Data System (ADS)

    Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank

    2014-01-01

    In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
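
    The core SBM computation can be sketched as follows. A droplet carries `n_sites` nucleation sites whose contact angles follow a Gaussian distribution; the probability that a site has not nucleated by time t decays with a classical-nucleation-theory rate. The rate prefactors `A` and `B` below are illustrative stand-ins for the temperature-dependent CNT expressions, and the quadrature is a plain trapezoidal rule, not the paper's implementation.

```python
import math

def f_geom(theta):
    """CNT geometric compatibility factor reducing the homogeneous barrier."""
    c = math.cos(theta)
    return (2.0 + c) * (1.0 - c) ** 2 / 4.0

def j_het(theta, A=1e-2, B=10.0):
    """Heterogeneous nucleation rate per site (illustrative A, B; in the SBM
    these follow from classical nucleation theory at a given temperature)."""
    return A * math.exp(-B * f_geom(theta))

def frozen_fraction(t, mu, sigma, n_sites, n_quad=400):
    """Fraction of frozen droplets after time t; contact angles ~ N(mu, sigma)
    truncated to [0, pi], averaged by simple trapezoidal quadrature."""
    num = 0.0
    den = 0.0
    for k in range(n_quad + 1):
        theta = math.pi * k / n_quad
        w = 0.5 if k in (0, n_quad) else 1.0
        g = math.exp(-0.5 * ((theta - mu) / sigma) ** 2)
        num += w * g * math.exp(-j_het(theta) * t)   # site survives until t
        den += w * g
    p_site_unfrozen = num / den
    return 1.0 - p_site_unfrozen ** n_sites

# frozen fraction grows smoothly with time, combining stochastic (rate-driven)
# and apparently singular (angle-distribution-driven) behavior
ff_1s = frozen_fraction(t=1.0, mu=1.0, sigma=0.2, n_sites=50)
ff_100s = frozen_fraction(t=100.0, mu=1.0, sigma=0.2, n_sites=50)
```

    Replacing the original Monte Carlo sampling of site angles with this deterministic quadrature is what makes such a formulation cheap enough for use inside cloud parcel models.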

  20. Conservative tightly-coupled simulations of stochastic multiscale systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taverniers, Søren; Pigarov, Alexander Y.; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu

    2016-05-15

    Multiphysics problems often involve components whose macroscopic dynamics is driven by microscopic random fluctuations. The fidelity of simulations of such systems depends on their ability to propagate these random fluctuations throughout a computational domain, including subdomains represented by deterministic solvers. When the constituent processes take place in nonoverlapping subdomains, system behavior can be modeled via a domain-decomposition approach that couples separate components at the interfaces between these subdomains. Its coupling algorithm has to maintain stable and efficient numerical time integration even at high noise strength. We propose a conservative domain-decomposition algorithm in which tight coupling is achieved by employing either Picard's or Newton's iterative method. Coupled diffusion equations, one of which has a Gaussian white-noise source term, provide a computational testbed for analysis of these two coupling strategies. Fully-converged (“implicit”) coupling with Newton's method typically outperforms its Picard counterpart, especially at high noise levels. This is because the number of Newton iterations scales linearly with the amplitude of the Gaussian noise, while the number of Picard iterations can scale superlinearly. At large time intervals between two subsequent inter-solver communications, the solution error for single-iteration (“explicit”) Picard coupling can be several orders of magnitude higher than that for implicit coupling. Increasing the explicit coupling's communication frequency reduces this difference, but the resulting increase in computational cost can make it less efficient than implicit coupling at similar levels of solution error, depending on the communication frequency of the latter and the noise strength. This trend carries over into higher dimensions, although at high noise strength explicit coupling may be the only computationally viable option.
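
    The tight (fully converged) Picard coupling can be illustrated on a drastically reduced analogue: two linearly coupled scalar ODEs, one noise-driven, each advanced by its "own solver" with backward Euler, iterated to a joint fixed point every step. This is a schematic sketch under assumed coefficients, not the paper's diffusion solver.

```python
import math
import random

random.seed(3)

def coupled_step(u, v, dt, a=1.0, b=0.5, c=2.0, noise_amp=1.0,
                 tol=1e-10, max_iter=200):
    """One backward-Euler step of two coupled 'solvers', tightly coupled by
    Picard (fixed-point) iteration: each solver is advanced with the other's
    most recent iterate until the pair stops changing (implicit coupling).
    Stopping after a single pass would be the 'explicit' variant."""
    xi = noise_amp * random.gauss(0.0, 1.0) / math.sqrt(dt)  # white-noise source
    u_new, v_new = u, v
    iters = 0
    while iters < max_iter:
        # du/dt = -a*u + c*(v - u);  dv/dt = -b*v + c*(u - v) + xi
        u_next = (u + dt * c * v_new) / (1.0 + dt * (a + c))
        v_next = (v + dt * (c * u_new + xi)) / (1.0 + dt * (b + c))
        delta = abs(u_next - u_new) + abs(v_next - v_new)
        u_new, v_new = u_next, v_next
        iters += 1
        if delta < tol:
            break
    return u_new, v_new, iters

u, v = 1.0, 0.0
total_iters = 0
for _ in range(1000):
    u, v, it = coupled_step(u, v, dt=0.05)
    total_iters += it
```

    The per-step iteration count is the quantity whose scaling with noise amplitude distinguishes Picard from Newton coupling in the paper; in this linear toy the Picard map contracts with factor roughly dt*c/(1 + dt*(a + c)), so convergence is fast.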

  1. Memory-Efficient Analysis of Dense Functional Connectomes.

    PubMed

    Loewe, Kristian; Donohue, Sarah E; Schoenfeld, Mircea A; Kruse, Rudolf; Borgelt, Christian

    2016-01-01

    The functioning of the human brain relies on the interplay and integration of numerous individual units within a complex network. To identify network configurations characteristic of specific cognitive tasks or mental illnesses, functional connectomes can be constructed based on the assessment of synchronous fMRI activity at separate brain sites, and then analyzed using graph-theoretical concepts. In most previous studies, relatively coarse parcellations of the brain were used to define regions as graphical nodes. Such parcellated connectomes are highly dependent on parcellation quality because regional and functional boundaries need to be relatively consistent for the results to be interpretable. In contrast, dense connectomes are not subject to this limitation, since the parcellation inherent to the data is used to define graphical nodes, also allowing for a more detailed spatial mapping of connectivity patterns. However, dense connectomes are associated with considerable computational demands in terms of both time and memory requirements. The memory required to explicitly store dense connectomes in main memory can render their analysis infeasible, especially when considering high-resolution data or analyses across multiple subjects or conditions. Here, we present an object-based matrix representation that achieves a very low memory footprint by computing matrix elements on demand instead of explicitly storing them. In doing so, memory required for a dense connectome is reduced to the amount needed to store the underlying time series data. Based on theoretical considerations and benchmarks, different matrix object implementations and additional programs (based on available Matlab functions and Matlab-based third-party software) are compared with regard to their computational efficiency. 
The matrix implementation based on on-demand computations has very low memory requirements, thus enabling analyses that would be otherwise infeasible to conduct due to insufficient memory. An open source software package containing the created programs is available for download.
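
    The on-demand idea can be sketched in a few lines: keep only the z-scored time series (n × T values) and compute any correlation entry when it is accessed, so memory stays at the cost of the data rather than the n × n connectome. This is a Python sketch of the concept; the published tools are Matlab-based and more elaborate.

```python
import math

class OnDemandCorrMatrix:
    """Dense functional connectome without storing the n-by-n matrix: only
    the z-scored time series are kept, and any correlation entry is computed
    on access (conceptual sketch of the object-based matrix representation)."""

    def __init__(self, series):
        # z-score each time series once, so corr(i, j) = dot(z_i, z_j) / T
        self.T = len(series[0])
        self.z = []
        for s in series:
            m = sum(s) / self.T
            sd = math.sqrt(sum((x - m) ** 2 for x in s) / self.T)
            self.z.append([(x - m) / sd for x in s])

    def __getitem__(self, ij):
        i, j = ij
        return sum(a * b for a, b in zip(self.z[i], self.z[j])) / self.T

    def row(self, i):
        """One connectome row, computed on the fly (e.g. for node-degree
        style graph measures that stream over rows)."""
        return [self[i, j] for j in range(len(self.z))]

# three toy "voxel" time series
ts = [[1.0, 2.0, 3.0, 4.0],
      [2.0, 4.0, 6.0, 8.0],      # perfectly correlated with the first
      [4.0, 3.0, 2.0, 1.0]]      # perfectly anti-correlated
C = OnDemandCorrMatrix(ts)
```

    Each entry costs O(T) time on access, which is the trade-off the benchmarks in the paper quantify: recomputation in exchange for a memory footprint equal to the underlying time series data.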

  2. Memory-Efficient Analysis of Dense Functional Connectomes

    PubMed Central

    Loewe, Kristian; Donohue, Sarah E.; Schoenfeld, Mircea A.; Kruse, Rudolf; Borgelt, Christian

    2016-01-01

    The functioning of the human brain relies on the interplay and integration of numerous individual units within a complex network. To identify network configurations characteristic of specific cognitive tasks or mental illnesses, functional connectomes can be constructed based on the assessment of synchronous fMRI activity at separate brain sites, and then analyzed using graph-theoretical concepts. In most previous studies, relatively coarse parcellations of the brain were used to define regions as graphical nodes. Such parcellated connectomes are highly dependent on parcellation quality because regional and functional boundaries need to be relatively consistent for the results to be interpretable. In contrast, dense connectomes are not subject to this limitation, since the parcellation inherent to the data is used to define graphical nodes, also allowing for a more detailed spatial mapping of connectivity patterns. However, dense connectomes are associated with considerable computational demands in terms of both time and memory requirements. The memory required to explicitly store dense connectomes in main memory can render their analysis infeasible, especially when considering high-resolution data or analyses across multiple subjects or conditions. Here, we present an object-based matrix representation that achieves a very low memory footprint by computing matrix elements on demand instead of explicitly storing them. In doing so, memory required for a dense connectome is reduced to the amount needed to store the underlying time series data. Based on theoretical considerations and benchmarks, different matrix object implementations and additional programs (based on available Matlab functions and Matlab-based third-party software) are compared with regard to their computational efficiency. 
The matrix implementation based on on-demand computations has very low memory requirements, thus enabling analyses that would be otherwise infeasible to conduct due to insufficient memory. An open source software package containing the created programs is available for download. PMID:27965565

  3. Graphics-processing-unit-accelerated finite-difference time-domain simulation of the interaction between ultrashort laser pulses and metal nanoparticles

    NASA Astrophysics Data System (ADS)

    Nikolskiy, V. P.; Stegailov, V. V.

    2018-01-01

    Metal nanoparticles (NPs) serve as important tools for many modern technologies. However, proper microscopic models of the interaction between ultrashort laser pulses and metal NPs are currently not well developed in many cases. One part of the problem is the description of the warm dense matter that is formed in NPs after intense irradiation. Another part is the description of the electromagnetic waves around NPs. Describing wave propagation requires the solution of Maxwell’s equations, and the finite-difference time-domain (FDTD) method is the classic approach for solving them. There are many commercial and free implementations of FDTD, including open source software that supports graphics processing unit (GPU) acceleration. In this report we present results of FDTD calculations for different cases of the interaction between ultrashort laser pulses and metal nanoparticles. Following our previous results, we analyze the efficiency of GPU acceleration of the FDTD algorithm.
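
    The FDTD core that GPU codes parallelize is a pair of interleaved curl updates on a staggered (Yee) grid. A minimal 1D vacuum version, in normalized units with the "magic" time step c·dt/dx = 1 and a soft Gaussian source standing in for the ultrashort pulse, looks like this; grid size and pulse parameters are arbitrary illustrations, and real NP simulations additionally need 3D grids and dispersive material models.

```python
import math

# Minimal 1D FDTD (Yee) scheme in vacuum, normalized so both update
# coefficients are 1 (magic time step). Each inner loop is embarrassingly
# parallel over grid cells, which is what GPU implementations exploit.
N = 400
STEPS = 150
ez = [0.0] * N   # electric field
hy = [0.0] * N   # magnetic field, staggered half a cell

for n in range(STEPS):
    # update H from the spatial difference (curl) of E
    for i in range(N - 1):
        hy[i] += ez[i + 1] - ez[i]
    # update E from the spatial difference (curl) of H
    for i in range(1, N):
        ez[i] += hy[i] - hy[i - 1]
    # soft source: an ultrashort Gaussian pulse injected at the grid center
    ez[N // 2] += math.exp(-((n - 30.0) / 10.0) ** 2)

peak = max(abs(e) for e in ez)   # pulse amplitude after propagation
```

    Because every cell update reads only its immediate neighbors, one GPU thread per cell (with the two half-steps as separate kernels) maps directly onto this structure.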

  4. Characterization of the Dispersal of Non-Domiciliated Triatoma dimidiata through the Selection of Spatially Explicit Models

    PubMed Central

    Barbu, Corentin; Dumonteil, Eric; Gourbière, Sébastien

    2010-01-01

    Background Chagas disease is a major parasitic disease in Latin America, prevented in part by vector control programs that reduce domestic populations of triatomines. However, the design of control strategies adapted to non-domiciliated vectors, such as Triatoma dimidiata, remains a challenge because it requires an accurate description of their spatio-temporal distributions, and a proper understanding of the underlying dispersal processes. Methodology/Principal Findings We combined extensive spatio-temporal data sets describing house infestation dynamics by T. dimidiata within a village, and spatially explicit population dynamics models in a selection model approach. Several models were implemented to provide theoretical predictions under different hypotheses on the origin of the dispersers and their dispersal characteristics, which we compared with the spatio-temporal pattern of infestation observed in the field. The best models fitted the dynamic of infestation described by a one year time-series, and also predicted with a very good accuracy the infestation process observed during a second replicate one year time-series. The parameterized models gave key insights into the dispersal of these vectors. i) About 55% of the triatomines infesting houses came from the peridomestic habitat, the rest corresponding to immigration from the sylvatic habitat, ii) dispersing triatomines were 5–15 times more attracted by houses than by peridomestic area, and iii) the moving individuals spread on average over rather small distances, typically 40–60 m/15 days. Conclusion/Significance Since these dispersal characteristics are associated with much higher abundance of insects in the periphery of the village, we discuss the possibility that spatially targeted interventions allow for optimizing the efficacy of vector control activities within villages. Such optimization could prove very useful in the context of limited resources devoted to vector control. PMID:20689823

  5. Benefits of explicit urban parameterization in regional climate modeling to study climate and city interactions

    NASA Astrophysics Data System (ADS)

    Daniel, M.; Lemonsu, Aude; Déqué, M.; Somot, S.; Alias, A.; Masson, V.

    2018-06-01

    Most climate models do not explicitly model urban areas and at best describe them as rock covers. Nonetheless, the very high resolutions now reached by regional climate models may justify and require a more realistic parameterization of surface exchanges between the urban canopy and the atmosphere. To quantify the potential impact of urbanization on the regional climate, and to evaluate the benefits of a detailed urban canopy model compared with a simpler approach, a sensitivity study was carried out over France at a 12-km horizontal resolution with the ALADIN-Climate regional model for the 1980-2009 period. Different descriptions of land use and urban modeling were compared, corresponding to an explicit modeling of cities with the urban canopy model TEB, a conventional and simpler approach representing urban areas as rocks, and a vegetated experiment in which cities are replaced by natural covers. A general evaluation of ALADIN-Climate was first done, which showed an overestimation of the incoming solar radiation but satisfying results in terms of precipitation and near-surface temperatures. The sensitivity analysis then highlighted that urban areas had a significant impact on modeled near-surface temperature. A further analysis of a few large French cities indicated that over the 30 years of simulation they all induced a warming effect both at daytime and nighttime, with values up to +1.5 °C for the city of Paris. The urban model also led to a regional warming extending beyond the urban area boundaries. Finally, the comparison to temperature observations available for the Paris area highlighted that the detailed urban canopy model improved the modeling of the urban heat island compared with a simpler approach.

  6. Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.

    PubMed

    Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence

    2012-12-01

    A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.

  7. The impact of parasitoid emergence time on host-parasitoid population dynamics.

    PubMed

    Cobbold, Christina A; Roland, Jens; Lewis, Mark A

    2009-01-01

    We investigate the effect of parasitoid phenology on host-parasitoid population cycles. Recent experimental research has shown that parasitized hosts can continue to interact with their unparasitized counterparts through competition. Parasitoid phenology, in particular the timing of emergence from the host, determines the duration of this competition. We construct a discrete-time host-parasitoid model in which within-generation dynamics associated with parasitoid timing is explicitly incorporated. We found that late-emerging parasitoids induce less severe, but more frequent, host outbreaks, independent of the choice of competition model. The competition experienced by the parasitized host reduces the parasitoids' numerical response to changes in host numbers, preventing the 'boom-bust' dynamics associated with more efficient parasitoids. We tested our findings against experimental data for the forest tent caterpillar (Malacosoma disstria Hübner) system, where a large number of consecutive years at a high host density is synonymous with severe forest damage.

  8. Collision cross sections of N2 by H+ impact at keV energies within time-dependent density-functional theory

    NASA Astrophysics Data System (ADS)

    Yu, W.; Gao, C.-Z.; Zhang, Y.; Zhang, F. S.; Hutton, R.; Zou, Y.; Wei, B.

    2018-03-01

    We calculate electron capture and ionization cross sections of N2 impacted by the H+ projectile at keV energies. To this end, we employ the time-dependent density-functional theory coupled nonadiabatically to molecular dynamics. To avoid the explicit treatment of the complex density matrix in the calculation of cross sections, we propose an approximate method based on the assumption of constant ionization rate over the period of the projectile passing the absorbing boundary. Our results agree reasonably well with experimental data and semi-empirical results within the measurement uncertainties in the considered energy range. The discrepancies are mainly attributed to the inadequate description of exchange-correlation functional and the crude approximation for constant ionization rate. Although the present approach does not predict the experiments quantitatively for collision energies below 10 keV, it is still helpful to calculate total cross sections of ion-molecule collisions within a certain energy range.

  9. Minimal-assumption inference from population-genomic data

    NASA Astrophysics Data System (ADS)

    Weissman, Daniel; Hallatschek, Oskar

    Samples of multiple complete genome sequences contain vast amounts of information about the evolutionary history of populations, much of it in the associations among polymorphisms at different loci. Current methods that take advantage of this linkage information rely on models of recombination and coalescence, limiting the sample sizes and populations that they can analyze. We introduce a method, Minimal-Assumption Genomic Inference of Coalescence (MAGIC), that reconstructs key features of the evolutionary history, including the distribution of coalescence times, by integrating information across genomic length scales without using an explicit model of recombination, demography or selection. Using simulated data, we show that MAGIC's performance is comparable to PSMC' on single diploid samples generated with standard coalescent and recombination models. More importantly, MAGIC can also analyze arbitrarily large samples and is robust to changes in the coalescent and recombination processes. Using MAGIC, we show that the inferred coalescence time histories of samples of multiple human genomes exhibit inconsistencies with a description in terms of an effective population size based on single-genome data.

  10. An infinite branching hierarchy of time-periodic solutions of the Benjamin-Ono equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkening, Jon

    2008-07-01

    We present a new representation of solutions of the Benjamin-Ono equation that are periodic in space and time. Up to an additive constant and a Galilean transformation, each of these solutions is a previously known, multi-periodic solution; however, the new representation unifies the subset of such solutions with a fixed spatial period and a continuously varying temporal period into a single network of smooth manifolds connected together by an infinite hierarchy of bifurcations. Our representation explicitly describes the evolution of the Fourier modes of the solution as well as the particle trajectories in a meromorphic representation of these solutions; therefore,more » we have also solved the problem of finding periodic solutions of the ordinary differential equation governing these particles, including a description of a bifurcation mechanism for adding or removing particles without destroying periodicity. We illustrate the types of bifurcation that occur with several examples, including degenerate bifurcations not predicted by linearization about traveling waves.« less

  11. Sensitivity of peptide conformational dynamics on clustering of a classical molecular dynamics trajectory

    NASA Astrophysics Data System (ADS)

    Jensen, Christian H.; Nerukh, Dmitry; Glen, Robert C.

    2008-03-01

    We investigate the sensitivity of a Markov model with states and transition probabilities obtained from clustering a molecular dynamics trajectory. We have examined a 500 ns molecular dynamics trajectory of the peptide valine-proline-alanine-leucine in explicit water. The sensitivity is quantified by varying the boundaries of the clusters and investigating the resulting variation in transition probabilities and the average transition time between states. In this way, we represent the effect of clustering using different clustering algorithms. It is found that in terms of the investigated quantities, the peptide dynamics described by the Markov model is sensitive to the clustering; in particular, the average transition times are found to vary up to 46%. Moreover, inclusion of nonphysical sparsely populated clusters can lead to serious errors of up to 814%. In the investigation, the time step used in the transition matrix is determined by the minimum time scale on which the system behaves approximately Markovian. This time step is found to be about 100 ps. It is concluded that the description of peptide dynamics with transition matrices should be performed with care, and that using standard clustering algorithms to obtain states and transition probabilities may not always produce reliable results.
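
    The quantities the study varies — transition probabilities and average transition times — are estimated from the clustered trajectory roughly as below. This is a generic sketch on a synthetic label sequence, not the authors' code; the frame spacing `dt` and the geometric-holding-time lifetime estimate are illustrative assumptions.

```python
def transition_matrix(labels, n_states, lag=1):
    """Row-stochastic transition probability estimate at a given lag
    from a sequence of cluster labels (one label per saved frame)."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(labels[:-lag], labels[lag:]):
        counts[a][b] += 1
    P = []
    for row in counts:
        tot = sum(row)
        P.append([c / tot if tot else 0.0 for c in row])
    return P

def mean_lifetime(P, i, dt):
    """Average dwell time in state i for a chain sampled every dt,
    assuming a geometric holding time: dt / (1 - P_ii)."""
    return dt / (1.0 - P[i][i])

# synthetic label sequence: long dwells in states 0 and 1 with rare switches
labels = ([0] * 50 + [1] * 30) * 20
P = transition_matrix(labels, n_states=2, lag=1)
tau0 = mean_lifetime(P, 0, dt=100.0)   # e.g. dt = 100 ps between frames
```

    Moving a cluster boundary relabels frames near the dividing surface, which perturbs the counts and hence P and the lifetimes; the paper's sensitivity numbers quantify exactly that effect, including the large errors introduced by sparsely populated clusters.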

  12. Accounting for immunoprecipitation efficiencies in the statistical analysis of ChIP-seq data.

    PubMed

    Bao, Yanchun; Vinciotti, Veronica; Wit, Ernst; 't Hoen, Peter A C

    2013-05-30

    ImmunoPrecipitation (IP) efficiencies may vary largely between different antibodies and between repeated experiments with the same antibody. These differences have a large impact on the quality of ChIP-seq data: a more efficient experiment will necessarily lead to a higher signal-to-background ratio, and therefore to an apparently larger number of enriched regions, compared to a less efficient experiment. In this paper, we show how IP efficiencies can be explicitly accounted for in the joint statistical modelling of ChIP-seq data. We fit a latent mixture model to eight experiments on two proteins, from two laboratories where different antibodies are used for the two proteins. We use the model parameters to estimate the efficiencies of individual experiments, and find that these are clearly different for the different laboratories, and amongst technical replicates from the same lab. When we account for ChIP efficiency, we find more regions bound in the more efficient experiments than in the less efficient ones, at the same false discovery rate. A priori knowledge of the same number of binding sites across experiments can also be included in the model for a more robust detection of differentially bound regions among two different proteins. We propose a statistical model for the detection of enriched and differentially bound regions from multiple ChIP-seq data sets. The framework that we present accounts explicitly for IP efficiencies in ChIP-seq data, and allows replicates and experiments from different proteins to be modelled jointly, rather than individually, leading to more robust biological conclusions.

  13. Scaling and efficiency determine the irreversible evolution of a market

    PubMed Central

    Baldovin, F.; Stella, A. L.

    2007-01-01

    In setting up a stochastic description of the time evolution of a financial index, the challenge consists in devising a model compatible with all stylized facts emerging from the analysis of financial time series and providing a reliable basis for simulating such series. Based on constraints imposed by market efficiency and on an inhomogeneous-time generalization of standard simple scaling, we propose an analytical model which accounts simultaneously for empirical results like the linear decorrelation of successive returns, the power law dependence on time of the volatility autocorrelation function, and the multiscaling associated to this dependence. In addition, our approach gives a justification and a quantitative assessment of the irreversible character of the index dynamics. This irreversibility enters as a key ingredient in a novel simulation strategy of index evolution which demonstrates the predictive potential of the model.

  14. A Collection of Technical Studies Completed for the Computer-Aided Acquisition and Logistic Support (CALS) Program Fiscal Year 1987. Volume 2

    DTIC Science & Technology

    1988-03-01

    short description of how the TOP-CGM profile differs from the full CGI standard. This change, along with explicitly pulling out the Conformance and...the CGI/CGEM segmentation model provides such capability. Goals and Design Criteria: The segment model of CGEM is to meet the following criteria:

  15. Composing Effective Environments for Concept Exploration in a Multi-Agency Context

    DTIC Science & Technology

    2011-01-01

    2004. Culture and Psychology. Belmont, CA: Wadsworth. McCown, M.M. 2005. Strategic Gaming for the National Security Community. Joint Force Quarterly...4, No 3 and systems, and policy amongst all of these organizations requires a form of interaction to make explicit the implicit and cultural ...descriptions should be reviewed by independent advisers to ensure they are understandable, comprehensive, and concise. Emotionally or culturally

  16. Learning to Mean in Spanish Writing: A Case Study of a Genre-Based Pedagogy for Standards-Based Writing Instruction

    ERIC Educational Resources Information Center

    Troyan, Francis J.

    2016-01-01

    This case study reports the results of a genre-based approach, which was used to explicitly teach the touristic landmark description to fourth-grade students of Spanish as a foreign language. The instructional model and unit of instruction were informed by the pedagogies of the Sydney School of Linguistics and an instructional model for…

  17. Static Wormholes in Vacuum and Gravity in Diverse Dimensions

    NASA Astrophysics Data System (ADS)

    Susskind, Leonard

    If the observable universe really is a hologram, then of what sort? Is it rich enough to keep track of an eternally inflating multiverse? What physical and mathematical principles underlie it? Is the hologram a lower dimensional quantum field theory, and if so, how many dimensions are explicit, and how many "emerge?" Does the Holographic description provide clues for defining a probability measure on the Landscape?

  18. Adaptability in linkage of soil carbon nutrient cycles - the SEAM model

    NASA Astrophysics Data System (ADS)

    Wutzler, Thomas; Zaehle, Sönke; Schrumpf, Marion; Ahrens, Bernhard; Reichstein, Markus

    2017-04-01

    In order to understand the coupling of carbon (C) and nitrogen (N) cycles, it is necessary to understand the C- and N-use efficiencies of microbial soil organic matter (SOM) decomposition. While important controls of those efficiencies by microbial community adaptations have been shown at the scale of a soil pore, an abstract, simplified representation of community adaptations is needed at the ecosystem scale. Therefore we developed the soil enzyme allocation model (SEAM), which takes a holistic, partly optimality-based approach to describe C and N dynamics at the spatial scale of an ecosystem and time scales of years and longer. We explicitly modelled community adaptation strategies of resource allocation to extracellular enzymes and enzyme limitations on SOM decomposition. Using SEAM, we explored whether alternative strategy hypotheses can have strong effects on SOM and inorganic N cycling. Results from prototypical simulations and a calibration to observations of an intensive pasture site showed that the so-called revenue enzyme allocation strategy was most viable. This strategy accounts for microbial adaptations to both the stoichiometry and the amount of different SOM resources, and supported the largest microbial biomass under a wide range of conditions. Predictions of the SEAM model were qualitatively similar to those of models explicitly representing competing microbial groups. With adaptive enzyme allocation under conditions of a high C/N ratio of litter inputs, N formerly locked in slowly degrading SOM pools was made accessible, whereas with high N inputs, N was sequestered in SOM and protected from leaching. The finding that adaptation in enzyme allocation changes the C- and N-use efficiencies of SOM decomposition implies that concepts of C-nutrient cycle interactions should account for the effects of such adaptations. This can be done using a holistic optimality approach.

  19. Progress in development of HEDP capabilities in FLASH's Unsplit Staggered Mesh MHD solver

    NASA Astrophysics Data System (ADS)

    Lee, D.; Xia, G.; Daley, C.; Dubey, A.; Gopal, S.; Graziani, C.; Lamb, D.; Weide, K.

    2011-11-01

    FLASH is a publicly available astrophysical community code designed to solve highly compressible multi-physics reactive flows. We are adding capabilities to FLASH that will make it an open science code for the academic HEDP community. Among many important numerical requirements, we consider the following features to be essential components of FLASH as an open HEDP toolset. First, we are developing computationally efficient time-stepping integration methods that overcome the stiffness that arises in the governing equations when disparate time scales are present. To this end, we are adding two time-stepping schemes to FLASH that relax the time-step limit when diffusive effects are present: an explicit super-time-stepping algorithm (Alexiades et al., Commun. Numer. Methods Eng. 12:31-42, 1996) and a Jacobian-free Newton-Krylov implicit formulation. These two methods will be integrated into the robust, efficient, and high-order accurate Unsplit Staggered Mesh (USM) MHD solver (Lee and Deane, J. Comput. Phys. 228, 2009). Second, we have implemented an anisotropic Spitzer-Braginskii conductivity model to treat thermal heat conduction along magnetic field lines. Finally, we are implementing the Biermann battery term to account for the spontaneous generation of magnetic fields in the presence of non-parallel temperature and density gradients.
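    The super-time-stepping idea can be illustrated on a 1D heat equation. This is a sketch, not FLASH code: the Chebyshev sub-step formula follows Alexiades et al. (1996), while the grid, damping parameter, and boundary treatment are illustrative assumptions.

    ```python
    import math

    def sts_substeps(dt_expl, n, nu=0.05):
        """Chebyshev sub-step sizes for one super-step (Alexiades et al. 1996).
        dt_expl: stable explicit step; n: sub-steps; nu: damping parameter."""
        return [dt_expl / ((nu - 1.0) * math.cos(math.pi * (2*j - 1) / (2*n))
                           + 1.0 + nu) for j in range(1, n + 1)]

    def diffuse_sts(u, kappa, dx, n_super, n_sub=5, nu=0.05):
        """Advance u_t = kappa * u_xx with super-time-stepping;
        end points are held fixed (Dirichlet boundaries)."""
        dt_expl = 0.5 * dx * dx / kappa          # explicit stability limit
        for _ in range(n_super):
            for tau in sts_substeps(dt_expl, n_sub, nu):
                lap = [u[i-1] - 2*u[i] + u[i+1] for i in range(1, len(u) - 1)]
                for i in range(1, len(u) - 1):
                    u[i] += tau * kappa * lap[i-1] / (dx * dx)
        return u
    ```

    Individual sub-steps exceed the explicit limit, but the composite super-step remains stable; the total time covered per super-step grows toward N^2 times the explicit step as the damping nu goes to zero, which is the source of the speed-up over plain explicit sub-cycling.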

  20. Interpolation by fast Wigner transform for rapid calculations of magnetic resonance spectra from powders.

    PubMed

    Stevensson, Baltzar; Edén, Mattias

    2011-03-28

    We introduce a novel interpolation strategy, based on nonequispaced fast transforms involving spherical harmonics or Wigner functions, for efficient calculations of powder spectra in (nuclear) magnetic resonance spectroscopy. The fast Wigner transform (FWT) interpolation minimizes the time-consuming calculation stages by sampling over a small number of Gaussian spherical quadrature (GSQ) orientations, which are then exploited to determine the spectral frequencies and amplitudes of a 10-70 times larger GSQ set. This yields almost the same orientational-averaging accuracy as if the expanded grid were utilized explicitly in a computation an order of magnitude slower. FWT interpolation is applicable to spectral simulations involving any time-independent or time-dependent, noncommuting spin Hamiltonian. We further show that merging FWT interpolation with the well-established ASG procedure of Alderman, Solum, and Grant [J. Chem. Phys. 84, 3717 (1986)] speeds up simulations by a factor of 2-7 relative to ASG alone (besides greatly extending its scope of application), and by one to two orders of magnitude compared to direct orientational averaging in the absence of interpolation. Demonstrations of efficient spectral simulations are given for several magic-angle spinning scenarios in NMR, encompassing half-integer quadrupolar spins and homonuclear dipolar-coupled 13C systems.
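    The slow baseline that FWT interpolation accelerates is direct orientational averaging: evaluating the anisotropic frequency at every crystallite orientation and binning the results into a spectrum. The sketch below does this for an axially symmetric chemical-shift pattern on a simple equispaced polar grid; the frequency expression and grid are illustrative assumptions, not the GSQ sets or spin Hamiltonians of the paper.

    ```python
    import math

    def powder_spectrum(delta, n_theta=200, n_bins=100):
        """Histogram of omega(theta) = delta * (3 cos^2 theta - 1) / 2 over a
        powder of orientations, weighted by sin(theta) for solid angle.
        Returns a normalized spectrum on the frequency range [-delta, delta]."""
        spectrum = [0.0] * n_bins
        lo, hi = -delta, delta
        for i in range(n_theta):
            theta = math.pi * (i + 0.5) / n_theta
            omega = delta * (3.0 * math.cos(theta) ** 2 - 1.0) / 2.0
            weight = math.sin(theta)           # orientational weight
            b = int((omega - lo) / (hi - lo) * n_bins)
            spectrum[min(max(b, 0), n_bins - 1)] += weight
        total = sum(spectrum)
        return [s / total for s in spectrum]
    ```

    The cost scales with the number of orientations times the per-orientation frequency evaluation; interpolation schemes such as FWT (or ASG) attack exactly this product by computing frequencies on a coarse set and inferring the dense set, rather than evaluating every orientation explicitly.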
