Sample records for time steps larger

  1. Biomechanical influences on balance recovery by stepping.

    PubMed

    Hsiao, E T; Robinovitch, S N

    1999-10-01

    Stepping represents a common means for balance recovery after a perturbation to upright posture. Yet little is known regarding the biomechanical factors which determine whether a step succeeds in preventing a fall. In the present study, we developed a simple pendulum-spring model of balance recovery by stepping, and used this to assess how step length and step contact time influence the effort (leg contact force) and feasibility of balance recovery by stepping. We then compared model predictions of step characteristics which minimize leg contact force to experimentally observed values over a range of perturbation strengths. At all perturbation levels, experimentally observed step execution times were higher than optimal, and step lengths were smaller than optimal. However, the predicted increase in leg contact force associated with these deviations was substantial only for large perturbations. Furthermore, increases in the strength of the perturbation caused subjects to take larger, quicker steps, which reduced their predicted leg contact force. We interpret these data to reflect young subjects' desire to minimize recovery effort, subject to neuromuscular constraints on step execution time and step length. Finally, our model predicts that successful balance recovery by stepping is governed by a coupling between step length, step execution time, and leg strength, so that the feasibility of balance recovery decreases unless declines in one capacity are offset by enhancements in the others. This suggests that one's risk for falls may be affected more by small but diffuse neuromuscular impairments than by larger impairment in a single motor capacity.
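The pendulum model in the abstract above can be sketched in miniature. This is a hedged illustration, not the authors' pendulum-spring model: a bare inverted pendulum, theta'' = (g/L) sin(theta), integrated until the lean reaches a step-trigger angle; all parameter values are hypothetical.

```python
import math

def time_to_step_trigger(theta0=0.05, g=9.81, L=1.0, trigger=0.35, dt=1e-4):
    """Integrate the inverted-pendulum equation theta'' = (g/L) sin(theta)
    from a small initial lean (semi-implicit Euler) and return the time at
    which the lean angle reaches a step-trigger threshold."""
    theta, omega, t = theta0, 0.0, 0.0
    while theta < trigger:
        omega += dt * (g / L) * math.sin(theta)
        theta += dt * omega
        t += dt
    return t
```

A larger initial perturbation leaves less time to organize a recovery step, which is one way to read the coupling between step execution time and feasibility discussed in the abstract.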

  2. Planning for Growth.

    ERIC Educational Resources Information Center

    Astle, Judy Hughes

    2001-01-01

    A summer camp expanded into year-round operation one step at a time. Initial steps included identifying the camp mission, history, and assets. Successive steps became larger and included expanding the program within the mission, increasing marketing efforts, developing natural resources, creating plans for maintenance and improvements, and…

  3. On improving the iterative convergence properties of an implicit approximate-factorization finite difference algorithm [considering transonic flow]

    NASA Technical Reports Server (NTRS)

    Desideri, J. A.; Steger, J. L.; Tannehill, J. C.

    1978-01-01

    The iterative convergence properties of an approximate-factorization implicit finite-difference algorithm are analyzed both theoretically and numerically. Modifications to the base algorithm were made to remove the inconsistency in the original implementation of artificial dissipation. In this way, the steady-state solution became independent of the time step, and much larger time steps could be used stably. To accelerate the iterative convergence, large time steps and a cyclic sequence of time steps were used. For a model transonic flow problem governed by the Euler equations, convergence was achieved with 10 times fewer time steps using the modified differencing scheme. A particular form of instability due to variable coefficients is also analyzed.
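The two properties claimed above can be illustrated on a scalar model problem. This is a minimal sketch, assuming a linear ODE stand-in rather than the Euler equations: backward Euler reaches the same steady state regardless of step size, and cycling in large time steps cuts the iteration count.

```python
def implicit_euler_steps(dts, a=1.0, b=2.0, u0=0.0, tol=1e-8):
    """March u' = -a*u + b to its steady state (u* = b/a) with implicit
    (backward) Euler, cycling through the time-step sequence `dts`;
    returns the iteration count and the converged value.  The steady
    state is independent of the step sizes used to reach it."""
    u_star = b / a
    u, n = u0, 0
    while abs(u - u_star) > tol:
        dt = dts[n % len(dts)]
        u = (u + dt * b) / (1.0 + dt * a)  # backward Euler update
        n += 1
    return n, u
```

With these illustrative numbers, a fixed small step needs roughly 200 iterations while the cyclic sequence converges in under 20, mirroring the order-of-magnitude gain reported in the abstract.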

  4. Optimal Runge-Kutta Schemes for High-order Spatial and Temporal Discretizations

    DTIC Science & Technology

    2015-06-01

    using larger time steps versus lower-order time integration with smaller time steps. In the present work, an attempt is made to generalize these... generality and because of interest in multi-speed and high Reynolds number, wall-bounded flow regimes, a dual-time framework is adopted in the present work...errors of general combinations of high-order spatial and temporal discretizations. Different Runge-Kutta time integrators are applied to central

  5. Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok K.; Ravindran, S. S.

    2017-01-01

    Fluid and thermal transients found in rocket propulsion systems, such as propellant feedline systems, are complex processes involving fast phases followed by slow phases. Their time-accurate computation therefore requires a short time step initially, followed by a much larger time step. Yet there are instances that involve fast-slow-fast phases. In this paper, we present a feedback-control-based adaptive time stepping algorithm and discuss its use in network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and by feedback. To demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chilldown in pipelines. Our comparison and validation demonstrate the accuracy and efficiency of this adaptive strategy.
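A minimal sketch of feedback-controlled adaptive time stepping, assuming a step-doubling error estimate and a standard proportional controller rather than the authors' network-flow algorithm; the test problem, tolerance, and clamp factors are all illustrative.

```python
import math

def integrate_adaptive(t_end=1.0, tol=1e-6, dt=1e-3):
    """Advance u' = f(t) with forward Euler and a feedback error controller:
    compare one full step against two half steps and grow or shrink dt so
    the estimated local error tracks `tol`."""
    f = lambda t: -2.0 * math.pi * math.sin(2.0 * math.pi * t)  # u = cos(2*pi*t)
    t, u = 0.0, 1.0
    accepted_dts = []
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        full = u + dt * f(t)
        half = u + 0.5 * dt * f(t) + 0.5 * dt * f(t + 0.5 * dt)
        err = abs(half - full)              # step-doubling error estimate
        if err <= tol:                      # accept the more accurate value
            u, t = half, t + dt
            accepted_dts.append(dt)
        fac = 0.9 * math.sqrt(tol / max(err, 1e-15))
        dt *= min(5.0, max(0.2, fac))       # feedback: adjust the step size
    return u, min(accepted_dts), max(accepted_dts)
```

The controller automatically takes small steps where the forcing changes quickly and stretches the step where it is smooth, which is the fast-phase/slow-phase behaviour the abstract describes.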

  6. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    DOE PAGES

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

    Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.
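The multi-time-step idea, a small step where stability or accuracy demands it and a large step elsewhere, can be sketched on a toy fast/slow system. This is not the peridynamic formulation, just an illustration of subcycling with made-up rates: the large step alone would be unstable for the fast variable.

```python
def subcycled_fast_slow(dt=0.1, n_sub=20, t_end=1.0):
    """Forward-Euler sketch of multi-time-step integration: the slow variable
    x' = -x advances with the macro step dt, while the fast variable
    y' = -100*(y - x) is subcycled with dt/n_sub (dt alone would be unstable
    for y, since forward Euler requires a step below 2/100 there)."""
    x, y, t = 1.0, 0.0, 0.0
    while t < t_end - 1e-12:
        x = x + dt * (-x)                 # slow subdomain: one large step
        h = dt / n_sub
        for _ in range(n_sub):            # fast subdomain: n_sub small steps
            y = y + h * (-100.0 * (y - x))
        t += dt
    return x, y
```

Here the fast variable is subcycled against the already-updated slow value, a simple stand-in for the interface exchange between subdomains.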

  7. Nonadiabatic Dynamics in Single-Electron Tunneling Devices with Time-Dependent Density-Functional Theory

    NASA Astrophysics Data System (ADS)

    Dittmann, Niklas; Splettstoesser, Janine; Helbig, Nicole

    2018-04-01

    We simulate the dynamics of a single-electron source, modeled as a quantum dot with on-site Coulomb interaction and tunnel coupling to an adjacent lead in time-dependent density-functional theory. Based on this system, we develop a time-nonlocal exchange-correlation potential by exploiting analogies with quantum-transport theory. The time nonlocality manifests itself in a dynamical potential step. We explicitly link the time evolution of the dynamical step to physical relaxation timescales of the electron dynamics. Finally, we discuss prospects for simulations of larger mesoscopic systems.

  8. Nonadiabatic Dynamics in Single-Electron Tunneling Devices with Time-Dependent Density-Functional Theory.

    PubMed

    Dittmann, Niklas; Splettstoesser, Janine; Helbig, Nicole

    2018-04-13

    We simulate the dynamics of a single-electron source, modeled as a quantum dot with on-site Coulomb interaction and tunnel coupling to an adjacent lead in time-dependent density-functional theory. Based on this system, we develop a time-nonlocal exchange-correlation potential by exploiting analogies with quantum-transport theory. The time nonlocality manifests itself in a dynamical potential step. We explicitly link the time evolution of the dynamical step to physical relaxation timescales of the electron dynamics. Finally, we discuss prospects for simulations of larger mesoscopic systems.

  9. Accelerating Time Integration for the Shallow Water Equations on the Sphere Using GPUs

    DOE PAGES

    Archibald, R.; Evans, K. J.; Salinger, A.

    2015-06-01

    The push towards larger and larger computational platforms has made it possible for climate simulations to resolve climate dynamics across multiple spatial and temporal scales. This direction in climate simulation has created a strong need to develop scalable timestepping methods capable of accelerating throughput on high performance computing. This study details the recent advances in the implementation of implicit time stepping of the spectral element dynamical core within the United States Department of Energy (DOE) Accelerated Climate Model for Energy (ACME) on graphical processing units (GPU) based machines. We demonstrate how solvers in the Trilinos project are interfaced with ACME and GPU kernels to increase computational speed of the residual calculations in the implicit time stepping method for the atmosphere dynamics. We demonstrate the optimization gains and data structure reorganization that facilitates the performance improvements.

  10. Impact of temporal resolution of inputs on hydrological model performance: An analysis based on 2400 flood events

    NASA Astrophysics Data System (ADS)

    Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-07-01

    Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.
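The aggregation of fine-time-step model outputs to coarser reference scales can be sketched directly. The Nash-Sutcliffe efficiency used below is a common hydrological skill score, assumed here as a stand-in for the paper's performance measure.

```python
def aggregate(series, k):
    """Mean-aggregate a fine-time-step series into blocks of k steps,
    discarding any incomplete trailing block."""
    usable = len(series) - len(series) % k
    return [sum(series[i:i + k]) / k for i in range(0, usable, k)]

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    is no better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den
```

Running a model at a 6-min step and scoring the hourly-aggregated output against hourly observations is then just `nse(obs_hourly, aggregate(sim_6min, 10))`.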

  11. Implicit time accurate simulation of unsteady flow

    NASA Astrophysics Data System (ADS)

    van Buuren, René; Kuerten, Hans; Geurts, Bernard J.

    2001-03-01

    Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution to compare with the implicit second-order Crank-Nicolson scheme was determined. The time step in the explicit scheme is restricted by both temporal accuracy as well as stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems that are closely related to a highly complex structure of the basins of attraction of the iterative method may occur.
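The pseudo-time iteration described above can be sketched on a scalar nonlinear ODE. This is a hedged illustration of solving each implicit Crank-Nicolson step by adding a pseudo-time derivative and relaxing to a pseudo-steady state, not the quasi-Newton/block Gauss-Seidel machinery of the paper.

```python
def crank_nicolson_pseudo_time(u0=1.0, dt=0.1, n_steps=10, dtau=0.5, tol=1e-12):
    """Crank-Nicolson for u' = -u**3; each implicit step is solved by
    iterating v <- v + dtau * residual(v) in pseudo-time until the
    nonlinear residual of the Crank-Nicolson equation vanishes."""
    f = lambda u: -u ** 3
    u = u0
    for _ in range(n_steps):
        v = u                              # pseudo-time iterate for u_{n+1}
        for _ in range(500):
            residual = u + 0.5 * dt * (f(u) + f(v)) - v
            if abs(residual) < tol:
                break
            v += dtau * residual           # pseudo-time relaxation update
        u = v
    return u
```

For this problem the exact solution at t = 1 is 1/sqrt(3), and the second-order implicit scheme recovers it closely even though each step is solved only by inner iteration.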

  12. Evaluation of the Actuator Line Model with coarse resolutions

    NASA Astrophysics Data System (ADS)

    Draper, M.; Usera, G.

    2015-06-01

    The aim of the present paper is to evaluate the Actuator Line Model (ALM) at spatial resolutions coarser than generally recommended, also using larger time steps. To accomplish this, the ALM has been implemented in the open source code caffa3d.MBRi and validated against experimental measurements from two wind tunnel campaigns (a stand-alone wind turbine and two wind turbines in line, cases A and B respectively), taking into account two spatial resolutions: R/8 and R/15 (R is the rotor radius). A sensitivity analysis in case A was performed in order to get some insight into the influence of the smearing factor (3D Gaussian distribution) and time step size on power and thrust, as well as on the wake, without applying a tip loss correction factor (TLCF), for one tip speed ratio (TSR). It is concluded that as the smearing factor becomes larger or the time step size becomes smaller, the power increases, but the velocity deficit is not much affected. From this analysis, a smearing factor was obtained in order to calculate precisely the power coefficient for that TSR without applying the TLCF. Results with this approach were compared with another simulation choosing a larger smearing factor and applying Prandtl's TLCF, for three values of TSR. It is found that applying the TLCF improves the power estimation and weakens the influence of the smearing factor. Finally, these two alternatives were tested in case B, confirming that conclusion.
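The smearing factor enters the ALM through a Gaussian projection of the actuator-line forces onto the flow grid. A one-dimensional sketch (the model's kernel is three-dimensional, and all values here are illustrative) checks that the smeared force sums back to the applied force:

```python
import math

def smeared_force(eps=2.0, h=0.5, extent=10.0):
    """Project a unit point force onto a 1-D grid with a Gaussian kernel,
    eta(d) = exp(-(d/eps)**2) / (eps*sqrt(pi)), as used (in 3-D form) by
    actuator-line methods; the discrete sum times the cell size should
    recover the total applied force."""
    n = int(extent / h)
    total = 0.0
    for i in range(-n, n + 1):
        d = i * h
        eta = math.exp(-(d / eps) ** 2) / (eps * math.sqrt(math.pi))
        total += eta * h
    return total
```

Because the kernel integrates to one, momentum is conserved regardless of the smearing width `eps`; what `eps` changes is how the force is distributed relative to the grid spacing, which is the sensitivity the paper studies.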

  13. Molecular dynamics based enhanced sampling of collective variables with very large time steps.

    PubMed

    Chen, Pei-Yang; Tuckerman, Mark E

    2018-01-14

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
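Standard multiple time stepping, the baseline the paper improves on with resonance-free isokinetic schemes, can be sketched with a reversible RESPA integrator on a single particle carrying a stiff "fast" spring and a weak "slow" spring; all parameters are illustrative.

```python
def respa(n_outer=1000, dt=0.05, n_inner=10, k_fast=100.0, k_slow=1.0):
    """Reversible RESPA (multiple time step) integration of one unit-mass
    particle: the slow force is applied with the outer step dt, the fast
    force with the inner step dt/n_inner, via nested velocity-Verlet."""
    x, p = 1.0, 0.0
    h = dt / n_inner
    for _ in range(n_outer):
        p += 0.5 * dt * (-k_slow * x)     # half kick, slow force
        for _ in range(n_inner):          # velocity Verlet on the fast force
            p += 0.5 * h * (-k_fast * x)
            x += h * p
            p += 0.5 * h * (-k_fast * x)
        p += 0.5 * dt * (-k_slow * x)     # half kick, slow force
    return x, p

def energy(x, p, k_fast=100.0, k_slow=1.0):
    return 0.5 * p * p + 0.5 * (k_fast + k_slow) * x * x
```

The scheme is symplectic, so energy stays bounded at this step size; pushing the outer step toward half the fast period triggers the resonance instability that limits standard RESPA and motivates the isokinetic methods above.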

  14. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    NASA Astrophysics Data System (ADS)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  15. Effect of Time and Temperature on Transformation Toughened Zirconias.

    DTIC Science & Technology

    1987-06-01

    room temperature. High temperature mechanical tests performed were stress rupture and stepped temperature stress rupture. The results of the tests...tetragonal precipitates will spontaneously transform to the monoclinic phase due to the lattice mismatch stress if they become larger than about 0.2 μm, with...specimens, including fast fracture and fracture toughness testing. High temperature testing consisting of stress rupture and stepped temperature stress

  16. Extension of a streamwise upwind algorithm to a moving grid system

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru; Goorjian, Peter M.; Guruswamy, Guru P.

    1990-01-01

    A new streamwise upwind algorithm was derived to compute unsteady flow fields with the use of a moving-grid system. The temporally nonconservative LU-ADI (lower-upper-factored, alternating-direction-implicit) method was applied for time-marching computations. A comparison of the temporally nonconservative method with a time-conservative implicit upwind method indicates that the solutions are insensitive to the conservative properties of the implicit solvers when practical time steps are used. Using this new method, computations were made for an oscillating wing at a transonic Mach number. The computed results confirm that the present upwind scheme captures the shock motion better than the central-difference scheme based on the Beam-Warming algorithm. The new upwind option of the code allows larger time steps and thus is more efficient, even though it requires slightly more computational time per time step than the central-difference option.

  17. Validity of the Instrumented Push and Release Test to Quantify Postural Responses in Persons With Multiple Sclerosis.

    PubMed

    El-Gohary, Mahmoud; Peterson, Daniel; Gera, Geetanjali; Horak, Fay B; Huisinga, Jessie M

    2017-07-01

    To test the validity of wearable inertial sensors to provide objective measures of postural stepping responses to the push and release clinical test in people with multiple sclerosis. Cross-sectional study. University medical center balance disorder laboratory. Total sample N=73; persons with multiple sclerosis (PwMS) n=52; healthy controls n=21. Stepping latency, time and number of steps required to reach stability, and initial step length were calculated using 3 inertial measurement units placed on participants' lumbar spine and feet. Correlations between inertial sensor measures and measures obtained from the laboratory-based systems were moderate to strong and statistically significant for all variables: time to release (r=.992), latency (r=.655), time to stability (r=.847), time of first heel strike (r=.665), number of steps (r=.825), and first step length (r=.592). Compared with healthy controls, PwMS demonstrated a longer time to stability and required a larger number of steps to reach stability. The instrumented push and release test is a valid measure of postural responses in PwMS and could be used as a clinical outcome measure for patient care decisions or for clinical trials aimed at improving postural control in PwMS.

  18. A Computational Approach to Increase Time Scales in Brownian Dynamics–Based Reaction-Diffusion Modeling

    PubMed Central

    Frazier, Zachary

    2012-01-01

    Particle-based Brownian dynamics simulations offer the opportunity to simulate not only the diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle-based reaction-diffusion methods are often hampered by the relatively small time step needed for an accurate description of the reaction-diffusion framework. Such small time steps often prevent simulation times that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm which detects potential particle collisions prior to a BD-based particle displacement and at the same time rigorously obeys the detailed balance rule of equilibrium reactions. We can show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction-diffusion modelling. PMID:22697237

  19. A comparison of artificial compressibility and fractional step methods for incompressible flow computations

    NASA Technical Reports Server (NTRS)

    Chan, Daniel C.; Darian, Armen; Sindir, Munir

    1992-01-01

    We have applied and compared the efficiency and accuracy of two commonly used numerical methods for the solution of the Navier-Stokes equations. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one can have the luxury of taking a large temporal integration step at the expense of higher memory requirement and larger operation counts per step. Meanwhile, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. The memory requirement and operation count per time step are low; however, the restriction on the size of the time marching step is more severe. To explore the strengths and weaknesses of these two methods, we used them for the computation of a two-dimensional driven cavity flow with Reynolds numbers of 100 and 1000, respectively. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161, were used. The computations were considered converged after the L2-norm of the change of the dependent variables in two consecutive time steps had fallen below 10^-5.
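The convergence criterion quoted at the end can be sketched directly: march a diffusion problem in pseudo-time and stop when the L2 norm of the change between consecutive steps falls below a tolerance (a 1-D stand-in for the driven-cavity computation; grid size and tolerance are illustrative).

```python
import math

def heat_to_steady(n=21, tol=1e-6):
    """March the 1-D heat equation u_t = u_xx with u(0)=0, u(1)=1 using
    explicit steps until the L2 norm of the change between two consecutive
    time steps falls below `tol`; the steady state is the linear profile."""
    dx = 1.0 / (n - 1)
    dt = 0.4 * dx * dx                    # explicit stability limit is 0.5*dx^2
    u = [0.0] * n
    u[-1] = 1.0
    while True:
        new = u[:]
        for i in range(1, n - 1):
            new[i] = u[i] + dt / (dx * dx) * (u[i + 1] - 2 * u[i] + u[i - 1])
        change = math.sqrt(sum((a - b) ** 2 for a, b in zip(new, u)))
        u = new
        if change < tol:
            return u
```

Note the caveat implicit in this test: a small step-to-step change only bounds the remaining distance to steady state through the slowest decay rate, so the tolerance must be chosen well below the accuracy actually required.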

  20. Step-height measurement with a low coherence interferometer using continuous wavelet transform

    NASA Astrophysics Data System (ADS)

    Jian, Zhang; Suzuki, Takamasa; Choi, Samuel; Sasaki, Osami

    2013-12-01

    With the development of electronic technology in recent years, electronic components have become increasingly miniaturized, and more accurate measurement methods have become indispensable. For nanometre-level measurement, the Michelson interferometer with a laser diode is widely used because it can measure an object accurately without contact. However, it cannot measure step heights larger than half a wavelength. In this study, we improve the conventional Michelson interferometer by using a superluminescent diode and a continuous wavelet transform, which detects the time that maximizes the amplitude of the interference signal. With this time, we can accurately measure the surface position of the object. The method used in this experiment measured a step height of 20 microns.
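The envelope-peak idea can be sketched with a simulated low-coherence signal. Gaussian-windowed quadrature demodulation at the carrier wavelength is used below as a single-scale stand-in for the continuous wavelet transform, and all optical parameters (wavelength, coherence length, sampling) are invented.

```python
import math

def envelope_peak(z0, lam=0.8, coher=2.0, dz=0.1, z_max=40.0):
    """Locate the coherence-envelope peak of a simulated low-coherence
    interference signal centred at z0: demodulate with a Gaussian-windowed
    complex exponential at the carrier wavelength and return the window
    position of maximum amplitude."""
    n = int(z_max / dz) + 1
    signal = [math.exp(-(((i * dz) - z0) / coher) ** 2)
              * math.cos(2 * math.pi * (i * dz - z0) / lam) for i in range(n)]
    best_z, best_a = 0.0, -1.0
    for j in range(n):                    # scan the analysis window
        zc = j * dz
        re = im = 0.0
        for i in range(n):
            w = math.exp(-(((i * dz) - zc) / coher) ** 2)
            ph = 2 * math.pi * i * dz / lam
            re += signal[i] * w * math.cos(ph)
            im += signal[i] * w * math.sin(ph)
        a = re * re + im * im             # demodulated envelope power
        if a > best_a:
            best_z, best_a = zc, a
    return best_z
```

Because the height is read from the envelope peak rather than from the fringe phase, the measurable step is not limited to half a wavelength, which is the point of moving from a laser diode to a low-coherence source.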

  1. Step-to-step spatiotemporal variables and ground reaction forces of intra-individual fastest sprinting in a single session.

    PubMed

    Nagahara, Ryu; Mizutani, Mirai; Matsuo, Akifumi; Kanehisa, Hiroaki; Fukunaga, Tetsuo

    2018-06-01

    We aimed to investigate the step-to-step spatiotemporal variables and ground reaction forces during the acceleration phase for characterising intra-individual fastest sprinting within a single session. Step-to-step spatiotemporal variables and ground reaction forces produced by 15 male athletes were measured over a 50-m distance during repeated (three to five) 60-m sprints using a long force platform system. Differences in measured variables between the fastest and slowest trials were examined at each step until the 22nd step using a magnitude-based inferences approach. There were possibly-most likely higher running speed and step frequency (2nd to 22nd steps) and shorter support time (all steps) in the fastest trial than in the slowest trial. Moreover, for the fastest trial there were likely-very likely greater mean propulsive force during the initial four steps and possibly-very likely larger mean net anterior-posterior force until the 17th step. The current results demonstrate that better sprinting performance within a single session is probably achieved by 1) a high step frequency (except the initial step) with short support time at all steps, 2) exerting a greater mean propulsive force during initial acceleration, and 3) producing a greater mean net anterior-posterior force during initial and middle acceleration.

  2. Manufacturing Steps for Commercial Production of Nano-Structure Capacitors Final Report CRADA No. TC02159.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbee, T. W.; Schena, D.

    This was a collaborative effort between Lawrence Livermore National Security, LLC, as manager and operator of Lawrence Livermore National Laboratory (LLNL), and TroyCap LLC to develop manufacturing steps for commercial production of nano-structure capacitors. The technical objective of this project was to demonstrate deposition rates of selected dielectric materials that are 2 to 5 times larger than is typical with current technology.

  3. Time-asymptotic solutions of the Navier-Stokes equation for free shear flows using an alternating-direction implicit method

    NASA Technical Reports Server (NTRS)

    Rudy, D. H.; Morris, D. J.

    1976-01-01

    An uncoupled time-asymptotic alternating-direction implicit method for solving the Navier-Stokes equations was tested on two laminar parallel mixing flows. A constant total temperature was assumed in order to eliminate the need to solve the full energy equation; consequently, static temperature was evaluated by using an algebraic relationship. For the mixing of two supersonic streams at a Reynolds number of 1,000, convergent solutions were obtained for a time step 5 times the maximum allowable size for an explicit method. The solution diverged for a time step 10 times the explicit limit. Improved convergence was obtained when upwind differencing was used for the convective terms. Larger time steps were not possible with either upwind differencing or the diagonally dominant scheme. Artificial viscosity was added to the continuity equation in order to eliminate divergence for the mixing of a subsonic stream with a supersonic stream at a Reynolds number of 1,000.

  4. A particle-in-cell method for the simulation of plasmas based on an unconditionally stable field solver

    DOE PAGES

    Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; ...

    2016-08-09

    Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction, limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.
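The CFL restriction that the proposed solver avoids can be demonstrated on the explicit side. The sketch below (an illustrative 1-D wave equation, not the paper's field solver) shows the classic leapfrog discretization staying bounded at Courant number 0.9 and blowing up at 1.1.

```python
def leapfrog_wave_max(courant, n=100, steps=200):
    """Advance the 1-D wave equation on a periodic grid with the explicit
    leapfrog scheme at Courant number C = c*dt/dx and return the largest
    amplitude seen; the scheme is stable only for C <= 1."""
    u_prev = [0.0] * n
    u_curr = [0.0] * n
    u_curr[n // 2] = 1.0                  # point disturbance
    u_prev[n // 2] = 1.0
    c2 = courant * courant
    peak = 1.0
    for _ in range(steps):
        u_next = [0.0] * n
        for i in range(n):
            lap = u_curr[(i + 1) % n] - 2 * u_curr[i] + u_curr[(i - 1) % n]
            u_next[i] = 2 * u_curr[i] - u_prev[i] + c2 * lap
        u_prev, u_curr = u_curr, u_next
        peak = max(peak, max(abs(v) for v in u_curr))
    return peak
```

An unconditionally stable implicit field solver removes exactly this dt/dx coupling, which is what lets the PIC method above take much larger time steps.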

  5. Analysis of real-time numerical integration methods applied to dynamic clamp experiments.

    PubMed

    Butera, Robert J; McCarthy, Maeve L

    2004-12-01

    Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
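The two update rules compared in the abstract look like this for a single gating variable. Note that this constant-coefficient case is the one where exponential Euler is exact; the paper's finding that EE degrades at large steps concerns the voltage-dependent setting, where x_inf and tau vary within a step.

```python
import math

def gate_updates(x0=0.0, x_inf=1.0, tau=5.0, dt=1.0, steps=10):
    """Compare forward Euler and exponential Euler (EE) updates for a gating
    variable obeying dx/dt = (x_inf - x)/tau; for constant x_inf and tau,
    EE reproduces the exact exponential relaxation."""
    xe = xee = x0
    for _ in range(steps):
        xe += dt * (x_inf - xe) / tau                      # forward Euler
        xee = x_inf + (xee - x_inf) * math.exp(-dt / tau)  # exponential Euler
    exact = x_inf + (x0 - x_inf) * math.exp(-steps * dt / tau)
    return xe, xee, exact
```

Forward Euler here carries the familiar O(dt) error; which method wins once the coefficients change within a step is exactly the question the error bounds in the paper address.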

  6. The iterative thermal emission method: A more implicit modification of IMC

    NASA Astrophysics Data System (ADS)

    Long, A. R.; Gentile, N. A.; Palmer, T. S.

    2014-11-01

    For over 40 years, the Implicit Monte Carlo (IMC) method has been used to solve challenging problems in thermal radiative transfer. These problems typically contain regions that are optically thick and diffusive, as a consequence of the high degree of ;pseudo-scattering; introduced to model the absorption and reemission of photons from a tightly-coupled, radiating material. IMC has several well-known features that could be improved: a) it can be prohibitively computationally expensive, b) it introduces statistical noise into the material and radiation temperatures, which may be problematic in multiphysics simulations, and c) under certain conditions, solutions can be nonphysical, in that they violate a maximum principle, where IMC-calculated temperatures can be greater than the maximum temperature used to drive the problem. We have developed a variant of IMC called iterative thermal emission IMC, which is designed to have a reduced parameter space in which the maximum principle is violated. ITE IMC is a more implicit version of IMC in that it uses the information obtained from a series of IMC photon histories to improve the estimate for the end of time step material temperature during a time step. A better estimate of the end of time step material temperature allows for a more implicit estimate of other temperature-dependent quantities: opacity, heat capacity, Fleck factor (probability that a photon absorbed during a time step is not reemitted) and the Planckian emission source. We have verified the ITE IMC method against 0-D and 1-D analytic solutions and problems from the literature. These results are compared with traditional IMC. We perform an infinite medium stability analysis of ITE IMC and show that it is slightly more numerically stable than traditional IMC. We find that significantly larger time steps can be used with ITE IMC without violating the maximum principle, especially in problems with non-linear material properties. 
The ITE IMC method does, however, yield solutions with larger variance because each sub-step uses a different Fleck factor (even at equilibrium).

  7. The iterative thermal emission method: A more implicit modification of IMC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, A.R., E-mail: arlong.ne@tamu.edu; Gentile, N.A.; Palmer, T.S.

    2014-11-15

    For over 40 years, the Implicit Monte Carlo (IMC) method has been used to solve challenging problems in thermal radiative transfer. These problems typically contain regions that are optically thick and diffusive, as a consequence of the high degree of “pseudo-scattering” introduced to model the absorption and reemission of photons from a tightly-coupled, radiating material. IMC has several well-known features that could be improved: a) it can be prohibitively computationally expensive, b) it introduces statistical noise into the material and radiation temperatures, which may be problematic in multiphysics simulations, and c) under certain conditions, solutions can be nonphysical, in that they violate a maximum principle, where IMC-calculated temperatures can be greater than the maximum temperature used to drive the problem. We have developed a variant of IMC called iterative thermal emission IMC (ITE IMC), which is designed to have a reduced parameter space in which the maximum principle is violated. ITE IMC is a more implicit version of IMC in that it uses the information obtained from a series of IMC photon histories to improve the estimate for the end-of-time-step material temperature during a time step. A better estimate of the end-of-time-step material temperature allows for a more implicit estimate of other temperature-dependent quantities: opacity, heat capacity, Fleck factor (the probability that a photon absorbed during a time step is not reemitted) and the Planckian emission source. We have verified the ITE IMC method against 0-D and 1-D analytic solutions and problems from the literature. These results are compared with traditional IMC. We perform an infinite medium stability analysis of ITE IMC and show that it is slightly more numerically stable than traditional IMC. We find that significantly larger time steps can be used with ITE IMC without violating the maximum principle, especially in problems with non-linear material properties.
The ITE IMC method does, however, yield solutions with larger variance because each sub-step uses a different Fleck factor (even at equilibrium).
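As a rough 0-D illustration of the iterative idea described in this record, the sketch below re-estimates the end-of-time-step material temperature and re-evaluates the temperature-dependent Fleck factor before committing the step. All constants, the T⁻³ opacity law, and the linearized energy balance are hypothetical stand-ins, not the paper's equations or its Monte Carlo machinery.

```python
import numpy as np

def fleck_factor(sigma, c, dt, beta):
    """Standard IMC Fleck factor f = 1/(1 + alpha*beta*c*sigma*dt), with alpha = 1."""
    return 1.0 / (1.0 + beta * c * sigma * dt)

def ite_step(T0, E_r, dt, c=3e10, a=0.01372, cv=1.0, sigma0=100.0, n_iter=4):
    """Advance the material temperature one step, iterating the end-of-step
    temperature used to evaluate the (illustrative) opacity and Fleck factor."""
    T = T0
    for _ in range(n_iter):
        sigma = sigma0 / T**3            # illustrative T^-3 opacity
        beta = 4.0 * a * T**3 / cv       # d(aT^4)/dT divided by cv
        f = fleck_factor(sigma, c, dt, beta)
        # Effective absorption minus reemission over the step (linearized)
        dE = f * c * sigma * (E_r - a * T0**4) * dt
        T = T0 + dE / cv                 # improved end-of-step estimate
    return T

T1 = ite_step(T0=1.0, E_r=0.05, dt=1e-11)
```

Each pass through the loop plays the role of one sub-step's worth of information: the better the end-of-step temperature estimate, the more implicit the evaluation of the temperature-dependent quantities becomes.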

  8. Development and Implementation of a Transport Method for the Transport and Reaction Simulation Engine (TaRSE) based on the Godunov-Mixed Finite Element Method

    USGS Publications Warehouse

    James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael

    2009-01-01

    A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique where the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed finite-element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential exists to develop algorithms that run several advective steps, and then one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is the most computationally expensive, schemes can be implemented that are more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation to the solution and with virtually no artificial diffusion.
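The time-splitting strategy described above can be sketched in a 1D toy setting: several explicit upwind (Godunov-type) advection sub-steps followed by one implicit (backward Euler) dispersion step spanning the same interval. The grid, coefficients, and dense linear solve below are illustrative choices, not the TaRSE implementation.

```python
import numpy as np

def split_step(c, u, D, dx, dt_adv, n_adv, A_impl):
    """Run n_adv explicit advection sub-steps, then one implicit
    dispersion step covering the same time interval."""
    for _ in range(n_adv):
        c = c - u * dt_adv / dx * (c - np.roll(c, 1))  # first-order upwind, u > 0
    return np.linalg.solve(A_impl, c)                  # backward Euler dispersion

n, dx, u, D = 100, 1.0, 1.0, 2.0
dt_adv = 0.5 * dx / u              # respects the advective CFL limit
n_adv = 4                          # dispersion step is n_adv times larger
dt_dis = n_adv * dt_adv

# Backward Euler matrix (I - dt*D*L) with a periodic Laplacian L
L = (np.roll(np.eye(n), 1, 0) - 2 * np.eye(n) + np.roll(np.eye(n), -1, 0)) / dx**2
A = np.eye(n) - dt_dis * D * L

c = np.zeros(n)
c[45:55] = 1.0                     # sharp initial front
mass0 = c.sum()
for _ in range(10):
    c = split_step(c, u, D, dx, dt_adv, n_adv, A)
```

Both halves conserve mass on the periodic grid, and because the upwind step is monotone and the implicit matrix is an M-matrix, the solution stays in [0, 1] despite the sharp front, mirroring the oscillation-free behavior the abstract claims.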

  9. Self-consistent predictor/corrector algorithms for stable and efficient integration of the time-dependent Kohn-Sham equation

    NASA Astrophysics Data System (ADS)

    Zhu, Ying; Herbert, John M.

    2018-01-01

    The "real time" formulation of time-dependent density functional theory (TDDFT) involves integration of the time-dependent Kohn-Sham (TDKS) equation in order to describe the time evolution of the electron density following a perturbation. This approach, which is complementary to the more traditional linear-response formulation of TDDFT, is more efficient for computation of broad-band spectra (including core-excited states) and for systems where the density of states is large. Integration of the TDKS equation is complicated by the time-dependent nature of the effective Hamiltonian, and we introduce several predictor/corrector algorithms to propagate the density matrix, one of which can be viewed as a self-consistent extension of the widely used modified-midpoint algorithm. The predictor/corrector algorithms facilitate larger time steps and are shown to be more efficient despite requiring more than one Fock build per time step, and furthermore can be used to detect a divergent simulation on-the-fly, which can then be halted or else the time step modified.
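A minimal sketch of a self-consistent midpoint step in the spirit described above, assuming a toy 2x2 density-dependent Hamiltonian in place of a real Fock build: the midpoint Hamiltonian is rebuilt and the step repeated until the propagated density matrix stops changing.

```python
import numpy as np

def H_of(rho):
    """Mock density-dependent Hamiltonian (illustrative, not a Fock build)."""
    H0 = np.array([[0.0, 0.2], [0.2, 1.0]])
    return H0 + 0.1 * np.real(rho[1, 1]) * np.diag([0.0, 1.0])

def propagate(rho, H, dt):
    """Unitary propagation rho -> U rho U^dagger with U = exp(-i H dt)."""
    w, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T
    return U @ rho @ U.conj().T

def sc_midpoint_step(rho, dt, tol=1e-10, max_iter=50):
    """Self-consistent modified midpoint: iterate the Hamiltonian built at the
    estimated midpoint density until the propagated density converges."""
    rho_new = propagate(rho, H_of(rho), dt)            # predictor
    for _ in range(max_iter):
        rho_mid = 0.5 * (rho + rho_new)
        rho_next = propagate(rho, H_of(rho_mid), dt)   # corrector
        if np.linalg.norm(rho_next - rho_new) < tol:
            return rho_next
        rho_new = rho_next
    return rho_new

rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)
for _ in range(100):
    rho = sc_midpoint_step(rho, dt=0.1)
```

Because every corrector pass is a single unitary propagation, trace, hermiticity, and purity are preserved exactly; a diverging corrector loop (iteration count hitting `max_iter`) is the kind of on-the-fly signal the abstract mentions for halting or shrinking the time step.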

  10. Simulation methods with extended stability for stiff biochemical Kinetics.

    PubMed

    Rué, Pau; Villà-Freixa, Jordi; Burrage, Kevin

    2010-08-11

    With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, tau, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where tau can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called tau-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as tau grows. In this paper we extend Poisson tau-leap methods to a general class of Runge-Kutta (RK) tau-leap methods. We show that with the proper selection of the coefficients, the variance of the extended tau-leap can be well-behaved, leading to significantly larger step sizes. The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original tau-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
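The plain Poisson tau-leap described above can be sketched for the simplest decay reaction A → ∅. The rate constant and step size are arbitrary illustrative values, and this shows only the basic leap, not the paper's Runge-Kutta extension.

```python
import numpy as np

rng = np.random.default_rng(1)

def tau_leap(x0, k, tau, n_steps):
    """Poisson tau-leap for A -> 0: fire all reactions in each leap at once."""
    x = x0
    for _ in range(n_steps):
        a = k * x                        # propensity of A -> 0
        fires = rng.poisson(a * tau)     # number of firings within the leap
        x = max(x - fires, 0)            # guard against negative populations
    return x

k, tau, t_end = 0.1, 0.5, 10.0
finals = [tau_leap(1000, k, tau, int(t_end / tau)) for _ in range(2000)]
mean_final = np.mean(finals)             # tracks x0*(1 - k*tau)^n ~ x0*exp(-k*t)
```

One Poisson draw per reaction channel per leap replaces the many single-firing events of the exact SSA; the discrete-time mean 1000·(1 − k·τ)²⁰ ≈ 358 sits close to, but slightly below, the exact 1000·e⁻¹ ≈ 368, illustrating the bias that grows with τ.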

  11. Implicit integration methods for dislocation dynamics

    DOE PAGES

    Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...

    2015-01-20

    In dislocation dynamics, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. This paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
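The kind of nonlinear solve required within each implicit time step can be sketched with Newton's method applied to a backward Euler step on a standard stiff test equation (a textbook model, not a dislocation force calculation):

```python
import numpy as np

lam = 1000.0  # stiffness parameter of the test problem y' = -lam*(y - cos(t))

def f(t, y):
    return -lam * (y - np.cos(t))

def dfdy(t, y):
    return -lam

def backward_euler_step(t, y, dt, tol=1e-12, max_iter=20):
    """Solve the implicit relation y_new = y + dt*f(t+dt, y_new) by Newton iteration."""
    y_new = y                                        # initial guess
    for _ in range(max_iter):
        g = y_new - y - dt * f(t + dt, y_new)        # residual of the implicit step
        dg = 1.0 - dt * dfdy(t + dt, y_new)          # its Jacobian
        step = g / dg
        y_new -= step
        if abs(step) < tol:
            break
    return y_new

t, y, dt = 0.0, 1.0, 0.05    # dt far above the explicit stability limit ~2/lam
for _ in range(200):
    y = backward_euler_step(t, y, dt)
    t += dt
```

Despite a step size 25 times the explicit stability limit, the solution tracks the slow manifold y ≈ cos(t); in a production implicit Runge-Kutta code the same Newton structure appears once per stage with a matrix Jacobian.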

  12. Asymmetry in Determinants of Running Speed During Curved Sprinting.

    PubMed

    Ishimura, Kazuhiro; Sakurai, Shinji

    2016-08-01

    This study investigates the potential asymmetries between inside and outside legs in determinants of curved running speed. To test these asymmetries, a deterministic model of curved running speed was constructed based on components of step length and frequency, including the distances and times of different step phases, takeoff speed and angle, velocities in different directions, and relative height of the runner's center of gravity. Eighteen athletes sprinted 60 m on the curved path of a 400-m track; trials were recorded using a motion-capture system. The variables were calculated following the deterministic model. The average speeds were identical between the 2 sides; however, the step length and frequency were asymmetric. In straight sprinting, there is a trade-off relationship between the step length and frequency; however, such a trade-off relationship was not observed in each step of curved sprinting in this study. Asymmetric vertical velocity at takeoff resulted in an asymmetric flight distance and time. The runners changed the running direction significantly during the outside foot stance because of the asymmetric centripetal force. Moreover, the outside leg had a larger tangential force and shorter stance time. These asymmetries between legs indicated the outside leg plays an important role in curved sprinting.

  13. Solar forcing of the stream flow of a continental scale South American river.

    PubMed

    Mauas, Pablo J D; Flamenco, Eduardo; Buccino, Andrea P

    2008-10-17

    Solar forcing on climate has been reported in several studies although the evidence so far remains inconclusive. Here, we analyze the stream flow of one of the largest rivers in the world, the Paraná in southeastern South America. For the last century, we find a strong correlation with the sunspot number on multidecadal time scales, with larger solar activity corresponding to larger stream flow. The correlation coefficient is r=0.78, significant at the 99% level. On shorter time scales we find a strong correlation with El Niño. These results are a step toward flood prediction, which might have great social and economic impacts.

  14. Molecular simulation of small Knudsen number flows

    NASA Astrophysics Data System (ADS)

    Fei, Fei; Fan, Jing

    2012-11-01

    The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Like the DSMC method, however, that approach suffers from statistical noise. To solve the problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint, and obtain the flow velocity and temperature through sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ~ 10⁻³-10⁻⁴ have been investigated. It is shown that the IP calculations are not only accurate, but also efficient because they make possible using a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.

  15. An exact and efficient first passage time algorithm for reaction-diffusion processes on a 2D-lattice

    NASA Astrophysics Data System (ADS)

    Bezzola, Andri; Bales, Benjamin B.; Alkire, Richard C.; Petzold, Linda R.

    2014-01-01

    We present an exact and efficient algorithm for reaction-diffusion-nucleation processes on a 2D-lattice. The algorithm makes use of first passage time (FPT) to replace the computationally intensive simulation of diffusion hops in KMC by larger jumps when particles are far away from step-edges or other particles. Our approach computes exact probability distributions of jump times and target locations in a closed-form formula, based on the eigenvectors and eigenvalues of the corresponding 1D transition matrix, maintaining atomic-scale resolution of resulting shapes of deposit islands. We have applied our method to three different test cases of electrodeposition: pure diffusional aggregation for large ranges of diffusivity rates and for simulation domain sizes of up to 4096×4096 sites, the effect of diffusivity on island shapes and sizes in combination with a KMC edge diffusion, and the calculation of an exclusion zone in front of a step-edge, confirming statistical equivalence to standard KMC simulations. The algorithm achieves significant speedup compared to standard KMC for cases where particles diffuse over long distances before nucleating with other particles or being captured by larger islands.
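The first-passage idea can be sketched in 1D: rather than simulating every hop of a symmetric random walk on sites 0..N, the exit side can be drawn in one shot from the exact exit probability P(exit at N | start at k) = k/N. This toy comparison (not the authors' 2D-lattice algorithm, which also samples exit times from the transition-matrix eigendecomposition) checks the statistical equivalence of the two routes.

```python
import numpy as np

rng = np.random.default_rng(0)

def exit_by_hops(k, N):
    """Simulate every diffusion hop until the walker leaves (0, N)."""
    while 0 < k < N:
        k += rng.choice((-1, 1))
    return k

def exit_by_fpt(k, N):
    """Jump straight to a boundary using the exact exit probability k/N."""
    return N if rng.random() < k / N else 0

N, k, trials = 10, 3, 10000
p_hops = np.mean([exit_by_hops(k, N) == N for _ in range(trials)])
p_fpt = np.mean([exit_by_fpt(k, N) == N for _ in range(trials)])
# Both estimate k/N = 0.3; the FPT route needs one draw instead of ~k*(N-k) hops.
```

The speedup grows with the distance to the nearest boundary, which is exactly the regime (particles far from step-edges or other particles) the abstract targets.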

  16. An exact and efficient first passage time algorithm for reaction–diffusion processes on a 2D-lattice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bezzola, Andri, E-mail: andri.bezzola@gmail.com; Bales, Benjamin B., E-mail: bbbales2@gmail.com; Alkire, Richard C., E-mail: r-alkire@uiuc.edu

    2014-01-01

    We present an exact and efficient algorithm for reaction–diffusion–nucleation processes on a 2D-lattice. The algorithm makes use of first passage time (FPT) to replace the computationally intensive simulation of diffusion hops in KMC by larger jumps when particles are far away from step-edges or other particles. Our approach computes exact probability distributions of jump times and target locations in a closed-form formula, based on the eigenvectors and eigenvalues of the corresponding 1D transition matrix, maintaining atomic-scale resolution of resulting shapes of deposit islands. We have applied our method to three different test cases of electrodeposition: pure diffusional aggregation for large ranges of diffusivity rates and for simulation domain sizes of up to 4096×4096 sites, the effect of diffusivity on island shapes and sizes in combination with a KMC edge diffusion, and the calculation of an exclusion zone in front of a step-edge, confirming statistical equivalence to standard KMC simulations. The algorithm achieves significant speedup compared to standard KMC for cases where particles diffuse over long distances before nucleating with other particles or being captured by larger islands.

  17. Muscle activities used by young and old adults when stepping to regain balance during a forward fall.

    PubMed

    Thelen, D G; Muriuki, M; James, J; Schultz, A B; Ashton-Miller, J A; Alexander, N B

    2000-04-01

    The current study was undertaken to determine if age-related differences in muscle activities might relate to older adults being significantly less able than young adults to recover balance during a forward fall. Fourteen young and twelve older healthy males were released from forward leans of various magnitudes and asked to regain standing balance by taking a single forward step. Myoelectric signals were recorded from 12 lower extremity muscles and processed to compare the muscle activation patterns of young and older adults. Young adults successfully recovered from significantly larger leans than older adults using a single step (32.2 degrees vs. 23.5 degrees ). Muscular latency times, the time between release and activity onset, ranged from 73 to 114 ms with no significant age-related differences in the shortest muscular latency times. The overall response muscular activation patterns were similar for young and older adults. However older adults were slower to deactivate three stance leg muscles and also demonstrated delays in activating the step leg hip flexors and knee extensors prior to and during the swing phase. In the forward fall paradigm studied, age-differences in balance recovery performance do not seem due to slowness in response onset but may relate to differences in muscle activation timing during the stepping movement.

  18. Stepped-to-dart Leaders in Cloud-to-ground Lightning

    NASA Astrophysics Data System (ADS)

    Stolzenburg, M.; Marshall, T. C.; Karunarathne, S.; Karunarathna, N.; Warner, T.; Orville, R. E.

    2013-12-01

    Using time-correlated high-speed video (50,000 frames per second) and fast electric field change (5 MegaSamples per second) data for lightning flashes in East-central Florida, we describe an apparently rare type of subsequent leader: a stepped leader that finds and follows a previously used channel. The observed 'stepped-to-dart leaders' occur in three natural negative ground flashes. Stepped-to-dart leader connection altitudes are 3.3, 1.6 and 0.7 km above ground in the three cases. Prior to the stepped-to-dart connection, the advancing leaders have properties typical of stepped leaders. After the connection, the behavior changes almost immediately (within 40-60 μs) to dart or dart-stepped leader, with larger amplitude E-change pulses and faster average propagation speeds. In this presentation, we will also describe the upward luminosity after the connection in the prior return stroke channel and in the stepped leader path, along with properties of the return strokes and other leaders in the three flashes.

  19. Variability of Anticipatory Postural Adjustments During Gait Initiation in Individuals With Parkinson Disease.

    PubMed

    Lin, Cheng-Chieh; Creath, Robert A; Rogers, Mark W

    2016-01-01

    In people with Parkinson disease (PD), difficulties with initiating stepping may be related to impairments of anticipatory postural adjustments (APAs). Increased variability in step length and step time has been observed in gait initiation in individuals with PD. In this study, we investigated whether the ability to generate consistent APAs during gait initiation is compromised in these individuals. Fifteen subjects with PD and 8 healthy control subjects were instructed to take rapid forward steps after a verbal cue. The changes in vertical force and ankle marker position were recorded via force platforms and a 3-dimensional motion capture system, respectively. Means, standard deviations, and coefficients of variation of both timing and magnitude of vertical force, as well as stepping variables, were calculated. During the postural phase of gait initiation the interval was longer and the force modulation was smaller in subjects with PD. Both the variability of timing and force modulation were larger in subjects with PD. Individuals with PD also had a longer time to complete the first step, but no significant differences were found for the variability of step time, length, and speed between groups. The increased variability of APAs during gait initiation in subjects with PD could affect posture-locomotion coupling and lead to start hesitation and even falls. Future studies are needed to investigate the effect of rehabilitation interventions on the variability of APAs during gait initiation in individuals with PD. Video abstract available for more insights from the authors (see Supplemental Digital Content 1, http://links.lww.com/JNPT/A119).

  20. On large time step TVD scheme for hyperbolic conservation laws and its efficiency evaluation

    NASA Astrophysics Data System (ADS)

    Qian, ZhanSen; Lee, Chun-Hian

    2012-08-01

    A large time step (LTS) TVD scheme originally proposed by Harten is modified and further developed in the present paper and applied to Euler equations in multidimensional problems. By first revealing the drawbacks of Harten's original LTS TVD scheme and reasoning about the occurrence of the spurious oscillations, a modified formulation of its characteristic transformation is proposed and a high resolution, strongly robust LTS TVD scheme is formulated. The modified scheme is proven to be capable of taking larger time steps than the original one. Following the modified strategy, the LTS TVD schemes for Yee's upwind TVD scheme and Yee-Roe-Davis's symmetric TVD scheme are constructed. The family of LTS schemes is then extended to multiple dimensions by a time-splitting procedure, and the associated boundary condition treatment suitable for the LTS scheme is also imposed. Numerical experiments on Sod's shock tube problem, inviscid flows over a NACA0012 airfoil and an ONERA M6 wing are performed to validate the developed schemes. Computational efficiencies for the respective schemes under different CFL numbers are also evaluated and compared. The results reveal that the improvement is sizable compared to the respective single time step schemes, especially for CFL numbers ranging from 1.0 to 4.0.

  1. Molecular dynamics at low time resolution.

    PubMed

    Faccioli, P

    2010-10-28

    The internal dynamics of macromolecular systems is characterized by widely separated time scales, ranging from fractions of a picosecond to nanoseconds. In ordinary molecular dynamics simulations, the elementary time step Δt used to integrate the equation of motion needs to be chosen much smaller than the shortest time scale in order not to cut off physical effects. We show that in systems obeying the overdamped Langevin equation, it is possible to systematically correct for such discretization errors. This is done by analytically averaging out the fast molecular dynamics which occurs at time scales smaller than Δt, using a renormalization group based technique. Such a procedure gives rise to a calculable, time-dependent correction to the diffusion coefficient. The resulting effective Langevin equation describes by construction the same long-time dynamics, but has a lower time resolution power; hence it can be integrated using larger time steps Δt. We illustrate and validate this method by studying the diffusion of a point-particle in a one-dimensional toy model and the denaturation of a protein.
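The flavor of a calculable correction that buys a larger time step can be illustrated with an overdamped Ornstein-Uhlenbeck process (our own elementary example, not the paper's renormalization-group derivation): the Euler-Maruyama scheme at coarse Δt has stationary variance D/(k(1 - kΔt/2)) instead of the exact D/k, and rescaling D by (1 - kΔt/2) restores the exact value.

```python
import numpy as np

rng = np.random.default_rng(2)

def em_variance(D, k, dt, n_steps=200000):
    """Stationary variance of Euler-Maruyama for dx = -k*x*dt + sqrt(2*D)*dW."""
    noise = np.sqrt(2 * D * dt) * rng.standard_normal(n_steps)
    x = 0.0
    acc = 0.0
    for xi in noise:
        x = (1.0 - k * dt) * x + xi
        acc += x * x
    return acc / n_steps

D, k, dt = 1.0, 1.0, 0.5                     # deliberately coarse time step
var_naive = em_variance(D, k, dt)            # biased: ~ D/(k*(1 - k*dt/2)) = 4/3
var_corrected = em_variance(D * (1 - k * dt / 2), k, dt)  # ~ exact D/k = 1
```

The rescaled diffusion coefficient reproduces the correct long-time statistics at a step size that would otherwise give a 33% variance error, which is the spirit of trading time resolution for a larger usable Δt.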

  2. High-Speed Photographic Study of Wave Propagation and Impact Damage in Transparent Laminates

    DTIC Science & Technology

    2008-04-01

    Indexed excerpts: a method generates optimized power diagrams for FEM analysis in two dimensions, removing short edges (and areas) so that a larger time step can be used in a subsequent FEM analysis. Figures depict a zone where most contacts have already failed, and the insufficiency of generic FEM approaches when a steel impactor hits an AlON laminate.

  3. Efficient variable time-stepping scheme for intense field-atom interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerjan, C.; Kosloff, R.

    1993-03-01

    The recently developed Residuum method [Tal-Ezer, Kosloff, and Cerjan, J. Comput. Phys. 100, 179 (1992)], a Krylov subspace technique with variable time-step integration for the solution of the time-dependent Schroedinger equation, is applied to the frequently used soft Coulomb potential in an intense laser field. This one-dimensional potential has asymptotic Coulomb dependence with a “softened” singularity at the origin; thus it models more realistic phenomena. Two of the more important quantities usually calculated in this idealized system are the photoelectron and harmonic photon generation spectra. These quantities are shown to be sensitive to the choice of a numerical integration scheme: some spectral features are incorrectly calculated or missing altogether. Furthermore, the Residuum method allows much larger grid spacings for equivalent or higher accuracy in addition to the advantages of variable time stepping. Finally, it is demonstrated that enhanced high-order harmonic generation accompanies intense field stabilization and that preparation of the atom in an intermediate Rydberg state leads to stabilization at much lower laser intensity.

  4. Quick foot placement adjustments during gait are less accurate in individuals with focal cerebellar lesions.

    PubMed

    Hoogkamer, Wouter; Potocanac, Zrinka; Van Calenbergh, Frank; Duysens, Jacques

    2017-10-01

    Online gait corrections are frequently used to restore gait stability and prevent falling. They require shorter response times than voluntary movements, which suggests that subcortical pathways contribute to the execution of online gait corrections. To evaluate the potential role of the cerebellum in these pathways we tested the hypotheses that online gait corrections would be less accurate in individuals with focal cerebellar damage than in neurologically intact controls and that this difference would be more pronounced for shorter available response times and for short step gait corrections. We projected virtual stepping stones on an instrumented treadmill while some of the approaching stepping stones were shifted forward or backward, requiring participants to adjust their foot placement. Varying the timing of those shifts allowed us to address the effect of available response time on foot placement error. In agreement with our hypothesis, individuals with focal cerebellar lesions were less accurate in adjusting their foot placement in reaction to suddenly shifted stepping stones than neurologically intact controls. However, the cerebellar lesion group's foot placement error did not increase more with decreasing available response distance or for short step versus long step adjustments compared to the control group. Furthermore, foot placement error for the non-shifting stepping stones was also larger in the cerebellar lesion group as compared to the control group. Consequently, the reduced ability to accurately adjust foot placement during walking in individuals with focal cerebellar lesions appears to be a general movement control deficit, which could contribute to increased fall risk. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Accelerating moderately stiff chemical kinetics in reactive-flow simulations using GPUs

    NASA Astrophysics Data System (ADS)

    Niemeyer, Kyle E.; Sung, Chih-Jen

    2014-01-01

    The chemical kinetics ODEs arising from operator-split reactive-flow simulations were solved on GPUs using explicit integration algorithms. Nonstiff chemical kinetics of a hydrogen oxidation mechanism (9 species and 38 irreversible reactions) were computed using the explicit fifth-order Runge-Kutta-Cash-Karp method, and the GPU-accelerated version performed faster than single- and six-core CPU versions by factors of 126 and 25, respectively, for 524,288 ODEs. Moderately stiff kinetics, represented with mechanisms for hydrogen/carbon-monoxide (13 species and 54 irreversible reactions) and methane (53 species and 634 irreversible reactions) oxidation, were computed using the stabilized explicit second-order Runge-Kutta-Chebyshev (RKC) algorithm. The GPU-based RKC implementation demonstrated an increase in performance of nearly 59 and 10 times, for problem sizes consisting of 262,144 ODEs and larger, than the single- and six-core CPU-based RKC algorithms using the hydrogen/carbon-monoxide mechanism. With the methane mechanism, RKC-GPU performed more than 65 and 11 times faster, for problem sizes consisting of 131,072 ODEs and larger, than the single- and six-core RKC-CPU versions, and up to 57 times faster than the six-core CPU-based implicit VODE algorithm on 65,536 ODEs. In the presence of more severe stiffness, such as ethylene oxidation (111 species and 1566 irreversible reactions), RKC-GPU performed more than 17 times faster than RKC-CPU on six cores for 32,768 ODEs and larger, and at best 4.5 times faster than VODE on six CPU cores for 65,536 ODEs. With a larger time step size, RKC-GPU performed at best 2.5 times slower than six-core VODE for 8192 ODEs and larger. Therefore, the need for developing new strategies for integrating stiff chemistry on GPUs was discussed.

  6. Effect of Tai Chi Training on Dual-Tasking Performance That Involves Stepping Down among Stroke Survivors: A Pilot Study.

    PubMed

    Chan, Wing-Nga; Tsang, William Wai-Nam

    2017-01-01

    Descending stairs demands attention and neuromuscular control, especially with dual-tasking. Studies have demonstrated that stroke often degrades a survivor's ability to descend stairs. Tai Chi has been shown to improve dual-tasking performance of healthy older adults, but no such study has been conducted in stroke survivors. This study investigated the effect of Tai Chi training on dual-tasking performance that involved stepping down and compared it with that of conventional exercise among stroke survivors. Subjects were randomized into Tai Chi (n = 9), conventional exercise (n = 8), and control (n = 9) groups. Those in the former two groups received 12 weeks of training. Assessments included an auditory Stroop test, a stepping-down test, and a dual-tasking test involving both simultaneously. They were evaluated before training (time-1), after training (time-2), and one month after training (time-3). The Tai Chi group showed significant improvement in the auditory Stroop test from time-1 to time-3, and the performance was significantly better than that of the conventional exercise group at time-3. No significant effect was found in the stepping-down task or in dual-tasking in the control group. These results suggest a beneficial effect of Tai Chi training on cognition among stroke survivors without compromising physical task performance in dual-tasking. The effect was greater than that of the conventional exercise group. Nevertheless, further research with a larger sample is warranted.

  7. A diffusive information preservation method for small Knudsen number flows

    NASA Astrophysics Data System (ADS)

    Fei, Fei; Fan, Jing

    2013-06-01

    The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Like the DSMC method, however, that approach suffers from statistical noise. To solve the problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint, and obtain the flow velocity and temperature through sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ~ 10⁻³-10⁻⁴ have been investigated. It is shown that the IP calculations are not only accurate, but also efficient because they make possible using a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.

  8. Inverting pump-probe spectroscopy for state tomography of excitonic systems.

    PubMed

    Hoyer, Stephan; Whaley, K Birgitta

    2013-04-28

    We propose a two-step protocol for inverting ultrafast spectroscopy experiments on a molecular aggregate to extract the time-evolution of the excited state density matrix. The first step is a deconvolution of the experimental signal to determine a pump-dependent response function. The second step inverts this response function to obtain the quantum state of the system, given a model for how the system evolves following the probe interaction. We demonstrate this inversion analytically and numerically for a dimer model system, and evaluate the feasibility of scaling it to larger molecular aggregates such as photosynthetic protein-pigment complexes. Our scheme provides a direct alternative to the approach of determining all Hamiltonian parameters and then simulating excited state dynamics.
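    The first step of the protocol, deconvolving the pulse envelope out of the measured signal, can be sketched in one dimension (a toy illustration with hypothetical pulse and response shapes; a real inversion would need regularization against measurement noise):

```python
import numpy as np

# Toy model: the measured signal is the (circular) convolution of a
# response function with the pulse envelope. Deconvolution by FFT
# division recovers the response when the pulse spectrum is
# well-conditioned; real data would require regularization.
n = 64
t = np.arange(n)
pulse = np.exp(-0.5 * (t - 8.0) ** 2)          # hypothetical pulse envelope
resp = np.exp(-t / 10.0) * np.cos(0.5 * t)     # hypothetical response function
signal = np.real(np.fft.ifft(np.fft.fft(pulse) * np.fft.fft(resp)))

recovered = np.real(np.fft.ifft(np.fft.fft(signal) / np.fft.fft(pulse)))
print(np.max(np.abs(recovered - resp)))        # small residual
```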

  9. Assessment of input-output properties and control of neuroprosthetic hand grasp.

    PubMed

    Hines, A E; Owens, N E; Crago, P E

    1992-06-01

    Three tests have been developed to evaluate rapidly and quantitatively the input-output properties and patient control of neuroprosthetic hand grasp. Each test utilizes a visual pursuit tracking task during which the subject controls the grasp force and grasp opening (position) of the hand. The first test characterizes the static input-output properties of the hand grasp, where the input is a slowly changing patient generated command signal and the outputs are grasp force and grasp opening. Nonlinearities and inappropriate slopes have been documented in these relationships, and in some instances the need for system retuning has been indicated. For each subject larger grasp forces were produced when grasping larger objects, and for some subjects the shapes of the relationships also varied with object size. The second test quantifies the ability of the subject to control the hand grasp outputs while tracking steps and ramps. Neuroprosthesis users had rms errors two to three times larger when tracking steps versus ramps, and had rms errors four to five times larger than normals when tracking ramps. The third test provides an estimate of the frequency response of the hand grasp system dynamics, from input and output data collected during a random tracking task. Transfer functions were estimated by spectral analysis after removal of the static input-output nonlinearities measured in the first test. The dynamics had low-pass filter characteristics with 3 dB cutoff frequencies from 1.0 to 1.4 Hz. The tests developed in this study provide a rapid evaluation of both the system and the user. They provide information to 1) help interpret subject performance of functional tasks, 2) evaluate the efficacy of system features such as closed-loop control, and 3) screen the neuroprosthesis to indicate the need for retuning.

  10. Trainer variability during step training after spinal cord injury: Implications for robotic gait-training device design.

    PubMed

    Galvez, Jose A; Budovitch, Amy; Harkema, Susan J; Reinkensmeyer, David J

    2011-01-01

    Robotic devices are being developed to automate repetitive aspects of walking retraining after neurological injuries, in part because they might improve the consistency and quality of training. However, it is unclear how inconsistent manual training actually is or whether stepping quality depends strongly on the trainers' manual skill. The objective of this study was to quantify trainer variability of manual skill during step training using body-weight support on a treadmill and assess factors of trainer skill. We attached a sensorized orthosis to one leg of each patient with spinal cord injury and measured the shank kinematics and forces exerted by different trainers during six training sessions. An expert trainer rated the trainers' skill level based on videotape recordings. Between-trainer force variability was substantial, about two times greater than within-trainer variability. Trainer skill rating correlated strongly with two gait features: better knee extension during stance and fewer episodes of toe dragging. Better knee extension correlated directly with larger knee horizontal assistance force, but better toe clearance did not correlate with larger ankle push-up force; rather, it correlated with better knee and hip extension. These results are useful to inform robotic gait-training design.

  11. Regional hydrologic response of loblolly pine to air temperature and precipitation changes

    Treesearch

    Steven G. McNulty; James M. Vose; Wayne T. Swank

    1997-01-01

    Large deviations in average annual air temperatures and total annual precipitation were observed across the Southern United States during the last 50 years, and these fluctuations could become even larger during the next century. The authors used PnET-IIS, a monthly time-step forest process model that uses soil, vegetation, and climate inputs to assess the influence of...

  12. A first-order k-space model for elastic wave propagation in heterogeneous media.

    PubMed

    Firouzi, K; Cox, B T; Treeby, B E; Saffari, N

    2012-09-01

    A pseudospectral model of linear elastic wave propagation is described based on the first order stress-velocity equations of elastodynamics. k-space adjustments to the spectral gradient calculations are derived from the dyadic Green's function solution to the second-order elastic wave equation and used to (a) ensure that the solution is exact for homogeneous wave propagation at time steps of arbitrarily large size, and (b) allow larger time steps without loss of accuracy in heterogeneous media. The formulation in k-space allows the wavefield to be split easily into compressional and shear parts. A perfectly matched layer (PML) absorbing boundary condition was developed to effectively impose a radiation condition on the wavefield. The staggered grid, which is essential for accurate simulations, is described, along with other practical details of the implementation. The model is verified through comparison with exact solutions for canonical examples, and further examples are given to show the efficiency of the method for practical problems. The efficiency of the model is by virtue of the reduced points-per-wavelength requirement, the use of the fast Fourier transform (FFT) to calculate the gradients in k space, and the larger time steps made possible by the k-space adjustments.
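    The k-space adjustment can be sketched in one dimension for the second-order scalar wave equation, a simplification of the elastic case (grid size, wavenumber, and time step below are illustrative choices, not values from the paper). Scaling the spectral operator by sinc²(c·k·Δt/2) makes the leapfrog scheme exact in a homogeneous medium at any time step size:

```python
import numpy as np

# 1-D homogeneous scalar wave equation u_tt = c^2 u_xx, periodic domain.
# Plain pseudospectral leapfrog applies -c^2 k^2 in Fourier space; the
# k-space method scales it by sinc^2(c k dt / 2), which makes the scheme
# exact for homogeneous media regardless of the time step.
N, L, c = 64, 1.0, 1.0
dt = 0.3                                   # far above a typical CFL limit dx/c
x = np.arange(N) * L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

kappa = np.sinc(c * k * dt / 2 / np.pi)    # np.sinc(z) = sin(pi z)/(pi z)
op = -c**2 * k**2 * kappa**2               # k-space corrected operator

k0 = 2 * np.pi / L                         # single standing-wave mode
u_prev = np.sin(k0 * x) * np.cos(-c * k0 * dt)  # exact solution at t = -dt
u = np.sin(k0 * x)                              # exact solution at t = 0

nsteps = 20
for _ in range(nsteps):
    lap = np.real(np.fft.ifft(op * np.fft.fft(u)))
    u, u_prev = 2 * u - u_prev + dt**2 * lap, u

exact = np.sin(k0 * x) * np.cos(c * k0 * nsteps * dt)
print(np.max(np.abs(u - exact)))           # error at roundoff level
```

    For a single Fourier mode the corrected update reduces to the exact recursion u(t+dt) = 2·cos(c·k·dt)·u(t) − u(t−dt), which is why the scheme stays accurate at time steps far beyond the usual stability limit.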

  13. Design and characterization of an irradiation facility with real-time monitoring

    NASA Astrophysics Data System (ADS)

    Braisted, Jonathan David

    Radiation causes performance degradation in electronics by inducing atomic displacements and ionizations. While radiation hardened components are available, non-radiation hardened electronics can be preferable because they are generally more compact, require less power, and are less expensive than radiation tolerant equivalents. It is therefore important to characterize the performance of electronics, both hardened and non-hardened, to prevent costly system or mission failures. Radiation effects tests for electronics generally involve a handful of step irradiations, leading to poorly-resolved data. Step irradiations also introduce uncertainties in electrical measurements due to temperature annealing effects. This effect may be intensified if the time between exposure and measurement is significant. Induced activity in test samples also complicates data collection of step irradiated test samples. The University of Texas at Austin operates a 1.1 MW Mark II TRIGA research reactor. An in-core irradiation facility for radiation effects testing with a real-time monitoring capability has been designed for the UT TRIGA reactor. The facility is larger than any currently available non-central location in a TRIGA, supporting testing of larger electronic components as well as other in-core irradiation applications requiring significant volume, such as isotope production or neutron transmutation doping of silicon. This dissertation describes the design and testing of the large in-core irradiation facility and the experimental campaign developed to test the real-time monitoring capability. This irradiation campaign was performed to test the real-time monitoring capability at various reactor power levels. The device chosen for characterization was the 4N25 general-purpose optocoupler. The current transfer ratio, which is an important electrical parameter for optocouplers, was calculated as a function of neutron fluence and gamma dose from the real-time voltage measurements.
The resulting radiation effects data were repeatable and exceptionally finely resolved. Therefore, the capability at UT TRIGA has proven competitive with world-class effects characterization facilities.

  14. Combining fast walking training and a step activity monitoring program to improve daily walking activity after stroke: a preliminary study

    PubMed Central

    Danks, Kelly A.; Pohlig, Ryan; Reisman, Darcy S.

    2016-01-01

    Objective: To determine preliminary efficacy and to identify baseline characteristics predicting who would benefit most from fast walking training plus a step activity monitoring program (FAST+SAM) compared to fast walking training alone (FAST) in persons with chronic stroke. Design: Randomized controlled trial with blinded assessors. Setting: Outpatient clinical research laboratory. Participants: 37 individuals greater than 6 months post-stroke. Interventions: Subjects were assigned to either FAST, walking training at their fastest possible speed on the treadmill (30 minutes) and over ground, 3 times/week for 12 weeks, or FAST plus a step activity monitoring program (FAST+SAM). The step activity monitoring program consisted of daily step monitoring with a StepWatch Activity monitor, goal setting, and identification of barriers to activity and strategies to overcome them. Main Outcome Measures: Daily step activity metrics (steps/day, time walking/day), walking speed, and six minute walk test distance (6MWT). Results: There was a significant effect of time for both groups, with all outcomes improving from pre- to post-training (all p<0.05). FAST+SAM was superior to FAST for the 6MWT (p=0.018), with a larger increase in the FAST+SAM group. The interventions had differential effectiveness based on baseline step activity. Sequential moderated regression models demonstrated that for subjects with baseline levels of step activity and 6MWT distances below the mean, the FAST+SAM intervention was more effective than FAST (1715±1584 vs. 254±933 steps/day, respectively; p<0.05 for the overall model and ΔR² for steps/day and 6MWT). Conclusions: The addition of a step activity monitoring program to a fast walking training intervention may be most effective in persons with chronic stroke who have low initial levels of walking endurance and activity. Regardless of baseline performance, the FAST+SAM intervention was more effective for improving walking endurance.
PMID:27240430

  15. Effects of wide step walking on swing phase hip muscle forces and spatio-temporal gait parameters.

    PubMed

    Bajelan, Soheil; Nagano, Hanatsu; Sparrow, Tony; Begg, Rezaul K

    2017-07-01

    Human walking can be viewed essentially as a continuum of anterior balance loss followed by a step that re-stabilizes balance. To secure balance an extended base of support can be assistive but healthy young adults tend to walk with relatively narrower steps compared to vulnerable populations (e.g. older adults and patients). It was, therefore, hypothesized that wide step walking may enhance dynamic balance at the cost of disturbed optimum coupling of muscle functions, leading to additional muscle work and associated reduction of gait economy. Young healthy adults may select relatively narrow steps for a more efficient gait. The current study focused on the effects of wide step walking on hip abductor and adductor muscles and spatio-temporal gait parameters. To this end, lower body kinematic data and ground reaction forces were obtained using an Optotrak motion capture system and AMTI force plates, respectively, while AnyBody software was employed for muscle force simulation. A single step of four healthy young male adults was captured during preferred walking and wide step walking. Based on preferred walking data, two parallel lines were drawn on the walkway to indicate 50% larger step width and participants targeted the lines with their heels as they walked. In addition to step width that defined walking conditions, other spatio-temporal gait parameters including step length, double support time and single support time were obtained. Average hip muscle forces during swing were modeled. Results showed that in wide step walking step length increased, Gluteus Minimus muscles were more active while Gracilis and Adductor Longus revealed considerably reduced forces. In conclusion, greater use of abductors and loss of adductor forces were found in wide step walking. Further validation is needed in future studies involving older adults and other pathological populations.

  16. Impurity effects in crystal growth from solutions: Steady states, transients and step bunch motion

    NASA Astrophysics Data System (ADS)

    Ranganathan, Madhav; Weeks, John D.

    2014-05-01

    We analyze a recently formulated model in which adsorbed impurities impede the motion of steps in crystals grown from solutions, while moving steps can remove or deactivate adjacent impurities. In this model, the chemical potential change of an atom on incorporation/desorption to/from a step is calculated for different step configurations and used in the dynamical simulation of step motion. The crucial difference between solution growth and vapor growth is related to the dependence of the driving force for growth of the main component on the size of the terrace in front of the step. This model has features resembling experiments in solution growth, which yields a dead zone with essentially no growth at low supersaturation and the motion of large coherent step bunches at larger supersaturation. The transient behavior shows a regime wherein steps bunch together and move coherently as the bunch size increases. The behavior at large line tension is reminiscent of the kink-poisoning mechanism of impurities observed in calcite growth. Our model unifies different impurity models and gives a picture of nonequilibrium dynamics that includes both steady states and time dependent behavior and shows similarities with models of disordered systems and the pinning/depinning transition.

  17. Stepwise molding, etching, and imprinting to form libraries of nanopatterned substrates.

    PubMed

    Zhao, Zhi; Cai, Yangjun; Liao, Wei-Ssu; Cremer, Paul S

    2013-06-04

    Herein, we describe a novel colloidal lithographic strategy for the stepwise patterning of planar substrates with numerous complex and unique designs. In conjunction with colloidal self-assembly, imprint molding, and capillary force lithography, reactive ion etching was used to create complex libraries of nanoscale features. This combinatorial strategy affords the ability to develop an exponentially increasing number of two-dimensional nanoscale patterns with each sequential step in the process. Specifically, dots, triangles, circles, and lines could be assembled on the surface separately and in combination with each other. Numerous architectures are obtained for the first time with high uniformity and reproducibility. These hexagonal arrays were made from polystyrene and gold features, whereby each surface element could be tuned from the micrometer size scale down to line widths of ~35 nm. The patterned area could be 1 cm² or even larger. The techniques described herein can be combined with further steps to make even larger libraries. Moreover, these polymer and metal features may prove useful in optical, sensing, and electronic applications.

  18. Explicit finite difference predictor and convex corrector with applications to hyperbolic partial differential equations

    NASA Technical Reports Server (NTRS)

    Dey, C.; Dey, S. K.

    1983-01-01

    An explicit finite difference scheme consisting of a predictor and a corrector has been developed and applied to solve some hyperbolic partial differential equations (PDEs). The corrector is a convex-type function which is applied at each time level and at each mesh point. It contains a parameter that can be chosen so that, for larger time steps, the algorithm remains stable and converges quickly to the steady-state solution. Some examples are given.
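    The role of a convex corrector can be sketched on a scalar model problem (our illustration, not the authors' scheme; the relaxation parameter theta and the rate constant are hypothetical choices). The predictor is plain explicit Euler, and the corrector blends it convexly with the previous value, restoring stability at a time step beyond the explicit limit:

```python
# Model problem du/dt = -4(u - 1): steady state u* = 1.
# Explicit Euler is unstable for dt > 0.5. The convex corrector
# u_new = theta*u_pred + (1 - theta)*u_old (theta is a tunable parameter,
# chosen here for illustration) restores stability at dt = 0.6 and drives
# the iteration to the steady-state solution.
def step(u, dt, theta):
    u_pred = u + dt * (-4.0) * (u - 1.0)        # explicit predictor
    return theta * u_pred + (1.0 - theta) * u   # convex corrector

dt = 0.6
u_plain, u_conv = 2.0, 2.0
for _ in range(50):
    u_plain = step(u_plain, dt, 1.0)   # theta = 1: plain explicit Euler
    u_conv = step(u_conv, dt, 0.5)     # theta = 0.5: convex-corrected

print(abs(u_plain - 1.0), abs(u_conv - 1.0))
```

    The corrector effectively rescales the time step by theta, so the amplification factor of the error drops from −1.4 (divergent) to −0.2 (rapidly convergent) in this example.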

  19. 30 min of treadmill walking at self-selected speed does not increase gait variability in independent elderly.

    PubMed

    Da Rocha, Emmanuel S; Kunzler, Marcos R; Bobbert, Maarten F; Duysens, Jacques; Carpes, Felipe P

    2018-06-01

    Walking is one of the preferred exercises among the elderly, but could prolonged walking increase gait variability, a risk factor for falls in the elderly? Here we determine whether 30 min of treadmill walking increases the coefficient of variation of gait in the elderly. Because gait responses to exercise depend on fitness level, we included 15 sedentary and 15 active elderly. Sedentary participants preferred a lower gait speed and made smaller steps than the actives. Step length coefficient of variation decreased ~16.9% by the end of the exercise in both groups. Stride length coefficient of variation decreased ~9% after 10 minutes of walking, and sedentary elderly showed a slightly larger step width coefficient of variation (~2%) at 10 min than active elderly. Active elderly showed a higher walk ratio (step length/cadence) than sedentary at all time points, but walk ratio did not change over time in either group. In conclusion, treadmill gait kinematics differ between sedentary and active elderly, but changes over time are similar in the two groups. As a practical implication, 30 min of walking might be a good exercise strategy for the elderly, independently of fitness level, because it did not increase variability in step and stride kinematics, which is considered a fall risk in this population.

  20. Geometric multigrid for an implicit-time immersed boundary method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guy, Robert D.; Philip, Bobby; Griffith, Boyce E.

    2014-10-12

    The immersed boundary (IB) method is an approach to fluid-structure interaction that uses Lagrangian variables to describe the deformations and resulting forces of the structure and Eulerian variables to describe the motion and forces of the fluid. Explicit time stepping schemes for the IB method require solvers only for Eulerian equations, for which fast Cartesian grid solution methods are available. Such methods are relatively straightforward to develop and are widely used in practice but often require very small time steps to maintain stability. Implicit-time IB methods permit the stable use of large time steps, but efficient implementations of such methods require significantly more complex solvers that effectively treat both Lagrangian and Eulerian variables simultaneously. Moreover, several different approaches to solving the coupled Lagrangian-Eulerian equations have been proposed, but a complete understanding of this problem is still emerging. This paper presents a geometric multigrid method for an implicit-time discretization of the IB equations. This multigrid scheme uses a generalization of box relaxation that is shown to handle problems in which the physical stiffness of the structure is very large. Numerical examples are provided to illustrate the effectiveness and efficiency of the algorithms described herein. Finally, these tests show that using multigrid as a preconditioner for a Krylov method yields improvements in both robustness and efficiency as compared to using multigrid as a solver. They also demonstrate that with a time step 100–1000 times larger than that permitted by an explicit IB method, the multigrid-preconditioned implicit IB method is approximately 50–200 times more efficient than the explicit method.
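    The stability contrast that motivates implicit-time IB methods can be illustrated on a 1-D diffusion model problem (not the IB equations themselves; grid and step sizes are illustrative). Forward Euler blows up at 200 times its stability limit, while backward Euler remains bounded at the same step:

```python
import numpy as np

# 1-D diffusion u_t = u_xx on a periodic grid. Forward Euler requires
# dt <= dx^2/2 for stability; backward Euler is unconditionally stable,
# so a time step 200x larger still yields a bounded, decaying solution.
N = 64
dx = 1.0 / N
x = np.arange(N) * dx
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1))
A[0, -1] = A[-1, 0] = 1.0            # periodic boundary
A /= dx**2

dt = 100 * dx**2                      # 200x the explicit limit dx^2/2
# Smooth initial data plus a small high-frequency component that seeds
# the explicit instability.
u0 = np.sin(2 * np.pi * x) + 1e-3 * np.sin(2 * np.pi * 31 * x)

u_exp = u0.copy()
u_imp = u0.copy()
I = np.eye(N)
for _ in range(20):
    u_exp = u_exp + dt * (A @ u_exp)            # forward Euler: blows up
    u_imp = np.linalg.solve(I - dt * A, u_imp)  # backward Euler: decays

print(np.max(np.abs(u_exp)), np.max(np.abs(u_imp)))
```

    The price of the implicit step is the linear solve; the multigrid machinery in the abstract exists precisely to make that solve cheap for the coupled Lagrangian-Eulerian system.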

  1. Correlated Time-Variation of Asphalt Rheology and Bulk Microstructure

    NASA Astrophysics Data System (ADS)

    Ramm, Adam; Nazmus, Sakib; Bhasin, Amit; Downer, Michael

    We use noncontact optical microscopy and optical scattering in the visible and near-infrared spectrum on Performance Grade (PG) asphalt binder to confirm the existence of microstructures in the bulk. The number of visible microstructures increases linearly as the penetration depth of the incident radiation increases, which verifies a uniform volume distribution of microstructures. We use dark field optical scatter in the near-infrared to measure the temperature dependent behavior of the bulk microstructures and compare this behavior with Dynamic Shear Rheometer (DSR) measurements of the bulk complex shear modulus |G*(T)|. The main findings are: (1) After reaching thermal equilibrium, both the temperature dependent optical scatter intensity I(T) and the bulk shear modulus |G*(T)| continue to change appreciably for times much greater than thermal equilibration times. (2) The hysteresis behavior during a complete temperature cycle seen in previous work derives from a larger time dependence in the cooling step compared with the heating step. (3) Different binder aging conditions show different thermal time-variations for both I(T) and |G*(T)|.

  2. A time-spectral approach to numerical weather prediction

    NASA Astrophysics Data System (ADS)

    Scheffel, Jan; Lindvall, Kristoffer; Yik, Hiu Fai

    2018-05-01

    Finite difference methods are traditionally used for modelling the time domain in numerical weather prediction (NWP). Time-spectral solution is an attractive alternative for reasons of accuracy and efficiency and because time step limitations associated with causal CFL-like criteria, typical for explicit finite difference methods, are avoided. In this work, the Lorenz 1984 chaotic equations are solved using the time-spectral algorithm GWRM (Generalized Weighted Residual Method). Comparisons of accuracy and efficiency are carried out for both explicit and implicit time-stepping algorithms. It is found that the efficiency of the GWRM compares well with these methods, in particular at high accuracy. For perturbative scenarios, the GWRM was found to be as much as four times faster than the finite difference methods. A primary reason is that the GWRM time intervals typically are two orders of magnitude larger than those of the finite difference methods. The GWRM has the additional advantage to produce analytical solutions in the form of Chebyshev series expansions. The results are encouraging for pursuing further studies, including spatial dependence, of the relevance of time-spectral methods for NWP modelling.
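    A minimal sketch of the time-spectral idea (a Chebyshev collocation solve of a linear model ODE, not the GWRM's weighted-residual formulation; T and N below are illustrative choices): a single spectral "time step" covers the whole interval [0, T], far beyond any causal CFL-like limit:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and points on [-1, 1] (Trefethen)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Solve du/dt = -u, u(0) = 1 on [0, T] in ONE spectral interval.
T, N = 5.0, 16
D, s = cheb(N)                 # collocation points run from s=1 down to s=-1
Dt = (2.0 / T) * D             # map [-1, 1] -> [0, T] via t = T*(s + 1)/2
A = Dt + np.eye(N + 1)         # collocation operator for u' + u = 0
A[N, :] = 0.0                  # s = -1 corresponds to t = 0:
A[N, N] = 1.0                  # overwrite that row with the initial condition
b = np.zeros(N + 1)
b[N] = 1.0                     # u(0) = 1
u = np.linalg.solve(A, b)

print(abs(u[0] - np.exp(-T)))  # value at t = T vs the exact solution
```

    Sixteen collocation points resolve the whole interval to near machine precision, mirroring the abstract's observation that spectral time intervals can be orders of magnitude larger than finite difference steps.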

  3. Novel structure formation at the bottom surface of porous anodic alumina fabricated by single step anodization process.

    PubMed

    Ali, Ghafar; Ahmad, Maqsood; Akhter, Javed Iqbal; Maqbool, Muhammad; Cho, Sung Oh

    2010-08-01

    A simple approach for the growth of long-range highly ordered nanoporous anodic alumina film in H₂SO₄ electrolyte through a single step anodization, without any additional pre-anodizing procedure, is reported. A free-standing porous anodic alumina film of 180 μm thickness with through-hole morphology was obtained. A simple, single step process was used for the detachment of the alumina from the aluminum substrate. The effect of anodizing conditions, such as anodizing voltage and time, on the pore diameter and pore ordering is discussed. The metal/oxide and oxide/electrolyte interfaces were examined by high resolution scanning transmission electron microscopy. The arrangement of pores at the metal/oxide interface was well ordered, with smaller diameters than at the oxide/electrolyte interface. The inter-pore distance was larger at the metal/oxide interface than at the oxide/electrolyte interface. The size of the ordered domains was found to depend strongly upon anodizing voltage and time.

  4. Auditory Proprioceptive Integration: Effects of Real-Time Kinematic Auditory Feedback on Knee Proprioception

    PubMed Central

    Ghai, Shashank; Schmitz, Gerd; Hwang, Tong-Hun; Effenberg, Alfred O.

    2018-01-01

    The purpose of the study was to assess the influence of real-time auditory feedback on knee proprioception. Thirty healthy participants were randomly allocated to a control group (n = 15) and experimental group I (n = 15). The participants performed an active knee-repositioning task using their dominant leg, with/without additional real-time auditory feedback whose frequency was mapped in a convergent manner to two different target angles (40 and 75°). Statistical analysis revealed significant enhancement in knee re-positioning accuracy for the constant and absolute error with real-time auditory feedback, within and across the groups. Besides this convergent condition, we established a second divergent condition. Here, a step-wise transposition of frequency was performed to explore whether a systematic tuning between auditory-proprioceptive repositioning exists. No significant effects were identified in this divergent auditory feedback condition. An additional experimental group II (n = 20) was further included. Here, we investigated the influence of a larger magnitude and directional change of the step-wise transposition of the frequency. In a first step, the results confirm the findings of experiment I. Moreover, significant effects on knee auditory-proprioceptive repositioning were evident when divergent auditory feedback was applied. During the step-wise transposition, participants showed systematic modulation of knee movements in the opposite direction of the transposition. We confirm that knee re-positioning accuracy can be enhanced with concurrent application of real-time auditory feedback and that knee re-positioning can be modulated in a goal-directed manner with step-wise transposition of frequency. Clinical implications are discussed with respect to joint position sense in rehabilitation settings. PMID:29568259

  5. Fast synthesis of transparent and hydrophobic silica aerogels using polyethoxydisiloxane and methyltrimethoxysilane in one-step drying process

    NASA Astrophysics Data System (ADS)

    Zhu, Xingqun; Naz, Hina; Nauman Ali, Rai; Yang, Yongfei; Zheng, Zhou; Xiang, Bin; Cui, Xudong

    2018-04-01

    We have successfully synthesized transparent and hydrophobic silica aerogels by a one-step drying process using appropriate amounts of polyethoxydisiloxane and methyltrimethoxysilane. With the introduction of a modified rapid supercritical extraction technique, the synthesis process time was shortened to one hour for a 4 L solution reaction. The observed transmittance of the as-synthesized product is larger than 80% within the wavelength range of 500–1000 nm, and the contact angle is confirmed to be over 135°. Our results provide a pathway to the fast synthesis of hydrophobic and transparent aerogels for window insulator applications.

  6. Efficiency improvement in proton dose calculations with an equivalent restricted stopping power formalism

    NASA Astrophysics Data System (ADS)

    Maneval, Daniel; Bouchard, Hugo; Ozell, Benoît; Després, Philippe

    2018-01-01

    The equivalent restricted stopping power formalism is introduced for proton mean energy loss calculations under the continuous slowing down approximation. The objective is the acceleration of Monte Carlo dose calculations by allowing larger steps while preserving accuracy. The fractional energy loss per step length ɛ was obtained with a secant method and a Gauss-Kronrod quadrature estimation of the integral equation relating the mean energy loss to the step length. The midpoint rule of the Newton-Cotes formulae was then used to solve this equation, allowing the creation of a lookup table linking ɛ to the equivalent restricted stopping power L_eq, used here as a key physical quantity. The mean energy loss for any step length was simply defined as the product of the step length with L_eq. Proton inelastic collisions with electrons were added to GPUMCD, a GPU-based Monte Carlo dose calculation code. The proton continuous slowing-down was modelled with the L_eq formalism. GPUMCD was compared to Geant4 in a validation study where ionization processes alone were activated and a voxelized geometry was used. The energy straggling was first switched off to validate the L_eq formalism alone. Dose differences between Geant4 and GPUMCD were smaller than 0.31% for the L_eq formalism. The mean error and the standard deviation were below 0.035% and 0.038% respectively. 99.4 to 100% of GPUMCD dose points were consistent with a 0.3% dose tolerance. GPUMCD 80% falloff positions (R80) matched Geant4's R80 within 1 μm. With the energy straggling, dose differences were below 2.7% in the Bragg peak falloff and smaller than 0.83% elsewhere. The R80 positions matched within 100 μm. The overall computation times to transport one million protons with GPUMCD were 31-173 ms. Under similar conditions, Geant4 computation times were 1.4-20 h. The L_eq formalism led to an intrinsic efficiency gain factor ranging between 30-630, increasing with the prescribed accuracy of the simulations. The L_eq formalism allows larger steps, leading to a constant (O(1)) algorithmic time complexity. It significantly accelerates Monte Carlo proton transport while preserving accuracy. It therefore constitutes a promising variance reduction technique for computing proton dose distributions in a clinical context.
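    The core of the idea can be sketched with a hypothetical power-law stopping power (a toy model, not the physics used in GPUMCD): invert the CSDA integral relating energy loss to path length for a given step, define the equivalent stopping power as the mean loss per unit step length, and note that it reproduces the exact mean loss where a one-point estimate S(E₀)·Δs falls short:

```python
import numpy as np

# Toy stopping power S(E) = c/E (hypothetical, Bethe-like 1/E trend).
# Under the CSDA, the path length for an energy drop dE is
# s(dE) = integral of dE'/S(E'), which here has the closed form
# (E0^2 - (E0 - dE)^2) / (2c).
c, E0, ds = 10.0, 100.0, 50.0      # arbitrary units

def path_length(dE):
    return (E0**2 - (E0 - dE)**2) / (2.0 * c)

# Invert s(dE) = ds for the mean energy loss by bisection (the paper uses
# a secant method with Gauss-Kronrod quadrature; same idea).
lo, hi = 0.0, E0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if path_length(mid) < ds:
        lo = mid
    else:
        hi = mid
dE = 0.5 * (lo + hi)

L_eq = dE / ds                      # equivalent stopping power for this step
dE_exact = E0 - np.sqrt(E0**2 - 2 * c * ds)
dE_naive = (c / E0) * ds            # one-point estimate S(E0) * ds

print(dE, dE_exact, dE_naive)
```

    The product L_eq·Δs matches the exact mean loss for an arbitrarily large step, which is what allows the step count, and hence the cost per particle, to stay constant.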

  7. Surfactant-controlled polymerization of semiconductor clusters to quantum dots through competing step-growth and living chain-growth mechanisms.

    PubMed

    Evans, Christopher M; Love, Alyssa M; Weiss, Emily A

    2012-10-17

    This article reports control of the competition between step-growth and living chain-growth polymerization mechanisms in the formation of cadmium chalcogenide colloidal quantum dots (QDs) from CdSe(S) clusters by varying the concentration of anionic surfactant in the synthetic reaction mixture. The growth of the particles proceeds by step-addition from initially nucleated clusters in the absence of excess phosphinic or carboxylic acids (which adsorb as their anionic conjugate bases), and proceeds indirectly, by dissolution of clusters and subsequent chain-addition of monomers to stable clusters (Ostwald ripening), in the presence of excess phosphinic or carboxylic acid. Fusion of clusters by step-growth polymerization is an explanation for the consistent observation of so-called "magic-sized" clusters in QD growth reactions. Living chain-addition (chain addition with no explicit termination step) produces QDs over a larger range of sizes with better size dispersity than step-addition. Tuning the molar ratio of surfactant to Se²⁻ (S²⁻), the limiting ionic reagent, within the living chain-addition polymerization allows for stoichiometric control of QD radius without relying on reaction time.

  8. A point implicit time integration technique for slow transient flow problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.

    2015-05-01

    We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly, and the rest of the information related to the same or other variables is handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except it involves a few additional function evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
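    For a scalar stiff model problem the point implicit update can be written in closed form (our sketch, not the authors' flow solver; k, f, and dt are illustrative): the variable of interest is treated implicitly, the source explicitly, and no iteration is needed:

```python
# Point-implicit step for du/dt = -k*u + f (stiff linear model problem):
# treat the unknown u implicitly and the source f explicitly. The update
# solves u_new = u + dt*(-k*u_new + f) in closed form -- no iteration --
# and is stable for arbitrarily large dt, like a fully implicit method.
k, f = 1000.0, 2.0              # stiff decay rate and constant source
u, dt = 0.0, 1.0                # dt is 500x the explicit limit 2/k
for _ in range(10):
    u = (u + dt * f) / (1.0 + dt * k)   # point-implicit update

print(u, f / k)                 # converges to the steady state f/k
```

    The error contracts by 1/(1 + dt·k) per step, so a handful of very large steps reaches the slow-transient (here steady) solution, which is exactly the regime the method targets.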

  9. Executive function is necessary for the regulation of the stepping activity when stepping in place in older adults.

    PubMed

    Dalton, Christopher; Sciadas, Ria; Nantel, Julie

    2016-10-01

    To determine the effect of age on stepping performance and to compare the cognitive demand required to regulate repetitive stepping between older and younger adults performing a stepping-in-place (SIP) task. Fourteen younger adults (25.4 ± 6.5 years) and 15 older adults (71.0 ± 9.0 years) participated in this study. They performed a seated category fluency task and Stroop test, followed by a 60 s SIP task. Following this, the cognitive and motor tasks were performed simultaneously. We assessed cognitive performance, SIP cycle duration, asymmetry, and arrhythmicity. Compared to younger adults, older adults had larger SIP arrhythmicity both as a single task and when combined with the category fluency (p < 0.001) and Stroop (p < 0.01) tasks. Older adults also had larger arrhythmicity when dual tasking compared to SIP alone (p < 0.001). Older adults showed greater SIP asymmetry when the task was combined with the category fluency (p = 0.006) and Stroop (p = 0.06) tasks. Finally, they had lower cognitive performance than younger adults in both single and dual tasks (p < 0.01). Age and the type of cognitive task performed with the motor task affected different components of stepping. While SIP arrhythmicity was larger in all conditions for older compared to younger adults, cycle duration was not different, and asymmetry tended to be larger during SIP when paired with a verbal fluency task. SIP does not require a high level of control for dynamic stability, demonstrating that higher-level executive function is necessary for the regulation of stepping activity independently of the regulation of postural balance. Furthermore, older adults may lack the cognitive resources needed to adequately regulate stepping activity while performing a cognitive task relying on executive function.

  10. Characteristics of deacetylation and depolymerization of β-chitin from jumbo squid (Dosidicus gigas) pens.

    PubMed

    Jung, Jooyeoun; Zhao, Yanyun

    2011-09-27

    This study evaluated the deacetylation characteristics of β-chitin from jumbo squid (Dosidicus gigas) pens using strongly alkaline solutions of NaOH or KOH. A Taguchi design was employed to investigate the effects of reagent concentration, temperature, time, and treatment steps on the molecular mass (MM) and degree of deacetylation (DDA) of the chitosan obtained. The optimal treatment conditions for achieving high MM and DDA of chitosan were identified as: 40% NaOH at 90 °C for 6 h in three separate steps (2 h + 2 h + 2 h), 50% NaOH at 90 °C for 6 h in one step, or 50% KOH at 90 °C for 4 h in three steps (1 h + 1 h + 2 h) or for 6 h in one step. The most important factors affecting DDA and MM were temperature and time, respectively. The chitosan obtained was then further depolymerized by cellulase or lysozyme, with cellulase giving a higher degradation ratio, lower relative viscosity, and a larger amount of reducing-end formation than lysozyme, owing to the higher susceptibility of β-chitosan to cellulase. This study demonstrated that jumbo squid pens are a good source material for producing β-chitosan with high DDA and a wide range of MM for various potential applications. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Time spent in sedentary posture is associated with waist circumference and cardiovascular risk.

    PubMed

    Tigbe, W W; Granat, M H; Sattar, N; Lean, M E J

    2017-05-01

    The relationship between metabolic risk and time spent sitting, standing and stepping has not been well established. The present study aimed to determine associations of objectively measured time spent sitting, standing and stepping with coronary heart disease (CHD) risk. A cross-sectional study of healthy non-smoking Glasgow postal workers, n=111 (55 office workers, 5 women; 56 walking/delivery workers, 10 women), who wore activPAL physical activity monitors for 7 days. Cardiovascular risks were assessed by metabolic syndrome categorisation and 10-year PROCAM (prospective cardiovascular Münster) risk. Mean (s.d.) age was 40 (8) years, body mass index 26.9 (3.9) kg m⁻² and waist circumference 95.4 (11.9) cm. Mean (s.d.) high-density lipoprotein cholesterol (HDL cholesterol) was 1.33 (0.31), low-density lipoprotein cholesterol 3.11 (0.87) and triglycerides 1.23 (0.64) mmol l⁻¹, and 10-year PROCAM risk was 1.8 (1.7)%. The participants spent mean (s.d.) 9.1 (1.8) h per day sedentary, 7.6 (1.2) h per day sleeping, 3.9 (1.1) h per day standing and 3.3 (0.9) h per day stepping, accumulating 14,708 (4,984) steps per day in 61 (25) sit-to-stand transitions per day. In univariate regressions adjusting for age, sex, family history of CHD, shift worked, job type and socioeconomic status, waist circumference (P=0.005), fasting triglycerides (P=0.002), HDL cholesterol (P=0.001) and PROCAM risk (P=0.047) were detrimentally associated with sedentary time. These associations remained significant after further adjustment for sleep, standing and stepping in stepwise regression models; however, after further adjustment for waist circumference, the associations were not significant. Compared with those without the metabolic syndrome, participants with the metabolic syndrome were significantly less active: fewer steps, shorter stepping duration and longer time sitting. Those with no metabolic syndrome features walked >15,000 steps per day or spent >7 h per day upright. Longer time spent in sedentary posture is significantly associated with higher CHD risk and larger waist circumference.

  12. Configuring Airspace Sectors with Approximate Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Gupta, Pramod

    2010-01-01

    In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
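
    The three algorithm classes compared above can be illustrated on a toy instance (all costs below are hypothetical, not from the paper): a myopic one-step heuristic, exact backward dynamic programming, and a rollout policy that scores each candidate configuration by simulating the heuristic to the horizon:

```python
# Hypothetical toy instance: 3 sector configurations over 4 time steps.
# workload[t][c]: workload cost of configuration c at time step t;
# recfg[a][b]: reconfiguration cost of switching from a to b.
workload = [[1, 4, 2], [5, 1, 2], [5, 1, 2], [1, 4, 2]]
recfg = [[0, 3, 3], [3, 0, 3], [3, 3, 0]]
T, C = len(workload), len(workload[0])

def exact_dp(c0):
    # Backward dynamic programming: optimal cost-to-go per configuration.
    best = [0.0] * C
    for t in reversed(range(T)):
        best = [min(workload[t][c] + recfg[prev][c] + best[c]
                    for c in range(C)) for prev in range(C)]
    return best[c0]

def myopic(prev, t):
    # One-step heuristic: cheapest configuration for this step only.
    return min(range(C), key=lambda c: workload[t][c] + recfg[prev][c])

def rollout(c0):
    # Score each action by simulating the myopic heuristic to the horizon.
    total, prev = 0, c0
    for t in range(T):
        def tail_cost(c, prev=prev, t=t):
            cost, p = workload[t][c] + recfg[prev][c], c
            for s in range(t + 1, T):
                n = myopic(p, s)
                cost += workload[s][n] + recfg[p][n]
                p = n
            return cost
        choice = min(range(C), key=tail_cost)
        total += workload[t][choice] + recfg[prev][choice]
        prev = choice
    return total
```

    On this tiny instance rollout recovers the optimal cost; on larger instances it typically lands close to exact DP at a fraction of its computational cost, consistent with the results reported above.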

  13. A spectral radius scaling semi-implicit iterative time stepping method for reactive flow simulations with detailed chemistry

    NASA Astrophysics Data System (ADS)

    Xie, Qing; Xiao, Zhixiang; Ren, Zhuyin

    2018-09-01

    A spectral radius scaling semi-implicit time stepping scheme has been developed for simulating unsteady compressible reactive flows with detailed chemistry, in which the spectral radius in the LU-SGS scheme is augmented to account for viscous/diffusive and reactive terms, and a scalar matrix is proposed to approximate the chemical Jacobian using the minimum species destruction timescale. The performance of the semi-implicit scheme, together with a third-order explicit Runge-Kutta scheme and a Strang splitting scheme, has been investigated in auto-ignition and in laminar premixed and nonpremixed flames of three representative fuels, namely hydrogen, methane, and n-heptane. Results show that the minimum species destruction time scale can well represent the smallest chemical time scale in reactive flows and that the proposed scheme can significantly increase the allowable time steps in simulations. The scheme is stable when the time step is as large as 10 μs, which is about three to five orders of magnitude larger than the smallest time scales in the various tests considered. For the test flames considered, the semi-implicit scheme achieves second-order accuracy in time. Moreover, the errors in quantities of interest are smaller than those from the Strang splitting scheme, indicating the accuracy gained when the reaction and transport terms are solved in a coupled manner. Results also show that the relative efficiency of different schemes depends on the fuel mechanisms and test flames. When the minimum time scale in a reactive flow is governed by transport processes instead of chemical reactions, the proposed semi-implicit scheme is more efficient than the splitting scheme; otherwise, the relative efficiency depends on the cost of sub-iterations for convergence within each time step and of the integration of the chemistry substep. The capability of the compressible reacting flow solver and the proposed semi-implicit scheme is then demonstrated by capturing hydrogen detonation waves. Finally, the performance of the proposed method is demonstrated in a two-dimensional hydrogen/air diffusion flame.
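
    The scalar-matrix Jacobian idea can be caricatured on a hypothetical two-species system A → B (not one of the paper's test cases): scaling the explicit source term by 1/(1 + Δt/τ), with τ the minimum species destruction timescale, keeps the update stable at time steps many orders of magnitude above τ:

```python
import numpy as np

k = 1.0e5                     # fast A -> B rate (1/s); tau = 1/k = 1e-5 s
def source(u):
    A, B = u
    return np.array([-k * A, k * A])

def semi_implicit_step(u, dt):
    # Scalar-matrix Jacobian approximation: scale the explicit source by
    # 1/(1 + dt/tau), with tau the minimum species destruction timescale.
    tau = 1.0 / k
    return u + dt * source(u) / (1.0 + dt / tau)

u = np.array([1.0, 0.0])      # initial amounts of A and B
dt = 1.0                      # five orders of magnitude above tau
for _ in range(5):
    u = semi_implicit_step(u, dt)
```

    The update stays non-negative and conserves total mass for arbitrarily large steps, whereas an explicit step would be limited to Δt of order τ.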

  14. Parental Leave of Absence: Time for the Next Step

    DTIC Science & Technology

    1998-03-01

    personnel. This can affect both mission performance and retention rates... Maintaining a force in which every soldier is available for worldwide...ability to perform during wartime... Although there are more male single parents in the Army, a larger percentage of women are single parents. Therefore...costs, workforce turbulence, and absenteeism down. While civilian programs are not always compatible with the military’s particular needs, one program

  15. Between a Map and a Data Rod

    NASA Technical Reports Server (NTRS)

    Teng, William; Rui, Hualan; Strub, Richard; Vollmer, Bruce

    2015-01-01

    A Digital Divide has long stood between how NASA and other satellite-derived data are typically archived (time-step arrays or maps) and how hydrology and other point-time series oriented communities prefer to access those data. In essence, the desired method of data access is orthogonal to the way the data are archived. Our approach to bridging the Divide is part of a larger NASA-supported data rods project to enhance access to and use of NASA and other data by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS) and the larger hydrology community. Our main objective was to determine a way to reorganize data that is optimal for these communities. Two related objectives were to optimally reorganize data in a way that (1) is operational and fits in and leverages the existing Goddard Earth Sciences Data and Information Services Center (GES DISC) operational environment and (2) addresses the scaling up of data sets available as time series from those archived at the GES DISC to potentially include those from other Earth Observing System Data and Information System (EOSDIS) data archives. Through several prototype efforts and lessons learned, we arrived at a non-database solution that satisfied our objectives/constraints. We describe, in this presentation, how we implemented the operational production of pre-generated data rods and, considering the tradeoffs between length of time series (or number of time steps), resources needed, and performance, how we implemented the operational production of on-the-fly (virtual) data rods. For the virtual data rods, we leveraged a number of existing resources, including the NASA Giovanni Cache and NetCDF Operators (NCO) and used data cubes processed in parallel. Our current benchmark performance for virtual generation of data rods is about a year's worth of time series for hourly data (9,000 time steps) in 90 seconds.
Our approach is a specific implementation of the general optimal strategy of reorganizing data to match the desired means of access. Results from our project have already significantly extended NASA data to the large and important hydrology user community that has been, heretofore, mostly unable to easily access and use NASA data.
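
    The map-versus-rod access mismatch comes down to array ordering; the shapes and values below are purely illustrative:

```python
import numpy as np

# Hypothetical archive: hourly "maps" stored time-major, shape (t, y, x).
maps = np.arange(24 * 4 * 5, dtype=float).reshape(24, 4, 5)

# A "data rod" is the full time series at one grid point -- an access
# pattern orthogonal to the storage order, so it strides across the file.
rod = maps[:, 2, 3]

# Pre-generated rods: reorganize once into point-major order so each
# time series is contiguous in memory (and, analogously, on disk).
rods = np.ascontiguousarray(maps.transpose(1, 2, 0))
```

    After the one-time transpose, `rods[j, i]` returns a contiguous time series, which is the essence of the pre-generated-rod strategy described above.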

  16. Between a Map and a Data Rod

    NASA Astrophysics Data System (ADS)

    Teng, W. L.; Rui, H.; Strub, R. F.; Vollmer, B.

    2015-12-01

    A "Digital Divide" has long stood between how NASA and other satellite-derived data are typically archived (time-step arrays or "maps") and how hydrology and other point-time series oriented communities prefer to access those data. In essence, the desired method of data access is orthogonal to the way the data are archived. Our approach to bridging the Divide is part of a larger NASA-supported "data rods" project to enhance access to and use of NASA and other data by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS) and the larger hydrology community. Our main objective was to determine a way to reorganize data that is optimal for these communities. Two related objectives were to optimally reorganize data in a way that (1) is operational and fits in and leverages the existing Goddard Earth Sciences Data and Information Services Center (GES DISC) operational environment and (2) addresses the scaling up of data sets available as time series from those archived at the GES DISC to potentially include those from other Earth Observing System Data and Information System (EOSDIS) data archives. Through several prototype efforts and lessons learned, we arrived at a non-database solution that satisfied our objectives/constraints. We describe, in this presentation, how we implemented the operational production of pre-generated data rods and, considering the tradeoffs between length of time series (or number of time steps), resources needed, and performance, how we implemented the operational production of on-the-fly ("virtual") data rods. For the virtual data rods, we leveraged a number of existing resources, including the NASA Giovanni Cache and NetCDF Operators (NCO) and used data cubes processed in parallel. Our current benchmark performance for virtual generation of data rods is about a year's worth of time series for hourly data (~9,000 time steps) in ~90 seconds. 
Our approach is a specific implementation of the general optimal strategy of reorganizing data to match the desired means of access. Results from our project have already significantly extended NASA data to the large and important hydrology user community that has been, heretofore, mostly unable to easily access and use NASA data.

  17. Generalized Green's function molecular dynamics for canonical ensemble simulations

    NASA Astrophysics Data System (ADS)

    Coluci, V. R.; Dantas, S. O.; Tewary, V. K.

    2018-05-01

    The need for small integration time steps (~1 fs) in conventional molecular dynamics simulations is an important issue that inhibits the study of physical, chemical, and biological systems on realistic timescales. Additionally, to simulate those systems in contact with a thermal bath, thermostatting techniques are usually applied. In this work, we generalize the Green's function molecular dynamics technique to allow simulations within the canonical ensemble. By applying this technique to one-dimensional systems, we were able to correctly describe important thermodynamic properties such as the temperature fluctuations, the temperature distribution, and the velocity autocorrelation function. We show that the proposed technique also allows the use of time steps one order of magnitude larger than those typically used in conventional molecular dynamics simulations. We expect that this technique can be used in long-timescale molecular dynamics simulations.

  18. Multi-off-grid methods in multi-step integration of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Beaudet, P. R.

    1974-01-01

    Description of methods of solving first- and second-order systems of differential equations in which all derivatives are evaluated at off-grid locations in order to circumvent the Dahlquist stability limitation on the order of on-grid methods. The proposed multi-off-grid methods require off-grid state predictors for the evaluation of the n derivatives at each step. Progressing forward in time, the off-grid states are predicted using a linear combination of back on-grid state values and off-grid derivative evaluations. A comparison is made between the proposed multi-off-grid methods and the corresponding Adams and Cowell on-grid integration techniques in integrating systems of ordinary differential equations, showing a significant reduction in the error at larger step sizes in the case of the multi-off-grid integrator.
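
    The on-grid Adams-type baseline that the record compares against can be sketched with a two-step Adams-Bashforth integrator (this is the reference method, not the multi-off-grid scheme itself): on a simple decay problem it clearly outperforms one-step Euler at the same step size, illustrating why higher-order multi-step formulas reduce error at larger steps.

```python
import math

def euler(f, y0, t0, t1, n):
    # One-step explicit Euler for reference.
    h = (t1 - t0) / n
    y = y0
    for i in range(n):
        y += h * f(t0 + i * h, y)
    return y

def ab2(f, y0, t0, t1, n):
    # Two-step Adams-Bashforth; the first step is bootstrapped with Euler.
    h = (t1 - t0) / n
    f_prev = f(t0, y0)
    y = y0 + h * f_prev
    for i in range(1, n):
        f_cur = f(t0 + i * h, y)
        y = y + h * (1.5 * f_cur - 0.5 * f_prev)
        f_prev = f_cur
    return y

f = lambda t, y: -y                      # test problem y' = -y, y(0) = 1
exact = math.exp(-1.0)
err_euler = abs(euler(f, 1.0, 0.0, 1.0, 20) - exact)
err_ab2 = abs(ab2(f, 1.0, 0.0, 1.0, 20) - exact)
```

    At the same step size h = 0.05, the second-order multi-step method is roughly two orders of magnitude more accurate than Euler; off-grid derivative evaluations push the attainable order higher still without hitting the Dahlquist barrier.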

  19. Daily activity during stability and exacerbation of chronic obstructive pulmonary disease.

    PubMed

    Alahmari, Ayedh D; Patel, Anant R C; Kowlessar, Beverly S; Mackay, Alex J; Singh, Richa; Wedzicha, Jadwiga A; Donaldson, Gavin C

    2014-06-02

    During most COPD exacerbations, patients continue to live in the community, but there is little information on changes in activity during exacerbations due to the difficulty of obtaining recent, prospective baseline data. Patients recorded on daily diary cards any worsening in respiratory symptoms and peak expiratory flow (PEF), and the number of steps taken per day was measured with a Yamax Digi-Walker pedometer. Exacerbations were defined by increased respiratory symptoms, and the number of exacerbations experienced in the 12 months preceding the recording of daily step count was used to divide patients into frequent (≥2/year) or infrequent exacerbators. The 73 COPD patients (88% male) had a mean (±SD) age of 71 (±8) years and FEV1 of 53 (±16)% predicted. They recorded pedometer data on a median of 198 days (IQR 134-353). At exacerbation onset, symptom count rose by 1.9 (±1.3) and PEF fell by 7 (±13) l/min. Mean daily step count fell from 4154 (±2586) steps/day during a preceding baseline week to 3673 (±2258) steps/day during the initial 7 days of exacerbation (p = 0.045). Patients with larger falls in activity at exacerbation took longer to recover to their stable level (rho = -0.56; p < 0.001). Recovery in daily step count was faster (median 3.5 days) than recovery in exacerbation symptoms (median 11 days; p < 0.001). Recovery in step count was also faster in untreated compared with treated exacerbations (p = 0.030). Daily step count fell faster over time in the 40 frequent exacerbators, by 708 steps/year, compared with 338 steps/year in the 33 infrequent exacerbators (p = 0.002). COPD exacerbations reduced physical activity, and frequent exacerbations accelerate the decline in activity over time.

  20. Lagrangian Statistics and Intermittency in Gulf of Mexico.

    PubMed

    Lin, Liru; Zhuang, Wei; Huang, Yongxiang

    2017-12-12

    Due to the nonlinear interaction between different flow patterns, for instance ocean currents, meso-scale eddies, and waves, the movement of the ocean is extremely complex, and a multiscale statistical description is therefore relevant. In this work, a high time-resolution velocity record with a time step of 15 minutes, obtained by a Lagrangian drifter deployed in the Gulf of Mexico (GoM) from July 2012 to October 2012, is considered. The measured Lagrangian velocity correlation function shows a strong daily cycle due to the diurnal tide. The estimated Fourier power spectrum E(f) implies a dual-power-law behavior which is separated by the daily cycle. The corresponding scaling exponents are close to -1.75 and -2.75 for time scales larger (0.1 ≤ f ≤ 0.4 day⁻¹) and smaller (2 ≤ f ≤ 8 day⁻¹) than 1 day, respectively. A Hilbert-based approach is then applied to this data set to identify the possible multifractal property of the cascade process. The results show an intermittent dynamics for time scales larger than 1 day and a less intermittent dynamics for time scales smaller than 1 day. It is speculated that the energy is partially injected via the diurnal tidal movement and then transferred to larger and smaller scales through a complex cascade process, which needs more study in the near future.
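
    The slope-estimation step can be sketched on a synthetic surrogate (the actual drifter data are not reproduced here; only the 15-minute sampling and the -1.75 low-frequency exponent are taken from the record): shape random phases to a known power-law spectrum, synthesize the series, and recover the exponent by a log-log fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 2**14, 15 * 60.0               # 15-minute sampling interval (s)
freqs = np.fft.rfftfreq(n, d=dt)

# Shape unit-modulus random phases to a known power-law spectrum
# E(f) ~ f^(-1.75), synthesize the time series, then re-estimate beta.
beta = 1.75
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-beta / 2.0)
phases = np.exp(2j * np.pi * rng.random(freqs.size))
signal = np.fft.irfft(amp * phases, n=n)

spec = np.abs(np.fft.rfft(signal)) ** 2
band = (freqs > 0) & (freqs < freqs[-1])   # skip DC and Nyquist bins
slope = np.polyfit(np.log(freqs[band]), np.log(spec[band]), 1)[0]
```

    For real data one would fit each scaling range (above and below the daily cycle) separately, yielding the two exponents quoted above.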

  1. The Effect of Forward-Facing Steps on Stationary Crossflow Instability Growth and Breakdown

    NASA Technical Reports Server (NTRS)

    Eppink, Jenna L.

    2018-01-01

    The effect of a forward-facing step on stationary crossflow transition was studied using standard stereo particle image velocimetry (PIV) and time-resolved PIV. Step heights ranging from 53 to 71% of the boundary-layer thickness were studied in detail. Steps above a critical step height of approximately 60% of the boundary-layer thickness had a significant impact on the stationary crossflow growth downstream of the step. For the critical cases, the stationary crossflow amplitude grew suddenly downstream of the step, decayed for a short region, then grew again. The adverse pressure gradient upstream of the step resulted in a region of crossflow reversal. A secondary set of vortices, rotating in the opposite direction to the primary vortices, developed underneath the uplifted primary vortices. The wall-normal velocity disturbance (V') created by these secondary vortices impacted the step and is believed to feed into the strong vortex that developed downstream of the step. A strong negative crossflow region formed over a very short region downstream of the step due to a sharp inboard curvature of the streamlines near the wall. For the larger step heights, a crossflow-reversal region formed just downstream of the strong negative crossflow region. This crossflow-reversal region is believed to play an important role in the growth of the stationary crossflow vortices downstream of the step, and may be a good indicator of the critical forward-facing step height.

  2. Kinetic energy definition in velocity Verlet integration for accurate pressure evaluation

    NASA Astrophysics Data System (ADS)

    Jung, Jaewoon; Kobayashi, Chigusa; Sugita, Yuji

    2018-04-01

    In molecular dynamics (MD) simulations, a proper definition of kinetic energy is essential for controlling pressure as well as temperature in the isothermal-isobaric condition. The virial theorem provides an equation that connects the average kinetic energy with the product of particle coordinate and force. In this paper, we show that the theorem is satisfied in MD simulations with a larger time step and holonomic constraints of bonds only when a proper definition of kinetic energy is used. We provide a novel definition of kinetic energy, which is calculated from velocities at the half-time steps (t - Δt/2 and t + Δt/2) in the velocity Verlet integration method. MD simulations of a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) lipid bilayer and a water box using the kinetic energy definition could reproduce the physical properties in the isothermal-isobaric condition properly. We also develop a multiple time step (MTS) integration scheme with the kinetic energy definition. MD simulations with the MTS integration for the DPPC and water box systems provided the same quantities as the velocity Verlet integration method, even when the thermostat and barostat are updated less frequently.
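
    One reading of that half-step definition can be checked on a unit harmonic oscillator (an illustrative system, not the paper's simulation setup): with velocity Verlet at a large time step, the average kinetic energy built from half-step velocities satisfies the virial identity ⟨KE⟩ = ⟨PE⟩, while the conventional on-step definition undershoots by O(Δt²).

```python
def simulate(dt, nsteps):
    # Velocity Verlet for a unit harmonic oscillator (m = k = 1),
    # accumulating time-averaged energies with two KE definitions.
    x, v, f = 1.0, 0.0, -1.0
    ke_full = ke_half = pe = 0.0
    for _ in range(nsteps):
        vh = v + 0.5 * dt * f          # half-step velocity v(t + dt/2)
        x += dt * vh
        f = -x
        v = vh + 0.5 * dt * f          # on-step velocity v(t + dt)
        ke_full += 0.5 * v * v         # conventional on-step definition
        ke_half += 0.5 * vh * vh       # half-step definition
        pe += 0.5 * x * x
    return ke_full / nsteps, ke_half / nsteps, pe / nsteps

ke_full, ke_half, pe = simulate(dt=0.5, nsteps=100_000)
```

    At this large step the half-step average matches the potential-energy average (virial theorem satisfied), while the on-step average is low by a factor of roughly (1 - Δt²/4), which is exactly the kind of discrepancy that corrupts pressure estimates.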

  3. Advancing parabolic operators in thermodynamic MHD models: Explicit super time-stepping versus implicit schemes with Krylov solvers

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.; Mikić, Z.; Linker, J. A.; Lionello, R.

    2017-05-01

    We explore the performance and advantages/disadvantages of using unconditionally stable explicit super time-stepping (STS) algorithms versus implicit schemes with Krylov solvers for integrating parabolic operators in thermodynamic MHD models of the solar corona. Specifically, we compare the second-order Runge-Kutta Legendre (RKL2) STS method with the implicit backward Euler scheme computed using the preconditioned conjugate gradient (PCG) solver with both a point-Jacobi and a non-overlapping domain decomposition ILU0 preconditioner. The algorithms are used to integrate anisotropic Spitzer thermal conduction and artificial kinematic viscosity at time-steps much larger than classic explicit stability criteria allow. A key component of the comparison is the use of an established MHD model (MAS) to compute a real-world simulation on a large HPC cluster. Special attention is placed on the parallel scaling of the algorithms. It is shown that, for a specific problem and model, the RKL2 method is comparable or surpasses the implicit method with PCG solvers in performance and scaling, but suffers from some accuracy limitations. These limitations, and the applicability of RKL methods are briefly discussed.
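
    As a minimal stand-in for the implicit branch described above (not the MAS code, and with no preconditioner), a backward Euler step for 1-D heat conduction can be solved matrix-free with conjugate gradient at a time step far beyond the explicit limit:

```python
import numpy as np

# 1-D heat equation u_t = u_xx, backward Euler: (I - dt*L) u_new = u_old,
# solved with unpreconditioned conjugate gradient (CG).
nx = 200
dx = 1.0 / (nx + 1)
dt = 100 * dx**2                   # 200x the explicit limit dx^2/2

def apply_A(u):
    # Matrix-free action of (I - dt * Laplacian), Dirichlet walls.
    lap = -2.0 * u
    lap[:-1] += u[1:]
    lap[1:] += u[:-1]
    return u - dt * lap / dx**2

def cg(b, tol=1e-10, maxit=1000):
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

xs = np.linspace(dx, 1.0 - dx, nx)
u = np.sin(np.pi * xs)             # slowest-decaying eigenmode
for _ in range(10):
    u = cg(u)
```

    The solve stays stable and the mode decays smoothly despite the large step; the production codes discussed above either precondition this kind of solve (point-Jacobi, ILU0) or replace it entirely with explicit RKL2 super time-stepping.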

  4. Kinetic energy definition in velocity Verlet integration for accurate pressure evaluation.

    PubMed

    Jung, Jaewoon; Kobayashi, Chigusa; Sugita, Yuji

    2018-04-28

    In molecular dynamics (MD) simulations, a proper definition of kinetic energy is essential for controlling pressure as well as temperature in the isothermal-isobaric condition. The virial theorem provides an equation that connects the average kinetic energy with the product of particle coordinate and force. In this paper, we show that the theorem is satisfied in MD simulations with a larger time step and holonomic constraints of bonds only when a proper definition of kinetic energy is used. We provide a novel definition of kinetic energy, which is calculated from velocities at the half-time steps (t - Δt/2 and t + Δt/2) in the velocity Verlet integration method. MD simulations of a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) lipid bilayer and a water box using the kinetic energy definition could reproduce the physical properties in the isothermal-isobaric condition properly. We also develop a multiple time step (MTS) integration scheme with the kinetic energy definition. MD simulations with the MTS integration for the DPPC and water box systems provided the same quantities as the velocity Verlet integration method, even when the thermostat and barostat are updated less frequently.

  5. Multi-scale Slip Inversion Based on Simultaneous Spatial and Temporal Domain Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Liu, W.; Yao, H.; Yang, H. Y.

    2017-12-01

    Finite fault inversion is a widely used method to study earthquake rupture processes. Previous studies have proposed different methods to implement finite fault inversion, including time-domain, frequency-domain, and wavelet-domain methods. Many studies have found that different frequency bands show different characteristics of the seismic rupture (e.g., Wang and Mori, 2011; Yao et al., 2011, 2013; Uchide et al., 2013; Yin et al., 2017). Generally, lower frequency waveforms correspond to larger-scale rupture characteristics, while higher frequency data are representative of smaller-scale ones. Therefore, multi-scale analysis can help us understand the earthquake rupture process thoroughly, from larger scales to smaller scales. By use of the wavelet transform, wavelet-domain methods can analyze both the time and frequency information of signals at different scales. Traditional wavelet-domain methods (e.g., Ji et al., 2002) implement finite fault inversion with both lower and higher frequency signals together to recover larger-scale and smaller-scale characteristics of the rupture process simultaneously. Here we propose an alternative strategy with a two-step procedure: first constraining the larger-scale characteristics with lower frequency signals, and then resolving the smaller-scale ones with higher frequency signals. We have designed synthetic tests to verify our strategy and compare it with the traditional one. We have also applied our strategy to study the 2015 Gorkha, Nepal earthquake using tele-seismic waveforms. Both the traditional method and our two-step strategy analyze the data only in different temporal scales (i.e., different frequency bands), while the spatial distribution of model parameters also shows multi-scale characteristics.
A more sophisticated strategy is to transfer the slip model into different spatial scales, and then analyze the smooth slip distribution (larger scales) with lower frequency data firstly and more detailed slip distribution (smaller scales) with higher frequency data subsequently. We are now implementing the slip inversion using both spatial and temporal domain wavelets. This multi-scale analysis can help us better understand frequency-dependent rupture characteristics of large earthquakes.

  6. Combining Fast-Walking Training and a Step Activity Monitoring Program to Improve Daily Walking Activity After Stroke: A Preliminary Study.

    PubMed

    Danks, Kelly A; Pohlig, Ryan; Reisman, Darcy S

    2016-09-01

    To determine preliminary efficacy and to identify baseline characteristics predicting who would benefit most from fast walking training plus a step activity monitoring program (FAST+SAM) compared with fast walking training (FAST) alone in persons with chronic stroke. Randomized controlled trial with blinded assessors. Outpatient clinical research laboratory. Individuals (N=37) >6 months poststroke. Subjects were assigned to either FAST, which was walking training at their fastest possible speed on the treadmill (30min) and overground 3 times per week for 12 weeks, or FAST+SAM. The step activity monitoring program consisted of daily step monitoring with an activity monitor, goal setting, and identification of barriers to activity and strategies to overcome barriers. Daily step activity metrics (steps/day [SPD], time walking per day), walking speed, and 6-minute walk test (6MWT) distance. There was a significant effect of time for both groups, with all outcomes improving from pre- to posttraining (all P values <.05). The FAST+SAM was superior to FAST for 6MWT (P=.018), with a larger increase in the FAST+SAM group. The interventions had differential effectiveness based on baseline step activity. Sequential moderated regression models demonstrated that for subjects with baseline levels of step activity and 6MWT distances that were below the mean, the FAST+SAM intervention was more effective than FAST (1715±1584 vs 254±933 SPD; P<.05 for overall model and ΔR(2) for SPD and 6MWT). The addition of a step activity monitoring program to a fast walking training intervention may be most effective in persons with chronic stroke who have initial low levels of walking endurance and activity. Regardless of baseline performance, the FAST+SAM intervention was more effective for improving walking endurance. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  7. Scientists Shaping the Discussion

    NASA Astrophysics Data System (ADS)

    Abraham, J. A.; Weymann, R.; Mandia, S. A.; Ashley, M.

    2011-12-01

    Scientific studies which directly impact the larger society require an engagement between the scientists and the larger public. With respect to research on climate change, many third-party groups report on scientific findings and thereby serve as an intermediary between the scientist and the public. In many cases, the third-party reporting misinterprets the findings and conveys inaccurate information to the media and the public. To remedy this, many scientists are now taking a more active role in conveying their work directly to interested parties. In addition, some scientists are taking the further step of engaging with the general public to answer basic questions related to climate change - even on sub-topics which are unrelated to scientists' own research. Nevertheless, many scientists are reluctant to engage the general public or the media. The reasons for scientific reticence are varied but most commonly are related to fear of public engagement, concern about the time required to properly engage the public, or concerns about the impact to their professional reputations. However, for those scientists who are successful, these engagement activities provide many benefits. Scientists can increase the impact of their work, and they can help society make informed choices on significant issues, such as mitigating global warming. Here we provide some concrete steps that scientists can take to ensure that their public engagement is successful. These steps include: (1) cultivating relationships with reporters, (2) crafting clear, easy to understand messages that summarize their work, (3) relating science to everyday experiences, and (4) constructing arguments which appeal to a wide-ranging audience. With these steps, we show that scientists can efficiently deal with concerns that would otherwise inhibit their public engagement. Various resources will be provided that allow scientists to continue work on these key steps.

  8. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE PAGES

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...

    2018-04-17

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  9. Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models

    NASA Astrophysics Data System (ADS)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.

    2018-04-01

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
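
    The central idea of this record - treat the stiff acoustic terms implicitly so the time step is no longer limited by the fastest waves, and keep the rest explicit - can be illustrated with a first-order IMEX (Euler) step on a toy split ODE. This is a minimal sketch with assumed example rates, not the ARK schemes or atmospheric splittings of the paper:

```python
def imex_euler(y0, lam_stiff, lam_soft, dt, n_steps):
    """First-order IMEX step for y' = lam_stiff*y + lam_soft*y:
    the stiff term is treated implicitly (backward Euler), the
    non-stiff term explicitly (forward Euler)."""
    y = y0
    for _ in range(n_steps):
        # y_new = y + dt*lam_soft*y + dt*lam_stiff*y_new, solved for y_new
        y = (y + dt * lam_soft * y) / (1.0 - dt * lam_stiff)
    return y

# The stiff rate is far beyond the explicit stability limit dt < 2/|lam_stiff|,
# yet the IMEX iteration stays stable and decays, as the true solution does.
y_end = imex_euler(1.0, lam_stiff=-1000.0, lam_soft=-0.5, dt=0.1, n_steps=100)
```

    With lam_stiff = -1000, a fully explicit Euler step would be unstable for dt > 0.002; the IMEX step remains stable at dt = 0.1 because the stiff term is advanced implicitly.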

  10. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  11. Knee Joint Kinematics and Kinetics During a Lateral False-Step Maneuver

    PubMed Central

    Golden, Grace M.; Pavol, Michael J.; Hoffman, Mark A.

    2009-01-01

    Context: Cutting maneuvers have been implicated as a mechanism of noncontact anterior cruciate ligament (ACL) injuries in collegiate female basketball players. Objective: To investigate knee kinematics and kinetics during running when the width of a single step, relative to the path of travel, was manipulated: a lateral false-step maneuver. Design: Crossover design. Setting: University biomechanics laboratory. Patients or Other Participants: Thirteen female collegiate basketball athletes (age = 19.7 ± 1.1 years, height = 172.3 ± 8.3 cm, mass = 71.8 ± 8.7 kg). Intervention(s): Three conditions: normal straight-ahead running, lateral false step of width 20% of body height, and lateral false step of width 35% of body height. Main Outcome Measure(s): Peak angles and internal moments for knee flexion, extension, abduction, adduction, internal rotation, and external rotation. Results: Differences were noted among conditions in peak knee angles (flexion [P < .01], extension [P = .02], abduction [P < .01], and internal rotation [P < .01]) and peak internal knee moments (abduction [P < .01], adduction [P < .01], and internal rotation [P = .03]). The lateral false step of width 35% of body height was associated with larger peak flexion, abduction, and internal rotation angles and larger peak abduction, adduction, and internal rotation moments than normal running. Peak flexion and internal rotation angles were also larger for the lateral false step of width 20% of body height than for normal running, whereas peak extension angle was smaller. Peak internal rotation angle increased progressively with increasing step width. Conclusions: Performing a lateral false-step maneuver resulted in changes in knee kinematics and kinetics compared with normal running. The differences observed for lateral false steps were consistent with proposed mechanisms of ACL loading, suggesting that lateral false steps represent a hitherto neglected mechanism of noncontact ACL injury. PMID:19771289

  12. Algorithms of GPU-enabled reactive force field (ReaxFF) molecular dynamics.

    PubMed

    Zheng, Mo; Li, Xiaoxia; Guo, Li

    2013-04-01

    Reactive force field (ReaxFF), a recent and novel bond-order potential, allows reactive molecular dynamics (ReaxFF MD) simulations for modeling larger and more complex molecular systems involving chemical reactions when compared with computation-intensive quantum mechanical methods. However, ReaxFF MD can be approximately 10-50 times slower than classical MD due to its explicit modeling of bond forming and breaking, the dynamic charge equilibration at each time-step, and a time-step one order of magnitude smaller than in classical MD, all of which pose significant computational challenges in reaching spatio-temporal scales of nanometers and nanoseconds. The very recent advances in graphics processing units (GPUs) provide not only highly favorable performance for GPU-enabled MD programs compared with CPU implementations but also an opportunity to cope with the computing-power and memory demands imposed on computer hardware by ReaxFF MD. In this paper, we present the algorithms of GMD-Reax, the first GPU-enabled ReaxFF MD program, with significantly improved performance surpassing CPU implementations on desktop workstations. The performance of GMD-Reax has been benchmarked on a PC equipped with an NVIDIA C2050 GPU for coal pyrolysis simulation systems ranging from 1378 to 27,283 atoms. GMD-Reax achieved speedups as high as 12 times over van Duin et al.'s FORTRAN codes in LAMMPS on 8 CPU cores and 6 times over LAMMPS' C codes based on PuReMD, in terms of the simulation time per time-step averaged over 100 steps. GMD-Reax could be used as a new and efficient computational tool for exploring very complex molecular reactions via ReaxFF MD simulation on desktop workstations. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. An efficient and reliable predictive method for fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-13

    In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: First, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.
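
    The time-driven idea - advance all particles with one fixed time-step and resolve only the collisions found at the end of the step, rather than locating each collision time exactly as event-driven hard-sphere methods do - can be sketched in one dimension for equal masses. This is an illustration, not the authors' code; the paper's velocity-correction term and coarse-graining are omitted:

```python
def tdhs_step(x, v, dt, diameter):
    """One time-driven hard-sphere step for equal-mass particles in 1-D:
    free flight for dt, then an elastic velocity exchange for any pair
    found overlapping and still approaching."""
    x = [xi + vi * dt for xi, vi in zip(x, v)]
    for i in range(len(x) - 1):
        if x[i + 1] - x[i] < diameter and v[i] > v[i + 1]:
            v[i], v[i + 1] = v[i + 1], v[i]  # equal-mass elastic collision
    return x, v

# two particles approach head-on, collide once, and separate
x, v = [0.0, 0.5], [1.0, -1.0]
for _ in range(10):
    x, v = tdhs_step(x, v, dt=0.05, diameter=0.1)
```

    The velocity swap conserves momentum and kinetic energy exactly; the approximation relative to event-driven schemes is that the collision is resolved at the end of the step rather than at the exact contact time, which is what permits the larger fixed time-step.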

  14. An efficient and reliable predictive method for fluidized bed simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-29

    In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: First, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  15. Performance analysis of parallel gravitational N-body codes on large GPU clusters

    NASA Astrophysics Data System (ADS)

    Huang, Si-Yi; Spurzem, Rainer; Berczik, Peter

    2016-01-01

    We compare the performance of two very different parallel gravitational N-body codes for astrophysical simulations on large Graphics Processing Unit (GPU) clusters, both of which are pioneers in their own fields as well as on certain mutual scales - NBODY6++ and Bonsai. We carry out benchmarks of the two codes by analyzing their performance, accuracy, and efficiency through the modeling of structure decomposition and timing measurements. We find that both codes are heavily optimized to leverage the computational potential of GPUs, as their performance has approached half of the maximum single-precision performance of the underlying GPU cards. With such performance we predict that a speed-up of 200 - 300 can be achieved when up to 1k processors and GPUs are employed simultaneously. We discuss quantitative comparisons of the two codes, finding that in identical cases Bonsai adopts larger time steps as well as larger relative energy errors than NBODY6++, typically 10 - 50 times larger, depending on the chosen parameters of the codes. Although the two codes are built for different astrophysical applications, under specified conditions they may overlap in performance at certain physical scales, thus allowing the user to choose either one by fine-tuning parameters accordingly.
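
    At the core of both codes is the evaluation of pairwise gravitational forces. A direct-summation O(N^2) kernel of the kind NBODY6++ offloads to GPUs (Bonsai instead approximates far-field forces with a tree) might look like the following sketch, with an assumed Plummer softening length:

```python
import numpy as np

def accelerations(pos, mass, eps=1e-3):
    """Direct-summation gravitational accelerations (G = 1) with
    Plummer softening eps to regularize close encounters."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        d = pos - pos[i]                    # separation vectors to all bodies
        r2 = (d * d).sum(axis=1) + eps**2   # softened squared distances
        r2[i] = np.inf                      # exclude self-interaction
        acc[i] = (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
    return acc

# two equal masses attract each other with equal and opposite accelerations
acc = accelerations(np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]), np.ones(2))
```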

  16. Physical micro-environment interventions for healthier eating in the workplace: protocol for a stepped wedge randomised controlled pilot trial.

    PubMed

    Vasiljevic, Milica; Cartwright, Emma; Pechey, Rachel; Hollands, Gareth J; Couturier, Dominique-Laurent; Jebb, Susan A; Marteau, Theresa M

    2017-01-01

    An estimated one third of energy is consumed in the workplace. The workplace is therefore an important context in which to reduce energy consumption to tackle the high rates of overweight and obesity in the general population. Altering environmental cues for food selection and consumption-physical micro-environment or 'choice architecture' interventions-has the potential to reduce energy intake. The first aim of this pilot trial is to estimate the potential impact upon energy purchased of three such environmental cues (size of portions, packages and tableware; availability of healthier vs. less healthy options; and energy labelling) in workplace cafeterias. A second aim of this pilot trial is to examine the feasibility of recruiting eligible worksites, and identify barriers to the feasibility and acceptability of implementing the interventions in preparation for a larger trial. Eighteen worksite cafeterias in England will be assigned to one of three intervention groups to assess the impact on energy purchased of altering (a) portion, package and tableware size (n = 6); (b) availability of healthier options (n = 6); and (c) energy (calorie) labelling (n = 6). Using a stepped wedge design, sites will implement allocated interventions at different time periods, as randomised. This pilot trial will examine the feasibility of recruiting eligible worksites, and the feasibility and acceptability of implementing the interventions in preparation for a larger trial. In addition, a series of linear mixed models will be used to estimate the impact of each intervention on total energy (calories) purchased per time frame of analysis (daily or weekly) controlling for the total sales/transactions adjusted for calendar time and with random effects for worksite. These analyses will allow an estimate of an effect size of each of the three proposed interventions, which will form the basis of the sample size calculations necessary for a larger trial. ISRCTN52923504.
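
    In a stepped wedge design every site begins in the control condition and crosses over to the intervention at a randomized time period, so that all sites are intervened by the final period. A minimal schedule generator for an 18-site setting (purely illustrative; this is not the trial's actual randomization procedure):

```python
import random

def stepped_wedge(n_sites, n_periods, seed=0):
    """Return a schedule[site][period] matrix of 0 (control) / 1 (intervention):
    sites are spread as evenly as possible over crossover periods
    1..n_periods-1, assigned at random, and never switch back."""
    rng = random.Random(seed)
    steps = [1 + i % (n_periods - 1) for i in range(n_sites)]
    rng.shuffle(steps)
    return [[1 if period >= step else 0 for period in range(n_periods)]
            for step in steps]

schedule = stepped_wedge(n_sites=18, n_periods=4)
```

    Every row starts at 0 and ends at 1, and is non-decreasing: each site contributes both control and intervention observations, which is what the linear mixed model with site random effects exploits.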

  17. Cyclic steps due to the surge-type turbidity currents in flume experiments: effect of surge duration on the topography of steps

    NASA Astrophysics Data System (ADS)

    Yokokawa, Miwa; Yamano, Junpei; Miyai, Masatomo; Hughes Clarke, John; Izumi, Norihiro

    2017-04-01

    Field observations of turbidity currents and seabed topography on the Squamish delta in British Columbia, Canada, revealed that cyclic steps are formed by surge-type turbidity currents (e.g., Hughes Clarke et al., 2014). The high-density portion of the flow, which affects the sea-floor morphology, lasted only 30-60 seconds. We are conducting flume experiments to investigate the relationship between the conditions of the surges and the topography of the resultant steps; in this presentation we discuss the effect of surge duration on step topography. The experiments were performed at the Osaka Institute of Technology. A flume 7.0 m long, 0.3 m deep, and 2 cm wide, tilted at 7 degrees, was suspended in a larger tank (7.6 m long, 1.2 m deep, and 0.3 m wide) filled with water. As the source of the turbidity currents, a mixture of salt water (1.17 g/cm^3) and plastic particles (1.3 g/cm^3, 0.1-0.18 mm in diameter) was prepared; the sediment concentration in the head tank was 6.1 weight % (5.5 volume %). This mixture was poured into the upstream end of the inner flume from the head tank for either 3 seconds or 7 seconds, and 140 surges were made in each case. The discharge of the currents fluctuated, ranging from 306 to 870 mL for the 3-s surges and from 1134 to 2030 mL for the 7-s surges. As a result, five or six steps were formed in each case. For the 3-s surges, the steps in the upstream portion of the flume migrated vigorously upstream, whereas for the 7-s surges it was the steps in the downstream portion that migrated upstream. The wavelengths and wave heights of the steps formed by the 3-s surges were larger than those of the 7-s surges in the upstream portion of the flume, but smaller in the downstream portion. Under these conditions of slope and concentration, a longer surge duration, i.e. a larger discharge, transports sediment farther and makes the steps larger and more active farther from the source of the currents.

  18. Avionics Integrity Issues Presented during NAECON (National Aerospace and Electronics Convention) 1984.

    DTIC Science & Technology

    1984-12-01

    PLASTIC ABOVE BOOC 2. PTH COPPER SHOULD BE ABOVE 62 ELONGATION, 5OKPSI 3. PTH LIFE IS INCREASED WITH LARGER PTH AND POLYIMIDE PWB'S 4. SOLDER JOINT DEFORMATION IS PREDOMINANTLY PLASTIC, THEREFORE HIGH SOLDER DUCTILITY IS IMPORTANT - AVOID SOLDER CONTAMINANTS - AVOID HOT STORAGE OR SLOW COOLDOWN SUMMARY ... o UTILIZATION OF STEP-STRESS TECHNIQUE o MANAGEMENT CONSIDERATIONS o PLASTIC PARTS o FAILURE ANALYSIS/DPA o SHELF TIME VS RESCREEN o USE OF ACCELERATED

  19. On the existence of the optimal order for wavefunction extrapolation in Born-Oppenheimer molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Jun; Wang, Han, E-mail: wang-han@iapcm.ac.cn; CAEP Software Center for High Performance Numerical Simulation, Beijing

    2016-06-28

    Wavefunction extrapolation greatly reduces the number of self-consistent field (SCF) iterations and thus the overall computational cost of Born-Oppenheimer molecular dynamics (BOMD) based on Kohn–Sham density functional theory. Going against the intuition that a higher extrapolation order possesses better accuracy, we demonstrate, from both theoretical and numerical perspectives, that the extrapolation accuracy first increases and then decreases with respect to the order, and that an optimal extrapolation order in terms of the minimal number of SCF iterations always exists. We also prove that the optimal order tends to be larger when using larger MD time steps or more strict SCF convergence criteria. Through example BOMD simulations of a solid copper system, we show that the optimal extrapolation order covers a broad range when varying the MD time step or the SCF convergence criterion. Therefore, we suggest the necessity for BOMD simulation packages to open the user interface and to provide more choices on the extrapolation order. Another factor that may influence the extrapolation accuracy is the alignment scheme that eliminates the discontinuity in the wavefunctions with respect to the atomic or cell variables. We prove the equivalence between the two existing schemes; thus the implementation of either of them does not lead to an essential difference in the extrapolation accuracy.
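
    The extrapolation in question is standard polynomial forward extrapolation of the previous converged wavefunctions, whose order-p coefficients are alternating binomials. A scalar sketch (real wavefunction extrapolation acts on the full coefficient arrays, and the optimal p must be found empirically, as the record explains):

```python
from math import comb

def extrapolate(history, p):
    """Order-p polynomial extrapolation of the next value from the last
    p values: x_{n+1} ~ sum_{k=1..p} (-1)^(k+1) * C(p, k) * x_{n+1-k}.
    Exact whenever the data lie on a polynomial of degree below p."""
    assert len(history) >= p
    return sum((-1) ** (k + 1) * comb(p, k) * history[-k] for k in range(1, p + 1))
```

    For p = 2 this reduces to the familiar linear prediction 2*x_n - x_{n-1}; for p = 3 it gives 3*x_n - 3*x_{n-1} + x_{n-2}, which reproduces quadratic trajectories exactly.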

  20. Simplified energy-balance model for pragmatic multi-dimensional device simulation

    NASA Astrophysics Data System (ADS)

    Chang, Duckhyun; Fossum, Jerry G.

    1997-11-01

    To pragmatically account for non-local carrier heating and hot-carrier effects such as velocity overshoot and impact ionization in multi-dimensional numerical device simulation, a new simplified energy-balance (SEB) model is developed and implemented in FLOODS [16] as a pragmatic option. In the SEB model, the energy-relaxation length is estimated from a pre-process drift-diffusion simulation using the carrier-velocity distribution predicted throughout the device domain, and is used without change in a subsequent simpler hydrodynamic (SHD) simulation. The new SEB model was verified by comparison of two-dimensional SHD and full HD DC simulations of a submicron MOSFET. The SHD simulations yield detailed distributions of carrier temperature, carrier velocity, and impact-ionization rate, which agree well with the full HD simulation results obtained with FLOODS. The most noteworthy feature of the new SEB/SHD model is its computational efficiency, which results from reduced Newton iteration counts caused by the enhanced linearity. Relative to full HD, SHD simulation times can be shorter by as much as an order of magnitude since larger voltage steps for DC sweeps and larger time steps for transient simulations can be used. The improved computational efficiency can enable pragmatic three-dimensional SHD device simulation as well, for which the SEB implementation would be straightforward as it is in FLOODS or any robust HD simulator.

  1. Core-shell TiO2@ZnO nanorods for efficient ultraviolet photodetection

    NASA Astrophysics Data System (ADS)

    Panigrahi, Shrabani; Basak, Durga

    2011-05-01

    Core-shell TiO2@ZnO nanorods (NRs) have been fabricated by a simple two-step method: growth of a ZnO NR array by an aqueous chemical technique, followed by coating of the NRs with a solution of titanium isopropoxide [Ti(OC3H7)4] and a heating step to form the shell. The core-shell nanocomposites are composed of single-crystalline ZnO NRs coated with a thin TiO2 shell layer whose thickness was controlled by varying the number of coatings (one, three, and five times). The ultraviolet (UV) emission intensity of the nanocomposite is largely quenched due to efficient electron-hole separation reducing the band-to-band recombination. The UV photoconductivity of the core-shell structure with three TiO2 coatings is greatly enhanced due to photoelectron transfer between the core and the shell. The UV photosensitivity of the nanocomposite becomes four times larger, while the photocurrent decay during steady UV illumination is decreased almost 7-fold compared to the as-grown ZnO NRs, indicating the high efficiency of these core-shell structures as UV sensors.

  2. Magnetic Resonance Imaging-Guided Adaptive Radiation Therapy: A "Game Changer" for Prostate Treatment?

    PubMed

    Pathmanathan, Angela U; van As, Nicholas J; Kerkmeijer, Linda G W; Christodouleas, John; Lawton, Colleen A F; Vesprini, Danny; van der Heide, Uulke A; Frank, Steven J; Nill, Simeon; Oelfke, Uwe; van Herk, Marcel; Li, X Allen; Mittauer, Kathryn; Ritter, Mark; Choudhury, Ananya; Tree, Alison C

    2018-02-01

    Radiation therapy to the prostate involves increasingly sophisticated delivery techniques and changing fractionation schedules. With a low estimated α/β ratio, a larger dose per fraction would be beneficial, with moderate fractionation schedules rapidly becoming a standard of care. The integration of a magnetic resonance imaging (MRI) scanner and linear accelerator allows for accurate soft tissue tracking with the capacity to replan for the anatomy of the day. Extreme hypofractionation schedules become a possibility using the potentially automated steps of autosegmentation, MRI-only workflow, and real-time adaptive planning. The present report reviews the steps involved in hypofractionated adaptive MRI-guided prostate radiation therapy and addresses the challenges for implementation. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Metal-organic framework (MOF) materials are used in many energy-efficient and green technologies. PNNL researchers may bring their commercial use a step closer to reality by developing a new way to create these materials in larger quantities, with better quality, and more quickly than ever before. This video is a step-by-step look at how PNNL scientists create MOFs with 80% efficacy.

  4. Stereo Particle Image Velocimetry Measurements of Transition Downstream of a Forward-Facing Step in a Swept-Wing Boundary Layer

    NASA Technical Reports Server (NTRS)

    Eppink, Jenna L.

    2017-01-01

    Stereo particle image velocimetry measurements were performed downstream of a forward-facing step in a stationary-crossflow dominated flow. Three different step heights were studied with the same leading-edge roughness configuration to determine the effect of the step on the evolution of the stationary-crossflow. Above the critical step height, which is approximately 68% of the boundary-layer thickness at the step, the step caused a significant increase in the growth of the stationary crossflow. For the largest step height studied (68%), premature transition occurred shortly downstream of the step. The stationary crossflow amplitude only reached approximately 7% of U(sub e) in this case, which suggests that transition does not occur via the high-frequency secondary instabilities typically associated with stationary crossflow transition. The next largest step of 60% delta still caused a significant impact on the growth of the stationary crossflow downstream of the step, but the amplitude eventually returned to that of the baseline case, and the transition front remained the same. The smallest step height (56%) only caused a small increase in the stationary crossflow amplitude and no change in the transition front. A final case was studied in which the roughness on the leading edge of the model was enhanced for the lowest step height case to determine the impact of the stationary crossflow amplitude on transition. The stationary crossflow amplitude was increased by approximately four times, which resulted in premature transition for this step height. However, some notable differences were observed in the behavior of the stationary crossflow mode, which indicate that the interaction mechanism which results in the increased growth of the stationary crossflow downstream of the step may be different in this case compared to the larger step heights.

  5. Molecular dynamics with rigid bodies: Alternative formulation and assessment of its limitations when employed to simulate liquid water

    NASA Astrophysics Data System (ADS)

    Silveira, Ana J.; Abreu, Charlles R. A.

    2017-09-01

    Sets of atoms collectively behaving as rigid bodies are often used in molecular dynamics to model entire molecules or parts thereof. This is a coarse-graining strategy that eliminates degrees of freedom and supposedly admits larger time steps without abandoning the atomistic character of a model. In this paper, we rely on a particular factorization of the rotation matrix to simplify the mechanical formulation of systems containing rigid bodies. We then propose a new derivation for the exact solution of torque-free rotations, which are employed as part of a symplectic numerical integration scheme for rigid-body dynamics. We also review methods for calculating pressure in systems of rigid bodies with pairwise-additive potentials and periodic boundary conditions. Finally, simulations of liquid phases, with special focus on water, are employed to analyze the numerical aspects of the proposed methodology. Our results show that energy drift is avoided for time step sizes up to 5 fs, but only if a proper smoothing is applied to the interatomic potentials. Despite this, the effects of discretization errors are relevant, even for smaller time steps. These errors induce, for instance, a systematic failure of the expected equipartition of kinetic energy between translational and rotational degrees of freedom.
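
    For the special case of a spherically symmetric inertia tensor, the exact torque-free solution used as a building block in such integrators reduces to uniform rotation about a fixed axis, which can be propagated with the Rodrigues formula. This is a sketch of that special case only; the paper's factorization handles the general asymmetric body:

```python
import numpy as np

def rotate_exact(R, omega, dt):
    """Exact torque-free update of an orientation matrix R over dt for a
    spherical body: rotation by angle |omega|*dt about the fixed axis
    omega, built with the Rodrigues formula."""
    theta = np.linalg.norm(omega) * dt
    if theta == 0.0:
        return R
    k = omega / np.linalg.norm(omega)
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    rot = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    return rot @ R

# spin at 2*pi rad per unit time about z; after total time 1.0 the body
# has completed one full turn, so R returns (numerically) to the identity
R = np.eye(3)
for _ in range(1000):
    R = rotate_exact(R, omega=np.array([0.0, 0.0, 2 * np.pi]), dt=0.001)
```

    Because each factor is an exact rotation, R stays orthogonal to round-off regardless of the step size, which is the property that makes such exact sub-solutions attractive inside symplectic splitting schemes.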

  6. hp-Adaptive time integration based on the BDF for viscous flows

    NASA Astrophysics Data System (ADS)

    Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.

    2015-06-01

    This paper presents a procedure based on the Backward Differentiation Formulas of orders 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selections to control respectively the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user-prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low-order time integrators while accurate solutions require high-order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the chosen method of integration operates inside its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense-output techniques to compute the solution at off-step points.
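
    The h-adaptive ingredient - estimate the local error, accept or reject the step, then rescale the stepsize toward the user tolerance - can be sketched with BDF1 (backward Euler) and a step-doubling error estimate on a scalar linear ODE. This keeps the order fixed, so it illustrates only the h-part of the paper's hp-strategy, and the controller constants are illustrative:

```python
def backward_euler(y, lam, h):
    """One BDF1 step for y' = lam * y (the linear solve is explicit here)."""
    return y / (1.0 - h * lam)

def adaptive_bdf1(y0, lam, t_end, tol=1e-6, h=0.1):
    """Integrate y' = lam*y with step-doubling error control: compare one
    step of size h against two of size h/2, accept when their difference
    is below tol, and rescale h toward the tolerance either way."""
    t, y = 0.0, y0
    while t < t_end:
        h = min(h, t_end - t)
        big = backward_euler(y, lam, h)
        small = backward_euler(backward_euler(y, lam, h / 2), lam, h / 2)
        err = abs(big - small)
        if err <= tol:                      # accept the more accurate half-steps
            t, y = t + h, small
        # proportional controller for a first-order method (err ~ h^2)
        h *= min(2.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y

y = adaptive_bdf1(1.0, -50.0, 1.0)  # tracks the decaying exact solution e^(-50 t)
```

    A rejected step is simply retried with the reduced h on the next loop iteration, the scalar analogue of the paper's step-rejection mechanism.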

  7. The effects of aging on postural control and selective attention when stepping down while performing a concurrent auditory response task.

    PubMed

    Tsang, William W N; Lam, Nazca K Y; Lau, Kit N L; Leung, Harry C H; Tsang, Crystal M S; Lu, Xi

    2013-12-01

    To investigate the effects of aging on postural control and cognitive performance during single- and dual-tasking. A cross-sectional comparative study was conducted in a university motion analysis laboratory. Young adults (n = 30; age 21.9 ± 2.4 years) and older adults (n = 30; age 71.9 ± 6.4 years) were recruited. Postural control after stepping down was measured with and without performing a concurrent auditory response task. Measurements included: (1) reaction time and (2) error rate in performing the cognitive task; and (3) total sway path and (4) total sway area after stepping down. Our findings showed that the older adults had significantly longer reaction times and higher error rates than the younger subjects in both the single-tasking and dual-tasking conditions. The older adults had significantly longer reaction times and higher error rates when dual-tasking compared with single-tasking, but the younger adults did not. The older adults demonstrated a significantly shorter total sway path, but a larger total sway area, in single-leg stance after stepping down than the young adults. The older adults showed no significant change in total sway path and area between the dual-tasking and single-tasking conditions, while the younger adults showed significant decreases in sway. Older adults prioritize postural control by sacrificing cognitive performance when faced with dual-tasking.

  8. Daily activity during stability and exacerbation of chronic obstructive pulmonary disease

    PubMed Central

    2014-01-01

    Background During most COPD exacerbations, patients continue to live in the community, but there is little information on changes in activity during exacerbations due to the difficulty of obtaining recent, prospective baseline data. Methods Patients recorded on daily diary cards any worsening in respiratory symptoms, peak expiratory flow (PEF) and the number of steps taken per day, measured with a Yamax Digi-walker pedometer. Exacerbations were defined by increased respiratory symptoms, and the number of exacerbations experienced in the 12 months preceding the recording of daily step count was used to divide patients into frequent (≥2/year) or infrequent exacerbators. Results The 73 COPD patients (88% male) had a mean (±SD) age of 71(±8) years and FEV1 53(±16)% predicted. They recorded pedometer data on a median 198 days (IQR 134–353). At exacerbation onset, symptom count rose by 1.9(±1.3) and PEF fell by 7(±13) l/min. Mean daily step count fell from 4154(±2586) steps/day during a preceding baseline week to 3673(±2258) steps/day during the initial 7 days of exacerbation (p = 0.045). Patients with larger falls in activity at exacerbation took longer to recover to their stable level (rho = −0.56; p < 0.001). Recovery in daily step count was faster (median 3.5 days) than for exacerbation symptoms (median 11 days; p < 0.001). Recovery in step count was also faster in untreated compared to treated exacerbations (p = 0.030). Daily step count fell faster over time in the 40 frequent exacerbators, by 708 steps/year, compared to 338 steps/year in 33 infrequent exacerbators (p = 0.002). Conclusions COPD exacerbations reduce physical activity, and frequent exacerbations accelerate the decline in activity over time. PMID:24885188

  9. Vectorization of a classical trajectory code on a floating point systems, Inc. Model 164 attached processor.

    PubMed

    Kraus, Wayne A; Wagner, Albert F

    1986-04-01

    A triatomic classical trajectory code has been modified by extensive vectorization of the algorithms to achieve much improved performance on an FPS 164 attached processor. Extensive timings on both the FPS 164 and a VAX 11/780 with floating point accelerator are presented as a function of the number of trajectories simultaneously run. The timing tests involve a potential energy surface of the LEPS variety and trajectories with 1000 time steps. The results indicate that vectorization yields timing improvements on both the VAX and the FPS. For larger numbers of trajectories run simultaneously, the vectorized FPS code is up to a factor of 25 faster than the VAX code. Copyright © 1986 John Wiley & Sons, Inc.
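The vectorization-over-trajectories idea in this record can be illustrated with a minimal sketch: a whole batch of trajectories is advanced at once with array operations instead of one trajectory per loop iteration. The harmonic-oscillator force and symplectic-Euler update below are illustrative stand-ins for the LEPS surface and the code's actual integrator, not details from the paper.

```python
import numpy as np

def step_batch(q, p, dt, k=1.0, m=1.0):
    """Advance ALL trajectories by one symplectic-Euler time step.

    q, p are arrays holding one coordinate/momentum per trajectory, so
    each line below evaluates the force or update for every trajectory
    in a single vector operation.
    """
    p = p - dt * k * q        # force -k*q evaluated for the whole batch
    q = q + dt * p / m
    return q, p

rng = np.random.default_rng(0)
n_traj = 1000
q = rng.normal(size=n_traj)   # initial coordinates, one per trajectory
p = rng.normal(size=n_traj)   # initial momenta

for _ in range(1000):         # 1000 time steps, as in the paper's timing tests
    q, p = step_batch(q, p, dt=0.01)
```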

  10. A stabilized Runge–Kutta–Legendre method for explicit super-time-stepping of parabolic and mixed equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-15

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s2 times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful in parabolic problems with variable diffusion coefficients. This includes variable coefficient parabolic equations that might give rise to skew symmetric terms. The RKC1 and RKC2 schemes do not share this convex monotonicity preserving property. One-dimensional and two-dimensional von Neumann stability analyses of RKC1, RKC2, RKL1 and RKL2 are also presented, showing that the latter two have some advantages. The paper includes several details to facilitate implementation. A detailed accuracy analysis is presented to show that the methods reach their design accuracies. A stringent set of test problems is also presented. To demonstrate the robustness and versatility of our methods, we show their successful operation on problems involving linear and non-linear heat conduction and viscosity, resistive magnetohydrodynamics, ambipolar diffusion dominated magnetohydrodynamics, level set methods and flux limited radiation diffusion. In a prior paper (Meyer, Balsara and Aslam 2012 [36]) we have also presented an extensive test-suite showing that the RKL2 method works robustly in the presence of shocks in an anisotropically conducting, magnetized plasma.

  11. A stabilized Runge-Kutta-Legendre method for explicit super-time-stepping of parabolic and mixed equations

    NASA Astrophysics Data System (ADS)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-01

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge-Kutta-like time-steps to advance the parabolic terms by a time-step that is s2 times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge-Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems - a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. 
We call this a convex monotonicity preserving property and show by examples that it is very useful in parabolic problems with variable diffusion coefficients. This includes variable coefficient parabolic equations that might give rise to skew symmetric terms. The RKC1 and RKC2 schemes do not share this convex monotonicity preserving property. One-dimensional and two-dimensional von Neumann stability analyses of RKC1, RKC2, RKL1 and RKL2 are also presented, showing that the latter two have some advantages. The paper includes several details to facilitate implementation. A detailed accuracy analysis is presented to show that the methods reach their design accuracies. A stringent set of test problems is also presented. To demonstrate the robustness and versatility of our methods, we show their successful operation on problems involving linear and non-linear heat conduction and viscosity, resistive magnetohydrodynamics, ambipolar diffusion dominated magnetohydrodynamics, level set methods and flux limited radiation diffusion. In a prior paper (Meyer, Balsara and Aslam 2012 [36]) we have also presented an extensive test-suite showing that the RKL2 method works robustly in the presence of shocks in an anisotropically conducting, magnetized plasma.
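A minimal sketch of one RKL1 superstep for the 1-D heat equation u_t = D u_xx, using the Legendre-recursion coefficients reported by Meyer, Balsara and Aslam: for RKL1 one superstep of size (s^2+s)/2 explicit parabolic steps (close to the s2 factor quoted above) is taken with s cheap explicit stages. The Neumann boundary treatment and the CFL safety factor here are illustrative assumptions, not the paper's test setup.

```python
import numpy as np

def laplacian(u, dx):
    """Second difference with a simple zero-flux (Neumann) boundary."""
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2.0 * (u[1] - u[0]) / dx**2
    lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    return lap

def rkl1_superstep(u, s, dx, D, cfl=0.5):
    """One RKL1 superstep of s stages for u_t = D u_xx."""
    dt_expl = cfl * dx**2 / (2.0 * D)          # single explicit parabolic step
    dt = dt_expl * (s**2 + s) / 2.0            # superstep covered by s stages
    w = 2.0 / (s**2 + s)                       # common factor in the mu~_j
    y_prev = u
    y = u + w * dt * D * laplacian(u, dx)      # stage Y1
    for j in range(2, s + 1):                  # Legendre three-term recursion
        mu, nu = (2.0 * j - 1.0) / j, (1.0 - j) / j
        y, y_prev = (mu * y + nu * y_prev
                     + mu * w * dt * D * laplacian(y, dx)), y
    return y, dt

x = np.linspace(0.0, 1.0, 101)
u0 = np.where(x < 0.5, 1.0, 0.0)               # monotone initial profile
u1, dt = rkl1_superstep(u0, s=5, dx=x[1] - x[0], D=1.0)
```

With s = 5 the superstep advances the solution 15 explicit steps' worth of time for only 5 Laplacian evaluations, and the monotone profile stays within its initial bounds, in line with the monotonicity preserving property described above.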

  12. Initiation of forward gait with lateral occurrence of emotional stimuli: general findings and relevance for pedestrians crossing roads.

    PubMed

    Caffier, D; Gillet, C; Heurley, L P; Bourrelly, A; Barbier, F; Naveteur, J

    2017-03-01

    With reference to theoretical models regarding links between emotions and actions, the present study examined whether the lateral occurrence of an emotional stimulus influences spatial and temporal parameters of gait initiation in 18 younger and 18 older healthy adults. In order to simulate road-crossing hazard for pedestrians, slides of approaching cars were used and they were presented in counterbalanced order with threatening slides from the International Affective Picture System (IAPS) and control slides of safe walking areas. Each slide was presented on the left side of the participant once the first step was initiated. The results evidenced medio-lateral shifts to the left for the first step (right foot) and to the right for the second step (left foot). These shifts were both modulated by the slide contents in such a way that the resulting distance between the screen and the foot (right or left) was larger with the IAPS and traffic slides than with the control slides. The slides did not affect the base of support, step length, step velocity and time of double support. Advancing age influenced the subjective impact of the slides and gait characteristics, but did not modulate medio-lateral shifts. The data extend evidence of fast, emotional modulation of stepping, with theoretical and applied consequences.

  13. Preclinical Studies for Cartilage Repair

    PubMed Central

    Hurtig, Mark B.; Buschmann, Michael D.; Fortier, Lisa A.; Hoemann, Caroline D.; Hunziker, Ernst B.; Jurvelin, Jukka S.; Mainil-Varlet, Pierre; McIlwraith, C. Wayne; Sah, Robert L.; Whiteside, Robert A.

    2011-01-01

    Investigational devices for articular cartilage repair or replacement are considered to be significant risk devices by regulatory bodies. Therefore animal models are needed to provide proof of efficacy and safety prior to clinical testing. The financial commitment and regulatory steps needed to bring a new technology to clinical use can be major obstacles, so the implementation of highly predictive animal models is a pressing issue. Until recently, a reductionist approach using acute chondral defects in immature laboratory species, particularly the rabbit, was considered adequate; however, if successful and timely translation from animal models to regulatory approval and clinical use is the goal, a step-wise development using laboratory animals for screening and early development work followed by larger species such as the goat, sheep and horse for late development and pivotal studies is recommended. Such animals must have fully organized and mature cartilage. Both acute and chronic chondral defects can be used but the latter are more like the lesions found in patients and may be more predictive. Quantitative and qualitative outcome measures such as macroscopic appearance, histology, biochemistry, functional imaging, and biomechanical testing of cartilage provide reliable data to support investment decisions and subsequent applications to regulatory bodies for clinical trials. No one model or species can be considered ideal for pivotal studies, but the larger animal species are recommended for pivotal studies. Larger species such as the horse, goat and pig also allow arthroscopic delivery, and press-fit or sutured implant fixation in thick cartilage as well as second-look arthroscopies and biopsy procedures. PMID:26069576

  14. A Semi-implicit Method for Time Accurate Simulation of Compressible Flow

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles D.; Moin, Parviz

    2001-11-01

    A semi-implicit method for time accurate simulation of compressible flow is presented. The method avoids the acoustic CFL limitation, allowing a time step restricted only by the convective velocity. Centered discretization in both time and space allows the method to achieve zero artificial attenuation of acoustic waves. The method is an extension of the standard low Mach number pressure correction method to the compressible Navier-Stokes equations, and the main feature of the method is the solution of a Helmholtz type pressure correction equation similar to that of Demirdžić et al. (Int. J. Num. Meth. Fluids, Vol. 16, pp. 1029-1050, 1993). The method is attractive for simulation of acoustic combustion instabilities in practical combustors. In these flows, the Mach number is low; therefore the time step allowed by the convective CFL limitation is significantly larger than that allowed by the acoustic CFL limitation, resulting in significant efficiency gains. Also, the method's property of zero artificial attenuation of acoustic waves is important for accurate simulation of the interaction between acoustic waves and the combustion process. The method has been implemented in a large eddy simulation code, and results from several test cases will be presented.
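The efficiency argument in this abstract is easy to quantify: at low Mach number the convective CFL time step exceeds the acoustic one by roughly a factor (u + c)/u ≈ 1/M. A back-of-the-envelope sketch with illustrative numbers (not taken from the paper):

```python
def cfl_steps(dx, u, c, cfl=0.8):
    """Return (convective, acoustic) time-step limits for one cell of size dx."""
    dt_conv = cfl * dx / u           # semi-implicit method: convection only
    dt_acou = cfl * dx / (u + c)     # fully explicit method: sound speed counts too
    return dt_conv, dt_acou

# Example: combustor-like conditions, u = 10 m/s, c = 500 m/s (Mach 0.02)
dt_conv, dt_acou = cfl_steps(dx=1e-3, u=10.0, c=500.0)
ratio = dt_conv / dt_acou            # (u + c)/u = 51: far fewer steps needed
```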

  15. A successful backward step correlates with hip flexion moment of supporting limb in elderly people.

    PubMed

    Takeuchi, Yahiko

    2018-01-01

    The objective of this study was to determine the positional relationship between the center of mass (COM) and the center of pressure (COP) at the time of step landing, and to examine their relationship with the joint moments exerted by the supporting limb, with regard to factors of the successful backward step response. The study population comprised 8 community-dwelling elderly people who were observed to take successive multiple steps after landing during backward stepping. Using a motion capture system and force plate, we measured the COM, COP and COM-COP deviation distance on landing during backward stepping. In addition, we measured the moment of the supporting limb joint during backward stepping. The multi-step data were compared with data from instances when only one step was taken (single-step). Variables that differed significantly between the single- and multi-step data were used as objective variables and the joint moments of the supporting limb were used as explanatory variables in single regression analyses. The COM-COP deviation in the anteroposterior direction was significantly larger in the single-step. A regression analysis with COM-COP deviation as the objective variable obtained a significant regression equation for the hip flexion moment (R2 = 0.74). The hip flexion moment of the supporting limb was shown to be a significant explanatory variable in both the PS and SS phases for the relationship with COM-COP distance. This study found that to create an appropriate backward step response after an external disturbance (i.e. the ability to stop after 1 step), posterior braking of the COM by a hip flexion moment is important during the single-limbed standing phase.

  16. Coupled, Simultaneous Displacement and Dealloying Reactions into Fe-Ni-Co Nanowires for Thinning Nanowire Segments.

    PubMed

    Geng, Xiaohua; Podlaha, Elizabeth J

    2016-12-14

    A new methodology is reported for shaping template-assisted electrodeposited Fe-rich Fe-Ni-Co nanowires to have a thin nanowire segment, using a coupled displacement reaction with a more noble elemental ion, Cu(II), while at the same time dealloying predominantly Fe from Fe-Ni-Co by the reduction of protons (H+), followed by a subsequent etching step. The displacement/dealloyed layer was sandwiched between two trilayers of Fe-Ni-Co to facilitate the characterization of the reaction front, or penetration length. The penetration length region was found to be a function of the ratio of proton to Cu(II) concentration, and a ratio of 0.5 was found to provide the largest penetration rate, and hence the largest thinned length of the nanowire. Altering the etching time affected the diameter of the thinned region. This methodology presents a new way to thin nanowire segments connected to larger nanowire sections and also introduces a way to study the propagation of a reaction front into a nanowire.

  17. Parallel ptychographic reconstruction

    DOE PAGES

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; ...

    2014-12-19

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source.

  18. Spectral anomaly methods for aerial detection using KUT nuisance rejection

    NASA Astrophysics Data System (ADS)

    Detwiler, R. S.; Pfund, D. M.; Myjak, M. J.; Kulisek, J. A.; Seifert, C. E.

    2015-06-01

    This work discusses the application and optimization of a spectral anomaly method for the real-time detection of gamma radiation sources from an aerial helicopter platform. Aerial detection presents several key challenges over ground-based detection. For one, larger and more rapid background fluctuations are typical due to higher speeds, larger field of view, and geographically induced background changes. In addition, large variations in altitude or stand-off distance cause significant steps in background count rate, as well as spectral changes due to increased gamma-ray scatter at higher altitudes. The work here details the adaptation and optimization of the PNNL-developed algorithm Nuisance-Rejecting Spectral Comparison Ratios for Anomaly Detection (NSCRAD), a spectral anomaly method previously developed for ground-based applications, for an aerial platform. The algorithm has been optimized for two multi-detector systems: a NaI(Tl)-detector-based system and a CsI detector array. The optimization here details the adaptation of the spectral windows for a particular set of target sources to aerial detection and the tailoring for the specific detectors. The methodology and results for background rejection methods optimized for aerial gamma-ray detection using Potassium, Uranium and Thorium (KUT) nuisance rejection are also shown. Results indicate that use of a realistic KUT nuisance rejection may eliminate metric rises due to background magnitude and spectral steps encountered in aerial detection due to altitude changes and geographically induced steps such as at land-water interfaces.

  19. Qdot Labeled Actin Super Resolution Motility Assay Measures Low Duty Cycle Muscle Myosin Step-Size

    PubMed Central

    Wang, Yihua; Ajtai, Katalin; Burghardt, Thomas P.

    2013-01-01

    Myosin powers contraction in heart and skeletal muscle and is a leading target for mutations implicated in inheritable muscle diseases. During contraction, myosin transduces ATP free energy into the work of muscle shortening against resisting force. Muscle shortening involves relative sliding of myosin and actin filaments. Skeletal actin filaments were fluorescence labeled with a streptavidin conjugate quantum dot (Qdot) binding biotin-phalloidin on actin. Single Qdots were imaged in time with total internal reflection fluorescence microscopy, then spatially localized to 1-3 nanometers using a super-resolution algorithm as they translated with actin over a surface coated with skeletal heavy meromyosin (sHMM) or full length β-cardiac myosin (MYH7). Average Qdot-actin velocity matches measurements with rhodamine-phalloidin labeled actin. The sHMM Qdot-actin velocity histogram contains low velocity events corresponding to actin translation in quantized steps of ~5 nm. The MYH7 velocity histogram has quantized steps at 3 and 8 nm in addition to 5 nm, and larger compliance than sHMM, depending on MYH7 surface concentration. Low duty cycle skeletal and cardiac myosin present challenges for a single molecule assay because actomyosin dissociates quickly and the freely moving element diffuses away. The in vitro motility assay has modestly more actomyosin interactions and methylcellulose inhibited diffusion to sustain the complex while preserving a subset of encounters that do not overlap in time on a single actin filament. A single myosin step is isolated in time and space then characterized using super-resolution. The approach provides quick, quantitative, and inexpensive step-size measurement for low duty cycle muscle myosin. PMID:23383646

  20. Dislocation-induced Charges in Quantum Dots: Step Alignment and Radiative Emission

    NASA Technical Reports Server (NTRS)

    Leon, R.; Okuno, J.; Lawton, R.; Stevens-Kalceff, M.; Phillips, M.; Zou, J.; Cockayne, D.; Lobo, C.

    1999-01-01

    A transition between two types of step alignment was observed in a multilayered InGaAs/GaAs quantum-dot (QD) structure. A change to larger QD sizes in smaller concentrations occurred after formation of a dislocation array.

  1. Effects of interventions on normalizing step width during self-paced dual-belt treadmill walking with virtual reality, a randomised controlled trial.

    PubMed

    Oude Lansink, I L B; van Kouwenhove, L; Dijkstra, P U; Postema, K; Hijmans, J M

    2017-10-01

    Step width is increased during dual-belt treadmill walking, in self-paced mode with virtual reality. Generally a familiarization period is thought to be necessary to normalize step width. The aim of this randomised study was to analyze the effects of two interventions on step width, to reduce the familiarization period. We used the GRAIL (Gait Real-time Analysis Interactive Lab), a dual-belt treadmill with virtual reality in the self-paced mode. Thirty healthy young adults were randomly allocated to three groups and asked to walk at their preferred speed for 5 min. In the first session, the control-group received no intervention, the 'walk-on-the-line'-group was instructed to walk on a line, projected on the between-belt gap of the treadmill and the feedback-group received feedback about their current step width and were asked to reduce it. Interventions started after 1 min and lasted 1 min. During the second session, 7-10 days later, no interventions were given. Linear mixed modeling showed that interventions did not have an effect on step width after the intervention period in session 1. Initial step width (second 30 s) of session 1 was larger than initial step width of session 2. Step width normalized after 2 min and variation in step width stabilized after 1 min. Interventions do not reduce step width after the intervention period. A 2-min familiarization period is sufficient to normalize and stabilize step width, in healthy young adults, regardless of interventions. A standardized intervention to normalize step width is not necessary. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Step-and-Repeat Nanoimprint-, Photo- and Laser Lithography from One Customised CNC Machine.

    PubMed

    Greer, Andrew Im; Della-Rosa, Benoit; Khokhar, Ali Z; Gadegaard, Nikolaj

    2016-12-01

    The conversion of a computer numerical control machine into a nanoimprint step-and-repeat tool with additional laser- and photolithography capacity is documented here. All three processes, each demonstrated on a variety of photoresists, are performed successfully and analysed so as to enable the reader to relate their known lithography process(es) to the findings. Using the converted tool, 1 cm2 of nanopattern may be exposed in 6 s, over 3300 times faster than the electron beam equivalent. Nanoimprint tools are commercially available, but these can cost around 1000 times more than this customised computer numerical control (CNC) machine. The converted equipment facilitates rapid production and large area micro- and nanoscale research on small grants, ultimately enabling faster and more diverse growth in this field of science. In comparison to commercial tools, this converted CNC also boasts capacity to handle larger substrates, temperature control and active force control, up to ten times more curing dose and compactness. Actual devices are fabricated using the machine including an expanded nanotopographic array and microfluidic PDMS Y-channel mixers.

  3. Step-and-Repeat Nanoimprint-, Photo- and Laser Lithography from One Customised CNC Machine

    NASA Astrophysics Data System (ADS)

    Greer, Andrew IM; Della-Rosa, Benoit; Khokhar, Ali Z.; Gadegaard, Nikolaj

    2016-03-01

    The conversion of a computer numerical control machine into a nanoimprint step-and-repeat tool with additional laser- and photolithography capacity is documented here. All three processes, each demonstrated on a variety of photoresists, are performed successfully and analysed so as to enable the reader to relate their known lithography process(es) to the findings. Using the converted tool, 1 cm2 of nanopattern may be exposed in 6 s, over 3300 times faster than the electron beam equivalent. Nanoimprint tools are commercially available, but these can cost around 1000 times more than this customised computer numerical control (CNC) machine. The converted equipment facilitates rapid production and large area micro- and nanoscale research on small grants, ultimately enabling faster and more diverse growth in this field of science. In comparison to commercial tools, this converted CNC also boasts capacity to handle larger substrates, temperature control and active force control, up to ten times more curing dose and compactness. Actual devices are fabricated using the machine including an expanded nanotopographic array and microfluidic PDMS Y-channel mixers.

  4. Developing Codebooks as a New Tool to Analyze Students' ePortfolios

    ERIC Educational Resources Information Center

    Impedovo, Maria Antonietta; Ritella, Giuseppe; Ligorio, Maria Beatrice

    2013-01-01

    This paper describes a three-step method for the construction of codebooks meant for analyzing ePortfolio content. The first step produces a prototype based on qualitative analysis of very different ePortfolios from the same course. During the second step, the initial version of the codebook is tested on a larger sample and subsequently revised.…

  5. A three operator split-step method covering a larger set of non-linear partial differential equations

    NASA Astrophysics Data System (ADS)

    Zia, Haider

    2017-06-01

    This paper describes an updated exponential Fourier-based split-step method that can be applied to a greater class of partial differential equations than previous methods would allow. These equations arise in physics and engineering, a notable example being the generalized derivative non-linear Schrödinger equation that arises in non-linear optics with self-steepening terms. These differential equations feature terms that were previously inaccessible to model accurately with low computational resources. The new method maintains a 3rd-order error even with these additional terms and models the equation in all three spatial dimensions and time. The class of non-linear differential equations to which this method applies is shown. The method is fully derived, and its implementation in the split-step architecture is shown. This paper lays the mathematical groundwork for an upcoming paper employing this method in white-light generation simulations in bulk material.
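The basic two-operator, symmetric (Strang) split-step Fourier scheme that this paper generalizes can be sketched for the textbook cubic NLSE, with no self-steepening: the linear dispersive part is advanced exactly in Fourier space for half a step, the nonlinear phase for a full step, then another linear half step. This is the standard base method, not the paper's three-operator extension.

```python
import numpy as np

def ssfm_step(u, dz, dx):
    """One Strang split step for the focusing NLSE u_z = (i/2) u_xx + i |u|^2 u."""
    k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)   # spectral wavenumbers
    lin_half = np.exp(-0.5j * k**2 * (dz / 2.0))     # exact linear half-step
    u = np.fft.ifft(lin_half * np.fft.fft(u))
    u = u * np.exp(1j * np.abs(u)**2 * dz)           # nonlinear phase, full step
    return np.fft.ifft(lin_half * np.fft.fft(u))     # second linear half-step

x = np.linspace(-20.0, 20.0, 256, endpoint=False)
dx = x[1] - x[0]
u = 1.0 / np.cosh(x)                  # fundamental soliton: stationary profile
norm0 = np.sum(np.abs(u)**2) * dx     # L2 norm, conserved by both substeps
for _ in range(100):                  # propagate to z = 1
    u = ssfm_step(u, dz=0.01, dx=dx)
```

Both substeps preserve the L2 norm exactly (the linear step is unitary, the nonlinear step only rotates the phase), and the sech soliton keeps its shape up to the scheme's second-order splitting error.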

  6. Full allogeneic fusion of embryos in a holothuroid echinoderm.

    PubMed

    Gianasi, Bruno L; Hamel, Jean-François; Mercier, Annie

    2018-05-30

    Whole-body chimaeras (organisms composed of genetically distinct cells) have been directly observed in modular/colonial organisms (e.g. corals, sponges, ascidians); whereas in unitary deuterostomes (including mammals) they have only been detected indirectly through molecular analysis. Here, we document for the first time the step-by-step development of whole-body chimaeras in the holothuroid Cucumaria frondosa, a unitary deuterostome belonging to the phylum Echinodermata. To the best of our knowledge, this is the most derived unitary metazoan in which direct investigation of zygote fusibility has been undertaken. Fusion occurred among hatched blastulae, never during earlier (unhatched) or later (larval) stages. The fully fused chimaeric propagules were two to five times larger than non-chimaeric embryos. Fusion was positively correlated with propagule density and facilitated by the natural tendency of early embryos to agglomerate. The discovery of natural chimaerism in a unitary deuterostome that possesses large externally fertilized eggs provides a framework to explore key aspects of evolutionary biology, histocompatibility and cell transplantation in biomedical research. © 2018 The Author(s).

  7. The Mw=8.8 Maule earthquake aftershock sequence, event catalog and locations

    NASA Astrophysics Data System (ADS)

    Meltzer, A.; Benz, H.; Brown, L.; Russo, R. M.; Beck, S. L.; Roecker, S. W.

    2011-12-01

    The aftershock sequence of the Mw=8.8 Maule earthquake off the coast of Chile in February 2010 is one of the best-recorded aftershock sequences from a great megathrust earthquake. Immediately following the Maule earthquake, teams of geophysicists from Chile, France, Germany, Great Britain and the United States coordinated resources to capture aftershocks and other seismic signals associated with this significant earthquake. In total, 91 broadband, 48 short period, and 25 accelerometer stations were deployed above the rupture zone of the main shock from 33-38.5°S and from the coast to the Andean range front. In order to integrate these data into a unified catalog, the USGS National Earthquake Information Center developed procedures to use its real-time seismic monitoring system (Bulletin Hydra) to detect, associate, locate, and compute earthquake source parameters from these stations. As a first step in the process, the USGS has built a seismic catalog of all M3.5 or larger earthquakes for the time period of the main aftershock deployment from March 2010 to October 2010. The catalog includes earthquake locations, magnitudes (Ml, Mb, Mb_BB, Ms, Ms_BB, Ms_VX, Mc), associated phase readings and regional moment tensor solutions for most of the M4 or larger events. Also included in the catalog are teleseismic phases and amplitude measures, and body-wave MT and CMT solutions for the larger events, typically M5.5 and larger. Tuning of automated detection and association parameters should allow a complete catalog of events to approximately M2.5 or larger for that dataset of more than 164 stations. We characterize the aftershock sequence in terms of magnitude, frequency, and location over time. Using the catalog locations and travel times as a starting point, we use double-difference techniques to investigate relative locations and earthquake clustering. 
In addition, phase data from candidate ground truth events and modeling of surface waves can be used to calibrate the velocity structure of central Chile to improve the real-time monitoring.

  8. A Semi-implicit Method for Resolution of Acoustic Waves in Low Mach Number Flows

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles D.; Moin, Parviz

    2002-09-01

    A semi-implicit numerical method for time accurate simulation of compressible flow is presented. By extending the low Mach number pressure correction method, a Helmholtz equation for pressure is obtained in the case of compressible flow. The method avoids the acoustic CFL limitation, allowing a time step restricted only by the convective velocity, resulting in significant efficiency gains. Use of a discretization that is centered in both time and space results in zero artificial damping of acoustic waves. The method is attractive for problems in which Mach numbers are low, and the acoustic waves of most interest are those having low frequency, such as acoustic combustion instabilities. Both of these characteristics suggest the use of time steps larger than those allowable by an acoustic CFL limitation. In some cases it may be desirable to include a small amount of numerical dissipation to eliminate oscillations due to small-wavelength, high-frequency, acoustic modes, which are not of interest; therefore, a provision for doing this in a controlled manner is included in the method. Results of the method for several model problems are presented, and the performance of the method in a large eddy simulation is examined.
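The benefit of a time-and-space-centered implicit treatment of the acoustic terms can be illustrated on 1-D linear acoustics. The sketch below is a minimal illustration only, not the paper's Helmholtz pressure-correction formulation; the grid size, sound speed and CFL number are assumed values. A Crank-Nicolson update remains stable at acoustic CFL numbers far above one and, because the centered operator is skew-symmetric, conserves the discrete acoustic energy exactly (zero artificial damping).

```python
import numpy as np

# Minimal sketch of the semi-implicit idea (assumed parameters):
# Crank-Nicolson, centered in time and space, for 1-D linear acoustics
#   u_t = -c p_x,  p_t = -c u_x   (periodic domain).
# The update is a Cayley transform of a skew-symmetric operator, so the
# discrete acoustic energy is conserved to round-off at any time step.

N, c, L = 64, 340.0, 1.0
dx = L / N
dt = 20 * dx / c                      # acoustic CFL = c*dt/dx = 20

# Skew-symmetric centered-difference operator, periodic boundaries
D = np.zeros((N, N))
for i in range(N):
    D[i, (i + 1) % N] = 1.0 / (2 * dx)
    D[i, (i - 1) % N] = -1.0 / (2 * dx)

A = np.block([[np.zeros((N, N)), -c * D],
              [-c * D, np.zeros((N, N))]])
I = np.eye(2 * N)
step = np.linalg.solve(I - 0.5 * dt * A, I + 0.5 * dt * A)  # CN update matrix

x = np.linspace(0, L, N, endpoint=False)
z = np.concatenate([np.zeros(N), np.exp(-200 * (x - 0.5) ** 2)])  # pressure pulse

e0 = np.dot(z, z)                     # discrete acoustic energy
for _ in range(200):
    z = step @ z
print(abs(np.dot(z, z) / e0 - 1.0))   # energy drift at round-off level
```

An explicit centered scheme at this CFL would be violently unstable; here the only cost of the large step is the linear solve.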

  9. Efficient Grammar Induction Algorithm with Parse Forests from Real Corpora

    NASA Astrophysics Data System (ADS)

    Kurihara, Kenichi; Kameya, Yoshitaka; Sato, Taisuke

The task of inducing grammar structures has received a great deal of attention. Researchers have pursued it for different reasons: to use grammar induction as the first stage in building large treebanks, or to build better language models. However, grammar induction is inherently computationally complex. To overcome this, some grammar induction algorithms add new production rules incrementally; they refine the grammar while keeping the computational complexity low. In this paper, we propose a new efficient grammar induction algorithm. Although our algorithm is similar to algorithms which learn a grammar incrementally, it uses the graphical EM algorithm instead of the Inside-Outside algorithm. We report results of learning experiments in terms of learning speed. The results show that our algorithm learns a grammar in constant time regardless of the size of the grammar. Since our algorithm decreases syntactic ambiguities in each step, it reduces the time required for learning. This constant-time learning considerably affects learning time for larger grammars. We also report results of an evaluation of criteria for choosing nonterminals. Our algorithm refines a grammar based on a nonterminal in each step. Since there can be several criteria for deciding which nonterminal is best, we evaluate them by learning experiments.

  10. Core-shell TiO2@ZnO nanorods for efficient ultraviolet photodetection.

    PubMed

    Panigrahi, Shrabani; Basak, Durga

    2011-05-01

Core-shell TiO(2)@ZnO nanorods (NRs) have been fabricated by a simple two-step method: growth of a ZnO NR array by an aqueous chemical technique, followed by coating of the NRs with a solution of titanium isopropoxide [Ti(OC(3)H(7))(4)] and a heating step to form the shell. The core-shell nanocomposites are composed of single-crystalline ZnO NRs coated with a thin TiO(2) shell layer obtained by varying the number of coatings (one, three and five times). The ultraviolet (UV) emission intensity of the nanocomposite is largely quenched due to efficient electron-hole separation, which reduces band-to-band recombination. The UV photoconductivity of the core-shell structure with three TiO(2) coatings is greatly enhanced due to photoelectron transfer between the core and the shell. The UV photosensitivity of the nanocomposite becomes four times larger, while the photocurrent decay during steady UV illumination is decreased almost 7-fold compared to the as-grown ZnO NRs, indicating the high efficiency of these core-shell structures as UV sensors. © The Royal Society of Chemistry 2011

  11. The Effect of Backward-Facing Step Height on Instability Growth and Breakdown in Swept Wing Boundary-Layer Transition

    NASA Technical Reports Server (NTRS)

    Eppink, Jenna L.; Wlezien, Richard W.; King, Rudolph A.; Choudhari, Meelan

    2015-01-01

A low-speed experiment was performed on a swept flat-plate model with an imposed pressure gradient to determine the effect of a backward-facing step on transition in a stationary-crossflow-dominated flow. Detailed hot-wire boundary-layer measurements were performed for three backward-facing step heights of approximately 36, 45, and 49% of the boundary-layer thickness at the step. These step heights correspond to a subcritical, nearly-critical, and critical case. Three leading-edge roughness configurations were tested to determine the effect of stationary-crossflow amplitude on transition. The step caused a local increase in amplitude of the stationary crossflow for the two larger step heights, but farther downstream the amplitude decreased and remained below the baseline amplitude. The smallest step caused a slight local decrease in amplitude of the primary stationary crossflow mode, but the amplitude collapsed back to the baseline case far downstream of the step. The effect of the step on the amplitude of the primary crossflow mode increased with step height; however, the stationary crossflow amplitudes remained low, and thus stationary crossflow was not solely responsible for transition. Unsteady disturbances were present downstream of the step for all three step heights, and their amplitudes increased with increasing step height. The only exception is that the lower-frequency (traveling crossflow-like) disturbance was not present for the lowest step height. Positive and negative spikes in instantaneous velocity began to occur for the two larger step heights and then grew in number and amplitude downstream of reattachment, eventually leading to transition. The number and amplitude of spikes varied with step height and crossflow amplitude. Despite the low amplitude of the disturbances in the intermediate step height case, breakdown began to occur intermittently, and the flow underwent a long transition region.

  12. High order volume-preserving algorithms for relativistic charged particles in general electromagnetic fields

    NASA Astrophysics Data System (ADS)

    He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong

    2016-09-01

We construct high order symmetric volume-preserving methods for the relativistic dynamics of a charged particle by the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservation properties over long simulation times. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods attain high-order accuracy and are more efficient than methods derived from standard compositions. These results are verified by numerical experiments. Linear stability analysis of the methods shows that the high order processed method allows larger time step sizes in numerical integrations.
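The core idea, composing exactly solvable, volume-preserving sub-flows into a symmetric scheme, can be sketched for the simplest case of a (nonrelativistic) charged particle in a uniform magnetic field. This illustrates only the splitting technique, not the authors' high-order processed method, and all parameters are assumed:

```python
import numpy as np

# Illustrative sketch (assumed parameters): symmetric (Strang) splitting for
# a charged particle in a uniform field B = (0, 0, B).  Each sub-flow -- a
# free drift and an exact velocity rotation about B -- preserves phase-space
# volume, so their symmetric composition does too, and the speed is
# conserved exactly because the rotation is exact.

q_over_m, B, dt = 1.0, 1.0, 0.3

def rotate(v, angle):
    """Exact solution of dv/dt = (q/m) v x B: rotation about the z-axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([c * v[0] + s * v[1], -s * v[0] + c * v[1], v[2]])

def strang_step(x, v):
    x = x + 0.5 * dt * v              # half drift
    v = rotate(v, q_over_m * B * dt)  # full kick (exact rotation)
    x = x + 0.5 * dt * v              # half drift
    return x, v

x, v = np.zeros(3), np.array([1.0, 0.0, 0.1])
speed0 = np.linalg.norm(v)
for _ in range(10000):
    x, v = strang_step(x, v)
print(np.linalg.norm(v) - speed0)     # conserved up to round-off
```

Higher-order versions compose such steps with suitable sub-step sizes; the processing technique referenced in the abstract wraps the composition in a near-identity change of variables to raise the accuracy without extra cost per step.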

  13. Adaption of the temporal correlation coefficient calculation for temporal networks (applied to a real-world pig trade network).

    PubMed

    Büttner, Kathrin; Salau, Jennifer; Krieter, Joachim

    2016-01-01

The average topological overlap of the graphs of two consecutive time steps measures the amount of change in the edge configuration between the two snapshots. This value should be zero if the edge configuration changes completely and one if the two consecutive graphs are identical. Current methods depend on the number of nodes in the network or on the maximal number of connected nodes in the consecutive time steps. In the first case, the methodology breaks down if there are nodes with no edges. In the second case, it fails if the maximal number of active nodes is larger than the maximal number of connected nodes. In the following, an adaptation of the calculation of the temporal correlation coefficient and of the topological overlap of the graph between two consecutive time steps is presented which shows the expected behaviour mentioned above. The newly proposed adaptation uses the maximal number of active nodes, i.e. the number of nodes with at least one edge, for the calculation of the topological overlap. The three methods were compared on illustrative example networks to reveal the differences between the proposed notations. Furthermore, these three calculation methods were applied to a real-world network of animal movements in order to detect influences of the network structure on the outcome of the different methods.
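A minimal sketch of the adapted calculation, assuming undirected snapshots given as adjacency matrices (function names are illustrative):

```python
import numpy as np

# Sketch of the (adapted) temporal correlation coefficient for two
# consecutive snapshots A, B of a temporal network.  The standard definition
# averages the per-node topological overlap over all N nodes; the adaptation
# described above divides instead by the maximal number of *active* nodes,
# i.e. nodes with at least one edge.

def topological_overlap(A, B):
    """Per-node overlap C_i between consecutive snapshots A and B."""
    num = (A * B).sum(axis=1)
    den = np.sqrt(A.sum(axis=1) * B.sum(axis=1))
    return np.divide(num, den, out=np.zeros_like(num, dtype=float), where=den > 0)

def temporal_correlation(A, B, adapted=True):
    ci = topological_overlap(A, B)
    if adapted:
        active = max((A.sum(axis=1) > 0).sum(), (B.sum(axis=1) > 0).sum())
        return ci.sum() / active if active else 0.0
    return ci.mean()  # original normalization: divide by total node count N

# Identical snapshots -> 1; completely changed edge set -> 0.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(temporal_correlation(A, A))                  # 1.0
print(temporal_correlation(A, 1 - A - np.eye(3)))  # 0.0
```

Dividing by the number of active nodes rather than N keeps isolated nodes from dragging the coefficient down, which is exactly the failure case of the original normalization described above.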

  14. A New Insight into the Mechanism of NADH Model Oxidation by Metal Ions in Non-Alkaline Media.

    PubMed

    Yang, Jin-Dong; Chen, Bao-Long; Zhu, Xiao-Qing

    2018-06-11

For a long time, it has been controversial whether a three-step (e-H+-e) or a two-step (e-H•) mechanism operates in the oxidations of NADH and its models by metal ions in non-alkaline media. The latter mechanism has been accepted by the majority of researchers. In this work, 1-benzyl-1,4-dihydronicotinamide (BNAH) and 1-phenyl-1,4-dihydronicotinamide (PNAH) are used as NADH models, and the ferrocenium ion (Fc+) as the electron acceptor. The kinetics of the oxidations of the NADH models by Fc+ in pure acetonitrile were monitored by UV-Vis absorption, and a quadratic relationship between kobs and the concentrations of the NADH models was found for the first time. The rate expression for the reactions developed according to the three-step mechanism is quite consistent with the quadratic curves. The rate constants, thermodynamic driving forces and KIEs of each elementary step of the reactions were estimated. All the results supported the three-step mechanism. The intrinsic kinetic barriers of the proton transfer from BNAH+• to BNAH and the hydrogen atom transfer from BNAH+• to BNAH+• were estimated; the former is 11.8 kcal/mol, and the latter is larger than 24.3 kcal/mol. It is the large intrinsic kinetic barrier of the hydrogen atom transfer that makes the reactions follow the three-step rather than the two-step mechanism. Further investigation of the factors affecting the intrinsic kinetic barriers of chemical reactions indicated that the large intrinsic kinetic barrier of the hydrogen atom transfer originates from the repulsion of positive charges between BNAH+• and BNAH+•. The greatest contribution of this work is the discovery of the quadratic dependence of kobs on the concentrations of the NADH models, which is inconsistent with the conventional viewpoint of the "two-step mechanism" for the oxidations of NADH and its models by metal ions in non-alkaline media.

  15. Gain of postural responses increases in response to real and anticipated pain.

    PubMed

    Hodges, Paul W; Tsao, Henry; Sims, Kevin

    2015-09-01

This study tested two contrasting theories of adaptation of postural control to pain. One proposes alteration of the postural strategy, including inhibition of muscles that produce painful movement; the other proposes amplification of the postural adjustment to recruit strategies normally reserved for higher load. This study aimed to determine which of these alternatives best explains pain-related adaptation of hip muscle activity when stepping down. Adaptation of postural control to increasing load was evaluated from hip muscle electromyography (fine-wire and surface electrodes) as ten males stepped from steps of increasing height (i.e. increasing load). In one set of trials, participants stepped from a low step (5 cm), and pain was induced by noxious electrical stimulation over the sacrum triggered by foot contact with a force plate, or was anticipated. Changes in EMG amplitude and onset timing were compared between conditions. Hip muscle activation was earlier and larger when stepping from higher steps. Although ground reaction forces (one of the determinants of joint load) were unchanged before, during and after pain, trials with real or anticipated noxious stimulation were accompanied by muscle activity indistinguishable from that normally reserved for higher steps (EMG amplitude increased from 9 to 17% of peak). These data support the notion that muscle activation for postural control is augmented when challenged by real or anticipated noxious stimulation. Muscle activation was earlier and greater than that required for the task and is likely to create unnecessary joint loading. This could have long-term consequences if maintained.

  16. Grammar and the Lexicon. Working Papers in Linguistics 16.

    ERIC Educational Resources Information Center

    University of Trondheim Working Papers in Linguistics, 1993

    1993-01-01

    In this volume, five working papers are presented. "Minimal Signs and Grammar" (Lars Hellan) proposes that a significant part of the "production" of grammar is incremental, building larger and larger constructs, with lexical objects called minimal signs as the first steps. It also suggests that the basic lexical information in…

  17. CLARIPED: a new tool for risk classification in pediatric emergencies.

    PubMed

    Magalhães-Barbosa, Maria Clara de; Prata-Barbosa, Arnaldo; Alves da Cunha, Antonio José Ledo; Lopes, Cláudia de Souza

    2016-09-01

To present a new pediatric risk classification tool, CLARIPED, and describe its development steps. Development steps: (i) first round of discussion among experts, first prototype; (ii) pre-test of reliability, 36 hypothetical cases; (iii) second round of discussion to perform adjustments; (iv) team training; (v) pre-test with patients in real time; (vi) third round of discussion to perform new adjustments; (vii) final pre-test of validity (20% of medical visits over five days). CLARIPED features five urgency categories: Red (emergency), Orange (very urgent), Yellow (urgent), Green (slightly urgent) and Blue (not urgent). The first classification step includes the measurement of four vital signs (Vipe score); the second step consists of an urgency discrimination assessment. Each step results in assigning a color, and the more urgent of the two is selected for the final classification. Each color corresponds to a maximum waiting time for medical care and referral to the physical area most appropriate for the patient's clinical condition. The interobserver agreement was substantial (kappa=0.79), and the final pre-test, with 82 medical visits, showed a good correlation between the proportion of patients in each urgency category and the number of resources used (p<0.001). CLARIPED is an objective and easy-to-use tool for risk classification, whose pre-tests suggest good reliability and validity. Larger-scale studies of its validity and reliability in different health contexts are ongoing and can contribute to the implementation of a nationwide pediatric risk classification system. Copyright © 2016 Sociedade de Pediatria de São Paulo. Publicado por Elsevier Editora Ltda. All rights reserved.

  18. Subsystem real-time time dependent density functional theory.

    PubMed

    Krishtal, Alisa; Ceresoli, Davide; Pavanello, Michele

    2015-04-21

We present the extension of the Frozen Density Embedding (FDE) formulation of subsystem Density Functional Theory (DFT) to real-time Time Dependent Density Functional Theory (rt-TDDFT). FDE is a DFT-in-DFT embedding method that allows one to partition a larger Kohn-Sham system into a set of smaller, coupled Kohn-Sham systems. In addition to the computational advantage, FDE provides physical insight into the properties of embedded systems and the coupling interactions between them. The extension to rt-TDDFT is done straightforwardly by evolving the Kohn-Sham subsystems in time simultaneously, while updating the embedding potential between the systems at every time step. Two main applications are presented: explicit excitation-energy transfer in real time between subsystems, demonstrated for the Na4 cluster, and the effect of the embedding on the optical spectra of coupled chromophores. In particular, the importance of including the full dynamic response in the embedding potential is demonstrated.

  19. Efficient Multi-Stage Time Marching for Viscous Flows via Local Preconditioning

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Wood, William A.; vanLeer, Bram

    1999-01-01

A new method has been developed to accelerate the convergence of explicit time-marching, laminar, Navier-Stokes codes through the combination of local preconditioning and multi-stage time-marching optimization. Local preconditioning is a technique to modify the time-dependent equations so that all information moves or decays at nearly the same rate, thus relieving the stiffness of the system of equations. Multi-stage time marching can be optimized by modifying its coefficients to account for the presence of viscous terms, allowing larger time steps. We show it is possible to optimize the time-marching scheme for a wide range of cell Reynolds numbers for the scalar advection-diffusion equation, and local preconditioning allows this optimization to be applied to the Navier-Stokes equations. Convergence acceleration of the new method is demonstrated through numerical experiments with circular advection and laminar boundary-layer flow over a flat plate.
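The gain from tailoring multi-stage coefficients to the model problem can be seen on the scalar test equation u' = λu with real, negative λ (the viscous limit). The sketch below compares classical RK4 with the four-stage Chebyshev stability polynomial T4(1 + z/16), a textbook example of coefficient optimization for the negative real axis, not the paper's actual optimized scheme:

```python
import numpy as np

# Stability polynomials P(z) for z = lambda*dt on the scalar model u' = lambda*u.
# A scheme is stable where |P(z)| <= 1.  The 4-stage Chebyshev choice
# T_4(1 + z/16) is first-order accurate but stable on [-32, 0]; classical
# RK4 is stable on roughly [-2.79, 0] -- same stage count, ~11x the step.

def amp_rk4(z):
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

def amp_cheb4(z):
    # Evaluate T_4 at the shifted argument 1 + z/16
    return np.polynomial.chebyshev.chebval(1 + z / 16.0, [0, 0, 0, 0, 1])

z = np.linspace(-32, -1e-6, 20000)
print(np.max(np.abs(amp_cheb4(z))))      # stays <= 1 over the whole interval
print(z[np.abs(amp_rk4(z)) <= 1].min())  # RK4 stability boundary near -2.79
```

The paper's optimization plays the same game with viscous (real-axis) and convective (imaginary-axis) eigenvalues simultaneously, which is what local preconditioning makes possible for the full equations.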

  20. Strategic Planning for Health Care Cost Controls in a Constantly Changing Environment.

    PubMed

    Hembree, William E

    2015-01-01

Health care cost increases are showing a resurgence. Despite recent years' comparatively modest increases, the projections for 2015 cost increases range from 6.6% to 7%, three to four times larger than 2015's expected underlying inflation. This resurgence is just one of many rapidly changing external and internal challenges health plan sponsors must overcome (and it advances the date when the majority of employers will trigger the "Cadillac tax"). What's needed is a planning approach that is effective in overcoming all known and yet-to-be-discovered challenges, not just affordability. This article provides detailed guidance on adopting six proven strategic planning steps. Following these steps will proactively and effectively create a flexible strategic plan for the present and future of employers' health plans that will withstand all internal and external challenges.

  1. Influence of restricted vision and knee joint range of motion on gait properties during level walking and stair ascent and descent.

    PubMed

    Demura, Tomohiro; Demura, Shin-ich

    2011-01-01

Because elderly individuals experience marked declines in various physical functions (e.g., vision, joint function) simultaneously, it is difficult to isolate the individual effects of these functional declines on walking. However, by imposing vision and joint-function restrictions on young men, the effects of these declines on walking can be clarified. The authors aimed to determine the effect of restricted vision and range of motion (ROM) of the knee joint on gait properties while walking and ascending or descending stairs. Fifteen healthy young adults performed level walking and stair ascent and descent under control, vision-restriction, and knee joint ROM-restriction conditions. During level walking, walking speed and step width decreased, and double support time increased significantly with vision and knee joint ROM restrictions. Stance time, step width, and walking angle increased only with knee joint ROM restriction. Stance time, swing time, and double support time were significantly longer in level walking, stair descent, and stair ascent, in that order. The effects of vision and knee joint ROM restrictions were significantly larger than in the control condition. In conclusion, vision and knee joint ROM restrictions affect gait during level walking and stair ascent and descent. This effect is marked in stair ascent with knee joint ROM restriction.

  2. Impact of SCBA size and fatigue from different firefighting work cycles on firefighter gait.

    PubMed

    Kesler, Richard M; Bradley, Faith F; Deetjen, Grace S; Angelini, Michael J; Petrucci, Matthew N; Rosengren, Karl S; Horn, Gavin P; Hsiao-Wecksler, Elizabeth T

    2018-04-04

Risk of slips, trips and falls in firefighters may be influenced by the firefighter's equipment and the duration of firefighting. This study examined the impact of four self-contained breathing apparatus (SCBA) configurations (three SCBA of increasing size and a prototype design) and three work cycles (one bout (1B), two bouts with a five-minute break (2B), and two bouts back-to-back (BB)) on gait in 30 firefighters. Five gait parameters (double support time, single support time, stride length, step width and stride velocity) were examined pre- and post-firefighting activity. The two largest SCBA resulted in longer double support times relative to the smallest SCBA. Multiple bouts of firefighting activity resulted in increased single and double support time and decreased stride length, step width and stride velocity. These results suggest that with larger SCBA or longer durations of activity, firefighters may adopt more conservative gait patterns to minimise fall risk. Practitioner Summary: The effects of four self-contained breathing apparatus (SCBA) and three work cycles on five gait parameters were examined pre- and post-firefighting activity. Both SCBA size and work cycle affected gait. The two largest SCBA resulted in longer double support times. Multiple bouts of activity resulted in more conservative gait patterns.

  3. Modeling ultrasound propagation through material of increasing geometrical complexity.

    PubMed

    Odabaee, Maryam; Odabaee, Mostafa; Pelekanos, Matthew; Leinenga, Gerhard; Götz, Jürgen

    2018-06-01

    Ultrasound is increasingly being recognized as a neuromodulatory and therapeutic tool, inducing a broad range of bio-effects in the tissue of experimental animals and humans. To achieve these effects in a predictable manner in the human brain, the thick cancellous skull presents a problem, causing attenuation. In order to overcome this challenge, as a first step, the acoustic properties of a set of simple bone-modeling resin samples that displayed an increasing geometrical complexity (increasing step sizes) were analyzed. Using two Non-Destructive Testing (NDT) transducers, we found that Wiener deconvolution predicted the Ultrasound Acoustic Response (UAR) and attenuation caused by the samples. However, whereas the UAR of samples with step sizes larger than the wavelength could be accurately estimated, the prediction was not accurate when the sample had a smaller step size. Furthermore, a Finite Element Analysis (FEA) performed in ANSYS determined that the scattering and refraction of sound waves was significantly higher in complex samples with smaller step sizes compared to simple samples with a larger step size. Together, this reveals an interaction of frequency and geometrical complexity in predicting the UAR and attenuation. These findings could in future be applied to poro-visco-elastic materials that better model the human skull. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
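Wiener deconvolution itself is straightforward to sketch. The example below uses assumed parameters and synthetic data, not the study's measurement pipeline: it recovers a two-reflection impulse response from a signal convolved with a known transmitted pulse.

```python
import numpy as np

# Hedged sketch of Wiener deconvolution: recover a sample's impulse
# response h from the measured echo train y = h * x + noise, given the
# known transmitted pulse x.  `nsr` is an assumed constant noise-to-signal
# ratio acting as regularization.

def wiener_deconvolve(y, x, nsr=1e-3):
    X = np.fft.rfft(x, n=len(y))
    Y = np.fft.rfft(y)
    H = Y * np.conj(X) / (np.abs(X) ** 2 + nsr)  # Wiener filter in frequency domain
    return np.fft.irfft(H, n=len(y))

# Synthetic check: two reflections at lags 0 and 60 samples.
rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
x = np.exp(-0.5 * ((t - 20) / 2.0) ** 2)                     # transmitted pulse
h_true = np.zeros(n); h_true[0], h_true[60] = 1.0, 0.4
y = np.fft.irfft(np.fft.rfft(h_true) * np.fft.rfft(x), n=n)  # circular convolution
y = y + 1e-4 * rng.standard_normal(n)                        # measurement noise

h_est = wiener_deconvolve(y, x)
print(np.argmax(h_est))  # strongest reflection recovered at lag 0
```

The regularizer suppresses frequencies where the pulse carries little energy, which is also why the abstract's prediction degrades once the sample's step size drops below the wavelength: the geometry scatters energy out of the usable band.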

  4. Self-Organization of Vocabularies under Different Interaction Orders.

    PubMed

    Vera, Javier

    2017-01-01

    Traditionally, the formation of vocabularies has been studied by agent-based models (primarily, the naming game) in which random pairs of agents negotiate word-meaning associations at each discrete time step. This article proposes a first approximation to a novel question: To what extent is the negotiation of word-meaning associations influenced by the order in which agents interact? Automata networks provide the adequate mathematical framework to explore this question. Computer simulations suggest that on two-dimensional lattices the typical features of the formation of word-meaning associations are recovered under random schemes that update small fractions of the population at the same time; by contrast, if larger subsets of the population are updated, a periodic behavior may appear.
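For reference, the baseline dynamics whose interaction order the article varies is the minimal naming game with random pairwise interactions. A small sketch (population size and interaction budget are assumed values) shows the population self-organizing to a single shared word:

```python
import random

# Minimal naming game, random-pair scheme (the baseline the article departs
# from by changing the update order).  Speaker utters a word from its
# vocabulary (inventing one if empty); on success both agents collapse to
# that word, on failure the listener adds it.

random.seed(1)
N = 50
vocab = [set() for _ in range(N)]
next_word = 0

for t in range(200000):
    s, l = random.sample(range(N), 2)            # speaker, listener
    if not vocab[s]:
        vocab[s].add(next_word); next_word += 1  # invent a new word
    w = random.choice(sorted(vocab[s]))
    if w in vocab[l]:                            # success: both collapse to w
        vocab[s] = {w}; vocab[l] = {w}
    else:
        vocab[l].add(w)                          # failure: listener learns w
    if all(len(v) == 1 for v in vocab) and len(set().union(*vocab)) == 1:
        break

print(t, set().union(*vocab))  # consensus on a single shared word
```

On a lattice with partial synchronous updates, as studied in the article, the pair selection above is replaced by a schedule over neighborhoods; whether consensus still forms then depends on the fraction of the population updated per step.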

  5. Growth from Solutions: Kink dynamics, Stoichiometry, Face Kinetics and stability in turbulent flow

    NASA Technical Reports Server (NTRS)

    Chernov, A. A.; DeYoreo, J. J.; Rashkovich, L. N.; Vekilov, P. G.

    2005-01-01

1. Kink dynamics. The first segment of a polygonized dislocation spiral step measured by AFM demonstrates up to 60% scatter in the critical length l*, the length at which the segment starts to propagate. On orthorhombic lysozyme, this length is shorter than the observed interkink distance. The step energy obtained from the critical segment length via the Gibbs-Thomson law (GTL), l* = 2ωα/Δμ, is several times larger than the energy obtained from the 2D nucleation rate. Here ω is the building block specific volume, α is the step riser specific free energy, and Δμ is the crystallization driving force. These new data support our earlier assumption that the classical Frenkel and Burton-Cabrera-Frank concept of abundant kink supply by fluctuations is not applicable to strongly polygonized steps. Step rate measurements on brushite confirm that statement. It is 1D nucleation of kinks that controls step propagation. The GTL is valid only if l*

  6. Development and acceleration of unstructured mesh-based cfd solver

    NASA Astrophysics Data System (ADS)

    Emelyanov, V.; Karpenko, A.; Volkov, K.

    2017-06-01

The study was undertaken as part of a larger effort to establish a common computational fluid dynamics (CFD) code for simulation of internal and external flows and involves some basic validation studies. The governing equations are solved with a finite volume code on unstructured meshes. The computational procedure involves reconstruction of the solution in each control volume and extrapolation of the unknowns to find the flow variables on the faces of the control volume, solution of the Riemann problem for each face of the control volume, and evolution of the time step. The nonlinear CFD solver works in an explicit time-marching fashion, based on a three-step Runge-Kutta stepping procedure. Convergence to a steady state is accelerated by the use of a geometric technique and by the application of Jacobi preconditioning for high-speed flows, with a separate low-Mach-number preconditioning method for use with low-speed flows. The CFD code is implemented on graphics processing units (GPUs). Speedup of the solution on GPUs with respect to solution on central processing units (CPUs) is compared for different meshes and different methods of distributing input data into blocks. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.

  7. Dynamic and functional balance tasks in subjects with persistent whiplash: a pilot trial.

    PubMed

    Stokell, Raina; Yu, Annie; Williams, Katrina; Treleaven, Julia

    2011-08-01

    Disturbances in static balance have been demonstrated in subjects with persistent whiplash. Some also report loss of balance and falls. These disturbances may contribute to difficulties in dynamic tasks. The aim of this study was to determine whether subjects with whiplash had deficits in dynamic and functional balance tasks when compared to a healthy control group. Twenty subjects with persistent pain following a whiplash injury and twenty healthy controls were assessed in single leg stance with eyes open and closed, the step test, Fukuda stepping test, tandem walk on a firm and soft surface, Singleton test with eyes open and closed, a stair walking test and the timed 10 m walk with and without head movement. Subjects with whiplash demonstrated significant deficits (p < 0.01) in single leg stance with eyes closed, the step test, tandem walk on a firm and soft surface, stair walking and the timed 10 m walk with and without head movement when compared to the control subjects. Specific assessment and rehabilitation directed towards improving these deficits may need to be considered in the management of patients with persistent whiplash if these results are confirmed in a larger cohort. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.

  8. Pivoting neuromuscular control and proprioception in females and males.

    PubMed

    Lee, Song Joo; Ren, Yupeng; Kang, Sang Hoon; Geiger, François; Zhang, Li-Qun

    2015-04-01

Noncontact ACL injuries occur most commonly in pivoting sports and are much more frequent in females than in males. However, information on sex differences in proprioceptive acuity under weight-bearing and in leg neuromuscular control during pivoting is scarce. The objective of this study was to investigate sex differences in pivoting neuromuscular control during strenuous stepping tasks and in proprioceptive acuity under weight-bearing. 21 male and 22 female subjects were recruited to evaluate pivoting proprioceptive acuity under weight-bearing, and pivoting neuromuscular control (in terms of leg pivoting instability, stiffness, maximum internal and external pivoting angles, and entropy of time-to-peak EMG in lower limb muscles) during strenuous stepping tasks performed on a novel off-axis elliptical trainer. Compared to males, females had significantly lower proprioceptive acuity under weight-bearing in both internal and external pivoting directions, higher pivoting instability, larger maximum internal pivoting angle, lower leg pivoting stiffness, and higher entropy of time-to-peak EMG in the gastrocnemius muscles during strenuous stepping tasks with internal and external pivoting perturbations. Results of this study may help us better understand factors contributing to ACL injuries in females and males, develop training strategies to improve pivoting neuromuscular control and proprioceptive acuity, and potentially reduce ACL and lower-limb musculoskeletal injuries.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dustin Popp; Zander Mausolff; Sedat Goluoglu

We are proposing to use the code TDKENO to model TREAT. TDKENO solves the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons. Instead of directly integrating this equation, the neutron flux is factored into two components, a rapidly varying amplitude equation and a slowly varying shape equation, and each is solved separately on different time scales. The shape equation is solved using the 3D Monte Carlo transport code KENO, from Oak Ridge National Laboratory's SCALE code package. Using the Monte Carlo method to solve the shape equation is still computationally intensive, but the operation is only performed when needed. The amplitude equation is solved deterministically and frequently, so the method gives an accurate time-dependent solution without having to repeatedly recompute the flux shape. We have modified TDKENO to incorporate KENO-VI so that we may accurately represent the geometries within TREAT. This paper explains the motivation behind using generalized geometry and provides the results of our modifications. TDKENO uses the Improved Quasi-Static method to accomplish this. In this method, the neutron flux is factored into two components. One component is a purely time-dependent and rapidly varying amplitude function, which is solved deterministically and very frequently (small time steps). The other is a slowly varying flux shape function that weakly depends on time and is only solved when needed (significantly larger time steps).
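In the simplest one-group, frozen-shape limit, the amplitude equation of this factorization reduces to the familiar point-kinetics equations, which illustrates why the amplitude needs small time steps while the shape does not. The sketch below uses assumed, generic parameters, not TREAT data:

```python
import numpy as np

# Toy sketch of the amplitude half of the quasi-static factorization: with
# the shape frozen, the amplitude n(t) obeys point kinetics with one
# delayed-neutron group.  All parameter values are assumed and illustrative.

beta, lam, Lam = 0.007, 0.08, 1e-4  # delayed fraction, decay const, generation time
rho = 0.003                         # step reactivity insertion (rho < beta)

def rhs(y):
    n, C = y
    return np.array([(rho - beta) / Lam * n + lam * C,
                     beta / Lam * n - lam * C])

y = np.array([1.0, beta / (lam * Lam)])  # critical steady state at n = 1
dt = 1e-4                                # small steps resolve the fast amplitude
for _ in range(5000):                    # integrate to t = 0.5 s with RK4
    k1 = rhs(y); k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2); k4 = rhs(y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print(y[0])  # prompt jump toward beta/(beta - rho) = 1.75, then slow delayed rise
```

The fast prompt jump is what forces the small amplitude steps; the shape, by contrast, drifts on the slow delayed-neutron scale, which is why the expensive Monte Carlo solve can be deferred to much larger time steps.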

  10. Formation of Cyclic Steps due to the Surge-type Turbidity Currents in a Flume Experiment

    NASA Astrophysics Data System (ADS)

    Yokokawa, M.

    2016-12-01

    Supercritical turbidity currents often form crescentic, step-like wavy structures, which have been found in submarine canyons and deltaic environments. Field observations of turbidity currents and seabed topography on the Squamish delta in British Columbia, Canada, revealed that cyclic steps are formed by surge-type turbidity currents (e.g., Hughes Clarke et al., 2012a; 2012b; 2014). The high-density portion of the flow, which shapes the sea-floor morphology, lasted only 30-60 seconds. The question arises whether paleo-flow conditions can be reconstructed from the morphologic features of these steps. The answer is not yet known, because there have been no experiments on the formative conditions of cyclic steps produced by surge-type turbidity currents. Here we performed preliminary experiments on the formation of cyclic steps by multiple surge-type density currents and compared the morphology of the resulting steps with that of the Squamish delta. First, we measured the wavelength and wave height of each step from elevation profiles of each channel of the Squamish delta and calculated the wave steepness. The wave steepness of active steps ranges from about 0.05 to 0.15, which is relatively large compared with other sediment waves, and is generally larger in the proximal region. The experiments were performed at the Osaka Institute of Technology. A flume 7.0 m long, 0.3 m deep, and 2 cm wide was suspended in a larger tank, 7.6 m long, 1.2 m deep, and 0.3 m wide, filled with water. The inner flume was tilted at 7 degrees. A mixture of salt water (1.17 g/cm3) and plastic particles (1.5 g/cm3, 0.1-0.18 mm in diameter), in a weight ratio of 10:1, was poured into the upstream end of the inner flume from a head tank for 5 seconds. The discharge of the mixture was 240 mL/s; thus 1200 mL of mixture was released into the inner flume per surge. We made 130 surges. 
As a result, four steps ultimately formed, migrating in the upstream direction. The wave steepness of the steps increased as the number of runs increased and approached the value observed at Squamish. We also ran an experiment with a continuous turbidity current. The conditions were the same as those of the surge-type experiment except for the duration of the run, which was 990 seconds; this run did not form cyclic steps.

  11. Performance monitoring and response conflict resolution associated with choice stepping reaction tasks.

    PubMed

    Watanabe, Tatsunori; Tsutou, Kotaro; Saito, Kotaro; Ishida, Kazuto; Tanabe, Shigeo; Nojima, Ippei

    2016-11-01

    Choice reaction requires response conflict resolution, and the resolution processes that occur during a choice stepping reaction task undertaken in a standing position, which requires maintenance of balance, may be different to those processes occurring during a choice reaction task performed in a seated position. The study purpose was to investigate the resolution processes during a choice stepping reaction task at the cortical level using electroencephalography and compare the results with a control task involving ankle dorsiflexion responses. Twelve young adults either stepped forward or dorsiflexed the ankle in response to a visual imperative stimulus presented on a computer screen. We used the Simon task and examined the error-related negativity (ERN) that follows an incorrect response and the correct-response negativity (CRN) that follows a correct response. Error was defined as an incorrect initial weight transfer for the stepping task and as an incorrect initial tibialis anterior activation for the control task. Results revealed that ERN and CRN amplitudes were similar in size for the stepping task, whereas the amplitude of ERN was larger than that of CRN for the control task. The ERN amplitude was also larger in the stepping task than the control task. These observations suggest that a choice stepping reaction task involves a strategy emphasizing post-response conflict and general performance monitoring of actual and required responses and also requires greater cognitive load than a choice dorsiflexion reaction. The response conflict resolution processes appear to be different for stepping tasks and reaction tasks performed in a seated position.

  12. Connecting spatial and temporal scales of tropical precipitation in observations and the MetUM-GA6

    NASA Astrophysics Data System (ADS)

    Martin, Gill M.; Klingaman, Nicholas P.; Moise, Aurel F.

    2017-01-01

    This study analyses tropical rainfall variability (on a range of temporal and spatial scales) in a set of parallel Met Office Unified Model (MetUM) simulations at a range of horizontal resolutions, which are compared with two satellite-derived rainfall datasets. We focus on the shorter scales, i.e. from the native grid and time step of the model through sub-daily to seasonal, since previous studies have paid relatively little attention to sub-daily rainfall variability and how this feeds through to longer scales. We find that the behaviour of the deep convection parametrization in this model on the native grid and time step is largely independent of the grid-box size and time step length over which it operates. There is also little difference in the rainfall variability on larger/longer spatial/temporal scales. Tropical convection in the model on the native grid/time step is spatially and temporally intermittent, producing very large rainfall amounts interspersed with grid boxes/time steps of little or no rain. In contrast, switching off the deep convection parametrization, albeit at an unrealistic resolution for resolving tropical convection, results in very persistent (for limited periods), but very sporadic, rainfall. In both cases, spatial and temporal averaging smoothes out this intermittency. On the ~100 km scale, for oceanic regions, the spectra of 3-hourly and daily mean rainfall in the configurations with parametrized convection agree fairly well with those from satellite-derived rainfall estimates, while at ~10-day timescales the averages are overestimated, indicating a lack of intra-seasonal variability. Over tropical land the results are more varied, but the model often underestimates the daily mean rainfall (partly as a result of a poor diurnal cycle) but still lacks variability on intra-seasonal timescales. 
Ultimately, such work will shed light on how uncertainties in modelling small-/short-scale processes relate to uncertainty in climate change projections of rainfall distribution and variability, with a view to reducing such uncertainty through improved modelling of small-/short-scale processes.

  13. A novel grid-based mesoscopic model for evacuation dynamics

    NASA Astrophysics Data System (ADS)

    Shi, Meng; Lee, Eric Wai Ming; Ma, Yi

    2018-05-01

    This study presents a novel grid-based mesoscopic model for evacuation dynamics. In this model, the evacuation space is discretised into larger cells than those used in microscopic models. This approach directly computes the dynamic changes in crowd density in each cell over the course of an evacuation. The density flow is driven by the density-speed correlation. The computation is faster than in traditional cellular automata evacuation models, which determine density by computing the movements of each pedestrian. To demonstrate the feasibility of this model, we apply it to a series of practical scenarios and conduct a parameter sensitivity study of the effect of changes in time step δ. The simulation results show that within the valid range of δ, changing δ has only a minor impact on the simulation. The model also makes it possible to directly acquire key information such as bottleneck areas from a time-varied dynamic density map, even when a relatively large time step is adopted. We use the commercial software AnyLogic to evaluate the model. The result shows that the mesoscopic model is more efficient than the microscopic model and provides more in-situ details (e.g., pedestrian movement patterns) than the macroscopic models.
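    A density update of this mesoscopic kind can be sketched in one dimension. The sketch below is illustrative only, with invented parameters (a linear density-speed relation and a single exit at cell 0); it shows the key idea that cells exchange density according to a density-speed correlation rather than tracking individual pedestrians.

```python
import numpy as np

# Minimal 1D mesoscopic density update (all parameters illustrative).
V_MAX, RHO_MAX = 1.5, 5.0   # free walking speed (m/s), jam density (ped/m^2)

def speed(rho):
    """Linear density-speed correlation: cells slow down as they fill up."""
    return V_MAX * np.clip(1.0 - rho / RHO_MAX, 0.0, 1.0)

def step_density(rho, dt, dx=1.0):
    """One conservative update: density flows toward the exit at cell 0."""
    flux = rho * speed(rho) * dt / dx             # amount leaving each cell
    flux = np.minimum(flux, rho)                  # cannot move more than is present
    new = rho.copy()
    new[:-1] += flux[1:]                          # arrivals from the cell behind
    new[1:]  -= flux[1:]                          # departures
    new[0]   -= flux[0]                           # outflow through the exit
    return np.maximum(new, 0.0)

rho = np.array([1.0, 3.0, 4.0, 2.0])
rho1 = step_density(rho, dt=0.2)
```

Because each update only moves or removes density, total occupancy decreases monotonically as pedestrians leave through the exit, and a coarse density map like `rho1` is exactly the kind of quantity from which bottleneck cells can be read off directly.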

  14. Extreme learning machine for reduced order modeling of turbulent geophysical flows.

    PubMed

    San, Omer; Maulik, Romit

    2018-04-01

    We investigate the application of artificial neural networks to stabilize proper orthogonal decomposition-based reduced order models for quasistationary geophysical turbulent flows. An extreme learning machine concept is introduced for computing an eddy-viscosity closure dynamically to incorporate the effects of the truncated modes. We consider a four-gyre wind-driven ocean circulation problem as our prototype setting to assess the performance of the proposed data-driven approach. Our framework provides a significant reduction in computational time and effectively retains the dynamics of the full-order model during the forward simulation period beyond the training data set. Furthermore, we show that the method is robust for larger choices of time steps and can be used as an efficient and reliable tool for long time integration of general circulation models.
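    The extreme learning machine concept itself is compact: hidden-layer weights are drawn randomly and frozen, and only the linear output layer is trained, via a single regularized least-squares solve. The sketch below is a generic ELM regressor of the kind that could map resolved quantities to a closure value; the layer sizes, regularization, and the test function are illustrative, not the paper's eddy-viscosity closure.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=64, reg=1e-6):
    """Fit an ELM: random frozen hidden weights, least-squares output layer."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden activations
    # Ridge-regularized normal equations for the output weights only.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy regression target standing in for a closure map.
X = np.linspace(0.0, 1.0, 200)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0])
model = elm_fit(X, y)
err = np.max(np.abs(elm_predict(model, X) - y))
```

Since training reduces to one linear solve, the model can be refit or evaluated cheaply inside a time loop, which is what makes the approach attractive for computing a closure dynamically at every step.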

  15. Extreme learning machine for reduced order modeling of turbulent geophysical flows

    NASA Astrophysics Data System (ADS)

    San, Omer; Maulik, Romit

    2018-04-01

    We investigate the application of artificial neural networks to stabilize proper orthogonal decomposition-based reduced order models for quasistationary geophysical turbulent flows. An extreme learning machine concept is introduced for computing an eddy-viscosity closure dynamically to incorporate the effects of the truncated modes. We consider a four-gyre wind-driven ocean circulation problem as our prototype setting to assess the performance of the proposed data-driven approach. Our framework provides a significant reduction in computational time and effectively retains the dynamics of the full-order model during the forward simulation period beyond the training data set. Furthermore, we show that the method is robust for larger choices of time steps and can be used as an efficient and reliable tool for long time integration of general circulation models.

  16. Age-related differences in the maintenance of frontal plane dynamic stability while stepping to targets

    PubMed Central

    Hurt, Christopher P.; Grabiner, Mark D.

    2015-01-01

    Older adults may be vulnerable to frontal plane dynamic instability, which is of clinical significance. The purpose of the current investigation was to examine the age-related differences in frontal plane dynamic stability by quantifying the margin of stability and hip abductor moment generation of subjects performing a single crossover step and sidestep to targets that created three different step widths during forward locomotion. Nineteen young adults (9 males, age: 22.9±3.1 years, height: 174.3±10.2 cm, mass: 71.7±13.0 kg) and 18 older adults (9 males, age: 72.8±5.2 years, height: 174.9±8.6 cm, mass: 78.0±16.3 kg) participated. For each walking trial, subjects performed a single laterally-directed step to a target on a force plate. Subjects were instructed to “perform the lateral step and keep walking forward”. The peak hip abductor moment of the stepping limb was 42% larger in older adults than in younger adults (p<0.001). Older adults were also more stable than younger adults at all step targets (p<0.001). Older adults executed the lateral step with slower forward-directed and lateral-directed velocity despite similar step widths. Age-related differences in hip abduction moments may reflect greater muscular effort by older adults to reduce the likelihood of becoming unstable. The results of this investigation, in which subjects performed progressively larger lateral-directed steps, provide evidence that older adults may not be more laterally unstable than younger adults. However, age-related differences in this study could also reflect a compensatory strategy by older adults to ensure stability while performing this task. PMID:25627870

  17. Age-related differences in the maintenance of frontal plane dynamic stability while stepping to targets.

    PubMed

    Hurt, Christopher P; Grabiner, Mark D

    2015-02-26

    Older adults may be vulnerable to frontal plane dynamic instability, which is of clinical significance. The purpose of the current investigation was to examine the age-related differences in frontal plane dynamic stability by quantifying the margin of stability and hip abductor moment generation of subjects performing a single crossover step and sidestep to targets that created three different step widths during forward locomotion. Nineteen young adults (9 males, age: 22.9±3.1 years, height: 174.3±10.2 cm, mass: 71.7±13.0 kg) and 18 older adults (9 males, age: 72.8±5.2 years, height: 174.9±8.6 cm, mass: 78.0±16.3 kg) participated. For each walking trial, subjects performed a single laterally-directed step to a target on a force plate. Subjects were instructed to "perform the lateral step and keep walking forward". The peak hip abductor moment of the stepping limb was 42% larger in older adults than in younger adults (p<0.001). Older adults were also more stable than younger adults at all step targets (p<0.001). Older adults executed the lateral step with slower forward-directed and lateral-directed velocity despite similar step widths. Age-related differences in hip abduction moments may reflect greater muscular effort by older adults to reduce the likelihood of becoming unstable. The results of this investigation, in which subjects performed progressively larger lateral-directed steps, provide evidence that older adults may not be more laterally unstable than younger adults. However, age-related differences in this study could also reflect a compensatory strategy by older adults to ensure stability while performing this task. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Text Classification for Organizational Researchers

    PubMed Central

    Kobayashi, Vladimer B.; Mol, Stefan T.; Berkers, Hannah A.; Kismihók, Gábor; Den Hartog, Deanne N.

    2017-01-01

    Organizations are increasingly interested in classifying texts or parts thereof into categories, as this enables more effective use of their information. Manual procedures for text classification work well for up to a few hundred documents. However, when the number of documents is larger, manual procedures become laborious, time-consuming, and potentially unreliable. Techniques from text mining facilitate the automatic assignment of text strings to categories, making classification expedient, fast, and reliable, which creates potential for its application in organizational research. The purpose of this article is to familiarize organizational researchers with text mining techniques from machine learning and statistics. We describe the text classification process in several roughly sequential steps, namely training data preparation, preprocessing, transformation, application of classification techniques, and validation, and provide concrete recommendations at each step. To help researchers develop their own text classifiers, the R code associated with each step is presented in a tutorial. The tutorial draws from our own work on job vacancy mining. We end the article by discussing how researchers can validate a text classification model and the associated output. PMID:29881249
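    The article's own tutorial presents these steps in R; as a language-agnostic illustration, the same train → preprocess → transform → classify pipeline can be sketched with a bag-of-words Naive Bayes classifier. The toy documents and labels below are invented (loosely echoing the job-vacancy example), and the tokenizer is deliberately crude.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()                       # crude preprocessing step

def train(docs):
    """docs: list of (text, label). Learns per-class word counts and priors."""
    counts, totals, priors = defaultdict(Counter), Counter(), Counter()
    for text, label in docs:
        priors[label] += 1
        toks = tokenize(text)
        counts[label].update(toks)
        totals[label] += len(toks)
    vocab = {w for c in counts.values() for w in c}
    return counts, totals, priors, vocab

def classify(model, text):
    """Pick the label maximizing the Laplace-smoothed log posterior."""
    counts, totals, priors, vocab = model
    n = sum(priors.values())
    best, best_lp = None, -math.inf
    for label in priors:
        lp = math.log(priors[label] / n)
        for w in tokenize(text):
            lp += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("great benefits flexible hours", "vacancy"),
        ("we study employee motivation", "research"),
        ("apply now competitive salary", "vacancy"),
        ("survey results show motivation", "research")]
model = train(docs)
```

The validation step the article describes would then consist of holding out labeled documents and scoring `classify` against their true labels.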

  19. Establishment of a low recycling state with full density control by active pumping of the closed helical divertor at LHD

    NASA Astrophysics Data System (ADS)

    Motojima, G.; Masuzaki, S.; Tanaka, H.; Morisaki, T.; Sakamoto, R.; Murase, T.; Tsuchibushi, Y.; Kobayashi, M.; Schmitz, O.; Shoji, M.; Tokitani, M.; Yamada, H.; Takeiri, Y.; The LHD Experiment Group

    2018-01-01

    Superior control of particle recycling and hence full governance of plasma density has been established in the Large Helical Device (LHD) using largely enhanced active pumping of the closed helical divertor (CHD). In-vessel cryo-sorption pumping systems inside the CHD in five out of ten inner toroidal divertor sections have been developed and installed step by step in the LHD. The total effective pumping speed obtained was 67 ± 5 m^3 s^-1 in hydrogen, which is approximately seven times larger than previously obtained. As a result, a low recycling state was observed with CHD pumping for the first time in LHD featuring excellent density control even under intense pellet fueling conditions. A global particle confinement time (τp*) is used for comparison of operation with and without the CHD pumping. The τp* was evaluated from the density decay after the fueling of hydrogen pellet injection or gas puffing in NBI plasmas. A reliably low base density before the fueling and short τp* after the fueling were obtained during the CHD pumping, demonstrating for the first time full control of the particle balance with active pumping in the CHD.

  20. Effect of resource constraints on intersimilar coupled networks.

    PubMed

    Shai, S; Dobson, S

    2012-12-01

    Most real-world networks do not live in isolation but are often coupled together within a larger system. Recent studies have shown that intersimilarity between coupled networks increases the connectivity of the overall system. However, unlike connected nodes in a single network, coupled nodes often share resources, like time, energy, and memory, which can impede flow processes through contention when intersimilarly coupled. We study a model of a constrained susceptible-infected-recovered (SIR) process on a system consisting of two random networks sharing the same set of nodes, where nodes are limited to interact with (and therefore infect) a maximum number of neighbors at each epidemic time step. We obtain that, in agreement with previous studies, when no limit exists (regular SIR model), positively correlated (intersimilar) coupling results in a lower epidemic threshold than negatively correlated (interdissimilar) coupling. However, in the case of the constrained SIR model, the obtained epidemic threshold is lower with negatively correlated coupling. The latter finding differentiates our work from previous studies and provides another step towards revealing the qualitative differences between single and coupled networks.
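    A toy version of the constrained process makes the resource limit concrete: two edge sets share one node set, and each infected node may contact at most `cap` neighbors (drawn from the union of both layers) per time step. The networks, parameters, and single-step recovery below are an illustrative sketch, not the paper's model or its analysis.

```python
import random

def constrained_sir(n, edges_a, edges_b, cap, beta, seed=0):
    """Run a constrained SIR epidemic on two coupled layers; return final R count."""
    rng = random.Random(seed)
    nbrs = {i: set() for i in range(n)}
    for u, v in edges_a + edges_b:                  # coupled system: union of both layers
        nbrs[u].add(v); nbrs[v].add(u)
    state = ["S"] * n
    state[0] = "I"                                  # single seed node
    while "I" in state:
        new_inf = []
        for i, s in enumerate(state):
            if s != "I":
                continue
            # Resource constraint: at most `cap` contacts this time step.
            contacts = rng.sample(sorted(nbrs[i]), min(cap, len(nbrs[i])))
            for j in contacts:
                if state[j] == "S" and rng.random() < beta:
                    new_inf.append(j)
            state[i] = "R"                          # recover after one step
        for j in new_inf:
            state[j] = "I"
    return state.count("R")

# Two illustrative layers on six shared nodes: a ring and its chords.
ring = [(i, (i + 1) % 6) for i in range(6)]
chords = [(i, (i + 2) % 6) for i in range(6)]
```

With `cap=0` no contacts occur and the epidemic dies with the seed, while a generous `cap` and `beta=1.0` infect the whole connected union; sweeping `cap` between these extremes is the kind of experiment that exposes how contention shifts the epidemic threshold.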

  1. Effect of resource constraints on intersimilar coupled networks

    NASA Astrophysics Data System (ADS)

    Shai, S.; Dobson, S.

    2012-12-01

    Most real-world networks do not live in isolation but are often coupled together within a larger system. Recent studies have shown that intersimilarity between coupled networks increases the connectivity of the overall system. However, unlike connected nodes in a single network, coupled nodes often share resources, like time, energy, and memory, which can impede flow processes through contention when intersimilarly coupled. We study a model of a constrained susceptible-infected-recovered (SIR) process on a system consisting of two random networks sharing the same set of nodes, where nodes are limited to interact with (and therefore infect) a maximum number of neighbors at each epidemic time step. We obtain that, in agreement with previous studies, when no limit exists (regular SIR model), positively correlated (intersimilar) coupling results in a lower epidemic threshold than negatively correlated (interdissimilar) coupling. However, in the case of the constrained SIR model, the obtained epidemic threshold is lower with negatively correlated coupling. The latter finding differentiates our work from previous studies and provides another step towards revealing the qualitative differences between single and coupled networks.

  2. Intelligent cooperation: A framework of pedagogic practice in the operating room.

    PubMed

    Sutkin, Gary; Littleton, Eliza B; Kanter, Steven L

    2018-04-01

    Surgeons who work with trainees must address their learning needs without compromising patient safety. We used a constructivist grounded theory approach to examine videos of five teaching surgeries. Attending surgeons were interviewed afterward while watching cued videos of their cases. Codes were iteratively refined into major themes, and then constructed into a larger framework. We present a novel framework, Intelligent Cooperation, which accounts for the highly adaptive, iterative features of surgical teaching in the operating room. Specifically, we define Intelligent Cooperation as a sequence of coordinated exchanges between attending and trainee that accomplishes small surgical steps while simultaneously uncovering the trainee's learning needs. Intelligent Cooperation requires the attending to accurately determine learning needs, perform real-time needs assessment, provide critical scaffolding, and work with the learner to accomplish the next step in the surgery. This is achieved through intense, coordinated verbal and physical cooperation. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Low Temperature Metal Free Growth of Graphene on Insulating Substrates by Plasma Assisted Chemical Vapor Deposition

    PubMed Central

    Muñoz, R.; Munuera, C.; Martínez, J. I.; Azpeitia, J.; Gómez-Aleixandre, C.; García-Hernández, M.

    2016-01-01

    Direct growth of graphene films on dielectric substrates (quartz and silica) is reported, by means of remote electron cyclotron resonance plasma-assisted chemical vapor deposition (r-ECR-CVD) at low temperature (650°C). Using a two-step deposition process (nucleation and growth), by changing the partial pressure of the gas precursors at constant temperature, mostly monolayer continuous films with grain sizes up to 500 nm are grown, exhibiting transmittance larger than 92% and sheet resistance as low as 900 Ω·sq^-1. The grain size and nucleation density of the resulting graphene sheets can be controlled by varying the deposition time and pressure. In addition, first-principles DFT-based calculations have been carried out in order to rationalize the oxygen reduction at the quartz surface observed experimentally. This method is easily scalable and avoids damaging and expensive transfer steps of graphene films, improving compatibility with current fabrication technologies. PMID:28070341

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald Martin

    The research studied one-step and two-step Isotope Separation On-Line (ISOL) targets for future radioactive beam facilities with high driver-beam power through advanced computer simulations. Uranium carbide in the form of foils was used as the target material because of the increasing demand for actinide targets in rare-isotope beam facilities and because such material was under development in ISAC at TRIUMF when this project started. Simulations of effusion were performed for one-step and two-step targets, and the effects of target dimensions and foil matrix were studied. Diffusion simulations were limited by the availability of diffusion parameters for UCx material at reduced density; however, the viability of the combined diffusion-effusion simulation methodology was demonstrated, and it could be used to extract physical parameters such as diffusion coefficients and effusion delay times from experimental isotope release curves. Dissipation of the heat from the isotope-producing targets is the limiting factor for high-power beam operation both for the direct and two-step targets. Detailed target models were used to simulate proton beam interactions with the targets to obtain the fission rates and power deposition distributions, which were then applied in heat transfer calculations to study the performance of the targets. Results indicate that a direct target, with specifications matching the ISAC TRIUMF target, could operate in a 500-MeV proton beam at beam powers up to ~40 kW, producing ~8 × 10^13 fissions/s with maximum temperature in UCx below 2200 °C. Targets with larger radius allow higher beam powers and fission rates. For target radii in the range 9 mm to 30 mm, the achievable fission rate increases almost linearly with target radius; however, the effusion delay time also increases linearly with target radius.

  5. Relation between the microstructure and the electromagnetic properties of BaTiO3/Ni0.5Zn0.5Fe2O4 ceramic composite

    NASA Astrophysics Data System (ADS)

    Xiao, Bin; Tang, Yu; Ma, Guodong; Ma, Ning; Du, Piyi

    2015-06-01

    The microstructure-property relation in ferroelectric/ferromagnetic composite is investigated in detail, exemplified by typical sol-gel-derived 0.3BTO/0.7NZFO ceramic composite. The effect of microstructural factors including intergrain connectivity, grain size and interfaces on the dielectric and magnetic properties of the composite prepared by conventional ceramic method and three-step sintering method is discussed both experimentally and theoretically. It reveals that the dielectric behavior of the composite is controlled by a hybrid dielectric process that combines the contribution of Debye-like dipoles and Maxwell-Wagner (M-W or interfacial) polarization. Enhanced dielectric, magnetic and conductive behaviors appear in the composite with better intergrain connectivity and larger grain size derived by sol-gel route and three-step sintering method. The effective permittivity contributed by Debye-like dipoles exhibits a value of ~130,000 in three-step sintered composite, which is almost the same as that in conventionally sintered one, but that contributed by M-W response is much smaller in the former. Compared with conventionally prepared samples, the relaxation time (τ) is 3.476 × 10^-6 s, about one order of magnitude smaller, and the dc electrical conductivity is 3.890 × 10^-3 S/m, one order of magnitude higher in three-step sintered composite. The minimum dielectric loss reveals almost the same (~0.2) for all samples, but shows distinguishable difference in the low-frequency region. Meanwhile, an initial permeability of 84, twice as large as that of conventionally prepared composite and 56% of the value of single-phased NZFO ferrite (~150), and a saturation magnetization of 63.5 emu/g, 32% higher than that of conventional one and approximately 84% of the value of single-phased NZFO ferrite (~76 emu/g), appear simultaneously in three-step sintered composite with larger grain size and better intergrain connectivity. 
This finding helps establish a more explicit view of the physics of multifunctional composite materials, and the composite with optimized microstructure is promising as a high-performance material.

  6. The ELPA library: scalable parallel eigenvalue solutions for electronic structure theory and computational science.

    PubMed

    Marek, A; Blum, V; Johanni, R; Havu, V; Lang, B; Auckenthaler, T; Heinecke, A; Bungartz, H-J; Lederer, H

    2014-05-28

    Obtaining the eigenvalues and eigenvectors of large matrices is a key problem in electronic structure theory and many other areas of computational science. The computational effort formally scales as O(N^3) with the size of the investigated problem, N (e.g. the electron count in electronic structure theory), and thus often defines the system size limit that practical calculations cannot overcome. In many cases, more than just a small fraction of the possible eigenvalue/eigenvector pairs is needed, so that iterative solution strategies that focus only on a few eigenvalues become ineffective. Likewise, it is not always desirable or practical to circumvent the eigenvalue solution entirely. We here review some current developments regarding dense eigenvalue solvers and then focus on the Eigenvalue soLvers for Petascale Applications (ELPA) library, which facilitates the efficient algebraic solution of symmetric and Hermitian eigenvalue problems for dense matrices that have real-valued and complex-valued matrix entries, respectively, on parallel computer platforms. ELPA addresses standard as well as generalized eigenvalue problems, relying on the well-documented matrix layout of the Scalable Linear Algebra PACKage (ScaLAPACK) library but replacing all actual parallel solution steps with subroutines of its own. For these steps, ELPA significantly outperforms the corresponding ScaLAPACK routines and proprietary libraries that implement the ScaLAPACK interface (e.g. Intel's MKL). The most time-critical step is the reduction of the matrix to tridiagonal form and the corresponding backtransformation of the eigenvectors. ELPA offers both a one-step tridiagonalization (successive Householder transformations) and a two-step transformation that is more efficient especially towards larger matrices and larger numbers of CPU cores. ELPA is based on the MPI standard, with an early hybrid MPI-OpenMP implementation available as well. 
Scalability beyond 10,000 CPU cores for problem sizes arising in the field of electronic structure theory is demonstrated for current high-performance computer architectures such as Cray or Intel/Infiniband. For a matrix of dimension 260,000, scalability up to 295,000 CPU cores has been shown on BlueGene/P.
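    The tridiagonal reduction at the heart of such a solver can be illustrated with a plain Householder sweep. This is the textbook "one-step" reduction in dense NumPy, not ELPA's blocked, distributed implementation; the 6×6 random symmetric matrix is just a demonstration input.

```python
import numpy as np

def tridiagonalize(A):
    """Reduce a real symmetric matrix to tridiagonal form via successive
    Householder similarity transforms (eigenvalues are preserved)."""
    T = A.astype(float).copy()
    n = T.shape[0]
    for k in range(n - 2):
        x = T[k + 1:, k].copy()                # column entries below the subdiagonal
        s = 1.0 if x[0] >= 0 else -1.0
        v = x.copy()
        v[0] += s * np.linalg.norm(x)          # reflector mapping x onto -s*||x||*e1
        norm_v = np.linalg.norm(v)
        if norm_v < 1e-14:
            continue                           # column already reduced
        v /= norm_v
        H = np.eye(n)
        H[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)
        T = H @ T @ H                          # symmetric similarity transform
    return T

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
A = A + A.T                                    # symmetrize the test matrix
T = tridiagonalize(A)
```

Once `T` is tridiagonal, its eigenvalues can be found much more cheaply than those of the full matrix, which is why the reduction (and the matching backtransformation of eigenvectors) dominates the runtime and is the step the two-stage variant optimizes.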

  7. Experimental and Analytical Evaluation of Stressing-Rate State Evolution in Rate-State Friction Laws

    NASA Astrophysics Data System (ADS)

    Bhattacharya, P.; Rubin, A. M.; Bayart, E.; Savage, H. M.; Marone, C.; Beeler, N. M.

    2013-12-01

    Standard rate and state friction laws fail to explain the full range of observations from laboratory friction experiments. A new state evolution law has been proposed by Nagata et al. (2012) that adds a linear stressing-rate-dependent term to the Dieterich (aging) law, which may provide a remedy. They introduce a parameter c that controls the contribution of the stressing rate to state evolution. We show through analytical approximations that the new law can transition between the responses of the traditional Dieterich (aging) and Ruina (slip) laws in velocity step up/down experiments when the value of c is tuned properly. In particular, for c = 0 the response is pure aging, while for finite, non-zero c one observes slip-law-like behavior for small velocity jumps but aging-law-like response for larger jumps. The magnitude of the velocity jump required to see this transition between aging and slip behavior increases as c increases. In the limit of c >> 1 the response to velocity steps becomes purely slip-law-like. In this limit, numerical simulations show that this law loses its appealing time-dependent healing property. An approach using Markov Chain Monte Carlo parameter search on data for large-magnitude velocity step tests reveals that it is only possible to determine a lower bound on c using datasets that are well explained by the slip law. For a dataset with velocity steps of two orders of magnitude on simulated fault gouge we find this lower bound to be c ≈ 10.0. This is significantly larger than c ≈ 2.0 used by Nagata et al. (2012) to fit their data (mainly bare rock experiments with smaller excursions from steady state than our dataset). Similar parameter estimation exercises on slide-hold-slide data reveal that none of the state evolution laws considered (Dieterich, Ruina, Kato-Tullis, and Nagata) matches the relevant features of the data. 
In particular, even the aging law predicts only the correct rate of healing for long hold times, not the correct amount of healing. For c = 10.0, the Nagata law shows significant slip dependence in healing rate for long hold times, which is at odds with the lab data and similar to the slip law response. If one accepts frictional healing observed in the laboratory as a 'proper' analog for fault strengthening over the interseismic period, we conclude that none of the investigated state evolution laws provides a comprehensive and correct constitutive relation.
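    For reference, the aging-law state evolution discussed above, dθ/dt = 1 − Vθ/Dc, can be integrated directly; the Nagata law adds a stressing-rate term of the form −(c/b)(θ/σ)(dτ/dt) to this right-hand side. The sketch below uses invented parameter values and plain forward Euler, and shows only the baseline aging response to a velocity step.

```python
# Forward-Euler integration of Dieterich (aging) state evolution,
# dtheta/dt = 1 - V*theta/Dc. All parameter values are illustrative.
def evolve_aging(theta, V, Dc, dt, n):
    for _ in range(n):
        theta += dt * (1.0 - V * theta / Dc)
    return theta

# Velocity step: start at the steady state for V0 = 1e-6 m/s, then slide
# at V1 = 1e-5 m/s; the state relaxes toward the new steady value Dc/V1.
Dc = 1e-5
theta0 = Dc / 1e-6                 # steady state at the pre-step velocity
theta1 = evolve_aging(theta0, V=1e-5, Dc=Dc, dt=1e-3, n=20000)
```

At steady state the two terms balance (θ_ss = Dc/V), so a velocity step up makes the state decay exponentially over a slip distance of order Dc; the competing laws differ precisely in the shape and slip dependence of this relaxation.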

  8. Effective and efficient learning in the operating theater with intraoperative video-enhanced surgical procedure training.

    PubMed

    van Det, M J; Meijerink, W J H J; Hoff, C; Middel, B; Pierie, J P E N

    2013-08-01

    INtraoperative Video Enhanced Surgical procedure Training (INVEST) is a new training method designed to improve the transition from basic skills training in a skills lab to procedural training in the operating theater. Traditionally, the master-apprentice model (MAM) is used for procedural training in the operating theater, but this model lacks uniformity and efficiency at the beginning of the learning curve. This study was designed to investigate the effectiveness and efficiency of INVEST compared to MAM. Ten surgical residents with no laparoscopic experience were recruited for a laparoscopic cholecystectomy training curriculum either by the MAM or with INVEST. After a uniform course in basic laparoscopic skills, each trainee performed six cholecystectomies that were digitally recorded. For 14 steps of the procedure, an observer who was blinded for the type of training determined whether the step was performed entirely by the trainee (2 points), partially by the trainee (1 point), or by the supervisor (0 points). Time measurements revealed the total procedure time and the amount of effective procedure time during which the trainee acted as the operating surgeon. Results were compared between both groups. Trainees in the INVEST group were awarded statistically significant more points (115.8 vs. 70.2; p < 0.001) and performed more steps without the interference of the supervisor (46.6 vs. 18.8; p < 0.001). Total procedure time was not lengthened by INVEST, and the part performed by trainees was significantly larger (69.9 vs. 54.1 %; p = 0.004). INVEST enhances effectiveness and training efficiency for procedural training inside the operating theater without compromising operating theater time efficiency.

  9. Time versus energy minimization migration strategy varies with body size and season in long-distance migratory shorebirds.

    PubMed

    Zhao, Meijuan; Christie, Maureen; Coleman, Jonathan; Hassell, Chris; Gosbell, Ken; Lisovski, Simeon; Minton, Clive; Klaassen, Marcel

    2017-01-01

    Migrants have been hypothesised to use different migration strategies between seasons: a time-minimization strategy during their pre-breeding migration towards the breeding grounds and an energy-minimization strategy during their post-breeding migration towards the wintering grounds. Besides season, we propose body size as a key factor in shaping migratory behaviour. Specifically, given that body size is expected to correlate negatively with maximum migration speed and that large birds tend to use more time to complete their annual life-history events (such as moult, breeding and migration), we hypothesise that large-sized species are time stressed all year round. Consequently, large birds are not only likely to adopt a time-minimization strategy during pre-breeding migration, but also during post-breeding migration, to guarantee a timely arrival at both the non-breeding (i.e. wintering) and breeding grounds. We tested this idea using individual tracks across six long-distance migratory shorebird species (family Scolopacidae) along the East Asian-Australasian Flyway varying in size from 50 g to 750 g lean body mass. Migration performance was compared between pre- and post-breeding migration using four quantifiable migratory behaviours that serve to distinguish between a time- and energy-minimization strategy, including migration speed, number of staging sites, total migration distance and step length from one site to the next. During pre- and post-breeding migration, the shorebirds generally covered similar distances, but they tended to migrate faster, used fewer staging sites, and tended to use longer step lengths during pre-breeding migration. These seasonal differences are consistent with the prediction that a time-minimization strategy is used during pre-breeding migration, whereas an energy-minimization strategy is used during post-breeding migration. 
However, there was also a tendency for the seasonal difference in migration speed to progressively disappear with an increase in body size, supporting our hypothesis that larger species tend to use time-minimization strategies during both pre- and post-breeding migration. Our study highlights that body size plays an important role in shaping migratory behaviour. Larger migratory bird species are potentially time constrained during not only the pre- but also the post-breeding migration. Conservation of their habitats during both seasons may thus be crucial for averting further population declines.

  10. Sulfide semiconductor materials prepared by high-speed electrodeposition and discussion of electrochemical reaction mechanism

    NASA Astrophysics Data System (ADS)

    Okamoto, Naoki; Kataoka, Kentaro; Saito, Takeyasu

    2017-07-01

    A manufacturing method for SnS using a one-step electrochemical technique was developed. The sulfide semiconductor was formed by electrodeposition using an aqueous bath at low temperatures. The sulfide semiconductor particles produced were characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The highest current density at which SnS was formed was 1800 mA/cm2 at a bath temperature of 293 K, which is 36 times larger than that in a previous deposition process. Analysis of the chronoamperometric current-time transients indicated that in the potential range from -1100 to -2000 mV vs saturated calomel electrode (SCE), the electrodeposition of SnS can be explained by an instantaneous nucleation model.
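
    The abstract does not give the model equations used for the transient analysis; a commonly used diagnostic for instantaneous nucleation in chronoamperometry is the Scharifker-Hills dimensionless current transient, sketched here.

```python
import numpy as np

# Sketch: the Scharifker-Hills dimensionless current transient for
# *instantaneous* nucleation with diffusion-controlled growth, a standard
# diagnostic for chronoamperometric data like that described above.
def instantaneous_nucleation(t_over_tm):
    """(i/i_m)^2 as a function of t/t_m for instantaneous nucleation."""
    x = np.asarray(t_over_tm, dtype=float)
    return (1.9542 / x) * (1.0 - np.exp(-1.2564 * x)) ** 2

# By construction the curve peaks at (t/t_m, (i/i_m)^2) = (1, 1):
print(float(instantaneous_nucleation(1.0)))   # ~ 1.0
```

Experimental transients are nondimensionalized by the peak current and peak time and overlaid on this curve (and its progressive-nucleation counterpart) to decide which model fits.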

  11. Dynamic Emotional Processing in Experiential Therapy: Two Steps Forward, One Step Back

    ERIC Educational Resources Information Center

    Pascual-Leone, Antonio

    2009-01-01

    The study of dynamic and nonlinear change has been a valuable development in psychotherapy process research. However, little advancement has been made in describing how moment-by-moment affective processes contribute to larger units of change. The purpose of this study was to examine observable moment-by-moment sequences in emotional processing as…

  12. Prospective Optimization with Limited Resources

    PubMed Central

    Snider, Joseph; Lee, Dongpyo; Poizner, Howard; Gepshtein, Sergei

    2015-01-01

    The future is uncertain because some forthcoming events are unpredictable and also because our ability to foresee the myriad consequences of our own actions is limited. Here we studied how humans select actions under such extrinsic and intrinsic uncertainty, in view of an exponentially expanding number of prospects on a branching multivalued visual stimulus. A triangular grid of disks of different sizes scrolled down a touchscreen at a variable speed. The larger disks represented larger rewards. The task was to maximize the cumulative reward by touching one disk at a time in a rapid sequence, forming an upward path across the grid, while every step along the path constrained the part of the grid accessible in the future. This task captured some of the complexity of natural behavior in the risky and dynamic world, where ongoing decisions alter the landscape of future rewards. By comparing human behavior with behavior of ideal actors, we identified the strategies used by humans in terms of how far into the future they looked (their “depth of computation”) and how often they attempted to incorporate new information about the future rewards (their “recalculation period”). We found that, for a given task difficulty, humans traded off their depth of computation for the recalculation period. The form of this tradeoff was consistent with a complete, brute-force exploration of all possible paths up to a resource-limited finite depth. A step-by-step analysis of the human behavior revealed that participants took into account very fine distinctions between the future rewards and that they abstained from some simple heuristics in assessment of the alternative paths, such as seeking only the largest disks or avoiding the smaller disks. The participants preferred to reduce their depth of computation or increase the recalculation period rather than sacrifice the precision of computation. PMID:26367309

  13. Prospective Optimization with Limited Resources.

    PubMed

    Snider, Joseph; Lee, Dongpyo; Poizner, Howard; Gepshtein, Sergei

    2015-09-01

    The future is uncertain because some forthcoming events are unpredictable and also because our ability to foresee the myriad consequences of our own actions is limited. Here we studied how humans select actions under such extrinsic and intrinsic uncertainty, in view of an exponentially expanding number of prospects on a branching multivalued visual stimulus. A triangular grid of disks of different sizes scrolled down a touchscreen at a variable speed. The larger disks represented larger rewards. The task was to maximize the cumulative reward by touching one disk at a time in a rapid sequence, forming an upward path across the grid, while every step along the path constrained the part of the grid accessible in the future. This task captured some of the complexity of natural behavior in the risky and dynamic world, where ongoing decisions alter the landscape of future rewards. By comparing human behavior with behavior of ideal actors, we identified the strategies used by humans in terms of how far into the future they looked (their "depth of computation") and how often they attempted to incorporate new information about the future rewards (their "recalculation period"). We found that, for a given task difficulty, humans traded off their depth of computation for the recalculation period. The form of this tradeoff was consistent with a complete, brute-force exploration of all possible paths up to a resource-limited finite depth. A step-by-step analysis of the human behavior revealed that participants took into account very fine distinctions between the future rewards and that they abstained from some simple heuristics in assessment of the alternative paths, such as seeking only the largest disks or avoiding the smaller disks. The participants preferred to reduce their depth of computation or increase the recalculation period rather than sacrifice the precision of computation.
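
    The "ideal actor" comparison described above can be sketched as a brute-force enumeration of all upward paths to a fixed depth of computation. The grid layout and move rule (stay or shift right at each row) are simplifying assumptions, not the paper's exact stimulus geometry.

```python
from itertools import product

# Sketch: depth-limited brute-force path search over a triangular reward grid.
# grid[r][c] is the reward of the disk at row r, column c; each step moves up
# one row and either stays (0) or shifts right (1), constraining future rows.
def best_first_step(grid, start, depth):
    """Return (best first move, best total reward) over all paths of length `depth`."""
    best = (None, float("-inf"))
    for moves in product((0, 1), repeat=depth):   # 2**depth candidate paths
        r, c, total = 0, start, 0.0
        ok = True
        for m in moves:
            r, c = r + 1, c + m
            if r >= len(grid) or c >= len(grid[r]):
                ok = False
                break
            total += grid[r][c]
        if ok and total > best[1]:
            best = (moves[0], total)
    return best

grid = [
    [0.0],          # current position (row 0)
    [1.0, 5.0],     # small disk left, large disk right
    [9.0, 1.0, 1.0],
]
print(best_first_step(grid, 0, 1))   # depth 1 greedily takes the large disk: (1, 5.0)
print(best_first_step(grid, 0, 2))   # depth 2 takes the small disk first: (0, 10.0)
```

The contrast between the two calls illustrates the paper's "depth of computation": a deeper look-ahead can justify a locally worse first step.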

  14. PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Robin Ivey; Balestra, Paolo; Strydom, Gerhard

    A collaborative effort between Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL) as part of the Civil Nuclear Energy Working Group is underway to model the high temperature engineering test reactor (HTTR) loss of forced cooling (LOFC) transient that was performed in December 2010. The coupled version of RELAP5-3D, a thermal fluids code, and PHISICS, a neutronics code, was used to model the transient. The focus of this report is to summarize the changes made to the PHISICS-RELAP5-3D code for implementing an adaptive time step methodology into the code for the first time, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL and the LOFC simulation. Various adaptive schemes are available based on flux or power convergence criteria that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results as well as the University of Rome sub-contractor report documenting the adaptive time step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested using 8 and 26 energy groups. It was found that most of the new adaptive methods lead to significant improvements in the LOFC simulation time required without significant accuracy penalties in the prediction of the fission power and the fuel temperature. In the best performing 8 group model scenarios, a LOFC simulation of 20 hours could be completed in real-time, or even less than real-time, compared with the previous version of the code that completed the same transient 3-8 times slower than real-time. A few of the user choice combinations between the methodologies available and the tolerance settings did, however, result in unacceptably high errors or insignificant gains in simulation time. The study is concluded with recommendations on which methods to use for this HTTR model.
    An important caveat is that these findings are very model-specific and cannot be generalized to other PHISICS/RELAP5-3D models.
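
    A generic flux/power-based adaptive time-step controller of the kind described above can be sketched as follows. This is not the exact PHISICS/RELAP5-3D algorithm; the growth/shrink factors and tolerances are illustrative.

```python
# Sketch: adaptive time-step control driven by a power-convergence criterion.
# The neutronics step dt grows while the relative power change per step stays
# well under tolerance, and shrinks (with the step retried) when it exceeds it.
def adapt_dt(dt, rel_change, tol, grow=2.0, shrink=0.5,
             dt_min=1e-4, dt_max=10.0):
    """Return (new_dt, accept_step) given the relative power change this step."""
    if rel_change > tol:                   # too coarse: reject and refine
        return max(dt * shrink, dt_min), False
    if rel_change < 0.1 * tol:             # well converged: coarsen
        return min(dt * grow, dt_max), True
    return dt, True                        # acceptable as-is

print(adapt_dt(1.0, 5e-2, 1e-2))   # (0.5, False): reject, halve the step
print(adapt_dt(1.0, 5e-4, 1e-2))   # (2.0, True): accept, double the step
```

As the report notes, the payoff (and the failure modes at loose tolerances) depend strongly on the model, so controller settings should be tuned per problem.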

  15. Multi-step process for concentrating magnetic particles in waste sludges

    DOEpatents

    Watson, John L.

    1990-01-01

    This invention involves a multi-step, multi-force process for dewatering sludges which have high concentrations of magnetic particles, such as waste sludges generated during steelmaking. This series of processing steps involves (1) mixing a chemical flocculating agent with the sludge; (2) allowing the particles to aggregate under non-turbulent conditions; (3) subjecting the mixture to a magnetic field which will pull the magnetic aggregates in a selected direction, causing them to form a compacted sludge; (4) preferably, decanting the clarified liquid from the compacted sludge; and (5) using filtration to convert the compacted sludge into a cake having a very high solids content. Steps 2 and 3 should be performed simultaneously. This reduces the treatment time and increases the extent of flocculation and the effectiveness of the process. As partially formed aggregates with active flocculating groups are pulled through the mixture by the magnetic field, they will contact other particles and form larger aggregates. This process can increase the solids concentration of steelmaking sludges in an efficient and economic manner, thereby accomplishing either of two goals: (a) it can convert hazardous wastes into economic resources for recycling as furnace feed material, or (b) it can dramatically reduce the volume of waste material which must be disposed.

  16. Multi-step process for concentrating magnetic particles in waste sludges

    DOEpatents

    Watson, J.L.

    1990-07-10

    This invention involves a multi-step, multi-force process for dewatering sludges which have high concentrations of magnetic particles, such as waste sludges generated during steelmaking. This series of processing steps involves (1) mixing a chemical flocculating agent with the sludge; (2) allowing the particles to aggregate under non-turbulent conditions; (3) subjecting the mixture to a magnetic field which will pull the magnetic aggregates in a selected direction, causing them to form a compacted sludge; (4) preferably, decanting the clarified liquid from the compacted sludge; and (5) using filtration to convert the compacted sludge into a cake having a very high solids content. Steps 2 and 3 should be performed simultaneously. This reduces the treatment time and increases the extent of flocculation and the effectiveness of the process. As partially formed aggregates with active flocculating groups are pulled through the mixture by the magnetic field, they will contact other particles and form larger aggregates. This process can increase the solids concentration of steelmaking sludges in an efficient and economic manner, thereby accomplishing either of two goals: (a) it can convert hazardous wastes into economic resources for recycling as furnace feed material, or (b) it can dramatically reduce the volume of waste material which must be disposed. 7 figs.

  17. Comparison of crossover and jab step start techniques for base stealing in baseball.

    PubMed

    Miyanishi, Tomohisa; Endo, So; Nagahara, Ryu

    2017-11-01

    Base stealing is an important tactic for increasing the chance of scoring in baseball. This study aimed to compare the crossover step (CS) and jab step (JS) starts for base stealing start performance and to clarify the differences between CS and JS starts in terms of three-dimensional lower extremity joint kinetics. Twelve male baseball players performed CS and JS starts, during which their motion and the force they applied to the ground were simultaneously recorded using a motion-capture system and two force platforms. The results showed that the normalised average forward external power, the average forward-backward force exerted by the left leg, and the forward velocities of the whole body centre of gravity generated by both legs and the left leg were significantly higher for the JS start than for the CS start. Moreover, the positive work done by hip extension during the left leg push-off was two-times greater for the JS start than the CS start. In conclusion, this study has demonstrated that the jab step start may be the better technique for a base stealing start and that greater positive work produced by left hip extension is probably responsible for producing its larger forward ground reaction force.
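
    The force-platform quantities compared above can be sketched as follows: forward external power is the forward ground reaction force times the forward centre-of-mass velocity, and work is its time integral over push-off. The signals here are synthetic half-sine/ramp stand-ins, not the study's data.

```python
import numpy as np

# Sketch: forward external power and positive work from force-platform data.
def forward_power_and_work(F_y, v_y, dt):
    P = F_y * v_y                  # instantaneous forward power (W)
    work = np.sum(P) * dt          # time integral of power (J)
    return P, work

dt = 0.001                                    # 1 kHz force platform
tt = np.arange(0.0, 0.3, dt)                  # 0.3 s push-off window
F_y = 400.0 * np.sin(np.pi * tt / 0.3)        # half-sine forward GRF (N)
v_y = 3.0 * tt / 0.3                          # COM accelerates to 3 m/s
P, work = forward_power_and_work(F_y, v_y, dt)
print(round(work, 1))                         # ~ 115 J of forward work
```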

  18. 16 CFR 642.3 - Prescreen opt-out notice.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... size that is larger than the type size of the principal text on the same page, but in no event smaller than 12-point type, or if provided by electronic means, then reasonable steps shall be taken to ensure that the type size is larger than the type size of the principal text on the same page; (ii) On the...

  19. 16 CFR 642.3 - Prescreen opt-out notice.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... size that is larger than the type size of the principal text on the same page, but in no event smaller than 12-point type, or if provided by electronic means, then reasonable steps shall be taken to ensure that the type size is larger than the type size of the principal text on the same page; (ii) On the...

  20. 16 CFR 642.3 - Prescreen opt-out notice.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... size that is larger than the type size of the principal text on the same page, but in no event smaller than 12-point type, or if provided by electronic means, then reasonable steps shall be taken to ensure that the type size is larger than the type size of the principal text on the same page; (ii) On the...

  1. Effects of Socket Size on Metrics of Socket Fit in Trans-Tibial Prosthesis Users

    PubMed Central

    Sanders, Joan E; Youngblood, Robert T; Hafner, Brian J; Cagle, John C; McLean, Jake B; Redd, Christian B; Dietrich, Colin R; Ciol, Marcia A; Allyn, Katheryn J

    2017-01-01

    The purpose of this research was to conduct a preliminary effort to identify quantitative metrics to distinguish a good socket from an oversized socket in people with trans-tibial amputation. Results could be used to inform clinical practices related to socket replacement. A cross-over study was conducted on community ambulators (K-level 3 or 4) with good residual limb sensation. Participants were each provided with two sockets, a duplicate of their as-prescribed socket and a modified socket that was enlarged or reduced by 1.8 mm (~6% of the socket volume) based on the fit quality of the as-prescribed socket. The two sockets were termed a larger socket and a smaller socket. Activity was monitored while participants wore each socket for 4 wk. Participants’ gait; self-reported satisfaction, quality of fit, and performance; socket comfort; and morning-to-afternoon limb fluid volume changes were assessed. Visual analysis of plots and estimated effect sizes (measured as mean difference divided by standard deviation) showed largest effects for step time asymmetry, step width asymmetry, anterior and anterior-distal morning-to-afternoon fluid volume change, socket comfort scores, and self-reported measures of utility, satisfaction, and residual limb health. These variables may be viable metrics for early detection of deterioration in socket fit, and should be tested in a larger clinical study. PMID:28373013

  2. Effects of socket size on metrics of socket fit in trans-tibial prosthesis users.

    PubMed

    Sanders, Joan E; Youngblood, Robert T; Hafner, Brian J; Cagle, John C; McLean, Jake B; Redd, Christian B; Dietrich, Colin R; Ciol, Marcia A; Allyn, Katheryn J

    2017-06-01

    The purpose of this research was to conduct a preliminary effort to identify quantitative metrics to distinguish a good socket from an oversized socket in people with trans-tibial amputation. Results could be used to inform clinical practices related to socket replacement. A cross-over study was conducted on community ambulators (K-level 3 or 4) with good residual limb sensation. Participants were each provided with two sockets, a duplicate of their as-prescribed socket and a modified socket that was enlarged or reduced by 1.8mm (∼6% of the socket volume) based on the fit quality of the as-prescribed socket. The two sockets were termed a larger socket and a smaller socket. Activity was monitored while participants wore each socket for 4 weeks. Participants' gait; self-reported satisfaction, quality of fit, and performance; socket comfort; and morning-to-afternoon limb fluid volume changes were assessed. Visual analysis of plots and estimated effect sizes (measured as mean difference divided by standard deviation) showed largest effects for step time asymmetry, step width asymmetry, anterior and anterior-distal morning-to-afternoon fluid volume change, socket comfort score, and self-reported utility. These variables may be viable metrics for early detection of deterioration in socket fit, and should be tested in a larger clinical study. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
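
    The effect-size measure described above (mean difference divided by standard deviation) can be sketched for a paired cross-over design. The data values below are made up for illustration.

```python
import statistics

# Sketch: standardized effect size for paired (larger-socket minus
# smaller-socket) differences: mean difference / SD of the differences.
def effect_size(diffs):
    return statistics.mean(diffs) / statistics.stdev(diffs)

step_time_asym_diff = [0.04, 0.06, 0.05, 0.03, 0.07]  # hypothetical values
print(round(effect_size(step_time_asym_diff), 2))      # ~ 3.16
```

Larger absolute values indicate a metric that separates the two socket sizes more cleanly relative to between-participant variability.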

  3. Sensitivity of finite helical axis parameters to temporally varying realistic motion utilizing an idealized knee model.

    PubMed

    Johnson, T S; Andriacchi, T P; Erdman, A G

    2004-01-01

    Various uses of the screw or helical axis have previously been reported in the literature in an attempt to quantify the complex displacements and coupled rotations of in vivo human knee kinematics. Multiple methods have been used by previous authors to calculate the axis parameters, and it has been theorized that the mathematical stability and accuracy of the finite helical axis (FHA) is highly dependent on experimental variability and rotation increment spacing between axis calculations. Previous research has not addressed the sensitivity of the FHA for true in vivo data collection, as required for gait laboratory analysis. This research presents a controlled series of experiments simulating continuous data collection as utilized in gait analysis to investigate the sensitivity of the three-dimensional finite screw axis parameters of rotation, displacement, orientation and location with regard to time step increment spacing, utilizing two different methods for spatial location. Six-degree-of-freedom motion parameters are measured for an idealized rigid body knee model that is constrained to a planar motion profile for the purposes of error analysis. The kinematic data are collected using a multicamera optoelectronic system combined with an error minimization algorithm known as the point cluster method. Rotation about the screw axis is seen to be repeatable, accurate and time step increment insensitive. Displacement along the axis is highly dependent on time step increment sizing, with smaller rotation angles between calculations producing more accuracy. Orientation of the axis in space is accurate with only a slight filtering effect noticed during motion reversal. Locating the screw axis by a projected point onto the screw axis from the mid-point of the finite displacement is found to be less sensitive to motion reversal than finding the intersection of the axis with a reference plane. 
A filtering effect of the spatial location parameters was noted for larger time step increments during periods of little or no rotation.
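
    The finite helical axis parameters studied above (rotation about the axis, translation along it, axis orientation and location) can be extracted from one relative rigid-body displacement (R, t). This sketch uses the standard screw-axis decomposition, not necessarily the exact formulation of the paper; note that small rotation increments make the axis direction and location ill-conditioned, which is the time-step sensitivity under discussion.

```python
import numpy as np

# Sketch: finite helical axis (FHA) parameters from a relative displacement.
def finite_helical_axis(R, t):
    phi = np.arccos((np.trace(R) - 1.0) / 2.0)        # rotation about the axis
    w = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]])
    n = w / (2.0 * np.sin(phi))                       # unit axis direction
    d = float(n @ t)                                  # translation along the axis
    # Point on the axis: solve (I - R) q = t - d*n in the least-squares sense,
    # since (I - R) is singular along the axis direction.
    q, *_ = np.linalg.lstsq(np.eye(3) - R, t - d * n, rcond=None)
    return phi, d, n, q

# Check: 90 deg rotation about z through the point (1, 0, 0), plus 0.5 along z.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t = (np.eye(3) - Rz) @ np.array([1.0, 0.0, 0.0]) + np.array([0.0, 0.0, 0.5])
phi, d, n, q = finite_helical_axis(Rz, t)
print(round(float(np.degrees(phi)), 1), round(d, 2))   # 90.0 0.5
```

As phi shrinks, the 1/sin(phi) factor and the near-singular (I - R) solve amplify measurement noise, consistent with the reported sensitivity of axis orientation and location to small rotation increments.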

  4. 27. VIEW FROM AFT OF MAIN HOISTING ENGINE WITH HOISTING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    27. VIEW FROM AFT OF MAIN HOISTING ENGINE WITH HOISTING DRUM IN FOREGROUND. NOTE MAIN HOISTING DRUM IS A STEP DRUM, WITH TWO DIAMETERS ON DRUM. WHEN BUCKET IS IN WATER THE CABLE IS ON THE SMALLER STEP, AS PICTURED, GIVING MORE POWER TO THE LINE. THE CABLE STEPS TO LARGER DIAMETER WHEN BUCKET IS OUT OF WATER, WHERE SPEED IS MORE IMPORTANT THAN POWER. SMALLER BACKING DRUM IN BACKGROUND. - Dredge CINCINNATI, Docked on Ohio River at foot of Lighthill Street, Pittsburgh, Allegheny County, PA

  5. Simulation of linear mechanical systems

    NASA Technical Reports Server (NTRS)

    Sirlin, S. W.

    1993-01-01

    A dynamics and controls analyst is typically presented with a structural dynamics model and must perform various input/output tests and design control laws. The required time/frequency simulations need to be done many times as models change and control designs evolve. This paper examines some simple ways that open and closed loop frequency and time domain simulations can be done using the special structure of the system equations usually available. Routines were developed to run under Pro-Matlab in a mixture of the Pro-Matlab interpreter and FORTRAN (using the .mex facility). These routines are often orders of magnitude faster than trying the typical 'brute force' approach of using built-in Pro-Matlab routines such as bode. This makes the analyst's job easier since not only does an individual run take less time, but much larger models can be attacked, often allowing the whole model reduction step to be eliminated.
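
    The "special structure" speedup described above can be sketched for frequency response: in modal (diagonalized) form, a structural model's transfer function is a sum over second-order modes, so each frequency point costs O(n_modes) instead of a dense linear solve. The modal data below are illustrative, not from any specific structure.

```python
import numpy as np

# Sketch: frequency response of a second-order modal structural model,
# H(j*w) = sum_k phi_in_k * phi_out_k / (w_k^2 - w^2 + 2j*zeta_k*w_k*w).
def modal_freq_response(omega, wn, zeta, phi_in, phi_out):
    H = np.zeros(omega.shape, dtype=complex)
    for wk, zk, bi, bo in zip(wn, zeta, phi_in, phi_out):
        H += bi * bo / (wk**2 - omega**2 + 2j * zk * wk * omega)
    return H

omega = np.linspace(0.1, 20.0, 500)
wn = [5.0, 12.0]        # modal frequencies (rad/s)
zeta = [0.02, 0.02]     # modal damping ratios
phi_in = [1.0, 0.8]     # input mode-shape coefficients
phi_out = [1.0, -0.5]   # output mode-shape coefficients
H = modal_freq_response(omega, wn, zeta, phi_in, phi_out)
print(round(float(omega[np.argmax(np.abs(H))]), 1))   # peak near the first mode
```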

  6. Representation of Nucleation Mode Microphysics in a Global Aerosol Model with Sectional Microphysics

    NASA Technical Reports Server (NTRS)

    Lee, Y. H.; Pierce, J. R.; Adams, P. J.

    2013-01-01

    In models, nucleation mode (1 nm

  7. Validation of thigh-based accelerometer estimates of postural allocation in 5-12 year-olds.

    PubMed

    van Loo, Christiana M T; Okely, Anthony D; Batterham, Marijka J; Hinkley, Trina; Ekelund, Ulf; Brage, Søren; Reilly, John J; Jones, Rachel A; Janssen, Xanne; Cliff, Dylan P

    2017-03-01

    To validate activPAL3™ (AP3) for classifying postural allocation, estimating time spent in postures and examining the number of breaks in sedentary behaviour (SB) in 5-12 year-olds. Laboratory-based validation study. Fifty-seven children completed 15 sedentary, light- and moderate-to-vigorous intensity activities. Direct observation (DO) was used as the criterion measure. The accuracy of AP3 was examined using a confusion matrix, equivalence testing, Bland-Altman procedures and a paired t-test for 5-8y and 9-12y. Sensitivity of AP3 was 86.8%, 82.5% and 85.3% for sitting/lying, standing, and stepping, respectively, in 5-8y and 95.3%, 81.5% and 85.1%, respectively, in 9-12y. Time estimates of AP3 were equivalent to DO for sitting/lying in 9-12y and stepping in all ages, but not for sitting/lying in 5-12y and standing in all ages. Underestimation of sitting/lying time was smaller in 9-12y (1.4%, limits of agreement [LoA]: -13.8 to 11.1%) compared to 5-8y (12.6%, LoA: -39.8 to 14.7%). Underestimation for stepping time was small (5-8y: 6.5%, LoA: -18.3 to 5.3%; 9-12y: 7.6%, LoA: -16.8 to 1.6%). Considerable overestimation was found for standing (5-8y: 36.8%, LoA: -16.3 to 89.8%; 9-12y: 19.3%, LoA: -1.6 to 36.9%). SB breaks were significantly overestimated (5-8y: 53.2%, 9-12y: 28.3%, p<0.001). AP3 showed acceptable accuracy for classifying postures, however estimates of time spent standing were consistently overestimated and individual error was considerable. Estimates of sitting/lying were more accurate for 9-12y. Stepping time was accurately estimated for all ages. SB breaks were significantly overestimated, although the absolute difference was larger in 5-8y. Surveillance applications of AP3 would be acceptable, however, individual level applications might be less accurate. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
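
    The Bland-Altman analysis used above can be sketched as follows: bias is the mean device-minus-criterion difference, and the 95% limits of agreement are bias +/- 1.96 SD of the differences. The values below are made up for illustration, not the study's data.

```python
import statistics

# Sketch: Bland-Altman bias and 95% limits of agreement (LoA).
def bland_altman(device, criterion):
    diffs = [d - c for d, c in zip(device, criterion)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

ap3_standing = [40.0, 35.0, 50.0, 42.0]   # hypothetical % of time standing (AP3)
do_standing = [30.0, 28.0, 38.0, 33.0]    # hypothetical criterion (direct observation)
bias, (lo, hi) = bland_altman(ap3_standing, do_standing)
print(round(bias, 1))                     # positive bias: device overestimates here
```

Wide limits of agreement with a modest bias are exactly the pattern reported for standing time: acceptable on average, but with considerable individual-level error.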

  8. An Examination of Seismicity Linking the Solomon Islands and Vanuatu Subduction Zones

    NASA Astrophysics Data System (ADS)

    Neely, J. S.; Furlong, K. P.

    2015-12-01

    The Solomon Islands-Vanuatu composite subduction zone represents a tectonically complex region along the Pacific-Australia plate boundary in the southwest Pacific Ocean. Here the Australia plate subducts under the Pacific plate in two segments: the South Solomon Trench and the Vanuatu Trench. The two subducting sections are offset by a 200 km long transform fault - the San Cristobal Trough (SCT) - which acts as a Subduction-Transform Edge Propagator (STEP) fault. The subducting segments have experienced much more frequent and larger seismic events than the STEP fault. The northern Vanuatu trench hosted a M8.0 earthquake in 2013. In 2014, at the juncture of the western terminus of the SCT and the southern South Solomon Trench, two earthquakes (M7.4 and M7.6) occurred with disparate mechanisms (dominantly thrust and strike-slip, respectively), which we interpret to indicate the tearing of the Australia plate as its northern section subducts and southern section translates along the SCT. During the 2013-2014 timeframe, little seismic activity occurred along the STEP fault. However, in May 2015, three M6.8-6.9 strike-slip events occurred in rapid succession as the STEP fault ruptured east to west. These recent events share similarities with a 1993 strike-slip STEP sequence on the SCT. Analysis of the 1993 and 2015 STEP earthquake sequences provides constraints on the plate boundary geometry of this major transform fault. Preliminary research suggests that plate motion along the STEP fault is partitioned between larger east-west oriented strike-slip events and smaller north-south thrust earthquakes. Additionally, the differences in seismic activity between the subducting slabs and the STEP fault can provide insights into how stress is transferred along the plate boundary and the mechanisms by which that stress is released.

  9. Solving the Coupled System Improves Computational Efficiency of the Bidomain Equations

    PubMed Central

    Southern, James A.; Plank, Gernot; Vigmond, Edward J.; Whiteley, Jonathan P.

    2017-01-01

    The bidomain equations are frequently used to model the propagation of cardiac action potentials across cardiac tissue. At the whole organ level the size of the computational mesh required makes their solution a significant computational challenge. As the accuracy of the numerical solution cannot be compromised, efficiency of the solution technique is important to ensure that the results of the simulation can be obtained in a reasonable time whilst still encapsulating the complexities of the system. In an attempt to increase efficiency of the solver, the bidomain equations are often decoupled into one parabolic equation that is computationally very cheap to solve and an elliptic equation that is much more expensive to solve. In this study the performance of this uncoupled solution method is compared with an alternative strategy in which the bidomain equations are solved as a coupled system. This seems counter-intuitive as the alternative method requires the solution of a much larger linear system at each time step. However, in tests on two 3-D rabbit ventricle benchmarks it is shown that the coupled method is up to 80% faster than the conventional uncoupled method — and that parallel performance is better for the larger coupled problem. PMID:19457741
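
    The two solution strategies compared above can be illustrated on a toy 2x2-block linear system standing in for the discretized bidomain equations (a v_m block and a phi_e block). The "coupled" approach solves the full block system once per time step; the "uncoupled" approach alternates block solves, a Gauss-Seidel-style splitting. The matrices here are random diagonally dominant stand-ins, not a cardiac discretization.

```python
import numpy as np

# Toy comparison: one coupled 2n x 2n solve vs. an alternating block splitting.
rng = np.random.default_rng(0)
n = 50
App = np.eye(n) * 4 + rng.standard_normal((n, n)) * 0.1   # "parabolic" block
Aee = np.eye(n) * 4 + rng.standard_normal((n, n)) * 0.1   # "elliptic" block
Ape = rng.standard_normal((n, n)) * 0.1                   # coupling blocks
Aep = rng.standard_normal((n, n)) * 0.1
b = rng.standard_normal(2 * n)

# Coupled: one solve of the larger 2n x 2n system.
A = np.block([[App, Ape], [Aep, Aee]])
x_coupled = np.linalg.solve(A, b)

# Uncoupled: alternate v-solve and phi-solve until the splitting converges.
v, phi = np.zeros(n), np.zeros(n)
for _ in range(200):
    v = np.linalg.solve(App, b[:n] - Ape @ phi)
    phi = np.linalg.solve(Aee, b[n:] - Aep @ v)

print(np.allclose(np.concatenate([v, phi]), x_coupled))
```

The paper's point is that with a good preconditioner the single larger coupled solve can beat the repeated smaller solves, despite its size.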

  10. Modeling residence-time distribution in horizontal screw hydrolysis reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sievers, David A.; Stickel, Jonathan J.

    The dilute-acid thermochemical hydrolysis step used in the production of liquid fuels from lignocellulosic biomass requires precise residence-time control to achieve high monomeric sugar yields. Difficulty has been encountered reproducing residence times and yields when small batch reaction conditions are scaled up to larger pilot-scale horizontal auger-tube type continuous reactors. A commonly used naive model estimated residence times of 6.2-16.7 min, but measured mean times were actually 1.4-2.2 times the estimates. Here, this study investigated how reactor residence-time distribution (RTD) is affected by reactor characteristics and operational conditions, and developed a method to accurately predict the RTD based on key parameters. Screw speed, reactor physical dimensions, throughput rate, and process material density were identified as major factors affecting both the mean and standard deviation of RTDs. The general shape of the RTDs was consistent, with a constant value determined for skewness. The Peclet number quantified reactor plug-flow performance, which ranged between 20 and 357.
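
    The link between an RTD's first two moments and plug-flow performance can be sketched numerically. The snippet below is an illustration, not the paper's model: it assumes the standard closed-vessel axial-dispersion relation between the Peclet number and the dimensionless RTD variance, and inverts it by bisection to recover Pe from a measured mean and variance.

```python
import math

def dimensionless_variance(pe):
    # Closed-vessel axial-dispersion relation between the Peclet number and
    # the dimensionless RTD variance: sigma^2 = 2/Pe - (2/Pe^2)(1 - e^-Pe).
    return 2.0 / pe - 2.0 / pe**2 * (1.0 - math.exp(-pe))

def peclet_from_rtd(mean_t, var_t, lo=1e-3, hi=1e4, tol=1e-10):
    # Invert the (monotonically decreasing) relation by bisection, using
    # sigma_theta^2 = var_t / mean_t^2 from the measured RTD moments.
    target = var_t / mean_t**2
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if dimensionless_variance(mid) > target:
            lo = mid           # variance too high -> need a larger Pe
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check with an assumed "true" Peclet number of 60:
pe_true = 60.0
mean_t = 10.0                  # minutes
var_t = dimensionless_variance(pe_true) * mean_t**2
pe_est = peclet_from_rtd(mean_t, var_t)
```

    Larger Pe means behavior closer to ideal plug flow; the 20-357 range reported above spans moderately dispersed to nearly plug-flow operation.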

  11. Modeling residence-time distribution in horizontal screw hydrolysis reactors

    DOE PAGES

    Sievers, David A.; Stickel, Jonathan J.

    2017-10-12

    The dilute-acid thermochemical hydrolysis step used in the production of liquid fuels from lignocellulosic biomass requires precise residence-time control to achieve high monomeric sugar yields. Difficulty has been encountered reproducing residence times and yields when small batch reaction conditions are scaled up to larger pilot-scale horizontal auger-tube type continuous reactors. A commonly used naive model estimated residence times of 6.2-16.7 min, but measured mean times were actually 1.4-2.2 times the estimates. Here, this study investigated how reactor residence-time distribution (RTD) is affected by reactor characteristics and operational conditions, and developed a method to accurately predict the RTD based on key parameters. Screw speed, reactor physical dimensions, throughput rate, and process material density were identified as major factors affecting both the mean and standard deviation of RTDs. The general shape of the RTDs was consistent, with a constant value determined for skewness. The Peclet number quantified reactor plug-flow performance, which ranged between 20 and 357.

  12. Gibbs-Thomson Law for Singular Step Segments: Thermodynamics Versus Kinetics

    NASA Technical Reports Server (NTRS)

    Chernov, A. A.

    2003-01-01

    Classical Burton-Cabrera-Frank theory presumes that thermal fluctuations are so fast that at any time the density of kinks on a step is comparable with the reciprocal intermolecular distance, so that the step rate is approximately isotropic within the crystal plane. Such azimuthal isotropy is, however, often not the case: kink density may be much lower. In particular, it was recently found on the (010) face of orthorhombic lysozyme that the interkink distance may exceed 500-600 intermolecular distances. Under such conditions, the Gibbs-Thomson law (GTL) may not be applicable: on a straight step segment between two corners, communication between the corners occurs exclusively by kink exchange. Annihilation between kinks of opposite sign generated at the corners results in the gain in step energy entering the GTL. If the step segment length l ≫ D/v, where D and v are the kink diffusivity and propagation rate, respectively, the opposite kinks have practically no chance to annihilate and the GTL is not applicable. The opposite condition of GTL applicability, l ≪ D/v, is equivalent to the requirement that the relative supersaturation Δμ/kT ≪ α/l, where α is the molecular size. Thus, the GTL may be applied to a segment of 10³α ≈ 3 × 10⁻⁵ cm ≈ 0.3 µm only if the supersaturation is less than 0.1%, while practically used driving forces for crystallization are much larger. Relationships alternative to the GTL for different, but low, kink densities are discussed. They confirm experimental evidence that the Burton-Cabrera-Frank theory of spiral growth gives growth rates twice as low as the observed figures. Also, application of the GTL results in unrealistic step energies, while the suggested kinetic laws give reasonable figures.
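
    The two applicability conditions quoted above can be checked with a few lines. This is a plain numerical restatement of the abstract's criteria; the kink diffusivity and velocity values below are placeholders, not measured quantities:

```python
def gtl_applicability(l, alpha, D, v):
    # Criteria from the abstract (both must be << 1 for the GTL to apply):
    #   kink exchange:    l << D / v
    #   supersaturation:  delta_mu / kT << alpha / l
    length_ratio = l / (D / v)     # should be << 1
    supersat_limit = alpha / l     # upper bound on delta_mu / kT
    return length_ratio, supersat_limit

alpha = 3.0e-8                     # molecular size, cm (~3 angstrom)
l = 1.0e3 * alpha                  # segment of 10^3 alpha, i.e. ~0.3 micron
ratio, limit = gtl_applicability(l, alpha, D=1e-6, v=1e-5)
```

    With these numbers the supersaturation bound comes out as 10⁻³, i.e. the 0.1% figure in the abstract.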

  13. Dancing Lights: Creating the Aurora Story

    NASA Astrophysics Data System (ADS)

    Wood, E. L.; Cobabe-Ammann, E. A.

    2009-12-01

    Science tells a story about our world, our existence, our history, and the larger environment our planet occupies. Bearing this in mind, we created a series of lessons for 3rd-5th grades using a cross-disciplinary approach to teaching about the aurora by incorporating stories, photos, movies, and geography into the science in order to paint a broad picture and answer the question, “why do we care?” The fundamental backbone of the program is literacy. Students write and illustrate fiction and non-fiction work, poetry, and brochures that solidify both language arts skills and science content. In a time when elementary teachers relegate science to less than one hour per week, we have developed a novel science program that can be easily integrated with other topics during the typical school day to increase the amount of science taught in a school year. We are inspiring students to take an interest in the natural world with this program, a stepping-stone for larger things.

  14. Delayed Learning Effects with Erroneous Examples: A Study of Learning Decimals with a Web-Based Tutor

    ERIC Educational Resources Information Center

    McLaren, Bruce M.; Adams, Deanne M.; Mayer, Richard E.

    2015-01-01

    Erroneous examples--step-by-step problem solutions with one or more errors for students to find and fix--hold great potential to help students learn. In this study, which is a replication of a prior study (Adams et al. 2014), but with a much larger population (390 vs. 208), middle school students learned about decimals either by working with…

  15. Trajectory errors of different numerical integration schemes diagnosed with the MPTRAC advection module driven by ECMWF operational analyses

    NASA Astrophysics Data System (ADS)

    Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars

    2018-02-01

    The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. 
The selection of the integration scheme and the appropriate time step should take into account the typical altitude ranges as well as the total length of the simulations to achieve the most efficient simulations. In summary, we recommend the third-order Runge-Kutta method with a time step of 170 s or the midpoint scheme with a time step of 100 s for efficient simulations of up to 10 days of simulation time for the specific ECMWF high-resolution data set considered in this study. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.
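
    The grouping of schemes by numerical order can be reproduced on a scalar test problem. The sketch below is not MPTRAC: it integrates y' = -y with the midpoint (second-order) and classic fourth-order Runge-Kutta schemes and estimates each scheme's observed order of convergence from two step sizes.

```python
import math

def integrate(f, y0, t_end, h, scheme):
    # Fixed-step integration of y' = f(t, y) from t = 0 to t_end.
    y, t = y0, 0.0
    for _ in range(int(round(t_end / h))):
        if scheme == "midpoint":       # 2nd-order Runge-Kutta
            k1 = f(t, y)
            y = y + h * f(t + h / 2, y + h / 2 * k1)
        elif scheme == "rk4":          # classic 4th-order Runge-Kutta
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

f = lambda t, y: -y                    # test problem with exact solution e^-t
exact = math.exp(-1.0)
err = {s: {h: abs(integrate(f, 1.0, 1.0, h, s) - exact) for h in (0.1, 0.05)}
       for s in ("midpoint", "rk4")}
# Observed order: halving h should cut the error by 2^order.
order = {s: math.log2(err[s][0.1] / err[s][0.05]) for s in err}
```

    The higher-order scheme tolerates a much larger time step for the same error, which is why the recommended stratospheric time steps above are several times larger than the tropospheric ones.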

  16. Reduction of Simulation Times for High-Q Structures using the Resonance Equation

    DOE PAGES

    Hall, Thomas Wesley; Bandaru, Prabhakar R.; Rees, Daniel Earl

    2015-11-17

    Simulating the steady-state performance of high quality factor (Q) resonant RF structures is computationally difficult for structures more than a few wavelengths in size because of the long times (on the order of ~0.1 ms) required to achieve steady state in comparison with the maximum time step that can be used in the simulation (typically on the order of ~1 ps). This paper presents analytical and computational approaches that can be used to accelerate the simulation of the steady-state performance of such structures. The basis of the proposed approach is the use of a larger-amplitude input signal at the beginning of the simulation to achieve steady state earlier than with the nominal input signal. The methodology for finding the necessary input signal is discussed in detail, and the validity of the approach is evaluated.
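
    The idea can be illustrated with a first-order cavity-envelope model (an assumption here, not the paper's full field simulation): under a constant drive of amplitude k relative to nominal, the stored-field envelope follows a(t) = k (1 - e^(-t/tau)), so overdriving the input early reaches the nominal steady-state level much sooner.

```python
import math

def ring_up_time(overdrive, target=0.99, tau=1.0):
    # Time (in units of the cavity fill time tau) for the envelope
    # a(t) = k (1 - e^(-t/tau)) to reach `target` of the nominal (k = 1)
    # steady state when driven with amplitude k = overdrive.
    if overdrive <= target:
        return float("inf")            # never reaches the target level
    return tau * math.log(overdrive / (overdrive - target))

t_nominal = ring_up_time(1.0)          # nominal drive: ~4.6 tau to 99 %
t_boosted = ring_up_time(2.0)          # doubled drive at the start
```

    Switching back to the nominal amplitude once the target level is reached then holds the cavity at steady state; finding that switching signal for a realistic structure is what the paper works out in detail.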

  17. A comparative study of one-step and two-step approaches for MAPbI3 perovskite layer and its influence on the performance of mesoscopic perovskite solar cell

    NASA Astrophysics Data System (ADS)

    Wang, Minhuan; Feng, Yulin; Bian, Jiming; Liu, Hongzhu; Shi, Yantao

    2018-01-01

    Mesoscopic perovskite solar cells (M-PSCs) were synthesized with MAPbI3 perovskite layers as light harvesters, grown by one-step and two-step solution processes, respectively. A comparative study was performed through the quantitative correlation of the resulting device performance and the crystalline quality of the perovskite layers. Compared with the one-step counterpart, a pronounced improvement in the steady-state power conversion efficiency (PCE) of 56.86% was achieved with the two-step process, which mainly resulted from the significant enhancement in fill factor (FF) from 48% to 77% without sacrificing the open-circuit voltage (Voc) or short-circuit current (Jsc). The enhanced FF was attributed to reduced non-radiative recombination channels due to the better crystalline quality and larger grain size of the two-step processed perovskite layer. Moreover, the superiority of the two-step over the one-step process was demonstrated with rather good reproducibility.
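
    The arithmetic linking fill factor to efficiency is straightforward. The sketch below uses hypothetical Voc and Jsc values (the abstract does not give them) to show that raising FF from 48% to 77% alone yields a relative PCE gain of about 60%, the same order as the reported 56.86%:

```python
def pce(voc, jsc, ff, p_in=100.0):
    # Power conversion efficiency (%) from open-circuit voltage (V),
    # short-circuit current density (mA/cm^2), fill factor (fraction)
    # and incident power density (mW/cm^2; ~100 for AM1.5G).
    return voc * jsc * ff / p_in * 100.0

# Hypothetical cell held at Voc = 1.0 V and Jsc = 20 mA/cm^2:
pce_one_step = pce(1.0, 20.0, 0.48)
pce_two_step = pce(1.0, 20.0, 0.77)
relative_gain = pce_two_step / pce_one_step - 1.0
```

    The small gap between the 60% FF-only figure and the reported 56.86% would be absorbed by the (unstated) minor shifts in Voc and Jsc.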

  18. Building high-quality assay libraries for targeted analysis of SWATH MS data.

    PubMed

    Schubert, Olga T; Gillet, Ludovic C; Collins, Ben C; Navarro, Pedro; Rosenberger, George; Wolski, Witold E; Lam, Henry; Amodei, Dario; Mallick, Parag; MacLean, Brendan; Aebersold, Ruedi

    2015-03-01

    Targeted proteomics by selected/multiple reaction monitoring (S/MRM) or, on a larger scale, by SWATH (sequential window acquisition of all theoretical spectra) MS (mass spectrometry) typically relies on spectral reference libraries for peptide identification. Quality and coverage of these libraries are therefore of crucial importance for the performance of the methods. Here we present a detailed protocol that has been successfully used to build high-quality, extensive reference libraries supporting targeted proteomics by SWATH MS. We describe each step of the process, including data acquisition by discovery proteomics, assertion of peptide-spectrum matches (PSMs), generation of consensus spectra and compilation of MS coordinates that uniquely define each targeted peptide. Crucial steps such as false discovery rate (FDR) control, retention time normalization and handling of post-translationally modified peptides are detailed. Finally, we show how to use the library to extract SWATH data with the open-source software Skyline. The protocol takes 2-3 d to complete, depending on the extent of the library and the computational resources available.
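
    One of the crucial steps named above, retention-time normalization, is commonly a linear fit against shared anchor peptides. The values below are invented for illustration; real workflows (e.g. iRT-based ones) use spiked-in or endogenous reference peptides:

```python
import numpy as np

# Hypothetical anchor peptides: library (reference) retention times versus
# the times measured in the current run; a least-squares linear fit maps
# run times onto the library scale.
ref_rt = np.array([10.0, 25.0, 40.0, 55.0, 80.0])    # library scale, min
run_rt = np.array([12.1, 27.9, 44.2, 60.1, 86.0])    # run drifts and stretches

slope, intercept = np.polyfit(run_rt, ref_rt, 1)

def normalize_rt(t):
    # Map a measured retention time onto the reference scale.
    return slope * t + intercept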

  19. Full-Process Computer Model of Magnetron Sputter, Part I: Test Existing State-of-Art Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walton, C C; Gilmer, G H; Wemhoff, A P

    2007-09-26

    This work is part of a larger project to develop a modeling capability for magnetron sputter deposition. The process is divided into four steps: plasma transport, target sputter, neutral gas and sputtered atom transport, and film growth, shown schematically in Fig. 1. Each of these is simulated separately in this Part 1 of the project, which is jointly funded between CMLS and Engineering. The Engineering portion is the plasma modeling, in step 1. The plasma modeling was performed using the Object-Oriented Particle-In-Cell code (OOPIC) from UC Berkeley [1]. Figure 2 shows the electron density in the simulated region, using magnetic field strength input from experiments by Bohlmark [2], where a scale of 1% is used. Figures 3 and 4 depict the magnetic field components that were generated using two-dimensional linear interpolation of Bohlmark's experimental data. The goal of the overall modeling tool is to understand, and later predict, relationships between parameters of film deposition we can change (such as gas pressure, gun voltage, and target-substrate distance) and key properties of the results (such as film stress, density, and stoichiometry). The simulation must use existing codes, either open-source or low-cost, not develop new codes. In Part 1 (FY07) we identified and tested the best available code for each process step, then determined whether it can cover the size and time scales we need in reasonable computation times. We also had to determine whether the process steps are sufficiently decoupled that they can be treated separately, and identify any research-level issues preventing practical use of these codes. Part 2 will consider whether the codes can be (or need to be) made to talk to each other and integrated into a whole.
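
    The decision of whether the four process steps can be treated separately can be framed as a time-scale comparison. The sketch below is only a schematic of that reasoning, with invented characteristic times for each step; it is not derived from the project's simulations:

```python
# Invented, order-of-magnitude characteristic times (seconds) for each of
# the four sputter-deposition steps named above.
time_scales = {
    "plasma_transport": 1e-7,
    "target_sputter": 1e-12,
    "gas_and_atom_transport": 1e-4,
    "film_growth": 1e0,
}

def decoupled_pairs(scales, ratio=100.0):
    # Two steps are treated as decoupled when their characteristic times
    # differ by at least `ratio` (a common rule of thumb, assumed here).
    names = list(scales)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            fast, slow = sorted((scales[a], scales[b]))
            if slow / fast >= ratio:
                pairs.append((a, b))
    return pairs

separable = decoupled_pairs(time_scales)
```

    With these placeholder numbers every pair separates cleanly, which mirrors the project's working assumption that the steps can be simulated one at a time and chained.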

  20. Application of an Evolution Strategy in Planetary Ephemeris Optimization

    NASA Astrophysics Data System (ADS)

    Mai, E.

    2016-12-01

    Classical planetary ephemeris construction comprises three major steps, which are performed iteratively: simultaneous numerical integration of the coupled equations of motion of a multi-body system (propagator step), reduction of thousands of observations (reduction step), and optimization of various selected model parameters (adjustment step). This traditional approach is challenged by ongoing refinements in force modeling, e.g. the inclusion of many more significant minor bodies, and by an ever-growing number of planetary observations, e.g. the vast amount of spacecraft tracking data. To master the high computational burden and to circumvent the need for inversion of huge normal equation matrices, we propose an alternative ephemeris construction method. The main idea is to solve the overall optimization problem by a straightforward direct evaluation of the whole set of mathematical formulas involved, rather than to solve it as an inverse problem with all its tacit mathematical assumptions and numerical difficulties. We replace the usual gradient search by a stochastic search, namely an evolution strategy, which is also well suited to exploiting parallel computing capabilities. Furthermore, this new approach enables multi-criteria optimization and time-varying optima. This issue will become important in the future once ephemeris construction is just one part of even larger optimization problems, e.g. the combined and consistent determination of the physical state (orbit, size, shape, rotation, gravity,…) of celestial bodies (planets, satellites, asteroids, or comets), and if one seeks near real-time solutions. Here we outline the general idea and discuss first results. As an example, we present a simultaneous optimization of highly correlated asteroidal ring model parameters (total mass and heliocentric radius), based on simulations.
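
    A minimal sketch of the stochastic search in question, assuming a toy two-parameter misfit in place of the real ephemeris residuals (the "true" ring mass and radius values below are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def misfit(p):
    # Stand-in objective: squared distance to hypothetical "true" ring
    # parameters (total mass, heliocentric radius) = (3.0, 1.5).
    return (p[0] - 3.0) ** 2 + (p[1] - 1.5) ** 2

def evolution_strategy(n_gen=200, mu=5, lam=20, sigma=0.5):
    # Simple (mu, lambda) evolution strategy with a fixed annealing
    # schedule for the mutation step size.
    parents = rng.normal(0.0, 1.0, size=(mu, 2))
    for _ in range(n_gen):
        # Each offspring mutates a randomly chosen parent.
        idx = rng.integers(0, mu, size=lam)
        offspring = parents[idx] + sigma * rng.normal(size=(lam, 2))
        scores = np.array([misfit(o) for o in offspring])
        parents = offspring[np.argsort(scores)[:mu]]   # comma selection
        sigma *= 0.98                                  # slow annealing
    return parents[np.argsort([misfit(p) for p in parents])[0]]

best = evolution_strategy()
```

    Each generation's offspring evaluations are independent, which is what makes the strategy embarrassingly parallel; production variants would adapt the step size from the search itself (e.g. CMA-ES) rather than on a fixed schedule.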

  1. Macromolecular Crowding Modulates Actomyosin Kinetics.

    PubMed

    Ge, Jinghua; Bouriyaphone, Sherry D; Serebrennikova, Tamara A; Astashkin, Andrei V; Nesmelov, Yuri E

    2016-07-12

    Actomyosin kinetics is usually studied in dilute solutions, which do not reflect conditions in the cytoplasm. In cells, myosin and actin work in a dense macromolecular environment. High concentrations of macromolecules dramatically reduce the amount of free space available for all solutes, which results in an effective increase of the solutes' chemical potential and protein stabilization. Moreover, in a crowded solution, the chemical potential depends on the size of the solute, with larger molecules experiencing a larger excluded volume than smaller ones. Therefore, since myosin interacts with two ligands of different sizes (actin and ATP), macromolecular crowding can modulate the kinetics of individual steps of the actomyosin ATPase cycle. To emulate the effect of crowding in cells, we studied actomyosin cycle reactions in the presence of a high-molecular-weight polymer, Ficoll70. We observed an increase in the maximum velocity of the actomyosin ATPase cycle, and our transient-kinetics experiments showed that virtually all individual steps of the actomyosin cycle were affected by the addition of Ficoll70. The observed effects of macromolecular crowding on the myosin-ligand interaction cannot be explained by the increase of a solute's chemical potential. A time-resolved Förster resonance energy transfer experiment confirmed that the myosin head assumes a more compact conformation in the presence of Ficoll70 than in a dilute solution. We conclude that the crowding-induced myosin conformational change plays a major role in the changed kinetics of actomyosin ATPase. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  2. Procedures for the GMP-Compliant Production and Quality Control of [18F]PSMA-1007: A Next Generation Radiofluorinated Tracer for the Detection of Prostate Cancer

    PubMed Central

    Cardinale, Jens; Martin, René; Remde, Yvonne; Schäfer, Martin; Hienzsch, Antje; Hübner, Sandra; Zerges, Anna-Maria; Marx, Heike; Hesse, Ronny; Weber, Klaus; Smits, Rene; Hoepping, Alexander; Müller, Marco; Neels, Oliver C.; Kopka, Klaus

    2017-01-01

    Radiolabeled tracers targeting the prostate-specific membrane antigen (PSMA) have become important radiopharmaceuticals for the PET imaging of prostate cancer. In this connection, we recently developed the fluorine-18-labelled PSMA ligand [18F]PSMA-1007 as the next-generation radiofluorinated Glu-ureido PSMA inhibitor after [18F]DCFPyL and [18F]DCFBC. Because radiosynthesis has so far suffered from rather poor yields, novel procedures for the automated radiosynthesis of [18F]PSMA-1007 have been developed. We herein report on both the two-step and the novel one-step procedures, which have been performed on different commonly used radiosynthesisers. Using the novel one-step procedure, [18F]PSMA-1007 was produced in good radiochemical yields ranging from 25 to 80% and synthesis times of less than 55 min. Furthermore, upscaling to product activities of up to 50 GBq per batch was successfully conducted. All batches passed quality control according to European Pharmacopoeia standards. We were therefore able to disclose a new, simple and, at the same time, high-yielding production pathway for the next-generation PSMA radioligand [18F]PSMA-1007. The radiosynthesis turned out to be as easily realised as the well-known [18F]FDG synthesis and is thus transferable to all currently available radiosynthesisers. Using the new procedures, the clinical daily routine can be sustainably supported in-house even in larger hospitals by a single production batch. PMID:28953234

  3. Local anesthesia for inguinal hernia repair step-by-step procedure.

    PubMed Central

    Amid, P K; Shulman, A G; Lichtenstein, I L

    1994-01-01

    OBJECTIVE. The authors introduce a simple six-step infiltration technique that results in satisfactory local anesthesia and prolonged postoperative analgesia, requiring a maximum of 30 to 40 mL of local anesthetic solution. SUMMARY BACKGROUND DATA. For the last 20 years, more than 12,000 groin hernia repairs have been performed under local anesthesia at the Lichtenstein Hernia Institute. Initially, field block was the means of achieving local anesthesia. During the last 5 years, a simple infiltration technique has been used because the field block was more time consuming and required a larger volume of the local anesthetic solution. Furthermore, because of the blind nature of the procedure, it did not always result in satisfactory anesthesia and, at times, accidental needle puncture of the ilioinguinal nerve resulted in prolonged postoperative pain, burning, or electric shock sensation within the field of the ilioinguinal nerve innervation. METHODS. More than 12,000 patients underwent operations in a private practice setting in general hospitals. RESULTS. For 2 decades, more than 12,000 adult patients with reducible groin hernias satisfactorily underwent operations under local anesthesia without complications. CONCLUSIONS. The preferred choice of anesthesia for all reducible adult inguinal hernia repairs is local. It is safe, simple, effective, and economical, without postanesthesia side effects. Furthermore, local anesthesia administered before the incision produces longer postoperative analgesia because local infiltration, theoretically, inhibits the build-up of local nociceptive molecules, resulting in better pain control in the postoperative period. PMID:7986138

  4. A biomechanical analysis of common lunge tasks in badminton.

    PubMed

    Kuntze, Gregor; Mansfield, Neil; Sellers, William

    2010-01-01

    The lunge is regularly used in badminton and is recognized for the high physical demands it places on the lower limbs. Despite its common occurrence, little information is available on the biomechanics of lunging in the singles game. A video-based pilot study confirmed the relatively high frequency of lunging, approximately 15% of all movements, in competitive singles games. The biomechanics and performance characteristics of three badminton-specific lunge tasks (kick, step-in, and hop lunge) were investigated in the laboratory with nine experienced male badminton players. Ground reaction forces and kinematic data were collected and lower limb joint kinetics calculated using an inverse dynamics approach. The step-in lunge was characterized by significantly lower mean horizontal reaction force at drive-off and lower mean peak hip joint power than the kick lunge. The hop lunge resulted in significantly larger mean reaction forces during loading and drive-off phases, as well as significantly larger mean peak ankle joint moments and knee and ankle joint powers than the kick or step-in lunges. These findings indicate that, within the setting of this investigation, the step-in lunge may be beneficial for reducing the muscular demands of lunge recovery and that the hop lunge allows for higher positive power output, thereby presenting an efficient lunging method.

  5. Step styles of pedestrians at different densities

    NASA Astrophysics Data System (ADS)

    Wang, Jiayue; Weng, Wenguo; Boltes, Maik; Zhang, Jun; Tordeux, Antoine; Ziemer, Verena

    2018-02-01

    Stepping locomotion is the basis of human movement. The investigation of stepping locomotion and its affecting factors is necessary for a more realistic knowledge of human movement, which is usually referred to as walking with equal step lengths for the right and left leg. To study pedestrians’ stepping locomotion, a set of single-file movement experiments involving 39 participants of the same age walking on a highly curved oval course is conducted. The microscopic characteristics of the pedestrians including 1D Voronoi density, speed, and step length are calculated based on a projected coordinate. The influence of the projection lines with different radii on the measurement of these quantities is investigated. The step lengths from the straight and curved parts are compared using the Kolmogorov-Smirnov test. During the experiments, six different step styles are observed and the proportions of different step styles change with the density. At low density, the main step style is the stable-large step style and the step lengths of one pedestrian are almost constant. At high density, some pedestrians adjust and decrease their step lengths. Some pedestrians take relatively smaller and larger steps alternately to adapt to limited space.
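
    The 1D Voronoi density used above assigns each pedestrian a cell reaching halfway to each neighbour along the (projected) line of motion, with density the inverse of the cell length; a minimal sketch with made-up positions:

```python
def voronoi_density_1d(positions):
    # 1D Voronoi density for pedestrians in single file: each interior
    # pedestrian's cell spans half the gap to each neighbour, and the
    # local density is 1 / cell length.
    xs = sorted(positions)
    densities = []
    for i in range(1, len(xs) - 1):
        cell = (xs[i + 1] - xs[i - 1]) / 2.0
        densities.append(1.0 / cell)
    return densities   # interior pedestrians only

# Equally spaced pedestrians 1 m apart give a uniform density of 1 /m:
uniform = voronoi_density_1d([0.0, 1.0, 2.0, 3.0, 4.0])
```

    On the curved oval course the positions would first be projected onto a reference line before this calculation; the projection radius matters, which is why its influence is examined in the study.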

  6. Research for diagnosing electronic control fault of astronomical telescope's armature winding by step signal

    NASA Astrophysics Data System (ADS)

    Zhang, Yulong; Yang, Shihai; Gu, Bozhong

    2016-10-01

    This paper puts forward an electronic fault diagnosis method for the armature windings of large-diameter astronomical telescopes, and determines whether the resistance or the inductance is at fault. When an electronic fault occurs in the armature winding, a step signal is applied to the angular position, and the outputs of five models (normal, larger resistance, smaller resistance, larger inductance, and smaller inductance) are compared to localize the fault. First, we derive the transfer function from the angular position to the armature voltage in order to analyze the armature voltage output when the angular position input is a step signal. Second, we establish the characteristics of the armature currents produced when the armature voltage passes through the different armature models. Finally, based on these characteristics, we design two diagnosis strategies, for resistance and inductance separately. The authors used MATLAB/Simulink to model and simulate the system with the hardware parameters of the 2.5 m telescope that China and France developed cooperatively for Russia. A white noise disturbance was also added to the armature voltage; the results show the method's feasibility under disturbances of limited size.
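
    The fault signatures such a strategy relies on can already be seen in a series R-L armature model's step response, i(t) = V/R (1 - e^(-Rt/L)): a larger resistance lowers the steady-state current, while a larger inductance slows the rise. The parameter values below are illustrative, not the telescope's:

```python
import math

def armature_current(t, v_step, R, L_ind):
    # Step response of a series R-L armature model to a voltage step.
    return v_step / R * (1.0 - math.exp(-R * t / L_ind))

V, R0, L0 = 10.0, 1.0, 0.01            # illustrative nominal values
t_long = 5 * L0 / R0                   # five nominal time constants
i_nominal = armature_current(t_long, V, R0, L0)
i_big_R = armature_current(t_long, V, 2 * R0, L0)        # resistance fault
i_tau_nominal = armature_current(L0 / R0, V, R0, L0)     # nominal at one tau
i_tau_big_L = armature_current(L0 / R0, V, R0, 2 * L0)   # inductance fault
```

    Comparing the measured response against these model signatures (steady-state level versus rise time) is what separates the resistance strategy from the inductance strategy.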

  7. Age-related cognitive task effects on gait characteristics: do different working memory components make a difference?

    PubMed

    Qu, Xingda

    2014-10-27

    Though it is well recognized that gait characteristics are affected by concurrent cognitive tasks, how different working memory components contribute to dual-task effects on gait is still unknown. The objective of the present study was to investigate dual-task effects on gait characteristics, specifically for cognitive tasks involving different working memory components. In addition, we also examined age-related differences in such dual-task effects. Three cognitive tasks (i.e. 'Random Digit Generation', 'Brooks' Spatial Memory', and 'Counting Backward') involving different working memory components were examined. Twelve young (6 males and 6 females, 20-25 years old) and 12 older participants (6 males and 6 females, 60-72 years old) took part in two phases of experiments. In the first phase, each cognitive task was defined at three difficulty levels, and perceived difficulty was compared across tasks. The cognitive tasks perceived to be equally difficult were selected for the second phase. In the second phase, four testing conditions were defined, corresponding to a baseline and the three equally difficult cognitive tasks. Participants walked on a treadmill at their self-selected comfortable speed in each testing condition. Body kinematics were collected during treadmill walking, and gait characteristics were assessed using spatial-temporal gait parameters. Application of the concurrent Brooks' Spatial Memory task led to longer step times compared to the baseline condition. Larger step width variability was observed in both the Brooks' Spatial Memory and Counting Backward dual-task conditions than in the baseline condition. In addition, cognitive task effects on step width variability differed between the two age groups. In particular, the Brooks' Spatial Memory task led to significantly larger step width variability only among older adults.
These findings revealed that cognitive tasks involving the visuo-spatial sketchpad interfered with gait more severely in older versus young adults. Thus, dual-task training, in which a cognitive task involving the visuo-spatial sketchpad (e.g. the Brooks' Spatial Memory task) is concurrently performed with walking, could be beneficial to mitigate impairments in gait among older adults.
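
    Step-width variability as reported above is typically the standard deviation (or coefficient of variation) across consecutive steps; a minimal sketch with invented step widths:

```python
import statistics

def step_width_variability(widths):
    # Standard deviation and coefficient of variation across steps.
    mean = statistics.mean(widths)
    sd = statistics.stdev(widths)
    return sd, sd / mean

baseline = [0.10, 0.11, 0.10, 0.09, 0.10, 0.11]    # metres, invented
dual_task = [0.08, 0.13, 0.10, 0.14, 0.07, 0.12]   # invented, more variable
sd_base, cv_base = step_width_variability(baseline)
sd_dual, cv_dual = step_width_variability(dual_task)
```

    A full analysis would compute this per condition and participant from the treadmill kinematics before the group comparison.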

  8. A comparative simulation study of AR(1) estimators in short time series.

    PubMed

    Krone, Tanja; Albers, Casper J; Timmerman, Marieke E

    2017-01-01

    Various estimators of the autoregressive model exist. We compare their performance in estimating the autocorrelation in short time series. In Study 1, under correct model specification, we compare the frequentist r1 estimator, C-statistic, ordinary least squares estimator (OLS) and maximum likelihood estimator (MLE), and a Bayesian method, considering flat (Bf) and symmetrized reference (Bsr) priors. In a completely crossed experimental design we vary the lengths of the time series (T = 10, 25, 40, 50 and 100) and the autocorrelation (from -0.90 to 0.90 in steps of 0.10). The results show the lowest bias for Bsr and the lowest variability for r1. The power in different conditions is highest for Bsr and OLS. For T = 10, the absolute performance of all measurements is poor, as expected. In Study 2, we study the robustness of the methods under misspecification by generating the data according to an ARMA(1,1) model but still analysing the data with an AR(1) model. We use the two methods with the lowest bias for this study, i.e., Bsr and MLE. The bias gets larger as the non-modelled moving average parameter becomes larger. Both the variability and the power depend on the non-modelled parameter. The differences between the two estimation methods are negligible for all measurements.
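
    The small-sample bias that drives these comparisons is easy to reproduce. The simulation below is not the paper's full design: it checks only the frequentist r1 estimator at T = 10, where the negative bias (roughly -(1 + 3*phi)/T for the lag-1 sample autocorrelation) is clearly visible:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ar1(phi, T):
    # Generate an AR(1) series started from its stationary distribution.
    y = np.empty(T)
    y[0] = rng.normal() / np.sqrt(1 - phi**2)
    for t in range(1, T):
        y[t] = phi * y[t - 1] + rng.normal()
    return y

def r1(y):
    # Lag-1 sample autocorrelation (the frequentist r1 estimator).
    d = y - y.mean()
    return np.sum(d[:-1] * d[1:]) / np.sum(d * d)

phi, T, reps = 0.5, 10, 5000
estimates = [r1(simulate_ar1(phi, T)) for _ in range(reps)]
bias = np.mean(estimates) - phi
```

    For phi = 0.5 and T = 10 the rule of thumb predicts a bias near -0.25, which this Monte Carlo run reproduces; the paper's contribution is comparing how much of that bias the alternative estimators and priors remove.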

  9. Surface electric fields for North America during historical geomagnetic storms

    USGS Publications Warehouse

    Wei, Lisa H.; Homeier, Nichole; Gannon, Jennifer L.

    2013-01-01

    To better understand the impact of geomagnetic disturbances on the electric grid, we recreate surface electric fields from two historical geomagnetic storms—the 1989 “Quebec” storm and the 2003 “Halloween” storms. Using the Spherical Elementary Current Systems method, we interpolate sparsely distributed magnetometer data across North America. We find good agreement between the measured and interpolated data, with larger RMS deviations at higher latitudes corresponding to larger magnetic field variations. The interpolated magnetic field data are combined with surface impedances for 25 unique physiographic regions from the United States Geological Survey and literature to estimate the horizontal, orthogonal surface electric fields in 1 min time steps. The induced horizontal electric field strongly depends on the local surface impedance, resulting in surprisingly strong electric field amplitudes along the Atlantic and Gulf Coast. The relative peak electric field amplitude of each physiographic region, normalized to the value in the Interior Plains region, varies by a factor of 2 for different input magnetic field time series. The order of peak electric field amplitudes (largest to smallest), however, does not depend much on the input. These results suggest that regions at lower magnetic latitudes with high ground resistivities are also at risk from the effect of geomagnetically induced currents. The historical electric field time series are useful for estimating the flow of the induced currents through long transmission lines to study power flow and grid stability during geomagnetic disturbances.
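
    The impedance step of such a reconstruction can be sketched with the plane-wave method over a uniform half-space (a simplification: the study uses the region-specific impedances mentioned above, and sign/orientation conventions are omitted here):

```python
import numpy as np

def surface_e_field(b_north, dt, resistivity):
    # Plane-wave estimate of the horizontal electric field (V/m) from the
    # orthogonal horizontal magnetic field (tesla), via the uniform
    # half-space surface impedance Z(w) = sqrt(i * w * mu0 * rho).
    mu0 = 4e-7 * np.pi
    n = b_north.size
    B = np.fft.rfft(b_north)
    w = 2 * np.pi * np.fft.rfftfreq(n, d=dt)
    Z = np.sqrt(1j * w * mu0 * resistivity)
    # E = Z * H = Z * B / mu0; the w = 0 term has Z = 0 (no DC response).
    return np.fft.irfft(Z * B / mu0, n=n)

dt = 60.0                                    # 1 min sampling, as in the study
t = np.arange(240) * dt
b = 100e-9 * np.sin(2 * np.pi * t / 600.0)   # 100 nT variation, 10 min period
e_low = surface_e_field(b, dt, resistivity=100.0)     # ohm-m
e_high = surface_e_field(b, dt, resistivity=1000.0)   # more resistive ground
ratio = np.max(np.abs(e_high)) / np.max(np.abs(e_low))
```

    Because Z scales with the square root of the resistivity, a tenfold more resistive region sees a ~3.2x larger electric field for the same magnetic disturbance, which is the mechanism behind the elevated coastal-plain amplitudes noted above.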

  10. Physical inactivity post-stroke: a 3-year longitudinal study.

    PubMed

    Kunkel, Dorit; Fitton, Carolyn; Burnett, Malcolm; Ashburn, Ann

    2015-01-01

    To explore change in activity levels post-stroke. We measured activity levels using the activPAL™ in hospital and at 1, 2 and 3 years post-stroke onset. Of the 74 participants (mean age 76 (SD 11), 39 men), 61 were assessed in hospital: 94% of time was spent in sitting/lying, 4% standing and 2% walking. Activity levels improved over time (complete cases n = 15); time spent sitting/lying decreased (p = 0.001); time spent standing, walking and number of steps increased (p = 0.001, p = 0.028 and p = 0.03, respectively). At year 3, 18% of time was spent in standing and 9% walking. Time spent upright correlated significantly with Barthel (r = 0.69 on admission, r = 0.68 on discharge, both p < 0.01) and functional ambulation category scores (r = 0.55 on admission, 0.63 on discharge, both p < 0.05); correlations remained significant at all assessment points. Depression (in hospital), left hemisphere infarction (Years 1-2), visual neglect (Year 2), and poor mobility and balance (Years 1-3) correlated with poorer activity levels. People with stroke were inactive for the majority of time. Time spent upright improved significantly by 1 year post-stroke; improvements slowed down thereafter. Poor activity levels correlated with physical and psychological measures. Larger studies are indicated to identify predictors of activity levels. Implications for Rehabilitation: Activity levels (measured using the activPAL™ activity monitor) increased significantly by 1 year post-stroke, but improvements slowed down at 2 and 3 years. People with stroke were inactive for the majority of their day in hospital and in the community. Poor activity levels correlated with physical and psychological measures. Larger studies are indicated to identify the most important predictors of activity levels.

  11. Estimating Physical Activity and Sedentary Behavior in a Free-Living Context: A Pragmatic Comparison of Consumer-Based Activity Trackers and ActiGraph Accelerometry.

    PubMed

    Gomersall, Sjaan R; Ng, Norman; Burton, Nicola W; Pavey, Toby G; Gilson, Nicholas D; Brown, Wendy J

    2016-09-07

    Activity trackers are increasingly popular with both consumers and researchers for monitoring activity and for promoting positive behavior change. However, there is a lack of research investigating the performance of these devices in free-living contexts, for which findings are likely to vary from studies conducted in well-controlled laboratory settings. The aim was to compare Fitbit One and Jawbone UP estimates of steps, moderate-to-vigorous physical activity (MVPA), and sedentary behavior with data from the ActiGraph GT3X+ accelerometer in a free-living context. Thirty-two participants were recruited using convenience sampling; 29 provided valid data for this study (female: 90%, 26/29; age: mean 39.6, SD 11.0 years). On two occasions for 7 days each, participants wore an ActiGraph GT3X+ accelerometer on their right hip and either a hip-worn Fitbit One (n=14) or wrist-worn Jawbone UP (n=15) activity tracker. Daily estimates of steps and very active minutes were derived from the Fitbit One (n=135 days) and steps, active time, and longest idle time from the Jawbone UP (n=154 days). Daily estimates of steps, MVPA, and longest sedentary bout were derived from the corresponding days of ActiGraph data. Correlation coefficients and Bland-Altman plots with examination of systematic bias were used to assess convergent validity and agreement between the devices and the ActiGraph. Cohen's kappa was used to assess the agreement between each device and the ActiGraph for classification of active versus inactive (≥10,000 steps per day and ≥30 min/day of MVPA) comparable with public health guidelines. Correlations with ActiGraph estimates of steps and MVPA ranged between .72 and .90 for Fitbit One and .56 and .75 for Jawbone UP. Compared with ActiGraph estimates, both devices overestimated daily steps by 8% (Fitbit One) and 14% (Jawbone UP). However, mean differences were larger for daily MVPA (Fitbit One: underestimated by 46%; Jawbone UP: overestimated by 50%). 
There was systematic bias across all outcomes for both devices. Correlations with ActiGraph data for longest idle time (Jawbone UP) ranged from .08 to .19. Agreement for classifying days as active or inactive using the ≥10,000 steps/day criterion was substantial (Fitbit One: κ=.68; Jawbone UP: κ=.52) and slight-fair using the criterion of ≥30 min/day of MVPA (Fitbit One: κ=.40; Jawbone UP: κ=.14). There was moderate-strong agreement between the ActiGraph and both Fitbit One and Jawbone UP for the estimation of daily steps. However, due to modest accuracy and systematic bias, they are better suited for consumer-based self-monitoring (eg, for the public consumer or in behavior change interventions) rather than to evaluate research outcomes. The outcomes that relate to health-enhancing MVPA (eg, "very active minutes" for Fitbit One or "active time" for Jawbone UP) and sedentary behavior ("idle time" for Jawbone UP) should be used with caution by consumers and researchers alike.
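Cohen's kappa, used above to compare each tracker's active/inactive classification against the ActiGraph, corrects raw agreement for the agreement expected by chance. A minimal implementation for two binary raters (illustrative, not the study's code):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters, e.g. a tracker vs. the ActiGraph
    classifying each day as active (1) or inactive (0)."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    p1, p2 = sum(a) / n, sum(b) / n                    # marginal 'active' rates
    p_chance = p1 * p2 + (1 - p1) * (1 - p2)           # chance agreement
    return (p_obs - p_chance) / (1 - p_chance)

# Perfect agreement gives kappa = 1; chance-level agreement gives kappa = 0.
```

On this scale, the reported κ=.68 for Fitbit One steps classification counts as substantial, while κ=.14 for Jawbone UP MVPA is only slight.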

  12. Adaptive goal setting and financial incentives: a 2 × 2 factorial randomized controlled trial to increase adults' physical activity.

    PubMed

    Adams, Marc A; Hurley, Jane C; Todd, Michael; Bhuiyan, Nishat; Jarrett, Catherine L; Tucker, Wesley J; Hollingshead, Kevin E; Angadi, Siddhartha S

    2017-03-29

    Emerging interventions that rely on and harness variability in behavior to adapt to individual performance over time may outperform interventions that prescribe static goals (e.g., 10,000 steps/day). The purpose of this factorial trial was to compare adaptive vs. static goal setting and immediate vs. delayed, non-contingent financial rewards for increasing free-living physical activity (PA). A 4-month 2 × 2 factorial randomized controlled trial tested main effects for goal setting (adaptive vs. static goals) and rewards (immediate vs. delayed) and interactions between factors to increase steps/day as measured by a Fitbit Zip. Moderate-to-vigorous PA (MVPA) minutes/day was examined as a secondary outcome. Participants (N = 96) were mainly female (77%), aged 41 ± 9.5 years, and all were insufficiently active and overweight/obese (mean BMI = 34.1 ± 6.2). Participants across all groups increased by 2389 steps/day on average from baseline to intervention phase (p < .001). Participants receiving static goals showed a stronger increase in steps/day from baseline phase to intervention phase (2630 steps/day) than those receiving adaptive goals (2149 steps/day; difference = 482 steps/day, p = .095). Participants receiving immediate rewards showed stronger improvement (2762 steps/day increase) from baseline to intervention phase than those receiving delayed rewards (2016 steps/day increase; difference = 746 steps/day, p = .009). However, the adaptive goals group showed a slower decrease in steps/day from the beginning of the intervention phase to the end of the intervention phase (i.e., less than half the rate) compared to the static goals group (-7.7 steps vs. -18.3 steps each day; difference = 10.7 steps/day, p < .001), resulting in better improvements for the adaptive goals group by study end. Rate of change over the intervention phase did not differ between reward groups. Significant goal phase × goal setting × reward interactions were observed. 
Adaptive goals outperformed static goals (i.e., 10,000 steps) over a 4-month period. Small immediate rewards outperformed larger, delayed rewards. Adaptive goals with either immediate or delayed rewards should be preferred for promoting PA. ClinicalTrials.gov ID: NCT02053259 registered prospectively on January 31, 2014.
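An adaptive goal, in contrast to the static 10,000 steps/day, moves with the participant's own recent performance. One common form (a hypothetical sketch; the trial's exact algorithm is not reproduced here) sets tomorrow's goal to a percentile of the last several days' step counts:

```python
def adaptive_goal(recent_steps, pct=60):
    """Hypothetical adaptive goal: a percentile (here the 60th) of the
    participant's most recent daily step counts, so the target tracks
    observed performance rather than a fixed 10,000 steps/day."""
    s = sorted(recent_steps)
    k = max(0, round(pct / 100 * len(s)) - 1)   # nearest-rank percentile index
    return s[k]

static_goal = 10_000   # the fixed comparator arm in the trial
```

Because the target shifts with recent behavior, it stays challenging but attainable, which is one plausible mechanism for the slower decay in steps/day seen in the adaptive-goal group.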

  13. Setting up an Asbestos Operations and Maintenance (O&M) Program

    EPA Pesticide Factsheets

    Covers the steps a building's O&M plan should include: appointing an asbestos program manager, inspecting the building, developing a plan, and, if necessary, selecting and implementing larger repair or abatement projects.

  14. Interventions to increase physical activity in middle-age women at the workplace: a randomized controlled trial.

    PubMed

    Ribeiro, Marcos Ausenka; Martins, Milton Arruda; Carvalho, Celso R F

    2014-01-01

    A four-group randomized controlled trial evaluated the impact of distinct workplace interventions to increase physical activity (PA) and to reduce anthropometric parameters in middle-age women. One hundred and ninety-five women aged 40-50 yr who were employees of a university hospital and physically inactive in their leisure time were randomly assigned to one of four groups: minimal treatment comparator (MTC; n = 47), pedometer-based individual counseling (PedIC; n = 53), pedometer-based group counseling (PedGC; n = 48), and aerobic training (AT; n = 47). The outcomes were total number of steps (primary outcome), the number performed at moderate intensity (≥ 110 steps per minute), and weight and waist circumference (secondary outcomes). Evaluations were performed at baseline, at the end of a 3-month intervention, and 3 months after that. Data were presented as delta [(after 3 months - baseline) or (after 6 months - baseline)] and 95% confidence interval. To detect differences among the groups, a one-way ANOVA and a Holm-Sidak post hoc test were used (P < 0.05). The Cohen effect size was calculated, and an intention-to-treat approach was performed. Only the groups using pedometers (PedIC and PedGC) increased the total number of steps after 3 months (P < 0.05); however, the increase observed in the PedGC group (1475 steps per day) was even higher than that in PedIC (512 steps per day, P < 0.05), with a larger effect size (1.4). The number of steps performed at moderate intensity also increased only in the PedGC group (845 steps per day, P < 0.05). No PA benefit was observed at 6 months. Women submitted to AT did not modify daily life PA but reduced anthropometric parameters after 3 and 6 months (P < 0.05). Our results show that in the workplace setting, a pedometer-based PA intervention with counseling is effective in increasing the daily number of steps, whereas AT is effective for weight loss.

  15. Reducing Bottlenecks to Improve the Efficiency of the Lung Cancer Care Delivery Process: A Process Engineering Modeling Approach to Patient-Centered Care.

    PubMed

    Ju, Feng; Lee, Hyo Kyung; Yu, Xinhua; Faris, Nicholas R; Rugless, Fedoria; Jiang, Shan; Li, Jingshan; Osarogiagbon, Raymond U

    2017-12-01

    The process of lung cancer care from initial lesion detection to treatment is complex, involving multiple steps, each introducing the potential for substantial delays. Identifying the steps with the greatest delays enables a focused effort to improve the timeliness of care delivery without sacrificing quality. We retrospectively reviewed clinical events from initial detection, through histologic diagnosis, radiologic and invasive staging, and medical clearance, to surgery for all patients who had an attempted resection of a suspected lung cancer in a community healthcare system. We used a computer process modeling approach to evaluate delays in care delivery, in order to identify potential 'bottlenecks' in waiting time, the reduction of which could produce greater care efficiency. We also conducted 'what-if' analyses to predict the relative impact of simulated changes in the care delivery process to determine the most efficient pathways to surgery. The waiting time between radiologic lesion detection and diagnostic biopsy, and the waiting time from radiologic staging to surgery, were the two most critical bottlenecks impeding efficient care delivery (reducing them had more than 3 times the impact of reducing other waiting times). Additionally, instituting surgical consultation prior to cardiac consultation for medical clearance, and decreasing the waiting time between CT scans and diagnostic biopsies, were potentially the most impactful measures to reduce care delays before surgery. Rigorous computer simulation modeling, using clinical data, can provide useful information to identify areas for improving the efficiency of care delivery by process engineering for patients who receive surgery for lung cancer.
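A 'what-if' bottleneck analysis of this kind can be sketched with a toy discrete-event simulation. All stage names and mean waits below are invented for illustration (the study estimated the real values from clinical records); each stage's wait is drawn from an exponential distribution, and halving one stage at a time ranks the candidate interventions.

```python
import random

random.seed(1)

# Hypothetical mean waiting times (days) between care steps
stages = {"detection->biopsy": 18.0,
          "biopsy->staging": 7.0,
          "staging->surgery": 15.0}

def mean_total(waits, n=20_000):
    """Mean total time to surgery, each stage's wait drawn from an
    exponential distribution around its mean (a simplifying assumption)."""
    return sum(sum(random.expovariate(1 / m) for m in waits.values())
               for _ in range(n)) / n

baseline = mean_total(stages)

# 'What-if' analysis: halve one stage at a time and rank the savings
savings = {name: baseline - mean_total({**stages, name: stages[name] / 2})
           for name in stages}
bottleneck = max(savings, key=savings.get)
```

With these made-up numbers, the longest-mean stage dominates the ranking, mirroring the study's finding that the detection-to-biopsy and staging-to-surgery waits were the critical bottlenecks.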

  16. RiseTx: testing the feasibility of a web application for reducing sedentary behavior among prostate cancer survivors receiving androgen deprivation therapy.

    PubMed

    Trinh, Linda; Arbour-Nicitopoulos, Kelly P; Sabiston, Catherine M; Berry, Scott R; Loblaw, Andrew; Alibhai, Shabbir M H; Jones, Jennifer M; Faulkner, Guy E

    2018-06-07

    Given the high levels of sedentary time and treatment-related side effects in prostate cancer survivors (PCS), interventions targeting sedentary behavior (SED) may be more sustainable compared to physical activity (PA) interventions. To examine the feasibility of a web-based intervention (RiseTx) for reducing SED and increasing moderate-to-vigorous physical activity (MVPA) among PCS undergoing ADT. Secondary outcomes include changes in SED, MVPA, light intensity PA, and quality of life. Forty-six PCS were recruited from two cancer centres in Toronto, Ontario, Canada between July 2015-October 2016. PCS were given an activity tracker (Jawbone), access to the RiseTx website program, and provided with a goal of increasing walking by 3000 daily steps above baseline levels over a 12-week period. A range of support tools were progressively released to reduce SED time (e.g., self-monitoring of steps) during the five-phase program. Objective measures of SED, MVPA, and daily steps were compared across the 12-week intervention using linear mixed models. Of the 46 PCS enrolled in the study, 42 completed the SED intervention, representing a 9% attrition rate. Measurement completion rates were 97% and 65% immediately post-intervention and at 12-week follow-up for all measures, respectively. Overall adherence was 64% for total number of logins (i.e., > 3 visits each week). Sample mean age was 73.2 ± 7.3 years, mean BMI was 28.0 ± 3.0 kg/m2, mean number of months since diagnosis was 93.6 ± 71.2, and 72% had ADT administered continuously. Significant reductions of 455.4 weekly minutes of SED time were observed post-intervention (p = .005). Significant increases of +44.1 weekly minutes of MVPA were observed immediately post-intervention (p = .010). There were significant increases in step counts of +1535 steps from baseline to post-intervention (p < .001). RiseTx was successful in reducing SED and increasing MVPA in PCS. PCS were satisfied with the intervention and its components. 
Additional strategies may be needed though for maintenance of behavior change. The next step for RiseTx is to replicate these findings in a larger, randomized controlled trial that will have the potential for reducing sedentary time among PCS. NCT03321149 (ClinicalTrials.gov Identifier).

  17. Next Generation Extended Lagrangian Quantum-based Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Negre, Christian

    2017-06-01

    A new framework for extended Lagrangian first-principles molecular dynamics simulations is presented, which overcomes shortcomings of regular, direct Born-Oppenheimer molecular dynamics, while maintaining important advantages of the unified extended Lagrangian formulation of density functional theory pioneered by Car and Parrinello three decades ago. The new framework allows, for the first time, energy-conserving, linear-scaling Born-Oppenheimer molecular dynamics simulations, which is necessary to study larger and more realistic systems over longer simulation times than previously possible. Expensive self-consistent-field optimizations are avoided, and the normal integration time steps of regular, direct Born-Oppenheimer molecular dynamics can be used. Linear-scaling electronic structure theory is presented using a graph-based approach that is ideal for parallel calculations on hybrid computer platforms. For the first time, quantum-based Born-Oppenheimer molecular dynamics simulation is becoming a practically feasible approach in simulations of 100,000+ atoms, representing a competitive alternative to classical polarizable force field methods. In collaboration with: Anders Niklasson, Los Alamos National Laboratory.

  18. IMEX HDG-DG: A coupled implicit hybridized discontinuous Galerkin and explicit discontinuous Galerkin approach for Euler systems on cubed sphere.

    NASA Astrophysics Data System (ADS)

    Kang, S.; Muralikrishnan, S.; Bui-Thanh, T.

    2017-12-01

    We propose IMEX HDG-DG schemes for Euler systems on the cubed sphere. Of interest is subsonic flow, where the acoustic waves travel faster than the nonlinear advection. In order to simulate these flows efficiently, we split the governing system into a stiff part describing the fast waves and a non-stiff part associated with nonlinear advection. The former is discretized implicitly with the HDG method, while an explicit Runge-Kutta DG discretization is employed for the latter. The proposed IMEX HDG-DG framework: 1) facilitates high-order solutions both in time and space; 2) avoids overly small time step sizes; 3) requires only one linear system solve per time step; and 4) relative to DG, generates a smaller and sparser linear system while promoting further parallelism owing to the HDG discretization. Numerical results for various test cases demonstrate that our methods are comparable to explicit Runge-Kutta DG schemes in terms of accuracy, while allowing for much larger time step sizes.
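The implicit-explicit splitting can be seen in miniature on a scalar model problem. The sketch below is illustrative only (the paper's schemes are high-order IMEX methods for PDE systems): a stiff linear term stands in for the fast acoustic waves and is taken implicitly, while a non-stiff nonlinear term stands in for advection and is taken explicitly, so the step size is not limited by the fast scale.

```python
# First-order IMEX Euler for dy/dt = L*y + N(y): the stiff linear part L is
# treated implicitly, the non-stiff nonlinear part N explicitly, i.e.
#   (y_new - y) / dt = L*y_new + N(y)  ->  y_new = (y + dt*N(y)) / (1 - dt*L)
L = -1000.0                 # stiff linear term (stands in for the fast waves)
N = lambda y: -y ** 2       # non-stiff nonlinear term (advection-like)

def imex_euler(y0, dt, steps):
    y = y0
    for _ in range(steps):
        y = (y + dt * N(y)) / (1.0 - dt * L)   # one linear solve per step
    return y

# Remains stable with dt = 0.05, far above the explicit limit 2/|L| = 0.002
y_final = imex_euler(1.0, dt=0.05, steps=200)
```

Exactly as in the abstract's point 3), each step costs one linear solve involving only the stiff part.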

  19. Soft-Bake Purification of SWCNTs Produced by Pulsed Laser Vaporization

    NASA Technical Reports Server (NTRS)

    Yowell, Leonard; Nikolaev, Pavel; Gorelik, Olga; Allada, Rama Kumar; Sosa, Edward; Arepalli, Sivaram

    2013-01-01

    The "soft-bake" method is a simple and reliable initial purification step, first proposed by researchers at Rice University, for single-walled carbon nanotubes (SWCNT) produced by high-pressure carbon monoxide disproportionation (HiPco). Soft-baking consists of annealing as-produced (raw) SWCNT at low temperatures in humid air, in order to degrade the heavy graphitic shells that surround metal particle impurities. Once these shells are cracked open by the expansion and slow oxidation of the metal particles, the metal impurities can be digested through treatment with hydrochloric acid. The soft-baking of SWCNT produced by pulsed-laser vaporization (PLV) is not straightforward, because the larger average SWCNT diameters (~1.4 nm) and heavier graphitic shells surrounding the metal particles call for increased temperatures during soft-bake. A part of the technology development focused on optimizing the temperature so that effective cracking of the graphitic shells is balanced with maintaining a reasonable yield, which was a critical aspect of this study. Once the ideal temperature was determined, a number of samples of raw SWCNT were purified using the soft-bake method. An important benefit of this process is the reduced time and effort required for soft-bake versus the standard purification route for SWCNT. The total time spent purifying samples by soft-bake is one week per batch, a factor-of-three reduction in the time required for purification compared to the standard acid purification method. Reducing the number of steps also appears to be an important factor in improving the reproducibility of yield and purity of SWCNT, as small deviations are likely to get amplified over the course of a complicated multi-step purification process.

  20. A reassessment of mercury in silastic strain gauge plethysmography for microvascular permeability assessment in man.

    PubMed Central

    Gamble, J; Gartside, I B; Christ, F

    1993-01-01

    1. We have used a non-invasive mercury in silastic strain gauge system to assess the effect of pressure step size on the time course of the rapid volume response (RVR) to occlusion pressure. We also obtained values for hydraulic conductance (Kf), isovolumetric venous pressure (Pvi) and venous pressure (Pv) in thirty-five studies on the legs of twenty-three supine control subjects. 2. The initial rapid volume response to small (9.53 +/- 0.45 mmHg, mean +/- S.E.M.) stepped increases in venous pressure could be described by a single exponential with a time constant of 15.54 +/- 1.14 s. 3. Increasing the size of the pressure step to 49.8 +/- 1.1 mmHg gave a larger value for the RVR time constant (mean 77.3 +/- 11.6 s). 4. We propose that the pressure-dependent difference in the duration of the rapid volume response in these two situations might be due to a vascular smooth muscle-based mechanism, e.g. the veni-arteriolar reflex. 5. The mean (+/- S.E.M.) values for Kf, Pvi and Pv were 4.27 +/- 0.18 (ml min-1 (100 g)-1 mmHg-1 x 10(-3)), 21.50 +/- 0.81 (mmHg) and 9.11 +/- 0.94 (mmHg), respectively. 6. During simultaneous assessment of these parameters in arms and legs, it was found that they did not differ significantly from one another. 7. We propose that the mercury strain gauge system offers a useful, non-invasive means of studying the mechanisms governing fluid filtration in human limbs. PMID:8229810

  1. Temporal heating profile influence on the immediate bond strength following laser tissue soldering.

    PubMed

    Rabi, Yaron; Katzir, Abraham

    2010-07-01

    Bonding of tissues by laser heating is considered a future alternative to sutures and staples. Increasing the post-operative bond strength remains a challenging issue for laser tissue bonding, especially in organs that have to sustain considerable tension or pressure. In this study, we investigated the influence of different temporal heating profiles on the strength of soldered incisions. The thermal damage following each heating procedure was quantified, in order to assess the effect of each heating profile on the thermal damage. Incisions in porcine bowel tissue strips (1 cm x 4 cm) were soldered using 44% liquid albumin mixed with indocyanine green and a temperature-controlled laser (830 nm) tissue bonding system. Heating was done with either a linear or a step temporal heating profile. The incisions were bonded by soldering at three points, separated by 2 mm. Set-point temperatures of T(set) = 60, 70, 80, 90, 100, 110, 150 degrees C and dwell times of t(d) = 10, 20, 30, 40 seconds were investigated. The bond strength was measured immediately following each soldering by applying a gradually increased tension on the tissue edges until the bond broke. Bonds formed by linear heating were stronger than the ones formed by step heating: at T(set) = 80 degrees C the bonds were 40% stronger, and at T(set) = 90 degrees C the bond strength was nearly doubled. The bond strength difference between the heating methods grew larger as T(set) increased. Linear heating produced stronger bonds than step heating. The difference in bond strength was more pronounced at high set-point temperatures and short dwell times. The bond strength could be increased with either a higher set-point temperature or a longer dwell time.

  2. Assessing dose–response effects of national essential medicine policy in China: comparison of two methods for handling data with a stepped wedge-like design and hierarchical structure

    PubMed Central

    Ren, Yan; Yang, Min; Li, Qian; Pan, Jay; Chen, Fei; Li, Xiaosong; Meng, Qun

    2017-01-01

    Objectives: To introduce multilevel repeated measures (RM) models and compare them with multilevel difference-in-differences (DID) models in assessing the linear relationship between the length of the policy intervention period and healthcare outcomes (dose–response effect) for data from a stepped-wedge design with a hierarchical structure. Design: The implementation of the national essential medicine policy (NEMP) in China was a stepped-wedge-like design of five time points with a hierarchical structure. Using one key healthcare outcome from the national NEMP surveillance data as an example, we illustrate how a series of multilevel DID models and one multilevel RM model can be fitted to answer some research questions on policy effects. Setting: Routinely and annually collected national data on China from 2008 to 2012. Participants: 34 506 primary healthcare facilities in 2675 counties of 31 provinces. Outcome measures: Agreement and differences in estimates of the dose–response effect and variation in such effect between the two methods on the logarithm-transformed total number of outpatient visits per facility per year (LG-OPV). Results: The estimated dose–response effect was approximately 0.015 according to four multilevel DID models and precisely 0.012 from one multilevel RM model. Both types of model estimated an increase in LG-OPV by 2.55 times from 2009 to 2012, but 2–4.3 times larger SEs of those estimates were found by the multilevel DID models. Similar estimates of mean effects of covariates and random effects of the average LG-OPV among all levels in the example dataset were obtained by both types of model. Significant variances in the dose–response among provinces, counties and facilities were estimated, and the ‘lowest’ or ‘highest’ units by their dose–response effects were pinpointed only by the multilevel RM model. 
    Conclusions: For examining the dose–response effect based on data from multiple time points with a hierarchical structure and stepped wedge-like designs, multilevel RM models are more efficient, convenient and informative than multilevel DID models. PMID:28399510

  3. A randomized study of reinforcing ambulatory exercise in older adults

    PubMed Central

    Petry, Nancy M.; Andrade, Leonardo F.; Barry, Danielle; Byrne, Shannon

    2014-01-01

    Many older adults do not meet physical activity recommendations and suffer from health-related complications. Reinforcement interventions can have pronounced effects on promoting behavior change; this study evaluated the efficacy of a reinforcement intervention to enhance walking in older adults. Forty-five sedentary adults with mild to moderate hypertension were randomized to 12-week interventions consisting of pedometers and guidelines to walk 10,000 steps/day or that same intervention with chances to win $1-$100 prizes for meeting recommendations. Patients walked an average of about 4,000 steps/day at baseline. Throughout the intervention, participants in the reinforcement intervention met walking goals on 82.5% ± 25.8% of days versus 55.3% ± 37.1% of days in the control condition, p < .01. Even though steps walked increased significantly in both groups relative to baseline, participants in the reinforcement condition walked an average of about 2,000 more steps/day than participants in the control condition, p < .02. Beneficial effects of the reinforcement condition relative to the control condition persisted at a 24-week follow-up evaluation, p < .02, although steps/day were lower than during the intervention period in both groups. Participants in the reinforcement intervention also evidenced greater reductions in blood pressure and weight over time and improvements in fitness indices, ps < .05. This reinforcement-based intervention substantially increased walking and improved clinical parameters, suggesting that larger-scale evaluations of reinforcement-based interventions for enhancing active lifestyles in older adults are warranted. Ultimately, economic analyses may reveal reinforcement interventions to be cost-effective, especially in high-risk populations of older adults. PMID:24128075

  4. An extended validation of the last generation of particle finite element method for free surface flows

    NASA Astrophysics Data System (ADS)

    Gimenez, Juan M.; González, Leo M.

    2015-03-01

    In this paper, a new generation of the particle method known as the Particle Finite Element Method (PFEM), which combines convective particle movement and a fixed mesh resolution, is applied to free surface flows. This interesting variant, previously described in the literature as PFEM-2, is able to use larger time steps when compared to other similar numerical tools, which implies shorter computational times while maintaining the accuracy of the computation. PFEM-2 has already been extended to free surface problems, and the main topic of this paper is a deeper validation of this methodology for a wider range of flows. To accomplish this task, different improved versions of discontinuous and continuous enriched basis functions for the pressure field have been developed to capture the free surface dynamics without artificial diffusion or undesired numerical effects when different density ratios are involved. A collection of problems has been carefully selected such that a wide variety of Froude numbers, density ratios and dominant dissipative cases are reported, with the intention of presenting a general methodology, not restricted to a particular range of parameters, and capable of using large time steps. The results of the different free-surface problems solved, which include the Rayleigh-Taylor instability, sloshing problems, viscous standing waves and the dam break problem, are compared to well-validated numerical alternatives or experimental measurements, obtaining accurate approximations for such complex flows.

  5. Numerical analysis of transient laminar forced convection of nanofluids in circular ducts

    NASA Astrophysics Data System (ADS)

    Sert, İsmail Ozan; Sezer-Uzol, Nilay; Kakaç, Sadık

    2013-10-01

    In this study, the forced convection heat transfer characteristics of nanofluids are investigated by numerical analysis of incompressible transient laminar flow in a circular duct under a step change in wall temperature and wall heat flux. The thermal responses of the system are obtained by solving the energy equation under both transient and steady-state conditions for hydrodynamically fully-developed flow. In the analyses, temperature-dependent thermo-physical properties are also considered. In the numerical analysis, the Al2O3/water nanofluid is assumed to be a homogeneous single-phase fluid. For the effective thermal conductivity of nanofluids, the Hamilton-Crosser model is used together with a model for Brownian motion, which takes the effects of temperature and particle diameter into account. Temperature distributions across the tube for a step jump in wall temperature, and also in wall heat flux, are obtained at various times during the transient calculations at a given location, for a constant value of the Peclet number and particle diameter. Variations of thermal conductivity and, in turn, the heat transfer enhancement are obtained at various times as a function of nanoparticle volume fraction, at a given nanoparticle diameter and Peclet number. The results are given under transient and steady-state conditions; steady-state conditions are reached at larger times, and enhancements are found by comparison to the base fluid heat transfer coefficient under the same conditions.
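The transient-to-steady-state behavior after a step change in wall temperature can be illustrated with a far simpler model than the paper's: 1D transient conduction marched with an explicit finite-difference scheme. All values below are hypothetical; the point is that the step jump at the wall diffuses inward and the steady state emerges at larger times.

```python
import numpy as np

# 1D transient conduction, dT/dt = alpha * d2T/dx2, after a step change in
# wall temperature at t = 0 (nondimensional temperatures: walls at 1, interior
# initially 0). Explicit FTCS scheme; stability needs dt <= dx^2 / (2*alpha).
n, alpha, dx = 21, 1e-5, 0.001          # nodes, diffusivity (m^2/s), spacing (m)
dt = 0.4 * dx**2 / alpha                # safely inside the stability limit
T = np.zeros(n)
T[0] = T[-1] = 1.0                      # step jump applied at both walls

for _ in range(5000):                   # march well past the diffusion time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
```

After many diffusion times (L^2/alpha ≈ 40 s here versus 200 s simulated), the interior has relaxed to the wall value, i.e. the steady state of the abstract's "larger times".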

  6. In situ high-energy synchrotron radiation study of boehmite formation, growth, and phase transformation to alumina in sub- and supercritical water.

    PubMed

    Lock, Nina; Bremholm, Martin; Christensen, Mogens; Almer, Jonathan; Chen, Yu-Sheng; Iversen, Bo B

    2009-12-14

    Boehmite (AlOOH) nanoparticles have been synthesized in subcritical (300 bar, 350 degrees C) and supercritical (300 bar, 400 degrees C) water. The formation and growth of AlOOH nanoparticles were studied in situ by small- and wide-angle X-ray scattering (SAXS and WAXS) using 80 keV synchrotron radiation. The SAXS/WAXS data were measured simultaneously with a time resolution better than 10 s and revealed that the initial nucleation of amorphous particles takes place within 10 s, with subsequent crystallization after 30 s. No diffraction signals were observed from Al(OH)3 within the time resolution of the experiment, which shows that the dehydration step of the reaction is fast and the hydrolysis step rate-determining. The sizes of the crystalline particles were determined as a function of time. The overall size evolution patterns are similar in sub- and supercritical water, but the growth is faster and the final particle size larger under supercritical conditions. After approximately 5 min, the rate of particle growth decreases in both sub- and supercritical water. Heating of the boehmite nanoparticle suspension allowed an in situ X-ray investigation of the phase transformation of boehmite to aluminium oxide. Under the wet conditions used in this work, the transition starts at 530 degrees C and gives a two-phase product of hydrated and non-hydrated aluminium oxide.

  7. Calculations of individual doses for Techa River Cohort members exposed to atmospheric radioiodine from Mayak releases.

    PubMed

    Napier, Bruce A; Eslinger, Paul W; Tolstykh, Evgenia I; Vorobiova, Marina I; Tokareva, Elena E; Akhramenko, Boris N; Krivoschapov, Victor A; Degteva, Marina O

    2017-11-01

    Time-dependent thyroid doses were reconstructed for over 29,000 Techa River Cohort members living near the Mayak production facilities from 131I released to the atmosphere for all relevant exposure pathways. The calculational approach uses four general steps: 1) construct estimates of releases of 131I to the air from production facilities; 2) model the transport of 131I in the air and subsequent deposition on the ground and vegetation; 3) model the accumulation of 131I in environmental media; and 4) calculate individualized doses. The dose calculations are implemented in a Monte Carlo framework that produces best estimates and confidence intervals of dose time-histories. Other radionuclide contributors to thyroid dose were evaluated. The 131I contribution was 75-99% of the thyroid dose. The mean total thyroid dose for cohort members was 193 mGy and the median was 53 mGy. Thyroid doses for about 3% of cohort members were larger than 1 Gy. About 7% of children born in 1940-1950 had doses larger than 1 Gy. The uncertainty in the 131I dose estimates is low enough for this approach to be used in regional epidemiological studies. Copyright © 2017. Published by Elsevier Ltd.
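    The four-step calculational chain above can be sketched as a toy Monte Carlo. All distributions, factors, and the dose coefficient below are invented placeholders, not the study's parameters; only the structure (sample each step, multiply through, summarize the resulting dose distribution into a best estimate and confidence interval) mirrors the described framework:

    ```python
    import random
    import statistics

    def sample_thyroid_dose(rng):
        """One Monte Carlo realization of the four-step dose chain.
        All parameter values are illustrative, not from the study."""
        release = rng.lognormvariate(0, 0.5)     # step 1: 131I release (relative units)
        transport = rng.lognormvariate(-1, 0.4)  # step 2: air transport/deposition factor
        media = rng.lognormvariate(-0.5, 0.3)    # step 3: accumulation in environmental media
        intake_to_dose = 0.05                    # step 4: individual dose factor (hypothetical)
        return release * transport * media * intake_to_dose

    rng = random.Random(42)
    doses = sorted(sample_thyroid_dose(rng) for _ in range(10_000))
    best_estimate = statistics.median(doses)
    ci_low = doses[int(0.025 * len(doses))]
    ci_high = doses[int(0.975 * len(doses))]
    print(f"median dose: {best_estimate:.4f}  95% CI: ({ci_low:.4f}, {ci_high:.4f})")
    ```

    Because each multiplicative factor is uncertain, the resulting dose distribution is wide, which is why the study reports confidence intervals of dose time-histories rather than point values.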

  8. Calculations of individual doses for Techa River Cohort members exposed to atmospheric radioiodine from Mayak releases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Napier, Bruce A.; Eslinger, Paul W.; Tolstykh, Evgenia I.

    Time-dependent thyroid doses were reconstructed for Techa River Cohort members living near the Mayak production facilities from 131I released to the atmosphere for all relevant exposure pathways. The calculational approach uses four general steps: 1) construct estimates of releases of 131I to the air from production facilities; 2) model the transport of 131I in the air and subsequent deposition on the ground and vegetation; 3) model the accumulation of 131I in soil, water, and food products (environmental media); and 4) calculate individual doses by matching appropriate lifestyle and consumption data for the individual to concentrations of 131I in environmental media. The dose calculations are implemented in a Monte Carlo framework that produces best estimates and confidence intervals of dose time-histories. The 131I contribution was 75-99% of the thyroid dose. The mean total thyroid dose for cohort members was 193 mGy and the median was 53 mGy. Thyroid doses for about 3% of cohort members were larger than 1 Gy. About 7% of children born in 1940-1950 had doses larger than 1 Gy. The uncertainty in the 131I dose estimates is low enough for this approach to be used in regional epidemiological studies.

  9. Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model

    PubMed Central

    van Albada, Sacha J.; Rowley, Andrew G.; Senk, Johanna; Hopkins, Michael; Schmidt, Maximilian; Stokes, Alan B.; Lester, David R.; Diesmann, Markus; Furber, Steve B.

    2018-01-01

    The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80,000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption. 
The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks. PMID:29875620
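    The energy metric used above reduces to power multiplied by wall-clock time, divided by the number of synaptic events. A minimal sketch with invented power figures and firing rates (only the synapse count and slowdown factors echo the abstract):

    ```python
    def energy_per_synaptic_event(power_w, slowdown, model_time_s, events_per_model_s):
        """Energy per synaptic event = power * wall-clock time / number of events.
        All numbers used below are illustrative, not measurements from the study."""
        wall_time_s = slowdown * model_time_s
        n_events = events_per_model_s * model_time_s
        return power_w * wall_time_s / n_events

    # Hypothetical figures: 0.3e9 synapses transmitting at ~4 Hz on average.
    events = 0.3e9 * 4
    spinnaker = energy_per_synaptic_event(power_w=100, slowdown=20,
                                          model_time_s=10, events_per_model_s=events)
    nest = energy_per_synaptic_event(power_w=5000, slowdown=4.6,
                                     model_time_s=10, events_per_model_s=events)
    print(f"SpiNNaker: {spinnaker*1e6:.2f} uJ/event, NEST: {nest*1e6:.2f} uJ/event")
    ```

    The sketch shows why a large slowdown factor does not automatically imply worse energy efficiency: a low-power platform can still come out ahead per event.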

  10. Space - The long range future

    NASA Technical Reports Server (NTRS)

    Von Puttkamer, J.

    1985-01-01

    Space exploration goals for NASA in the year 2000 time frame are examined. A lunar base would offer the opportunity for continuous earth viewing, further cosmogeochemical exploration and rudimentary steps at self-sufficiency in space. The latter two factors are also compelling reasons to plan a manned Mars base. Furthermore, competition and cooperation in a Mars mission and further interplanetary exploration is an attractive substitute for war. The hardware requirements for various configurations of Mars missions are briefly addressed, along with other, unmanned missions to the asteroid belt, Mercury, Venus, Jupiter and the moons of Jupiter and Saturn. Finally, long-range technological requirements for providing adequate living/working facilities for larger human populations in Space Station environments are summarized.

  11. Role of delay-based reward in the spatial cooperation

    NASA Astrophysics Data System (ADS)

    Wang, Xu-Wen; Nie, Sen; Jiang, Luo-Luo; Wang, Bing-Hong; Chen, Shi-Ming

    2017-01-01

    Strategy selection in games, a typical decision-making process, usually brings a noticeable reward for players, whose value is discounted if a delay appears. The discounted value measures the trade-off between earning sooner with a smaller reward and earning later with a delayed larger reward. Here, we investigate the effects of delayed rewards on cooperation in structured populations. It is found that delayed reward supports the spreading of cooperation in square-lattice, small-world and random networks. In particular, intermediate reward differences between delays yield the highest cooperation level. Interestingly, cooperative individuals with the same delay time steps form clusters to resist the invasion of defectors, and cooperative individuals with the lowest delayed reward survive because they form the largest clusters in the lattice.
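    The sooner-versus-later trade-off above can be illustrated with simple exponential discounting; the discount factor and reward values below are illustrative assumptions, not parameters from the paper:

    ```python
    def discounted_reward(reward, delay_steps, discount=0.9):
        """Present value of a reward received after `delay_steps` time steps,
        using exponential discounting (an illustrative model)."""
        return reward * discount ** delay_steps

    # A small immediate reward versus a larger delayed one:
    now = discounted_reward(1.0, 0)
    later = discounted_reward(2.0, 5)
    print(now, later)  # the delayed reward of 2.0 is worth 2 * 0.9**5, about 1.18, today
    ```

    Whether the delayed option remains attractive depends on the gap between the two rewards relative to the delay, which is exactly the "reward difference between delays" the abstract varies.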

  12. Changes in leg spring behaviour, plantar loading and foot mobility magnitude induced by an exhaustive treadmill run in adolescent middle-distance runners.

    PubMed

    Fourchet, François; Girard, Olivier; Kelly, Luke; Horobeanu, Cosmin; Millet, Grégoire P

    2015-03-01

    This study aimed to determine adjustments in spring-mass model characteristics, plantar loading and foot mobility induced by an exhaustive run. Within-participants repeated measures. Eleven highly-trained adolescent middle-distance runners ran to exhaustion on a treadmill at a constant velocity corresponding to 95% of velocity associated with VO₂max (17.8 ± 1.4 km·h(-1); time to exhaustion = 8.8 ± 3.4 min). Contact time obtained from plantar pressure sensors was used to estimate spring-mass model characteristics, which were recorded (during 30 s) 1 min after the start and prior to exhaustion using pressure insoles. Foot mobility magnitude (a composite measure of vertical and medial-lateral mobility of the midfoot) was measured before and after the run. Mean contact area (foot to ground), contact time, peak vertical ground reaction force, centre of mass vertical displacement and leg compression increased significantly with fatigue, while flight time, leg stiffness and mean pressure decreased. Leg stiffness decreased because leg compression increased to a larger extent than peak vertical ground reaction forces. Step length, step frequency and foot mobility magnitude did not change at exhaustion. The stride pattern of adolescents when running on a treadmill at high constant velocity deteriorates near exhaustion, as evidenced by impaired leg-spring behaviour (leg stiffness) and altered plantar loading. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
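    In the spring-mass model, leg stiffness is the ratio of peak vertical ground reaction force to leg compression, so stiffness falls whenever compression grows faster than peak force, as reported here near exhaustion. A minimal sketch with invented force and compression values:

    ```python
    def leg_stiffness(peak_force_n, leg_compression_m):
        """Spring-mass model: leg stiffness = peak vertical GRF / leg compression."""
        return peak_force_n / leg_compression_m

    # Illustrative values: if peak force rises 5% while compression rises 15%,
    # stiffness falls, matching the fatigue pattern described in the abstract.
    k_start = leg_stiffness(1800.0, 0.050)
    k_end = leg_stiffness(1800.0 * 1.05, 0.050 * 1.15)
    print(k_start, k_end)
    ```

    The 1800 N force and 5 cm compression are hypothetical round numbers chosen only to make the ratio arithmetic concrete.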

  13. Dynamic subcellular partitioning of the nucleolar transcription factor TIF-IA under ribotoxic stress.

    PubMed

    Szymański, Jedrzej; Mayer, Christine; Hoffmann-Rohrer, Urs; Kalla, Claudia; Grummt, Ingrid; Weiss, Matthias

    2009-07-01

    TIF-IA is a basal transcription factor of RNA polymerase I (Pol I) that is a major target of the JNK2 signaling pathway in response to ribotoxic stress. Using advanced fluorescence microscopy and kinetic modeling we elucidated the subcellular localization of TIF-IA and its exchange dynamics between the nucleolus, nucleoplasm and cytoplasm upon ribotoxic stress. In steady state, the majority of (GFP-tagged) TIF-IA was in the cytoplasm and the nucleus, a minor portion (7%) localizing to the nucleoli. We observed a rapid shuttling of GFP-TIF-IA between the different cellular compartments with a mean residence time of approximately 130 s in the nucleus and only approximately 30 s in the nucleoli. The import rate from the cytoplasm to the nucleus was approximately 3-fold larger than the export rate, suggesting an importin/exportin-mediated transport rather than a passive diffusion. Upon ribotoxic stress, GFP-TIF-IA was released from the nucleoli with a half-time of approximately 24 min. Oxidative stress and inhibition of protein synthesis led to a relocation of GFP-TIF-IA with slower kinetics while osmotic stress had no effect. The observed relocation was much slower than the nucleo-cytoplasmic and nucleus-nucleolus exchange rates of GFP-TIF-IA, indicating a time-limiting step upstream of the JNK2 pathway. In support of this, time-course experiments on the activity of JNK2 revealed the activation of the JNK kinase as the rate-limiting step.

  14. Multiscale time-dependent density functional theory: Demonstration for plasmons.

    PubMed

    Jiang, Jiajian; Abi Mansour, Andrew; Ortoleva, Peter J

    2017-08-07

    Plasmon properties are of significant interest in pure and applied nanoscience. While time-dependent density functional theory (TDDFT) can be used to study plasmons, it becomes impractical for elucidating the effect of size, geometric arrangement, and dimensionality in complex nanosystems. In this study, a new multiscale formalism that addresses this challenge is proposed. This formalism is based on Trotter factorization and the explicit introduction of a coarse-grained (CG) structure function constructed as the Weierstrass transform of the electron wavefunction. This CG structure function is shown to vary on a time scale much longer than that of the wavefunction itself. A multiscale propagator that coevolves both the CG structure function and the electron wavefunction is shown to bring substantial efficiency gains over the classical propagators used in TDDFT. This efficiency follows from the enhanced numerical stability of the multiscale method and the consequent ability to take larger time steps in a discrete time evolution. The multiscale algorithm is demonstrated for plasmons in a group of interacting sodium nanoparticles (15-240 atoms), and it achieves improved efficiency over TDDFT without significant loss of accuracy or space-time resolution.
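    In one dimension the Weierstrass transform is simply Gaussian smoothing, so the claim that the CG structure function varies far more slowly than the underlying wavefunction can be illustrated directly. The test wavefunction and smoothing width below are invented for illustration:

    ```python
    import numpy as np

    def weierstrass_transform(f, x, sigma):
        """Gaussian (Weierstrass) smoothing of a sampled function on grid x,
        an illustrative 1D analogue of the paper's coarse-graining step."""
        kernel = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
        kernel /= kernel.sum(axis=1, keepdims=True)  # normalize each row
        return kernel @ f

    x = np.linspace(-10, 10, 401)
    psi = np.exp(-x ** 2) * np.cos(8 * x)   # fast oscillations under a slow envelope
    density = np.abs(psi) ** 2
    cg = weierstrass_transform(density, x, sigma=1.0)

    # The CG field is far smoother: compare total variation of the two profiles.
    tv = lambda g: np.abs(np.diff(g)).sum()
    print(tv(density), tv(cg))
    ```

    Because the coarse field filters out the fast oscillations, it can safely be advanced with much larger time steps, which is the source of the multiscale propagator's efficiency.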

  15. Dimensional accuracy of resultant casts made by a monophase, one-step and two-step, and a novel two-step putty/light-body impression technique: an in vitro study.

    PubMed

    Caputi, Sergio; Varvara, Giuseppe

    2008-04-01

    Dimensional accuracy when making impressions is crucial to the quality of fixed prosthodontic treatment, and the impression technique is a critical factor affecting this accuracy. The purpose of this in vitro study was to compare the dimensional accuracy of a monophase, 1- and 2-step putty/light-body, and a novel 2-step injection impression technique. A stainless steel model with 2 abutment preparations was fabricated, and impressions were made 15 times with each technique. All impressions were made with an addition-reaction silicone impression material (Aquasil) and a stock perforated metal tray. The monophase impressions were made with regular body material. The 1-step putty/light-body impressions were made with simultaneous use of putty and light-body materials. The 2-step putty/light-body impressions were made with 2-mm-thick resin-prefabricated copings. The 2-step injection impressions were made with simultaneous use of putty and light-body materials. In this injection technique, after removing the preliminary impression, a hole was made through the polymerized material at each abutment edge, to coincide with holes present in the stock trays. Extra-light-body material was then added to the preliminary impression and further injected through the hole after reinsertion of the preliminary impression on the stainless steel model. The accuracy of the 4 different impression techniques was assessed by measuring 3 dimensions (intra- and interabutment) (5-µm accuracy) on stone casts poured from the impressions of the stainless steel model. The data were analyzed by 1-way ANOVA and Student-Newman-Keuls test (alpha=.05). The stone dies obtained with all the techniques had significantly larger dimensions as compared to those of the stainless steel model (P<.01). The order for highest to lowest deviation from the stainless steel model was: monophase, 1-step putty/light body, 2-step putty/light body, and 2-step injection. 
Significant differences among all of the groups for both absolute dimensions of the stone dies, and their percent deviations from the stainless steel model (P<.01), were noted. The 2-step putty/light-body and 2-step injection techniques were the most dimensionally accurate impression methods in terms of resultant casts.

  16. Modeling Nucleation and Grain Growth in the Solar Nebula: Initial Progress Report

    NASA Technical Reports Server (NTRS)

    Nuth, Joseph A.; Paquette, J. A.; Ferguson, F. T.

    2010-01-01

    The primitive solar nebula was a violent and chaotic environment where high-energy collisions, lightning, shocks and magnetic re-connection events rapidly vaporized some fraction of the nebular dust and melted larger particles, while leaving the largest grains virtually undisturbed. At the same time, some tiny grains containing very easily disturbed noble gas signatures (e.g., small, pre-solar graphite or SiC particles) never experienced this violence, yet can be found directly adjacent to much larger meteoritic components (chondrules or CAIs) that did. Additional components in the matrix of the most primitive carbonaceous chondrites and in some chondritic porous interplanetary dust particles include tiny nebular condensates, aggregates of condensates and partially annealed aggregates. Grains formed in violent transient events in the solar nebula did not come to equilibrium with their surroundings. To understand the formation and textures of these materials as well as their nebular abundances we must rely on nucleation theory and kinetic models of grain growth, coagulation and annealing. Such models have been very uncertain in the past; we will discuss the steps we are taking to increase their reliability.

  17. Comparison of Electron Imaging Modes for Dimensional Measurements in the Scanning Electron Microscope.

    PubMed

    Postek, Michael T; Vladár, András E; Villarrubia, John S; Muto, Atsushi

    2016-08-01

    Dimensional measurements from secondary electron (SE) images were compared with those from backscattered electron (BSE) and low-loss electron (LLE) images. With the commonly used 50% threshold criterion, the lines consistently appeared larger in the SE images. As the images were acquired simultaneously by an instrument with the capability to operate detectors for both signals at the same time, the differences cannot be explained by the assumption that contamination or drift between images affected the SE, BSE, or LLE images differently. Simulations with JMONSEL, an electron microscope simulator, indicate that the nanometer-scale differences observed on this sample can be explained by the different convolution effects of a beam with finite size on signals with different symmetry (the SE signal's characteristic peak versus the BSE or LLE signal's characteristic step). This effect is too small to explain the >100 nm discrepancies that were observed in earlier work on different samples. Additional modeling indicates that those discrepancies can be explained by the much larger sidewall angles of the earlier samples, coupled with the different response of SE versus BSE/LLE profiles to such wall angles.

  18. Design and implementation of a twin-family database for behavior genetics and genomics studies.

    PubMed

    Boomsma, Dorret I; Willemsen, Gonneke; Vink, Jacqueline M; Bartels, Meike; Groot, Paul; Hottenga, Jouke Jan; van Beijsterveldt, C E M Toos; Stroet, Therese; van Dijk, Rob; Wertheim, Rien; Visser, Marco; van der Kleij, Frank

    2008-06-01

    In this article we describe the design and implementation of a database for extended twin families. The database does not focus on probands or on index twins, as this approach becomes problematic when larger multigenerational families are included, when more than one set of multiples is present within a family, or when families turn out to be part of a larger pedigree. Instead, we present an alternative approach that uses a highly flexible notion of persons and relations. The relations among the subjects in the database have a one-to-many structure, are user-definable and extendible and support arbitrarily complicated pedigrees. Some additional characteristics of the database are highlighted, such as the storage of historical data, predefined expressions for advanced queries, output facilities for individuals and relations among individuals and an easy-to-use multi-step wizard for contacting participants. This solution presents a flexible approach to accommodate pedigrees of arbitrary size, multiple biological and nonbiological relationships among participants and dynamic changes in these relations that occur over time, which can be implemented for any type of multigenerational family study.
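    The person/relation design described above can be sketched with two tables: persons, and user-definable one-to-many relations between them, so that pedigrees of arbitrary size reduce to graph queries. The schema and sample pedigree below are a minimal illustrative reconstruction, not the actual database:

    ```python
    import sqlite3

    # Minimal sketch: no proband-centric index, just persons plus a
    # user-definable, extendible relation table linking them.
    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE relation (
        from_id INTEGER REFERENCES person(id),
        to_id   INTEGER REFERENCES person(id),
        kind    TEXT  -- user-definable: 'parent-of', 'twin-of', 'spouse-of', ...
    );
    """)
    con.executemany("INSERT INTO person VALUES (?, ?)",
                    [(1, "mother"), (2, "twin A"), (3, "twin B")])
    con.executemany("INSERT INTO relation VALUES (?, ?, ?)",
                    [(1, 2, "parent-of"), (1, 3, "parent-of"), (2, 3, "twin-of")])

    # Arbitrarily complicated pedigrees become graph queries over `relation`:
    children = con.execute(
        "SELECT p.name FROM relation r JOIN person p ON p.id = r.to_id "
        "WHERE r.from_id = 1 AND r.kind = 'parent-of' ORDER BY p.id").fetchall()
    print(children)  # [('twin A',), ('twin B',)]
    ```

    Because relations are rows rather than fixed columns, multiple sets of multiples, nonbiological ties, and later merges into larger pedigrees all fit without schema changes, which is the flexibility the abstract emphasizes.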

  19. Analysis of the sweeped actuator line method

    DOE PAGES

    Nathan, Jörn; Masson, Christian; Dufresne, Louis; ...

    2015-10-16

    The actuator line method made it possible to describe the near wake of a wind turbine more accurately than with the actuator disk method. Whereas the actuator line generates the helicoidal vortex system shed from the tip blades, the actuator disk method sheds a vortex sheet from the edge of the rotor plane. But with the actuator line come also temporal and spatial constraints, such as the need for a much smaller time step than with actuator disk. While the latter one only has to obey the Courant-Friedrichs-Lewy condition, the former one is also restricted by the grid resolution and the rotor tip-speed. Additionally the spatial resolution has to be finer for the actuator line than with the actuator disk, for well resolving the tip vortices. Therefore this work is dedicated to examining a method in between of actuator line and actuator disk, which is able to model the transient behavior, such as the rotating blades, but which also relaxes the temporal constraint. Therefore a larger time-step is used and the blade forces are swept over a certain area. As a result, the main focus of this article is on the aspect of the blade tip vortex generation in comparison with the standard actuator line and actuator disk.
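    The two time-step constraints can be contrasted numerically: the CFL condition limits the flow solver itself, while a common actuator-line rule of thumb further requires the blade tip to sweep no more than about one cell per step. All grid and rotor figures below are invented for illustration:

    ```python
    def max_dt_cfl(dx, u, cfl=1.0):
        """Courant-Friedrichs-Lewy limit for the flow: dt <= CFL * dx / u."""
        return cfl * dx / u

    def max_dt_actuator_line(dx, omega, radius):
        """Extra actuator-line restriction (a common rule of thumb): the blade
        tip should sweep at most one cell per step, dt <= dx / (omega * R)."""
        return dx / (omega * radius)

    dx, u = 2.0, 10.0          # grid spacing [m], wind speed [m/s] (illustrative)
    omega, radius = 1.6, 60.0  # rotor speed [rad/s], rotor radius [m] (illustrative)
    dt_flow = max_dt_cfl(dx, u)
    dt_tip = max_dt_actuator_line(dx, omega, radius)
    print(dt_flow, dt_tip)     # the tip constraint is roughly an order of magnitude tighter
    ```

    Sweeping the blade forces over an area, as the article proposes, relaxes precisely this tip constraint and lets the simulation run closer to the CFL-limited step.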

  20. Analysis of the sweeped actuator line method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nathan, Jörn; Masson, Christian; Dufresne, Louis

    The actuator line method made it possible to describe the near wake of a wind turbine more accurately than with the actuator disk method. Whereas the actuator line generates the helicoidal vortex system shed from the tip blades, the actuator disk method sheds a vortex sheet from the edge of the rotor plane. But with the actuator line come also temporal and spatial constraints, such as the need for a much smaller time step than with actuator disk. While the latter one only has to obey the Courant-Friedrichs-Lewy condition, the former one is also restricted by the grid resolution and the rotor tip-speed. Additionally the spatial resolution has to be finer for the actuator line than with the actuator disk, for well resolving the tip vortices. Therefore this work is dedicated to examining a method in between of actuator line and actuator disk, which is able to model the transient behavior, such as the rotating blades, but which also relaxes the temporal constraint. Therefore a larger time-step is used and the blade forces are swept over a certain area. As a result, the main focus of this article is on the aspect of the blade tip vortex generation in comparison with the standard actuator line and actuator disk.

  1. Intrinsically water-repellent copper oxide surfaces; An electro-crystallization approach

    NASA Astrophysics Data System (ADS)

    Akbari, Raziyeh; Ramos Chagas, Gabriela; Godeau, Guilhem; Mohammadizadeh, Mohammadreza; Guittard, Frédéric; Darmanin, Thierry

    2018-06-01

    The use of metal oxide thin layers has increased owing to their good durability under environmental conditions. In this work, reproducible nanostructured crystalline Cu2O thin films, grown by electrodeposition without any further physical or chemical modification, demonstrate good hydrophobicity. Copper(I) oxide (Cu2O) layers were fabricated on gold/Si(100) substrates by different electrodeposition methods, i.e., galvanostatic deposition, cyclic voltammetry, and pulsed potentiostatic deposition, using copper sulfate (in various concentrations) as the precursor. The dominant crystalline face of the prepared Cu2O samples is (111), which is the most hydrophobic facet of the cubic Cu2O structure. Different crystallite structures, such as nanotriangles and truncated octahedra, were formed on the surface depending on the electrodeposition method. The contact angle (θw), measured as a function of rest time, increased at different rates for the different electrodeposition methods, reaching about 135°. In addition, two-step deposition surfaces were prepared by applying two of the mentioned methods in sequence. In general, the morphology of the two-step deposition surfaces changed compared with that of the one-step samples, allowing the formation of different crystallite shapes. Moreover, the two-step deposition layers showed larger θw than the corresponding one-step deposition layers. The highest observed θw was obtained on one of the two-step deposition layers, owing to the creation of small octahedral structures with narrow, deep valleys on the surface; one exception resulted from large structures and broad valleys. It is therefore possible to engineer different crystallite shapes using the proposed two-step deposition method. Hydrophobic crystallite thin films are expected to find use in environmental and electronic applications to save energy and preserve material properties.

  2. A double-blind randomized discontinuation phase II study of sorafenib (BAY 43-9006) in previously treated non-small cell lung cancer patients: Eastern Cooperative Oncology Group study E2501

    PubMed Central

    Wakelee, Heather A.; Lee, Ju-Whei; Hanna, Nasser H.; Traynor, Anne M.; Carbone, David P.; Schiller, Joan H.

    2012-01-01

    Introduction Sorafenib is a raf kinase and angiogenesis inhibitor with activity in multiple cancers. This phase II study in heavily pretreated non-small cell lung cancer (NSCLC) patients (≥ two prior therapies) utilized a randomized discontinuation design. Methods Patients received 400 mg of sorafenib orally twice daily for two cycles (two months) (Step 1). Responding patients on Step 1 continued on sorafenib; progressing patients went off study, and patients with stable disease were randomized to placebo or sorafenib (Step 2), with crossover from placebo allowed upon progression. The primary endpoint of this study was the proportion of patients having stable or responding disease two months after randomization. Results There were 299 patients evaluated for Step 1 with 81 eligible patients randomized on Step 2 who received sorafenib (n=50) or placebo (n=31). The two-month disease control rates following randomization were 54% and 23% for patients initially receiving sorafenib and placebo respectively, p=0.005. The hazard ratio for progression on Step 2 was 0.51 (95% CI 0.30, 0.87, p=0.014) favoring sorafenib. A trend in favor of overall survival with sorafenib was also observed (13.7 versus 9.0 months from time of randomization), HR 0.67 (95% CI 0.40-1.11), p=0.117. A dispensing error occurred which resulted in unblinding of some patients, but not before completion of the 8 week initial Step 2 therapy. Toxicities were manageable and as expected. Conclusions The results of this randomized discontinuation trial suggest that sorafenib has single agent activity in a heavily pretreated, enriched patient population with advanced NSCLC. These results support further investigation with sorafenib as a single agent in larger, randomized studies in NSCLC. PMID:22982658

  3. Using a Spectral Method to Evaluate Hyporheic Exchange and its Effect on Reach Scale Nitrate Removal.

    NASA Astrophysics Data System (ADS)

    Moren, I.; Worman, A. L. E.; Riml, J.

    2017-12-01

    Previous studies have shown that hyporheic exchange processes can be of great importance for the transport, retention and mass removal of nutrients in streams. Specifically, the flow of surface water through the hyporheic zone enhances redox-sensitive reactions such as coupled nitrification-denitrification. This self-cleaning capacity of streams can be utilized in stream restoration projects aiming to improve water quality by reconstructing the geomorphology of the streams. To optimize the effect of restoration actions we need a quantitative understanding of the linkage between stream geomorphology, hyporheic exchange processes and the desired water quality targets. Here we propose an analytical, spectral methodology to evaluate how different stream geomorphologies induce hyporheic exchange on a wide range of spatial and temporal scales. Measurements of streambed topographies and surface water profiles from agricultural streams were used to calculate the average hyporheic exchange velocity and residence times, and the results were compared with in-stream tracer tests. Furthermore, the hyporheic exchange induced by steps in the surface water profile was derived as a measure of the theoretical capacity of the system. Based on differences in hyporheic exchange, the mass removal of nitrate could be derived for the different geomorphologies. The maximum nitrate mass removal was found to be related to a specific Damköhler number, which reflects that the mass removal can be either reaction or transport controlled. Therefore, although the hyporheic exchange induced by steps in the surface water profile was generally larger than that in the observed natural reaches, this would not necessarily lead to larger nitrate mass removal if the hyporheic residence times are not long enough to facilitate denitrification. 
The study illustrates the importance of investigating a stream thoroughly before any remediation actions are implemented, specifically to evaluate whether the mass removal is reaction or transport controlled.
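    The reaction- versus transport-controlled distinction can be made concrete with a first-order Damköhler number, Da = k * tau (reaction rate constant times hyporheic residence time). The rate constant and residence times below are illustrative assumptions, not values from the study:

    ```python
    import math

    def damkohler(rate_per_s, residence_time_s):
        """Da = reaction rate constant x hyporheic residence time (first-order form).
        Da << 1: reaction-limited removal; Da >> 1: transport-limited removal."""
        return rate_per_s * residence_time_s

    def fraction_removed(da):
        """Fraction of nitrate removed over one hyporheic passage, first-order kinetics."""
        return 1 - math.exp(-da)

    # Hypothetical denitrification rate constant and two flow-path residence times:
    removal_short = fraction_removed(damkohler(1e-4, 600))    # step-induced, fast path
    removal_long = fraction_removed(damkohler(1e-4, 86400))   # long, slow natural path
    print(removal_short, removal_long)
    ```

    The sketch mirrors the abstract's conclusion: a step-induced flow path can exchange more water yet remove little nitrate when its residence time keeps Da small.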

  4. Development and validation of trauma surgical skills metrics: Preliminary assessment of performance after training.

    PubMed

    Shackelford, Stacy; Garofalo, Evan; Shalin, Valerie; Pugh, Kristy; Chen, Hegang; Pasley, Jason; Sarani, Babak; Henry, Sharon; Bowyer, Mark; Mackenzie, Colin F

    2015-07-01

    Maintaining trauma-specific surgical skills is an ongoing challenge for surgical training programs. An objective assessment of surgical skills is needed. We hypothesized that a validated surgical performance assessment tool could detect differences following a training intervention. We developed surgical performance assessment metrics based on discussion with expert trauma surgeons, video review of 10 experts and 10 novice surgeons performing three vascular exposure procedures and lower extremity fasciotomy on cadavers, and validated the metrics with interrater reliability testing by five reviewers blinded to level of expertise and a consensus conference. We tested these performance metrics in 12 surgical residents (Year 3-7) before and 2 weeks after vascular exposure skills training in the Advanced Surgical Skills for Exposure in Trauma (ASSET) course. Performance was assessed in three areas as follows: knowledge (anatomic, management), procedure steps, and technical skills. Time to completion of procedures was recorded, and these metrics were combined into a single performance score, the Trauma Readiness Index (TRI). Wilcoxon matched-pairs signed-ranks test compared pretraining/posttraining effects. Mean time to complete procedures decreased by 4.3 minutes (from 13.4 minutes to 9.1 minutes). The performance component most improved by the 1-day skills training was procedure steps, completion of which increased by 21%. Technical skill scores improved by 12%. Overall knowledge improved by 3%, with 18% improvement in anatomic knowledge. TRI increased significantly from 50% to 64% with ASSET training. Interrater reliability of the surgical performance assessment metrics was validated with single intraclass correlation coefficient of 0.7 to 0.98. A trauma-relevant surgical performance assessment detected improvements in specific procedure steps and anatomic knowledge taught during a 1-day course, quantified by the TRI. 
ASSET training reduced time to complete vascular control by one third. Future applications include assessing specific skills in a larger surgeon cohort, assessing military surgical readiness, and quantifying skill degradation with time since training.

  5. Immersive visualization of rail simulation data.

    DOT National Transportation Integrated Search

    2016-01-01

    The prime objective of this project was to create scientific, immersive visualizations of a Rail-simulation. This project is a part of a larger initiative that consists of three distinct parts. The first step consists of performing a finite element a...

  6. 76 FR 71066 - HUD Draft Environmental Justice Strategy, Extension of Public Comment Period

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-16

    ... may access this number through TTY by calling the toll-free Federal Relay Service at (800) 877-8339... step in a larger Administration-wide effort to ensure strong protection from environmental and health...

  7. Direct Fabrication of the Graphene-Based Composite for Cancer Phototherapy through Graphite Exfoliation with a Photosensitizer.

    PubMed

    Liu, Gang; Qin, Hongmei; Amano, Tsukuru; Murakami, Takashi; Komatsu, Naoki

    2015-10-28

    We report on the application of pristine graphene as a drug carrier for phototherapy (PT). Loading of a photosensitizer, chlorin e6 (Ce6), was achieved simply by sonication of Ce6 and graphite in an aqueous solution. During the loading process, graphite was gradually exfoliated to graphene to give its composite with Ce6 (G-Ce6). This one-step approach is considered superior to graphene oxide (GO)-based composites, which require pretreatment of graphite by strong oxidation. Additionally, the directly exfoliated graphene ensured a high drug loading capacity, 160 wt %, which is about 10 times larger than that of functionalized GO. Furthermore, the Ce6 concentration required for killing cells with G-Ce6 is 6-75 times lower than that of other Ce6 composites, including GO-Ce6.

  8. Capacitive deionization on-chip as a method for microfluidic sample preparation.

    PubMed

    Roelofs, Susan H; Kim, Bumjoo; Eijkel, Jan C T; Han, Jongyoon; van den Berg, Albert; Odijk, Mathieu

    2015-03-21

    Desalination as a sample preparation step is essential for noise reduction and reproducibility of mass spectrometry measurements. A specific example is the analysis of proteins for medical research and clinical applications: salts and buffers present in samples need to be removed before analysis to improve the signal-to-noise ratio. Capacitive deionization (CDI) is an electrostatic desalination technique which uses two porous electrodes facing each other to remove ions from a solution. Upon the application of a potential of 0.5 V, ions migrate to the electrodes and are stored in the electrical double layer. In this article we demonstrate CDI on a chip, desalinating a solution by removing 23% of the Na(+) and Cl(-) ions while the concentration of a larger molecule (FITC-dextran) remains unchanged. For the first time, impedance spectroscopy is introduced to monitor the salt concentration in situ and in real time between the two desalination electrodes.

  9. Reply to Comment by Roques et al. on "Base Flow Recession from Unsaturated-Saturated Porous Media considering Lateral Unsaturated Discharge and Aquifer Compressibility"

    NASA Astrophysics Data System (ADS)

    Liang, Xiuyu; Zhan, Hongbin; Zhang, You-Kuan; Schilling, Keith

    2018-04-01

    Roques et al. (https://doi.org/10.1002/2017WR022085) claim that they have proposed an exponential time step (ETS) method to improve on the computing method of Liang et al. (https://doi.org/10.1002/2017WR020938), which used a constant time step (CTS) method to evaluate the derivative dQ/dt from field data, where Q is the base flow discharge and t is the time since the start of base flow recession. This reply emphasizes that the main objective of Liang et al. (https://doi.org/10.1002/2017WR020938) was to develop an analytical model to investigate the effects of unsaturated flow on base flow recession, not to address data interpretation methods. The analytical model indicates that the base flow recession hydrograph behaves as dQ/dt ∼ aQ^b with the exponent b close to 1 at late times, which is consistent with previous theoretical models. The model of Liang et al. (https://doi.org/10.1002/2017WR020938) was applied to field data where the derivative dQ/dt was computed using the CTS method, a method that has been widely adopted in previous studies. The ETS method proposed by Roques et al. (https://doi.org/10.1016/j.advwatres.2017.07.013) appears to be a good alternative, but its accuracy needs further validation. Using slopes to fit field data as proposed by Roques et al. (https://doi.org/10.1002/2017WR022085) appears to match data satisfactorily at early times, whereas it performs less satisfactorily at late times and leads to an exponent b noticeably larger than 1.
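
    The CTS/ETS distinction above concerns how the derivative dQ/dt is estimated from a discharge record before fitting the recession slope. A minimal sketch, using a synthetic exponential recession (all values hypothetical) and geometric sampling as a simplified stand-in for the ETS scheme:

```python
import numpy as np

# Synthetic recession (hypothetical values): Q(t) = Q0 * exp(-t / tau),
# for which the recession slope -dQ/dt = a * Q**b holds with b = 1 exactly.
Q0, tau = 10.0, 20.0
t = np.arange(0.0, 100.0, 1.0)        # daily discharge record (days)
Q = Q0 * np.exp(-t / tau)

def slope_b(t_s, Q_s):
    """Fit b in log(-dQ/dt) = log(a) + b*log(Q) from sampled points."""
    g = -np.diff(Q_s) / np.diff(t_s)  # finite-difference estimate of -dQ/dt
    Qm = 0.5 * (Q_s[:-1] + Q_s[1:])   # midpoint discharge for each interval
    return np.polyfit(np.log(Qm), np.log(g), 1)[0]

# Constant time step (CTS): uniform daily spacing.
b_cts = slope_b(t, Q)

# Exponential time step (ETS, simplified): geometrically growing spacing,
# so the flat late-time part of the recession spans longer intervals.
idx = np.unique(np.round(np.geomspace(1, len(t) - 1, 25)).astype(int))
b_ets = slope_b(t[idx], Q[idx])
```

    For noise-free data both estimators recover b ≈ 1; the practical differences discussed in the reply appear once measurement noise dominates the small late-time increments of Q.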

  10. An efficient microwave-assisted synthesis method for the production of water soluble amine-terminated Si nanoparticles.

    PubMed

    Atkins, Tonya M; Louie, Angelique Y; Kauzlarich, Susan M

    2012-07-27

    Silicon nanoparticles can be considered a green material, especially when prepared via a microwave-assisted method without the use of highly reactive reducing agents or hydrofluoric acid. A simple solution synthesis of hydrogen-terminated Si- and Mn-doped Si nanoparticles via microwave-assisted synthesis is demonstrated. The reaction of the Zintl salt, Na(4)Si(4), or Mn-doped Na(4)Si(4), Na(4)Si(4(Mn)), with ammonium bromide, NH(4)Br, produces small dispersible nanoparticles along with larger particles that precipitate. Allylamine and 1-amino-10-undecene were reacted with the hydrogen-terminated Si nanoparticles to provide water solubility and stability. A one-pot, single-reaction process and a one-pot, two-step reaction process were investigated. Details of the microwave-assisted process are provided, with the optimal synthesis being the one-pot, two-step reaction procedure and a total time of about 15 min. The nanoparticles were characterized by transmission electron microscopy (TEM), X-ray diffraction, and fluorescence spectroscopy. The microwave-assisted method reliably produces a narrow size distribution of Si nanoparticles in solution.

  11. Linear and exponential TAIL-PCR: a method for efficient and quick amplification of flanking sequences adjacent to Tn5 transposon insertion sites.

    PubMed

    Jia, Xianbo; Lin, Xinjian; Chen, Jichen

    2017-11-02

    Current genome walking methods are very time consuming, and many produce non-specific amplification products. To amplify the flanking sequences that are adjacent to Tn5 transposon insertion sites in Serratia marcescens FZSF02, we developed a genome walking method based on TAIL-PCR. This PCR method added a 20-cycle linear amplification step before the exponential amplification step to increase the concentration of the target sequences. Products of the linear amplification and the exponential amplification were diluted 100-fold to decrease the concentration of the templates that cause non-specific amplification. Fast DNA polymerase with a high extension speed was used in this method, and an amplification program was used to rapidly amplify long specific sequences. With this linear and exponential TAIL-PCR (LETAIL-PCR), we successfully obtained products larger than 2 kb from Tn5 transposon insertion mutant strains within 3 h. This method can be widely used in genome walking studies to amplify unknown sequences that are adjacent to known sequences.

  12. Anion Effects on the Ion Exchange Process and the Deformation Property of Ionic Polymer Metal Composite Actuators

    PubMed Central

    Aoyagi, Wataru; Omiya, Masaki

    2016-01-01

    An ionic polymer-metal composite (IPMC) actuator composed of a thin perfluorinated ionomer membrane with electrodes plated on both surfaces undergoes a large bending motion when a low electric field is applied across its thickness. Such actuators are soft, lightweight, and able to operate in solutions, and thus show promise for a wide range of applications, including MEMS sensors, artificial muscles, biomimetic systems, and medical devices. However, the effect of the anion species on device deformation properties is not well understood; therefore, the present study investigated the effects of different anions on the ion exchange process and the deformation behavior of IPMC actuators with palladium electrodes. Ion exchange was carried out in solutions incorporating various anions, and the actuator tip displacement in deionized water was subsequently measured while applying a step voltage. In the step voltage response measurements, larger anions such as nitrate or sulfate led to a more pronounced tip displacement than smaller anions such as hydroxide or chloride. In AC impedance measurements, larger anions generated greater ion conductivity and a larger double-layer capacitance at the cathode. Based on these mechanical and electrochemical measurements, it is concluded that the presence of larger anions in the ion exchange solution induces a greater double-layer capacitance at the cathode and results in enhanced tip deformation of the IPMC actuators. PMID:28773599

  13. Effects of high-intensity interval training on cardiometabolic risk in overweight and obese African-American women: a pilot study.

    PubMed

    Hornbuckle, Lyndsey M; McKenzie, Michael J; Whitt-Glover, Melicia C

    2017-03-01

    Little is known about high-intensity interval training (HIIT) in African-American (AA) women. The purpose of this pilot study was to evaluate the effects of HIIT and steady-state (SS) exercise on cardiometabolic risk factors in young AA women. A 16-week exercise intervention was conducted 3x/week. Twenty-seven AA women were randomized to SS (n = 11; 32 continuous minutes of treadmill walking at 60-70% of maximum heart rate (HRmax)), or HIIT (n = 16; 32 min of treadmill HIIT alternating 3 min at 60-70% of HRmax with 1 min at 80-90% of HRmax). Two-way repeated measures ANOVA with intention-to-treat analysis was used to identify changes between groups. Significance was accepted at P ≤ 0.05. Of the 27 women who entered the study (age: 30.5 ± 6.8 years; BMI: 35.1 ± 5.1 kg/m²; 5274 ± 1646 baseline steps/day), 14 completed the intervention. HIIT significantly decreased waist circumference (107.0 ± 11.3 to 105.1 ± 11.9 cm) compared to SS, which showed no change. There was a significant time effect for steps: HIIT increased steps/day (5334 ± 1586 to 7604 ± 1817 steps/day), and SS had no change. There were no significant changes in either group for any other measurements. HIIT was more effective at reducing waist circumference and increasing daily steps/day than SS treadmill exercise over 16 weeks. Further research in a larger sample is indicated to evaluate the effects of each protocol on cardiometabolic risk factors.

  14. Two-step growth of two-dimensional WSe2/MoSe2 heterostructures

    DOE PAGES

    Gong, Yongji; Lei, Sidong; Lou, Jun; ...

    2015-08-03

    Two-dimensional (2D) materials have attracted great attention due to their unique properties and atomic thickness. Although various 2D materials have been successfully synthesized with different optical and electrical properties, a strategy for fabricating 2D heterostructures must be developed in order to construct more complicated devices for practical applications. Here we demonstrate for the first time a two-step chemical vapor deposition (CVD) method for growing transition-metal dichalcogenide (TMD) heterostructures, in which MoSe2 was synthesized first, followed by epitaxial growth of WSe2 on the edge and on the top surface of the MoSe2. Compared to previously reported one-step growth methods, this two-step growth has the capability of spatial and size control of each 2D component, leading to much larger (up to 169 μm) heterostructure size, and cross-contamination can be effectively minimized. Furthermore, this two-step growth produces well-defined 2H and 3R stacking in the WSe2/MoSe2 bilayer regions and much sharper in-plane interfaces than the previously reported MoSe2/WSe2 heterojunctions obtained from one-step growth methods. The resultant heterostructures with a WSe2/MoSe2 bilayer and exposed MoSe2 monolayer display the rectification characteristics of a p-n junction, as revealed by optoelectronic tests, and an internal quantum efficiency of 91% when functioning as a photodetector. As a result, a photovoltaic effect without any external gates was observed, showing incident photon to converted electron (IPCE) efficiencies of approximately 0.12%, providing application potential in electronics and energy harvesting.

  15. Topography-guided transepithelial PRK after intracorneal ring segments implantation and corneal collagen CXL in a three-step procedure for keratoconus.

    PubMed

    Coskunseven, Efekan; Jankov, Mirko R; Grentzelos, Michael A; Plaka, Argyro D; Limnopoulou, Aliki N; Kymionis, George D

    2013-01-01

    To present the results of topography-guided transepithelial photorefractive keratectomy (PRK) after intracorneal ring segments implantation followed by corneal collagen cross-linking (CXL) for keratoconus. In this prospective case series, 10 patients (16 eyes) with progressive keratoconus were included. All patients underwent topography-guided transepithelial PRK after Keraring intracorneal ring segments (Mediphacos Ltda) implantation, followed by CXL treatment. The follow-up period was 6 months after the last procedure for all patients. Time interval between both intracorneal ring segments implantation and CXL and between CXL and topography-guided transepithelial PRK was 6 months. LogMAR mean uncorrected distance visual acuity and mean corrected distance visual acuity were significantly improved (P<.05) from 1.14±0.36 and 0.75±0.24 preoperatively to 0.25±0.13 and 0.13±0.06 after the completion of the three-step procedure, respectively. Mean spherical equivalent refraction was significantly reduced (P<.05) from -5.66±5.63 diopters (D) preoperatively to -0.98±2.21 D after the three-step procedure. Mean steep and flat keratometry values were significantly reduced (P<.05) from 54.65±5.80 D and 47.80±3.97 D preoperatively to 45.99±3.12 D and 44.69±3.19 D after the three-step procedure, respectively. Combined topography-guided transepithelial PRK with intracorneal ring segments implantation and CXL in a three-step procedure seems to be an effective, promising treatment sequence offering patients a functional visual acuity and ceasing progression of the ectatic disorder. A longer follow-up and larger case series are necessary to thoroughly evaluate safety, stability, and efficacy of this innovative procedure. Copyright 2013, SLACK Incorporated.

  16. Type Ia supernova Hubble residuals and host-galaxy properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, A. G.; Aldering, G.; Aragon, C.

    2014-03-20

    Kim et al. introduced a new methodology for determining peak-brightness absolute magnitudes of type Ia supernovae from multi-band light curves. We examine the relation between their parameterization of light curves and Hubble residuals, based on photometry synthesized from the Nearby Supernova Factory spectrophotometric time series, with global host-galaxy properties. The K13 Hubble residual step with host mass is 0.013 ± 0.031 mag for a supernova subsample with data coverage corresponding to the K13 training; at ≪1σ, the step is not significant and lower than previous measurements. Relaxing the data coverage requirement, the Hubble residual step with host mass is 0.045 ± 0.026 mag for the larger sample; a calculation using the modes of the distributions, less sensitive to outliers, yields a step of 0.019 mag. The analysis of this article uses K13 inferred luminosities, as distinguished from previous works that use magnitude corrections as a function of SALT2 color and stretch parameters: steps at >2σ significance are found in SALT2 Hubble residuals in samples split by the values of their K13 x(1) and x(2) light-curve parameters. x(1) affects the light-curve width and color around peak (similar to the Δm15 and stretch parameters), and x(2) affects colors, the near-UV light-curve width, and the light-curve decline 20-30 days after peak brightness. The novel light-curve analysis, increased parameter set, and magnitude corrections of K13 may be capturing features of SN Ia diversity arising from progenitor stellar evolution.

  17. A new step aeration approach towards the improvement of nitrogen removal in a full scale Carrousel oxidation ditch.

    PubMed

    Jin, Pengkang; Wang, Xianbao; Wang, Xiaochang; Ngo, Huu Hao; Jin, Xin

    2015-12-01

    Two aeration modes, step aeration and point aeration, were used in a full-scale Carrousel oxidation ditch with microporous aeration, and the nitrogen removal performance and mechanism were analyzed. With the same total aeration input, both aeration modes demonstrated good nitrification outcomes, with average NH4(+)-N removal efficiencies of more than 98%. However, the average removal efficiencies for total nitrogen (TN) were 89.3% and 77.6% under step aeration and point aeration, respectively. The results indicated that an extended aerobic zone following the aeration zones could affect the proportion of anoxic and oxic zones. Step aeration, with larger anoxic zones, showed better TN removal efficiency. More importantly, step aeration provided a suitable environment for both nitrifiers and denitrifiers. The diversity and relative abundance of denitrifying bacteria under step aeration (1.55%) were higher than under point aeration (1.12%), which resulted in an overall higher TN removal efficiency. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Design space pruning heuristics and global optimization method for conceptual design of low-thrust asteroid tour missions

    NASA Astrophysics Data System (ADS)

    Alemany, Kristina

    Electric propulsion has recently become a viable technology for spacecraft, enabling shorter flight times, fewer required planetary gravity assists, larger payloads, and/or smaller launch vehicles. With the maturation of this technology, however, comes a new set of challenges in the area of trajectory design. Because low-thrust trajectory optimization has historically required long run-times and significant user-manipulation, mission design has relied on expert-based knowledge for selecting departure and arrival dates, times of flight, and/or target bodies and gravitational swing-bys. These choices are generally based on known configurations that have worked well in previous analyses or simply on trial and error. At the conceptual design level, however, the ability to explore the full extent of the design space is imperative to locating the best solutions in terms of mass and/or flight times. Beginning in 2005, the Global Trajectory Optimization Competition posed a series of difficult mission design problems, all requiring low-thrust propulsion and visiting one or more asteroids. These problems all had large ranges on the continuous variables---launch date, time of flight, and asteroid stay times (when applicable)---as well as being characterized by millions or even billions of possible asteroid sequences. Even with recent advances in low-thrust trajectory optimization, full enumeration of these problems was not possible within the stringent time limits of the competition. This investigation develops a systematic methodology for determining a broad suite of good solutions to the combinatorial, low-thrust, asteroid tour problem. The target application is for conceptual design, where broad exploration of the design space is critical, with the goal being to rapidly identify a reasonable number of promising solutions for future analysis. The proposed methodology has two steps. 
The first step applies a three-level heuristic sequence developed from the physics of the problem, which allows for efficient pruning of the design space. The second phase applies a global optimization scheme to locate a broad suite of good solutions to the reduced problem. The global optimization scheme developed combines a novel branch-and-bound algorithm with a genetic algorithm and an industry-standard low-thrust trajectory optimization program to solve for the following design variables: asteroid sequence, launch date, times of flight, and asteroid stay times. The methodology is developed based on a small sample problem, which is enumerated and solved so that all possible discretized solutions are known. The methodology is then validated by applying it to a larger intermediate sample problem, which also has a known solution. Next, the methodology is applied to several larger combinatorial asteroid rendezvous problems, using previously identified good solutions as validation benchmarks. These problems include the 2nd and 3rd Global Trajectory Optimization Competition problems. The methodology is shown to be capable of achieving a reduction in the number of asteroid sequences of 6-7 orders of magnitude, in terms of the number of sequences that require low-thrust optimization as compared to the number of sequences in the original problem. More than 70% of the previously known good solutions are identified, along with several new solutions that were not previously reported by any of the competitors. Overall, the methodology developed in this investigation provides an organized search technique for the low-thrust mission design of asteroid rendezvous problems.
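
    The pruning idea in the first step can be illustrated with a toy branch-and-bound search over visit sequences. This is only a sketch with hypothetical delta-v costs, not the thesis algorithm, which couples branch-and-bound with a genetic algorithm and a low-thrust trajectory optimizer:

```python
# Toy branch-and-bound over asteroid visit sequences (hypothetical costs):
# prune any partial sequence whose accumulated cost already exceeds the
# best complete tour found so far.
NODES = ['A', 'B', 'C', 'D']
COST = {                                  # symmetric "delta-v" cost table
    ('S', 'A'): 3, ('S', 'B'): 5, ('S', 'C'): 4, ('S', 'D'): 6,
    ('A', 'B'): 2, ('A', 'C'): 7, ('A', 'D'): 3,
    ('B', 'C'): 3, ('B', 'D'): 4, ('C', 'D'): 2,
}

def cost(u, v):
    return COST[(u, v)] if (u, v) in COST else COST[(v, u)]

best = {'cost': float('inf'), 'seq': None}

def branch(seq, so_far, remaining):
    if so_far >= best['cost']:            # bound: discard the whole subtree
        return
    if not remaining:                     # complete tour: new incumbent
        best['cost'], best['seq'] = so_far, seq
        return
    for nxt in remaining:                 # branch on the next asteroid
        branch(seq + [nxt], so_far + cost(seq[-1], nxt),
               [r for r in remaining if r != nxt])

branch(['S'], 0, NODES)                   # best tour: S-A-B-C-D, cost 10
```

    Because any partial sequence whose accumulated cost already exceeds the incumbent is discarded, whole families of sequences are eliminated without ever being optimized; combined with physics-based heuristics, this is the mechanism behind the large reduction in sequences requiring low-thrust optimization.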

  19. The properties of the lunar regolith at Chang'e-3 landing site: A study based on LPR data

    NASA Astrophysics Data System (ADS)

    Feng, J.; Su, Y.; Xing, S.; Ding, C.; Li, C.

    2015-12-01

    In situ sampling from the surface is difficult in planetary exploration, and radar sensing is sometimes a better choice. Properties of the surface material such as permittivity, density, and depth can be obtained with a surface penetrating radar. Chang'e-3 (CE-3) landed in the northern Mare Imbrium, and the Lunar Penetrating Radar (LPR) carried on the Yutu rover detects the shallow structure of the lunar crust and the properties of the lunar regolith, giving us a close look at the lunar subsurface. We process the radar data in two steps: a regular preprocessing step and a migration step. The preprocessing part includes zero-time correction, de-wow, gain compensation, DC removal, and geometric positioning. We then combine all radar data obtained while the rover was moving and use an FIR filter with a pass band of 200 MHz-600 MHz to reduce noise in the radar image. A normal radar image is obtained after the preprocessing step. Using a nonlinear least-squares fitting method, we fit the hyperbolas in the radar image caused by buried objects or rocks in the regolith and estimate the EM wave propagation velocity and the permittivity of the regolith. Because there is a fixed mathematical relationship between dielectric constant and density, the density profile of the lunar regolith is also calculated. The permittivity and density at the landing site appear larger than previously thought. Finally, with a model of variable velocities, we apply the Kirchhoff migration method widely used in seismology to transform the unfocused space-time LPR image into a focused one showing the objects' (mostly stones) true locations and sizes. From the migrated image, we find that the regolith depth at the landing site is smaller than in previous studies and that the stone content rises rapidly with depth. 
Our study suggests that the landing site is a young region and the reworked history of the surface is short, which is consistent with crater density, showing the gradual formation of regolith by rock fracture during impact events.
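
    The hyperbola-fitting step that yields velocity and permittivity can be sketched as follows. The geometry and values here are hypothetical, and the hyperbola apex is assumed known, which reduces the nonlinear least-squares fit described above to a linear one:

```python
import numpy as np

C = 0.3  # speed of light in vacuum, m/ns

# Hypothetical buried rock: relative permittivity 4, so v = C/2 = 0.15 m/ns.
eps_true, depth, x0 = 4.0, 1.0, 0.0
v_true = C / np.sqrt(eps_true)

x = np.linspace(-2.0, 2.0, 41)       # antenna positions along the track (m)
t = (2.0 / v_true) * np.sqrt(depth**2 + (x - x0)**2)   # two-way time (ns)

# Linearise: t^2 = (4/v^2)*(x - x0)^2 + (2*depth/v)^2 is a straight line
# in (x - x0)^2, so ordinary least squares recovers slope and intercept.
slope, intercept = np.polyfit((x - x0) ** 2, t ** 2, 1)
v_est = 2.0 / np.sqrt(slope)                  # EM wave velocity in regolith
eps_est = (C / v_est) ** 2                    # relative permittivity
depth_est = v_est * np.sqrt(intercept) / 2.0  # depth of the scatterer
```

    Bulk density then follows from the estimated permittivity through an empirical mixing relation (commonly a power law calibrated on returned lunar samples), which is the fixed dielectric-density relationship the abstract refers to.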

  20. DISPATCH: a numerical simulation framework for the exa-scale era - I. Fundamentals

    NASA Astrophysics Data System (ADS)

    Nordlund, Åke; Ramsey, Jon P.; Popovas, Andrius; Küffmeier, Michael

    2018-06-01

    We introduce a high-performance simulation framework that permits the semi-independent, task-based solution of sets of partial differential equations, typically manifesting as updates to a collection of `patches' in space-time. A hybrid MPI/OpenMP execution model is adopted, where work tasks are controlled by a rank-local `dispatcher' which selects, from a set of tasks generally much larger than the number of physical cores (or hardware threads), tasks that are ready for updating. The definition of a task can vary, for example, with some solving the equations of ideal magnetohydrodynamics (MHD), others non-ideal MHD, radiative transfer, or particle motion, and yet others applying particle-in-cell (PIC) methods. Tasks do not have to be grid based, while tasks that are, may use either Cartesian or orthogonal curvilinear meshes. Patches may be stationary or moving. Mesh refinement can be static or dynamic. A feature of decisive importance for the overall performance of the framework is that time-steps are determined and applied locally; this allows potentially large reductions in the total number of updates required in cases when the signal speed varies greatly across the computational domain, and therefore a corresponding reduction in computing time. Another feature is a load balancing algorithm that operates `locally' and aims to simultaneously minimize load and communication imbalance. The framework generally relies on already existing solvers, whose performance is augmented when run under the framework, due to more efficient cache usage, vectorization, local time-stepping, plus near-linear and, in principle, unlimited OpenMP and MPI scaling.
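
    The benefit of local time-stepping can be illustrated with a minimal dispatcher sketch (a hypothetical toy, not the DISPATCH code): each patch advances with its own step, and the task furthest behind in time is always updated next:

```python
import heapq

class Patch:
    def __init__(self, pid, dt):
        self.pid, self.dt = pid, dt
        self.t, self.updates = 0, 0

def run(patches, t_end):
    # Priority queue keyed on patch time: always update the patch that is
    # furthest behind, a stand-in for readiness-based task selection.
    heap = [(p.t, p.pid, p) for p in patches]
    heapq.heapify(heap)
    while heap:
        t, _, p = heapq.heappop(heap)
        if t >= t_end:
            continue                  # this patch is done; drop it
        p.t += p.dt                   # "update" the patch over its local step
        p.updates += 1
        heapq.heappush(heap, (p.t, p.pid, p))
    return patches

# Signal speed varies across the domain, so local steps differ by 8x.
fast, slow = run([Patch(0, 1), Patch(1, 8)], t_end=80)
# Local stepping: 80 + 10 = 90 updates; a global step of 1 would need 160.
```

    The saving grows with the spread in signal speeds, which is why locally determined time-steps can greatly reduce the total number of updates across a heterogeneous computational domain.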

  1. Flash Pyrolysis and Fractional Pyrolysis of Oleaginous Biomass in a Fluidized-bed Reactor

    NASA Astrophysics Data System (ADS)

    Urban, Brook

    Thermochemical conversion methods such as pyrolysis have the potential to convert diverse biomass feedstocks into liquid fuels. In particular, bio-oil yields can be maximized by implementing flash pyrolysis to facilitate rapid heat transfer to the solids along with short vapor residence times that minimize secondary degradation of bio-oils. This study first focused on the design and construction of a fluidized-bed flash pyrolysis reactor with a high-efficiency bio-oil recovery unit. Subsequently, the reactor was used to perform flash pyrolysis of soybean pellets to assess the thermochemical conversion of oleaginous biomass feedstocks. The fluidized-bed reactor design included a novel feed input mechanism using suction created by the flow of carrier gas through a venturi, which prevented the plugging problems that occur with more conventional screw feeders. In addition, the uniquely designed batch pyrolysis unit comprised two tubes of dissimilar diameters. The bottom section consisted of a 1" tube connected to a larger 3" tube placed vertically above it. At the carrier gas flow rates used in these studies, the feed particles remained fluidized in the smaller-diameter tube, but a reduction in carrier gas velocity in the larger-diameter "disengagement chamber" prevented the escape of particles into the condensers. The outlet of the reactor was connected to two Allihn condensers followed by an innovative packed-bed dry ice condenser. Due to the high carrier gas flow rates in fluidized-bed reactors, bio-oil vapors form dilute aerosols upon cooling that are difficult to coalesce and recover with traditional heat-exchange condensers. The dry ice condenser provided a high surface area for inertial impaction of these aerosols and also allowed easy recovery of bio-oils after natural evaporation of the dry ice at the end of the experiments. Single-step pyrolysis was performed between 250-610°C with a vapor residence time between 0.3-0.6 s. 
At 550°C or higher, 70% of the initial feed mass was recovered as bio-oil. However, the mass of high-calorific lipid-derived components in the collected bio-oils remained nearly constant at reaction temperatures above 415°C; between 80-90% of the feedstock lipids were recovered in the bio-oil fraction. In addition, multi-step fractional flash pyrolysis experiments were performed to assess the possibility of producing higher-quality bio-oils, since a large fraction of the protein and carbohydrates degrades at lower temperatures (320-400°C). A low-temperature pyrolysis step was performed first, followed by pyrolysis of the residues at higher temperature. This fractional pyrolysis approach produced higher-quality bio-oil with low water and nitrogen content from the higher-temperature steps.

  2. Large scale crystallization of protein pharmaceuticals in microgravity via temperature change

    NASA Technical Reports Server (NTRS)

    Long, Marianna M.

    1992-01-01

    The major objective of this research effort is the temperature driven growth of protein crystals in large batches in the microgravity environment of space. Pharmaceutical houses are developing protein products for patient care, for example, human insulin, human growth hormone, interferons, and tissue plasminogen activator or TPA, the clot buster for heart attack victims. Except for insulin, these are very high value products; they are extremely potent in small quantities and have a great value per gram of material. It is feasible that microgravity crystallization can be a cost recoverable, economically sound final processing step in their manufacture. Large scale protein crystal growth in microgravity has significant advantages from the basic science and the applied science standpoints. Crystal growth can proceed unhindered due to lack of surface effects. Dynamic control is possible and relatively easy. The method has the potential to yield large quantities of pure crystalline product. Crystallization is a time honored procedure for purifying organic materials and microgravity crystallization could be the final step to remove trace impurities from high value protein pharmaceuticals. In addition, microgravity grown crystals could be the final formulation for those medicines that need to be administered in a timed release fashion. Long lasting insulin, insulin lente, is such a product. Also crystalline protein pharmaceuticals are more stable for long-term storage. Temperature, as the initiation step, has certain advantages. Again, dynamic control of the crystallization process is possible and easy. A temperature step is non-invasive and is the most subtle way to control protein solubility and therefore crystallization. Seeding is not necessary. Changes in protein and precipitant concentrations and pH are not necessary. Finally, this method represents a new way to crystallize proteins in space that takes advantage of the unique microgravity environment. 
The results from two flights showed that the hardware performed perfectly, many crystals were produced, and they were much larger than their ground-grown controls. Morphometric analysis was done on over 4,000 crystals to establish crystal size, size distribution, and relative size. Space-grown crystals were remarkably larger than their earth-grown counterparts, and crystal size was a function of PCF volume. The fact that the size distribution for the space-grown crystals was a function of PCF volume may indicate that ultimate size was a function of temperature gradient. Since the insulin protein concentration was very low, 0.4 mg/ml, the size distribution could also be following the total amount of protein in each of the PCFs. X-ray analysis showed that the bigger space-grown insulin crystals diffracted to higher resolution than their ground-grown controls. When the data were normalized for size, they still indicated that the space crystals were better than the ground crystals.

  3. The effect of monocular and binocular viewing on the accommodation response to real targets in emmetropia and myopia.

    PubMed

    Seidel, Dirk; Gray, Lyle S; Heron, Gordon

    2005-04-01

    Decreased blur-sensitivity found in myopia has been linked with reduced accommodation responses and myopigenesis. Although the mechanism for myopia progression remains unclear, it is commonly known that myopic patients rarely report near visual symptoms and are generally very sensitive to small changes in their distance prescription. This experiment investigated the effect of monocular and binocular viewing on static and dynamic accommodation in emmetropes and myopes for real targets to monitor whether inaccuracies in the myopic accommodation response are maintained when a full set of visual cues, including size and disparity, is available. Monocular and binocular steady-state accommodation responses were measured with a Canon R1 autorefractor for target vergences ranging from 0-5 D in emmetropes (EMM), late-onset myopes (LOM), and early-onset myopes (EOM). Dynamic closed-loop accommodation responses for a stationary target at 0.25 m and step stimuli of two different magnitudes were recorded for both monocular and binocular viewing. All refractive groups showed similar accommodation stimulus response curves consistent with previously published data. Viewing a stationary near target monocularly, LOMs demonstrated slightly larger accommodation microfluctuations compared with EMMs and EOMs; however, this difference was absent under binocular viewing conditions. Dynamic accommodation step responses revealed significantly (p < 0.05) longer response times for the myopic subject groups for a number of step stimuli. No significant difference in either reaction time or the number of correct responses for a given number of step-vergence changes was found between the myopic groups and EMMs. When viewing real targets with size and disparity cues available, no significant differences in the accuracy of static and dynamic accommodation responses were found among EMM, EOM, and LOM. 
The results suggest that corrected myopes do not experience dioptric blur levels that are substantially different from emmetropes when they view free space targets.

  4. The role of cool-flame chemistry in quasi-steady combustion and extinction of n-heptane droplets

    NASA Astrophysics Data System (ADS)

    Paczko, Guenter; Peters, Norbert; Seshadri, Kalyanasundaram; Williams, Forman Arthur

    2014-09-01

    Experiments on the combustion of large n-heptane droplets, performed by the National Aeronautics and Space Administration in the International Space Station, revealed a second stage of continued quasi-steady burning, supported by low-temperature chemistry, that follows radiative extinction of the first stage of burning, which is supported by normal hot-flame chemistry. The second stage of combustion experienced diffusive extinction, after which a large vapour cloud was observed to form around the droplet. In the present work, a 770-step reduced chemical-kinetic mechanism and a new 62-step skeletal chemical-kinetic mechanism, developed as an extension of an earlier 56-step mechanism, are employed to calculate the droplet burning rates, flame structures, and extinction diameters for this cool-flame regime. The calculations are performed for quasi-steady burning with the mixture fraction as the independent variable, which is then related to the physical variables of droplet combustion. The predictions with the new mechanism, which agree well with measured autoignition times, reveal that, in decreasing order of abundance, H2O, CO, H2O2, CH2O, and C2H4 are the principal reaction products during the low-temperature stage and that, during this stage, there is substantial leakage of n-heptane and O2 through the flame, and very little production of CO2 with no soot in the mechanism. The fuel leakage has been suggested to be the source of the observed vapour cloud that forms after flame extinction. While the new skeletal chemical-kinetic mechanism facilitates understanding of the chemical kinetics and predicts ignition times well, its predicted droplet diameters at extinction are appreciably larger than observed experimentally, but predictions with the 770-step reduced chemical-kinetic mechanism are in reasonably good agreement with experiment. The computations show how the key ketohydroperoxide compounds control the diffusion-flame structure and its extinction.

  5. Method of Simulating Flow-Through Area of a Pressure Regulator

    NASA Technical Reports Server (NTRS)

    Hass, Neal E. (Inventor); Schallhorn, Paul A. (Inventor)

    2011-01-01

    The flow-through area of a pressure regulator positioned in a branch of a simulated fluid flow network is generated. A target pressure is defined downstream of the pressure regulator. A projected flow-through area is generated as a non-linear function of (i) target pressure, (ii) flow-through area of the pressure regulator for a current time step and a previous time step, and (iii) pressure at the downstream location for the current time step and previous time step. A simulated flow-through area for the next time step is generated as a sum of (i) flow-through area for the current time step, and (ii) a difference between the projected flow-through area and the flow-through area for the current time step multiplied by a user-defined rate control parameter. These steps are repeated for a sequence of time steps until the pressure at the downstream location is approximately equal to the target pressure.
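    The iterative update described above can be sketched in a few lines. This is a hedged illustration, not the patented implementation: the patent only says the projected area is a non-linear function of the listed quantities, so the secant-style projection below, the function names, and the toy pressure model supplied by the caller are all assumptions.

    ```python
    def simulate_regulator_area(pressure_of_area, p_target, a0, a1, k=0.5,
                                tol=1e-6, max_steps=1000):
        """Iterate the regulator's flow-through area until the downstream
        pressure approaches the target. The secant-style projection and the
        default rate control parameter k are illustrative assumptions."""
        a_prev, a_cur = a0, a1
        p_prev, p_cur = pressure_of_area(a_prev), pressure_of_area(a_cur)
        for _ in range(max_steps):
            if abs(p_cur - p_target) < tol:
                break
            # Projected area: secant estimate from the current and previous
            # (area, pressure) pairs -- one plausible "non-linear function"
            # of the quantities the abstract lists.
            if p_cur != p_prev:
                a_proj = a_cur + (p_target - p_cur) * (a_cur - a_prev) / (p_cur - p_prev)
            else:
                a_proj = a_cur
            # Simulated area for the next time step: current area plus the
            # projection difference scaled by the rate control parameter.
            a_next = a_cur + k * (a_proj - a_cur)
            a_prev, p_prev = a_cur, p_cur
            a_cur = a_next
            p_cur = pressure_of_area(a_cur)
        return a_cur, p_cur
    ```

    With a toy linear pressure model such as `p = 100 * a` and a target of 50, the iteration relaxes the area toward 0.5; the rate control parameter `k` trades convergence speed against stability, mirroring the user-defined parameter in the abstract.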

  6. Effect of mixing, concentration and temperature on the formation of mesostructured solutions and their role in the nucleation of DL-valine crystals.

    PubMed

    Jawor-Baczynska, Anna; Moore, Barry D; Sefcik, Jan

    2015-01-01

    We report investigations on the formation of mesostructured solutions in DL-valine-water-2-propanol mixtures, and the crystallization of DL-valine from these solutions. Mesostructured liquid phases, similar to those previously observed in aqueous solutions of glycine and DL-alanine, were observed using Dynamic Light Scattering and Brownian microscopy, in both undersaturated and supersaturated solutions below a certain transition temperature. Careful experimentation was used to demonstrate that the optically clear mesostructured liquid phase, comprising colloidal mesoscale clusters dispersed within bulk solution, is thermodynamically stable and present in equilibrium with the solid phase at saturation conditions. Solutions prepared by slow cooling contained mesoscale clusters with a narrow size distribution and a mean hydrodynamic diameter of around 200 nm. Solutions of identical composition prepared by rapid isothermal mixing of valine aqueous solutions with 2-propanol contained mesoscale clusters which were significantly larger than those observed in slowly cooled solutions. The presence of larger mesoscale clusters was found to correspond to faster nucleation. Observed induction times were strongly dependent on the rapid initial mixing step, although solutions were left undisturbed afterwards and the induction times observed were up to two orders of magnitude longer than the initial mixing period. We propose that mesoscale clusters above a certain critical size are likely to be the location of productive nucleation events.

  7. Interplay of wavelength, fluence and spot-size in free-electron laser ablation of cornea.

    PubMed

    Hutson, M Shane; Ivanov, Borislav; Jayasinghe, Aroshan; Adunas, Gilma; Xiao, Yaowu; Guo, Mingsheng; Kozub, John

    2009-06-08

    Infrared free-electron lasers ablate tissue with high efficiency and low collateral damage when tuned to the 6-microm range. This wavelength-dependence has been hypothesized to arise from a multi-step process following differential absorption by tissue water and proteins. Here, we test this hypothesis at wavelengths for which cornea has matching overall absorption, but drastically different differential absorption. We measure etch depth, collateral damage and plume images and find that the hypothesis is not confirmed. We do find larger etch depths for larger spot sizes--an effect that can lead to an apparent wavelength dependence. Plume imaging at several wavelengths and spot sizes suggests that this effect is due to increased post-pulse ablation at larger spots.

  8. Older adults prioritize postural stability in the anterior-posterior direction to regain balance following volitional lateral step.

    PubMed

    Porter, Shaun; Nantel, Julie

    2015-02-01

    Postural control in the medial-lateral (ML) direction is of particular interest in assessing changes in postural control, as it is highly related to the risk of falling. The aim was to determine the postural strategies used to regain balance following a voluntary lateral step and to compare these strategies between young and older adults. Sixteen older adults (60-90 years) and 14 young adults (20-40 years) were asked to stand quietly for 30s, walk in place and then take a lateral step and stand quietly (30s). The post-step balance period was divided into 10s intervals. Center of pressure displacement (CoP) and velocity (VCoP) in the anterior-posterior (AP) and ML directions were analyzed. In both groups, CoP and VCoP in AP and ML increased in Post1 compared to Pre (P<0.001). Unlike in young adults, ML VCoP in Post2 and Post3 was larger than in Pre (P=0.01) in older adults. Age correlated with all VCoP (Pre and Post) in both the ML (P<0.05) and AP directions (P<0.01). Unlike young adults, older adults used different postural strategies in the ML and AP directions and prioritized postural stability in the AP direction to recover balance after completing a lateral step. In the ML direction, older adults took up to 30s to regain balance. Considering that age was related to larger CoP displacement and velocity, the AP strategy for recovering postural balance following a lateral step could become less efficient as older adults age, thereby increasing the risk of falls. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Time-Variable Transit Time Distributions in the Hyporheic Zone of a Headwater Mountain Stream

    NASA Astrophysics Data System (ADS)

    Ward, Adam S.; Schmadel, Noah M.; Wondzell, Steven M.

    2018-03-01

    Exchange of water between streams and their hyporheic zones is known to be dynamic in response to hydrologic forcing, variable in space, and to exist in a framework with nested flow cells. The expected result of heterogeneous geomorphic setting, hydrologic forcing, and between-feature interaction is hyporheic transit times that are highly variable in both space and time. Transit time distributions (TTDs) are important as they reflect the potential for hyporheic processes to impact biogeochemical transformations and ecosystems. In this study we simulate time-variable transit time distributions based on dynamic vertical exchange in a headwater mountain stream with observed, heterogeneous step-pool morphology. Our simulations include hyporheic exchange over a 600 m river corridor reach driven by continuously observed, time-variable hydrologic conditions for more than 1 year. We found that spatial variability at an instant in time is typically larger than temporal variation for the reach. Furthermore, we found reach-scale TTDs were only marginally variable under all but the most extreme hydrologic conditions, indicating that TTDs are highly transferable in time. Finally, we found that aggregation of annual variation in space and time into a "master TTD" reasonably represents most of the hydrologic dynamics simulated, suggesting that this aggregation approach may provide a relevant basis for scaling from features or short reaches to entire networks.

  10. Final Report for Department of Energy Project DE-SC0012198

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucchese, Robert; Poliakoff, Erwin; Trallero-Herrero, Carlos

    The study of the motion of atoms in molecules is important to understanding many areas of physical and life sciences. Such motion occurs on many different time scales, with electronic motion occurring on a sub-femtosecond time scale, and simple vibrational motion in tens to hundreds of femtoseconds. One way to follow such processes in real time is by the use of short-pulsed lasers, and in particular by studying time-resolved photoionization and the related process of high-harmonic generation (HHG). Thus there has been much effort to develop the tools necessary to probe molecular systems using short pulse lasers and to understand the sensitivity of the different possible probes to the time-dependent geometric structure as well as the electronic structure of molecules. Our research has particularly focused on the connection between high-field processes and the more traditional weak field photoionization processes. Strong field and weak field processes can be connected through models that involve the same matrix elements. We have demonstrated in our study of HHG from SF6 that the spectrum is sensitive to the interplay between the angular dependence of the ionization step and recombination step. In our study of rescattering spectroscopy, we have shown that with a combination of experiment and theory, we can use this high-field spectroscopy to determine molecular structure in molecules such as C2H4. We have also developed new computational tools based on overset grids to enable studies on larger molecular systems which use much more robust numerical approaches so that the resulting code can be a tool that non-specialists can use to study related systems.

  11. Methods, systems and devices for detecting and locating ferromagnetic objects

    DOEpatents

    Roybal, Lyle Gene [Idaho Falls, ID; Kotter, Dale Kent [Shelley, ID; Rohrbaugh, David Thomas [Idaho Falls, ID; Spencer, David Frazer [Idaho Falls, ID

    2010-01-26

    Methods for detecting and locating ferromagnetic objects in a security screening system. One method includes a step of acquiring magnetic data that includes magnetic field gradients detected during a period of time. Another step includes representing the magnetic data as a function of the period of time. Another step includes converting the magnetic data to being represented as a function of frequency. Another method includes a step of sensing a magnetic field for a period of time. Another step includes detecting a gradient within the magnetic field during the period of time. Another step includes identifying a peak value of the gradient detected during the period of time. Another step includes identifying a portion of time within the period of time that represents when the peak value occurs. Another step includes configuring the portion of time over the period of time to represent a ratio.
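    The two analyses this abstract describes (converting the acquired gradient record from a function of time to a function of frequency, and locating the peak gradient as a ratio of the acquisition period) can be sketched as follows. This is an illustrative reading, not the patent's implementation: the use of an FFT for the frequency representation and all function and variable names are assumptions.

    ```python
    import numpy as np

    def analyze_gradient(signal, dt):
        """Sketch of the two methods: (1) re-represent the magnetic-field
        gradient record as a function of frequency, and (2) find the peak
        gradient and express when it occurs as a ratio of the period.
        Names and the FFT choice are assumptions, not from the patent."""
        signal = np.asarray(signal, dtype=float)
        n = signal.size
        # Time -> frequency representation of the gradient record.
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(n, d=dt)
        # Peak gradient magnitude and when it occurs within the record,
        # configured as a ratio of the portion of time to the full period.
        i_peak = int(np.argmax(np.abs(signal)))
        peak_value = signal[i_peak]
        time_ratio = (i_peak * dt) / (n * dt)
        return spectrum, freqs, peak_value, time_ratio
    ```

    For a record with a single gradient spike a quarter of the way through the acquisition window, the returned ratio would be 0.25, independent of the sampling interval.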

  12. Pin-Hole Free Perovskite Film for Solar Cells Application Prepared by Controlled Two-Step Spin-Coating Method

    NASA Astrophysics Data System (ADS)

    Bahtiar, A.; Rahmanita, S.; Inayatie, Y. D.

    2017-05-01

    The morphology of the perovskite film is a key factor in achieving high-performance perovskite solar cells. Perovskite films are commonly prepared by the two-step spin-coating method. However, pin-holes frequently form in perovskite films due to incomplete conversion of lead iodide (PbI2) into perovskite CH3NH3PbI3. Pin-holes in the perovskite film cause large hysteresis in the current-voltage curve of solar cells due to a large series resistance between the perovskite layer and the hole-transport material. Moreover, the crystal structure and grain size of the perovskite crystal, both significantly affected by film preparation, are also important parameters for achieving high-performance solar cells. We studied the effect of preparing perovskite films with controlled spin-coating parameters on the crystal structure and morphological properties of the films. We used the two-step spin-coating method with varied spinning speed, spinning time and spin-coating temperature to control the growth of the perovskite crystal, aiming to produce high-quality, pin-hole-free perovskite crystals with large grain size. All experiments were performed in air at high humidity (greater than 80%). The best crystal structure, pin-hole free and with large crystal grain size, was obtained from the film prepared at room temperature with a spinning speed of 1000 rpm for 20 seconds and annealed at 100°C for 300 seconds.

  13. Angular distribution of Pigment epithelium central limit-Inner limit of the retina Minimal Distance (PIMD), in the young not pathological optic nerve head imaged by OCT

    NASA Astrophysics Data System (ADS)

    Söderberg, Per G.; Sandberg-Melin, Camilla

    2018-02-01

    The present study aimed to elucidate the angular distribution of the Pigment epithelium central limit-Inner limit of the retina Minimal Distance measured over 2π radians in the frontal plane (PIMD-2π) in young healthy eyes. Both healthy eyes of 16 subjects aged [20;30[ years were included. In each eye, a volume of the optic nerve head (ONH) was captured three times with a TOPCON DRI OCT Triton (Japan). Each volume renders a representation of the ONH 2.8 mm along the sagittal axis resolved in 993 steps, 6 mm along the frontal axis resolved in 512 steps and 6 x mm along the longitudinal axis resolved in 256 steps. The captured volumes were transferred to custom-made software for semiautomatic segmentation of PIMD around the circumference of the ONH. The phases of iterated volumes were calibrated with cross-correlation. It was found that PIMD-2π exhibits a double hump with a small maximum superiorly, a larger maximum inferiorly, and minima in between. The measurements indicated no difference in PIMD-2π either between genders or between the dominant and non-dominant eye within subjects. The variation between eyes within subject is of the same order as the variation among subjects. The variation among volumes within an eye is substantially lower.

  14. Characteristics of silica rice husk ash from Mojogedang Karanganyar Indonesia

    NASA Astrophysics Data System (ADS)

    Suryana, R.; Iriani, Y.; Nurosyid, F.; Fasquelle, D.

    2018-05-01

    Indonesia is one of the world's most abundant rice producers. Many researchers have demonstrated that the largest component of rice husk ash (RHA) is silica. Advantages of utilizing silica as a raw material include the manufacture of ceramics, zeolite synthesis, glass fabrication, electronic insulator materials, and catalysts. The amount of silica from rice husk ash differs from region to region. Therefore, the study of silica from RHA is still promising, especially for rice grown with organic fertilizers. In this study, the rice came from Mojogedang Karanganyar, Indonesia. Rice husk was dried under solar radiation. The rice husk was then heated in two steps: the first step at a temperature of 300°C and the second step at a temperature of 1200°C, with holding times of 2 h and 1 h, respectively. Furthermore, the temperature of the second step was varied at 1400 °C and 1600 °C. This heating process produced RHA. The composition of the RHA was observed in EDAX spectra, while the morphology was observed in SEM images. The crystal structure of the RHA was determined from XRD spectra. The EDAX spectra showed that the RHA composition was dominated by the elements Si and O at all heating temperatures. SEM images showed agglomeration into larger domains as the heating temperature increased. Analysis of the XRD spectra shows that polycrystalline silica formed with significant crystal orientations at (101), (102) and (200). The intensity of the (101) peak increases significantly with increasing temperature. It is concluded that crystal growth in the (101) direction is preferred.

  15. Risk of falls in older people during fast-walking--the TASCOG study.

    PubMed

    Callisaya, M L; Blizzard, L; McGinley, J L; Srikanth, V K

    2012-07-01

    To investigate the relationship between fast-walking and falls in older people. Individuals aged 60-86 years were randomly selected from the electoral roll (n=176). Gait speed, step length, cadence and a walk ratio were recorded during preferred- and fast-walking using an instrumented walkway. Falls were recorded prospectively over 12 months. Log multinomial regression was used to estimate the relative risk of single and multiple falls associated with gait variables during fast-walking and change between preferred- and fast-walking. Covariates included age, sex, mood, physical activity, sensorimotor and cognitive measures. The risk of multiple falls was increased for those with a smaller walk ratio (shorter steps, faster cadence) during fast-walking (RR 0.92, CI 0.87, 0.97) and greater reduction in the walk ratio (smaller increase in step length, larger increase in cadence) when changing to fast-walking (RR 0.73, CI 0.63, 0.85). These gait patterns were associated with poorer physiological and cognitive function (p<0.05). A higher risk of multiple falls was also seen for those in the fastest quarter of gait speed (p=0.01) at fast-walking. A trend for better reaction time, balance, memory and physical activity for higher categories of gait speed was stronger for fallers than non-fallers (p<0.05). Tests of fast-walking may be useful in identifying older individuals at risk of multiple falls. There may be two distinct groups at risk--the frail person with short shuffling steps, and the healthy person exposed to greater risk. Copyright © 2012 Elsevier B.V. All rights reserved.
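    The walk ratio used above is conventionally computed as step length divided by cadence, so shorter steps at a faster cadence yield a smaller ratio, consistent with the abstract's parentheticals. A minimal sketch (the function name and unit choices are illustrative, not from the study):

    ```python
    def walk_ratio(step_length_m, cadence_steps_per_min):
        """Walk ratio = step length / cadence; smaller values indicate
        shorter steps taken at a faster cadence."""
        return step_length_m / cadence_steps_per_min

    # Changing from preferred- to fast-walking with only a small step-length
    # increase but a large cadence increase reduces the walk ratio -- the
    # gait pattern the study associated with multiple falls.
    preferred = walk_ratio(0.75, 105)
    fast = walk_ratio(0.78, 130)
    ```

    The difference `preferred - fast` quantifies the "reduction in the walk ratio" used as a risk marker in the regression.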

  16. Assessing dose-response effects of national essential medicine policy in China: comparison of two methods for handling data with a stepped wedge-like design and hierarchical structure.

    PubMed

    Ren, Yan; Yang, Min; Li, Qian; Pan, Jay; Chen, Fei; Li, Xiaosong; Meng, Qun

    2017-02-22

    To introduce multilevel repeated measures (RM) models and compare them with multilevel difference-in-differences (DID) models in assessing the linear relationship between the length of the policy intervention period and healthcare outcomes (dose-response effect) for data from a stepped-wedge design with a hierarchical structure. The implementation of national essential medicine policy (NEMP) in China was a stepped-wedge-like design of five time points with a hierarchical structure. Using one key healthcare outcome from the national NEMP surveillance data as an example, we illustrate how a series of multilevel DID models and one multilevel RM model can be fitted to answer some research questions on policy effects. Routinely and annually collected national data on China from 2008 to 2012. 34 506 primary healthcare facilities in 2675 counties of 31 provinces. Agreement and differences in estimates of dose-response effect and variation in such effect between the two methods on the logarithm-transformed total number of outpatient visits per facility per year (LG-OPV). The estimated dose-response effect was approximately 0.015 according to four multilevel DID models and precisely 0.012 from one multilevel RM model. Both types of model estimated an increase in LG-OPV by 2.55 times from 2009 to 2012, but 2-4.3 times larger SEs of those estimates were found by the multilevel DID models. Similar estimates of mean effects of covariates and random effects of the average LG-OPV among all levels in the example dataset were obtained by both types of model. Significant variances in the dose-response among provinces, counties and facilities were estimated, and the 'lowest' or 'highest' units by their dose-response effects were pinpointed only by the multilevel RM model. 
For examining dose-response effect based on data from multiple time points with hierarchical structure and the stepped wedge-like designs, multilevel RM models are more efficient, convenient and informative than the multilevel DID models. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  17. People with stroke spend more time in active task practice, but similar time in walking practice, when physiotherapy rehabilitation is provided in circuit classes compared to individual therapy sessions: an observational study.

    PubMed

    English, Coralie; Hillier, Susan; Kaur, Gurpreet; Hundertmark, Laura

    2014-03-01

    Do people with stroke spend more time in active task practice during circuit class therapy sessions versus individual physiotherapy sessions? Do people with stroke practise different tasks during circuit class therapy sessions versus individual physiotherapy sessions? Prospective, observational study. Twenty-nine people with stroke in inpatient rehabilitation settings. Individual therapy sessions and circuit class therapy sessions provided within a larger randomised controlled trial. Seventy-nine therapy sessions were video-recorded and the footage was analysed for time spent engaged in various categories of activity. In a subsample of 28 videos, the number of steps taken by people with stroke per therapy session was counted. Circuit class therapy sessions were of a longer duration (mean difference 38.0 minutes, 95% CI 29.9 to 46.1), and participants spent more time engaged in active task practice (mean difference 23.8 minutes, 95% CI 16.1 to 31.4) compared with individual sessions. A greater percentage of time in circuit class therapy sessions was spent practising tasks in sitting (mean difference 5.3%, 95% CI 2.4 to 8.2) and in sit-to-stand practice (mean difference 2.7%, 95% CI 1.4 to 4.1), and a lower percentage of time in walking practice (mean difference 19.1%, 95% CI 10.0 to 28.1) compared with individual sessions. Participants took an average of 371 steps (SD 418) during therapy sessions and this did not differ significantly between group and individual sessions. People with stroke spent more time in active task practice, but a similar amount of time in walking practice when physiotherapy was offered in circuit class therapy sessions versus individual therapy sessions. There is a need for effective strategies to increase the amount of walking practice during physiotherapy sessions for people after stroke. Copyright © 2014 Australian Physiotherapy Association. Published by Elsevier B.V. All rights reserved.

  18. Effects of Thickness of a Low-Temperature Buffer and Impurity Incorporation on the Characteristics of Nitrogen-polar GaN.

    PubMed

    Yang, Fann-Wei; Chen, Yu-Yu; Feng, Shih-Wei; Sun, Qian; Han, Jung

    2016-12-01

    In this study, the effects of the thickness of a low-temperature (LT) buffer and impurity incorporation on the characteristics of nitrogen (N)-polar GaN are investigated. By using either a nitridation or a thermal annealing step before the deposition of an LT buffer, three N-polar GaN samples with different LT buffer thicknesses and different impurity incorporations are prepared. It is found that the sample with the thinnest LT buffer and a nitridation step proves to be the best in terms of less impurity incorporation, strong PL intensity, high mobility, small biaxial strain, and a smooth surface. As the temperature increases from ~10 K, the apparent donor-acceptor-pair band accounts for the decreasing integrated intensity of the band-to-band emission peak. In addition, the thermal annealing of the sapphire substrates may cause more impurity incorporation around the HT-GaN/LT-GaN/sapphire interfacial regions, which in turn may result in lower carrier mobility, larger biaxial strain, a larger bandgap shift, and stronger yellow luminescence. With a nitridation step, both a thinner LT buffer and less impurity incorporation are beneficial for obtaining high-quality N-polar GaN.

  19. Space Debris and Observational Astronomy

    NASA Astrophysics Data System (ADS)

    Seitzer, Patrick

    2018-01-01

    Since the launch of Sputnik 1 in 1957, astronomers have faced an increasing number of artificial objects contaminating their images of the night sky. Currently almost 17,000 objects larger than 10 cm are tracked and have current orbits in the public catalog. Active missions are only a small fraction of these objects. Most are inactive satellites, rocket bodies, and fragments of larger objects: all space debris. Several mega-constellations are planned which will increase this number by 20% or more in low Earth orbit (LEO). In terms of observational astronomy, this population of Earth-orbiting objects has three implications: 1) the number of streaks and glints from spacecraft will only increase. There are some practical steps that can be taken to minimize the number of such streaks and glints in astronomical imaging data. 2) The risk of damage to orbiting astronomical telescopes will only increase, particularly those in LEO. 3) If you are working on a plan for an orbiting telescope project, then there are specific steps that must be taken to minimize space debris generation during the mission lifetime, and actions to safely dispose of the spacecraft at end of mission to prevent it from becoming space debris and a risk to other missions. These steps may involve sacrifices to mission performance and lifetime, but are essential in today's orbital environment.

  20. Relative and absolute reliability of the clinical version of the Narrow Path Walking Test (NPWT) under single and dual task conditions.

    PubMed

    Gimmon, Yoav; Jacob, Grinshpon; Lenoble-Hoskovec, Constanze; Büla, Christophe; Melzer, Itshak

    2013-01-01

    Decline in gait stability has been associated with increased fall risk in older adults. Reliable and clinically feasible methods of gait instability assessment are needed. This study evaluated the relative and absolute reliability and concurrent validity of the testing procedure of the clinical version of the Narrow Path Walking Test (NPWT) under single task (ST) and dual task (DT) conditions. Thirty independent community-dwelling older adults (65-87 years) were tested twice. Participants were instructed to walk within the 6-m narrow path without stepping out. Trial time, number of steps, trial velocity, number of step errors, and number of cognitive task errors were determined. Intraclass correlation coefficients (ICCs) were calculated as indices of agreement, and a graphic approach called "mountain plot" was applied to help interpret the direction and magnitude of disagreements between testing procedures. Smallest detectable change and smallest real difference (SRD) were computed to determine clinically relevant improvement at group and individual levels, respectively. Concurrent validity was assessed using the Performance Oriented Mobility Assessment Tool (POMA) and the Short Physical Performance Battery (SPPB). Test-retest agreement (ICC1,2) varied from 0.77 to 0.92 in ST and from 0.78 to 0.92 in DT conditions, with no apparent systematic differences between testing procedures demonstrated by the mountain plot graphs. The smallest detectable change and smallest real difference were small for motor task performance and larger for cognitive errors. Significant correlations were observed for trial velocity and trial time with POMA and SPPB. The present results indicate that the NPWT testing procedure is highly reliable and reproducible. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  1. Advances in Landslide Nowcasting: Evaluation of a Global and Regional Modeling Approach

    NASA Technical Reports Server (NTRS)

    Kirschbaum, Dalia Bach; Peters-Lidard, Christa; Adler, Robert; Hong, Yang; Kumar, Sujay; Lerner-Lam, Arthur

    2011-01-01

    The increasing availability of remotely sensed data offers a new opportunity to address landslide hazard assessment at larger spatial scales. A prototype global satellite-based landslide hazard algorithm has been developed to identify areas that may experience landslide activity. This system combines a calculation of static landslide susceptibility with satellite-derived rainfall estimates and uses a threshold approach to generate a set of nowcasts that classify potentially hazardous areas. A recent evaluation of this algorithm framework found that while this tool represents an important first step in larger-scale near real-time landslide hazard assessment efforts, it requires several modifications before it can be fully realized as an operational tool. This study draws upon a prior work's recommendations to develop a new approach for considering landslide susceptibility and hazard at the regional scale. This case study calculates a regional susceptibility map using remotely sensed and in situ information and a database of landslides triggered by Hurricane Mitch in 1998 over four countries in Central America. The susceptibility map is evaluated with a regional rainfall intensity-duration triggering threshold and results are compared with the global algorithm framework for the same event. Evaluation of this regional system suggests that this empirically based approach provides one plausible way to approach some of the data and resolution issues identified in the global assessment. The presented methodology is straightforward to implement, improves upon the global approach, and allows for results to be transferable between regions. The results also highlight several remaining challenges, including the empirical nature of the algorithm framework and adequate information for algorithm validation. Conclusions suggest that integrating additional triggering factors such as soil moisture may help to improve algorithm performance accuracy. 
The regional algorithm scenario represents an important step forward in advancing regional and global-scale landslide hazard assessment.

  2. Implementing a virtual community of practice for family physician training: a mixed-methods case study.

    PubMed

    Barnett, Stephen; Jones, Sandra C; Caton, Tim; Iverson, Don; Bennett, Sue; Robinson, Laura

    2014-03-12

    GP training in Australia can be professionally isolating, with trainees spread across large geographic areas, leading to problems with rural workforce retention. Virtual communities of practice (VCoPs) may provide a way of improving knowledge sharing and thus reducing professional isolation. The goal of our study was to review the usefulness of a 7-step framework for implementing a VCoP for general practitioner (GP) training and then to evaluate the usefulness of the resulting VCoP in facilitating knowledge sharing and reducing professional isolation. The case was set in an Australian general practice training region involving 55 first-term trainees (GPT1s), from January to July 2012. ConnectGPR was a secure, online community site that included standard community options such as discussion forums, blogs, newsletter broadcasts, webchats, and photo sharing. A mixed-methods case study methodology was used. Results are presented and interpreted for each step of the VCoP 7-step framework and then in terms of the outcomes of knowledge sharing and overcoming isolation. Step 1, Facilitation: Regular, personal facilitation by a group of GP trainers with a co-ordinating facilitator was an important factor in the success of ConnectGPR. Step 2, Champion and Support: Leadership and stakeholder engagement were vital. Further benefits are possible if the site is recognized as contributing to training time. Step 3, Clear Goals: Clear goals of facilitating knowledge sharing and improving connectedness helped to keep the site discussions focused. Step 4, A Broad Church: The ConnectGPR community was too narrow, focusing only on first-term trainees (GPT1s). Ideally there should be more involvement of senior trainees, trainers, and specialists. Step 5, A Supportive Environment: Facilitators maintained community standards and encouraged participation. Step 6, Measurement, Benchmarking, and Feedback: Site activity was primarily driven by centrally generated newsletter feedback.
Viewing comments by other participants helped users benchmark their own knowledge, particularly around applying guidelines. Step 7, Technology and Community: All the community tools were useful, but chat was limited and users suggested webinars in the future. A larger user base and more training may also be helpful. Time is a common barrier. Trust can be built online, which may benefit trainees who cannot attend face-to-face workshops. Knowledge sharing and isolation outcomes: 28/34 (82%) of the eligible GPT1s enrolled on ConnectGPR. Trainees shared knowledge through online chat, forums, and shared photos. In terms of knowledge needs, GPT1s rated their need for cardiovascular knowledge more highly than supervisors. Isolation was a common theme among interview respondents, and ConnectGPR users felt more supported in their general practice (13/14, 92.9%). The 7-step framework for implementation of an online community was useful. Overcoming isolation and improving connectedness through an online knowledge sharing community shows promise in GP training. Time and technology are barriers that may be overcome by training, technology, and valuable content. In a VCoP, trust can be built online. This has implications for course delivery, particularly in regional areas. VCoPs may also have a specific role assisting overseas trained doctors to interpret their medical knowledge in a new context.

  3. Shear Melting of a Colloidal Glass

    NASA Astrophysics Data System (ADS)

    Eisenmann, Christoph; Kim, Chanjoong; Mattsson, Johan; Weitz, David A.

    2010-01-01

    We use confocal microscopy to explore shear melting of colloidal glasses, which occurs at strains of ˜0.08, coinciding with a strongly non-Gaussian step size distribution. For larger strains, the particle mean square displacement increases linearly with strain and the step size distribution becomes Gaussian. The effective diffusion coefficient varies approximately linearly with shear rate, consistent with a modified Stokes-Einstein relationship in which thermal energy is replaced by shear energy and the length scale is set by the size of cooperatively moving regions consisting of ˜3 particles.
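
    The modified Stokes-Einstein relation described above can be read dimensionally as follows. This is only a sketch of the scaling argument, not the paper's exact expression: the prefactor, the definition of the shear energy, and the cooperative-region size entering the length scale are all assumptions made for illustration.

```python
from math import pi

def d_thermal(kBT, eta, a):
    """Classic Stokes-Einstein diffusion coefficient for a sphere of radius a."""
    return kBT / (6 * pi * eta * a)

def d_shear(gamma_dot, eta, a, n_coop=3):
    """Shear-melted analogue: the thermal energy is replaced by the shear
    energy eta * gamma_dot * L**3 stored in a cooperative region of size
    L = n_coop * a, so the effective diffusivity is linear in shear rate."""
    L = n_coop * a
    shear_energy = eta * gamma_dot * L**3
    return shear_energy / (6 * pi * eta * L)   # = gamma_dot * L**2 / (6*pi)
```

    Note that the viscosity cancels, leaving D proportional to the shear rate, which is the approximately linear dependence reported in the abstract.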

  4. Mississippi River nitrate loads from high frequency sensor measurements and regression-based load estimation

    USGS Publications Warehouse

    Pellerin, Brian A.; Bergamaschi, Brian A.; Gilliom, Robert J.; Crawford, Charles G.; Saraceno, John F.; Frederick, C. Paul; Downing, Bryan D.; Murphy, Jennifer C.

    2014-01-01

    Accurately quantifying nitrate (NO3–) loading from the Mississippi River is important for predicting summer hypoxia in the Gulf of Mexico and targeting nutrient reduction within the basin. Loads have historically been modeled with regression-based techniques, but recent advances with high frequency NO3– sensors allowed us to evaluate model performance relative to measured loads in the lower Mississippi River. Patterns in NO3– concentrations and loads were observed at daily to annual time steps, with considerable variability in concentration-discharge relationships over the two year study. Differences were particularly accentuated during the 2012 drought and 2013 flood, which resulted in anomalously high NO3– concentrations consistent with a large flush of stored NO3– from soil. The comparison between measured loads and modeled loads (LOADEST, Composite Method, WRTDS) showed underestimates of only 3.5% across the entire study period, but much larger differences at shorter time steps. Absolute differences in loads were typically greatest in the spring and early summer critical to Gulf hypoxia formation, with the largest differences (underestimates) for all models during the flood period of 2013. In addition to improving the accuracy and precision of monthly loads, high frequency NO3– measurements offer additional benefits not available with regression-based or other load estimation techniques.
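
    The core comparison is between summed sensor-based loads and summed model-estimated loads. A minimal sketch of that bookkeeping, with illustrative numbers rather than the study's data, might look like this (the 0.0864 factor converts mg/L times m3/s to tonnes/day):

```python
# Sketch of comparing measured (sensor-based) and modeled (regression-based)
# nitrate loads. Concentrations and discharges below are illustrative only.

def daily_load_tonnes(conc_mg_l, q_m3_s):
    """Instantaneous load: concentration * discharge * unit conversion."""
    return conc_mg_l * q_m3_s * 0.0864

def percent_bias(measured, modeled):
    """Percent difference of summed modeled loads relative to measured loads."""
    m, e = sum(measured), sum(modeled)
    return 100.0 * (e - m) / m

measured = [daily_load_tonnes(c, q) for c, q in [(1.5, 20000), (2.0, 25000)]]
modeled  = [daily_load_tonnes(c, q) for c, q in [(1.4, 20000), (1.9, 25000)]]
```

    Aggregating over a long period can hide large compensating errors at shorter time steps, which is exactly the behavior the abstract reports.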

  5. A Statistical Weather-Driven Streamflow Model: Enabling future flow predictions in data-scarce headwater streams

    NASA Astrophysics Data System (ADS)

    Rosner, A.; Letcher, B. H.; Vogel, R. M.

    2014-12-01

    Predicting streamflow in headwaters and over a broad spatial scale poses unique challenges due to limited data availability. Flow observation gages for headwater streams are less common than for larger rivers, and gages with record lengths of ten years or more are even more scarce. Thus, there is a great need for estimating streamflows in ungaged or sparsely-gaged headwaters. Further, there is often insufficient basin information to develop rainfall-runoff models that could be used to predict future flows under various climate scenarios. Headwaters in the northeastern U.S. are of particular concern to aquatic biologists, as these streams serve as essential habitat for native coldwater fish. In order to understand fish response to past or future environmental drivers, estimates of seasonal streamflow are needed. While flow data are limited, there is a wealth of data for historic weather conditions. Observed data have been modeled to interpolate a spatially continuous historic weather dataset (Maurer et al., 2002). We present a statistical model developed by pairing streamflow observations with precipitation and temperature information for the same and preceding time steps. We demonstrate this model's use to predict flow metrics at the seasonal time step. While not a physical model, this statistical model represents the weather drivers. Since this model can predict flows not directly tied to reference gages, we can generate flow estimates for historic as well as potential future conditions.
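
    A weather-driven statistical flow model of this kind can be sketched as a lagged regression. The sketch below uses synthetic data and ordinary least squares; the predictors (current and previous-season precipitation, current temperature) follow the abstract's description, but the coefficients, distributions, and seasonal structure are invented for illustration.

```python
import numpy as np

# Minimal weather-driven streamflow sketch: regress seasonal flow on
# precipitation for the same and the preceding season plus current
# temperature. All data here are synthetic.
rng = np.random.default_rng(0)
n = 40
precip = rng.gamma(2.0, 50.0, n)        # seasonal precipitation totals
temp = rng.normal(10.0, 5.0, n)         # seasonal mean temperature
flow = 0.6 * precip + 0.2 * np.roll(precip, 1) - 1.5 * temp \
       + rng.normal(0.0, 5.0, n)        # "true" flow with noise

# Design matrix: intercept, current precip, lag-1 precip, current temp.
# Drop the first season, whose lagged predecessor is undefined.
X = np.column_stack([np.ones(n - 1), precip[1:], precip[:-1], temp[1:]])
y = flow[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predicted = X @ coef
```

    Because the fitted model depends only on weather inputs, it can be driven by historic or projected climate series rather than by reference gages.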

  6. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    NASA Astrophysics Data System (ADS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F.; Neese, Frank

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. 
The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate previous implementation.

  7. Data assimilation of citizen collected information for real-time flood hazard mapping

    NASA Astrophysics Data System (ADS)

    Sayama, T.; Takara, K. T.

    2017-12-01

    Many studies of data assimilation in hydrology have focused on the integration of satellite remote sensing and in-situ monitoring data into hydrologic or land surface models. For flood prediction, recent studies have also demonstrated the assimilation of remotely sensed inundation information into flood inundation models. In actual flood disaster situations, citizen collected information, including local reports by residents and rescue teams and, more recently, tweets via social media, also contains valuable information. The main interest of this study is how to effectively use such citizen collected information for real-time flood hazard mapping. Here we propose a new data assimilation technique that builds on pre-conducted ensemble inundation simulations and updates inundation depth distributions sequentially as local data become available. The proposed method consists of two steps. The first step is a weighted average of the preliminary ensemble simulations, with weights updated by a Bayesian approach. The second step is an optimal interpolation, where the covariance matrix is calculated from the ensemble simulations. The proposed method was applied to case studies including an actual flood event. It considers two situations: a more idealized one, which assumes continuous flood inundation depth information is available at multiple locations, and a more realistic one for such a severe flood disaster, which assumes only uncertain and non-continuous information is available to be assimilated. The results show that, in the first, idealized situation, the large-scale inundation during the flooding was estimated reasonably, with RMSE < 0.4 m on average. For the second, more realistic situation, the error becomes larger (RMSE 0.5 m) and the impact of the optimal interpolation becomes comparatively less effective.
Nevertheless, the applications of the proposed data assimilation method demonstrated a high potential of this method for assimilating citizen collected information for real-time flood hazard mapping in the future.
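
    The two steps above can be sketched on a toy 1-D grid. This is a generic illustration of Bayesian ensemble reweighting followed by optimal interpolation, assuming a Gaussian observation error; the depths, grid, and error values are invented and do not come from the study.

```python
import numpy as np

# Toy two-step assimilation of one citizen depth report into three
# pre-computed ensemble inundation maps over three grid cells.
ensemble = np.array([[0.2, 0.5, 1.0],   # member 1: depths (m)
                     [0.8, 1.2, 1.6],   # member 2
                     [0.4, 0.9, 1.3]])  # member 3
obs_cell, obs_depth, obs_err = 1, 1.1, 0.2

# Step 1: Bayesian weight update from the Gaussian likelihood of the
# report under each ensemble member, then a weighted-average background.
lik = np.exp(-0.5 * ((ensemble[:, obs_cell] - obs_depth) / obs_err) ** 2)
w = lik / lik.sum()
background = w @ ensemble

# Step 2: optimal interpolation; the background error covariance is
# estimated from the ensemble spread and spreads the observation
# increment to the unobserved cells.
anom = ensemble - ensemble.mean(axis=0)
P = anom.T @ anom / (len(ensemble) - 1)
gain = P[:, obs_cell] / (P[obs_cell, obs_cell] + obs_err ** 2)
analysis = background + gain * (obs_depth - background[obs_cell])
```

    With sparse, uncertain reports, step 1 carries most of the correction, which is consistent with the abstract's finding that the optimal interpolation contributes less in the realistic case.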

  8. Brownian dynamics simulations with stiff finitely extensible nonlinear elastic-Fraenkel springs as approximations to rods in bead-rod models.

    PubMed

    Hsieh, Chih-Chen; Jain, Semant; Larson, Ronald G

    2006-01-28

    A very stiff finitely extensible nonlinear elastic (FENE)-Fraenkel spring is proposed to replace the rigid rod in the bead-rod model. This allows the adoption of a fast predictor-corrector method so that large time steps can be taken in Brownian dynamics (BD) simulations without over- or understretching the stiff springs. In contrast to the simple bead-rod model, BD simulations with beads and FENE-Fraenkel (FF) springs yield a random-walk configuration at equilibrium. We compare the simulation results of the free-draining bead-FF-spring model with those for the bead-rod model in relaxation, start-up of uniaxial extensional, and simple shear flows, and find that both methods generate nearly identical results. The computational cost per time step for a free-draining BD simulation with the proposed bead-FF-spring model is about twice as high as the traditional bead-rod model with the midpoint algorithm of Liu [J. Chem. Phys. 90, 5826 (1989)]. Nevertheless, computations with the bead-FF-spring model are as efficient as those with the bead-rod model in extensional flow because the former allows larger time steps. Moreover, the Brownian contribution to the stress for the bead-FF-spring model is isotropic and therefore simplifies the calculation of the polymer stresses. In addition, hydrodynamic interaction can more easily be incorporated into the bead-FF-spring model than into the bead-rod model since the metric force arising from the non-Cartesian coordinates used in bead-rod simulations is absent from bead-spring simulations. Finally, with our newly developed bead-FF-spring model, existing computer codes for the bead-spring models can trivially be converted to ones for effective bead-rod simulations merely by replacing the usual FENE or Cohen spring law with a FENE-Fraenkel law, and this convertibility provides a very convenient way to perform multiscale BD simulations.
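
    The spring law at the heart of this model can be sketched as follows: a Fraenkel spring that is Hookean about its natural length, wrapped in a FENE denominator that diverges as the extension approaches a maximum allowed deviation. The parameter values here are illustrative, not those used in the simulations.

```python
def fene_fraenkel_force(Q, H=400.0, Q0=1.0, dQ=0.01):
    """FENE-Fraenkel spring tension at length Q: Hookean in (Q - Q0) near
    the natural length Q0, diverging as |Q - Q0| approaches the allowed
    range dQ. A very stiff spring (large H, small dQ) approximates a
    rigid rod of length Q0, which is the substitution the paper exploits."""
    x = Q - Q0
    if abs(x) >= dQ:
        raise ValueError("spring stretched beyond its FENE limit")
    return H * x / (1.0 - (x / dQ) ** 2)
```

    Because the force stays finite only inside the band Q0 ± dQ, a predictor-corrector integrator can take large time steps without over- or understretching the spring.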

  9. Brownian dynamics simulations with stiff finitely extensible nonlinear elastic-Fraenkel springs as approximations to rods in bead-rod models

    NASA Astrophysics Data System (ADS)

    Hsieh, Chih-Chen; Jain, Semant; Larson, Ronald G.

    2006-01-01

    A very stiff finitely extensible nonlinear elastic (FENE)-Fraenkel spring is proposed to replace the rigid rod in the bead-rod model. This allows the adoption of a fast predictor-corrector method so that large time steps can be taken in Brownian dynamics (BD) simulations without over- or understretching the stiff springs. In contrast to the simple bead-rod model, BD simulations with beads and FENE-Fraenkel (FF) springs yield a random-walk configuration at equilibrium. We compare the simulation results of the free-draining bead-FF-spring model with those for the bead-rod model in relaxation, start-up of uniaxial extensional, and simple shear flows, and find that both methods generate nearly identical results. The computational cost per time step for a free-draining BD simulation with the proposed bead-FF-spring model is about twice as high as the traditional bead-rod model with the midpoint algorithm of Liu [J. Chem. Phys. 90, 5826 (1989)]. Nevertheless, computations with the bead-FF-spring model are as efficient as those with the bead-rod model in extensional flow because the former allows larger time steps. Moreover, the Brownian contribution to the stress for the bead-FF-spring model is isotropic and therefore simplifies the calculation of the polymer stresses. In addition, hydrodynamic interaction can more easily be incorporated into the bead-FF-spring model than into the bead-rod model since the metric force arising from the non-Cartesian coordinates used in bead-rod simulations is absent from bead-spring simulations. Finally, with our newly developed bead-FF-spring model, existing computer codes for the bead-spring models can trivially be converted to ones for effective bead-rod simulations merely by replacing the usual FENE or Cohen spring law with a FENE-Fraenkel law, and this convertibility provides a very convenient way to perform multiscale BD simulations.

  10. Comparison between different thickness umbrella-shaped expandable radiofrequency electrodes (SuperSlim and CoAccess): Experimental and clinical study

    PubMed Central

    KODA, MASAHIKO; TOKUNAGA, SHIHO; MATONO, TOMOMITSU; SUGIHARA, TAKAAKI; NAGAHARA, TAKAKAZU; MURAWAKI, YOSHIKAZU

    2011-01-01

    The purpose of the present study was to compare the size and configuration of the ablation zones created by SuperSlim and CoAccess electrodes, using various ablation algorithms in ex vivo bovine liver and in clinical cases. In the experimental study, we ablated explanted bovine liver using 2 types of electrodes and 4 ablation algorithms (combinations of incremental power supply, stepwise expansion and additional low-power ablation) and evaluated the ablation area and time. In the clinical study, we compared the ablation volume and the shape of the ablation zone between both electrodes in 23 hepatocellular carcinoma (HCC) cases with the best algorithm (incremental power supply, stepwise expansion and additional low-power ablation) as derived from the experimental study. In the experimental study, the ablation area and time by the CoAccess electrode were significantly greater compared to those by the SuperSlim electrode for the single-step (algorithm 1, p=0.0209 and 0.0325, respectively) and stepwise expansion algorithms (algorithm 2, p=0.0002 and <0.0001, respectively; algorithm 3, p=0.006 and 0.0407, respectively). However, differences were not significant for the additional low-power ablation algorithm. In the clinical study, the ablation volume and time in the CoAccess group were significantly larger and longer, respectively, compared to those in the SuperSlim group (p=0.0242 and 0.009, respectively). Round ablation zones were acquired in 91.7% of the CoAccess group, while irregular ablation zones were obtained in 45.5% of the SuperSlim group (p=0.0428). In conclusion, the CoAccess electrode achieves larger and more uniform ablation zones compared with the SuperSlim electrode, though it requires longer ablation times in experimental and clinical studies. PMID:22977647

  11. Experimental analysis on the dynamic wake of an actuator disc undergoing transient loads

    NASA Astrophysics Data System (ADS)

    Yu, W.; Hong, V. W.; Ferreira, C.; van Kuik, G. A. M.

    2017-10-01

    The Blade Element Momentum model, which is based on the actuator disc theory, is still the model most used for the design of open rotors. Although derived from steady cases with a fully developed wake, this approach is also applied to unsteady cases, with additional engineering corrections. This work aims to study the impact of an unsteady loading on the wake of an actuator disc. The load and flow of an actuator disc are measured in the Open Jet Facility wind tunnel of Delft University of Technology, for steady and unsteady cases. The velocity and turbulence profiles are characterized in three regions: the inner wake region, the shear layer region and the region outside the wake. For unsteady load cases, the measured velocity field shows a hysteresis effect in relation to the loading, showing differences between the cases when loading is increased and loading is decreased. The flow field also shows a transient response to the step change in loading, with either an overshoot or undershoot of the velocity in relation to the steady-state velocity. In general, a smaller reduced ramp time results in a faster velocity transient, and in turn a larger amplitude of overshoot or undershoot. Time constants analysis shows that the flow reaches the new steady-state slower for load increase than for load decrease; the time constants outside the wake are generally larger than at other radial locations for a given downstream plane; the time constants of measured velocity in the wake show radial dependence. The data are relevant for the validation of numerical models for unsteady actuator discs and wind turbines, and are made available in an open source database (see Appendix).

  12. Twice electric field poling for engineering multiperiodic Hex-PPLN microstructures

    NASA Astrophysics Data System (ADS)

    Pagliarulo, Vito; Gennari, Oriella; Rega, Romina; Mecozzi, Laura; Grilli, Simonetta; Ferraro, Pietro

    2018-05-01

    Satellite bulk ferroelectric domains were observed everywhere around the larger main inverted ferroelectric domains when a Twice Electric Field Poling (TEFP) process was applied to a z-cut lithium niobate substrate. The TEFP approach can be very advantageous for engineering multiperiodic poled microstructures in ferroelectrics: in experimental practice it is very difficult to avoid underpoling and/or overpoling when structures with different sizes are required in the same crystal. A first poling step was applied to a photoresist-patterned crystal with a 100 μm period, and then a second poling step, with a ten-times smaller periodicity of 10 μm, was accomplished on the same sample. Intriguingly, the shorter 10 μm pattern disappeared everywhere except around the larger satellite ferroelectric domains. The formation of this double periodicity in the reversed ferroelectric domains occurs easily and repeatably. We have experimentally investigated the formation of such Hex-PPLN structures by interference microscopy in digital holography (DH) modality. The reported results demonstrate the possibility of fabricating multi-periodic structures and open the way to investigating the achievement of hierarchical PPLN structures by multiple subsequent electric field poling processes.

  13. Transfer effects of step training on stepping performance in untrained directions in older adults: A randomized controlled trial.

    PubMed

    Okubo, Yoshiro; Menant, Jasmine; Udyavar, Manasa; Brodie, Matthew A; Barry, Benjamin K; Lord, Stephen R; L Sturnieks, Daina

    2017-05-01

    Although step training improves the ability to step quickly, some home-based step training systems train limited stepping directions and may cause harm by reducing stepping performance in untrained directions. This study examines the possible transfer effects of step training on stepping performance in untrained directions in older people. Fifty-four older adults were randomized into three groups: forward step training (FT); lateral plus forward step training (FLT); or no training (NT). FT and FLT participants undertook a 15-min training session involving 200 step repetitions. Prior to and post training, choice stepping reaction time and stepping kinematics in untrained, diagonal and lateral directions were assessed. Significant interactions of group and time (pre/post-assessment) were evident for the first step after training, indicating negative (delayed response time) and positive (faster peak stepping speed) transfer effects in the diagonal direction in the FT group. However, when the second to the fifth steps after training were included in the analysis, there were no significant interactions of group and time for measures in the diagonal stepping direction. Step training only in the forward direction improved stepping speed but may acutely slow response times in the untrained diagonal direction. However, this acute effect appears to dissipate after a few repeated step trials. Step training in both forward and lateral directions appears to induce no negative transfer effects in diagonal stepping. These findings suggest home-based step training systems present low risk of harm through negative transfer effects in untrained stepping directions. ANZCTR 369066. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. When larger brains do not have more neurons: increased numbers of cells are compensated by decreased average cell size across mouse individuals

    PubMed Central

    Herculano-Houzel, Suzana; Messeder, Débora J.; Fonseca-Azevedo, Karina; Pantoja, Nilma A.

    2015-01-01

    There is a strong trend toward increased brain size in mammalian evolution, with larger brains composed of more and larger neurons than smaller brains across species within each mammalian order. Does the evolution of increased numbers of brain neurons, and thus larger brain size, occur simply through the selection of individuals with more and larger neurons, and thus larger brains, within a population? That is, do individuals with larger brains also have more, and larger, neurons than individuals with smaller brains, such that allometric relationships across species are simply an extension of intraspecific scaling? Here we show that this is not the case across adult male mice of a similar age. Rather, increased numbers of neurons across individuals are accompanied by increased numbers of other cells and smaller average cell size of both types, in a trade-off that explains how increased brain mass does not necessarily ensue. Fundamental regulatory mechanisms thus must exist that tie numbers of neurons to numbers of other cells and to average cell size within individual brains. Finally, our results indicate that changes in brain size in evolution are not an extension of individual variation in numbers of neurons, but rather occur through step changes that must simultaneously increase numbers of neurons and cause cell size to increase, rather than decrease. PMID:26082686

  15. When larger brains do not have more neurons: increased numbers of cells are compensated by decreased average cell size across mouse individuals.

    PubMed

    Herculano-Houzel, Suzana; Messeder, Débora J; Fonseca-Azevedo, Karina; Pantoja, Nilma A

    2015-01-01

    There is a strong trend toward increased brain size in mammalian evolution, with larger brains composed of more and larger neurons than smaller brains across species within each mammalian order. Does the evolution of increased numbers of brain neurons, and thus larger brain size, occur simply through the selection of individuals with more and larger neurons, and thus larger brains, within a population? That is, do individuals with larger brains also have more, and larger, neurons than individuals with smaller brains, such that allometric relationships across species are simply an extension of intraspecific scaling? Here we show that this is not the case across adult male mice of a similar age. Rather, increased numbers of neurons across individuals are accompanied by increased numbers of other cells and smaller average cell size of both types, in a trade-off that explains how increased brain mass does not necessarily ensue. Fundamental regulatory mechanisms thus must exist that tie numbers of neurons to numbers of other cells and to average cell size within individual brains. Finally, our results indicate that changes in brain size in evolution are not an extension of individual variation in numbers of neurons, but rather occur through step changes that must simultaneously increase numbers of neurons and cause cell size to increase, rather than decrease.

  16. Step-by-Step Technique for Segmental Reconstruction of Reverse Hill-Sachs Lesions Using Homologous Osteochondral Allograft.

    PubMed

    Alkaduhimi, Hassanin; van den Bekerom, Michel P J; van Deurzen, Derek F P

    2017-06-01

    Posterior shoulder dislocations are accompanied by high forces and can result in an anteromedial humeral head impression fracture called a reverse Hill-Sachs lesion. This reverse Hill-Sachs lesion can result in serious complications including posttraumatic osteoarthritis, posterior dislocations, osteonecrosis, persistent joint stiffness, and loss of shoulder function. Treatment is challenging and depends on the amount of bone loss. Several techniques have been reported to describe the surgical treatment of lesions larger than 20%. However, there is still limited evidence with regard to the optimal procedure. Favorable results have been reported by performing segmental reconstruction of the reverse Hill-Sachs lesion with bone allograft. Although the procedure of segmental reconstruction has been used in several studies, its technique has not yet been described in detail. In this report we propose a step-by-step description of how to perform a segmental reconstruction of a reverse Hill-Sachs defect.

  17. Microhard MHX2420 Orbital Performance Evaluation Using RT Logic T400CS

    NASA Technical Reports Server (NTRS)

    TintoreGazulla, Oriol; Lombardi, Mark

    2012-01-01

    RT Logic allows simulation of Ground Station - satellite communications: Static tests have been successful. Dynamic tests have been performed for simple passes. Future dynamic tests are needed to simulate real orbit communications. Satellite attitude changes antenna gain. Atmospheric and rain losses need to be added. STK Plug-in will be the next step to improve the dynamic tests. There is a possibility of running longer simulations. Simulation of different losses available in the STK Plug-in. Microhard optimization: Effect of Microhard settings on the data throughput have been understood. Optimized settings improve data throughput for LEO communications. Longer hop intervals make transfer of larger packets more efficient (more time between hops in frequency). Use of FEC (Reed-Solomon) reduces the number of retransmissions for long-range or noisy communications.

  18. 3D integrated superconducting qubits

    NASA Astrophysics Data System (ADS)

    Rosenberg, D.; Kim, D.; Das, R.; Yost, D.; Gustavsson, S.; Hover, D.; Krantz, P.; Melville, A.; Racz, L.; Samach, G. O.; Weber, S. J.; Yan, F.; Yoder, J. L.; Kerman, A. J.; Oliver, W. D.

    2017-10-01

    As the field of quantum computing advances from the few-qubit stage to larger-scale processors, qubit addressability and extensibility will necessitate the use of 3D integration and packaging. While 3D integration is well-developed for commercial electronics, relatively little work has been performed to determine its compatibility with high-coherence solid-state qubits. Of particular concern, qubit coherence times can be suppressed by the requisite processing steps and close proximity of another chip. In this work, we use a flip-chip process to bond a chip with superconducting flux qubits to another chip containing structures for qubit readout and control. We demonstrate that high qubit coherence (T1, T2,echo > 20 μs) is maintained in a flip-chip geometry in the presence of galvanic, capacitive, and inductive coupling between the chips.

  19. Effects of step length and step frequency on lower-limb muscle function in human gait.

    PubMed

    Lim, Yoong Ping; Lin, Yi-Chung; Pandy, Marcus G

    2017-05-24

    The aim of this study was to quantify the effects of step length and step frequency on lower-limb muscle function in walking. Three-dimensional gait data were used in conjunction with musculoskeletal modeling techniques to evaluate muscle function over a range of walking speeds using prescribed combinations of step length and step frequency. The body was modeled as a 10-segment, 21-degree-of-freedom skeleton actuated by 54 muscle-tendon units. Lower-limb muscle forces were calculated using inverse dynamics and static optimization. We found that five muscles - GMAX, GMED, VAS, GAS, and SOL - dominated vertical support and forward progression independent of changes made to either step length or step frequency, and that, overall, changes in step length had a greater influence on lower-limb joint motion, net joint moments and muscle function than step frequency. Peak forces developed by the uniarticular hip and knee extensors, as well as the normalized fiber lengths at which these muscles developed their peak forces, correlated more closely with changes in step length than step frequency. Increasing step length resulted in larger contributions from the hip and knee extensors and smaller contributions from gravitational forces (limb posture) to vertical support. These results provide insight into why older people with weak hip and knee extensors walk more slowly by reducing step length rather than step frequency and also help to identify the key muscle groups that ought to be targeted in exercise programs designed to improve gait biomechanics in older adults. Copyright © 2017 Elsevier Ltd. All rights reserved.
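
    The static optimization step mentioned above can be sketched in miniature: distribute a known net joint moment (from inverse dynamics) across muscles by minimizing the sum of squared activations subject to the moment-balance constraint. With a quadratic cost and a single linear constraint the solution is closed-form, so no solver is needed. The muscle parameters below are illustrative, not values from the study's 54 muscle-tendon units.

```python
import numpy as np

# Toy static optimization: minimize sum(a_i**2) subject to
# sum(a_i * Fmax_i * r_i) = M. The minimum-norm solution makes each
# activation proportional to the muscle's moment-generating capacity.
Fmax = np.array([3000.0, 1500.0, 800.0])   # max isometric forces (N), illustrative
r = np.array([0.05, 0.04, 0.03])           # moment arms (m), illustrative

def static_optimization(M):
    cap = Fmax * r                          # moment capacity per unit activation
    return M * cap / np.dot(cap, cap)       # closed-form minimum-norm solution

activations = static_optimization(60.0)     # distribute a 60 N*m net moment
```

    Scaling the target moment M up, as a longer step length demands, raises every activation proportionally, which mirrors the larger extensor contributions reported for longer steps.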

  20. Inhibition of calcium oxalate monohydrate growth by citrate and the effect of the background electrolyte

    NASA Astrophysics Data System (ADS)

    Weaver, Matthew L.; Qiu, S. Roger; Hoyer, John R.; Casey, William H.; Nancollas, George H.; De Yoreo, James J.

    2007-08-01

    Pathological mineralization is a common phenomenon in a broad range of plants and animals. In humans, kidney stone formation is a well-known example that afflicts approximately 10% of the population. Of the various calcium salt phases that comprise human kidney stones, the primary component is calcium oxalate monohydrate (COM). Citrate, a naturally occurring molecule in the urinary system and a common therapeutic agent for treating stone disease, is a known inhibitor of COM. Understanding the physical mechanisms of citrate inhibition requires quantification of the effects of both background electrolytes and citrate on COM step kinetics. Here we report the results of an in situ AFM study of these effects, in which we measure the effect of the electrolytes LiCl, NaCl, KCl, RbCl, and CsCl, and the dependence of step speed on citrate concentration for a range of COM supersaturations. We find that varying the background electrolyte results in significant differences in the measured step speeds and in step morphology, with KCl clearly producing the smallest impact and NaCl the largest. The kinetic coefficient for the former is nearly three times larger than for the latter, while the steps change from smooth to highly serrated when KCl is changed to NaCl. The results on the dependence of step speed on citrate concentration show that citrate produces a dead zone whose width increases with citrate concentration, as well as a continual reduction in kinetic coefficient with increasing citrate level. We relate these results to a molecular-scale view of inhibition that invokes a combination of kink blocking and step pinning.
Furthermore, we demonstrate that the classic step-pinning model of Cabrera and Vermilyea (C-V model) does an excellent job of predicting the effect of citrate on COM step kinetics provided the model is reformulated to more realistically account for impurity adsorption, include an expression for the Gibbs-Thomson effect that is correct for all supersaturations, and take into account a reduction in kinetic coefficient through kink blocking. The detailed derivation of this reformulated C-V model is presented and the underlying materials parameters that control its impact are examined. Despite the fact that the basic C-V model was proposed nearly 50 years ago and has seen extensive theoretical treatment, this study represents the first quantitative and molecular scale experimental confirmation for any crystal system.
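    The combined kink-blocking/step-pinning picture described above can be sketched numerically: kink blocking lowers the kinetic coefficient at all supersaturations, while step pinning imposes a dead zone below a threshold supersaturation. This is a minimal sketch with a linear growth law and illustrative parameter values, not the reformulated C-V model derived in the paper:

```python
import numpy as np

def step_speed(sigma, beta0, sigma_d, theta):
    """Illustrative step speed v(sigma): kink blocking scales the kinetic
    coefficient beta0 by (1 - theta), where theta is the fractional kink
    coverage by impurity; step pinning gives a dead zone below sigma_d."""
    beta = beta0 * (1.0 - theta)       # kink blocking
    v = beta * (sigma - sigma_d)       # linear growth law above the dead zone
    return np.where(sigma > sigma_d, v, 0.0)

sigma = np.linspace(0.0, 1.0, 11)
v = step_speed(sigma, beta0=1.0, sigma_d=0.3, theta=0.4)
```

    Raising the citrate level in this picture both widens the dead zone (larger sigma_d) and lowers the effective kinetic coefficient (larger theta), matching the two experimental trends reported above.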

  1. Metabolic cost and mechanical work for the step-to-step transition in walking after successful total ankle arthroplasty.

    PubMed

    Doets, H Cornelis; Vergouw, David; Veeger, H E J Dirkjan; Houdijk, Han

    2009-12-01

    The aim of this study was to investigate whether impaired ankle function after total ankle arthroplasty (TAA) affects the mechanical work during the step-to-step transition and the metabolic cost of walking. Respiratory and force plate data were recorded in 11 patients and 11 healthy controls while they walked barefoot at a fixed walking speed (FWS, 1.25 m/s) and at their self-selected speed (SWS). At FWS, the metabolic cost of transport was 28% higher for the TAA group, but at SWS there was no significant increase. During the step-to-step transition, the positive mechanical work generated by the trailing TAA leg was lower and the negative mechanical work in the leading intact leg was larger. Despite the increase in mechanical work dissipation during double support, no significant differences in total mechanical work were found over a complete stride, which might be a result of methodological limitations in calculating mechanical work. Nevertheless, mechanical work dissipated during the step-to-step transition at FWS correlated significantly with the metabolic cost of transport (r = 0.54). It was concluded that patients after successful TAA still experience impaired lower-leg function, which contributes to an increase in mechanical energy dissipation during the step-to-step transition and to an increase in the metabolic demand of walking. 2009 Elsevier B.V. All rights reserved.
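    The step-to-step transition work discussed above is typically computed with the individual-limbs method: the instantaneous power each limb delivers to the centre of mass is the dot product of that limb's ground reaction force with the COM velocity, integrated separately over its positive (generation) and negative (dissipation) portions. A minimal sketch on a synthetic trace, not the study's actual processing pipeline:

```python
import numpy as np

def limb_com_work(F, v_com, dt):
    """Positive and negative work done by one limb on the centre of mass.
    F and v_com are (n_samples, 3) arrays in N and m/s; dt is the sample
    period in s. Uses a simple rectangle-rule integration."""
    power = np.sum(F * v_com, axis=1)              # instantaneous power, W
    w_pos = float(np.sum(power[power > 0]) * dt)   # generation
    w_neg = float(np.sum(power[power < 0]) * dt)   # dissipation
    return w_pos, w_neg

# Toy trace: constant vertical force while the COM moves up, then down.
F = np.tile([0.0, 0.0, 800.0], (4, 1))
v = np.array([[0, 0, 0.1], [0, 0, 0.1], [0, 0, -0.1], [0, 0, -0.1]], dtype=float)
w_pos, w_neg = limb_com_work(F, v, dt=0.01)
```

    In the study's terms, w_pos for the trailing limb and w_neg for the leading limb during double support quantify the transition cost.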

  2. 7 CFR 3052.520 - Major program determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Auditors § 3052.520 Major program determination. (a) General. The auditor shall use a risk-based approach... section shall be followed. (b) Step 1. (1) The auditor shall identify the larger Federal programs, which... providing loans significantly affects the number or size of Type A programs, the auditor shall consider this...

  3. Vocational Education: Options and Directions. Working Paper 18.

    ERIC Educational Resources Information Center

    Stokes, Helen; Holdsworth, Roger

    This paper presents practical options for school development in relation to vocational education, ranging from small, specific steps to larger whole-school change. Chapter 1 describes the context. Chapter 2 highlights three interlocked imperatives to attain this objective: a comprehensive and well-structured educational program that deals…

  4. 'Fishing' for Alternatives to Mountaintop Mining in Southern West Virginia

    EPA Science Inventory

    Mountaintop removal mining (MTR) is a major industry in southern West Virginia with many detrimental effects for small to mid-sized streams, and interest in alternative, sustainable industries is on the rise. As a first step in a larger effort to assess the value of sport fisheri...

  5. Stepping Stones to Leadership: Districts Forge a Clear Path for Aspiring Principals

    ERIC Educational Resources Information Center

    Burrows-McCabe, Amy

    2014-01-01

    This article describes a comprehensive strategy for developing a larger corps of effective principals, "The Principal Pipeline Initiative," launched by the Wallace Foundation in 2011. Its purpose is to strengthen school leadership by documenting and evaluating leadership development in six urban districts (Charlotte-Mecklenburg…

  6. A Particle Module for the PLUTO Code. I. An Implementation of the MHD–PIC Equations

    NASA Astrophysics Data System (ADS)

    Mignone, A.; Bodo, G.; Vaidya, B.; Mattia, G.

    2018-05-01

    We describe an implementation of a particle physics module available for the PLUTO code appropriate for the dynamical evolution of a plasma consisting of a thermal fluid and a nonthermal component represented by relativistic charged particles or cosmic rays (CRs). While the fluid is approached using standard numerical schemes for magnetohydrodynamics, CR particles are treated kinetically using conventional Particle-In-Cell (PIC) techniques. The module can be used either to describe test-particle motion in the fluid electromagnetic field or to solve the fully coupled magnetohydrodynamics (MHD)–PIC system of equations with particle backreaction on the fluid as originally introduced by Bai et al. Particle backreaction on the fluid is included in the form of momentum–energy feedback and by introducing the CR-induced Hall term in Ohm’s law. The hybrid MHD–PIC module can be employed to study CR kinetic effects on scales larger than the (ion) skin depth provided that the Larmor gyration scale is properly resolved. When applicable, this formulation avoids resolving microscopic scales, offering substantial computational savings with respect to PIC simulations. We present a fully conservative formulation that is second-order accurate in time and space, and extends to either the Runge–Kutta (RK) or the corner transport upwind time-stepping schemes (for the fluid), while a standard Boris integrator is employed for the particles. For highly energetic relativistic CRs and in order to overcome the time-step restriction, a novel subcycling strategy that retains second-order accuracy in time is presented. Numerical benchmarks and applications including Bell instability, diffusive shock acceleration, and test-particle acceleration in reconnecting layers are discussed.
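    The standard Boris integrator named above advances each particle's velocity with a half electric kick, a norm-preserving magnetic rotation, and a second half kick, which is why it conserves kinetic energy exactly in a pure magnetic field. A non-relativistic sketch (the PLUTO module itself handles relativistic CRs and the coupled feedback terms):

```python
import numpy as np

def boris_push(v, E, B, qm, dt):
    """One Boris velocity update (non-relativistic). qm = q/m."""
    v_minus = v + 0.5 * qm * dt * E            # first half electric kick
    t = 0.5 * qm * dt * B                      # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)   # magnetic rotation...
    v_plus = v_minus + np.cross(v_prime, s)    # ...completed (norm-preserving)
    return v_plus + 0.5 * qm * dt * E          # second half electric kick

# Gyration in a uniform B field: the speed is conserved to rounding error.
v = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    v = boris_push(v, np.zeros(3), np.array([0.0, 0.0, 1.0]), qm=1.0, dt=0.1)
```

    The Larmor gyration must still be resolved by dt, which is the resolution requirement the abstract states for the hybrid MHD-PIC scheme.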

  7. The Role of Moist Processes in the Intrinsic Predictability of Indian Ocean Cyclones

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taraphdar, Sourav; Mukhopadhyay, P.; Leung, Lai-Yung R.

    The role of moist processes and the possibility of error cascade from cloud-scale processes affecting the intrinsic predictable time scale of a high-resolution convection-permitting model within the environment of tropical cyclones (TCs) over the Indian region are investigated. Consistent with past studies of extra-tropical cyclones, it is demonstrated that moist processes play a major role in forecast error growth, which may ultimately limit the intrinsic predictability of the TCs. Small errors in the initial conditions may grow rapidly and cascade from smaller scales to larger scales through strong diabatic heating and nonlinearities associated with moist convection. Results from a suite of twin perturbation experiments for four tropical cyclones suggest that the error growth is significantly higher in convection-permitting simulations at 3.3 km resolution than in simulations at 3.3 km and 10 km resolution with parameterized convection. Convective parameterizations with prescribed convective time scales typically longer than the model time step allow the effects of microphysical tendencies to average out, so convection responds to a smoother dynamical forcing. Without convective parameterizations, the finer-scale instabilities resolved at 3.3 km resolution and the stronger vertical motion that results from the cloud microphysical parameterizations removing super-saturation at each model time step can ultimately feed the error growth in convection-permitting simulations. This implies that careful considerations and/or improvements in cloud parameterizations are needed if numerical predictions are to be improved through increased model resolution. Rapid upscale error growth from convective scales may ultimately limit the intrinsic mesoscale predictability of the TCs, which further supports the need for probabilistic forecasts of these events, even at the mesoscales.

  8. One-step in situ synthesis of graphene–TiO{sub 2} nanorod hybrid composites with enhanced photocatalytic activity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Mingxuan, E-mail: mingxuansun@sues.edu.cn; Li, Weibin; Sun, Shanfu

    2015-01-15

    Chemically bonded graphene/TiO{sub 2} nanorod hybrid composites with superior dispersity were synthesized by a one-step in situ hydrothermal method using graphene oxide (GO) and TiO{sub 2} (P25) as the starting materials. The as-prepared samples were characterized by XRD, XPS, TEM, FE-SEM, EDX, Raman, N{sub 2} adsorption, and UV–vis DRS techniques. Enhanced light absorption and a red shift of the absorption edge were observed for the composites in ultraviolet–visible diffuse reflectance spectroscopy (UV–vis DRS). Photocatalytic activity was evaluated by the photodegradation of methylene blue under visible light irradiation. The graphene/TiO{sub 2} nanorod hybrid composite photocatalysts showed enhanced photocatalytic performance, 3.7 times larger than that of pristine TiO{sub 2} nanorods. This work demonstrated that the synthesis of TiO{sub 2} nanorods with simultaneous conversion of GO to graphene, without using reducing agents, is a rapid, direct and clean approach to fabricate chemically bonded graphene/TiO{sub 2} nanorod hybrid composites with enhanced photocatalytic performance.

  9. Ionic Current Measurements in the Squid Giant Axon Membrane

    PubMed Central

    Cole, Kenneth S.; Moore, John W.

    1960-01-01

    The concepts, experiments, and interpretations of ionic current measurements after a step change of the squid axon membrane potential require the potential to be constant for the duration and the membrane area measured. An experimental approach to this ideal has been developed. Electrometer, operational, and control amplifiers produce the step potential between internal micropipette and external potential electrodes within 40 microseconds and a few millivolts. With an internal current electrode effective resistance of 2 ohm cm.2, the membrane potential and current may be constant within a few millivolts and 10 per cent out to near the electrode ends. The maximum membrane current patterns of the best axons are several times larger but of the type described by Cole and analyzed by Hodgkin and Huxley when the change of potential is adequately controlled. The occasional obvious distortions are attributed to the marginal adequacy of potential control to be expected from the characteristics of the current electrodes and the axon. Improvements are expected only to increase stability and accuracy. No reason has been found either to question the qualitative characteristics of the early measurements or to so discredit the analyses made of them. PMID:13694548

  10. Laboratory Formation of Fullerenes from PAHs: Top-down Interstellar Chemistry

    NASA Astrophysics Data System (ADS)

    Zhen, Junfeng; Castellanos, Pablo; Paardekooper, Daniel M.; Linnartz, Harold; Tielens, Alexander G. G. M.

    2014-12-01

    Interstellar molecules are thought to build up in the shielded environment of molecular clouds or in the envelope of evolved stars. This follows many sequential reaction steps of atoms and simple molecules in the gas phase and/or on (icy) grain surfaces. However, these chemical routes are highly inefficient for larger species in the tenuous environment of space as many steps are involved and, indeed, models fail to explain the observed high abundances. This is definitely the case for the C60 fullerene, recently identified as one of the most complex molecules in the interstellar medium. Observations have shown that, in some photodissociation regions, its abundance increases close to strong UV-sources. In this Letter we report laboratory findings in which C60 formation can be explained by characterizing the photochemical evolution of large polycyclic aromatic hydrocarbons (PAHs). Sequential H losses lead to fully dehydrogenated PAHs and subsequent losses of C2 units convert graphene into cages. Our results present for the first time experimental evidence that PAHs in excess of 60 C-atoms efficiently photo-isomerize to buckminsterfullerene, C60. These laboratory studies also attest to the importance of top-down synthesis routes for chemical complexity in space.

  11. Reinforcement of integrin-mediated T-Lymphocyte adhesion by TNF-induced Inside-out Signaling

    NASA Astrophysics Data System (ADS)

    Li, Qian; Huth, Steven; Adam, Dieter; Selhuber-Unkel, Christine

    2016-07-01

    Integrin-mediated leukocyte adhesion to endothelial cells is a crucial step in immunity against pathogens. Whereas the outside-in signaling pathway in response to the pro-inflammatory cytokine tumour necrosis factor (TNF) has already been studied in detail, little knowledge exists about a supposed TNF-mediated inside-out signaling pathway. In contrast to the outside-in signaling pathway, which relies on the TNF-induced upregulation of surface molecules on endothelium, inside-out signaling should also be present in an endothelium-free environment. Using single-cell force spectroscopy, we show here that stimulating Jurkat cells with TNF significantly reinforces their adhesion to fibronectin in a biomimetic in vitro assay for cell-surface contact times of about 1.5 seconds, whereas for larger contact times the effect disappears. Analysis of single-molecule ruptures further demonstrates that TNF strengthens sub-cellular single rupture events at short cell-surface contact times. Hence, our results provide quantitative evidence for the significant impact of TNF-induced inside-out signaling in the T-lymphocyte initial adhesion machinery.

  12. Precipitation pathways for ferrihydrite formation in acidic solutions

    DOE PAGES

    Zhu, Mengqiang; Khalid, Syed; Frandsen, Cathrine; ...

    2015-10-03

    In this study, iron oxides and oxyhydroxides form via Fe^3+ hydrolysis and polymerization in many aqueous environments, but the pathway from Fe^3+ monomers to oligomers and then to solid-phase nuclei is unknown. In this work, using combined X-ray, UV–vis, and Mössbauer spectroscopic approaches, we were able to identify and quantify the long-sought ferric speciation over time during ferric oxyhydroxide formation in partially neutralized ferric nitrate solutions ([Fe^3+] = 0.2 M, 1.8 < pH < 3). Results demonstrate that Fe exists mainly as Fe(H2O)6^3+, μ-oxo aquo dimers, and ferrihydrite, and that with time the μ-oxo dimer decreases while the other two species increase in concentration. No larger Fe oligomers were detected. Given that the structure of the μ-oxo dimer is incompatible with those of all Fe oxides and oxyhydroxides, our results suggest that reconfiguration of the μ-oxo dimer structure occurs prior to further condensation leading up to the nucleation of ferrihydrite. This structural reconfiguration is likely the rate-limiting step in the nucleation process.

  13. Particle model of a cylindrical inductively coupled ion source

    NASA Astrophysics Data System (ADS)

    Ippolito, N. D.; Taccogna, F.; Minelli, P.; Cavenago, M.; Veltri, P.

    2017-08-01

    In spite of the wide use of RF sources, a complete understanding of the mechanisms regulating the RF coupling of the plasma is still lacking, so self-consistent simulations of the physics involved are highly desirable. For this reason we are developing a 2.5D fully kinetic Particle-In-Cell Monte-Carlo-Collision (PIC-MCC) model of a cylindrical ICP-RF source, keeping the time step of the simulation small enough to resolve the plasma frequency scale. The grid cell dimension is currently about seven times larger than the average Debye length because of the large computational demand of the code; it will be scaled down in the next phase of development. The filling gas is xenon, chosen to minimize the time spent in the MCC collision module during this first stage of development. The results presented here are preliminary, with the code already showing good robustness. The final goal is the modeling of the NIO1 (Negative Ion Optimization phase 1) source, operating in Padua at Consorzio RFX.

  14. Differentiation between solid-ankle cushioned heel and energy storage and return prosthetic foot based on step-to-step transition cost.

    PubMed

    Wezenberg, Daphne; Cutti, Andrea G; Bruno, Antonino; Houdijk, Han

    2014-01-01

    Decreased push-off power by the prosthetic foot and inadequate roll-over shape of the foot have been shown to increase the energy dissipated during the step-to-step transition in human walking. The aim of this study was to determine whether energy storage and return (ESAR) feet are able to reduce the mechanical energy dissipated during the step-to-step transition. Fifteen males with a unilateral lower-limb amputation walked with their prescribed ESAR foot (Vari-Flex, Ossur; Reykjavik, Iceland) and with a solid-ankle cushioned heel foot (SACH) (1D10, Ottobock; Duderstadt, Germany), while ground reaction forces and kinematics were recorded. The positive mechanical work on the center of mass performed by the trailing prosthetic limb was larger (33%, p = 0.01) and the negative work performed by the leading intact limb was lower (13%, p = 0.04) when walking with the ESAR foot compared with the SACH foot. The reduced step-to-step transition cost coincided with a higher mechanical push-off power generated by the ESAR foot and an extended forward progression of the center of pressure under the prosthetic ESAR foot. Results can explain the proposed improvement in walking economy with this kind of energy storing and return prosthetic foot.

  15. Fluid flow and convective transport of solutes within the intervertebral disc.

    PubMed

    Ferguson, Stephen J; Ito, Keita; Nolte, Lutz P

    2004-02-01

    Previous experimental and analytical studies of solute transport in the intervertebral disc have demonstrated that for small molecules diffusive transport alone fulfils the nutritional needs of disc cells. It has been often suggested that fluid flow into and within the disc may enhance the transport of larger molecules. The goal of the study was to predict the influence of load-induced interstitial fluid flow on mass transport in the intervertebral disc. An iterative procedure was used to predict the convective transport of physiologically relevant molecules within the disc. An axisymmetric, poroelastic finite-element structural model of the disc was developed. The diurnal loading was divided into discrete time steps. At each time step, the fluid flow within the disc due to compression or swelling was calculated. A sequentially coupled diffusion/convection model was then employed to calculate solute transport, with a constant concentration of solute being provided at the vascularised endplates and outer annulus. Loading was simulated for a complete diurnal cycle, and the relative convective and diffusive transport was compared for solutes with molecular weights ranging from 400 Da to 40 kDa. Consistent with previous studies, fluid flow did not enhance the transport of low-weight solutes. During swelling, interstitial fluid flow increased the unidirectional penetration of large solutes by approximately 100%. Due to the bi-directional temporal nature of disc loading, however, the net effect of convective transport over a full diurnal cycle was more limited (30% increase). Further study is required to determine the significance of large solutes and the timing of their delivery for disc physiology.
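    The sequentially coupled diffusion/convection calculation described above can be illustrated in one dimension with an explicit update: upwind differencing for the advective (fluid-flow) term and central differencing for diffusion. This is a hedged sketch with periodic boundaries and made-up parameters, not the study's axisymmetric poroelastic finite-element model:

```python
import numpy as np

def advect_diffuse_step(c, u, D, dx, dt):
    """One explicit step of 1-D advection-diffusion with periodic boundaries.
    Stability requires dt <= dx**2 / (2*D) and u*dt/dx <= 1 (for u >= 0)."""
    adv = -u * (c - np.roll(c, 1)) / dx                           # upwind, u >= 0
    dif = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2  # central
    return c + dt * (adv + dif)

# A solute pulse spreads and drifts downstream; total solute is conserved.
c = np.zeros(100)
c[50] = 1.0
for _ in range(200):
    c = advect_diffuse_step(c, u=0.5, D=0.1, dx=1.0, dt=0.5)
```

    In the disc model, u itself comes from the poroelastic solution at each diurnal time step, which is what couples loading to transport.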

  16. Three-Dimensional Reconstruction of Cloud-to-Ground Lightning Using High-Speed Video and VHF Broadband Interferometer

    NASA Astrophysics Data System (ADS)

    Li, Yun; Qiu, Shi; Shi, Lihua; Huang, Zhengyu; Wang, Tao; Duan, Yantao

    2017-12-01

    The time-resolved three-dimensional (3-D) spatial reconstruction of lightning channels using high-speed video (HSV) images and VHF broadband interferometer (BITF) data is first presented in this paper. Because VHF and optical radiation in the step formation process occur with a time separation of no more than 1 μs, observations by BITF and HSV at two different sites make it possible to reconstruct the time-resolved 3-D channel of lightning. With the proposed procedures for 3-D reconstruction of leader channels, dart leaders as well as stepped leaders with complex multiple branches can be well reconstructed. The differences between 2-D and 3-D speeds of leader channels are analyzed by comparing the development of leader channels in 2-D and 3-D space. Since a return stroke (RS) usually follows the path of the preceding leader channel, the 3-D speeds of the return strokes are first estimated by combining the 3-D structure of the preceding leaders with HSV image sequences. For the fourth RS, the ratio of the 3-D to 2-D RS speed increases with height, and the largest ratio can reach 2.03, larger than the value for triggered lightning reported by Idone. Since BITF can detect lightning radiation in a 360° view, correlated BITF and HSV observations give a higher 3-D detection probability than dual-station HSV observations, which helps to capture more events and deepen understanding of the lightning process.

  17. CDGPS-Based Relative Navigation for Multiple Spacecraft

    NASA Technical Reports Server (NTRS)

    Mitchell, Megan Leigh

    2004-01-01

    This thesis investigates the use of Carrier-phase Differential GPS (CDGPS) in relative navigation filters for formation flying spacecraft. This work analyzes the relationship between the Extended Kalman Filter (EKF) design parameters and the resulting estimation accuracies, and in particular, the effect of the process and measurement noises on the semimajor axis error. This analysis clearly demonstrates that CDGPS-based relative navigation Kalman filters yield good estimation performance without satisfying the strong correlation property that previous work had associated with "good" navigation filters. Several examples are presented to show that the Kalman filter can be forced to create solutions with stronger correlations, but these always result in larger semimajor axis errors. These linear and nonlinear simulations also demonstrated the crucial role of the process noise in determining the semimajor axis knowledge. More sophisticated nonlinear models were included to reduce the propagation error in the estimator, but for long time steps and large separations, the EKF, which only uses a linearized covariance propagation, yielded very poor performance. In contrast, the CDGPS-based Unscented Kalman relative navigation Filter (UKF) handled the dynamic and measurement nonlinearities much better and yielded far superior performance than the EKF. The UKF produced good estimates for scenarios with long baselines and time steps for which the EKF would diverge rapidly. A hardware-in-the-loop testbed that is compatible with the Spirent Simulator at NASA GSFC was developed to provide a very flexible and robust capability for demonstrating CDGPS technologies in closed-loop. This extended previous work to implement the decentralized relative navigation algorithms in real time.
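    The UKF's advantage described above comes from the unscented transform: instead of linearizing the dynamics and measurements, it propagates a small set of sigma points through the full nonlinearity and re-estimates the mean and covariance from them. A generic sketch with the standard scaled sigma-point weights, not the thesis's specific filter:

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate N(mean, cov) through a nonlinearity f using 2n+1 sigma
    points with the standard scaled weights (Julier-Uhlmann form)."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)           # matrix square root
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wm[0] = lam / (n + lam)
    wc = wm.copy()
    wc[0] += 1.0 - alpha**2 + beta
    Y = np.array([f(p) for p in pts])
    y_mean = wm @ Y
    y_cov = sum(w * np.outer(y - y_mean, y - y_mean) for w, y in zip(wc, Y))
    return y_mean, y_cov

# For a linear map the transform is exact: mean and covariance are recovered.
m, P = np.array([1.0, 2.0]), np.diag([0.5, 0.2])
ym, yP = unscented_transform(m, P, lambda x: x)
```

    Because the sigma points sample the covariance directly, no Jacobian of the orbital dynamics is needed, which is why the UKF tolerates the long time steps and large baselines that break the EKF's linearized covariance propagation.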

  18. Automated lithocell

    NASA Astrophysics Data System (ADS)

    Englisch, Andreas; Deuter, Armin

    1990-06-01

    Integration and automation have gained more and more ground in modern IC manufacturing. It is difficult to calculate directly the profit these investments yield. On the other hand, the demands on man, machine and technology have increased enormously of late; it is not difficult to see that only by means of integration and automation can these demands be met. Some salient points: the complexity and costs incurred by the equipment and processes have risen significantly; owing to the reduction of all dimensions, the tolerances within which the various process steps have to be carried out have become smaller and smaller, and adherence to these tolerances more and more difficult; and the cycle time has become more and more important, both for the development and control of new processes and, to a great extent, for a rapid and reliable supply to the customer. In order that the products be competitive under these conditions, all sorts of costs have to be reduced and the yield has to be maximized. Therefore, computer-aided control of the equipment and the process, combined with automatic data collection and real-time SPC (statistical process control), has become absolutely necessary for successful IC manufacturing. Human errors must be eliminated from the execution of the various process steps by automation. The work time set free in this way makes it possible for human creativity to be employed on a larger scale in stabilizing the processes. Besides, computer-aided equipment control can ensure optimal utilization of the equipment round the clock.

  19. Transient kinetics of the rapid shape change of unstirred human blood platelets stimulated with ADP.

    PubMed Central

    Deranleau, D A; Dubler, D; Rothen, C; Lüscher, E F

    1982-01-01

    Unstirred (isotropic) suspensions of human blood platelets stimulated with ADP in a stopped-flow laser turbidimeter exhibit a distinct extinction maximum during the course of the classical rapid conversion of initially smooth, flat discoid cells to smaller-bodied spiny spheres. This implies the existence of a transient intermediate having a larger average light-scattering cross section (extinction coefficient) than either the disc or the spiny sphere. Monophasic extinction increases reaching the same final value were observed when either discoid or spiny-sphere platelets were converted to smooth spheres by treatment with chlorpromazine, and sphering of discoid cells was accompanied by a larger total extinction change than the retraction of pseudopods by already spherical cells. These and other results suggest that the ADP-induced transient state represents platelets that are approximately as "spherical" as the irregular spiny sphere but lack the characteristic long pseudopods and as a consequence are larger bodied. Fitting the ADP progress curves to the series reaction A → B → C by means of the light-scattering equivalent of the Beer-Lambert law yielded scattering cross sections consistent with this explanation. The rate constants for the two reaction steps were identical, indicating that ADP activation corresponds to a continuous random (Poisson) process with successive apparent states "disc," "sphere," and "spiny sphere," whose individual probabilities are determined by a single rate-limiting step. PMID:6961409
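    The series reaction with identical rate constants fitted above is exactly the Poisson picture the authors invoke: the three populations are the first terms of a Poisson distribution in kt. A worked sketch with an arbitrary illustrative rate constant:

```python
import numpy as np

def series_abc(t, k):
    """Populations of A -> B -> C when both steps share rate constant k:
    A = exp(-k*t), B = k*t*exp(-k*t), C = 1 - A - B (Poisson terms P0, P1)."""
    a = np.exp(-k * t)
    b = k * t * np.exp(-k * t)
    return a, b, 1.0 - a - b

t = np.linspace(0.0, 5.0, 501)
a, b, c = series_abc(t, k=1.0)   # the intermediate B peaks at t = 1/k
```

    The transient extinction maximum in the stopped-flow traces corresponds to the peak of the intermediate population B.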

  20. Objective Assessment of Fall Risk in Parkinson's Disease Using a Body-Fixed Sensor Worn for 3 Days

    PubMed Central

    Weiss, Aner; Herman, Talia; Giladi, Nir; Hausdorff, Jeffrey M.

    2014-01-01

    Background Patients with Parkinson's disease (PD) suffer from a high fall risk. Previous approaches for evaluating fall risk are based on self-report or testing at a given time point and may, therefore, be insufficient to optimally capture fall risk. We tested, for the first time, whether metrics derived from 3-day continuous recordings are associated with fall risk in PD. Methods and Materials 107 patients (Hoehn & Yahr Stage: 2.6±0.7) wore a small, body-fixed sensor (3D accelerometer) on the lower back for 3 days. Walking quantity (e.g., steps per 3 days) and quality (e.g., frequency-derived measures of gait variability) were determined. Subjects were classified as fallers or non-fallers based on fall history. Subjects were also followed for one year to evaluate predictors of the transition from non-faller to faller. Results The 3-day acceleration-derived measures were significantly different between fallers and non-fallers and were significantly correlated with previously validated measures of fall risk. Walking quantity was similar in the two groups. In contrast, the fallers walked with higher step-to-step variability, e.g., the anterior-posterior width of the dominant frequency was larger (p = 0.012) in the fallers (0.78±0.17 Hz) compared to the non-fallers (0.71±0.07 Hz). Among subjects who reported no falls in the year prior to testing, sensor-derived measures predicted the time to first fall (p = 0.0034), whereas many traditional measures did not. Cox regression analysis showed that anterior-posterior width was significantly (p = 0.0039) associated with time to fall during the follow-up period, even after adjusting for traditional measures. Conclusions/Significance These findings indicate that a body-fixed sensor worn continuously can evaluate fall risk in PD. This sensor-based approach was able to identify the transition from non-faller to faller, whereas many traditional metrics were not. This approach may facilitate earlier detection of fall risk and may, in the future, help reduce the high costs associated with falls. PMID:24801889

  1. Parallel 3D Multi-Stage Simulation of a Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Turner, Mark G.; Topp, David A.

    1998-01-01

A 3D multistage simulation of each component of a modern GE Turbofan engine has been made. An axisymmetric view of this engine is presented in the document. This includes a fan, booster rig, high pressure compressor rig, high pressure turbine rig and a low pressure turbine rig. In the near future, all components will be run in a single calculation for a solution of 49 blade rows. The simulation exploits two levels of parallelism: each blade row is run in parallel, and each blade row grid is decomposed into several domains that are run in parallel. 20 processors are used for the 4 blade row analysis. The average passage approach developed by John Adamczyk at NASA Lewis Research Center has been further developed and parallelized. This is APNASA Version A. It is a Navier-Stokes solver using a 4-stage explicit Runge-Kutta time marching scheme with variable time steps and residual smoothing for convergence acceleration. It has an implicit k-ε turbulence model which uses an ADI solver to factor the matrix. Between 50 and 100 explicit time steps are solved before a blade row body force is calculated and exchanged with the other blade rows. This outer iteration has been coined a "flip." Efforts have been made to make the solver scale linearly with the number of blade rows. Enough flips are run (between 50 and 200) that the solution in the entire machine is no longer changing. The k-ε equations are generally solved every other explicit time step. One of the key requirements in the development of the parallel code was to make the parallel solution exactly (bit for bit) match the serial solution. This has helped isolate many small parallel bugs and guarantee that the parallelization was done correctly. The domain decomposition is done only in the axial direction, since the number of points axially is much larger than in the other two directions. This code uses MPI for message passing. The parallel speed-up of the solver portion (no I/O or body force calculation) is reported for a grid with 227 points axially.
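The 4-stage explicit Runge-Kutta marching described above can be sketched on a model problem. This is a generic Jameson-style scheme with the classic stage coefficients 1/4, 1/3, 1/2, 1, which are an assumption here; the abstract does not give APNASA's actual coefficients.

```python
# Generic sketch of a Jameson-style 4-stage explicit Runge-Kutta time
# marching scheme, as widely used in explicit flow solvers. The stage
# coefficients are the classic 1/4, 1/3, 1/2, 1 set (assumed, not from
# the APNASA source).
ALPHAS = (0.25, 1.0 / 3.0, 0.5, 1.0)

def rk4_stage_step(u, residual, dt):
    """Advance one step: u_k = u_0 + alpha_k * dt * R(u_{k-1}), k = 1..4."""
    u0 = u
    for alpha in ALPHAS:
        u = u0 + alpha * dt * residual(u)
    return u

# March the model problem du/dt = -u toward its steady state u = 0,
# standing in for driving a flow residual to zero.
u = 1.0
for _ in range(100):
    u = rk4_stage_step(u, lambda v: -v, dt=0.5)
```

On a linear problem these coefficients reproduce the fourth-order Taylor polynomial of the exponential, so a single step with dt = 0.5 matches exp(-0.5) to within a few parts in ten thousand.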

  2. Feature Tracking for High Speed AFM Imaging of Biopolymers.

    PubMed

    Hartman, Brett; Andersson, Sean B

    2018-03-31

    The scanning speed of atomic force microscopes continues to advance with some current commercial microscopes achieving on the order of one frame per second and at least one reaching 10 frames per second. Despite the success of these instruments, even higher frame rates are needed with scan ranges larger than are currently achievable. Moreover, there is a significant installed base of slower instruments that would benefit from algorithmic approaches to increasing their frame rate without requiring significant hardware modifications. In this paper, we present an experimental demonstration of high speed scanning on an existing, non-high speed instrument, through the use of a feedback-based, feature-tracking algorithm that reduces imaging time by focusing on features of interest to reduce the total imaging area. Experiments on both circular and square gratings, as well as silicon steps and DNA strands show a reduction in imaging time by a factor of 3-12 over raster scanning, depending on the parameters chosen.

  3. A semi-automated Raman micro-spectroscopy method for morphological and chemical characterizations of microplastic litter.

    PubMed

    L, Frère; I, Paul-Pont; J, Moreau; P, Soudant; C, Lambert; A, Huvet; E, Rinnert

    2016-12-15

Every step of microplastic analysis (collection, extraction and characterization) is time-consuming, representing an obstacle to the implementation of large-scale monitoring. This study proposes a semi-automated Raman micro-spectroscopy method coupled to static image analysis that allows the screening of a large quantity of microplastics in a time-effective way with minimal machine-operator intervention. The method was validated using 103 particles collected at the sea surface spiked with 7 standard plastics: morphological and chemical characterization of the particles was performed in less than 3 h. The method was then applied to a larger environmental sample (n = 962 particles). The identification rate was 75% and decreased significantly as a function of particle size. Microplastics represented 71% of the identified particles and significant size differences were observed: polystyrene was mainly found in the 2-5 mm range (59%), polyethylene in the 1-2 mm range (40%) and polypropylene in the 0.335-1 mm range (42%). Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Time-varying impedance of the human ankle in the sagittal and frontal planes during straight walk and turning steps.

    PubMed

    Ficanha, Evandro M; Ribeiro, Guilherme A; Knop, Lauren; Rastgaar, Mo

    2017-07-01

This paper describes the methods and experiment protocols for estimating human ankle impedance during turning and straight-line walking. The ankle impedance of two human subjects during the stance phase of walking was estimated in both dorsiflexion-plantarflexion (DP) and inversion-eversion (IE). The impedance was estimated about 8 axes of rotation of the human ankle, combining different amounts of DP and IE rotation and differentiating between positive and negative rotations, at 5 instants of the stance length (SL): specifically, at 10%, 30%, 50%, 70% and 90% of the SL. The ankle impedance showed great variability across time and across the axes of rotation, with consistently larger stiffness and damping in DP than in IE. When comparing straight walking and turning, the main differences were in damping at 50%, 70%, and 90% of the SL, with an increase in damping about all axes of rotation during turning.

  5. A Method for Large Eddy Simulation of Acoustic Combustion Instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles; Moin, Parviz

    2002-11-01

A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low Mach number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustic combustion instabilities, since these flows are typically at low Mach number, and the acoustic frequencies of interest are usually low. Both of these characteristics suggest the use of larger time steps than those allowed by an acoustic CFL condition. The turbulent combustion model used is the Combined Conserved Scalar/Level Set Flamelet model of Duchamp de Lageneste and Pitsch for partially premixed combustion. Comparison of LES results to the experiments of Besson et al. will be presented.
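The efficiency argument above can be made concrete with a back-of-the-envelope comparison: at low Mach number, the acoustic CFL limit (set by the wave speed u + c) is far more restrictive than the convective limit (set by u) that an implicit scheme can step at. All numbers below are illustrative assumptions, not values from the paper.

```python
# Largest stable explicit time step under a CFL condition. At low Mach
# number the acoustic limit (wave speed u + c) is far more restrictive
# than the convective limit (speed u) that an implicit scheme can target.
# All numbers are illustrative assumptions, not values from the paper.
def cfl_dt(dx, speed, cfl=1.0):
    return cfl * dx / speed

dx = 1e-3     # grid spacing in metres (assumed)
u = 3.4       # convective flow speed, m/s
c = 340.0     # speed of sound, m/s  ->  Mach number u / c = 0.01

dt_acoustic = cfl_dt(dx, u + c)   # limit for an explicit compressible scheme
dt_convective = cfl_dt(dx, u)     # scale an implicit scheme can step at

ratio = dt_convective / dt_acoustic   # (u + c) / u, about 101 at Mach 0.01
```

At Mach 0.01 the implicit scheme can thus take time steps roughly two orders of magnitude larger, which is the source of the efficiency gain the abstract describes.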

  6. Review of the potential of optical technologies for cancer diagnosis in neurosurgery: a step toward intraoperative neurophotonics

    PubMed Central

    Vasefi, Fartash; MacKinnon, Nicholas; Farkas, Daniel L.; Kateb, Babak

    2016-01-01

Advances in image-guided therapy enable physicians to obtain real-time information on neurological disorders such as brain tumors to improve resection accuracy. Image guidance data include the location, size, shape, type, and extent of tumors. Recent technological advances in neurophotonic engineering have enabled the development of techniques for minimally invasive neurosurgery. Incorporation of these methods in intraoperative imaging decreases surgical procedure time and allows neurosurgeons to find remaining or hidden tumor or epileptic lesions. This facilitates more complete resection and improved topology information for postsurgical therapy (i.e., radiation). We review the clinical application of recent advances in neurophotonic technologies including Raman spectroscopy, thermal imaging, optical coherence tomography, and fluorescence spectroscopy, highlighting the importance of these technologies in live intraoperative tissue mapping during neurosurgery. While these technologies need further validation in larger clinical trials, they show remarkable promise in their ability to help surgeons to better visualize the areas of abnormality and enable safe and successful removal of malignancies. PMID:28042588

  7. An electric generator using living Torpedo electric organs controlled by fluid pressure-based alternative nervous systems

    PubMed Central

    Tanaka, Yo; Funano, Shun-ichi; Nishizawa, Yohei; Kamamichi, Norihiro; Nishinaka, Masahiro; Kitamori, Takehiko

    2016-01-01

Direct electric power generation using biological functions has become a research focus due to its low cost and cleanliness. Unlike major approaches using glucose fuels or microbial fuel cells (MFCs), we present a generation method with intrinsically high energy-conversion efficiency and generation at arbitrary timing, using the living electric organs of Torpedo (electric rays), which are serially integrated electrocytes converting ATP into electric energy. We developed alternative nervous systems that use fluid pressure to stimulate the electrocytes with a neurotransmitter, acetylcholine (ACh), and demonstrated electric generation. Maximum voltage and current were 1.5 V and 0.64 mA, respectively, with a duration of a few seconds. We also demonstrated energy accumulation in a capacitor. The current was far larger than that obtainable from cells other than electrocytes (~pA level). The generation ability was confirmed over repeated cycles and also after preservation for 1 day. This is a first step toward ATP-based energy-harvesting devices. PMID:27241817

  8. An electric generator using living Torpedo electric organs controlled by fluid pressure-based alternative nervous systems

    NASA Astrophysics Data System (ADS)

    Tanaka, Yo; Funano, Shun-Ichi; Nishizawa, Yohei; Kamamichi, Norihiro; Nishinaka, Masahiro; Kitamori, Takehiko

    2016-05-01

Direct electric power generation using biological functions has become a research focus due to its low cost and cleanliness. Unlike major approaches using glucose fuels or microbial fuel cells (MFCs), we present a generation method with intrinsically high energy-conversion efficiency and generation at arbitrary timing, using the living electric organs of Torpedo (electric rays), which are serially integrated electrocytes converting ATP into electric energy. We developed alternative nervous systems that use fluid pressure to stimulate the electrocytes with a neurotransmitter, acetylcholine (ACh), and demonstrated electric generation. Maximum voltage and current were 1.5 V and 0.64 mA, respectively, with a duration of a few seconds. We also demonstrated energy accumulation in a capacitor. The current was far larger than that obtainable from cells other than electrocytes (~pA level). The generation ability was confirmed over repeated cycles and also after preservation for 1 day. This is a first step toward ATP-based energy-harvesting devices.

  9. Design of Magnetic Gelatine/Silica Nanocomposites by Nanoemulsification: Encapsulation versus in Situ Growth of Iron Oxide Colloids

    PubMed Central

    Allouche, Joachim; Chanéac, Corinne; Brayner, Roberta; Boissière, Michel; Coradin, Thibaud

    2014-01-01

    The design of magnetic nanoparticles by incorporation of iron oxide colloids within gelatine/silica hybrid nanoparticles has been performed for the first time through a nanoemulsion route using the encapsulation of pre-formed magnetite nanocrystals and the in situ precipitation of ferrous/ferric ions. The first method leads to bi-continuous hybrid nanocomposites containing a limited amount of well-dispersed magnetite colloids. In contrast, the second approach allows the formation of gelatine-silica core-shell nanostructures incorporating larger amounts of agglomerated iron oxide colloids. Both magnetic nanocomposites exhibit similar superparamagnetic behaviors. Whereas nanocomposites obtained via an in situ approach show a strong tendency to aggregate in solution, the encapsulation route allows further surface modification of the magnetic nanocomposites, leading to quaternary gold/iron oxide/silica/gelatine nanoparticles. Hence, such a first-time rational combination of nano-emulsion, nanocrystallization and sol-gel chemistry allows the elaboration of multi-component functional nanomaterials. This constitutes a step forward in the design of more complex bio-nanoplatforms. PMID:28344239

  10. Design of Magnetic Gelatine/Silica Nanocomposites by Nanoemulsification: Encapsulation versus in Situ Growth of Iron Oxide Colloids.

    PubMed

    Allouche, Joachim; Chanéac, Corinne; Brayner, Roberta; Boissière, Michel; Coradin, Thibaud

    2014-07-31

    The design of magnetic nanoparticles by incorporation of iron oxide colloids within gelatine/silica hybrid nanoparticles has been performed for the first time through a nanoemulsion route using the encapsulation of pre-formed magnetite nanocrystals and the in situ precipitation of ferrous/ferric ions. The first method leads to bi-continuous hybrid nanocomposites containing a limited amount of well-dispersed magnetite colloids. In contrast, the second approach allows the formation of gelatine-silica core-shell nanostructures incorporating larger amounts of agglomerated iron oxide colloids. Both magnetic nanocomposites exhibit similar superparamagnetic behaviors. Whereas nanocomposites obtained via an in situ approach show a strong tendency to aggregate in solution, the encapsulation route allows further surface modification of the magnetic nanocomposites, leading to quaternary gold/iron oxide/silica/gelatine nanoparticles. Hence, such a first-time rational combination of nano-emulsion, nanocrystallization and sol-gel chemistry allows the elaboration of multi-component functional nanomaterials. This constitutes a step forward in the design of more complex bio-nanoplatforms.

  11. Fast and Efficient Fragment-Based Lead Generation by Fully Automated Processing and Analysis of Ligand-Observed NMR Binding Data.

    PubMed

    Peng, Chen; Frommlet, Alexandra; Perez, Manuel; Cobas, Carlos; Blechschmidt, Anke; Dominguez, Santiago; Lingel, Andreas

    2016-04-14

    NMR binding assays are routinely applied in hit finding and validation during early stages of drug discovery, particularly for fragment-based lead generation. To this end, compound libraries are screened by ligand-observed NMR experiments such as STD, T1ρ, and CPMG to identify molecules interacting with a target. The analysis of a high number of complex spectra is performed largely manually and therefore represents a limiting step in hit generation campaigns. Here we report a novel integrated computational procedure that processes and analyzes ligand-observed proton and fluorine NMR binding data in a fully automated fashion. A performance evaluation comparing automated and manual analysis results on (19)F- and (1)H-detected data sets shows that the program delivers robust, high-confidence hit lists in a fraction of the time needed for manual analysis and greatly facilitates visual inspection of the associated NMR spectra. These features enable considerably higher throughput, the assessment of larger libraries, and shorter turn-around times.

  12. Calorimetric evidence for two distinct molecular packing arrangements in stable glasses of indomethacin.

    PubMed

    Kearns, Kenneth L; Swallen, Stephen F; Ediger, M D; Sun, Ye; Yu, Lian

    2009-02-12

Indomethacin glasses of varying stabilities were prepared by physical vapor deposition onto substrates at 265 K. Enthalpy relaxation and the mobility onset temperature were assessed with differential scanning calorimetry (DSC). Quasi-isothermal temperature-modulated DSC was used to measure the reversing heat capacity during annealing above the glass transition temperature Tg. At deposition rates near 8 Å/s, scanning DSC shows two enthalpy relaxation peaks and quasi-isothermal DSC shows a two-step change in the reversing heat capacity. We attribute these features to two distinct local packing structures in the vapor-deposited glass, and this interpretation is supported by the strong correlation between the two calorimetric signatures of the glass to liquid transformation. At lower deposition rates, a larger fraction of the sample is prepared in the more stable local packing. The transformation of the vapor-deposited glasses into the supercooled liquid above Tg is exceedingly slow, as much as 4500 times slower than the structural relaxation time of the liquid.

  13. Studies of numerical algorithms for gyrokinetics and the effects of shaping on plasma turbulence

    NASA Astrophysics Data System (ADS)

    Belli, Emily Ann

Advanced numerical algorithms for gyrokinetic simulations are explored for more effective studies of plasma turbulent transport. The gyrokinetic equations describe the dynamics of particles in 5-dimensional phase space, averaging over the fast gyromotion, and provide a foundation for studying plasma microturbulence in fusion devices and in astrophysical plasmas. Several algorithms for Eulerian/continuum gyrokinetic solvers are compared. An iterative implicit scheme based on numerical approximations of the plasma response is developed. This method reduces the long time needed to set up implicit arrays, yet retains large-time-step advantages similar to those of a fully implicit method. Various model preconditioners and iteration schemes, including Krylov-based solvers, are explored. An Alternating Direction Implicit algorithm is also studied and is surprisingly found to yield a severe stability restriction on the time step. Overall, an iterative Krylov algorithm might be the best approach for extending core tokamak gyrokinetic simulations to edge kinetic formulations and may be particularly useful for studies of large-scale E×B shear effects. The effects of flux surface shape on the gyrokinetic stability and transport of tokamak plasmas are studied using the nonlinear GS2 gyrokinetic code with analytic equilibria based on interpolations of representative JET-like shapes. High shaping is found to be a stabilizing influence on both the linear ITG instability and nonlinear ITG turbulence. A scaling of the heat flux with elongation of χ ~ κ^-1.5 or κ^-2 (depending on the triangularity) is observed, which is consistent with previous gyrofluid simulations. Thus, the GS2 turbulence simulations explain a significant fraction, but not all, of the empirical elongation scaling.
The remainder of the scaling may come from (1) the edge boundary conditions for core turbulence, and (2) the larger Dimits nonlinear critical temperature gradient shift due to the enhancement of zonal flows with shaping, which is observed with the GS2 simulations. Finally, a local linear trial function-based gyrokinetic code is developed to aid in fast scoping studies of gyrokinetic linear stability. This code is successfully benchmarked with the full GS2 code in the collisionless, electrostatic limit, as well as in the more general electromagnetic description with higher-order Hermite basis functions.
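As a rough illustration of the iterative-implicit idea, a backward-Euler step can be solved with a Krylov method. The sketch below uses plain conjugate gradient on a toy 1D diffusion operator; this is a generic stand-in, not the plasma-response scheme of the thesis.

```python
import numpy as np

# Backward-Euler time step solved with a Krylov method (conjugate gradient).
# The 1D diffusion operator is purely illustrative; it stands in for the
# implicit gyrokinetic response matrix, which this sketch does not model.

def cg_solve(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient for symmetric positive-definite A (a Krylov method)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n, dt = 50, 0.1
# Tridiagonal 1D diffusion matrix with Dirichlet ends (SPD).
D = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
u = np.random.default_rng(0).standard_normal(n)

# Implicit step (I + dt * D) u_new = u_old: no explicit-stability limit on dt.
u_new = cg_solve(np.eye(n) + dt * D, u)
```

The point of such a scheme is that dt is chosen by accuracy rather than by an explicit stability bound, with the Krylov iteration doing the work each step.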

  14. The Relaxation of Vicinal (001) with ZigZag [110] Steps

    NASA Astrophysics Data System (ADS)

    Hawkins, Micah; Hamouda, Ajmi Bh; González-Cabrera, Diego Luis; Einstein, Theodore L.

    2012-02-01

    This talk presents a kinetic Monte Carlo study of the relaxation dynamics of [110] steps on a vicinal (001) simple cubic surface. This system is interesting because [110] steps have different elementary excitation energetics and favor step diffusion more than close-packed [100] steps. In this talk we show how this leads to relaxation dynamics showing greater fluctuations on a shorter time scale for [110] steps as well as 2-bond breaking processes being rate determining in contrast to 3-bond breaking processes for [100] steps. The existence of a steady state is shown via the convergence of terrace width distributions at times much longer than the relaxation time. In this time regime excellent fits to the modified generalized Wigner distribution (as well as to the Berry-Robnik model when steps can overlap) were obtained. Also, step-position correlation function data show diffusion-limited increase for small distances along the step as well as greater average step displacement for zigzag steps compared to straight steps for somewhat longer distances along the step. Work supported by NSF-MRSEC Grant DMR 05-20471 as well as a DOE-CMCSN Grant.
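The rate-determining role of bond-breaking energetics can be illustrated with a minimal kinetic Monte Carlo event selection (BKL/Gillespie style). The event list and bond energy below are illustrative assumptions, not parameters from the study.

```python
import math
import random

# Minimal kinetic Monte Carlo event selection (BKL / Gillespie style).
# Arrhenius-like rates: an event breaking n bonds has rate exp(-n * E / kT).
# The event list and bond energy are illustrative assumptions, not values
# from the study.

E_BOND = 0.3  # bond-breaking energy, eV (assumed)

def kmc_step(events, kT, rng=random.random):
    """events: list of (name, bonds_broken). Returns (chosen event, waiting time)."""
    rates = [math.exp(-n * E_BOND / kT) for _, n in events]
    total = sum(rates)
    # Choose an event with probability proportional to its rate.
    target = rng() * total
    acc = 0.0
    for (name, _), rate in zip(events, rates):
        acc += rate
        if target <= acc:
            chosen = name
            break
    # Advance the clock by an exponentially distributed waiting time.
    waiting = -math.log(rng()) / total
    return chosen, waiting

# At kT = 0.05 eV, a 2-bond event outpaces a 3-bond event by exp(6) ≈ 400:
events = [("kink_detach", 2), ("step_detach", 3)]
chosen, dt = kmc_step(events, kT=0.05)
```

With these assumed energetics the 2-bond channel dominates by a factor of roughly exp(6), mirroring the abstract's observation that 2-bond breaking is rate-determining for [110] steps while 3-bond breaking governs [100] steps.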

  15. A system for accurate and automated injection of hyperpolarized substrate with minimal dead time and scalable volumes over a large range

    NASA Astrophysics Data System (ADS)

    Reynolds, Steven; Bucur, Adriana; Port, Michael; Alizadeh, Tooba; Kazan, Samira M.; Tozer, Gillian M.; Paley, Martyn N. J.

    2014-02-01

Over recent years hyperpolarization by dissolution dynamic nuclear polarization has become an established technique for studying metabolism in vivo in animal models. Temporal signal plots obtained from the injected metabolite and daughter products, e.g. pyruvate and lactate, can be fitted to compartmental models to estimate kinetic rate constants. Modeling and physiological parameter estimation can be made more robust by consistent and reproducible injections through automation. An injection system previously developed by us was limited in the injectable volume to between 0.6 and 2.4 ml and injection was delayed due to a required syringe filling step. An improved MR-compatible injector system has been developed that measures the pH of injected substrate, uses flow control to reduce dead volume within the injection cannula and can be operated over a larger volume range. The delay time to injection has been minimized by removing the syringe filling step by use of a peristaltic pump. For 100 μl to 10.000 ml, the volume range typically used for mice to rabbits, the average delivered volume was 97.8% of the demand volume. The standard deviation of delivered volumes was 7 μl for 100 μl and 20 μl for 10.000 ml demand volumes (mean S.D. was 9 μl in this range). In three repeat injections through a fixed 0.96 mm O.D. tube the coefficient of variation for the area under the curve was 2%. For in vivo injections of hyperpolarized pyruvate in tumor-bearing rats, signal was first detected in the input femoral vein cannula at 3-4 s post-injection trigger signal and at 9-12 s in tumor tissue. The pH of the injected pyruvate was 7.1 ± 0.3 (mean ± S.D., n = 10). For small injection volumes, e.g. less than 100 μl, the internal diameter of the tubing contained within the peristaltic pump could be reduced to improve accuracy. Larger injection volumes are limited only by the size of the receiving vessel connected to the pump.

  16. A matched filter method for ground-based sub-noise detection of terrestrial extrasolar planets in eclipsing binaries: application to CM Draconis.

    PubMed

    Jenkins, J M; Doyle, L R; Cullers, D K

    1996-02-01

    The photometric detection of extrasolar planets by transits in eclipsing binary systems can be significantly improved by cross-correlating the observational light curves with synthetic models of possible planetary transit features, essentially a matched filter approach. We demonstrate the utility and application of this transit detection algorithm for ground-based detections of terrestrial-sized (Earth-to-Neptune radii) extrasolar planets in the dwarf M-star eclipsing binary system CM Draconis. Preliminary photometric observational data of this system demonstrate that the observational noise is well characterized as white and Gaussian at the observational time steps required for precision photometric measurements. Depending on planet formation scenarios, terrestrial-sized planets may form quite close to this low-luminosity system. We demonstrate, for example, that planets as small as 1.4 Earth radii with periods on the order of a few months in the CM Draconis system could be detected at the 99.9% confidence level in less than a year using 1-m class telescopes from the ground. This result contradicts commonly held assumptions limiting present ground-based efforts to, at best, detections of gas giant planets after several years of observation. This method can be readily extended to a number of other larger star systems with the utilization of larger telescopes and longer observing times. Its extension to spacecraft observations should also allow the determination of the presence of terrestrial-sized planets in nearly 100 other known eclipsing binary systems.
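The matched-filter statistic described above amounts to cross-correlating the light curve with a transit template and normalizing by the noise. The sketch below injects a single box-shaped transit into white Gaussian noise; the depth, duration, and noise level are illustrative, not CM Draconis values.

```python
import numpy as np

# Matched-filter transit search: correlate the light curve with a box-shaped
# dip template and normalize so that pure white Gaussian noise gives a
# unit-variance statistic. All parameter values are illustrative.
rng = np.random.default_rng(1)

n, depth, width, sigma = 2000, 5e-3, 40, 2e-3
t0 = 1200                                  # true transit start (sample index)

flux = np.ones(n) + rng.normal(0.0, sigma, n)
flux[t0:t0 + width] -= depth               # inject the transit

signal = flux - flux.mean()
template = -np.ones(width)                 # matched template for a dip
stat = np.correlate(signal, template, mode="valid") / (sigma * np.sqrt(width))

best = int(np.argmax(stat))                # recovered transit location
snr = float(stat[best])                    # expected ~ depth*sqrt(width)/sigma
```

A single event of this depth already yields a statistic of roughly 16 against a unit-variance noise floor; the sub-noise aspect of the method comes from coherently combining many such transits at a trial period, which this single-event sketch omits.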

  17. A matched filter method for ground-based sub-noise detection of terrestrial extrasolar planets in eclipsing binaries: application to CM Draconis

    NASA Technical Reports Server (NTRS)

    Jenkins, J. M.; Doyle, L. R.; Cullers, D. K.

    1996-01-01

    The photometric detection of extrasolar planets by transits in eclipsing binary systems can be significantly improved by cross-correlating the observational light curves with synthetic models of possible planetary transit features, essentially a matched filter approach. We demonstrate the utility and application of this transit detection algorithm for ground-based detections of terrestrial-sized (Earth-to-Neptune radii) extrasolar planets in the dwarf M-star eclipsing binary system CM Draconis. Preliminary photometric observational data of this system demonstrate that the observational noise is well characterized as white and Gaussian at the observational time steps required for precision photometric measurements. Depending on planet formation scenarios, terrestrial-sized planets may form quite close to this low-luminosity system. We demonstrate, for example, that planets as small as 1.4 Earth radii with periods on the order of a few months in the CM Draconis system could be detected at the 99.9% confidence level in less than a year using 1-m class telescopes from the ground. This result contradicts commonly held assumptions limiting present ground-based efforts to, at best, detections of gas giant planets after several years of observation. This method can be readily extended to a number of other larger star systems with the utilization of larger telescopes and longer observing times. Its extension to spacecraft observations should also allow the determination of the presence of terrestrial-sized planets in nearly 100 other known eclipsing binary systems.

  18. The effects of strength and power training on single-step balance recovery in older adults: a preliminary study.

    PubMed

    Pamukoff, Derek N; Haakonssen, Eric C; Zaccaria, Joseph A; Madigan, Michael L; Miller, Michael E; Marsh, Anthony P

    2014-01-01

    Improving muscle strength and power may mitigate the effects of sarcopenia, but it is unknown if this improves an older adult's ability to recover from a large postural perturbation. Forward tripping is prevalent in older adults and lateral falls are important due to risk of hip fracture. We used a forward and a lateral single-step balance recovery task to examine the effects of strength training (ST) or power (PT) training on single-step balance recovery in older adults. Twenty older adults (70.8±4.4 years, eleven male) were randomly assigned to either a 6-week (three times/week) lower extremity ST or PT intervention. Maximum forward (FLean(max)) and lateral (LLean(max)) lean angle and strength and power in knee extension and leg press were assessed at baseline and follow-up. Fifteen participants completed the study (ST =7, PT =8). Least squares means (95% CI) for ΔFLean(max) (ST: +4.1° [0.7, 7.5]; PT: +0.6° [-2.5, 3.8]) and ΔLLean(max) (ST: +2.2° [0.4, 4.1]; PT: +2.6° [0.9, 4.4]) indicated no differences between groups following training. In exploratory post hoc analyses collapsed by group, ΔFLean(max) was +2.4° (0.1, 4.7) and ΔLLean(max) was +2.4° (1.2, 3.6). These improvements on the balance recovery tasks ranged from ~15%-30%. The results of this preliminary study suggest that resistance training may improve balance recovery performance, and that, in this small sample, PT did not lead to larger improvements in single-step balance recovery compared to ST.

  19. Estimating heterotrophic respiration at large scales: challenges, approaches, and next steps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond-Lamberty, Benjamin; Epron, Daniel; Harden, Jennifer W.

    2016-06-27

Heterotrophic respiration (HR), the aerobic and anaerobic processes mineralizing organic matter, is a key carbon flux but one impossible to measure at scales significantly larger than small experimental plots. This impedes our ability to understand carbon and nutrient cycles, benchmark models, or reliably upscale point measurements. Given that a new generation of highly mechanistic, genomic-specific global models is not imminent, we suggest that a useful step to improve this situation would be the development of "Decomposition Functional Types" (DFTs). Analogous to plant functional types (PFTs), DFTs would abstract and capture important differences in HR metabolism and flux dynamics, allowing models to efficiently group and vary these characteristics across space and time. We argue that DFTs should be initially informed by top-down expert opinion, but ultimately developed using bottom-up, data-driven analyses, and provide specific examples of potential dependent and independent variables that could be used. We present and discuss an example clustering analysis to show how model-produced annual HR can be broken into distinct groups associated with global variability in biotic and abiotic factors, and demonstrate that these groups are distinct from already-existing PFTs. A similar analysis, incorporating observational data, could form a basis for future DFTs. Finally, we suggest next steps and critical priorities: collection and synthesis of existing data; more in-depth analyses combining open data with high-performance computing; rigorous testing of analytical results; and planning by the global modeling community for decoupling decomposition from fixed site data. These are all critical steps to build a foundation for DFTs in global models, thus providing the ecological and climate change communities with robust, scalable estimates of HR at large scales.

  20. Estimating heterotrophic respiration at large scales: Challenges, approaches, and next steps

    DOE PAGES

    Bond-Lamberty, Ben; Epron, Daniel; Harden, Jennifer; ...

    2016-06-27

Heterotrophic respiration (HR), the aerobic and anaerobic processes mineralizing organic matter, is a key carbon flux but one impossible to measure at scales significantly larger than small experimental plots. This impedes our ability to understand carbon and nutrient cycles, benchmark models, or reliably upscale point measurements. Given that a new generation of highly mechanistic, genomic-specific global models is not imminent, we suggest that a useful step to improve this situation would be the development of Decomposition Functional Types (DFTs). Analogous to plant functional types (PFTs), DFTs would abstract and capture important differences in HR metabolism and flux dynamics, allowing modelers and experimentalists to efficiently group and vary these characteristics across space and time. We argue that DFTs should be initially informed by top-down expert opinion, but ultimately developed using bottom-up, data-driven analyses, and provide specific examples of potential dependent and independent variables that could be used. We present an example clustering analysis to show how annual HR can be broken into distinct groups associated with global variability in biotic and abiotic factors, and demonstrate that these groups are distinct from (but complementary to) already-existing PFTs. A similar analysis incorporating observational data could form the basis for future DFTs. Finally, we suggest next steps and critical priorities: collection and synthesis of existing data; more in-depth analyses combining open data with rigorous testing of analytical results; using point measurements and realistic forcing variables to constrain process-based models; and planning by the global modeling community for decoupling decomposition from fixed site data. These are all critical steps to build a foundation for DFTs in global models, thus providing the ecological and climate change communities with robust, scalable estimates of HR.

  1. Estimating heterotrophic respiration at large scales: Challenges, approaches, and next steps

    USGS Publications Warehouse

    Bond-Lamberty, Ben; Epron, Daniel; Harden, Jennifer W.; Harmon, Mark E.; Hoffman, Forrest; Kumar, Jitendra; McGuire, Anthony David; Vargas, Rodrigo

    2016-01-01

    Heterotrophic respiration (HR), the aerobic and anaerobic processes mineralizing organic matter, is a key carbon flux but one impossible to measure at scales significantly larger than small experimental plots. This impedes our ability to understand carbon and nutrient cycles, benchmark models, or reliably upscale point measurements. Given that a new generation of highly mechanistic, genomic-specific global models is not imminent, we suggest that a useful step to improve this situation would be the development of “Decomposition Functional Types” (DFTs). Analogous to plant functional types (PFTs), DFTs would abstract and capture important differences in HR metabolism and flux dynamics, allowing modelers and experimentalists to efficiently group and vary these characteristics across space and time. We argue that DFTs should be initially informed by top-down expert opinion, but ultimately developed using bottom-up, data-driven analyses, and provide specific examples of potential dependent and independent variables that could be used. We present an example clustering analysis to show how annual HR can be broken into distinct groups associated with global variability in biotic and abiotic factors, and demonstrate that these groups are distinct from (but complementary to) already-existing PFTs. A similar analysis incorporating observational data could form the basis for future DFTs. Finally, we suggest next steps and critical priorities: collection and synthesis of existing data; more in-depth analyses combining open data with rigorous testing of analytical results; using point measurements and realistic forcing variables to constrain process-based models; and planning by the global modeling community for decoupling decomposition from fixed site data. These are all critical steps to build a foundation for DFTs in global models, thus providing the ecological and climate change communities with robust, scalable estimates of HR.
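The clustering analysis the authors describe can be sketched with a plain k-means grouping of sites by abiotic drivers. The two synthetic variables below, standing in for something like mean temperature and moisture, are invented for illustration; no real data are used.

```python
import numpy as np

# Toy version of the clustering analysis: group "sites" by two abiotic
# variables (invented stand-ins for temperature and moisture) to find
# candidate Decomposition Functional Types. All data are synthetic.
rng = np.random.default_rng(42)
X = np.vstack([
    rng.normal([5.0, 0.2], 0.3, size=(50, 2)),    # cold / dry sites
    rng.normal([25.0, 0.8], 0.3, size=(50, 2)),   # warm / wet sites
])

def kmeans(X, k, iters=10):
    """Plain Lloyd's algorithm with deterministic, spread-out initial centers."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        # Recompute each centroid (assumes no cluster goes empty, true here).
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(X, k=2)   # recovers the two site groups
```

A real analysis would use many more driver variables and a cluster count chosen by validation, but the mechanics are the same: each recovered group is a candidate DFT whose members share HR-relevant conditions.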

  2. Beam angle optimization for intensity-modulated radiation therapy using a guided pattern search method

    NASA Astrophysics Data System (ADS)

    Rocha, Humberto; Dias, Joana M.; Ferreira, Brígida C.; Lopes, Maria C.

    2013-05-01

    Generally, the inverse planning of radiation therapy consists mainly of fluence optimization. Beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and may influence the quality of IMRT plans, both by enhancing organ sparing and by improving tumor coverage. However, in clinical practice, beam directions are most often still selected manually by the treatment planner without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam’s-eye-view dose ray tracing metrics within a pattern search method framework in the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require only a few function evaluations per iteration to progress and converge, and are better able to avoid local entrapment. The pattern search framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search, since it allows searches away from the neighborhood of the current iterate. Beam’s-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework to furnish a priori knowledge of the problem, so that directions with larger dosimetric scores are tested first. A set of clinical head-and-neck tumor cases treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach in the optimization of the BAO problem.
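
    The poll-and-shrink mechanics described above can be sketched generically. The following is a minimal coordinate-search illustration under stated assumptions (a generic objective, polling along coordinate directions, a halving mesh); it is not the authors' BAO implementation, which additionally uses a beam's-eye-view-guided search step:

```python
def pattern_search(f, x, step=1.0, tol=1e-6):
    """Minimal poll-step pattern search: try +/- step along each coordinate,
    keep any improving neighbor, and halve the mesh when none improves."""
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5  # refine the mesh and poll again
    return x, fx
```

    On a smooth test function this converges to a local minimizer; in the BAO setting the strong non-convexity is what motivates adding a global search step on top of the poll step.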

  3. Rotary drum separator system

    NASA Technical Reports Server (NTRS)

    Barone, Michael R. (Inventor); Murdoch, Karen (Inventor); Scull, Timothy D. (Inventor); Fort, James H. (Inventor)

    2009-01-01

    A rotary phase separator system generally includes a step-shaped rotary drum separator (RDS) and a motor assembly. The aspect ratio of the stepped drum minimizes power for both the accumulating and pumping functions. The accumulator section of the RDS has a relatively small diameter to minimize power losses within an axial length to define significant volume for accumulation. The pumping section of the RDS has a larger diameter to increase pumping head but has a shorter axial length to minimize power losses. The motor assembly drives the RDS at a low speed for separating and accumulating and a higher speed for pumping.

  4. Speeding Fermat's factoring method

    NASA Astrophysics Data System (ADS)

    McKee, James

    A factoring method is presented which, heuristically, splits composite n in O(n^{1/4+epsilon}) steps. There are two ideas: an integer approximation to sqrt(q/p) provides an O(n^{1/2+epsilon}) algorithm in which n is represented as the difference of two rational squares; observing that if a prime m divides a square, then m^2 divides that square, a heuristic speed-up to O(n^{1/4+epsilon}) steps is achieved. The method is well-suited for use with small computers: the storage required is negligible, and one never needs to work with numbers larger than n itself.
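
    The O(n^{1/4+epsilon}) speed-up itself relies on the square-divisibility observation and is not reproduced here; as a point of reference, the underlying difference-of-two-squares representation is the classic Fermat method, which can be sketched as follows (assumes n is odd and composite):

```python
import math

def fermat_factor(n):
    """Classic Fermat factorization: find a such that a^2 - n is a perfect
    square b^2, so that n = (a - b)(a + b). Assumes n is odd and composite."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1
```

    As the abstract notes, storage is negligible and no intermediate quantity exceeds n itself; the cost is the number of candidate values of a tried, which the paper's heuristics reduce.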

  5. Integrating social media and social marketing: a four-step process.

    PubMed

    Thackeray, Rosemary; Neiger, Brad L; Keller, Heidi

    2012-03-01

    Social media is a group of Internet-based applications that allows individuals to create, collaborate, and share content with one another. Practitioners can realize social media's untapped potential by incorporating it as part of the larger social marketing strategy, beyond promotion. Social media, if used correctly, may help organizations increase their capacity for putting the consumer at the center of the social marketing process. The purpose of this article is to provide a template for strategic thinking to successfully include social media as part of the social marketing strategy by using a four-step process.

  6. Small self-contained payload overview. [Space Shuttle Getaway Special project management

    NASA Technical Reports Server (NTRS)

    Miller, D. S.

    1981-01-01

    The low-cost Small Self-Contained Payload Program, also known as the Getaway Special, initiated by NASA for providing a stepping stone to larger scientific and manufacturing payloads, is presented. The steps of 'getting on board,' the conditions of use, the reimbursement policy and the procedures, and the flight scheduling mechanism for flying the Getaway Special payload are given. The terms and conditions, and the interfaces between NASA and the users for entering into an agreement with NASA for launch and associated services are described, as are the philosophy and the rationale for establishing the policy and the procedures.

  7. Accommodation modulates the individual difference between objective and subjective measures of the final convergence step response.

    PubMed

    Jainta, S; Hoormann, J; Jaschinski, W

    2009-03-01

    Measuring vergence eye movements with dichoptic nonius lines (subjectively) usually leads to an overestimation of the vergence state after a step response: a subjective vergence overestimation (SVO). We tried to reduce this SVO by presenting a vergence stimulus that decoupled vergence and accommodation during the step response, i.e. reduced the degree of 'forced vergence'. In a mirror-stereoscope, we estimated convergence step responses with nonius lines presented at 1000 ms after a disparity step-stimulus and compared them to objective recordings (EyeLink II; n = 6). We presented a vertical line, a cross/rectangle stimulus and a difference-of-gaussians (DOG) pattern. For 180 min arc step stimuli, the subjective measures revealed a larger final vergence response than the objective measure; for the vertical line this SVO was 20 min arc, while it was significantly smaller for the DOG (12 min arc). For 60 min arc step responses, no overestimation was observed. Additionally, we measured accommodation, which changed more for the DOG pattern than for the line stimulus; this relative increase correlated with the corresponding relative change of SVO (r = 0.77). Both findings (i.e. no overestimation for small steps and a weaker one for the DOG pattern) reflect a lesser conflicting demand on accommodation and vergence under 'forced-vergence' viewing; consequently, sensory compensation is reduced, and subjective and objective measures of vergence step responses tend to agree.

  8. Consistency of internal fluxes in a hydrological model running at multiple time steps

    NASA Astrophysics Data System (ADS)

    Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-04-01

    Improving hydrological models remains a difficult task, and many avenues can be explored, among them improved spatial representation, the search for more robust parametrizations, better formulation of some processes, or modification of model structures by trial-and-error. Several past works indicate that model parameters and structure can depend on the modelling time step, so there is some rationale in investigating how a model behaves across various modelling time steps in order to find solutions for improvement. Here we analyse the impact of the data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, using a large data set of 240 catchments. To this end, fine-time-step hydro-climatic information at sub-hourly resolution is used as input to a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance for identifying the model components that should be improved. Our analysis indicates that the baseline model structure must be modified at sub-daily time steps to ensure the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception component, whose output flux showed the strongest sensitivity to the modelling time step. 
The dependency of the optimal model complexity on the time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7

  9. Time-dependent rheological behavior of natural polysaccharide xanthan gum solutions in interrupted shear and step-incremental/reductional shear flow fields

    NASA Astrophysics Data System (ADS)

    Lee, Ji-Seok; Song, Ki-Won

    2015-11-01

    The objective of the present study is to systematically elucidate the time-dependent rheological behavior of concentrated xanthan gum systems in complicated step-shear flow fields. Using a strain-controlled rheometer (ARES), step-shear flow behaviors of a concentrated xanthan gum model solution were experimentally investigated in interrupted shear flow fields with various combinations of shear rates, shearing times and rest times, and in step-incremental and step-reductional shear flow fields with various shearing times. The main findings obtained from this study are summarized as follows. (i) In interrupted shear flow fields, the shear stress increases sharply until reaching a maximum at an initial stage of shearing, and then a stress decay towards a steady state is observed as the shearing time increases in both start-up shear flow fields. The shear stress decreases suddenly immediately after the imposed shear rate is stopped, and then decays slowly during the rest time. (ii) As the rest time increases, the difference in the maximum stress values between the two start-up shear flow fields decreases, whereas the shearing time exerts only a slight influence on this behavior. (iii) In step-incremental shear flow fields, after passing through the maximum stress, structural destruction causes a stress decay towards a steady state as the shearing time increases in each step shear flow region. The time needed to reach the maximum stress value shortens as the step-increased shear rate increases. (iv) In step-reductional shear flow fields, after passing through the minimum stress, structural recovery induces stress growth towards an equilibrium state as the shearing time increases in each step shear flow region. The time needed to reach the minimum stress value lengthens as the step-decreased shear rate decreases.

  10. Eye-hand coordination during a double-step task: evidence for a common stochastic accumulator

    PubMed Central

    Gopal, Atul

    2015-01-01

    Many studies of reaching and pointing have shown significant spatial and temporal correlations between eye and hand movements. Nevertheless, it remains unclear whether these correlations are incidental, arising from common inputs (independent model); whether they represent an interaction between otherwise independent eye and hand systems (interactive model); or whether they arise from a single dedicated eye-hand system (common command model). Subjects were instructed to redirect gaze and pointing movements in a double-step task in an attempt to decouple eye-hand movements and causally distinguish between the three architectures. We used a drift-diffusion framework in the context of a race model, which has previously been used to explain redirect behavior for eye and hand movements separately, to predict the pattern of eye-hand decoupling. We found that the common command architecture could best explain the observed frequency of different eye and hand response patterns to the target step. A common stochastic accumulator for eye-hand coordination also predicts comparable variances, despite a significant difference in the means of the eye and hand reaction time (RT) distributions, a prediction we tested. Consistent with this prediction, we observed that the variances of the eye and hand RTs were similar, despite much larger hand RTs (by ∼90 ms). Moreover, changes in mean eye RTs, which also increased eye RT variance, produced a similar increase in the mean and variance of the associated hand RT. Taken together, these data suggest that a dedicated circuit underlies coordinated eye-hand planning. PMID:26084906
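
    The common command idea, a single stochastic accumulation feeding both effectors through different efferent delays, can be illustrated with a toy simulation. The drift, noise, threshold, and delay values below are arbitrary placeholders, not the authors' fitted parameters:

```python
import random

def eye_hand_rts(drift=0.2, noise=1.0, threshold=30.0,
                 eye_delay=50, hand_delay=140):
    """One trial of a common stochastic accumulator: a single noisy evidence
    trace crosses threshold, then fixed efferent delays are added per effector.
    Eye and hand RTs therefore share the same variance but differ in mean."""
    x, t = 0.0, 0
    while x < threshold:
        x += drift + noise * random.gauss(0.0, 1.0)
        t += 1  # one time step of accumulation
    return t + eye_delay, t + hand_delay
```

    Because both RTs inherit the same first-passage time, their variances are identical by construction while their means differ by the delay gap, mirroring the pattern reported in the abstract.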

  11. Single-step direct fabrication of pillar-on-pore hybrid nanostructures in anodizing aluminum for superior superhydrophobic efficiency.

    PubMed

    Jeong, Chanyoung; Choi, Chang-Hwan

    2012-02-01

    Conventional electrochemical anodizing processes of metals such as aluminum typically produce planar and homogeneous nanopore structures. If hydrophobically treated, such 2D planar and interconnected pore structures typically result in lower contact angle and larger contact angle hysteresis than 3D disconnected pillar structures and, hence, exhibit inferior superhydrophobic efficiency. In this study, we demonstrate for the first time that the anodizing parameters can be engineered to design novel pillar-on-pore (POP) hybrid nanostructures directly in a simple one-step fabrication process so that superior surface superhydrophobicity can also be realized effectively from the electrochemical anodization process. On the basis of the characteristic of forming a self-ordered porous morphology in a hexagonal array, the modulation of anodizing voltage and duration enabled the formulation of the hybrid-type nanostructures having controlled pillar morphology on top of a porous layer in both mild and hard anodization modes. The hybrid nanostructures of the anodized metal oxide layer initially enhanced the surface hydrophilicity significantly (i.e., superhydrophilic). However, after a hydrophobic monolayer coating, such hybrid nanostructures then showed superior superhydrophobic nonwetting properties not attainable by the plain nanoporous surfaces produced by conventional anodization conditions. The well-regulated anodization process suggests that electrochemical anodizing can expand its usefulness and efficacy to render various metallic substrates with great superhydrophilicity or -hydrophobicity by directly realizing pillar-like structures on top of a self-ordered nanoporous array through a simple one-step fabrication procedure.

  12. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    NASA Astrophysics Data System (ADS)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
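
    A single forward Gauss-Seidel sweep of this kind, applied as a preconditioner rather than iterated to convergence, can be sketched for a generic system. This pointwise (not block) pure-Python version is only an illustration; the matrix below is a stand-in, not the DTOC optimality system:

```python
def gauss_seidel_sweep(A, b, x):
    """One forward Gauss-Seidel sweep for A x = b, reusing the already-updated
    entries x[0..i-1] within the same pass (A as a list of row lists)."""
    n = len(b)
    x = list(x)
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x
```

    Iterated on a diagonally dominant system this converges on its own; for the block systems in the paper the spectral radius exceeds one, which is exactly why the sweep is used inside a Krylov method instead of as a standalone solver.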

  13. Synoptic Sky Surveys: Lessons Learned and Challenges Ahead

    NASA Astrophysics Data System (ADS)

    Djorgovski, Stanislav G.; CRTS Team

    2014-01-01

    A new generation of synoptic sky surveys is now opening the time domain for systematic exploration, presenting great new scientific opportunities as well as challenges. These surveys touch essentially all subfields of astronomy, producing large statistical samples of the known types of objects and events (e.g., SNe, AGN, variable stars of many kinds), and have already uncovered previously unknown subtypes of these (e.g., rare or peculiar types of SNe). They are generating new science now, and paving the way for even larger surveys to come, e.g., the LSST. Our ability to fully exploit such forthcoming facilities depends critically on the science, methodology, and experience that are being accumulated now. Among the outstanding challenges, the foremost is our ability to conduct an effective follow-up of the interesting events discovered by the surveys in any wavelength regime. The follow-up resources, especially spectroscopy, are already severely limited, and this problem will grow by orders of magnitude. This requires an intelligent down-selection of the most astrophysically interesting events to follow. The first step in that process is an automated, real-time, iterative classification of transient events that incorporates heterogeneous data from the surveys themselves, archival information (spatial, temporal, and multiwavelength), and the incoming follow-up observations. The second step is an optimal automated event prioritization and allocation of the available follow-up resources, which themselves change in time. Both of these challenges are highly non-trivial, and require a strong cyber-infrastructure based on the Virtual Observatory data grid and the various astroinformatics efforts now under way. This is inherently an astronomy of telescope-computational systems that increasingly depends on novel machine learning and artificial intelligence tools. 
Another arena with a strong potential for discovery is an archival, non-time-critical exploration of the time domain, with the time dimension adding complexity to the already challenging problem of data mining in highly dimensional parameter spaces.

  14. An adaptive time-stepping strategy for solving the phase field crystal model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Zhengru, E-mail: zrzhang@bnu.edu.cn; Ma, Yuan, E-mail: yuner1022@gmail.com; Qiao, Zhonghua, E-mail: zqiao@polyu.edu.hk

    2013-09-15

    In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. Numerical simulation of the PFC model needs a long time to reach steady state, so a large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can resolve not only the steady state solution but also the dynamical development of the solution, efficiently and accurately. The numerical experiments demonstrate that the CPU time is significantly reduced for long time simulations.
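
    The energy-based step selection can be illustrated with a formula of the kind used in this line of work: large steps when the energy is nearly flat, small steps when it changes fast, clamped between bounds. The constants below are placeholders, and the exact form in the paper may differ:

```python
import math

def adaptive_dt(dE_dt, dt_min=1e-4, dt_max=1e-1, alpha=1e3):
    """Pick the next time step from the current energy decay rate dE/dt.
    Fast energy change -> small steps; near steady state -> the maximum step."""
    return max(dt_min, dt_max / math.sqrt(1.0 + alpha * dE_dt ** 2))
```

    Combined with an unconditionally energy stable scheme, stability does not constrain dt, so accuracy alone drives the step choice; this is what makes the large steps near steady state safe.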

  15. 29 CFR 99.520 - Major program determination.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Auditors § 99.520 Major program determination. (a) General. The auditor shall use a risk-based approach to... followed. (b) Step 1. (1) The auditor shall identify the larger Federal programs, which shall be labeled... size of Type A programs, the auditor shall consider this Federal program as a Type A program and...

  16. Achievement Information Monitoring in Schools (AIMS): Larger Straws in the Winds of Change. Professional Paper 36.

    ERIC Educational Resources Information Center

    Follettie, Joseph F.

    The Southwest Regional Laboratory for Educational Research and Development (SWRL) is dedicated to the belief that individual differences among students do not stand in the way of universal quality instructional achievement in the nation's schools. Important steps towards the condition of universal instructionalized achievement are: (1) the…

  17. Island Ecology: An Exploration of Place in the Elementary Art Curriculum

    ERIC Educational Resources Information Center

    Hansen, Erica

    2009-01-01

    The environment is comprised of multiple dimensions, including natural, social, and built surroundings that people experience locally. Taken as a whole these local environs make up the larger ecological conditions experienced globally. Fostering a critical awareness of nature is the first step in supporting ecological or social change. Art…

  18. The minimally invasive approach to the symptomatic isthmocele - what does the literature say? A step-by-step primer on laparoscopic isthmocele - excision and repair.

    PubMed

    Sipahi, Sevgi; Sasaki, Kirsten; Miller, Charles E

    2017-08-01

    The purpose of this review is to understand the minimally invasive approach to the excision and repair of an isthmocele. Previous small trials and case reports have shown that the minimally invasive approach by hysteroscopy and/or laparoscopy can cure symptoms of a uterine isthmocele, including abnormal bleeding, pelvic pain and secondary infertility. A recent larger prospective study has been published that evaluates outcomes of minimally invasive isthmocele repair. Smaller studies and individual case reports echo the positive results of this larger trial. The cesarean section scar defect, also known as an isthmocele, has become an important diagnosis for women who present with abnormal uterine bleeding, pelvic pain and secondary infertility. It is important for providers to be aware of the effective surgical treatment options for the symptomatic isthmocele. A minimally invasive approach, whether it be laparoscopic or hysteroscopic, has proven to be a safe and effective option in reducing symptoms and improving fertility. VIDEO ABSTRACT: http://links.lww.com/COOG/A37.

  19. Scanning tunneling microscope study of GaAs(001) surfaces grown by migration enhanced epitaxy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J.; Gallagher, M.C.; Willis, R.F.

    We report an investigation of the morphology of p-type GaAs(001) surfaces using scanning tunneling microscopy (STM). The substrates were prepared using two methods: migration enhanced epitaxy (MEE) and standard molecular-beam epitaxy (MBE). The STM measurements were performed ex situ using As decapping. Analysis indicates that the overall step density of the MEE samples decreases as the growth temperature is increased. Nominally flat samples grown at 300°C exhibited step densities of 10.5 steps/1000 Å along [110], dropping to 2.5 steps at 580°C. MEE samples exhibited a lower step density than MBE samples. However, as-grown surfaces exhibited a larger distribution of step heights. Annealing the samples reduced the step height distribution, exposing fewer atomic layers. Samples grown by MEE at 580°C and annealed for 2 min displayed the lowest step density and the narrowest step height distribution. All samples displayed an anisotropic step density. We found a ratio of A-type to B-type steps of between 2 and 3, which directly reflects the difference in the incorporation energy at steps. The aspect ratio increased slightly with growth temperature. We found a similar aspect ratio on samples grown by MBE. This indicates that anisotropic growth during MEE, like MBE, is dominated by incorporation kinetics. MEE samples grown at 580°C and capped immediately following growth exhibited a number of "holes" in the surface. The holes could be eliminated by annealing the surface prior to quenching. 20 refs., 3 figs., 1 tab.

  20. Stability analysis of implicit time discretizations for the Compton-scattering Fokker-Planck equation

    NASA Astrophysics Data System (ADS)

    Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.; Morel, Jim E.

    2009-09-01

    The Fokker-Planck equation is a widely used approximation for modeling the Compton scattering of photons in high energy density applications. In this paper, we perform a stability analysis of three implicit time discretizations for the Compton-Scattering Fokker-Planck equation. Specifically, we examine (i) a Semi-Implicit (SI) scheme that employs backward-Euler differencing but evaluates temperature-dependent coefficients at their beginning-of-time-step values, (ii) a Fully Implicit (FI) discretization that instead evaluates temperature-dependent coefficients at their end-of-time-step values, and (iii) a Linearized Implicit (LI) scheme, which is developed by linearizing the temperature dependence of the FI discretization within each time step. Our stability analysis shows that the FI and LI schemes are unconditionally stable and cannot generate oscillatory solutions regardless of time-step size, whereas the SI discretization can suffer from instabilities and nonphysical oscillations for sufficiently large time steps. With the results of this analysis, we present time-step limits for the SI scheme that prevent undesirable behavior. We test the validity of our stability analysis and time-step limits with a set of numerical examples.
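
    The difference between the SI and FI treatments can be shown on a scalar toy problem du/dt = -c(u)u discretized with backward Euler. This only illustrates where the coefficient is evaluated; it is not the Fokker-Planck discretization itself, and c(u) = u below is an arbitrary stand-in for a temperature-dependent coefficient:

```python
def semi_implicit_step(u, dt, c):
    """Backward Euler with the coefficient frozen at its beginning-of-step
    value c(u): only a linear solve is needed."""
    return u / (1.0 + dt * c(u))

def fully_implicit_step(u, dt, c, iters=60):
    """Backward Euler with the coefficient at its end-of-step value c(u_new):
    solve the nonlinear update by fixed-point iteration."""
    u_new = u
    for _ in range(iters):
        u_new = u / (1.0 + dt * c(u_new))
    return u_new
```

    For small dt the two updates nearly coincide; they diverge as dt grows, which is the regime where the paper finds the SI scheme can become unstable and oscillatory while the FI (and linearized LI) schemes remain unconditionally stable.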

  1. Teleoperated Marsupial Mobile Sensor Platform Pair for Telepresence Insertion Into Challenging Structures

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael J.; Prokop, Norman F.; Greer, Lawrence C.

    2011-01-01

    A platform has been developed for two or more vehicles with one or more residing within the other (a marsupial pair). This configuration consists of a large, versatile robot that is carrying a smaller, more specialized autonomous operating robot(s) and/or mobile repeaters for extended transmission. The larger vehicle, which is equipped with a ramp and/or a robotic arm, is used to operate over a more challenging topography than the smaller one(s) that may have a more limited inspection area to traverse. The intended use of this concept is to facilitate the insertion of a small video camera and sensor platform into a difficult entry area. In a terrestrial application, this may be a bus or a subway car with narrow aisles or steep stairs. The first field-tested configuration is a tracked vehicle bearing a rigid ramp of fixed length and width. A smaller six-wheeled vehicle approximately 10 in. (25 cm) wide by 12 in. (30 cm) long resides at the end of the ramp within the larger vehicle. The ramp extends from the larger vehicle and is tipped up into the air. Using video feedback from a camera atop the larger robot, the operator at a remote location can steer the larger vehicle to the bus door. Once positioned at the door, the operator can switch video feedback to a camera at the end of the ramp to facilitate the mating of the end of the ramp to the top landing at the upper terminus of the steps. The ramp can be lowered by remote control until its end is in contact with the top landing. At the same time, the end of the ramp bearing the smaller vehicle is raised to minimize the angle of the slope the smaller vehicle has to climb, and further gives the operator a better view of the entry to the bus from the smaller vehicle. Control is passed over to the smaller vehicle and, using video feedback from the camera, it is driven up the ramp, turned oblique into the bus, and then sent down the aisle for surveillance. 
The demonstrated vehicle was used to scale the steps leading to the interior of a bus whose landing is 44 in. (~1.1 m) from the road surface. This vehicle can position the end of its ramp to a surface over 50 in. (~1.3 m) above ground level and can drive over rail heights exceeding 6 in. (~15 cm). Thus configured, this vehicle can conceivably deliver the smaller robot to the end platform of New York City subway cars from between the rails. This innovation is scalable to other formulations for size, mobility, and surveillance functions. Conceivably the larger vehicle can be configured to traverse unstable rubble and debris to transport a smaller search and rescue vehicle as close as possible to the scene of a disaster such as a collapsed building. The smaller vehicle, tethered or otherwise, and capable of penetrating and traversing within the confined spaces in the collapsed structure, can transport imaging and other sensors to look for victims or other targets.

  2. Errors in Postural Preparation Lead to Increased Choice Reaction Times for Step Initiation in Older Adults

    PubMed Central

    Nutt, John G.; Horak, Fay B.

    2011-01-01

    Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431

  3. Pelvic step: the contribution of horizontal pelvis rotation to step length in young healthy adults walking on a treadmill.

    PubMed

    Liang, Bo Wei; Wu, Wen Hua; Meijer, Onno G; Lin, Jian Hua; Lv, Go Rong; Lin, Xiao Cong; Prins, Maarten R; Hu, Hai; van Dieën, Jaap H; Bruijn, Sjoerd M

    2014-01-01

    Transverse plane pelvis rotations during walking may be regarded as the "first determinant of gait". This would assume that pelvis rotations increase step length, and thereby reduce the vertical movements of the centre of mass-"the pelvic step". We analysed the pelvic step using 20 healthy young male subjects, walking on a treadmill at 1-5 km/h, with normal or big steps. Step length, pelvis rotation amplitude, leg-pelvis relative phase, and the contribution of pelvis rotation to step length were calculated. When speed increased in normal walking, pelvis rotation changed from more out-of-phase to in-phase with the upper leg. Consequently, the contribution of pelvis rotation to step length was negative at lower speeds, switching to positive at 3 km/h. With big steps, leg and pelvis were more in-phase, and the contribution of pelvis rotation to step length was always positive, and relatively large. Still, the overall contribution of pelvis rotations to step length was small, less than 3%. Regression analysis revealed that leg-pelvis relative phase predicted about 60% of the variance of this contribution. The results of the present study suggest that, during normal slow walking, pelvis rotations increase, rather than decrease, the vertical movements of the centre of mass. With large steps, this does not happen, because leg and pelvis are in-phase at all speeds. Finally, it has been suggested that patients with hip flexion limitation may use larger pelvis rotations to increase step length. This, however, may only work as long as the pelvis rotates in-phase with the leg. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Voluntary stepping behavior under single- and dual-task conditions in chronic stroke survivors: A comparison between the involved and uninvolved legs.

    PubMed

    Melzer, Itshak; Goldring, Melissa; Melzer, Yehudit; Green, Elad; Tzedek, Irit

    2010-12-01

If balance is lost, quick step execution can prevent falls. Research has shown that the speed of voluntary stepping can predict future falls in older adults. The aim of the study was to investigate voluntary stepping behavior, as well as to compare timing and leg push-off force-time relation parameters of involved and uninvolved legs in stroke survivors during single- and dual-task conditions. We also aimed to compare timing and leg push-off force-time relation parameters between stroke survivors and healthy individuals in both task conditions. Ten stroke survivors performed a voluntary step execution test with their involved and uninvolved legs under two conditions: while focusing only on the stepping task and while a separate attention-demanding task was performed simultaneously. Temporal parameters related to the step time were measured, including the duration of the step initiation phase, the preparatory phase, the swing phase, and the total step time. In addition, force-time parameters representing the push-off power during stepping were calculated from ground reaction data and compared with 10 healthy controls. The involved legs of stroke survivors had a significantly slower stepping time than uninvolved legs due to increased swing phase duration during both single- and dual-task conditions. For dual compared to single task, the stepping time increased significantly due to a significant increase in the duration of step initiation. In general, the force-time parameters were significantly different in both legs of stroke survivors as compared to healthy controls, with no significant effect of dual compared with single-task conditions in either group. The inability of stroke survivors to swing the involved leg quickly may be the most significant factor contributing to the large number of falls to the paretic side. The results suggest that stroke survivors were unable to rapidly produce muscle force in fast actions. This may be the mechanism of delayed execution of a fast step when balance is lost, thus increasing the likelihood of falls in stroke survivors. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. Quasi-four-body treatment of charge transfer in the collision of protons with atomic helium: I. Thomas related mechanisms

    NASA Astrophysics Data System (ADS)

    Safarzade, Zohre; Fathi, Reza; Shojaei Akbarabadi, Farideh; Bolorizadeh, Mohammad A.

    2018-04-01

The scattering of a completely bare ion by atoms larger than hydrogen is at least a four-body interaction, and the charge transfer channel involves a two-step process. Amongst the two-step interactions of high-velocity single charge transfer in an ion-atom collision, there is one whose amplitude demonstrates a peak in the angular distribution of the cross sections. This peak, the so-called Thomas peak, was predicted classically by Thomas for a two-step interaction, and can also be described through three-body quantum mechanical models. This work discusses a four-body quantum treatment of the charge transfer in ion-atom collisions, where two-step interactions illustrating a Thomas peak are emphasized. In addition, the Pauli exclusion principle is taken into account for the initial and final states as well as the operators. It will be demonstrated that there is a momentum condition for each two-step interaction to occur in a single charge transfer channel, where new classical interactions lead to the Thomas mechanism.

  6. Selectivity Mechanism of the Nuclear Pore Complex Characterized by Single Cargo Tracking

    PubMed Central

    Lowe, Alan R.; Siegel, Jake J.; Kalab, Petr; Siu, Merek; Weis, Karsten; Liphardt, Jan T.

    2010-01-01

The Nuclear Pore Complex (NPC) mediates all exchange between the cytoplasm and the nucleus. Small molecules can passively diffuse through the NPC, while larger cargos require transport receptors to translocate. How the NPC facilitates the translocation of transport receptor/cargo complexes remains unclear. Here, we track single protein-functionalized Quantum Dot (QD) cargos as they translocate the NPC. Import proceeds by successive sub-steps comprising cargo capture, filtering and translocation, and release into the nucleus. The majority of QDs are rejected at one of these steps and return to the cytoplasm, including very large cargos that abort at a size-selective barrier. Cargo movement in the central channel is subdiffusive, and cargos that can bind more transport receptors diffuse more freely. Without Ran, cargos still explore the entire NPC, but have a markedly reduced probability of exit into the nucleus, suggesting that NPC entry and exit steps are not equivalent and that the pore is functionally asymmetric to importing cargos. The overall selectivity of the NPC appears to arise from the cumulative action of multiple reversible sub-steps and a final irreversible exit step. PMID:20811366

  7. By-Pass Diode Temperature Tests of a Solar Array Coupon under Space Thermal Environment Conditions

    NASA Technical Reports Server (NTRS)

    Wright, Kenneth H.; Schneider, Todd A.; Vaughn, Jason A.; Hoang, Bao; Wong, Frankie; Wu, Gordon

    2016-01-01

    By-Pass diodes are a key design feature of solar arrays and system design must be robust against local heating, especially with implementation of larger solar cells. By-Pass diode testing was performed to aid thermal model development for use in future array designs that utilize larger cell sizes that result in higher string currents. Testing was performed on a 56-cell Advanced Triple Junction solar array coupon provided by SSL. Test conditions were vacuum with cold array backside using discrete by-pass diode current steps of 0.25 A ranging from 0 A to 2.0 A.

  8. The Effect of Core Stability Training on Functional Movement Patterns in Collegiate Athletes.

    PubMed

    Bagherian, Sajad; Ghasempoor, Khodayar; Rahnama, Nader; Wikstrom, Erik A

    2018-02-06

Pre-participation examinations are the standard approach for assessing poor movement quality that would increase musculoskeletal injury risk. However, little is known about how core stability influences functional movement patterns. The primary purpose of this study was to determine the effect of an 8-week core stability program on functional movement patterns in collegiate athletes. The secondary purpose was to determine if the core stability training program would be more effective in those with worse movement quality (i.e. ≤14 baseline FMS score). Quasi-experimental design. Athletic Training Facility. One-hundred collegiate athletes. Functional movement patterns included the Functional Movement Screen (FMS), Lateral step down (LSD) and Y balance test (YBT) and were assessed before and after the 8-week program. Participants were placed into 1 of 2 groups: intervention and control. The intervention group was required to complete a core stability training program that met 3 times per week for 8 weeks. Significant group x time interactions demonstrated improvements in FMS, LSD and YBT scores in the experimental group relative to the control group (p<0.001). Independent sample t-tests demonstrated that change scores were larger (greater improvement) for the FMS total score and Hurdle step (p<0.001) in athletes with worse movement quality. An 8-week core stability training program enhances functional movement patterns and dynamic postural control in collegiate athletes. The benefits are more pronounced in collegiate athletes with poor movement quality.

  9. An algorithm for generating modular hierarchical neural network classifiers: a step toward larger scale applications

    NASA Astrophysics Data System (ADS)

    Roverso, Davide

    2003-08-01

Many-class learning is the problem of training a classifier to discriminate among a large number of target classes. Together with the problem of dealing with high-dimensional patterns (i.e. a high-dimensional input space), the many-class problem (i.e. a high-dimensional output space) is a major obstacle to be faced when scaling up classifier systems and algorithms from small pilot applications to large full-scale applications. The Autonomous Recursive Task Decomposition (ARTD) algorithm is here proposed as a solution to the problem of many-class learning. Example applications of ARTD to neural classifier training are also presented. In these examples, improvements in training time are shown to range from 4-fold to more than 30-fold in pattern classification tasks of both static and dynamic character.

  10. Leveraging Anderson Acceleration for improved convergence of iterative solutions to transport systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willert, Jeffrey; Taitano, William T.; Knoll, Dana

In this note we demonstrate that using Anderson Acceleration (AA) in place of a standard Picard iteration can not only increase the convergence rate but also make the iteration more robust for two transport applications. We also compare the convergence acceleration provided by AA to that provided by moment-based acceleration methods. Additionally, we demonstrate that these two acceleration methods can be used together in a nested fashion. We begin by describing the AA algorithm, and then describe two application problems, one from neutronics and one from plasma physics, on which we apply AA. We provide computational results which highlight the benefits of using AA, namely that we can compute solutions using fewer function evaluations, larger time-steps, and achieve a more robust iteration.
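    Anderson Acceleration itself is compact enough to sketch. The following is a minimal, generic Type-II implementation in Python/NumPy, not the solver used in the note; the window size m and tolerances are illustrative:

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, tol=1e-10, max_iter=100):
    """Type-II Anderson Acceleration for the fixed-point problem x = g(x).

    Keeps up to m previous evaluations and mixes them through a small
    least-squares problem on the residual differences.
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    G, F = [], []                               # histories of g(x) and residuals
    for k in range(max_iter):
        gx = np.atleast_1d(g(x))
        f = gx - x                              # residual of the Picard map
        if np.linalg.norm(f) < tol:
            return x, k
        G.append(gx); F.append(f)
        if len(F) > m + 1:                      # drop the oldest history entry
            G.pop(0); F.pop(0)
        if len(F) == 1:
            x = gx                              # plain Picard step to start
        else:
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
            gamma = np.linalg.lstsq(dF, f, rcond=None)[0]
            x = gx - dG @ gamma                 # accelerated update
    return x, max_iter
```

    On a simple contraction such as x = cos(x), this converges in far fewer function evaluations than the plain Picard iteration, which is the kind of benefit the note reports for its transport applications.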

  11. An embedded formula of the Chebyshev collocation method for stiff problems

    NASA Astrophysics Data System (ADS)

    Piao, Xiangfan; Bu, Sunyoung; Kim, Dojin; Kim, Philsu

    2017-12-01

In this study, we have developed an embedded formula of the Chebyshev collocation method for stiff problems, based on the zeros of the generalized Chebyshev polynomials. A new strategy for the embedded formula, using a pair of methods to estimate the local truncation error, as performed in traditional embedded Runge-Kutta schemes, is proposed. The method is constructed so that not only is the stability region of the embedded formula widened, but, by allowing larger time step sizes, the total computational cost is also reduced. A concrete convergence and stability analysis shows that the constructed algorithm has 8th-order convergence and exhibits A-stability. Through several numerical experiments, we demonstrate that the proposed method is numerically more efficient than several existing implicit methods.
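    The embedded-pair error-control strategy borrowed from Runge-Kutta methods can be sketched generically. The toy below uses the simplest possible pair (Euler, order 1, embedded in Heun, order 2), not the 8th-order Chebyshev collocation formula of the paper; step-size controller constants are illustrative:

```python
import numpy as np

def adaptive_heun(f, t0, y0, t_end, h0=0.1, tol=1e-6):
    """Adaptive time stepping driven by an embedded pair.

    The two solutions share their stage evaluations; their difference
    estimates the local truncation error, which both accepts/rejects
    the step and sets the next step size.
    """
    t = t0
    y = np.atleast_1d(np.asarray(y0, dtype=float))
    h = h0
    while t < t_end:
        h = min(h, t_end - t)
        k1 = np.atleast_1d(f(t, y))
        k2 = np.atleast_1d(f(t + h, y + h * k1))
        y_low = y + h * k1                    # Euler solution (order 1)
        y_high = y + 0.5 * h * (k1 + k2)      # Heun solution (order 2)
        err = np.linalg.norm(y_high - y_low)  # local error estimate
        if err <= tol or h < 1e-12:
            t, y = t + h, y_high              # accept the step
        # standard controller for an order-1 error estimate
        h *= min(2.0, max(0.1, 0.9 * np.sqrt(tol / max(err, 1e-16))))
    return y
```

    A wider stability region, as in the paper's scheme, lets this kind of controller select larger steps before the error estimate forces a rejection, which is where the cost savings come from.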

  12. Motivational interventions in community hypertension screening.

    PubMed Central

    Stahl, S M; Lawrie, T; Neill, P; Kelley, C

    1977-01-01

To evaluate different techniques intended to motivate community residents to have their blood pressures taken, five inner-city target areas with comparable, predominantly Black, populations were selected. A sample of about 200 households in each of four areas was subjected to different motivational interventions; in one of these four areas, households were approached in a series of four sequential steps. The fifth target area served as a control. Findings establish that home visits by community members trained to take blood pressure measurements (BPMs) in the home produce much larger yields of new (previously unknown) hypertensives than more passive techniques such as invitational letters and gift offers. Prior informational letters, including letters specifying time of visit, do not affect refusals or increase the yield. More "passive" motivational techniques yield a higher proportion of previously known hypertensives than the more "active" outreach efforts. PMID:848618

  14. Observed and modeled patterns of covariability between low-level cloudiness and the structure of the trade-wind layer

    DOE PAGES

    Nuijens, Louise; Medeiros, Brian; Sandu, Irina; ...

    2015-11-06

We present patterns of covariability between low-level cloudiness and the trade-wind boundary layer structure using long-term measurements at a site representative of dynamical regimes with moderate subsidence or weak ascent. We compare these with ECMWF’s Integrated Forecast System and 10 CMIP5 models. By using single-time step output at a single location, we find that models can produce a fairly realistic trade-wind layer structure in long-term means, but with unrealistic variability at shorter-time scales. The unrealistic variability in modeled cloudiness near the lifting condensation level (LCL) is due to stronger than observed relationships with mixed-layer relative humidity (RH) and temperature stratification at the mixed-layer top. Those relationships are weak in observations, or even of opposite sign, which can be explained by a negative feedback of convection on cloudiness. Cloudiness near cumulus tops at the trade-wind inversion instead varies more pronouncedly in observations on monthly time scales, whereby larger cloudiness relates to larger surface winds and stronger trade-wind inversions. However, these parameters appear to be a prerequisite, rather than strong controlling factors on cloudiness, because they do not explain submonthly variations in cloudiness. Models underestimate the strength of these relationships and diverge in particular in their responses to large-scale vertical motion. No model stands out by reproducing the observed behavior in all respects. As a result, these findings suggest that climate models do not realistically represent the physical processes that underlie the coupling between trade-wind clouds and their environments in present-day climate, which is relevant for how we interpret modeled cloud feedbacks.

  16. How to get cool in the heat: comparing analytic models of hot, cold, and cooling gas in haloes and galaxies with EAGLE

    NASA Astrophysics Data System (ADS)

    Stevens, Adam R. H.; Lagos, Claudia del P.; Contreras, Sergio; Croton, Darren J.; Padilla, Nelson D.; Schaller, Matthieu; Schaye, Joop; Theuns, Tom

    2017-05-01

    We use the hydrodynamic, cosmological EAGLE simulations to investigate how the hot gas in haloes condenses to form and grow galaxies. We select haloes from the simulations that are actively cooling and study the temperature, distribution and metallicity of their hot, cold and transitioning 'cooling' gas, placing these in the context of semi-analytic models. Our selection criteria lead us to focus on Milky Way-like haloes. We find that the hot-gas density profiles of the haloes form a progressively stronger core over time, the nature of which can be captured by a β profile that has a simple dependence on redshift. In contrast, the hot gas that will cool over a time-step is broadly consistent with a singular isothermal sphere. We find that cooling gas carries a few times the specific angular momentum of the halo and is offset in spin direction from the rest of the hot gas. The gas loses ˜60 per cent of its specific angular momentum during the cooling process, generally remaining greater than that of the halo, and it precesses to become aligned with the cold gas already in the disc. We find tentative evidence that angular-momentum losses are slightly larger when gas cools on to dispersion-supported galaxies. We show that an exponential surface density profile for gas arriving on a disc remains a reasonable approximation, but a cusp containing ˜20 per cent of the mass is always present, and disc scale radii are larger than predicted by a vanilla Fall & Efstathiou model. These scale radii are still closely correlated with the halo spin parameter, for which we suggest an updated prescription for galaxy formation models.
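    The β profile invoked above is not written out in the abstract; for reference, its standard form in the halo gas literature is (the paper's fitted redshift dependence of the parameters is not reproduced here):

```latex
\rho_{\mathrm{hot}}(r) = \rho_0 \left[ 1 + \left( \frac{r}{r_c} \right)^{2} \right]^{-3\beta/2}
```

    where \rho_0 is the central density and r_c the core radius; in this parametrization, the progressively stronger core found over time corresponds to a growing core radius.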

  17. Turtle: identifying frequent k-mers with cache-efficient algorithms.

    PubMed

    Roy, Rajat Shuvro; Bhattacharya, Debashish; Schliep, Alexander

    2014-07-15

Counting the frequencies of k-mers in read libraries is often a first step in the analysis of high-throughput sequencing data. Infrequent k-mers are assumed to be a result of sequencing errors. The frequent k-mers constitute a reduced but error-free representation of the experiment, which can inform read error correction or serve as the input to de novo assembly methods. Ideally, the memory requirement for counting should be linear in the number of frequent k-mers and not in the, typically much larger, total number of k-mers in the read library. We present a novel method that balances time, space and accuracy requirements to efficiently extract frequent k-mers even for high-coverage libraries and large genomes such as human. Our method minimizes cache misses by using a pattern-blocked Bloom filter to remove infrequent k-mers from consideration, in combination with a novel sort-and-compact scheme, instead of a hash, for the actual counting. Although this increases theoretical complexity, the savings in cache misses reduce the empirical running times. A variant of the method can resort to a counting Bloom filter for even larger savings in memory, at the expense of false-negative rates in addition to the false-positive rates common to all Bloom filter-based approaches. A comparison with the state-of-the-art shows reduced memory requirements and running times. The tools are freely available for download at http://bioinformatics.rutgers.edu/Software/Turtle and http://figshare.com/articles/Turtle/791582. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
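    The screening idea, using a Bloom filter so that singleton k-mers never enter the count table, can be sketched as follows. This uses a plain Bloom filter and a Counter, not Turtle's pattern-blocked variant or its sort-and-compact scheme; the filter parameters are illustrative:

```python
import hashlib
from collections import Counter

class BloomFilter:
    """Tiny Bloom filter (hypothetical parameters, not Turtle's design)."""
    def __init__(self, size=1 << 20, n_hashes=3):
        self.size, self.n_hashes = size, n_hashes
        self.bits = bytearray(size // 8 + 1)

    def _positions(self, item):
        for i in range(self.n_hashes):
            h = hashlib.blake2b(item.encode(), digest_size=8,
                                salt=bytes([i])).digest()
            yield int.from_bytes(h, "big") % self.size

    def add(self, item):
        seen = True
        for p in self._positions(item):
            if not (self.bits[p // 8] >> (p % 8)) & 1:
                seen = False
                self.bits[p // 8] |= 1 << (p % 8)
        return seen            # True if item was (probably) seen before

def frequent_kmers(reads, k, min_count=2):
    """Count only k-mers occurring at least min_count times.

    The Bloom filter absorbs first sightings, so singletons (presumed
    sequencing errors) never enter the count table.
    """
    bloom, counts = BloomFilter(), Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            if bloom.add(kmer):        # second (or later) sighting
                counts[kmer] += 1
    # a k-mer seen c times is counted c-1 times here; adjust and filter
    return {km: c + 1 for km, c in counts.items() if c + 1 >= min_count}
```

    The memory saving comes from the table holding only repeated k-mers, at the cost of the Bloom filter's false-positive rate occasionally letting a singleton through.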

  18. Analysis on burnup step effect for evaluating reactor criticality and fuel breeding ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saputra, Geby; Purnama, Aditya Rizki; Permana, Sidik

Criticality is one of the important factors in evaluating reactor operation, and the nuclear fuel breeding ratio is another factor that indicates nuclear fuel sustainability. This study analyzes the effect of the burnup step and the cycle operation step on the calculated criticality of the reactor as well as on the fuel breeding ratio (BR). The burnup step is based on a day-step analysis, varied from 10 days up to 800 days, and the cycle operation from 1 up to 8 reactor cycles. In addition, calculation efficiency as a function of the number of computer processors used to run the analysis (time efficiency of the calculation) has also been investigated. The analysis used a large fast breeder reactor as the reference case and was performed with the established reactor design code JOINT-FR. The results show that the calculated criticality becomes higher, and the breeding ratio lower, for smaller burnup steps (in days). Some nuclides contribute to the higher criticality at smaller burnup steps because of their individual half-lives. Calculation time correlates with the level of detail of the step calculation, although it is not directly proportional to the number of subdivisions of the burnup time step.

  19. A novel trapezoid fin pattern applicable for air-cooled heat sink

    NASA Astrophysics Data System (ADS)

    Chen, Chien-Hung; Wang, Chi-Chuan

    2015-11-01

The present study proposed a novel step or trapezoid surface design applicable to air-cooled heat sinks under cross flow conditions. A total of five heat sinks were made and tested, with the corresponding fin patterns: (a) plate fin; (b) step fin (step 1/3, 3 steps); (c) 2-step fin (step 1/2, 2 steps); (d) trapezoid fin (trap 1/3, cutting 1/3 length from the rear end) and (e) trapezoid fin (trap 1/2, cutting 1/2 length from the rear end). The design is based on heat transfer augmentation via (1) a longer perimeter of the entrance region and (2) a larger effective temperature difference at the rear part of the heat sink. From the test results, it is found that either the step or the trapezoid design can provide a higher heat transfer conductance and a lower pressure drop at a specified frontal velocity. The effective conductance of the trap 1/3 design exceeds that of the plate surface by approximately 38 % at a frontal velocity of 5 m/s while retaining a pressure drop roughly 20 % lower, with its surface area reduced by 20.6 %. When comparing overall thermal resistance versus pumping power, the proposed trapezoid 1/3 design still shows a 10 % lower thermal resistance than the plate fin surface at a specified pumping power.

  20. The general alcoholics anonymous tools of recovery: the adoption of 12-step practices and beliefs.

    PubMed

    Greenfield, Brenna L; Tonigan, J Scott

    2013-09-01

Working the 12 steps is widely prescribed for Alcoholics Anonymous (AA) members, although the relative merits of different methods for measuring step work have received minimal attention and even less is known about how step work predicts later substance use. The current study (1) compared endorsements of step work on a face-valid or direct measure, the Alcoholics Anonymous Inventory (AAI), with an indirect measure of step work, the General Alcoholics Anonymous Tools of Recovery (GAATOR); (2) evaluated the underlying factor structure of the GAATOR; (3) examined changes in the endorsement of step work over time; and (4) investigated how, if at all, 12-step work predicted later substance use. New AA affiliates (N = 130) completed assessments at intake, 3, 6, and 9 months. Significantly more participants endorsed step work on the GAATOR than on the AAI for nine of the 12 steps. An exploratory factor analysis revealed a two-factor structure for the GAATOR comprising behavioral step work and spiritual step work. Behavioral step work did not change over time, but was predicted by having a sponsor, while spiritual step work decreased over time and increases were predicted by attending 12-step meetings or treatment. Behavioral step work did not prospectively predict substance use. In contrast, spiritual step work predicted percent days abstinent. Behavioral step work and spiritual step work appear to be conceptually distinct components of step work that have distinct predictors and unique impacts on outcomes. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  1. Microfluidic step-emulsification in a cylindrical geometry

    NASA Astrophysics Data System (ADS)

    Chakraborty, Indrajit; Leshansky, Alexander M.

    2016-11-01

The model microfluidic device for high-throughput droplet generation in a confined cylindrical geometry is investigated numerically. The device comprises a core-annular pressure-driven flow of two immiscible viscous liquids through a cylindrical capillary connected co-axially to a tube of larger diameter through a sudden expansion, mimicking the microfluidic step-emulsifier (1). To study this problem, numerical simulations of the axisymmetric Navier-Stokes equations have been carried out using an interface capturing procedure based on coupled level set and volume-of-fluid (CLSVOF) methods. The accuracy of the numerical method was verified against the predictions of the linear stability analysis of core-annular two-phase flow in a cylindrical capillary. Three distinct flow regimes can be identified: the dripping (D) instability near the entrance to the capillary, and the step- (S) and balloon- (B) emulsification at the step-like expansion. Based on the simulation results we present the phase diagram quantifying transitions between the various regimes in the plane of the capillary number and the flow-rate ratio. MICROFLUSA EU H2020 project.

  2. Two-step chlorination: A new approach to disinfection of a primary sewage effluent.

    PubMed

    Li, Yu; Yang, Mengting; Zhang, Xiangru; Jiang, Jingyi; Liu, Jiaqi; Yau, Cie Fu; Graham, Nigel J D; Li, Xiaoyan

    2017-01-01

    Sewage disinfection aims at inactivating pathogenic microorganisms and preventing the transmission of waterborne diseases. Chlorination is extensively applied for disinfecting sewage effluents. The objective of achieving a disinfection goal and reducing disinfectant consumption and operational costs remains a challenge in sewage treatment. In this study, we have demonstrated that, for the same chlorine dosage, a two-step addition of chlorine (two-step chlorination) was significantly more efficient in disinfecting a primary sewage effluent than a one-step addition of chlorine (one-step chlorination), and shown how the two-step chlorination was optimized with respect to time interval and dosage ratio. Two-step chlorination of the sewage effluent attained its highest disinfection efficiency at a time interval of 19 s and a dosage ratio of 5:1. Compared to one-step chlorination, two-step chlorination enhanced the disinfection efficiency by up to 0.81- or even 1.02-log for two different chlorine doses and contact times. An empirical relationship involving disinfection efficiency, time interval and dosage ratio was obtained by best fitting. Mechanisms (including a higher overall Ct value, an intensive synergistic effect, and a shorter recovery time) were proposed for the higher disinfection efficiency of two-step chlorination in the sewage effluent disinfection. Annual chlorine consumption costs in one-step and two-step chlorination of the primary sewage effluent were estimated. Compared to one-step chlorination, two-step chlorination reduced the cost by up to 16.7%. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    NASA Technical Reports Server (NTRS)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
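    The split-integration idea can be illustrated with a toy subcycling step. This is a much-simplified sketch of the multirate principle only; EMTSS's vertical-mode splitting, periodic recombination, and polar modification are not reproduced, and the function names are illustrative:

```python
def multirate_step(y, dt_slow, n_sub, f_slow, f_fast):
    """One large time step of a split-explicit (multirate) scheme.

    The slow tendency is evaluated once and held fixed while the fast
    (e.g. gravity-wave) tendency is subcycled with a smaller step, so
    the expensive slow terms are integrated at the large CFL-limited
    step for low-frequency waves.
    """
    slow = f_slow(y)                  # slow physics, frozen over the big step
    dt_fast = dt_slow / n_sub
    for _ in range(n_sub):            # subcycle the fast terms
        y = y + dt_fast * (slow + f_fast(y))
    return y
```

    The payoff is that f_slow is called once per large step while only the cheap fast tendency is evaluated n_sub times, mirroring the cost structure that makes schemes like EMTSS efficient.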

  4. The International DORIS Service (IDS) - Recent Developments in Preparation for ITRF2013

    NASA Technical Reports Server (NTRS)

    Willis, Pascal; Lemoine, Frank G.; Moreaux, Guilhem; Soudarin, Laurent; Ferrage, Pascale; Ries, John; Otten, Michiel; Saunier, Jerome; Noll, Carey E.; Biancale, Richard

    2014-01-01

The International DORIS Service (IDS) was created in 2003 under the umbrella of the International Association of Geodesy (IAG) to foster scientific research related to the French DORIS tracking system and to deliver scientific products, mostly related to the International Earth rotation and Reference systems Service (IERS). We first present some general background related to the DORIS system (current and planned satellites, current tracking network and expected evolution) and to the general IDS organization (Data Centers, Analysis Centers, and the Combination Center). Then, we discuss some of the steps recently taken to prepare the IDS submission to ITRF2013 (combined weekly time series based on individual solutions from several Analysis Centers). In particular, recent results obtained from the Analysis Centers and the Combination Center show that improvements can still be made when updating physical models of some DORIS satellites, such as Envisat, Cryosat-2 or Jason-2. The DORIS contribution to ITRF2013 should also benefit from the larger number of ground observations collected by the last generation of DGXX receivers (the first such instrument being onboard the Jason-2 satellite). In particular for polar motion, sub-milliarcsecond accuracy now seems achievable. Weekly station positioning internal consistency also seems to be improved with a larger DORIS constellation.

  5. Concepts and models of coupled systems

    NASA Astrophysics Data System (ADS)

    Ertsen, Maurits

    2017-04-01

    In this paper, I will especially focus on the question of the position of human agency, social networks and complex co-evolutionary interactions in socio-hydrological models. The long-term perspective of complex systems modeling typically focuses on regional or global spatial scales and century/millennium time scales. It is still a challenge to relate correlations in outcomes defined at those longer and larger scales to the causalities at the shorter and smaller scales. How do we move today to the next 1000 years in the same way that our ancestors did move from their today to our present, in the small steps that produce reality? Please note, I am not arguing that long-term work is not interesting. I just pose the question of how to deal with the problem that we employ relations with hindsight that matter to us, but not necessarily to the agents that produced the relations we think we have observed. I would like to push the socio-hydrological community a little into rethinking how to deal with complexity, with the aim of bringing together the timescales of humans and complexity. I will provide one or two examples of how larger-scale and longer-term observations on water flows and environmental loads can be broken down into smaller-scale and shorter-term production processes of these same loads.

  6. Astrophysics in the Era of Massive Time-Domain Surveys

    NASA Astrophysics Data System (ADS)

    Djorgovski, G.

    Synoptic sky surveys are now the largest data producers in astronomy, entering the Petascale regime and opening the time domain for systematic exploration. A great variety of interesting phenomena, spanning essentially all subfields of astronomy, can only be studied in the time domain, and these new surveys are producing large statistical samples of the known types of objects and events for further studies (e.g., SNe, AGN, variable stars of many kinds), and have already uncovered previously unknown subtypes of these (e.g., rare or peculiar types of SNe). These surveys are generating a new science and paving the way for even larger surveys to come, e.g., the LSST; our ability to fully exploit such forthcoming facilities depends critically on the science, methodology, and experience that are being accumulated now. Among the outstanding challenges, the foremost is our ability to conduct an effective follow-up of the interesting events discovered by the surveys in any wavelength regime. The follow-up resources, especially spectroscopy, are already severely limited and, for the foreseeable future, will remain so, thus requiring an intelligent down-selection of the most astrophysically interesting events to follow. The first step in that process is an automated, real-time, iterative classification of events that incorporates heterogeneous data from the surveys themselves, archival and contextual information (spatial, temporal, and multiwavelength), and the incoming follow-up observations. The second step is an optimal automated event prioritization and allocation of the available follow-up resources, which themselves change in time. Both of these challenges are highly non-trivial, and require a strong cyber-infrastructure based on the Virtual Observatory data grid and the various astroinformatics efforts. Time domain astronomy is inherently an astronomy of telescope-computational systems, and will increasingly depend on novel machine learning and artificial intelligence tools.
Another arena with a strong potential for discovery is a purely archival, non-time-critical exploration of the time domain, with the time dimension adding the complexity to an already challenging problem of data mining of highly-dimensional parameter spaces produced by sky surveys.

  7. Application of data cubes for improving detection of water cycle extreme events

    NASA Astrophysics Data System (ADS)

    Teng, W. L.; Albayrak, A.

    2015-12-01

    As part of an ongoing NASA-funded project to remove a longstanding barrier to accessing NASA data (i.e., accessing archived time-step array data as point-time series), for the hydrology and other point-time series-oriented communities, "data cubes" are created from which time series files (aka "data rods") are generated on-the-fly and made available as Web services from the Goddard Earth Sciences Data and Information Services Center (GES DISC). Data cubes are data as archived rearranged into spatio-temporal matrices, which allow for easy access to the data, both spatially and temporally. A data cube is a specific case of the general optimal strategy of reorganizing data to match the desired means of access. The gain from such reorganization is greater the larger the data set. As a use case for our project, we are leveraging existing software to explore the application of the data cubes concept to machine learning, for the purpose of detecting water cycle extreme (WCE) events, a specific case of anomaly detection that requires time series data. We investigate the use of the sequential probability ratio test (SPRT) for anomaly detection and support vector machines (SVM) for anomaly classification. We show an example of detection of WCE events, using the Global Land Data Assimilation System (GLDAS) data set.
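    A minimal sketch of SPRT-based anomaly detection on a time series, assuming a Gaussian mean-shift formulation with Wald's decision thresholds; the variable names and parameter values are illustrative and do not reproduce the GES DISC implementation:

```python
import math

# Hedged sketch of a sequential probability ratio test (SPRT) flagging a
# shift in the mean of a Gaussian time series (e.g., a "data rods" series).
# H0: mean mu0 (normal conditions); H1: mean mu1 (anomalous conditions).
def sprt(series, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    upper = math.log((1 - beta) / alpha)   # cross -> accept H1 (anomaly)
    lower = math.log(beta / (1 - alpha))   # cross -> accept H0 (normal)
    llr = 0.0
    for t, x in enumerate(series):
        # log-likelihood ratio increment for N(mu1, sigma) vs N(mu0, sigma)
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2
        if llr >= upper:
            return "anomaly", t
        if llr <= lower:
            return "normal", t
    return "undecided", len(series) - 1

# values persistently near the anomalous level mu1 = 1.0
series = [0.9, 1.1, 1.0, 0.95, 1.05]
label, when = sprt(series, mu0=0.0, mu1=1.0, sigma=0.5)
```

The appeal of the sequential form here is that a decision is reached as soon as the accumulated evidence crosses a threshold, rather than after a fixed-length window.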

  8. "Flash" dance: how speed modulates perceived duration in dancers and non-dancers.

    PubMed

    Sgouramani, Helena; Vatakis, Argiro

    2014-03-01

    Speed has been proposed as a modulating factor on duration estimation. However, the different measurement methodologies and experimental designs used have led to inconsistent results across studies, and, thus, the issue of how speed modulates time estimation remains unresolved. Additionally, no studies have looked into the role of expertise in spatiotemporal tasks (tasks requiring high temporal and spatial acuity; e.g., dancing) and susceptibility to modulations of speed in timing judgments. In the present study, therefore, using naturalistic, dynamic dance stimuli, we aimed at defining the role of speed and the interaction of speed and experience on time estimation. We presented videos of a dancer performing identical ballet steps in fast and slow versions, while controlling for the number of changes present. Professional dancers and non-dancers performed duration judgments through a production and a reproduction task. Analysis revealed a significantly larger underestimation of fast videos as compared to slow ones during reproduction. The exact opposite result was true for the production task. Dancers were significantly less variable in their time estimations as compared to non-dancers. Speed and experience, therefore, affect participants' estimates of time. Results are discussed in relation to the theoretical framework of current models by focusing on the role of attention. © 2013 Elsevier B.V. All rights reserved.

  9. Application of Data Cubes for Improving Detection of Water Cycle Extreme Events

    NASA Technical Reports Server (NTRS)

    Albayrak, Arif; Teng, William

    2015-01-01

    As part of an ongoing NASA-funded project to remove a longstanding barrier to accessing NASA data (i.e., accessing archived time-step array data as point-time series), for the hydrology and other point-time series-oriented communities, "data cubes" are created from which time series files (aka "data rods") are generated on-the-fly and made available as Web services from the Goddard Earth Sciences Data and Information Services Center (GES DISC). Data cubes are data as archived rearranged into spatio-temporal matrices, which allow for easy access to the data, both spatially and temporally. A data cube is a specific case of the general optimal strategy of reorganizing data to match the desired means of access. The gain from such reorganization is greater the larger the data set. As a use case for our project, we are leveraging existing software to explore the application of the data cubes concept to machine learning, for the purpose of detecting water cycle extreme events, a specific case of anomaly detection that requires time series data. We investigate the use of support vector machines (SVM) for anomaly classification. We show an example of detection of water cycle extreme events, using data from the Tropical Rainfall Measuring Mission (TRMM).

  10. Family profile of victims of child abuse and neglect in the Kingdom of Saudi Arabia.

    PubMed

    Almuneef, Maha A; Alghamdi, Linah A; Saleheen, Hassan N

    2016-08-01

    To describe the family profile of child abuse and neglect (CAN) subjects in Saudi Arabia. Data were collected retrospectively between July 2009 and December 2013 from patients' files, which were obtained from the Child Protection Centre (CPC) based in King Abdulaziz Medical City, Riyadh, Saudi Arabia. Four main sets of variables were examined: demographics of victim, family profile, parental information, and information on perpetrator and forms of abuse. The charts of 220 CAN cases were retrospectively reviewed. Physical abuse was the most common form of abuse (42%), followed by neglect (39%), sexual abuse (14%), and emotional abuse (4%). Children with unemployed fathers were 2.8 times as likely to experience physical abuse. Children living in single/step-parent households were 4 times as likely to experience physical abuse. Regarding neglect, children living in larger households (≥6 members) were 1.5 times as likely to be neglected by their parents as were children living in smaller households (<6 members). Regarding sexual abuse, male children were 2.9 times as likely to be abused as were female children. The recent acknowledgment of CAN as a public health problem in Saudi Arabia suggests that time will be needed to employ effective and culturally sensitive prevention strategies based on family risk factors.

  11. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    NASA Astrophysics Data System (ADS)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics and is well suited to GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling, named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distributions on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generations, show that the hierarchical time step achieves a speedup of around 3-5 times compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single-precision peak performance of the GPU.
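    The saving from hierarchical (block) time steps can be sketched with a toy count of force evaluations; this illustrates the general technique only, not GOTHIC's GPU implementation, and the particle counts are invented for the example:

```python
# Hedged sketch of hierarchical block time stepping: each particle carries a
# power-of-two multiple of the smallest step dt_min, and on each smallest
# tick only the particles whose own step divides the current time advance,
# instead of every particle advancing with the shared (smallest) step.
def block_steps(particle_dts, dt_min, t_end):
    """Count per-particle force evaluations under block time stepping."""
    n_ticks = round(t_end / dt_min)
    evals = 0
    for tick in range(n_ticks):
        for dt in particle_dts:
            # a particle is advanced only on ticks its own step divides
            if tick % round(dt / dt_min) == 0:
                evals += 1
    return evals

dts = [1.0] * 3 + [8.0] * 97          # 3 "fast" particles, 97 "slow" ones
shared = len(dts) * 8                 # shared step: everyone, every tick
hierarchical = block_steps(dts, dt_min=1.0, t_end=8.0)
```

With these toy numbers the shared scheme performs 800 evaluations over 8 ticks while the block scheme performs 121, since the 97 slow particles advance only once.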

  12. The prevalence of upright non-stepping time in comparison to stepping time in 11-13 year old school children across seasons.

    PubMed

    McCrorie, P Rw; Duncan, E; Granat, M H; Stansfield, B W

    2012-11-01

    Evidence suggests that behaviours such as standing are beneficial for our health. Unfortunately, little is known of the prevalence of this state, its importance in relation to time spent stepping or variation across seasons. The aim of this study was to quantify, in young adolescents, the prevalence and seasonal changes in time spent upright and not stepping (UNSt(time)) as well as time spent upright and stepping (USt(time)), and their contribution to overall upright time (U(time)). Thirty-three adolescents (12.2 ± 0.3 y) wore the activPAL activity monitor during four school days on two occasions: November/December (winter) and May/June (summer). UNSt(time) contributed 60% of daily U(time) at winter (Mean = 196 min) and 53% at summer (Mean = 171 min); a significant seasonal effect, p < 0.001. USt(time) was significantly greater in summer compared to winter (153 min versus 131 min, p < 0.001). The effects in UNSt(time) could be explained through significant seasonal differences during the school hours (09:00-16:00), whereas the effects in USt(time) could be explained through significant seasonal differences in the evening period (16:00-22:00). Adolescents spent a greater amount of time upright and not stepping than they did stepping, in both winter and summer. The observed seasonal effects for both UNSt(time) and USt(time) provide important information for behaviour change intervention programs.

  13. Technical note: Signal resolution increase and noise reduction in a CCD digitizer.

    PubMed

    González, A; Martínez, J A; Tobarra, B

    2004-03-01

    Increasing output resolution is assumed to improve noise characteristics of a CCD digitizer. In this work, however, we have found that as the quantization step becomes lower than the analog noise (present in the signal before its conversion to digital) the noise reduction becomes significantly lower than expected. That is the case for values of sigma(an)/delta larger than 0.6, where sigma(an) is the standard deviation of the analog noise and delta is the quantization step. The procedure is applied to a commercially available CCD digitizer, and noise reduction by means of signal resolution increase is compared to that obtained by low pass filtering.
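    The reported effect can be reproduced with a hedged numerical sketch: once the quantization step delta is smaller than the analog noise sigma_an (sigma_an/delta above roughly 0.6), the total noise is dominated by the analog part, so a further resolution increase buys almost no noise reduction. Noise levels, sample size, and seed below are illustrative:

```python
import random
import statistics

# Quantize a zero-mean Gaussian "analog" signal with step delta and measure
# the total (analog + quantization) noise as the standard deviation of the
# quantized samples.
def total_noise(sigma_an, delta, n=100_000, seed=1):
    rng = random.Random(seed)
    samples = [round(rng.gauss(0.0, sigma_an) / delta) * delta
               for _ in range(n)]
    return statistics.pstdev(samples)

coarse = total_noise(sigma_an=1.0, delta=1.0)   # sigma_an/delta = 1.0
fine = total_noise(sigma_an=1.0, delta=0.25)    # 4x finer quantization
# despite 4x finer quantization, total noise drops only a few percent,
# because the analog noise floor dominates
```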

  14. Brittle Creep of Tournemire Shale: Orientation, Temperature and Pressure Dependences

    NASA Astrophysics Data System (ADS)

    Geng, Zhi; Bonnelye, Audrey; Dick, Pierre; David, Christian; Chen, Mian; Schubnel, Alexandre

    2017-04-01

    Time- and temperature-dependent rock deformation has both scientific and socio-economic implications for natural hazards, the oil and gas industry and nuclear waste disposal. During the past decades, most studies on brittle creep have focused on igneous rocks and porous sedimentary rocks. To our knowledge, only a few studies have been carried out on the brittle creep behavior of shale. Here, we conducted a series of creep experiments on shale specimens coming from the French Institute for Nuclear Safety (IRSN) underground research laboratory located in Tournemire, France. Conventional tri-axial experiments were carried out under two different temperatures (26°C, 75°C) and confining pressures (10 MPa, 80 MPa), for three orientations (σ1 along, perpendicular and at 45° to bedding). Following the methodology developed by Heap et al. [2008], differential stress was first increased to ~60% of the short-term peak strength (10^-7 s^-1, Bonnelye et al. 2016), and then in steps of 5 to 10 MPa every 24 hours until brittle failure was achieved. In these long-term experiments (approximately 10 days), stress and strains were recorded continuously, while ultrasonic acoustic velocities were recorded every 1-15 minutes, enabling us to monitor the evolution of elastic wave speed anisotropy. Temporal evolution of anisotropy was illustrated by inverting acoustic velocities to Thomsen parameters. Finally, samples were investigated post-mortem using scanning electron microscopy. Our results seem to contradict the traditional understanding of loading-rate-dependent brittle failure. Indeed, the brittle creep failure stress of our Tournemire shale samples was systematically observed to be ~50% higher than the short-term peak strength, with larger final axial strain accumulated. At higher temperatures, the creep failure strength of our samples was slightly reduced and deformation was characterized by faster 'steady-state' creep axial strain rates at each step, and larger final axial strain accumulated. At each creep step, ultrasonic wave velocities first decreased, and then increased gradually. The magnitude of elastic wave velocity variations showed an important orientation and temperature dependence. Velocities measured perpendicular to bedding showed increased variation, which was enhanced at higher temperature and higher pressure. Complete elastic anisotropy reversal was even observed for the sample deformed perpendicular to bedding, with a reduced amount of axial strain needed to reach the reversal at higher temperature. Our data are indicative of competition between crack growth, sealing/healing, and possibly mineral rotation or anisotropic compaction during creep. SEM investigation confirmed evidence of time-dependent pressure solution and crack sealing/healing. Our research not only has practical engineering consequences but, more importantly, can provide valuable insights into the underlying mechanisms of creep in complex media like shale. In particular, our study highlights that the short-term peak strength has little meaning in shale material, which can over-consolidate substantially by 'plastic' flow. In addition, we showed that elastic anisotropy can switch and even reverse over relatively short time periods (<10 days) and for relatively small amounts of plastic deformation (<5%).

  15. Evaluating ecological monitoring of civic environmental stewardship in the Green-Duwamish watershed, Washington

    Treesearch

    Jacob C. Sheppard; Clare M. Ryan; Dale J. Blahna

    2017-01-01

    The ecological outcomes of civic environmental stewardship are poorly understood, especially at scales larger than individual sites. In this study we characterized civic environmental stewardship programs in the Green-Duwamish watershed in King County, WA, and evaluated the extent to which stewardship outcomes were monitored. We developed a four-step process based on...

  16. 3-D Voxel FEM Simulation of Seismic Wave Propagation in a Land-Sea Structure with Topography

    NASA Astrophysics Data System (ADS)

    Ikegami, Y.; Koketsu, K.

    2003-12-01

    We have already developed a voxel FEM (finite element method) code to simulate seismic wave propagation in a land structure with surface topography (Koketsu, Fujiwara and Ikegami, 2003). Although the conventional FEM often requires much larger memory, longer computation time and far more complicated mesh generation than the finite difference method (FDM), this code consumes a similar amount of memory to FDM and takes only 1.4 times longer thanks to the simplicity of voxels (hexahedron elements). The voxel FEM was successfully applied to inland earthquakes, but most earthquakes in a subduction zone occur beneath a sea, so that a simulation in a land-sea structure is essential for waveform modeling and strong motion prediction there. We now introduce a domain of fluid elements into the model and formulate displacements in the elements using the Lagrange method. Sea-bottom motions are simulated for the simple land-sea models of Okamoto and Takenaka (1999). The simulation results agree well with their reflectivity and FDM seismograms. In order to enhance numerical stability, not only a variable mesh but also an adaptive time step is introduced. We can now choose the optimal time steps everywhere in the model based on the Courant condition. This doubly variable formulation may result in inefficient parallel computing: the wave velocity in a shallow part is lower than that in a deeper part, so if the model is divided into horizontal slices assigned to CPUs, a shallow slice will consist of only small elements, causing unbalanced loads on the CPUs. Accordingly, the model is divided into vertical slices in this study. Vertical slices also reduce inter-processor communication, because a vertical cross section is usually smaller than a horizontal one. In addition, we will consider a higher-order FEM formulation compatible with the fourth-order FDM. We will also present numerical examples to demonstrate the effects of a sea and surface topography on seismic waves and ground motions.
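    The Courant-based choice of local time step mentioned above can be sketched as follows (illustrative element sizes and wave speeds, not the authors' code): the stable explicit step in each region scales with element size over peak wave speed, so different regions of a variable mesh have different locally optimal steps.

```python
# Minimal sketch of a Courant (CFL) stable-time-step estimate for an
# explicit scheme: dt <= C * dx / v_max, with Courant number C < 1.
def courant_dt(dx, v_max, courant=0.5):
    """Largest stable time step for element size dx and peak wave speed v_max."""
    return courant * dx / v_max

# hypothetical shallow low-velocity region vs. deep high-velocity region
dt_shallow = courant_dt(dx=50.0, v_max=500.0)    # small voxels, slow waves
dt_deep = courant_dt(dx=200.0, v_max=8000.0)     # large voxels, fast waves
# the locally optimal steps differ, which is what an adaptive (doubly
# variable) formulation exploits
```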

  17. Immediate Effects of Clock-Turn Strategy on the Pattern and Performance of Narrow Turning in Persons With Parkinson Disease.

    PubMed

    Yang, Wen-Chieh; Hsu, Wei-Li; Wu, Ruey-Meei; Lin, Kwan-Hwa

    2016-10-01

    Turning difficulty is common in people with Parkinson disease (PD). The clock-turn strategy is a cognitive movement strategy to improve turning performance in people with PD, although its effects are unverified. Therefore, this study aimed to investigate the effects of the clock-turn strategy on the pattern of turning steps, turning performance, and freezing of gait during narrow turning, and how these effects were influenced by concurrent performance of a cognitive task (dual task). Twenty-five people with PD were randomly assigned to the clock-turn or usual-turn group. Participants performed the Timed Up and Go test with and without a concurrent cognitive task during the medication OFF period. The clock-turn group performed the Timed Up and Go test using the clock-turn strategy, whereas participants in the usual-turn group performed in their usual manner. Measurements were taken during the 180° turn of the Timed Up and Go test. The pattern of turning steps was evaluated by step time variability and step time asymmetry. Turning performance was evaluated by turning time and number of turning steps. The number and duration of freezing of gait episodes were calculated by video review. The clock-turn group had lower step time variability and step time asymmetry than the usual-turn group. Furthermore, the clock-turn group turned faster with fewer freezing of gait episodes than the usual-turn group. Dual task increased the step time variability and step time asymmetry in both groups but did not affect turning performance and freezing severity. The clock-turn strategy reduces turning time and freezing of gait during turning, probably by lowering step time variability and asymmetry. Dual task compromises the effects of the clock-turn strategy, suggesting a competition for attentional resources. Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, http://links.lww.com/JNPT/A141).

  18. Bias and inference from misspecified mixed-effect models in stepped wedge trial analysis.

    PubMed

    Thompson, Jennifer A; Fielding, Katherine L; Davey, Calum; Aiken, Alexander M; Hargreaves, James R; Hayes, Richard J

    2017-10-15

    Many stepped wedge trials (SWTs) are analysed by using a mixed-effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common-to-all or varied-between clusters. Data were analysed with the standard model or with additional random effects for period effect or intervention effect. In a second simulation study, we explored the weight given to within-cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within-cluster comparisons in the standard model. In the SWTs simulated here, mixed-effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within-cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  19. A new precipitation-based method of baseflow separation and event identification for small watersheds (<50 km2)

    NASA Astrophysics Data System (ADS)

    Koskelo, Antti I.; Fisher, Thomas R.; Utz, Ryan M.; Jordan, Thomas E.

    2012-07-01

    Baseflow separation methods are often impractical, require expensive materials and time-consuming methods, and/or are not designed for individual events in small watersheds. To provide a simple baseflow separation method for small watersheds, we describe a new precipitation-based technique known as the Sliding Average with Rain Record (SARR). The SARR uses rainfall data to justify each separation of the hydrograph. SARR has several advantages: it shows better consistency with the precipitation and discharge records, it is easier and more practical to implement, and it includes a method of event identification based on precipitation and quickflow response. SARR was derived from the United Kingdom Institute of Hydrology (UKIH) method with several key modifications to adapt it for small watersheds (<50 km2). We tested SARR on watersheds in the Choptank Basin on the Delmarva Peninsula (US Mid-Atlantic region) and compared the results with the UKIH method at the annual scale and the hydrochemical method at the individual event scale. Annually, SARR calculated a baseflow index that was ~10% higher than the UKIH method due to the finer time step of SARR (1 d) compared to UKIH (5 d). At the watershed scale, hydric soils were an important driver of the annual baseflow index, likely due to increased groundwater retention in hydric areas. At the event scale, SARR calculated less baseflow than the hydrochemical method, again because of the differences in time step (hourly for hydrochemical) and different definitions of baseflow. Both SARR and hydrochemical baseflow increased with event size, suggesting that baseflow contributions are more important during larger storms. To make SARR easy to implement, we have written a MATLAB program to automate the calculations which requires only daily rainfall and daily flow data as inputs.
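    Since the abstract does not give SARR's exact rules (the rain-record checks in particular), the sketch below shows only the general UKIH-style idea it modifies: deriving a baseflow trace from local minima of the daily flow record, here as a simple sliding minimum. Window length and flow values are illustrative:

```python
# Hedged sketch of minima-based baseflow separation (UKIH-family idea, not
# the published SARR algorithm): baseflow at each day is the minimum daily
# flow in a centered window, which by construction never exceeds total flow;
# quickflow is the remainder.
def sliding_min_baseflow(flow, window=5):
    half = window // 2
    base = []
    for i in range(len(flow)):
        lo = max(0, i - half)
        hi = min(len(flow), i + half + 1)
        base.append(min(flow[lo:hi]))
    return base

daily_flow = [2, 2, 3, 9, 14, 7, 4, 3, 2, 2]   # a storm event over baseflow
baseflow = sliding_min_baseflow(daily_flow)
quickflow = [q - b for q, b in zip(daily_flow, baseflow)]
```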

  20. Bias and inference from misspecified mixed‐effect models in stepped wedge trial analysis

    PubMed Central

    Fielding, Katherine L.; Davey, Calum; Aiken, Alexander M.; Hargreaves, James R.; Hayes, Richard J.

    2017-01-01

    Many stepped wedge trials (SWTs) are analysed by using a mixed‐effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common‐to‐all or varied‐between clusters. Data were analysed with the standard model or with additional random effects for period effect or intervention effect. In a second simulation study, we explored the weight given to within‐cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within‐cluster comparisons in the standard model. In the SWTs simulated here, mixed‐effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within‐cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28556355

  1. How do you design randomised trials for smaller populations? A framework.

    PubMed

    Parmar, Mahesh K B; Sydes, Matthew R; Morris, Tim P

    2016-11-25

    How should we approach trial design when we can get some, but not all, of the way to the numbers required for a randomised phase III trial? We present an ordered framework for designing randomised trials to address the problem when the ideal sample size is considered larger than the number of participants that can be recruited in a reasonable time frame. Staying with the frequentist approach that is well accepted and understood in large trials, we propose a framework that includes small alterations to the design parameters. These aim to increase the numbers achievable and also potentially reduce the sample size target. The first step should always be to attempt to extend collaborations, consider broadening eligibility criteria and increase the accrual time or follow-up time. The second set of ordered considerations are the choice of research arm, outcome measures, power and target effect. If the revised design is still not feasible, in the third step we propose moving from two- to one-sided significance tests, changing the type I error rate, using covariate information at the design stage, re-randomising patients and borrowing external information. We discuss the benefits of some of these possible changes and warn against others. We illustrate, with a worked example based on the Euramos-1 trial, the application of this framework in designing a trial that is feasible, while still providing a good evidence base to evaluate a research treatment. This framework would allow appropriate evaluation of treatments when large-scale phase III trials are not possible, but where the need for high-quality randomised data is as pressing as it is for common diseases.

  2. Three-dimensional inverse modelling of damped elastic wave propagation in the Fourier domain

    NASA Astrophysics Data System (ADS)

    Petrov, Petr V.; Newman, Gregory A.

    2014-09-01

    3-D full waveform inversion (FWI) of seismic wavefields is routinely implemented with explicit time-stepping simulators. A clear advantage of explicit time stepping is the avoidance of solving large-scale implicit linear systems that arise with frequency domain formulations. However, FWI using explicit time stepping may require a very fine time step and (as a consequence) significant computational resources and run times. If the computational challenges of wavefield simulation can be effectively handled, an FWI scheme implemented within the frequency domain utilizing only a few frequencies, offers a cost effective alternative to FWI in the time domain. We have therefore implemented a 3-D FWI scheme for elastic wave propagation in the Fourier domain. To overcome the computational bottleneck in wavefield simulation, we have exploited an efficient Krylov iterative solver for the elastic wave equations approximated with second and fourth order finite differences. The solver does not exploit multilevel preconditioning for wavefield simulation, but is coupled efficiently to the inversion iteration workflow to reduce computational cost. The workflow is best described as a series of sequential inversion experiments, where in the case of seismic reflection acquisition geometries, the data has been laddered such that we first image highly damped data, followed by data where damping is systemically reduced. The key to our modelling approach is its ability to take advantage of solver efficiency when the elastic wavefields are damped. As the inversion experiment progresses, damping is significantly reduced, effectively simulating non-damped wavefields in the Fourier domain. While the cost of the forward simulation increases as damping is reduced, this is counterbalanced by the cost of the outer inversion iteration, which is reduced because of a better starting model obtained from the larger damped wavefield used in the previous inversion experiment. 
For cross-well data, it is also possible to launch a successful inversion experiment without laddering the damping constants. With this type of acquisition geometry, the solver is still quite effective using a small fixed damping constant. To avoid cycle skipping, we also employ a multiscale imaging approach, in which the frequency content of the data is also laddered (with the data now including both reflection and cross-well acquisition geometries). Thus, the inversion process is launched using low-frequency data to first recover the long spatial wavelengths of the image. With this image as a new starting model, adding higher-frequency data refines and enhances the resolution of the image. FWI using laddered frequencies with an efficient damping scheme enables reconstructing elastic attributes of the subsurface at a resolution that approaches half the smallest wavelength utilized to image the subsurface. We show the possibility of effectively carrying out such reconstructions using two to six frequencies, depending upon the application. Using the proposed FWI scheme, massively parallel computing resources are essential for reasonable execution times.
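The laddered workflow can be sketched generically. The toy below is our illustration only, not the authors' solver: a damped least-squares problem stands in for the damped-wavefield inversion, and each stage warm-starts from the previous stage's model while the damping is stepped down.

```python
import numpy as np

# Toy "ladder" schedule (assumption: a linear least-squares stand-in, not the
# paper's elastic FWI): solve a damped problem, reduce the damping, and
# warm-start the next stage from the previous model.
def damped_inversion(G, d, m0, damping, n_iter=1000):
    m = m0.copy()
    lr = 1.0 / (np.linalg.norm(G, 2) ** 2 + damping)   # stable step size
    for _ in range(n_iter):
        grad = G.T @ (G @ m - d) + damping * m         # damped misfit gradient
        m -= lr * grad
    return m

rng = np.random.default_rng(0)
G = rng.normal(size=(20, 5))        # forward operator (toy)
m_true = np.arange(1.0, 6.0)        # "true" model
d = G @ m_true                      # noise-free data

m = np.zeros(5)
for damping in [10.0, 1.0, 0.1, 0.0]:   # ladder: heavy damping -> none
    m = damped_inversion(G, d, m, damping)
```

In the paper each stage is a damped elastic-wave simulation plus an inversion update; the sketch shows only the schedule, in which early, heavily damped stages supply a better starting model for the later, lightly damped ones.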

  3. Short-Term Adaptive Modification of Dynamic Ocular Accommodation

    PubMed Central

    Bharadwaj, Shrikant R.; Vedamurthy, Indu; Schor, Clifton M.

    2009-01-01

    Purpose Indirect observations suggest that the neural control of accommodation may undergo adaptive recalibration in response to age-related biomechanical changes in the accommodative system. However, there has been no direct demonstration of such an adaptive capability. This investigation was conducted to demonstrate short-term adaptation of accommodative step response dynamics to optically induced changes in neuromuscular demands. Methods Repetitive changes in accommodative effort were induced in 15 subjects (18–34 years) with a double-step adaptation paradigm wherein an initial 2-D step change in blur was followed 350 ms later by either a 2-D step increase in blur (increasing-step paradigm) or a 1.75-D step decrease in blur (decreasing-step paradigm). Peak velocity, peak acceleration, and latency of 2-D single-step test responses were assessed before and after 1.5 hours of training with these paradigms. Results Peak velocity and peak acceleration of 2-D step responses increased after adaptation to the increasing-step paradigm (9/12 subjects), and they decreased after adaptation to the decreasing-step paradigm (4/9 subjects). Adaptive changes in peak velocity and peak acceleration generalized to responses that were smaller (1 D) and larger (3 D) than the 2-D adaptation stimulus. The magnitude of adaptation correlated poorly with the subject's age, but it was significantly negatively correlated with the preadaptation dynamics. Response latency decreased after adaptation, irrespective of the direction of adaptation. Conclusions Short-term adaptive changes in accommodative step response dynamics could be induced, at least in some of our subjects between 18 and 34 years, with a directional bias toward increasing rather than decreasing the dynamics. PMID:19255153

  4. Primary Accretion and Turbulent Cascades: Scale-Dependence of Particle Concentration Multiplier Probability Distribution Functions

    NASA Astrophysics Data System (ADS)

    Cuzzi, Jeffrey N.; Weston, B.; Shariff, K.

    2013-10-01

Primitive bodies with diameters of tens to hundreds of km (or even larger) may form directly from small nebula constituents, bypassing the step-by-step "incremental growth" that faces a variety of barriers at cm, m, and even 1-10 km sizes. In the scenario of Cuzzi et al (Icarus 2010 and LPSC 2012; see also Chambers Icarus 2010), the immediate precursors of 10-100 km diameter asteroid formation are dense clumps of chondrule- (mm-) size objects. The predictions of this scenario utilize a so-called cascade model, which is popular in turbulence studies. One of its usual assumptions is that certain statistical properties of the process (the so-called multiplier pdfs p(m)) are scale-independent within a cascade of energy from large eddy scales to smaller scales. In similar analyses, Pan et al (2011 ApJ) found discrepancies with the results of Cuzzi and coworkers; one possibility was that p(m) for particle concentration is not scale-independent. To assess the situation we have analyzed recent 3D direct numerical simulations of particles in turbulence covering a much wider range of scales than analyzed by either Cuzzi and coworkers or Pan and coworkers (see Bec et al 2010, J. Fluid Mech. 646, 527). We calculated p(m) at scales ranging from 45η to 1024η, where η is the Kolmogorov scale, both for particles with a range of stopping times spanning the optimum value and for energy dissipation in the fluid. For comparison, the p(m) for dissipation have been observed to be scale-independent in atmospheric flows (at much larger Reynolds number) for scales of at least 30-3000η. We found that, in the numerical simulations, the multiplier distributions for both particle concentration and fluid dissipation are as expected at scales of tens of η, but both become narrower and less intermittent at larger scales.
This is consistent with atmospheric observations showing scale independence to >3000η if scale-free behavior is established only after some number (of order 10) of large-scale bifurcations (at scales perhaps 10x smaller than the largest scales in the flow), with the multipliers becoming scale-free at smaller scales. Predictions of primitive-body initial mass functions can now be redone using a slightly modified cascade.
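The cascade model's scale-independence assumption can be illustrated with a toy binary cascade (our sketch, not the analysis pipeline of the paper): at every level each interval's measure is split between two children by a multiplier m drawn from a fixed pdf p(m), so by construction the same p(m) is recovered at every scale.

```python
import numpy as np

# Toy conservative binary cascade: each parent's measure is split into
# fractions m and 1-m, with m drawn from a fixed multiplier pdf p(m)
# (here beta(2,2)). Scale independence means every level sees the same p(m).
rng = np.random.default_rng(1)

def cascade(levels, a=2.0):
    measure = np.array([1.0])
    for _ in range(levels):
        m = rng.beta(a, a, size=measure.size)            # multipliers
        measure = np.column_stack((m * measure, (1 - m) * measure)).ravel()
    return measure

def multipliers(measure):
    # fraction of each parent interval's measure held by its left child
    parents = measure[0::2] + measure[1::2]
    return measure[0::2] / parents

w = cascade(12)                                # 2**12 finest-scale intervals
m_fine = multipliers(w)                        # multipliers at the finest scale
w_coarse = w.reshape(-1, 4).sum(axis=1)        # coarsen by two levels
m_coarse = multipliers(w_coarse)               # multipliers two levels up
```

Both `m_fine` and `m_coarse` sample the same beta pdf; testing whether measured multiplier distributions really do coincide across scales is exactly the question the paper addresses for turbulence data.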

  5. Comparison of step-by-step kinematics in repeated 30m sprints in female soccer players.

    PubMed

    van den Tillaar, Roland

    2018-01-04

The aim of this study was to compare step-by-step kinematics in repeated 30-m sprints in female soccer players. Seventeen subjects performed seven 30-m sprints every 30 s in one session. Kinematics were measured with an infrared contact mat and a laser gun, and running times with an electronic timing device. The main findings were that sprint times increased over the repeated sprint ability test. The main changes in kinematics during the repeated sprint ability test were increased contact time and decreased step frequency, while no change in step length was observed. Within each sprint, step velocity increased with almost every step until the 14th step, which occurred around 22 m. After this, the velocity was stable until the last step, when it decreased. This increase in step velocity was mainly caused by increased step length and decreased contact times. It was concluded that the fatigue induced by repeated 30-m sprints in female soccer players resulted in decreased step frequency and increased contact time. Employing this approach in combination with a laser gun and an infrared mat over 30 m makes it very easy to analyse running kinematics in repeated sprints in training. This extra information gives the athlete, coach and sports scientist the opportunity to give more detailed feedback and to better target these changes in kinematics to enhance repeated sprint performance.

  6. Melatonin: a universal time messenger.

    PubMed

    Erren, Thomas C; Reiter, Russel J

    2015-01-01

Temporal organization plays a key role in humans, and presumably all species on Earth. A core building block of the chronobiological architecture is the master clock, located in the suprachiasmatic nuclei [SCN], which organizes "when" things happen in sub-cellular biochemistry, cells, organs and organisms, including humans. Conceptually, time messaging should follow a five-step cascade. While abundant evidence suggests how steps 1 through 4 work, step 5, "how is central time information transmitted throughout the body?", awaits elucidation. Step 1: Light provides information on environmental (external) time; Step 2: Ocular interfaces between light and biological (internal) time are intrinsically photosensitive retinal ganglion cells [ipRGC] and rods and cones; Step 3: Via the retinohypothalamic tract, external time information reaches the light-dependent master clock in the brain, viz the SCN; Step 4: The SCN translate environmental time information into biological time and distribute this information to numerous brain structures via a melanopsin-based network. Step 5: Melatonin, we propose, transmits, or is a messenger of, internal time information to all parts of the body to allow the temporal organization that is orchestrated by the SCN. Key reasons why we expect melatonin to have such a role include: First, melatonin, as the chemical expression of darkness, is centrally involved in time- and timing-related processes such as encoding clock and calendar information in the brain; Second, melatonin travels throughout the body without limits and is thus a ubiquitous molecule. The chemical conservation of melatonin in all tested species could make this molecule a candidate for a universal time messenger, possibly constituting a legacy of an all-embracing evolutionary history.

  7. Lateral step initiation behavior in older adults.

    PubMed

    Sparto, Patrick J; Jennings, J Richard; Furman, Joseph M; Redfern, Mark S

    2014-02-01

Older adults have varied postural responses during induced and voluntary lateral stepping. The purpose of the research was to quantify the occurrence of different stepping strategies during lateral step initiation in older adults and to relate the stepping responses to retrospective history of falls. Seventy community-ambulating older adults (mean age 76 y, range 70-94 y) performed voluntary lateral steps as quickly as possible to the right or left in response to a visual cue, in a blocked design. Vertical ground reaction forces were measured using a forceplate, and the number and latency of postural adjustments were quantified. Subjects were assigned to groups based on their stepping strategy. The frequency of trials with one or two postural adjustments was compared with data from 20 younger adults (mean age 38 y, range 21-58 y). Logistic regression was used to relate presence of a fall in the previous year with the number and latency of postural adjustments. In comparison with younger adults, who almost always demonstrated one postural adjustment when stepping laterally, older adults exhibited a continuous distribution in the percentage of step trials made with one postural adjustment (from 0% to 100% of trials). Latencies of the initial postural adjustment and foot liftoff varied depending on the number of postural adjustments made. A history of falls was associated with a larger percentage of trials with two postural adjustments and a longer latency of foot liftoff. In conclusion, the number and latency of postural adjustments made during voluntary lateral stepping provides additional evidence that lateral control of posture may be a critical indicator of aging. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Lateral step initiation behavior in older adults

    PubMed Central

    Sparto, Patrick J; Jennings, J Richard; Furman, Joseph M; Redfern, Mark S

    2013-01-01

Older adults have varied postural responses during induced and voluntary lateral stepping. The purpose of the research was to quantify the occurrence of different stepping strategies during lateral step initiation in older adults and to relate the stepping responses to retrospective history of falls. Seventy community-ambulating older adults (mean age 76 y, range 70–94 y) performed voluntary lateral steps as quickly as possible to the right or left in response to a visual cue, in a blocked design. Vertical ground reaction forces were measured using a forceplate, and the number and latency of postural adjustments were quantified. Subjects were assigned to groups based on their stepping strategy. The frequency of trials with one or two postural adjustments was compared with data from 20 younger adults (mean age 38 y, range 21–58 y). Logistic regression was used to relate presence of a fall in the previous year with the number and latency of postural adjustments. In comparison with younger adults, who almost always demonstrated one postural adjustment when stepping laterally, older adults exhibited a continuous distribution in the percentage of step trials made with one postural adjustment (from 0% to 100% of trials). Latencies of the initial postural adjustment and foot liftoff varied depending on the number of postural adjustments made. A history of falls was associated with a larger percentage of trials with two postural adjustments and a longer latency of foot liftoff. In conclusion, the number and latency of postural adjustments made during voluntary lateral stepping provides additional evidence that lateral control of posture may be a critical indicator of aging. PMID:24295896

  9. Comparison of step-by-step kinematics of resisted, assisted and unloaded 20-m sprint runs.

    PubMed

    van den Tillaar, Roland; Gamble, Paul

    2018-03-26

This investigation examined step-by-step kinematics of sprint running acceleration. Using a randomised counterbalanced approach, 37 female team handball players (age 17.8 ± 1.6 years, body mass 69.6 ± 9.1 kg, height 1.74 ± 0.06 m) performed resisted, assisted and unloaded 20-m sprints within a single session. 20-m sprint times and step velocity, as well as step length, step frequency, and contact and flight times of each step, were evaluated for each condition with a laser gun and an infrared mat. Almost all measured parameters were altered for each step under the resisted and assisted sprint conditions (η² ≥ 0.28). The exception was step frequency, which did not differ between assisted and normal sprints. Contact time, flight time and step frequency at almost every step differed between 'fast' and 'slow' sub-groups (η² ≥ 0.22). Nevertheless, both groups overall responded similarly to the respective sprint conditions. No significant differences in step length were observed between groups for the respective conditions. It is possible that continued exposure to assisted sprinting might allow the female team-sports players studied to adapt their coordination to the 'over-speed' condition and increase step frequency. It is notable that step-by-step kinematics in these sprints were easy to obtain using relatively inexpensive equipment with possibilities of direct feedback.

  10. The General Alcoholics Anonymous Tools of Recovery: The Adoption of 12-Step Practices and Beliefs

    PubMed Central

    Greenfield, Brenna L.; Tonigan, J. Scott

    2013-01-01

Working the 12 steps is widely prescribed for Alcoholics Anonymous (AA) members, although the relative merits of different methods for measuring step-work have received minimal attention and even less is known about how step-work predicts later substance use. The current study (1) compared endorsements of step-work on a face-valid, or direct, measure, the Alcoholics Anonymous Inventory (AAI), with an indirect measure of step-work, the General Alcoholics Anonymous Tools of Recovery (GAATOR), (2) evaluated the underlying factor structure of the GAATOR, (3) examined changes in the endorsement of step-work over time, and (4) investigated how, if at all, 12-step-work predicted later substance use. New AA affiliates (N = 130) completed assessments at intake, 3, 6, and 9 months. Significantly more participants endorsed step-work on the GAATOR than on the AAI for nine of the 12 steps. An exploratory factor analysis revealed a two-factor structure for the GAATOR comprising Behavioral Step-Work and Spiritual Step-Work. Behavioral Step-Work did not change over time but was predicted by having a sponsor, while Spiritual Step-Work decreased over time, with increases predicted by attending 12-step meetings or treatment. Behavioral Step-Work did not prospectively predict substance use. In contrast, Spiritual Step-Work predicted percent days abstinent, an effect that is consistent with recent work on the mediating effects of spiritual growth, AA, and increased abstinence. Behavioral and Spiritual Step-Work appear to be conceptually distinct components of step-work that have distinct predictors and unique impacts on outcomes. PMID:22867293

  11. Measuring border delay and crossing times at the US-Mexico border : part II. Step-by-step guidelines for implementing a radio frequency identification (RFID) system to measure border crossing and wait times.

    DOT National Transportation Integrated Search

    2012-06-01

    The purpose of these step-by-step guidelines is to assist in planning, designing, and deploying a system that uses radio frequency identification (RFID) technology to measure the time needed for commercial vehicles to complete the northbound border c...

  12. Movement patterns and the medium-sized city. Tenants on the move in Gweru, Zimbabwe.

    PubMed

    Grant, M

    1995-01-01

During 1965-79, urban growth rates accelerated, and growth continued after Zimbabwe's independence in 1980. For 1960-80, the estimated urban growth rate was 5.6%, as compared with the natural growth rate of 3.5%, and the urban growth rate was 5.0% to 8.1% for the period 1982-92. Gweru, Zimbabwe, had a population of 110,000 in 1990, and as the provincial capital it is an important destination for rural and interurban migrants. Between 1982 and 1990 there was a 4.9% growth rate, causing the municipal waiting list for housing to exceed 14,000 by mid-1990. In a large study on migration and rental shelter, 188 tenants were interviewed in high-density, low-medium-density, and periurban areas of the city with the intent of tracing respondents and the nature of migration streams. Regarding origins and connections, only one-fifth of the migrants were born in Gweru; more than half were born in rural areas, and the rest in other urban areas. More than 90% still had rural homes. Two-thirds made rural home visits six times or less a year, and one-fourth visited seven times a year to once a month. 40% of the migrants to Gweru originated in larger cities, 24% in smaller urban areas, and 36% in rural areas. 58% moved to high-density areas, 34% to low-medium-density, and 8% to peri-urban areas. The dominant motive was the search for employment and direct transfers; thus economic factors dominated over social factors. Three groups were distinguished according to length of stay: 1) 5 years or less, who lived mainly in high- and low-medium-density housing; 2) 6-15 years; and 3) more than 15 years, who lived in low-density and high-density areas. Regarding the previous two migrations, two-thirds stayed at the previous place for 5 years or less. The reasons for migration were overcrowding, family, and employment. Within Gweru high mobility was typical: one-third had made one move, 43% two moves, and 27% three moves. Lodgers were the most mobile, since one-third had moved three times.

  13. Rapid determination of 226Ra in emergency urine samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maxwell, Sherrod L.; Culligan, Brian K.; Hutchison, Jay B.

    2014-02-27

A new method has been developed at the Savannah River National Laboratory (SRNL) that can be used for the rapid determination of 226Ra in emergency urine samples following a radiological incident. If a radiological dispersal device event or a nuclear accident occurs, there will be an urgent need for rapid analyses of radionuclides in urine samples to ensure the safety of the public. Large numbers of urine samples will have to be analyzed very quickly. This new SRNL method was applied to 100 mL urine aliquots; however, the method can be applied to smaller or larger sample aliquots as needed. The method was optimized for rapid turnaround times; urine samples may be prepared for counting in <3 h. A rapid calcium phosphate precipitation method was used to pre-concentrate 226Ra from the urine sample matrix, followed by removal of calcium by cation exchange separation. A stacked elution method using DGA Resin was used to purify the 226Ra during the cation exchange elution step. This approach combines the cation resin elution step with the simultaneous purification of 226Ra with DGA Resin, saving time. 133Ba was used instead of 225Ra as a tracer to allow immediate counting; however, 225Ra can still be used as an option. The rapid purification of 226Ra to remove interferences using DGA Resin was compared with a slightly longer Ln Resin approach. A final barium sulfate micro-precipitation step was used with isopropanol present to reduce solubility, producing alpha spectrometry sources with peaks typically <40 keV FWHM (full width at half maximum). This new rapid method is fast, has very high tracer yield (>90%), and removes interferences effectively. The sample preparation method can also be adapted to ICP-MS measurement of 226Ra, with rapid removal of isobaric interferences.

  14. Multiple-time-stepping generalized hybrid Monte Carlo methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Escribano, Bruno, E-mail: bescribano@bcamath.org; Akhmatskaya, Elena; IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao

    2015-01-01

Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple time stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows for larger step sizes in the simulation of complex systems.
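The force-splitting idea underlying MTS can be sketched with an impulse (r-RESPA-style) integrator. The toy below is our illustration, not ProtoMol's implementation: a stiff "fast" spring is integrated with a small inner step inside each outer step of a soft "slow" spring, and the total energy stays bounded.

```python
# Impulse multiple-time-stepping sketch (assumed toy forces, not the paper's
# molecular system): slow force kicks at the outer step, fast force is
# integrated with velocity Verlet at a finer inner step.
def fast_force(x):          # stiff spring, cheap to evaluate
    return -100.0 * x

def slow_force(x):          # soft spring, standing in for an expensive force
    return -1.0 * x

def mts_step(x, v, dt_outer, n_inner):
    dt_inner = dt_outer / n_inner
    v += 0.5 * dt_outer * slow_force(x)          # half kick from slow force
    for _ in range(n_inner):                     # inner velocity-Verlet loop
        v += 0.5 * dt_inner * fast_force(x)
        x += dt_inner * v
        v += 0.5 * dt_inner * fast_force(x)
    v += 0.5 * dt_outer * slow_force(x)          # closing half kick
    return x, v

def energy(x, v):           # kinetic + fast potential (50 x^2) + slow (0.5 x^2)
    return 0.5 * v**2 + 50.0 * x**2 + 0.5 * x**2

x, v = 1.0, 0.0
e0 = energy(x, v)
for _ in range(1000):
    x, v = mts_step(x, v, dt_outer=0.05, n_inner=10)
e1 = energy(x, v)
```

The slow force is evaluated once per outer step instead of once per inner step, which is the source of the speedup; choosing the outer step to avoid resonance with the fast motion is the stability issue the paper's stochastic framework addresses.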

  15. Mass imbalances in EPANET water-quality simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Michael J.; Janke, Robert; Taxon, Thomas N.

EPANET is widely employed to simulate water quality in water distribution systems. However, the time-driven simulation approach used to determine concentrations of water-quality constituents provides accurate results, in general, only for small water-quality time steps; use of an adequately short time step may not be feasible. Overly long time steps can yield errors in concentrations and result in situations in which constituent mass is not conserved. Mass may not be conserved even when EPANET gives no errors or warnings. This paper explains how such imbalances can occur and provides examples of such cases; it also presents a preliminary event-driven approach that conserves mass with a water-quality time step as long as the hydraulic time step. Results obtained using the current approach converge, or tend to converge, to those obtained using the new approach as the water-quality time step decreases. Improving the water-quality routing algorithm used in EPANET could eliminate mass imbalances and related errors in estimated concentrations.
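One generic way such imbalances arise is when a concentration pulse is shorter than the water-quality time step, so sampling only at step boundaries misses it entirely. The toy below is our illustration of that pitfall, not EPANET's actual routing code:

```python
# Toy time-driven mass accounting (assumed setup, not EPANET's algorithm):
# constituent mass passing a node is integrated as conc * flow * dt, with the
# concentration sampled once per water-quality time step.
def routed_mass(pulse_start, pulse_end, conc, flow, dt, t_end):
    mass = 0.0
    t = 0.0
    while t < t_end:
        c = conc if pulse_start <= t < pulse_end else 0.0   # sampled at step start
        mass += c * flow * dt
        t += dt
    return mass

# 30 s pulse of concentration 5 at flow 2: injected mass = 5 * 2 * 30 = 300
injected = 5.0 * 2.0 * 30.0
coarse = routed_mass(10.0, 40.0, 5.0, 2.0, dt=60.0, t_end=120.0)   # step too long
fine = routed_mass(10.0, 40.0, 5.0, 2.0, dt=1.0, t_end=120.0)      # short step
```

With the 60-unit step the samples at t = 0 and t = 60 straddle the pulse and the routed mass is zero; with the 1-unit step the routed mass matches the injected mass, mirroring the paper's observation that results converge as the water-quality step shrinks.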

  16. Raman spectral post-processing for oral tissue discrimination – a step for an automatized diagnostic system

    PubMed Central

    Carvalho, Luis Felipe C. S.; Nogueira, Marcelo Saito; Neto, Lázaro P. M.; Bhattacharjee, Tanmoy T.; Martin, Airton A.

    2017-01-01

Most oral injuries are diagnosed by histopathological analysis of a biopsy, which is an invasive procedure and does not give immediate results. On the other hand, Raman spectroscopy is a real-time, minimally invasive analytical tool with potential for the diagnosis of diseases. This potential can be improved by data post-processing. Hence, this study aims to evaluate the performance of preprocessing steps and multivariate analysis methods for the classification of normal tissue and pathological oral lesion spectra. A total of 80 spectra acquired from normal and abnormal tissues using optical fiber Raman-based spectroscopy (OFRS) were subjected to PCA preprocessing of the z-scored data set, and then to the KNN (K-nearest neighbors), J48 (unpruned C4.5 decision tree), RBF (radial basis function), RF (random forest), and MLP (multilayer perceptron) classifiers in the WEKA software (Waikato Environment for Knowledge Analysis), after either area normalization or maximum intensity normalization. Our results suggest the best classification was achieved by using maximum intensity normalization followed by MLP. Based on these results, software for automated analysis can be generated and validated using larger data sets. This would aid quick comprehension of spectroscopic data and easy diagnosis by medical practitioners in clinical settings. PMID:29188115
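The two normalizations compared in the study can be written in a few lines of NumPy (our sketch with made-up spectra; the paper's pipeline used WEKA):

```python
import numpy as np

# Maximum intensity normalization scales each spectrum so its peak is 1;
# area normalization scales each spectrum so its values sum to 1.
def max_intensity_normalize(spectra):
    return spectra / spectra.max(axis=1, keepdims=True)

def area_normalize(spectra):
    return spectra / spectra.sum(axis=1, keepdims=True)

# Two toy 3-channel "spectra" (rows) just to exercise the functions.
spectra = np.array([[1.0, 2.0, 4.0],
                    [2.0, 6.0, 2.0]])
mx = max_intensity_normalize(spectra)   # each row's max becomes 1
ar = area_normalize(spectra)            # each row's sum becomes 1
```

Either normalization would precede the z-scoring and PCA steps; the study's finding is that the maximum-intensity variant fed to an MLP classified best.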

  17. Raman spectral post-processing for oral tissue discrimination - a step for an automatized diagnostic system.

    PubMed

    Carvalho, Luis Felipe C S; Nogueira, Marcelo Saito; Neto, Lázaro P M; Bhattacharjee, Tanmoy T; Martin, Airton A

    2017-11-01

Most oral injuries are diagnosed by histopathological analysis of a biopsy, which is an invasive procedure and does not give immediate results. On the other hand, Raman spectroscopy is a real-time, minimally invasive analytical tool with potential for the diagnosis of diseases. This potential can be improved by data post-processing. Hence, this study aims to evaluate the performance of preprocessing steps and multivariate analysis methods for the classification of normal tissue and pathological oral lesion spectra. A total of 80 spectra acquired from normal and abnormal tissues using optical fiber Raman-based spectroscopy (OFRS) were subjected to PCA preprocessing of the z-scored data set, and then to the KNN (K-nearest neighbors), J48 (unpruned C4.5 decision tree), RBF (radial basis function), RF (random forest), and MLP (multilayer perceptron) classifiers in the WEKA software (Waikato Environment for Knowledge Analysis), after either area normalization or maximum intensity normalization. Our results suggest the best classification was achieved by using maximum intensity normalization followed by MLP. Based on these results, software for automated analysis can be generated and validated using larger data sets. This would aid quick comprehension of spectroscopic data and easy diagnosis by medical practitioners in clinical settings.

  18. Transformations of the distribution of nuclei formed in a nucleation pulse: Interface-limited growth.

    PubMed

    Shneidman, Vitaly A

    2009-10-28

A typical nucleation-growth process is considered: a system is quenched into a supersaturated state with a small critical radius r*⁻ and is allowed to nucleate during a finite time interval t_n, after which the supersaturation is abruptly reduced to a fixed value with a larger critical radius r*⁺. The size distribution of nucleated particles f(r,t) further evolves due to their deterministic growth and decay for r larger or smaller than r*⁺, respectively. A general analytic expression for f(r,t) is obtained, and it is shown that after a large growth time t this distribution approaches an asymptotic shape determined by two dimensionless parameters: λ, related to t_n, and Λ = r*⁺/r*⁻. This shape is strongly asymmetric, with exponential and double-exponential cutoffs at small and large sizes, respectively, and with a broad, near-flat top in the case of a long pulse. Conversely, for a short pulse the distribution acquires a distinct maximum at r = r_max(t) and approaches a universal shape exp[ζ - e^ζ], with ζ proportional to r - r_max, independent of the pulse duration. General asymptotic predictions are examined in terms of the Zeldovich-Frenkel nucleation model, where the entire transient behavior can be described in terms of the Lambert W function. Modifications for the Turnbull-Fisher model are also considered, and the analytics are compared with exact numerics. Results are expected to have direct implementations in the analysis of two-step annealing crystallization experiments, although other applications might be anticipated due to the universality of the nucleation pulse technique.
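Two properties of the universal short-pulse shape exp[ζ - e^ζ] are easy to verify numerically (our addition, not part of the abstract): the exponent ζ - e^ζ has zero derivative at ζ = 0, so the shape peaks there, and the substitution u = e^ζ shows it integrates to exactly 1.

```python
import numpy as np

# Numerical check of the universal shape f(ζ) = exp[ζ - e^ζ]:
# peak at ζ = 0 and unit area (Gumbel-type density).
zeta = np.linspace(-20.0, 5.0, 200001)
f = np.exp(zeta - np.exp(zeta))
peak_location = zeta[np.argmax(f)]                   # should sit at ζ = 0
area = float(np.sum(f) * (zeta[1] - zeta[0]))        # simple Riemann sum
```

The exponential left tail and double-exponential right cutoff mentioned in the abstract are both visible in this expression: for ζ → -∞, f ≈ e^ζ, while for ζ → +∞ the e^(-e^ζ) factor dominates.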

  19. A grid-doubling finite-element technique for calculating dynamic three-dimensional spontaneous rupture on an earthquake fault

    USGS Publications Warehouse

    Barall, Michael

    2009-01-01

    We present a new finite-element technique for calculating dynamic 3-D spontaneous rupture on an earthquake fault, which can reduce the required computational resources by a factor of six or more, without loss of accuracy. The grid-doubling technique employs small cells in a thin layer surrounding the fault. The remainder of the modelling volume is filled with larger cells, typically two or four times as large as the small cells. In the resulting non-conforming mesh, an interpolation method is used to join the thin layer of smaller cells to the volume of larger cells. Grid-doubling is effective because spontaneous rupture calculations typically require higher spatial resolution on and near the fault than elsewhere in the model volume. The technique can be applied to non-planar faults by morphing, or smoothly distorting, the entire mesh to produce the desired 3-D fault geometry. Using our FaultMod finite-element software, we have tested grid-doubling with both slip-weakening and rate-and-state friction laws, by running the SCEC/USGS 3-D dynamic rupture benchmark problems. We have also applied it to a model of the Hayward fault, Northern California, which uses realistic fault geometry and rock properties. FaultMod implements fault slip using common nodes, which represent motion common to both sides of the fault, and differential nodes, which represent motion of one side of the fault relative to the other side. We describe how to modify the traction-at-split-nodes method to work with common and differential nodes, using an implicit time stepping algorithm.
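The interpolation join between fine and coarse regions can be illustrated in one dimension (our toy, not FaultMod's actual 3-D scheme): a fine grid of half the spacing abuts a coarse grid, and each fine "hanging" node with no coarse counterpart is constrained to the average of its two flanking coarse nodes, which keeps a linearly varying field continuous across the join.

```python
import numpy as np

# 1-D hanging-node sketch (assumed setup): coarse spacing 2, fine spacing 1.
coarse_x = np.arange(0.0, 9.0, 2.0)          # coarse nodes at 0, 2, 4, 6, 8
coarse_u = 3.0 * coarse_x + 1.0              # a linear field on the coarse grid

fine_x = np.arange(0.0, 8.5, 1.0)            # fine nodes at 0, 1, ..., 8
fine_u = np.empty_like(fine_x)
fine_u[0::2] = coarse_u                      # shared nodes copy coarse values
fine_u[1::2] = 0.5 * (coarse_u[:-1] + coarse_u[1:])   # hanging nodes interpolate
```

Because the interpolation weights are linear, the joined field reproduces any linear variation exactly; in the 3-D non-conforming mesh the same idea ties the thin layer of small cells near the fault to the surrounding larger cells.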

  20. Effect of quartz overgrowth precipitation on the multiscale porosity of sandstone: A (U)SANS and imaging analysis

    DOE PAGES

    Anovitz, Lawrence M.; Cole, David R.; Jackson, Andrew J.; ...

    2015-06-01

We have performed a series of experiments to understand the effects of quartz overgrowths on nanometer- to centimeter-scale pore structures of sandstones. Blocks from two samples of St. Peter Sandstone with different initial porosities (5.8 and 18.3%) were reacted for 3 days to 7.5 months at 100 and 200 °C in aqueous solutions supersaturated with respect to quartz by reaction with amorphous silica. Porosity in the resultant samples was analyzed using small and ultrasmall angle neutron scattering and scanning electron microscope/backscattered electron (SEM/BSE)-based image-processing techniques. Significant changes were observed in the multiscale pore structures. By three days, much of the overgrowth in the low-porosity sample had dissolved away. The reason for this is uncertain, but the overgrowths can be clearly distinguished from the original core grains in the BSE images. At longer times the larger pores are observed to fill with plate-like precipitates. As with the unreacted sandstones, porosity is a step function of size. Grain boundaries are typically fractal, but no evidence of mass-fractal or fuzzy-interface behavior was observed, suggesting a structural difference between chemical and clastic sediments. After the initial loss of the overgrowths, image-scale porosity (>~1 cm) decreases with time. Submicron porosity (typically ~25% of the total) is relatively constant or slightly decreasing in absolute terms, but the percent change is significant. Fractal dimensions decrease at larger scales, and increase at smaller scales, with increased precipitation.

  1. On the large eddy simulation of turbulent flows in complex geometry

    NASA Technical Reports Server (NTRS)

    Ghosal, Sandip

    1993-01-01

    Application of the method of Large Eddy Simulation (LES) to a turbulent flow consists of three separate steps. First, a filtering operation is performed on the Navier-Stokes equations to remove the small spatial scales. The resulting equations that describe the space-time evolution of the 'large eddies' contain the subgrid-scale (sgs) stress tensor that describes the effect of the unresolved small scales on the resolved scales. The second step is the replacement of the sgs stress tensor by some expression involving the large scales - this is the problem of 'subgrid-scale modeling'. The final step is the numerical simulation of the resulting 'closed' equations for the large scale fields on a grid small enough to resolve the smallest of the large eddies, but still much larger than the fine scale structures at the Kolmogorov length. In dividing a turbulent flow field into 'large' and 'small' eddies, one presumes that a cut-off length delta can be sensibly chosen such that all fluctuations on a scale larger than delta are 'large eddies' and the remainder constitute the 'small scale' fluctuations. Typically, delta would be a length scale characterizing the smallest structures of interest in the flow. In an inhomogeneous flow, the 'sensible choice' for delta may vary significantly over the flow domain. For example, in a wall bounded turbulent flow, most statistical averages of interest vary much more rapidly with position near the wall than far away from it. Further, there are dynamically important organized structures near the wall on a scale much smaller than the boundary layer thickness. Therefore, the minimum size of eddies that need to be resolved is smaller near the wall. In general, for the LES of inhomogeneous flows, the width of the filtering kernel delta must be considered to be a function of position. 
If a filtering operation with a nonuniform filter width is performed on the Navier-Stokes equations, one does not in general get the standard large eddy equations. The complication is caused by the fact that a filtering operation with a nonuniform filter width in general does not commute with the operation of differentiation. This is one of the issues that we have looked at in detail as it is basic to any attempt at applying LES to complex geometry flows. Our principal findings are summarized.
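    The non-commutation noted above is easy to verify numerically. In this sketch (Gaussian kernel and synthetic two-scale field are both illustrative choices, not taken from the report), filtering and differentiation commute to near machine precision in the domain interior when the filter width is constant, but not when the width varies with position:

```python
import numpy as np

N = 2000
x = np.linspace(0.0, 2.0 * np.pi, N)
dx = x[1] - x[0]
u = np.sin(3.0 * x) + 0.2 * np.sin(20.0 * x)   # model field with a large and a small scale

def gaussian_filter(field, width):
    """Discrete Gaussian filter whose width may vary with position."""
    out = np.empty_like(field)
    for i in range(N):
        w = np.exp(-0.5 * ((x - x[i]) / width[i]) ** 2)
        out[i] = np.sum(w * field) / np.sum(w)
    return out

def commutation_error(width):
    """Max interior difference between filter(du/dx) and d/dx(filter(u))."""
    a = gaussian_filter(np.gradient(u, dx), width)          # filter the derivative
    b = np.gradient(gaussian_filter(u, dx * 0 + width), dx) # differentiate the filtered field
    return np.max(np.abs(a - b)[300:-300])                  # interior only, away from edges

err_uniform = commutation_error(np.full(N, 0.05))           # constant width: commutes
err_varying = commutation_error(0.05 + 0.04 * np.sin(x))    # variable width: does not
print(err_uniform, err_varying)
```

With a uniform width the two orderings agree to round-off; with a position-dependent width the difference is orders of magnitude larger, which is exactly the extra term that must be modeled or controlled when applying LES to complex geometries.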

  2. Faithful test of nonlocal realism with entangled coherent states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Chang-Woo; Jeong, Hyunseok; Paternostro, Mauro

    2011-02-15

    We investigate the violation of Leggett's inequality for nonlocal realism using entangled coherent states and various types of local measurements. We prove mathematically the relation between the violation of the Clauser-Horne-Shimony-Holt form of Bell's inequality and that of Leggett's inequality when tested with the same resources. For Leggett inequalities, we generalize the nonlocal realistic bound to systems in Hilbert spaces larger than bidimensional ones and introduce an optimization technique that allows one to achieve larger degrees of violation by adjusting the local measurement settings. Our work describes the steps that should be performed to produce a self-consistent generalization of Leggett's original arguments to continuous-variable states.

  3. Method for improving separation of carbohydrates from wood pulping and wood or biomass hydrolysis liquors

    DOEpatents

    Griffith, William Louis; Compere, Alicia Lucille; Leitten, Jr., Carl Frederick

    2010-04-20

    A method for separating carbohydrates from pulping liquors includes the steps of providing a wood pulping or wood or biomass hydrolysis pulping liquor having lignin therein, and mixing the liquor with an acid, or a gas which forms an acid upon contact with water, to initiate precipitation of carbohydrate and begin formation of a precipitate. During precipitation, at least one long-chain carboxylated carbohydrate and at least one cationic polymer, such as a polyamine or polyimine, are added, whereupon the precipitate aggregates into larger precipitate structures. Carbohydrate gel precipitates are then selectively removed from the larger precipitate structures. The process yields both a carbohydrate precipitate and a high-purity lignin.

  4. Quantum memory operations in a flux qubit - spin ensemble hybrid system

    NASA Astrophysics Data System (ADS)

    Saito, S.; Zhu, X.; Amsuss, R.; Matsuzaki, Y.; Kakuyanagi, K.; Shimo-Oka, T.; Mizuochi, N.; Nemoto, K.; Munro, W. J.; Semba, K.

    2014-03-01

    Superconducting quantum bits (qubits) are one of the most promising candidates for a future large-scale quantum processor. However, for larger-scale realizations, the currently reported coherence times of these macroscopic objects (superconducting qubits) have not yet reached those of microscopic systems (electron spins, nuclear spins, etc.). In this context, a superconductor-spin ensemble hybrid system has attracted considerable attention. The spin ensemble could operate as a quantum memory for superconducting qubits. We have experimentally demonstrated quantum memory operations in a superconductor-diamond hybrid system. An excited state and a superposition state prepared in the flux qubit can be transferred to, stored in, and retrieved from the NV spin ensemble in diamond. From these experiments, we have found that the coherence time of the spin ensemble is limited by the inhomogeneous broadening of the electron spin (4.4 MHz) and by the hyperfine coupling to nitrogen nuclear spins (2.3 MHz). In the future, spin echo techniques could eliminate these effects and extend the coherence time. Our results are a significant first step toward utilizing the spin ensemble as a long-lived quantum memory for superconducting flux qubits. This work was supported by the FIRST program and NICT.

  5. Past and future challenges from a display mask writer perspective

    NASA Astrophysics Data System (ADS)

    Ekberg, Peter; von Sydow, Axel

    2012-06-01

    Since its breakthrough, liquid crystal technology has continued to gain momentum, and the LCD is today the dominating display type used in desktop monitors, television sets, mobile phones and other mobile devices. To improve production efficiency and enable larger screen sizes, the LCD industry has step by step increased the size of the mother glass used in the LCD manufacturing process. Initially the mother glass was only around 0.1 m2 in size, but with each generation the size has increased, and with generation 10 the area reaches close to 10 m2. The increase in mother glass size has in turn led to an increase in the size of the photomasks used - currently the largest masks are around 1.6 × 1.8 meters. A key mask performance criterion is the absence of "mura" - small systematic errors captured only by the very sensitive human eye. To eliminate such systematic errors, special techniques have been developed by Micronic Mydata. Some mura-suppressing techniques are described in this paper. Today, the race towards larger glass sizes has come to a halt and a new race - towards higher resolution and better image quality - is ongoing. The display mask is therefore going through a change that resembles what the semiconductor mask went through some time ago: OPC features are introduced, CD requirements are increasing sharply and multi-tone masks (MTMs) are widely used. Supporting this development, Micronic Mydata has introduced a number of compensation methods in the writer, such as Z-correction, CD map and distortion control. In addition, the Micronic Mydata MMS15000, the world's most precise large-area metrology tool, has played an important role in improving mask placement quality and is briefly described in this paper. Furthermore, proposed specifications and a system architecture concept for a new generation of mask writers - able to fulfill future image quality requirements - are presented in this paper. 
This new system would use an AOD/AOM writing engine and be capable of resolving 0.6 micron features.

  6. Exoskeleton-assisted gait training to improve gait in individuals with spinal cord injury: a pilot randomized study.

    PubMed

    Chang, Shuo-Hsiu; Afzal, Taimoor; Berliner, Jeffrey; Francisco, Gerard E

    2018-01-01

    Robotic wearable exoskeletons have been utilized as gait training devices in persons with spinal cord injury. This pilot study investigated the feasibility of offering exoskeleton-assisted gait training (EGT) to improve gait in individuals with incomplete spinal cord injury (iSCI) in preparation for a phase III RCT. The objective was to assess treatment reliability and potential efficacy of EGT and conventional physical therapy (CPT). Forty-four individuals were screened, and 13 were eligible to participate in the study. Nine participants consented and were randomly assigned to receive either EGT or CPT with a focus on gait. Subjects received EGT or CPT in five sessions a week (1 h/session daily) for 3 weeks. The American Spinal Injury Association (ASIA) Lower Extremity Motor Score (LEMS), 10-Meter Walk Test (10MWT), 6-Minute Walk Test (6MWT), Timed Up and Go (TUG) test, and gait characteristics including stride and step length, cadence, and stance and swing phase durations were assessed pre-training and immediately post-training. Mean difference estimates with 95% confidence intervals were used to analyze the differences. After training, improvement was observed in the 6MWT for the EGT group. The CPT group showed significant improvement in the TUG test. Both the EGT and CPT groups showed a significant increase in right step length, and the EGT group also showed improvement in stride length. EGT could be applied to individuals with iSCI to facilitate gait recovery. The subjects were able to tolerate the treatment; however, the exoskeleton size range may be a limiting factor in recruiting a larger cohort of patients. Future studies with larger sample sizes are needed to investigate the effectiveness and efficacy of exoskeleton-assisted gait training alone and combined with other gait training strategies. Clinicaltrials.org, NCT03011099, retrospectively registered on January 3, 2017.

  7. The stepping behavior analysis of pedestrians from different age groups via a single-file experiment

    NASA Astrophysics Data System (ADS)

    Cao, Shuchao; Zhang, Jun; Song, Weiguo; Shi, Chang'an; Zhang, Ruifang

    2018-03-01

    The stepping behavior of pedestrians in groups with different age compositions is investigated in this paper via a single-file experiment. The relations between step length, step width, and stepping time are analyzed using a step-measurement method based on the curvature of the trajectory. The relations of velocity to step width, step length, and stepping time for different age groups are discussed and compared with previous studies. Finally, the effects of pedestrian gender and height on stepping laws and fundamental diagrams are analyzed. The study is helpful for understanding the dynamics of pedestrian movement. Meanwhile, it offers experimental data for developing microscopic models of pedestrian movement that consider stepping behavior.
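    The curvature-based step measurement can be illustrated on synthetic data (the trajectory, sampling rate, sway amplitude, and sway frequency below are assumptions for illustration, not the experiment's recordings): a walker's lateral sway reverses direction once per step, so curvature maxima of the planar trajectory mark step boundaries.

```python
import numpy as np

fs = 25.0                                  # assumed sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
x = 1.2 * t                                # forward progress at 1.2 m/s
y = 0.03 * np.sin(2.0 * np.pi * 1.0 * t)   # 1 Hz lateral sway (two steps per cycle)

def curvature(x, y, dt):
    """Unsigned curvature of a planar trajectory sampled at interval dt."""
    dx, dy = np.gradient(x, dt), np.gradient(y, dt)
    ddx, ddy = np.gradient(dx, dt), np.gradient(dy, dt)
    return np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

kappa = curvature(x, y, 1.0 / fs)
# Steps occur where the sway reverses, i.e. at prominent curvature maxima:
peaks = [i for i in range(1, len(kappa) - 1)
         if kappa[i] > kappa[i - 1] and kappa[i] >= kappa[i + 1]
         and kappa[i] > 0.5 * kappa.max()]
print(len(peaks))  # two curvature maxima per sway cycle -> about 20 steps in 10 s
```

Segmenting the trajectory at these peaks yields per-step length, width, and duration, which is the kind of quantity the study relates to walking velocity.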

  8. Increasing older adults’ walking through primary care: results of a pilot randomized controlled trial

    PubMed Central

    Mutrie, Nanette

    2012-01-01

    Background. Physical activity can positively influence health for older adults. Primary care is a good setting for physical activity promotion. Objective. To assess the feasibility of a pedometer-based walking programme in combination with physical activity consultations. Methods. Design: Two-arm (intervention/control) 12-week randomized controlled trial with a 12-week follow-up for the intervention group. Setting: One general practice in Glasgow, UK. Participants: Participants were aged ≥65 years. The intervention group received two 30-minute physical activity consultations from a trained practice nurse, a pedometer and a walking programme. The control group continued as normal for 12 weeks and then received the intervention. Both groups were followed up at 12 and 24 weeks. Outcome measures: Step counts were measured by sealed pedometers and an activPAL™ monitor. Psychosocial variables were assessed and focus groups conducted. Results. The response rate was 66% (187/284), and 90% of those randomized (37/41) completed the study. Qualitative data suggested that the pedometer and nurse were helpful to the intervention. Step counts (activPAL) showed a significant increase from baseline to week 12 for the intervention group, while the control group showed no change. Between weeks 12 and 24, step counts were maintained in the intervention group, and increased for the control group after receiving the intervention. The intervention was associated with improved quality of life and reduced sedentary time. Conclusions. It is feasible to recruit and retain older adults from primary care and help them increase walking. A larger trial is necessary to confirm findings and consider cost-effectiveness. PMID:22843637

  9. Research on optimal DEM cell size for 3D visualization of loess terraces

    NASA Astrophysics Data System (ADS)

    Zhao, Weidong; Tang, Guo'an; Ji, Bin; Ma, Lei

    2009-10-01

    In order to represent complex artificial terrains like the loess terraces in Shanxi Province in northwest China, a new 3D visual method, namely the Terraces Elevation Incremental Visual Method (TEIVM), is put forth by the authors. 406 elevation points and 14 enclosed constrained lines were sampled according to the TIN-based Sampling Method (TSM) and the DEM Elevation Points and Lines Classification (DEPLC). The elevation points and constrained lines were used to construct Constrained Delaunay Triangulated Irregular Networks (CD-TINs) of the loess terraces. In order to visualize the loess terraces well with an optimal combination of cell size and Elevation Increment Value (EIV), the CD-TINs were converted to grid-based DEMs (G-DEMs) using different combinations of cell size and EIV with the Bilinear Interpolation Method (BIM). Our case study shows that the new visual method can visualize the loess terrace steps very well when the combination of cell size and EIV is reasonable. The optimal combination is a cell size of 1 m and an EIV of 6 m. Results of the case study also show that the cell size should be smaller than half of both the terraces' average width and the average vertical offset of the terrace steps in order to represent the planar shapes of the terrace surfaces and steps well, while the EIV should be larger than 4.6 times the terraces' average height. The TEIVM and the results above are of great significance for the highly refined visualization of artificial terrains like loess terraces.

  10. Physiologically based pharmacokinetic modeling using microsoft excel and visual basic for applications.

    PubMed

    Marino, Dale J

    2005-01-01

    Physiologically based pharmacokinetic (PBPK) models are mathematical descriptions depicting the relationship between external exposure and internal dose. These models have found great utility for interspecies extrapolation. However, specialized computer software packages, which are not widely distributed, have typically been used for model development and utilization. A few physiological models have been reported using more widely available software packages (e.g., Microsoft Excel), but these tend to include less complex processes and dose metrics. To ascertain the capability of Microsoft Excel and Visual Basic for Applications (VBA) for PBPK modeling, models for styrene, vinyl chloride, and methylene chloride were coded in Advanced Continuous Simulation Language (ACSL), Excel, and VBA, and simulation results were compared. For styrene, differences between ACSL and Excel or VBA compartment concentrations and rates of change were less than +/-7.5E-10 using the same numerical integration technique and time step. Differences using VBA fixed-step or ACSL Gear's methods were generally <1.00E-03, although larger differences involving very small values were noted after exposure transitions. For vinyl chloride and methylene chloride, Excel and VBA PBPK model dose metrics differed by no more than -0.013% or -0.23%, respectively, from ACSL results. These differences are likely attributable to different step sizes rather than different numerical integration techniques. These results indicate that Microsoft Excel and VBA can be useful tools for utilizing PBPK models, and given the availability of these software programs, it is hoped that this effort will help facilitate the use and investigation of PBPK modeling.
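    The step-size effect described above can be reproduced with a toy model. This is a minimal one-compartment sketch, not the paper's styrene or vinyl chloride models: it integrates dC/dt = I - kC with fixed-step forward Euler, the way a row-per-time-step spreadsheet model advances, and compares dose metrics at two step sizes.

```python
def simulate(k, infusion, t_end, dt):
    """Fixed-step forward Euler for dC/dt = infusion - k*C, C(0) = 0,
    mirroring a spreadsheet with one row per time step."""
    n_steps = round(t_end / dt)
    c, auc = 0.0, 0.0
    for _ in range(n_steps):
        auc += c * dt                 # rectangle-rule dose metric (area under curve)
        c += dt * (infusion - k * c)  # one Euler step, as in an Excel row formula
    return c, auc

# Illustrative parameters: elimination rate 0.5/h, constant input 1.0/h, 10 h.
c_coarse, auc_coarse = simulate(k=0.5, infusion=1.0, t_end=10.0, dt=0.1)
c_fine, auc_fine = simulate(k=0.5, infusion=1.0, t_end=10.0, dt=0.001)
# Both runs approach the analytic steady state infusion/k = 2.0; the small
# residual discrepancy between step sizes mirrors the sub-percent
# ACSL-versus-Excel/VBA differences reported above.
print(c_coarse, c_fine, abs(auc_coarse - auc_fine) / auc_fine)
```

The dose-metric difference here is driven entirely by the step size, not the integration scheme, which is the paper's explanation for the small ACSL/Excel discrepancies.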

  11. Performance in physical examination on the USMLE Step 2 Clinical Skills examination.

    PubMed

    Peitzman, Steven J; Cuddy, Monica M

    2015-02-01

    To provide descriptive information about history-taking (HX) and physical examination (PE) performance for U.S. medical students as documented by standardized patients (SPs) during the Step 2 Clinical Skills (CS) component of the United States Medical Licensing Examination. The authors examined two hypotheses: (1) Students perform worse in PE compared with HX, and (2) for PE, students perform worse in the musculoskeletal system and neurology compared with other clinical domains. The sample included 121,767 student-SP encounters based on 29,442 examinees from U.S. medical schools who took Step 2 CS for the first time in 2011. The encounters comprised 107 clinical presentations, each categorized into one of five clinical domains: cardiovascular, gastrointestinal, musculoskeletal, neurological, and respiratory. The authors compared mean percent-correct scores for HX and PE via a one-tailed paired-samples t test and examined mean score differences by clinical domain using analysis of variance techniques. Average PE scores (59.6%) were significantly lower than average HX scores (78.1%). The range of scores for PE (51.4%-72.7%) was larger than for HX (74.4%-81.0%), and the standard deviation for PE scores (28.3) was twice as large as the HX standard deviation (14.7). PE performance was significantly weaker for musculoskeletal and neurological encounters compared with other encounters. U.S. medical students perform worse on PE than HX; PE performance was weakest in musculoskeletal and neurology clinical domains. Findings may reflect imbalances in U.S. medical education, but more research is needed to fully understand the relationships among PE instruction, assessment, and proficiency.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nganga, John K.; Samanamu, Christian R.; Tanski, Joseph M.

    A series of rhenium tricarbonyl complexes coordinated by asymmetric diimine ligands containing a pyridine moiety bound to an oxazoline ring was synthesized, structurally and electrochemically characterized, and screened for CO2 reduction ability. The reported complexes are of the type Re(N-N)(CO)3Cl, with N-N = 2-(pyridin-2-yl)-4,5-dihydrooxazole (1), 5-methyl-2-(pyridin-2-yl)-4,5-dihydrooxazole (2), and 5-phenyl-2-(pyridin-2-yl)-4,5-dihydrooxazole (3). The electrocatalytic reduction of CO2 by these complexes was observed in a variety of solvents and proceeds more quickly in acetonitrile than in dimethylformamide (DMF) and dimethyl sulfoxide (DMSO). Analysis of the catalytic cycle for electrochemical CO2 reduction by 1 in acetonitrile using density functional theory (DFT) supports the C–O bond cleavage step being the rate-determining step (RDS) (ΔG‡ = 27.2 kcal mol–1). Furthermore, the dependence of the turnover frequencies (TOFs) on the donor number (DN) of the solvent also supports C–O bond cleavage as the rate-determining step. Moreover, calculations using explicit solvent molecules indicate that the solvent dependence likely arises from a protonation-first mechanism. Unlike other complexes derived from fac-Re(bpy)(CO)3Cl (I; bpy = 2,2'-bipyridine), in which one of the pyridyl moieties in the bpy ligand is replaced by another imine, no catalytic enhancement occurs at the first reduction potential. Remarkably, catalysts 1 and 2 display relative turnover frequencies, (icat/ip)2, up to 7 times larger than that of I.

  13. Prenatal testosterone and stuttering.

    PubMed

    Montag, Christian; Bleek, Benjamin; Breuer, Svenja; Prüss, Holger; Richardt, Kirsten; Cook, Susanne; Yaruss, J Scott; Reuter, Martin

    2015-01-01

    The prevalence of stuttering is much higher in males than in females. The biological underpinnings of this skewed sex ratio are poorly understood, but it has often been speculated that sex hormones could play an important role. The present study investigated a potential link between prenatal testosterone and stuttering. Here, an indirect indicator of prenatal testosterone levels, the digit ratio (2D:4D) of the hand, was used. As numerous studies have shown, hands with more "male" characteristics (putatively representing greater prenatal testosterone levels) are characterized by a longer ring finger compared to the index finger (represented as a lower 2D:4D ratio) in the general population. We searched for differences in the 2D:4D ratios between 38 persons who stutter and 36 persons who do not stutter. In a second step, we investigated potential links between the 2D:4D ratio and the multifaceted symptomatology of stuttering, as measured by the Overall Assessment of the Speaker's Experience of Stuttering (OASES), in a larger sample of 44 adults who stutter. In the first step, no significant differences in 2D:4D were observed between individuals who stutter and individuals who do not stutter. In the second step, 2D:4D correlated negatively with higher scores on the OASES (representing greater negative experiences due to stuttering), and this effect was more pronounced for female persons who stutter. The findings indicate for the first time that prenatal testosterone may influence individual differences in the psychosocial impact of this speech disorder. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Statistical mechanics of addition polymerisation. Calculations of the expectation and variance of the average atmosphere in growing self avoiding walks

    NASA Astrophysics Data System (ADS)

    Taniya, Abraham; Deepthi, Murali; Padmanabhan, Alapat

    2018-06-01

    Recent calculations on the change in radial dimensions of reacting (growing) polyethylene in the gas phase experiencing Lennard-Jones and Kihara-type potentials revealed that a single reacting polyethylene molecule does not experience polymer collapse. This implies that a transition occurs which is the converse of what happens when molten polyethylene crystallizes, i.e. it transforms from a random-coil-like structure to a folded rigid-rod-type structure. The predicted behaviour of growing polyethylene was explained by treating the head of the growing polymer chain as myopic, whereas the whole chain (i.e. under equilibrium conditions) is treated as having normal vision; that is, the growing chain does not see the attractive part of the LJ or Kihara potentials. In this paper we provide further proof for this argument in two ways. Firstly, we carry forward the exact enumeration calculations on growing self-avoiding walks reported in that paper to larger numbers of steps by using Monte Carlo type calculations. We thereby assign physical significance to the connective constant of self-avoiding walks, which until now was treated as a purely abstract mathematical entity. Secondly, since a reacting polymer molecule that grows by addition polymerisation sees only one step ahead at a time, we extend this calculation by estimating the average atmosphere for molecules, with repulsive potential only (growing self-avoiding walks in two dimensions), that look two, three, four, five ... steps ahead. Our calculation shows that the arguments used in the previous work are correct.
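    A Monte Carlo version of the growing-walk calculation can be sketched as follows (square lattice and kinetic-growth sampling are assumptions for illustration; the authors' exact-enumeration scheme is not reproduced). The atmosphere of a growing walk is the number of unoccupied neighbours its myopic head can step into next.

```python
import random

def grow_walk(n_steps, rng):
    """Grow a 2D self-avoiding walk one step at a time. Returns the atmosphere
    (free-neighbour count) seen before each step, or None if the walk traps."""
    occupied = {(0, 0)}
    head = (0, 0)
    atmospheres = []
    for _ in range(n_steps):
        free = [(head[0] + dx, head[1] + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (head[0] + dx, head[1] + dy) not in occupied]
        atmospheres.append(len(free))
        if not free:                 # the myopic head sees no way out: trapped
            return None
        head = rng.choice(free)      # step uniformly among the free sites
        occupied.add(head)
    return atmospheres

rng = random.Random(1)
samples = []
while len(samples) < 2000:
    a = grow_walk(50, rng)
    if a is not None:
        samples.extend(a[10:])       # discard the early-step transient
avg_atmosphere = sum(samples) / len(samples)
print(avg_atmosphere)
```

After the first step the head always has at most three free neighbours, so the average atmosphere lies between 1 and 3; for equilibrium (unbiased) self-avoiding walks the analogous quantity is tied to the connective constant discussed above.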

  15. Anticipatory Postural Adjustment During Self-Initiated, Cued, and Compensatory Stepping in Healthy Older Adults and Patients With Parkinson Disease.

    PubMed

    Schlenstedt, Christian; Mancini, Martina; Horak, Fay; Peterson, Daniel

    2017-07-01

    To characterize anticipatory postural adjustments (APAs) across a variety of step initiation tasks in people with Parkinson disease (PD) and healthy subjects. Cross-sectional study. Step initiation was analyzed during self-initiated gait, perceptual cued gait, and compensatory forward stepping after platform perturbation. People with PD were assessed on and off levodopa. University research laboratory. People (N=31) with PD (n=19) and healthy age-matched subjects (n=12). Not applicable. Mediolateral (ML) size of APAs (calculated from center of pressure recordings), step kinematics, and body alignment. With respect to self-initiated gait, the ML size of APAs was significantly larger during the cued condition and significantly smaller during the compensatory condition (P<.001). Healthy subjects and patients with PD did not differ in body alignment during the stance phase prior to stepping. No significant group effect was found for ML size of APAs between healthy subjects and patients with PD. However, the reduction in APA size from cued to compensatory stepping was significantly less pronounced in PD off medication compared with healthy subjects, as indicated by a significant group by condition interaction effect (P<.01). No significant differences were found comparing patients with PD on and off medications. Specific stepping conditions had a significant effect on the preparation and execution of step initiation. Therefore, APA size should be interpreted with respect to the specific stepping condition. Across-task changes in people with PD were less pronounced compared with healthy subjects. Antiparkinsonian medication did not significantly improve step initiation in this mildly affected PD cohort. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  16. Maximum magnitude in the Lower Rhine Graben

    NASA Astrophysics Data System (ADS)

    Vanneste, Kris; Merino, Miguel; Stein, Seth; Vleminckx, Bart; Brooks, Eddie; Camelbeeck, Thierry

    2014-05-01

    Estimating Mmax, the assumed magnitude of the largest future earthquakes expected on a fault or in an area, involves large uncertainties. No theoretical basis exists to infer Mmax because even where we know the long-term rate of motion across a plate boundary fault, or the deformation rate across an intraplate zone, neither predicts how strain will be released. As a result, quite different estimates can be made based on the assumptions used. All one can say with certainty is that Mmax is at least as large as the largest earthquake in the available record. However, because catalogs are often short relative to the average recurrence time of large earthquakes, larger earthquakes than anticipated often occur. Estimating Mmax is especially challenging within plates, where deformation rates are poorly constrained, large earthquakes are rarer and variable in space and time, and often occur on previously unrecognized faults. We explore this issue for the Lower Rhine Graben seismic zone, where the largest known earthquake, the 1756 Düren earthquake, has magnitude 5.7 and should occur on average about every 400 years. However, paleoseismic studies suggest that earthquakes with magnitudes up to 6.7 occurred during the Late Pleistocene and Holocene. What to assume for Mmax is crucial for critical facilities like nuclear power plants that should be designed to withstand the maximum shaking in 10,000 years. Using the observed earthquake frequency-magnitude data, we generate synthetic earthquake histories, and sample them over shorter intervals corresponding to the real catalog's completeness. The maximum magnitudes appearing most often in the simulations tend to be those of earthquakes with mean recurrence time equal to the catalog length. Because catalogs are short relative to these recurrence times, we expect earthquakes larger than those observed to date to occur. 
In a next step, we will compute hazard maps for different return periods based on the synthetic catalogs, in order to determine the influence of underestimating Mmax.
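    The synthetic-catalog argument can be illustrated numerically. In this minimal sketch the b-value, event rate, and completeness magnitude are illustrative assumptions, not the Lower Rhine Graben values: magnitudes are drawn from a Gutenberg-Richter distribution, the maximum is recorded over many short observation windows, and the typical window maximum is compared with the magnitude whose mean recurrence time equals the window length.

```python
import math, random

rng = random.Random(42)
b = 1.0        # Gutenberg-Richter b-value (illustrative)
m_min = 4.0    # completeness magnitude (illustrative)
rate = 10.0    # events with M >= m_min per year (illustrative)

def sample_magnitude():
    # Inverse-transform sample of the unbounded G-R law: P(M > m) = 10**(-b*(m - m_min))
    return m_min - math.log10(1.0 - rng.random()) / b

def window_max(years):
    """Largest magnitude appearing in a catalog of the given length."""
    n_events = int(rate * years)
    return max(sample_magnitude() for _ in range(n_events))

observed = sorted(window_max(50.0) for _ in range(400))
typical_max = observed[200]                          # median observed "Mmax" over windows
m_catalog = m_min + math.log10(rate * 50.0) / b      # magnitude with a 50-yr mean recurrence
print(typical_max, m_catalog)
```

The typical 50-year maximum clusters near the magnitude whose recurrence time equals the catalog length, so a short catalog systematically understates the true Mmax of the underlying distribution.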

  17. Fast large-scale clustering of protein structures using Gauss integrals.

    PubMed

    Harder, Tim; Borg, Mikael; Boomsma, Wouter; Røgen, Peter; Hamelryck, Thomas

    2012-02-15

    Clustering protein structures is an important task in structural bioinformatics. De novo structure prediction, for example, often involves a clustering step for finding the best prediction. Other applications include assigning proteins to fold families and analyzing molecular dynamics trajectories. We present Pleiades, a novel approach to clustering protein structures with a rigorous mathematical underpinning. The method approximates clustering based on the root mean square deviation by first mapping structures to Gauss integral vectors, which were introduced by Røgen and co-workers, and subsequently performing K-means clustering. Compared to current methods, Pleiades dramatically improves on the time needed to perform clustering, and can cluster a significantly larger number of structures, while providing state-of-the-art results. The number of low-energy structures generated in a typical folding study, which is on the order of 50,000 structures, can be clustered within seconds to minutes.
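    The vector-then-K-means idea can be sketched in a few lines. In this hypothetical example the Gauss integral computation itself is omitted: random 31-dimensional vectors scattered around three synthetic "fold centres" stand in for the descriptors, and plain Lloyd's K-means recovers the three groups.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for Gauss integral vectors: three synthetic "fold centres" plus noise.
centres = rng.normal(size=(3, 31)) * 5.0
X = np.vstack([c + rng.normal(size=(200, 31)) for c in centres])  # 600 "structures"

def kmeans(X, k, n_iter=25):
    """Plain Lloyd's algorithm with simple deterministic seeding for the sketch
    (a production implementation would use k-means++ or multiple restarts)."""
    mu = X[:: len(X) // k][:k].copy()              # evenly spaced seed points
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)                 # assign to nearest centroid
        for j in range(k):
            if np.any(labels == j):                # guard against an emptied cluster
                mu[j] = X[labels == j].mean(axis=0)
    return labels, mu

labels, mu = kmeans(X, k=3)
```

Because the descriptors are fixed-length vectors, each K-means iteration is linear in the number of structures, which is what lets this approach scale to tens of thousands of decoys where all-pairs RMSD clustering becomes infeasible.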

  18. Dielectric Properties of Boron Nitride-Ethylene Glycol (BN-EG) Nanofluids

    NASA Astrophysics Data System (ADS)

    Fal, Jacek; Cholewa, Marian; Gizowska, Magdalena; Witek, Adam; Żyła, Gaweł

    2017-02-01

    This paper presents the results of an experimental investigation of the dielectric properties of ethylene glycol (EG) with various loadings of boron nitride (BN) nanoparticles. The nanofluids were prepared using a two-step method on the basis of commercially available BN nanoparticles. The measurements were carried out using the Concept 80 System (NOVOCONTROL Technologies GmbH & Co. KG, Montabaur, Germany) in a frequency range from 10 Hz to 10 MHz and at temperatures from 278.15 K to 328.15 K. The frequency-dependent real (ε′) and imaginary (ε″) parts of the complex permittivity (ε*) and the alternating current (AC) conductivity are presented. The effects of temperature and mass concentration on the dielectric properties of the BN-EG nanofluids are also demonstrated. The results show that the most significant increase is achieved for 20 wt.% of BN nanoparticles at 283.15 K and 288.15 K, where it is eleven times larger than in the case of pure EG.

  19. Recursive optimal pruning with applications to tree structured vector quantizers

    NASA Technical Reports Server (NTRS)

    Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen

    1992-01-01

    A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion rate function without time sharing, by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full-search vector quantizers (VQs) for a large range of rates.

  20. Soft liquid phase adsorption for fabrication of organic semiconductor films on wettability patterned surfaces.

    PubMed

    Watanabe, Satoshi; Akiyoshi, Yuri; Matsumoto, Mutsuyoshi

    2014-01-01

    We report a soft liquid-phase adsorption (SLPA) technique for the fabrication of organic semiconductor films on wettability-patterned substrates using toluene/water emulsions. Wettability-patterned substrates were obtained by the UV-ozone treatment of self-assembled monolayers of silane coupling agents on glass plates using a metal mask. Organic semiconductor polymer films were formed selectively on the hydrophobic part of the wettability-patterned substrates. The thickness of the films fabricated by the SLPA technique is significantly larger than that of the films fabricated by dip-coating and spin-coating techniques. The film thickness can be controlled by adjusting the volume ratio of toluene to water, immersion angle, immersion temperature, and immersion time. The SLPA technique allows for the direct production of organic semiconductor films on wettability-patterned substrates with minimized material consumption and reduced number of fabrication steps.

  1. A triggered mechanism retrieves membrane in seconds after Ca(2+)- stimulated exocytosis in single pituitary cells

    PubMed Central

    1994-01-01

    In neuroendocrine cells, cytosolic Ca2+ triggers exocytosis in tens of milliseconds, yet known pathways of endocytic membrane retrieval take minutes. To test for faster retrieval mechanisms, we have triggered short bursts of exocytosis by flash photolysis of caged Ca2+, and have tracked subsequent retrieval by measuring the plasma membrane capacitance. We find that a limited amount of membrane can be retrieved with a time constant of 4 s at 21-26 degrees C, and that this occurs partially via structures larger than coated vesicles. This novel mechanism may be arrested at a late step; incomplete retrieval structures then remain on the cell surface for minutes until a renewed increase in cytosolic [Ca2+] disconnects them from the cell surface in < 1 s. Our results provide evidence for a rapid, triggered membrane retrieval pathway in excitable cells. PMID:8120090

  2. Solid-state NMR studies of proteins immobilized on inorganic surfaces

    DOE PAGES

    Shaw, Wendy J.

    2014-10-29

    Solid state NMR is the primary tool for studying the quantitative, site-specific structure, orientation, and dynamics of biomineralization proteins under biologically relevant conditions. Two calcium phosphate proteins, statherin and leucine rich amelogenin protein (LRAP), have been studied in depth and have different features, challenging our ability to extract design principles. More recent studies of the significantly larger full-length amelogenin represent a challenging but necessary step to ultimately investigate the full diversity of biomineralization proteins. Interactions of amino acids and silaffin peptide with silica are also being studied, along with qualitative studies of proteins interacting with calcium carbonate. Dipolar recoupling techniques have formed the core of the quantitative studies, yet the need for isolated spin pairs makes this approach costly and time intensive. The use of multi-dimensional techniques is advancing, methodology which, despite its challenges with these difficult-to-study proteins, will continue to drive future advancements in this area.

  3. A subjective evaluation of high-chroma color with wide color-gamut display

    NASA Astrophysics Data System (ADS)

    Kishimoto, Junko; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2009-01-01

    Displays tend to expand their color gamuts, as with multi-primary color displays, Adobe RGB, and so on, and have thus become able to display high-chroma colors. However, an image in which only the chroma has been expanded can sometimes look unnatural, and few appropriate gamut mapping methods for expanded color gamuts have been proposed. We are attempting preferred expanded color reproduction on a wide color-gamut display by utilizing high-chroma colors effectively. As a first step, we have conducted an experiment to investigate the psychological effect of color schemes including highly saturated colors. We used the six-primary-color projector that we have developed for the presentation of test colors. The six-primary-color projector's gamut volume in CIELAB space is about 1.8 times larger than that of a normal RGB projector. We conducted a subjective evaluation experiment using the SD (Semantic Differential) technique to quantify the psychological effect of high-chroma colors.

  4. Simulation of a 7.7 MW onshore wind farm with the Actuator Line Model

    NASA Astrophysics Data System (ADS)

    Guggeri, A.; Draper, M.; Usera, G.

    2017-05-01

    Recently, the Actuator Line Model (ALM) has been evaluated with coarser resolution and larger time steps than what is generally recommended, taking into account an atmospheric sheared and turbulent inflow condition. The aim of the present paper is to continue these studies, assessing the capability of the ALM to represent the wind turbines' interactions in an onshore wind farm. The ‘Libertad’ wind farm, which consists of four 1.9 MW Vestas V100 wind turbines, was simulated considering different wind directions, and the results were compared with the wind farm SCADA data, finding good agreement between them. A sensitivity analysis was performed to evaluate the influence of the spatial resolution, finding acceptable agreement, although some differences were found. It is believed that these differences are due to the characteristics of the different Atmospheric Boundary Layer (ABL) simulations taken as inflow condition (precursor simulations).

  5. Stochastic derivative-free optimization using a trust region framework

    DOE PAGES

    Larson, Jeffrey; Billups, Stephen C.

    2016-02-17

    This study presents a trust region algorithm to minimize a function f when one has access only to noise-corrupted function values f¯. The model-based algorithm dynamically adjusts its step length, taking larger steps when the model and function agree and smaller steps when the model is less accurate. The method does not require the user to specify a fixed pattern of points used to build local models and does not repeatedly sample points. If f is sufficiently smooth and the noise is independent and identically distributed with mean zero and finite variance, we prove that our algorithm produces iterates such that the corresponding function gradients converge in probability to zero. As a result, we present a prototype of our algorithm that, while simplistic in its management of previously evaluated points, solves benchmark problems in fewer function evaluations than do existing stochastic approximation methods.
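The step-length adjustment described above follows the usual trust-region pattern: compare the actual decrease in the (noisy) function with the decrease the local model predicted, then grow or shrink the step accordingly. A generic sketch of that ratio test (the thresholds and factors are conventional defaults, not the authors' exact rule):

```python
def update_radius(rho, delta, eta1=0.25, eta2=0.75, shrink=0.5, grow=2.0):
    """Trust-region step-length update. rho is the ratio of actual
    to model-predicted decrease; delta is the current step length.
    Good agreement -> take larger steps; poor agreement -> smaller steps."""
    if rho < eta1:      # model was a poor predictor here
        return shrink * delta
    if rho > eta2:      # model and function agree well
        return grow * delta
    return delta        # acceptable agreement: keep the step length
```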

  6. Complex-valued multistate associative memory with nonlinear multilevel functions for gray-level image reconstruction.

    PubMed

    Tanaka, Gouhei; Aihara, Kazuyuki

    2009-09-01

    A widely used complex-valued activation function for complex-valued multistate Hopfield networks is revealed to be essentially based on a multilevel step function. By replacing the multilevel step function with other multilevel characteristics, we present two alternative complex-valued activation functions. One is based on a multilevel sigmoid function, while the other on a characteristic of a multistate bifurcating neuron. Numerical experiments show that both modifications to the complex-valued activation function bring about improvements in network performance for a multistate associative memory. The advantage of the proposed networks over the complex-valued Hopfield networks with the multilevel step function is more outstanding when a complex-valued neuron represents a larger number of multivalued states. Further, the performance of the proposed networks in reconstructing noisy 256 gray-level images is demonstrated in comparison with other recent associative memories to clarify their advantages and disadvantages.
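The contrast between the two kinds of activation can be sketched as phase quantization on the unit circle: the multilevel step function snaps the phase of the net input to one of K states, while a sigmoid-based variant smooths the staircase so it stays differentiable. A hedged illustration (K and the gain are arbitrary choices for the sketch, not values from the paper):

```python
import numpy as np

K = 8  # number of multivalued states (illustrative choice)

def multilevel_step(z, K=K):
    """Hard multistate activation: quantize the phase of z onto K
    equally spaced points on the unit circle (a multilevel step function)."""
    phase = np.angle(z) % (2 * np.pi)
    k = np.floor(phase * K / (2 * np.pi))
    return np.exp(1j * 2 * np.pi * k / K)

def multilevel_sigmoid(z, K=K, gain=20.0):
    """Smooth alternative: replace the hard staircase with a sum of
    sigmoids, one per threshold, giving a differentiable multilevel
    characteristic that approaches the step function as gain grows."""
    phase = np.angle(z) % (2 * np.pi)
    x = phase * K / (2 * np.pi)  # position along the staircase, in [0, K)
    steps = sum(1.0 / (1.0 + np.exp(-gain * (x - m))) for m in range(1, K))
    return np.exp(1j * 2 * np.pi * steps / K)
```

For inputs away from the thresholds the two functions nearly coincide; the smooth version differs mainly near state boundaries, which is what permits gradient-based analysis.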

  7. Process for preparation of large-particle-size monodisperse latexes

    NASA Technical Reports Server (NTRS)

    Vanderhoff, J. W.; Micale, F. J.; El-Aasser, M. S.; Kornfeld, D. M. (Inventor)

    1981-01-01

    Monodisperse latexes having a particle size in the range of 2 to 40 microns are prepared by seeded emulsion polymerization in microgravity. A reaction mixture containing smaller monodisperse latex seed particles, predetermined amounts of monomer, emulsifier, initiator, inhibitor and water is placed in a microgravity environment, and polymerization is initiated by heating. The reaction is allowed to continue until the seed particles grow to a predetermined size, and the resulting enlarged particles are then recovered. A plurality of particle-growing steps can be used to reach larger sizes within the stated range, with enlarged particles from the previous steps being used as seed particles for the succeeding steps. Microgravity enables preparation of particles in the stated size range by avoiding gravity-related problems of creaming and settling, and flocculation induced by mechanical shear, which have precluded their preparation in a normal gravity environment.

  8. Association of activities of daily living with the load during step ascent motion in nursing home-residing elderly individuals.

    PubMed

    Masaki, Mitsuhiro; Ikezoe, Tome; Kamiya, Midori; Araki, Kojiro; Isono, Ryo; Kato, Takehiro; Kusano, Ken; Tanaka, Masayo; Sato, Syunsuke; Hirono, Tetsuya; Kita, Kiyoshi; Tsuboyama, Tadao; Ichihashi, Noriaki

    2018-04-19

    This study aimed to examine the association of independence in activities of daily living (ADL) with the loads during step ascent motion and other motor functions in 32 nursing home-residing elderly individuals. Independence in ADL was assessed using the functional independence measure (FIM). The loads at the upper (i.e., pulling up) and lower (i.e., pushing up) levels during the step ascent task were measured on a step ascent platform. Hip extensor, knee extensor, plantar flexor muscle, and quadriceps setting strengths; lower extremity agility using the stepping test; and hip and knee joint pain severities were measured. One-legged stance and functional reach distance for balance, and maximal walking speed, timed up-and-go (TUG) time, five-chair-stand time, and step ascent time were also measured to assess mobility. Stepwise regression analysis revealed that the load at pushing up during step ascent motion and TUG time were significant and independent determinants of the FIM score, which decreased as the load at pushing up decreased and TUG time increased. The results suggest that, depending on task specificity, both the peak push-up load during step ascent motion and TUG time can partially explain the FIM score for ADL in nursing home-residing elderly individuals. Lower extremity muscle strength, agility, pain, and balance measures did not add to the prediction.

  9. Toward a Practical Model of Cognitive/Information Task Analysis and Schema Acquisition for Complex Problem-Solving Situations.

    ERIC Educational Resources Information Center

    Braune, Rolf; Foshay, Wellesley R.

    1983-01-01

    The proposed three-step strategy for research on human information processing--concept hierarchy analysis, analysis of example sets to teach relations among concepts, and analysis of problem sets to build a progressively larger schema for the problem space--may lead to practical procedures for instructional design and task analysis. Sixty-four…

  10. Fermilab | Tritium at Fermilab | Ferry Creek Results

    Science.gov Websites

    newsletter Ferry Creek Results chart This chart shows the levels of tritium following the detection of low levels of tritium in Indian Creek in November 2005. Fermilab continues to monitor the ponds and creeks on its site and take steps to keep the levels of tritium

  11. A Novel WA-BPM Based on the Generalized Multistep Scheme in the Propagation Direction in the Waveguide

    NASA Astrophysics Data System (ADS)

    Ji, Yang; Chen, Hong; Tang, Hongwu

    2017-06-01

    A highly accurate wide-angle scheme, based on the generalized multistep scheme in the propagation direction, is developed for the finite difference beam propagation method (FD-BPM). Compared with the previously presented method, simulations show that our method yields a more accurate solution, and the step size can be much larger

  12. Defining and Operationalizing the Construct of Pragmatic Competence: Review and Recommendations. Research Report. ETS RR-15-06

    ERIC Educational Resources Information Center

    Timpe Laughlin, Veronika; Wain, Jennifer; Schmidgall, Jonathan

    2015-01-01

    This review paper constitutes the first step within a larger research effort to develop an interactive pragmatics learning tool for second and foreign language (L2) learners and users of English. The tool will primarily endeavor to support pragmatics learning within the language use domain "workplace." Given this superordinate objective,…

  13. Field performance of Quercus bicolor established as repeatedly air-root-pruned container and bareroot planting stock

    Treesearch

    J.W." Jerry" Van Sambeek; Larry D. Godsey; William D. Walter; Harold E. Garrett; John P. Dwyer

    2016-01-01

    Benefits of repeated air-root-pruning of seedlings when stepping up to progressively larger containers include excellent lateral root distribution immediately below the root collar and an exceptionally fibrous root ball. To evaluate long-term field performance of repeatedly air-root-pruned container stock, three plantings of swamp white oak (Quercus bicolor...

  14. A Step Toward Equal Justice: Programs to Increase Black Lawyers in the South.

    ERIC Educational Resources Information Center

    Evans, Eli, Ed.

    An extensive evaluation of 5 years of grants from private foundations, corporations, and individuals to increase the number of black lawyers in the South is summarized in this pamphlet. The results of the evaluation of the Earl Warren Legal Training Program, Inc., and the Law Students Civil Rights Research Council indicated: (1) Larger numbers of…

  15. Saturn Apollo Program

    NASA Image and Video Library

    2004-04-15

    Saturn 1 Launch summary of research and development flights and operational flights. NASA's initial development plan for the Saturn program had called for the Saturn I to serve as a stepping stone to the development of larger Saturn vehicles ultimately known as the Saturn IB and Saturn V. The Saturn I launch vehicle proved the feasibility of the clustered engines and provided significant new payload lifting capabilities.

  16. Pulse measurement of the hot spot current in a NbTiN superconducting filament

    NASA Astrophysics Data System (ADS)

    Harrabi, K.; Mekki, A.; Kunwar, S.; Maneval, J. P.

    2018-02-01

    We have studied the voltage response of superconducting NbTiN filaments to a step-pulse of over-critical current I > Ic. The current induces the destruction of the Cooper pairs and initiates different mechanisms of dissipation depending on the bath temperature T. For the sample investigated, and for T above a certain T*, not far from Tc, the resistance manifests itself in the form of a phase-slip center, which turns into a normal hot spot (HS) as the step-pulse is given larger amplitudes. However, at all temperatures below T*, the destruction of superconductivity still occurs at Ic(T), but leads directly to an ever-growing HS. By lowering the current amplitude during the pulse, one can produce a steady HS and thus define a threshold HS current Ih(T). That is achieved by combining two levels of current, the first and larger one to initiate an HS, the second one to search for constant voltage response. The double diagram of the functions Ic(T) and Ih(T) was plotted in the T-range Tc/2 < T < Tc, and their crossing found at T* = (8.07 ± 0.07) K.

  17. Using an individualised consultation and activPAL™ feedback to reduce sedentary time in older Scottish adults: results of a feasibility and pilot study.

    PubMed

    Fitzsimons, Claire F; Kirk, Alison; Baker, Graham; Michie, Fraser; Kane, Catherine; Mutrie, Nanette

    2013-11-01

    Sedentary behaviours have been linked to poor health, independent of physical activity levels. The objective of this study was to explore an individualised intervention strategy aimed at reducing sedentary behaviours in older Scottish adults. This feasibility and pilot study was a pre-experimental (one group pretest-posttest) study design. Participants were enrolled into the study in January-March 2012 and data analysis was completed April-October 2012. The study was based in Glasgow, Scotland. Participants received an individualised consultation targeting sedentary behaviour incorporating feedback from an activPAL activity monitor. Outcome measures were objectively (activPAL) and subjectively measured (Sedentary Behaviour Questionnaire) sedentary time. Twenty four participants received the intervention. Objectively measured total time spent sitting/lying was reduced by 24 min/day (p=0.042), a reduction of 2.2%. Total time spent in stepping activities, such as walking increased by 13 min/day (p=0.044). Self-report data suggested participants achieved behaviour change by reducing time spent watching television and/or using motorised transport. Interventions to reduce sedentary behaviours in older people are urgently needed. The results of this feasibility and pilot study suggest a consultation approach may help individuals reduce time spent in sedentary behaviours. A larger, controlled trial is warranted with a diverse sample to increase generalisability. © 2013.

  18. Laser-desorption tandem time-of-flight mass spectrometry with continuous liquid introduction

    NASA Astrophysics Data System (ADS)

    Williams, Evan R.; Jones, Glenn C., Jr.; Fang, LiLing; Nagata, Takeshi; Zare, Richard N.

    1992-05-01

    A new method to combine aqueous sample introduction with matrix-assisted laser desorption mass spectrometry (MS) for interfacing liquid-chromatographic techniques, such as capillary electrophoresis, to MS is described. Aqueous sample solution is introduced directly into the ion source of a time-of-flight (TOF) mass spectrometer through a fused silica capillary; evaporative cooling results in ice formation at the end of the capillary. The ice can be made to extrude continuously by using localized resistive heating. With direct laser desorption, molecular ions from proteins as large as bovine insulin (5734 Da) can be produced. Two-step desorption/photoionization with a variety of wavelengths is demonstrated, and has the advantages of improved resolution and shot-to-shot reproducibility. Ion structural information is obtained using surface-induced dissociation with an in-line collision device in the reflectron mirror of the TOF instrument. Product ion resolution of ~70 is obtained at m/z 77. Extensive fragmentation can be produced, with dissociation efficiencies between 7 and 15% obtained for molecular ions of small organic molecules. Efficiencies approaching 30% are obtained for larger peptide ions.

  19. Flash-Bang Detector to Model the Attenuation of High-Energy Photons

    NASA Astrophysics Data System (ADS)

    Pagsanjan, N., III; Kelley, N. A.; Smith, D. M.; Sample, J. G.

    2015-12-01

    It has been known for years that lightning and thunderstorms produce gamma rays and x-rays. Terrestrial gamma-ray flashes (TGFs) are extremely bright bursts of gamma rays originating from thunderstorms. X-ray stepped leaders are bursts of x-rays coming from the lightning channel. The attenuation of these high-energy photons is a function of distance, with energy and intensity lost at larger distances. To complement gamma-ray detectors on the ground, it would be useful to measure the distance to the flash. Knowing the distance would allow the true source fluence of gamma rays or x-rays to be modeled. A flash-bang detector, which uses a micro-controller, a photodiode, a microphone, and a temperature sensor, will be able to detect the times at which lightning and thunder occur. Knowing the speed of sound as a function of temperature and the time difference between the flash and the thunder, the range to the lightning can be calculated. We will present the design of our detector as well as some preliminary laboratory test results.
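The ranging calculation such a detector performs rests on two standard facts: the speed of sound in dry air is approximately 331.3 + 0.606·T m/s for T in degrees Celsius, and the light from the flash arrives effectively instantaneously. A minimal sketch (function names are ours for illustration, not from the instrument firmware):

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in dry air (m/s) as a linear
    function of air temperature in degrees Celsius."""
    return 331.3 + 0.606 * temp_c

def flash_bang_range(dt_seconds, temp_c):
    """Distance to the lightning channel in meters, given the
    flash-to-thunder delay; light travel time is neglected."""
    return speed_of_sound(temp_c) * dt_seconds

# e.g. a 3 s flash-to-thunder delay at 20 degrees C gives about 1.03 km
distance_m = flash_bang_range(3.0, 20.0)
```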

  20. Future care planning: a first step to palliative care for all patients with advanced heart disease.

    PubMed

    Denvir, M A; Murray, S A; Boyd, K J

    2015-07-01

    Palliative care is recommended for patients with end-stage heart failure with several recent, randomised trials showing improvements in symptoms and quality of life and more studies underway. Future care planning provides a framework for discussing a range of palliative care problems with patients and their families. This approach can be introduced at any time during the patient's journey of care and ideally well in advance of end-of-life care. Future care planning is applicable to a wide range of patients with advanced heart disease and could be delivered systematically by cardiology teams at the time of an unplanned hospital admission, akin to cardiac rehabilitation for myocardial infarction. Integrating cardiology care and palliative care can benefit many patients with advanced heart disease at increased risk of death or hospitalisation. Larger, randomised trials are needed to assess the impact on patient outcomes and experiences. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  1. Algorithm for Training a Recurrent Multilayer Perceptron

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Rais, Omar T.; Menon, Sunil K.; Atiya, Amir F.

    2004-01-01

    An improved algorithm has been devised for training a recurrent multilayer perceptron (RMLP) for optimal performance in predicting the behavior of a complex, dynamic, and noisy system multiple time steps into the future. [An RMLP is a computational neural network with self-feedback and cross-talk (both delayed by one time step) among neurons in hidden layers]. Like other neural-network-training algorithms, this algorithm adjusts network biases and synaptic-connection weights according to a gradient-descent rule. The distinguishing feature of this algorithm is a combination of global feedback (the use of predictions as well as the current output value in computing the gradient at each time step) and recursiveness. The recursive aspect of the algorithm lies in the inclusion of the gradient of predictions at each time step with respect to the predictions at the preceding time step; this recursion enables the RMLP to learn the dynamics. It has been conjectured that carrying the recursion to even earlier time steps would enable the RMLP to represent a noisier, more complex system.

  2. Newmark local time stepping on high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
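The core idea of multilevel LTS can be illustrated by the level-assignment step: each element advances with the largest power-of-two multiple of the smallest CFL-stable time-step that is still stable for its own size, instead of everyone marching at the global minimum. A simplified sketch of that assignment, not the paper's Newmark implementation:

```python
import math

def cfl_timestep(h, wave_speed, courant=0.5):
    """Largest stable explicit time-step for an element of size h
    under a CFL condition with the given Courant number."""
    return courant * h / wave_speed

def lts_levels(element_sizes, wave_speed, courant=0.5):
    """Assign each element a power-of-two level p, meaning it is
    advanced with dt_min * 2**p, as in multilevel local time stepping."""
    dts = [cfl_timestep(h, wave_speed, courant) for h in element_sizes]
    dt_min = min(dts)
    levels = [int(math.floor(math.log2(dt / dt_min))) for dt in dts]
    return dt_min, levels

# One tiny element (h = 1) next to much larger ones: only it needs the
# smallest step; the h = 100 element can take a 2**6 = 64x larger step.
dt_min, levels = lts_levels([1.0, 2.0, 4.0, 100.0], wave_speed=1.0)
```

With global time stepping every element would use dt_min; the level list makes the potential saving from strong element-size contrasts explicit.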

  3. Facile Synthesis of Flower-Like Copper-Cobalt Sulfide as Binder-Free Faradaic Electrodes for Supercapacitors with Improved Electrochemical Properties

    PubMed Central

    Wang, Tianlei; Liu, Meitang; Ma, Hongwen

    2017-01-01

    Supercapacitors have been among the most promising candidates for energy storage because of their significant advantages over rechargeable batteries in terms of large power density, short recharging time, and long cycle lifespan. In this work, Cu–Co sulfides with a uniform flower-like structure have been successfully obtained via a traditional two-step hydrothermal method. The as-fabricated Cu–Co sulfide vulcanized from a precursor (P–Cu–Co sulfide) delivers superior specific capacitances of 592 F g−1 at 1 A g−1 and 518 F g−1 at 10 A g−1, which are about 1.44 times and 2.39 times higher than those of the Cu–Co oxide electrode, respectively. At the same time, the excellent cycling stability of P–Cu–Co sulfide is indicated by 90.4% capacitance retention at a high current density of 10 A g−1 after 3000 cycles. Because of the introduction of sulfur during the vulcanization process, these newly developed sulfides have a more flexible structure and a larger reaction surface area, and offer richer redox reaction sites at the active material/electrolyte interfaces. The uniform flower-like P–Cu–Co sulfide electrode materials show strong potential as alternatives to oxide electrode materials in the future. PMID:28590417

  4. Facile Synthesis of Flower-Like Copper-Cobalt Sulfide as Binder-Free Faradaic Electrodes for Supercapacitors with Improved Electrochemical Properties.

    PubMed

    Wang, Tianlei; Liu, Meitang; Ma, Hongwen

    2017-06-07

    Supercapacitors have been among the most promising candidates for energy storage because of their significant advantages over rechargeable batteries in terms of large power density, short recharging time, and long cycle lifespan. In this work, Cu-Co sulfides with a uniform flower-like structure have been successfully obtained via a traditional two-step hydrothermal method. The as-fabricated Cu-Co sulfide vulcanized from a precursor (P-Cu-Co sulfide) delivers superior specific capacitances of 592 F g−1 at 1 A g−1 and 518 F g−1 at 10 A g−1, which are about 1.44 times and 2.39 times higher than those of the Cu-Co oxide electrode, respectively. At the same time, the excellent cycling stability of P-Cu-Co sulfide is indicated by 90.4% capacitance retention at a high current density of 10 A g−1 after 3000 cycles. Because of the introduction of sulfur during the vulcanization process, these newly developed sulfides have a more flexible structure and a larger reaction surface area, and offer richer redox reaction sites at the active material/electrolyte interfaces. The uniform flower-like P-Cu-Co sulfide electrode materials show strong potential as alternatives to oxide electrode materials in the future.

  5. Exploring cued and non-cued motor imagery interventions in people with multiple sclerosis: a randomised feasibility trial and reliability study.

    PubMed

    Seebacher, Barbara; Kuisma, Raija; Glynn, Angela; Berger, Thomas

    2018-01-01

    Motor imagery (MI) is increasingly used in neurorehabilitation to facilitate motor performance. Our previous study results demonstrated significantly improved walking after rhythmic-cued MI in people with multiple sclerosis (pwMS). The present feasibility study aimed to obtain preliminary information on changes in walking, fatigue, quality of life (QoL) and MI ability following cued and non-cued MI in pwMS. The study further investigated the feasibility of a larger study and examined the reliability of a two-dimensional gait analysis system. At the MS-Clinic, Department of Neurology, Medical University of Innsbruck, Austria, 15 adult pwMS (1.5-4.5 on the Expanded Disability Status Scale, 13 females) were randomised to one of three groups: 24 sessions of 17 min of MI with music and verbal cueing (MVMI), with music alone (MMI), or non-cued (MI). Descriptive statistics were reported for all outcomes. Primary outcomes were walking speed (Timed 25-Foot Walk) and walking distance (6-Minute Walk Test). Secondary outcomes were recruitment rate, retention, adherence, acceptability, adverse events, MI ability (Kinaesthetic and Visual Imagery Questionnaire, Time-Dependent MI test), fatigue (Modified Fatigue Impact Scale) and QoL (Multiple Sclerosis Impact Scale-29). The reliability of a gait analysis system used to assess gait synchronisation with the music beat was tested. Participants showed adequate MI abilities. Post-intervention, improvements in walking speed, walking distance, fatigue, QoL and MI ability were observed in all groups. Success of the feasibility criteria was demonstrated by recruitment and retention rates of 8.6% (95% confidence interval, CI 5.2, 13.8%) and 100% (95% CI 76.4, 100%), which exceeded the target rates of 5.7% and 80%. Additionally, the 83% (95% CI 0.42, 0.99) adherence rate surpassed the 67% target rate.
Intra-rater reliability analysis of the gait measurement instruments demonstrated excellent Intra-Class Correlation coefficients for step length of 0.978 (95% CI 0.973, 0.982) and step time of 0.880 (95% CI 0.855, 0.902). Results from our study suggest that cued and non-cued MI are valuable interventions in pwMS who were able to imagine movements. A larger study appears feasible; however, substantial improvements to the methods are required, such as stratified randomisation using a computer-generated sequence and blinding of the assessors. ISRCTN ISRCTN92351899. Registered 10 December 2015.

  6. A hybrid predictive model for acoustic noise in urban areas based on time series analysis and artificial neural network

    NASA Astrophysics Data System (ADS)

    Guarnaccia, Claudio; Quartieri, Joseph; Tepedino, Carmine

    2017-06-01

    The dangerous effect of noise on human health is well known. Both the auditory and non-auditory effects are largely documented in the literature and represent an important hazard in human activities. Particular care is devoted to road traffic noise, since it grows along with the growth of residential, industrial, and commercial areas. For these reasons, it is important to develop effective models able to predict the noise in a certain area. In this paper, a hybrid predictive model is presented. The model is based on the mixing of two different approaches: Time Series Analysis (TSA) and an Artificial Neural Network (ANN). The TSA model is based on the evaluation of trend and seasonality in the data, while the ANN model is based on the capacity of the network to "learn" the behavior of the data. The mixed approach consists in evaluating noise levels by means of TSA and, once the differences (residuals) between the TSA estimates and the observed data have been calculated, training an ANN on the residuals. This hybrid model exhibits interesting features and results, with a significant variation related to the number of steps forward in the prediction. It will be shown that the best results, in terms of prediction, are achieved when predicting one step ahead. A 7-day prediction can still be performed, with a slightly greater error but a larger prediction range than the single-day-ahead model.
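The mixing of the two stages can be sketched as: fit trend plus seasonal means (the TSA part), then train a second model on the residuals. For brevity this sketch substitutes a least-squares autoregression for the ANN residual learner, and the synthetic series is purely illustrative:

```python
import numpy as np

def fit_tsa(y, period):
    """TSA stage: decompose a series into a linear trend plus seasonal
    (periodic) means, and return the fitted values."""
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)
    trend = slope * t + intercept
    detrended = y - trend
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    return trend + seasonal[t % period]

def fit_residual_ar(residuals, order=3):
    """Residual stage: least-squares autoregression on the TSA
    residuals (a simple stand-in for the paper's ANN)."""
    n = len(residuals)
    X = np.column_stack([residuals[i:n - order + i] for i in range(order)])
    coef, *_ = np.linalg.lstsq(X, residuals[order:], rcond=None)
    return coef

# A series that is exactly trend + seasonality is reproduced by the TSA
# stage alone, leaving (near-)zero residuals for the second stage.
t = np.arange(24)
y = 0.5 * t + np.tile([1.0, -1.0, 2.0, 0.0], 6)
fitted = fit_tsa(y, period=4)
residuals = y - fitted
```

On real noise data the residuals would carry the structure the TSA stage misses, which is exactly what the second-stage learner is trained on.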

  7. Loss of balance during balance beam walking elicits a multifocal theta band electrocortical response

    PubMed Central

    Gwin, Joseph T.; Makeig, Scott; Ferris, Daniel P.

    2013-01-01

    Determining the neural correlates of loss of balance during walking could lead to improved clinical assessment and treatment for individuals predisposed to falls. We used high-density electroencephalography (EEG) combined with independent component analysis (ICA) to study loss of balance during human walking. We examined 26 healthy young subjects performing heel-to-toe walking on a treadmill-mounted balance beam as well as walking on the treadmill belt (both at 0.22 m/s). ICA identified clusters of electrocortical EEG sources located in or near anterior cingulate, anterior parietal, superior dorsolateral-prefrontal, and medial sensorimotor cortex that exhibited significantly larger mean spectral power in the theta band (4–7 Hz) during walking on the balance beam compared with treadmill walking. Left and right sensorimotor cortex clusters produced significantly less power in the beta band (12–30 Hz) during walking on the balance beam compared with treadmill walking. For each source cluster, we also computed a normalized mean time/frequency spectrogram time locked to the gait cycle during loss of balance (i.e., when subjects stepped off the balance beam). All clusters except the medial sensorimotor cluster exhibited a transient increase in theta band power during loss of balance. Cluster spectrograms demonstrated that the first electrocortical indication of impending loss of balance occurred in the left sensorimotor cortex at the transition from single support to double support prior to stepping off the beam. These findings provide new insight into the neural correlates of walking balance control and could aid future studies on elderly individuals and others with balance impairments. PMID:23926037

  8. Loss of balance during balance beam walking elicits a multifocal theta band electrocortical response.

    PubMed

    Sipp, Amy R; Gwin, Joseph T; Makeig, Scott; Ferris, Daniel P

    2013-11-01

    Determining the neural correlates of loss of balance during walking could lead to improved clinical assessment and treatment for individuals predisposed to falls. We used high-density electroencephalography (EEG) combined with independent component analysis (ICA) to study loss of balance during human walking. We examined 26 healthy young subjects performing heel-to-toe walking on a treadmill-mounted balance beam as well as walking on the treadmill belt (both at 0.22 m/s). ICA identified clusters of electrocortical EEG sources located in or near anterior cingulate, anterior parietal, superior dorsolateral-prefrontal, and medial sensorimotor cortex that exhibited significantly larger mean spectral power in the theta band (4-7 Hz) during walking on the balance beam compared with treadmill walking. Left and right sensorimotor cortex clusters produced significantly less power in the beta band (12-30 Hz) during walking on the balance beam compared with treadmill walking. For each source cluster, we also computed a normalized mean time/frequency spectrogram time locked to the gait cycle during loss of balance (i.e., when subjects stepped off the balance beam). All clusters except the medial sensorimotor cluster exhibited a transient increase in theta band power during loss of balance. Cluster spectrograms demonstrated that the first electrocortical indication of impending loss of balance occurred in the left sensorimotor cortex at the transition from single support to double support prior to stepping off the beam. These findings provide new insight into the neural correlates of walking balance control and could aid future studies on elderly individuals and others with balance impairments.

  9. Divestiture: strategy's missing link.

    PubMed

    Dranikoff, Lee; Koller, Tim; Schneider, Antoon

    2002-05-01

    Although most companies dedicate considerable time and attention to acquiring and creating businesses, few devote much effort to divestitures. But regularly divesting businesses--even good, healthy ones--ensures that remaining units reach their potential and that the overall company grows stronger. Drawing on extensive research into corporate performance over the last decade, McKinsey consultants Lee Dranikoff, Tim Koller, and Antoon Schneider show that an active divestiture strategy is essential to a corporation's long-term health and profitability. In particular, they say that companies that actively manage their businesses through acquisitions and divestitures create substantially more shareholder value than those that passively hold on to their businesses. Therefore, companies should avoid making divestitures only in response to pressure and instead make them part of a well-thought-out strategy. This article presents a five-step process for doing just that: prepare the organization, identify the best candidates for divestiture, execute the best deal, communicate the decision, and create new businesses. As the fifth step suggests, divestiture is not an end in itself. Rather, it is a means to a larger end: building a company that can grow and prosper over the long haul. Wise executives divest so that they can create new businesses and expand existing ones. All of the funds, management time, and support-function capacity that a divestiture frees up should therefore be reinvested in creating shareholder value. In some cases, this will mean returning money to shareholders. But more likely than not, it will mean investing in attractive growth opportunities. In companies as in the marketplace, creation and destruction go hand in hand; neither flourishes without the other.

  10. Comparative Outcomes Between Step-Cut Lengthening Calcaneal Osteotomy vs Traditional Evans Osteotomy for Stage IIB Adult-Acquired Flatfoot Deformity.

    PubMed

    Saunders, Stuart M; Ellis, Scott J; Demetracopoulos, Constantine A; Marinescu, Anca; Burkett, Jayme; Deland, Jonathan T

    2018-01-01

    The forefoot abduction component of the flexible adult-acquired flatfoot can be addressed with lengthening of the anterior process of the calcaneus. We hypothesized that the step-cut lengthening calcaneal osteotomy (SLCO) would decrease the incidence of nonunion, lead to improvement in clinical outcome scores, and have a faster time to healing compared with the traditional Evans osteotomy. We retrospectively reviewed 111 patients (143 total feet: 65 Evans, 78 SLCO) undergoing stage IIB reconstruction followed clinically for at least 2 years. Preoperative and postoperative radiographs were analyzed for the amount of deformity correction. Computed tomography (CT) was used to analyze osteotomy healing. The Foot and Ankle Outcome Scores (FAOS) and lateral pain surveys were used to assess clinical outcomes. Mann-Whitney U tests were used to assess nonnormally distributed data, while χ² and Fisher exact tests were used to analyze categorical variables (α = .05). The Evans group used a larger graft size (P < .001) and returned more often for hardware removal (P = .038) than the SLCO group. SLCO union occurred at a mean of 8.77 weeks (P < .001), which was significantly lower compared with the Evans group (P = .02). The SLCO group also had fewer nonunions (P = .016). FAOS scores improved equivalently between the 2 groups. Lateral column pain, ability to exercise, and ambulation distance were similar between groups. Following SLCO, patients had faster healing times and fewer nonunions, similar outcome scores, and equivalent correction of deformity. SLCO is a viable technique for lateral column lengthening. Level III, retrospective cohort study.

  11. Evidence for Defect-Mediated Tunneling in Hexagonal Boron Nitride-Based Junctions.

    PubMed

    Chandni, U; Watanabe, K; Taniguchi, T; Eisenstein, J P

    2015-11-11

    We investigate electron tunneling through atomically thin layers of hexagonal boron nitride (hBN). Metal (Cr/Au) and semimetal (graphite) counter-electrodes are employed. While the direct tunneling resistance increases nearly exponentially with barrier thickness as expected, the thicker junctions also exhibit clear signatures of Coulomb blockade, including strong suppression of the tunnel current around zero bias and step-like features in the current at larger biases. The voltage separation of these steps suggests that single-electron charging of nanometer-scale defects in the hBN barrier layer are responsible for these signatures. We find that annealing the metal-hBN-metal junctions removes these defects and the Coulomb blockade signatures in the tunneling current.

  12. Measuring the effects of a visual or auditory Stroop task on dual-task costs during obstacle crossing.

    PubMed

    Worden, Timothy A; Mendes, Matthew; Singh, Pratham; Vallis, Lori Ann

    2016-10-01

    Successful planning and execution of motor strategies while concurrently performing a cognitive task has been previously examined, but the varied and numerous cognitive tasks studied have limited our fundamental understanding of how the central nervous system successfully integrates and executes these tasks simultaneously. To gain a better understanding of these mechanisms, we used a set of cognitive tasks requiring similar central executive function processes and response outputs but different perceptual mechanisms. Thirteen healthy young adults (20.6 ± 1.6 years old) were instrumented with kinematic markers (60 Hz) and completed 5 practice trials, 10 single-task obstacle walking trials and two 40-trial experimental blocks. Each block contained 20 seated (single-task) trials followed by 20 combined cognitive and obstacle (30% lower leg length) crossing trials (dual-task). Blocks were randomly presented and included either an auditory Stroop task (AST; central interference only) or a visual Stroop task (VST; combined central and structural interference). Higher accuracy rates and shorter response times were observed for the VST versus AST single-task trials (p < 0.05). Conversely, for obstacle stepping performance, larger dual-task costs were observed for the VST as compared to the AST for clearance measures (the VST induced larger clearance values for both the leading and trailing feet), indicating that VST tasks caused greater interference for obstacle crossing (p < 0.05). These results supported the hypothesis that structural interference has a larger effect on motor performance in a dual-task situation compared to cognitive tasks that pose interference at only the central processing stage. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Ancient numerical daemons of conceptual hydrological modeling: 1. Fidelity and efficiency of time stepping schemes

    NASA Astrophysics Data System (ADS)

    Clark, Martyn P.; Kavetski, Dmitri

    2010-10-01

    A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.

  14. Microfluidic step-emulsification in axisymmetric geometry.

    PubMed

    Chakraborty, I; Ricouvier, J; Yazhgur, P; Tabeling, P; Leshansky, A M

    2017-10-25

    Biphasic step-emulsification (Z. Li et al., Lab Chip, 2015, 15, 1023) is a promising microfluidic technique for high-throughput production of μm and sub-μm highly monodisperse droplets. The step-emulsifier consists of a shallow (Hele-Shaw) microchannel operating with two co-flowing immiscible liquids and an abrupt expansion (i.e., step) to a deep and wide reservoir. Under certain conditions the confined stream of the disperse phase, engulfed by the co-flowing continuous phase, breaks into small highly monodisperse droplets at the step. Theoretical investigation of the corresponding hydrodynamics is complicated due to the complex geometry of the planar device, calling for numerical approaches. However, direct numerical simulations of the three dimensional surface-tension-dominated biphasic flows in confined geometries are computationally expensive. In the present paper we study a model problem of axisymmetric step-emulsification. This setup consists of a stable core-annular biphasic flow in a cylindrical capillary tube connected co-axially to a reservoir tube of a larger diameter through a sudden expansion mimicking the edge of the planar step-emulsifier. We demonstrate that the axisymmetric setup exhibits similar regimes of droplet generation to the planar device. A detailed parametric study of the underlying hydrodynamics is feasible via inexpensive (two dimensional) simulations owing to the axial symmetry. The phase diagram quantifying the different regimes of droplet generation in terms of governing dimensionless parameters is presented. We show that in qualitative agreement with experiments in planar devices, the size of the droplets generated in the step-emulsification regime is independent of the capillary number and almost insensitive to the viscosity ratio. These findings confirm that the step-emulsification regime is solely controlled by surface tension. 
The numerical predictions are in excellent agreement with in-house experiments with the axisymmetric step-emulsifier.

  15. Two-step adaptive management for choosing between two management actions

    USGS Publications Warehouse

    Moore, Alana L.; Walker, Leila; Runge, Michael C.; McDonald-Madden, Eve; McCarthy, Michael A

    2017-01-01

    Adaptive management is widely advocated to improve environmental management. Derivations of optimal strategies for adaptive management, however, tend to be case specific and time consuming. In contrast, managers might seek relatively simple guidance, such as insight into when a new potential management action should be considered, and how much effort should be expended on trialing such an action. We constructed a two-time-step scenario where a manager is choosing between two possible management actions. The manager has a total budget that can be split between a learning phase and an implementation phase. We use this scenario to investigate when and how much a manager should invest in learning about the management actions available. The optimal investment in learning can be understood intuitively by accounting for the expected value of sample information, the benefits that accrue during learning, the direct costs of learning, and the opportunity costs of learning. We find that the optimal proportion of the budget to spend on learning is characterized by several critical thresholds that mark a jump from spending a large proportion of the budget on learning to spending nothing. For example, as sampling variance increases, it is optimal to spend a larger proportion of the budget on learning, up to a point: if the sampling variance passes a critical threshold, it is no longer beneficial to invest in learning. Similar thresholds are observed as a function of the total budget and the difference in the expected performance of the two actions. We illustrate how this model can be applied using a case study of choosing between alternative rearing diets for hihi, an endangered New Zealand passerine. Although the model presented is a simplified scenario, we believe it is relevant to many management situations. 
Managers often have relatively short time horizons for management, and might be reluctant to consider further investment in learning and monitoring beyond collecting data from a single time period.

  16. Two-step adaptive management for choosing between two management actions.

    PubMed

    Moore, Alana L; Walker, Leila; Runge, Michael C; McDonald-Madden, Eve; McCarthy, Michael A

    2017-06-01

    Adaptive management is widely advocated to improve environmental management. Derivations of optimal strategies for adaptive management, however, tend to be case specific and time consuming. In contrast, managers might seek relatively simple guidance, such as insight into when a new potential management action should be considered, and how much effort should be expended on trialing such an action. We constructed a two-time-step scenario where a manager is choosing between two possible management actions. The manager has a total budget that can be split between a learning phase and an implementation phase. We use this scenario to investigate when and how much a manager should invest in learning about the management actions available. The optimal investment in learning can be understood intuitively by accounting for the expected value of sample information, the benefits that accrue during learning, the direct costs of learning, and the opportunity costs of learning. We find that the optimal proportion of the budget to spend on learning is characterized by several critical thresholds that mark a jump from spending a large proportion of the budget on learning to spending nothing. For example, as sampling variance increases, it is optimal to spend a larger proportion of the budget on learning, up to a point: if the sampling variance passes a critical threshold, it is no longer beneficial to invest in learning. Similar thresholds are observed as a function of the total budget and the difference in the expected performance of the two actions. We illustrate how this model can be applied using a case study of choosing between alternative rearing diets for hihi, an endangered New Zealand passerine. Although the model presented is a simplified scenario, we believe it is relevant to many management situations. 
Managers often have relatively short time horizons for management, and might be reluctant to consider further investment in learning and monitoring beyond collecting data from a single time period. © 2017 by the Ecological Society of America.
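
The budget-split intuition in the two records above can be sketched numerically. The code below is a toy model, not the authors' derivation: it uses a standard normal-normal preposterior expected value of sample information (EVSI) for the difference between two actions and a grid search over the learning fraction; the value function and all parameter names are assumptions made for illustration.

```python
import math

def std_normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def evsi(mu0, tau0, sigma, n):
    """Expected value of sampling n trials before choosing between two
    actions whose performance difference has prior N(mu0, tau0^2) and
    per-trial sampling standard deviation sigma."""
    if n == 0:
        return 0.0
    s = tau0 ** 2 / math.sqrt(tau0 ** 2 + sigma ** 2 / n)  # sd of posterior mean
    z = abs(mu0) / s
    return s * (std_normal_pdf(z) - z * (1.0 - std_normal_cdf(z)))

def optimal_learning_fraction(mu0, tau0, sigma, budget, trial_cost):
    """Grid-search the fraction of the budget spent on learning trials;
    value accrues on the post-learning budget (prior best action plus
    the expected improvement from sampling)."""
    best_f, best_v = 0.0, -float("inf")
    for k in range(101):
        f = k / 100.0
        n = int(f * budget / trial_cost)   # trials affordable while learning
        v = (1.0 - f) * budget * (max(0.0, mu0) + evsi(mu0, tau0, sigma, n))
        if v > best_v:
            best_f, best_v = f, v
    return best_f
```

Even this toy reproduces the threshold behavior reported above: with modest sampling variance a small positive learning fraction is optimal, but once the sampling variance becomes large enough the optimal spend on learning collapses to zero.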

  17. Comparing an annual and daily time-step model for predicting field-scale phosphorus loss

    USDA-ARS?s Scientific Manuscript database

    Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably, ranging from simple empirically based annual time-step models to more complex process-based daily time-step models. While better accuracy is often assumed with more...

  18. Effects of the lateral amplitude and regularity of upper body fluctuation on step time variability evaluated using return map analysis

    PubMed Central

    2017-01-01

    The aim of this study was to evaluate the effects of the lateral amplitude and regularity of upper body fluctuation on step time variability. Return map analysis was used to clarify the relationship between step time variability and a history of falling. Eleven healthy, community-dwelling older adults and twelve younger adults participated in the study. All of the subjects walked 25 m at a comfortable speed. Trunk acceleration was measured using triaxial accelerometers attached to the third lumbar vertebra (L3) and the seventh cervical vertebra (C7). The normalized average magnitude of acceleration, the coefficient of determination (R²) of the return map, and the step time variabilities were calculated. Cluster analysis using the average fluctuation and the regularity of C7 fluctuation identified four walking patterns in the mediolateral (ML) direction. The participants with higher fluctuation and lower regularity showed significantly greater step time variability compared with the others. Additionally, elderly participants who had fallen in the past year had higher amplitude and a lower regularity of fluctuation during walking. In conclusion, by focusing on the time evolution of each step, it is possible to understand the cause of stride and/or step time variability that is associated with a risk of falls. PMID:28700633

  19. Effects of the lateral amplitude and regularity of upper body fluctuation on step time variability evaluated using return map analysis.

    PubMed

    Chidori, Kazuhiro; Yamamoto, Yuji

    2017-01-01

    The aim of this study was to evaluate the effects of the lateral amplitude and regularity of upper body fluctuation on step time variability. Return map analysis was used to clarify the relationship between step time variability and a history of falling. Eleven healthy, community-dwelling older adults and twelve younger adults participated in the study. All of the subjects walked 25 m at a comfortable speed. Trunk acceleration was measured using triaxial accelerometers attached to the third lumbar vertebra (L3) and the seventh cervical vertebra (C7). The normalized average magnitude of acceleration, the coefficient of determination (R²) of the return map, and the step time variabilities were calculated. Cluster analysis using the average fluctuation and the regularity of C7 fluctuation identified four walking patterns in the mediolateral (ML) direction. The participants with higher fluctuation and lower regularity showed significantly greater step time variability compared with the others. Additionally, elderly participants who had fallen in the past year had higher amplitude and a lower regularity of fluctuation during walking. In conclusion, by focusing on the time evolution of each step, it is possible to understand the cause of stride and/or step time variability that is associated with a risk of falls.
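
Return map analysis of step times can be sketched as follows. This is a generic reconstruction rather than the authors' pipeline: it pairs each step time with the next one, takes the coefficient of determination (R²) of that lag-1 relationship as a regularity measure, and the coefficient of variation as step time variability.

```python
def return_map_r2(step_times):
    """R^2 of the lag-1 return map: how well T[n] predicts T[n+1]."""
    x, y = step_times[:-1], step_times[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    if sxx == 0.0 or syy == 0.0:
        return 1.0   # no variability at all: treat as perfectly regular
    return (sxy * sxy) / (sxx * syy)

def step_time_cv(step_times):
    """Coefficient of variation (%) of step times."""
    n = len(step_times)
    m = sum(step_times) / n
    var = sum((t - m) ** 2 for t in step_times) / n
    return 100.0 * var ** 0.5 / m
```

A strictly alternating short/long step pattern is highly variable by the CV measure yet perfectly regular on the return map (R² = 1), which is exactly why the authors examine the time evolution of successive steps rather than summary variability alone.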

  20. A high-order relaxation method with projective integration for solving nonlinear systems of hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Lafitte, Pauline; Melis, Ward; Samaey, Giovanni

    2017-07-01

    We present a general, high-order, fully explicit relaxation scheme which can be applied to any system of nonlinear hyperbolic conservation laws in multiple dimensions. The scheme consists of two steps. In a first (relaxation) step, the nonlinear hyperbolic conservation law is approximated by a kinetic equation with stiff BGK source term. Then, this kinetic equation is integrated in time using a projective integration method. After taking a few small (inner) steps with a simple, explicit method (such as direct forward Euler) to damp out the stiff components of the solution, the time derivative is estimated and used in an (outer) Runge-Kutta method of arbitrary order. We show that, with an appropriate choice of inner step size, the time step restriction on the outer time step is similar to the CFL condition for the hyperbolic conservation law. Moreover, the number of inner time steps is also independent of the stiffness of the BGK source term. We discuss stability and consistency, and illustrate with numerical results (linear advection, Burgers' equation and the shallow water and Euler equations) in one and two spatial dimensions.

  1. The Synthesis of N-Acetyllactosamine Functionalized Dendrimers, and the Functionalization of Silica Surfaces Using Tunable Dendrons and beta-Cyclodextrins

    NASA Astrophysics Data System (ADS)

    Ennist, Jessica Helen

    Galectin-3 is a beta-galactoside-binding protein found in many healthy cells. In cancer, the galectin-3/tumor-associated Thomsen-Friedenreich antigen (TF antigen) interaction has been implicated in heterotypic and homotypic cellular adhesion and apoptotic signaling pathways. However, a stronger mechanistic understanding of the role of galectin-3 in these processes is needed. N-acetyllactosamine (LacNAc) is a non-native ligand for galectin-3 which binds with affinity comparable to the TF antigen and is therefore an important ligand for studying galectin-3-mediated processes. To study galectin-3-mediated homotypic cellular aggregation, four generations of polyamidoamine (PAMAM) dendrimers were functionalized with N-acetyllactosamine using a four-step chemoenzymatic route. The enzymatic step controlled the regiochemistry of the galactose addition to N-acetylglucosamine-functionalized dendrimers using a recombinant beta-1,4-Galactosyltransferase/UDP-4'-Gal Epimerase Fusion Protein (lgtB-galE). Homotypic cellular aggregation, which is promoted by the presence of galectin-3 as it binds to glycosides at the cell surface, was studied using HT-1080 fibrosarcoma, A549 lung, and DU-145 prostate cancer cell lines. In the presence of small LacNAc-functionalized PAMAM dendrimers, galectin-3-induced cancer cellular aggregation was inhibited. However, the larger glycodendrimers induced homotypic cellular aggregation. Additionally, novel poly(aryl ether)-dendronized silica surfaces designed for reversible adsorption of targeted analytes were synthesized and characterized using X-ray Photoelectron Spectroscopy (XPS). Using a Cu(I)-mediated cycloaddition "click" reaction, beta-cyclodextrin was appended to dendronized surfaces via triazole formation, and also to a non-dendronized surface for comparison purposes. 
First-generation G(1) dendrons have more than 6 times greater capacity to adsorb targeted analytes than slides functionalized with monomeric beta-cyclodextrin, and more than 2 times greater capacity than slides functionalized with larger-generation dendrons. This study showed that beta-cyclodextrin-functionalized surfaces can undergo a triggered release of the adsorbate, but otherwise retain the targeted analyte through multiple aqueous washes. Therefore, a new generation of G(1)-dendronized surfaces capable of reversible adsorption was developed by heterogeneously appending sulfonic acid/pyridine end-groups. Auger Electron Spectroscopy (AES) was used to quantify the ratio of groups installed. Furthermore, G(1)-dendronized surfaces were functionalized homogeneously with sulfonic acid and pyridine for comparison, and with chiral amino acids for chiral recognition studies.

  2. Comparison of pedometer and accelerometer measures of physical activity during preschool time on 3- to 5-year-old children.

    PubMed

    Pagels, Peter; Boldemann, Cecilia; Raustorp, Anders

    2011-01-01

    To compare pedometer steps with accelerometer counts and to analyse minutes of engagement in light, moderate and vigorous physical activity in 3- to 5-year-old children during preschool time. Physical activity was recorded during preschool time for five consecutive days in 55 three- to five-year-old children. The children wore a Yamax SW200 pedometer and an ActiGraph GT1M monitor. The average time spent at preschool was 7.22 h/day, with an average step count of 7313 (±3042). Steps during preschool time increased with increasing age. The overall correlations between mean step counts and mean accelerometer counts (r = 0.67, p < 0.001), as well as time in light to vigorous activity (r = 0.76, p < 0.001), were moderately high. Step counts and moderate to vigorous physical activity minutes were poorly correlated in 3-year-olds (r = 0.19, p = 0.191) and moderately correlated in children 4 to 5 years old (r = 0.50, p < 0.001). The correlation between the preschool children's pedometer-determined step counts and total engagement in physical activity during preschool time was moderately high. Children's step counts at preschool were low, and the time spent in moderate and vigorous physical activity at preschool was very short. © 2010 The Author(s)/Journal Compilation © 2010 Foundation Acta Paediatrica.

  3. A two steps solution approach to solving large nonlinear models: application to a problem of conjunctive use.

    PubMed

    Vieira, J; Cunha, M C

    2011-01-01

    This article describes a method for solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller, simpler models and having better starting points to improve solution efficiency. The set of nonlinear constraints (named complicating constraints) that makes the solution of the model complex and time consuming is eliminated from step one. The complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined by solving the complete model directly in a single step. In all examples the two-step solution approach allowed a significant reduction in computation time. This potential gain in efficiency can be extremely important for work in progress, and it can be particularly useful for cases where computation time is a critical factor in obtaining an optimized solution in due time.
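
The two-step idea generalizes beyond water resources. The toy below uses illustrative assumptions throughout (the objective, the complicating constraint x*y = 2, and a quadratic-penalty second step stand in for the article's conjunctive-use model): step one solves the model without the complicating constraint, and step two re-solves the full model warm-started from that solution.

```python
def solve_two_step(lr=0.002, iters=20000, mu=20.0):
    """Step 1: minimize f(x, y) = (x-3)^2 + (y-2)^2 while ignoring the
    complicating constraint x*y = 2; the relaxed optimum is trivially (3, 2).
    Step 2: add the constraint as a quadratic penalty and run gradient
    descent warm-started from the step-1 solution."""
    x, y = 3.0, 2.0                      # step 1: unconstrained optimum
    for _ in range(iters):               # step 2: penalized descent
        g = x * y - 2.0                  # complicating-constraint residual
        gx = 2.0 * (x - 3.0) + 2.0 * mu * g * y
        gy = 2.0 * (y - 2.0) + 2.0 * mu * g * x
        x -= lr * gx
        y -= lr * gy
    return x, y
```

The warm start means the expensive second step begins close to the feasible region of interest, which is the source of the computation-time savings the article reports on its much larger model.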

  4. Spatial and Temporal Control Contribute to Step Length Asymmetry during Split-Belt Adaptation and Hemiparetic Gait

    PubMed Central

    Finley, James M.; Long, Andrew; Bastian, Amy J.; Torres-Oviedo, Gelsy

    2014-01-01

    Background Step length asymmetry (SLA) is a common hallmark of gait post-stroke. Though conventionally viewed as a spatial deficit, SLA can result from differences in where the feet are placed relative to the body (spatial strategy), the timing between foot-strikes (step time strategy), or the velocity of the body relative to the feet (step velocity strategy). Objective The goal of this study was to characterize the relative contributions of each of these strategies to SLA. Methods We developed an analytical model that parses SLA into independent step position, step time, and step velocity contributions. This model was validated by reproducing SLA values for twenty-five healthy participants when their natural symmetric gait was perturbed on a split-belt treadmill moving at either a 2:1 or 3:1 belt-speed ratio. We then applied the validated model to quantify step position, step time, and step velocity contributions to SLA in fifteen stroke survivors while walking at their self-selected speed. Results SLA was predicted precisely by summing the derived contributions, regardless of the belt-speed ratio. Although the contributions to SLA varied considerably across our sample of stroke survivors, the step position contribution tended to oppose the other two – possibly as an attempt to minimize the overall SLA. Conclusions Our results suggest that changes in where the feet are placed or changes in interlimb timing could be used as compensatory strategies to reduce overall SLA in stroke survivors. These results may allow clinicians and researchers to identify patient-specific gait abnormalities and personalize their therapeutic approaches accordingly. PMID:25589580
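
The idea of parsing step length asymmetry into independent contributions can be illustrated with a simplified model. The decomposition below is a generic product-rule split under the assumption that each step length equals foot placement relative to the body plus body travel (velocity × step time); it is not the authors' exact derivation, but it shares the key property that the three contributions sum exactly to the total SLA.

```python
def sla_contributions(pos_f, pos_s, v_f, v_s, t_f, t_s):
    """Split step-length asymmetry (fast minus slow leg) into step
    position, step time, and step velocity contributions, assuming
    step length = foot placement + body velocity * step time."""
    v_bar = 0.5 * (v_f + v_s)
    t_bar = 0.5 * (t_f + t_s)
    c_position = pos_f - pos_s            # where the feet are placed
    c_time = v_bar * (t_f - t_s)          # timing between foot-strikes
    c_velocity = t_bar * (v_f - v_s)      # body velocity relative to feet
    return c_position, c_time, c_velocity
```

Because the cross terms cancel exactly, the three contributions reconstruct the total asymmetry, so a negative position contribution can offset positive time and velocity contributions, consistent with the compensation pattern observed in some of the stroke survivors.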

  5. Effective size selection of MoS2 nanosheets by a novel liquid cascade centrifugation: Influences of the flakes dimensions on electrochemical and photoelectrochemical applications.

    PubMed

    Kajbafvala, Marzieh; Farbod, Mansoor

    2018-05-14

    Although liquid phase exfoliation is a powerful method for producing MoS2 nanosheets at large scale, its effectiveness is limited by the diversity of the produced nanosheet sizes. Here a novel approach for separating MoS2 flakes of various lateral sizes and thicknesses, based on cascaded centrifugation, is introduced. The method involves a pre-separation step, performed by low-speed centrifugation, to avoid large-area single- and few-layer flakes being deposited along with the heavier particles. Bulk MoS2 powders were dispersed in an aqueous solution of sodium cholate (SC) and sonicated for 12 h. The main separation step was performed using centrifugation speed intervals of 10-11, 8-10, 6-8, 4-6, 2-4 and 0.5-2 krpm, by which nanosheets containing 2, 4, 7, 8, 14, 18 and 29 layers were obtained, respectively. The samples were characterized using XRD, FESEM, AFM, TEM, DLS, and UV-vis, Raman and PL spectroscopy measurements. Dynamic light scattering (DLS) measurements confirmed the presence of a larger number of single- or few-layer MoS2 nanosheets than when the pre-separation step was not used. Finally, the photocurrent and cyclic voltammetry of the different samples were measured, and it was found that flakes with a larger surface area had a larger CV loop area. Our results provide a method for the preparation of a MoS2 monolayer-enriched suspension that can be used for different applications.

  6. Cut-off values for step count and TV viewing time as discriminators of hyperglycaemia in Brazilian children and adolescents.

    PubMed

    Gordia, Alex Pinheiro; Quadros, Teresa Maria Bianchini de; Silva, Luciana Rodrigues; Mota, Jorge

    2016-09-01

    The use of step count and TV viewing time to discriminate youngsters with hyperglycaemia is still a matter of debate. The objective was to establish cut-off values for step count and TV viewing time in children and adolescents using glycaemia as the reference criterion. A cross-sectional study was conducted on 1044 schoolchildren aged 6-18 years from Northeastern Brazil. Daily step counts were assessed with a pedometer over 1 week, and TV viewing time by self-report. The area under the curve (AUC) ranged from 0.52-0.61 for step count and from 0.49-0.65 for TV viewing time. The daily step count with the highest discriminatory power for hyperglycaemia was 13 884 (sensitivity = 77.8; specificity = 51.8) for male children, and 12 371 (sensitivity = 55.6; specificity = 55.5) and 11 292 (sensitivity = 57.7; specificity = 48.6) for female children and adolescents, respectively. The cut-off for TV viewing time with the highest discriminatory capacity for hyperglycaemia was 3 hours/day (sensitivity = 57.7-77.8; specificity = 48.6-53.2). This study represents a first step toward the development of criteria based on cardiometabolic risk factors for step count and TV viewing time in youngsters. However, the present cut-off values have limited practical application because of their poor accuracy and low sensitivity and specificity.
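    Cut-offs of this kind are typically derived by scanning candidate thresholds and keeping the one that maximizes Youden's J = sensitivity + specificity − 1 against the reference criterion. A minimal sketch, with invented data (the study's actual method may differ):

    ```python
    # Scan candidate cut-offs; low step count flags risk of hyperglycaemia.
    def best_cutoff(values, has_condition):
        best = None
        for c in sorted(set(values)):
            pred = [v < c for v in values]
            tp = sum(p and y for p, y in zip(pred, has_condition))
            fn = sum((not p) and y for p, y in zip(pred, has_condition))
            tn = sum((not p) and (not y) for p, y in zip(pred, has_condition))
            fp = sum(p and (not y) for p, y in zip(pred, has_condition))
            sens = tp / (tp + fn) if (tp + fn) else 0.0
            spec = tn / (tn + fp) if (tn + fp) else 0.0
            j = sens + spec - 1.0
            if best is None or j > best[0]:
                best = (j, c, sens, spec)
        return best

    steps = [8000, 9000, 10000, 12000, 14000, 15000]   # hypothetical daily counts
    hyper = [True, True, False, True, False, False]    # hyperglycaemia status
    j, cutoff, sens, spec = best_cutoff(steps, hyper)
    print(cutoff, round(j, 3))   # → 10000 0.667
    ```

    The low AUCs reported above (0.49-0.65) mean that even the J-maximizing threshold separates the groups only weakly, which is why the authors caution against practical use.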

  7. Quantifying the global contribution of alcohol consumption to cardiomyopathy.

    PubMed

    Manthey, Jakob; Imtiaz, Sameer; Neufeld, Maria; Rylett, Margaret; Rehm, Jürgen

    2017-05-25

    The global impact of alcohol consumption on deaths due to cardiomyopathy (CM) has not been quantified to date, even though CM contains a subcategory for alcoholic CM, with an effect of heavy drinking over time as the postulated underlying causal mechanism. In this feasibility study, a model to estimate the alcohol-attributable fraction (AAF) of CM deaths based on alcohol exposure measures is proposed. A two-step model was developed based on aggregate-level data from 95 countries, including the most populous (data from 2013 or the last available year). First, the crude mortality rate of alcoholic CM per 1,000,000 adults was predicted using a negative binomial regression based on the prevalence of alcohol use disorders (AUD) and adult alcohol per capita consumption (APC) (n = 52 countries). Second, the proportion of alcoholic CM among all CM deaths (i.e., the AAF) was predicted using a fractional response probit regression with the alcoholic CM crude mortality rate (from Step 1), AUD prevalence, APC per drinker, and Global Burden of Disease region as predictors. Additional models repeated these steps by sex and for the wider Global Burden of Disease study definition of CM. There were strong correlations (>0.9) between the crude mortality rate of alcoholic CM and the AAFs, supporting the modeling strategy. In the first step, the population-weighted mean crude mortality rate was estimated at 8.4 alcoholic CM deaths per 1,000,000 (95% CI: 7.4-9.3). In the second step, the global AAF was estimated at 6.9% (95% CI: 5.4-8.4%). Sex-specific figures suggested a lower AAF among females (2.9%, 95% CI: 2.3-3.4%) than among males (8.9%, 95% CI: 7.0-10.7%). Larger deviations between observed and predicted AAFs were found in Eastern Europe and Central Asia. The proposed model promises to fill this gap so that AAFs for CM can be included in future comparative risk assessments. These predictions are likely underestimates because of the stigma attached to fully alcohol-attributable conditions and the resulting problems in the coding of alcoholic CM deaths.
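    The two-step structure can be sketched generically: a log-link model for the crude mortality rate, then a probit-link model for the fraction. The coefficients below are invented placeholders, not the paper's fitted regression estimates.

    ```python
    from math import exp
    from statistics import NormalDist

    def step1_rate(aud_prev, apc):
        """Predicted alcoholic-CM crude mortality per 1,000,000 adults
        (log link, as in a negative binomial regression; toy coefficients)."""
        return exp(0.5 + 0.08 * aud_prev + 0.06 * apc)

    def step2_aaf(rate, aud_prev, apc_per_drinker):
        """Predicted alcohol-attributable fraction via a probit link,
        which keeps the fraction inside [0, 1]; toy coefficients."""
        z = -2.0 + 0.05 * rate + 0.04 * aud_prev + 0.01 * apc_per_drinker
        return NormalDist().cdf(z)

    rate = step1_rate(aud_prev=4.0, apc=10.0)       # hypothetical country inputs
    aaf = step2_aaf(rate, aud_prev=4.0, apc_per_drinker=15.0)
    print(round(rate, 2), round(aaf, 3))
    ```

    The probit link is what makes the fractional response model appropriate for a proportion such as the AAF: any linear predictor z maps into (0, 1).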

  8. IO Sphere: The Professional Journal of Joint Information Operations. Special Edition 2008

    DTIC Science & Technology

    2008-01-01

    members, disseminate propaganda, videos, brochures, and training materials, as well as to coordinate terrorist acts in an anonymous and...collaboration among larger communities of cyber...Porn versus Terror: Years ago, authorities noticed that child pornography websites, though often...stepping foot on them. Moreover, video information can be analyzed by computer vision algorithms. Based on technology available today, it's not

  9. Mental workload and cognitive task automaticity: an evaluation of subjective and time estimation metrics.

    PubMed

    Liu, Y; Wickens, C D

    1994-11-01

    The evaluation of mental workload is becoming increasingly important in system design and analysis. The present study examined the structure and assessment of mental workload in performing decision and monitoring tasks by focusing on two mental workload measurements: subjective assessment and time estimation. The task required the assignment of a series of incoming customers to the shortest of three parallel service lines displayed on a computer monitor. The subject was either in charge of the customer assignment (manual mode) or was monitoring an automated system performing the same task (automatic mode). In both cases, the subjects were required to detect the non-optimal assignments that they or the computer had made. Time pressure was manipulated by the experimenter to create fast and slow conditions. The results revealed a multi-dimensional structure of mental workload and a multi-step process of subjective workload assessment. The results also indicated that subjective workload was more influenced by the subject's participatory mode than by the factor of task speed. The time estimation intervals produced while performing the decision and monitoring tasks had significantly greater length and larger variability than those produced while either performing no other tasks or performing a well practised customer assignment task. This result seemed to indicate that time estimation was sensitive to the presence of perceptual/cognitive demands, but not to response related activities to which behavioural automaticity has developed.

  10. Water and oil wettability of anodized 6016 aluminum alloy surface

    NASA Astrophysics Data System (ADS)

    Rodrigues, S. P.; Alves, C. F. Almeida; Cavaleiro, A.; Carvalho, S.

    2017-11-01

    This paper reports on the control of the wettability behaviour of a 6000 series aluminum (Al) alloy surface (Al6016-T4), which is widely used in the automotive and aerospace industries. In order to induce micro-nanostructuring of the surface, a combination of prior mechanical polishing steps followed by an anodization process under different conditions was used. Surface polishing with sandpaper of grit size 1000 promoted aligned grooves on the surface, leading to a static water contact angle (WCA) of 91° and an oil (α-bromonaphthalene) contact angle (OCA) of 32°, indicating a slightly hydrophobic and oleophilic character. H2SO4 and H3PO4 acid electrolytes were used to grow aluminum oxide layers (Al2O3) by anodization, working at 15 V/18 °C and 100 V/0 °C, respectively, in one- or two-step configurations. Overall, the anodization results showed that the structured Al surfaces were hydrophilic and oleophilic-like, with both WCA and OCA below 90°. The one-step configuration led to a dimple-shaped Al alloy surface with small dimple diameters of around 31 nm in the case of H2SO4, and larger diameters of around 223 nm in the case of H3PO4. The larger dimples achieved with the H3PO4 electrolyte allowed a slightly hydrophobic surface to be reached. The thicker porous Al oxide layers, produced by anodization in the two-step configuration, revealed that the liquids can easily penetrate the non-ordered porous structures, and thus the surface wettability tended toward a superhydrophilic and superoleophilic character (CA < 10°). These results indicate that the capillary-pressure balance model, described for the wettability mechanisms of porous structures, was broken. Moreover, thicker oxide layers with narrow pores of about 29 nm diameter allowed WCA < OCA to be achieved. This inversion in favour of hydrophilic-oleophobic surface behaviour is of great interest either for the lubrication of mechanical components or for water-oil separation processes.

  11. A correlation between extensional displacement and architecture of ionic polymer transducers

    NASA Astrophysics Data System (ADS)

    Akle, Barbar J.; Duncan, Andrew; Leo, Donald J.

    2008-03-01

    Ionic polymer transducers (IPTs), sometimes referred to as artificial muscles, are known to generate large bending strains and moderate stress at low applied voltages (<5 V). Bending actuators have limited engineering applications due to their low forcing capabilities and the need for complicated external devices to convert the bending action into the rotating or linear motion desired in most devices. Recently, Akle and Leo reported extensional actuation in ionic polymer transducers. In this study, extensional IPTs are characterized as a function of transducer architecture. Two actuators were built and their extensional displacement responses characterized. The transducers have similar electrodes, while the middle membrane is a Nafion/ionic liquid composite in the first and an aluminum oxide/ionic liquid composite in the second. The first transducer was characterized for constant current input, voltage step input, and sweep voltage input. The model prediction agrees in both shape and magnitude with the constant current experiment. The values of α and β used are within the range of values reported by Akle and Leo. Both experiments and model demonstrate that there is a preferred direction for applying the potential so that the transducer exhibits large deformations. In the step response, the model predicted the negative potential and the early part of the positive-potential step well, but failed to predict the displacement after approximately 180 s had elapsed. The model predicted the sweep response well, and the observed first harmonic in the displacement further confirmed the existence of a quadratic term in the charge response. Finally, the aluminum oxide based transducer was characterized for a step response and compared with the Nafion based transducer. The second actuator demonstrated a faster electromechanical extensional response than the Nafion based transducer. The aluminum oxide based transducer is expected to provide larger forces and hence a larger energy density.

  12. Adsorption of xenon on vicinal copper and platinum surfaces

    NASA Astrophysics Data System (ADS)

    Baker, Layton

    The adsorption of xenon was studied on Cu(111), Cu(221), Cu(643) and on Pt(111), Pt(221), and Pt(531) using low energy electron diffraction (LEED), temperature programmed desorption (TPD) of xenon, and ultraviolet photoemission of adsorbed xenon (PAX). These experiments were performed to study the atomic and electronic structure of stepped and step-kinked, chiral metal surfaces. Xenon TPD and PAX were performed on each surface in an attempt to titrate terrace, step edge, and kink adsorption sites by adsorption energetics (TPD) and local work function differences (PAX). Due to the complex behavior of xenon on the vicinal copper and platinum metal surfaces, adsorption sites on these surfaces could not be adequately titrated by xenon TPD. On Cu(221) and Cu(643), xenon desorption from step adsorption sites was not apparent, leading to the conclusion that the energy difference between terrace and step adsorption is minuscule. On Pt(221) and Pt(531), xenon TPD indicated that xenon prefers to bond at step edges and that the xenon-xenon interaction at step edges is repulsive, but no further indication of step-kink adsorption was observed. The Pt(221) and Pt(531) TPD spectra indicated that the xenon overlayer undergoes strong compression near monolayer coverage on these surfaces due to repulsion between step-edge adsorbed xenon and other encroaching xenon atoms. The PAX experiments on the copper and platinum surfaces demonstrated that the step adsorption sites have lower local work functions than terrace adsorption sites and that higher step density leads to a larger separation in the local work function of terrace and step adsorption sites.
The PAX spectra also indicated that, for all surfaces studied at 50--70 K, step adsorption is favored at low coverage but the step sites are not saturated until monolayer coverage is reached; this observation is due to the large entropy difference between terrace and step adsorption states and to repulsive interactions between xenon atoms adsorbed at step edges (on the platinum surfaces). The results herein provide several novel observations regarding the adsorptive behavior of xenon on vicinal copper and platinum surfaces.

  13. Communication as a Strategic Activity (Invited)

    NASA Astrophysics Data System (ADS)

    Fischhoff, B.

    2010-12-01

    Effective communication requires preparation. The first step is explicit analysis of the decisions faced by audience members, in order to identify the facts essential to their choices. The second step is assessing their current beliefs, in order to identify the gaps in their understanding, as well as their natural ways of thinking. The third step is drafting communications potentially capable of closing those gaps, taking advantage of the relevant behavioral science. The fourth step is empirically evaluating those communications, refining them as necessary. The final step is communicating through trusted channels, capable of getting the message out and receiving needed feedback. Executing these steps requires a team involving subject matter experts (for ensuring that the science is right), decision analysts (for identifying the decision-critical facts), behavioral scientists (for designing and evaluating messages), and communication specialists (for creating credible channels). Larger organizations should be able to assemble those teams and anticipate their communication needs. However, even small organizations, individuals, or large organizations that have been caught flat-footed can benefit from quickly assembling informal teams, before communicating in ways that might undermine their credibility. The work is not expensive, but does require viewing communication as a strategic activity, rather than an afterthought. The talk will illustrate the science base, with a few core research results; note the risks of miscommunication, with a few bad examples; and suggest the opportunities for communication leadership, focusing on the US Food and Drug Administration.

  14. The constant displacement scheme for tracking particles in heterogeneous aquifers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, X.H.; Gomez-Hernandez, J.J.

    1996-01-01

    Simulation of mass transport by particle tracking or random walk in highly heterogeneous media may be inefficient from a computational point of view if the traditional constant time step scheme is used. A new scheme which automatically adjusts the time step for each particle according to the local pore velocity, so that each particle always travels a constant distance, is shown to be computationally faster for the same degree of accuracy than the constant time step method. Using the constant displacement scheme, transport calculations in a 2-D aquifer model with a natural log-transmissivity variance of 4 can be 8.6 times faster than with the constant time step scheme.
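    The scheme can be sketched in a few lines: instead of a fixed Δt, each particle's time step is set from the local velocity so that every advective move covers the same distance. The 1-D velocity field and all parameters below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def travel_time(velocity, x_end, dx_const=0.1, d=0.01):
        """Track one particle from x=0 to x_end through a 1-D velocity field,
        with a random-walk term of dispersion coefficient d."""
        x, t = 0.0, 0.0
        while x < x_end:
            v = velocity(x)
            dt = dx_const / abs(v)   # constant displacement: adapt dt to local v
            x += v * dt + rng.normal(0.0, np.sqrt(2.0 * d * dt))
            t += dt
        return t

    # Heterogeneous field: velocity varies by an order of magnitude.
    v_field = lambda x: 1.0 + 9.0 * np.sin(2.0 * np.pi * x) ** 2
    times = [travel_time(v_field, x_end=5.0) for _ in range(100)]
    print(round(float(np.mean(times)), 2))
    ```

    In slow zones the adapted dt grows large, which is exactly where a constant time step scheme wastes most of its iterations on tiny moves.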

  15. Non-Gaussian statistics of soliton timing jitter induced by amplifier noise.

    PubMed

    Ho, Keang-Po

    2003-11-15

    Based on first-order perturbation theory of the soliton, the Gordon-Haus timing jitter induced by amplifier noise is found to be non-Gaussian distributed. Both the frequency and timing jitter have larger tail probabilities than the Gaussian distribution given by the linearized perturbation theory. The timing jitter shows a larger discrepancy from the Gaussian distribution than does the frequency jitter.

  16. Role of step size and max dwell time in anatomy based inverse optimization for prostate implants

    PubMed Central

    Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha

    2013-01-01

    In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size is 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323

  17. References and benchmarks for pore-scale flow simulated using micro-CT images of porous media and digital rocks

    NASA Astrophysics Data System (ADS)

    Saxena, Nishank; Hofmann, Ronny; Alpak, Faruk O.; Berg, Steffen; Dietderich, Jesse; Agarwal, Umang; Tandon, Kunj; Hunter, Sander; Freeman, Justin; Wilson, Ove Bjorn

    2017-11-01

    We generate a novel reference dataset to quantify the impact of numerical solvers, boundary conditions, and simulation platforms. We consider a variety of microstructures ranging from idealized pipes to digital rocks. The pore throats of the digital rocks considered are large enough to be well resolved with state-of-the-art micro-computerized tomography technology. Permeability is computed using multiple numerical engines, 12 in total, including Lattice-Boltzmann, computational fluid dynamics, voxel-based, fast semi-analytical, and known empirical models. Thus, we provide a measure of the uncertainty associated with flow computations of digital media. Moreover, the reference and standards dataset generated is the first of its kind and can be used to test and improve new fluid flow algorithms. We find that there is overall good agreement between solvers for idealized cross-section-shape pipes. As expected, the disagreement increases with increasing complexity of the pore space. Numerical solutions for pipes with sinusoidal variation of cross section show larger variability than pipes of constant cross-section shape. We notice relatively larger variability in the computed permeability of digital rocks, with a coefficient of variation of up to 25% in the values computed by the various solvers. Still, these differences are small given other subsurface uncertainties. The observed differences between solvers can be attributed to several causes, including differences in boundary conditions, numerical convergence criteria, and parameterization of the fundamental physics equations. Solvers that perform additional meshing of irregular pore shapes require an extra step in practical workflows, which involves skill and can introduce further uncertainty. Computation times for digital rocks vary from minutes to several days depending on the algorithm and available computational resources.
We find that more stringent convergence criteria can improve solver accuracy but at the expense of longer computation time.
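    The solver-to-solver spread quoted above is a coefficient of variation, i.e. the standard deviation of the computed values normalized by their mean. A minimal example with hypothetical permeabilities:

    ```python
    import statistics

    # Hypothetical permeabilities (mD) for one rock from five different solvers.
    perms = [105.0, 98.0, 120.0, 87.0, 110.0]
    cv = statistics.pstdev(perms) / statistics.mean(perms)
    print(f"{cv:.1%}")   # → 10.7%
    ```

    A CV of 25%, the upper bound reported, would correspond to a considerably wider spread than this example, yet still within typical subsurface uncertainty.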

  18. Fast Plane Wave 2-D Vector Flow Imaging Using Transverse Oscillation and Directional Beamforming.

    PubMed

    Jensen, Jonas; Villagomez Hoyos, Carlos Armando; Stuart, Matthias Bo; Ewertsen, Caroline; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt

    2017-07-01

    Several techniques can estimate the 2-D velocity vector in ultrasound. Directional beamforming (DB) estimates blood flow velocities with a higher precision and accuracy than transverse oscillation (TO), but at the cost of a high beamforming load when estimating the flow angle. In this paper, it is proposed to use TO to estimate an initial flow angle, which is then refined in a DB step. The velocity magnitude is estimated along the flow direction using cross correlation. It is shown that the suggested TO-DB method can improve the performance of velocity estimates compared with TO, with a beamforming load that is 4.6 times larger than for TO but seven times smaller than for conventional DB. Steered plane wave transmissions are employed for high-frame-rate imaging, and parabolic flow with a peak velocity of 0.5 m/s is simulated in straight vessels at beam-to-flow angles from 45° to 90°. The TO-DB method estimates the angle with a bias and standard deviation (SD) of less than 2°, and the SD of the velocity magnitude is less than 2%. When using only TO, the SD of the angle ranges from 2° to 17°, and that of the velocity magnitude up to 7%. The bias of the velocity magnitude is within 2% for TO and slightly larger, but within 4%, for TO-DB. The same trends are observed in measurements, although with a slightly larger bias. Simulations of realistic flow in a carotid bifurcation model provide visualization of complex flow, and the spread of velocity magnitude estimates is 7.1 cm/s for TO-DB, while it is 11.8 cm/s using only TO. However, velocities for TO-DB are underestimated at peak systole, as indicated by a regression value of 0.97 for TO and 0.85 for TO-DB. An in vivo scan of the carotid bifurcation is used for vector velocity estimation using TO and TO-DB. The SD of the velocity profile over a cardiac cycle is 4.2% for TO and 3.2% for TO-DB.
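    The cross-correlation step can be illustrated with a toy 1-D version (all parameters invented): the velocity magnitude follows from the spatial shift, between two emissions, of the signal beamformed along the estimated flow direction.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    dx = 0.1e-3                  # sample spacing along the flow line (m)
    t_prf = 2.0e-4               # time between the two emissions (s)

    line1 = rng.normal(size=256)
    line2 = np.roll(line1, 1)    # scatterers moved one sample downstream

    # The lag maximizing the cross correlation gives the spatial shift.
    xc = np.correlate(line2, line1, mode="full")
    lag = int(np.argmax(xc)) - (len(line1) - 1)
    v_est = lag * dx / t_prf
    print(v_est)                 # → 0.5 m/s for this one-sample shift
    ```

    In practice sub-sample interpolation of the correlation peak is needed for fractional shifts; this sketch only shows why a good prior flow angle (here, the line direction) matters for the estimate.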

  19. The role of particle jamming on the formation and stability of step-pool morphology: insight from a reduced-complexity model

    NASA Astrophysics Data System (ADS)

    Saletti, M.; Molnar, P.; Hassan, M. A.

    2017-12-01

    Granular processes have been recognized as key drivers in earth surface dynamics, especially in steep landscapes, because of the large size of sediment found in channels. In this work we focus on step-pool morphologies, studying the effect of particle jamming on step formation. Starting from the jammed-state hypothesis, we assume that grains generate steps because of particle jamming and that those steps are inherently more stable because of additional force chains in the transversal direction. We test this hypothesis with a particle-based reduced-complexity model, CAST2, where sediment is organized in patches and the entrainment, transport, and deposition of grains depend on flow stage and local topography through simplified phenomenological rules. The model operates with two grain sizes: fine grains, which can be mobilized by both large and moderate flows, and coarse grains, mobile only during large floods. First, we identify the minimum set of processes necessary to generate and maintain steps in a numerical channel: (a) occurrence of floods, (b) particle jamming, (c) low sediment supply, and (d) presence of sediment with different entrainment probabilities. Numerical results are compared with field observations collected in different step-pool channels in terms of step density, a variable that captures the proportion of the channel occupied by steps. Not only do the longitudinal profiles of numerical channels display step sequences similar to those observed in real step-pool streams, but the values of step density are also very similar when all the processes mentioned before are considered. Moreover, with CAST2 it is possible to run long simulations with repeated flood events, to test the effect of flood frequency on step formation. Numerical results indicate that larger step densities belong to systems more frequently perturbed by floods, compared with systems having a lower flood frequency.
Our results highlight the important interactions between external hydrological forcing and internal geomorphic adjustment (e.g. jamming) on the response of step-pool streams, showing the potential of reduced-complexity models in fluvial geomorphology.
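    Step density, the metric used for the field comparison, can be sketched as steps detected per unit channel length, where a step is taken as any local elevation drop exceeding a grain-scale threshold. The profile and threshold below are invented; the paper's operational definition may differ.

    ```python
    # Count local drops >= drop_threshold along a longitudinal bed profile
    # sampled every dx metres, and normalize by channel length.
    def step_density(elevation, dx, drop_threshold):
        steps = sum(1 for a, b in zip(elevation, elevation[1:])
                    if (a - b) >= drop_threshold)
        return steps / (dx * (len(elevation) - 1))

    profile = [10.0, 9.9, 9.3, 9.25, 9.2, 8.6, 8.55, 8.5, 7.9, 7.85]  # bed (m)
    density = step_density(profile, dx=1.0, drop_threshold=0.5)
    print(round(density, 3))   # → 0.333 (three steps over nine metres)
    ```

    Applied identically to simulated and surveyed profiles, a metric like this gives the common ground on which the model-field comparison above rests.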

  20. Simplified Two-Time Step Method for Calculating Combustion and Emission Rates of Jet-A and Methane Fuel With and Without Water Injection

    NASA Technical Reports Server (NTRS)

    Molnar, Melissa; Marek, C. John

    2005-01-01

    A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes. The two-time-step method uses either an initial time-averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration of 1x10(exp -20) moles/cc. The work presented here results in a correlation that gives the chemical kinetic time as two separate functions. This two-time-step method is used, as opposed to the one-step time-averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first, time-averaged step is used at initial times for smaller water concentrations. It gives the average chemical kinetic time as a function of initial overall fuel-air ratio, initial water-to-fuel mass ratio, temperature, and pressure. The second, instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentrations, pressure, and temperature (T4). The simple correlations would then be compared with the turbulent mixing times to determine the limiting rates of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide, and NOx are obtained for Jet-A fuel and methane with and without water injection, up to water mass loadings of 2:1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium with Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water-to-fuel mass ratio, pressure, and temperature (T3).
The temperature of the gas entering the turbine (T4) was also correlated as a function of the initial combustor temperature (T3), equivalence ratio, water to fuel mass ratio, and pressure.
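    The switching logic described above can be sketched as follows. The switch threshold is the value stated in the abstract; the two correlation functions are placeholders, since the real ones are the GLSENS-derived fits.

    ```python
    WATER_SWITCH = 1e-20   # mol/cc, the switch value stated in the abstract

    def chemical_kinetic_time(water_conc, averaged_fn, instantaneous_fn, state):
        if water_conc < WATER_SWITCH:
            return averaged_fn(state)        # step one: initial, time-averaged
        return instantaneous_fn(state)       # step two: instantaneous value

    # Placeholder correlations for illustration only (phi: equivalence ratio,
    # w_to_f: water-to-fuel mass ratio).
    avg = lambda s: 1.0e-3 / s["phi"]
    inst = lambda s: 5.0e-4 / (s["phi"] * (1.0 + s["w_to_f"]))
    state = {"phi": 0.8, "w_to_f": 1.0}

    t1 = chemical_kinetic_time(0.0, avg, inst, state)
    t2 = chemical_kinetic_time(1e-18, avg, inst, state)
    print(t1, t2)
    ```

    A combustion code would then compare the returned kinetic time against the turbulent mixing time to decide which process limits the reaction rate.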
