Sample records for large time step

  1. A two steps solution approach to solving large nonlinear models: application to a problem of conjunctive use.

    PubMed

    Vieira, J; Cunha, M C

    2011-01-01

    This article describes a method for solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller and simpler models and of having better starting points to improve solution efficiency. The set of nonlinear constraints (named complicating constraints) that makes the solution of the model complex and time consuming is eliminated from step one. The complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined by solving the complete model directly in one single step. In all examples the two-step solution approach allowed a significant reduction of the computation time. This gain in efficiency can be extremely important for work in progress, and it can be particularly useful for cases where computation time is a critical factor in obtaining an optimized solution in due time.
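
    The two-step idea maps directly onto general-purpose NLP solvers. Below is a minimal sketch (not the paper's model): a hypothetical quadratic objective with one simple constraint is solved first, and the "complicating" nonlinear equality is enforced only in step two, warm-started from the step-one solution.

        import numpy as np
        from scipy.optimize import minimize, NonlinearConstraint

        # Hypothetical objective and constraints, for illustration only.
        def objective(x):
            return np.sum((x - 1.0) ** 2)

        simple = [NonlinearConstraint(lambda x: x.sum(), 1.0, np.inf)]
        # "Complicating" nonlinear equality, omitted from step one.
        complicating = [NonlinearConstraint(lambda x: x[0] * x[1], 0.5, 0.5)]

        x0 = np.zeros(4)
        # Step one: solve the smaller, simpler relaxed model.
        step1 = minimize(objective, x0, method="trust-constr", constraints=simple)
        # Step two: solve the complete model from the better starting point.
        step2 = minimize(objective, step1.x, method="trust-constr",
                         constraints=simple + complicating)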

  2. Molecular dynamics based enhanced sampling of collective variables with very large time steps.

    PubMed

    Chen, Pei-Yang; Tuckerman, Mark E

    2018-01-14

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
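
    For context, the baseline that these resonance-free integrators improve on is standard multiple time stepping (r-RESPA), in which the cheap, fast forces take a small inner step and the expensive, slow forces a large outer step. A minimal velocity-Verlet-style sketch of that baseline splitting, with hypothetical force callables f_fast and f_slow:

        import numpy as np

        def respa_step(x, v, m, f_fast, f_slow, dt, n_inner):
            """One r-RESPA step: slow forces kick at the outer step dt,
            fast forces are integrated with the inner step dt/n_inner."""
            v = v + 0.5 * dt * f_slow(x) / m          # slow half-kick
            h = dt / n_inner
            for _ in range(n_inner):                  # inner velocity Verlet
                v = v + 0.5 * h * f_fast(x) / m
                x = x + h * v
                v = v + 0.5 * h * f_fast(x) / m
            v = v + 0.5 * dt * f_slow(x) / m          # slow half-kick
            return x, v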

  3. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    NASA Astrophysics Data System (ADS)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  4. Newmark local time stepping on high-performance computing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
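
    The bookkeeping at the heart of a multilevel LTS scheme is the assignment of elements to power-of-two time-step levels based on their per-element CFL limits. A small illustrative sketch of that assignment (not the paper's implementation):

        import numpy as np

        def lts_levels(elem_size, wave_speed, cfl=0.5):
            """Group elements into power-of-two local time-step levels."""
            dt_elem = cfl * elem_size / wave_speed     # per-element CFL limit
            dt_min = dt_elem.min()                     # smallest element governs
            level = np.floor(np.log2(dt_elem / dt_min)).astype(int)
            return level, dt_min * 2.0 ** level        # step used per element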

  5. Efficient Control Law Simulation for Multiple Mobile Robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Driessen, B.J.; Feddema, J.T.; Kotulski, J.D.

    1998-10-06

    In this paper we consider the problem of simulating simple control laws involving large numbers of mobile robots. Such simulation can be computationally prohibitive if the number of robots is large enough, say 1 million, due to the O(N²) cost of each time step. This work therefore uses hierarchical tree-based methods for calculating the control law. These tree-based approaches have O(N log N) cost per time step, thus allowing for efficient simulation involving a large number of robots. For concreteness, a decentralized control law which involves only the distance and bearing to the closest neighbor robot will be considered. The time to calculate the control law for each robot at each time step is demonstrated to be O(log N).
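
    A k-d tree is one way to obtain the O(N log N) per-step cost quoted above. A sketch of the closest-neighbor query using scipy's cKDTree (a stand-in for the paper's hierarchical tree code):

        import numpy as np
        from scipy.spatial import cKDTree

        def nearest_neighbor_control(positions):
            """Distance and bearing to each robot's closest neighbor,
            computed in O(N log N) with a k-d tree."""
            tree = cKDTree(positions)
            dist, idx = tree.query(positions, k=2)   # k=2: first hit is self
            delta = positions[idx[:, 1]] - positions
            bearing = np.arctan2(delta[:, 1], delta[:, 0])
            return dist[:, 1], bearing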

  6. An adaptive time-stepping strategy for solving the phase field crystal model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Zhengru, E-mail: zrzhang@bnu.edu.cn; Ma, Yuan, E-mail: yuner1022@gmail.com; Qiao, Zhonghua, E-mail: zqiao@polyu.edu.hk

    2013-09-15

    In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. The numerical simulation of the PFC model needs a long time to reach steady state, so a large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can resolve not only the steady state solution but also the dynamical development of the solution, efficiently and accurately. The numerical experiments demonstrate that the CPU time is significantly reduced for long time simulations.
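
    A typical form of such energy-based adaptivity (a sketch patterned on formulas used in this line of work; the constants are illustrative) shrinks the step while the free energy is changing rapidly and relaxes it toward a maximum step near steady state:

        import numpy as np

        def adaptive_dt(dEdt, dt_min=1e-3, dt_max=1.0, alpha=1e3):
            """Time step from the energy decay rate: small steps during
            fast dynamics, approaching dt_max as E'(t) -> 0."""
            return max(dt_min, dt_max / np.sqrt(1.0 + alpha * dEdt ** 2))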

  7. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    NASA Technical Reports Server (NTRS)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.

  8. On improving the iterative convergence properties of an implicit approximate-factorization finite difference algorithm [considering transonic flow]

    NASA Technical Reports Server (NTRS)

    Desideri, J. A.; Steger, J. L.; Tannehill, J. C.

    1978-01-01

    The iterative convergence properties of an approximate-factorization implicit finite-difference algorithm are analyzed both theoretically and numerically. Modifications to the base algorithm were made to remove the inconsistency in the original implementation of artificial dissipation. In this way, the steady-state solution became independent of the time-step, and much larger time-steps can be used stably. To accelerate the iterative convergence, large time-steps and a cyclic sequence of time-steps were used. For a model transonic flow problem governed by the Euler equations, convergence was achieved with 10 times fewer time-steps using the modified differencing scheme. A particular form of instability due to variable coefficients is also analyzed.

  9. A Hybrid, Large-Scale Wireless Sensor Network for Real-Time Acquisition and Tracking

    DTIC Science & Technology

    2007-06-01

    A multicolor, Quantum Well Infrared Photodetector (QWIP), step-stare, large-format Focal Plane Array (FPA) is proposed and evaluated through performance analysis.

  10. Operational flood control of a low-lying delta system using large time step Model Predictive Control

    NASA Astrophysics Data System (ADS)

    Tian, Xin; van Overloop, Peter-Jules; Negenborn, Rudy R.; van de Giesen, Nick

    2015-01-01

    The safety of low-lying deltas is threatened not only by riverine flooding but by storm-induced coastal flooding as well. For the purpose of flood control, these deltas are mostly protected in a man-made environment, where dikes, dams and other adjustable infrastructures, such as gates, barriers and pumps, are widely constructed. Instead of always reinforcing and heightening these structures, it is worth making the most of the existing infrastructure to reduce damage and manage the delta in an operational, integrated way. In this study, an advanced real-time control approach, Model Predictive Control (MPC), is proposed to operate these structures in the Dutch delta system (the Rhine-Meuse delta). The application covers non-linearity in the dynamic behavior of the water system and the structures. To deal with the non-linearity, a linearization scheme is applied which directly uses the gate height instead of the structure flow as the control variable. Given that MPC needs to compute control actions in real-time, we address issues regarding computational time. A new large time step scheme is proposed in order to save computation time, in which different control variables can have different control time steps. Simulation experiments demonstrate that MPC with the large time step setting is able to control a delta system better and much more efficiently than conventional operational schemes.

  11. A new theory for multistep discretizations of stiff ordinary differential equations: Stability with large step sizes

    NASA Technical Reports Server (NTRS)

    Majda, G.

    1985-01-01

    A large set of variable coefficient linear systems of ordinary differential equations which possess two different time scales, a slow one and a fast one, is considered. A small parameter epsilon characterizes the stiffness of these systems. A system of o.d.e.s. in this set is approximated by a general class of multistep discretizations which includes both one-leg and linear multistep methods. Sufficient conditions are determined under which each solution of a multistep method is uniformly bounded, with a bound which is independent of the stiffness of the system of o.d.e.s., when the step size resolves the slow time scale, but not the fast one. This property is called stability with large step sizes. The theory presented lets one compare properties of one-leg methods and linear multistep methods when they approximate variable coefficient systems of stiff o.d.e.s. In particular, it is shown that one-leg methods have better stability properties with large step sizes than their linear multistep counterparts. The theory also allows one to relate the concept of D-stability to the usual notions of stability and stability domains and to the propagation of errors for multistep methods which use large step sizes.
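
    For reference, the two method classes compared here are conventionally written as follows. A k-step linear multistep method for dy/dt = f(t, y) reads

    \[ \sum_{j=0}^{k} \alpha_j y_{n+j} = h \sum_{j=0}^{k} \beta_j f(t_{n+j}, y_{n+j}), \]

    while its one-leg counterpart evaluates f only once, at averaged arguments:

    \[ \sum_{j=0}^{k} \alpha_j y_{n+j} = h\, f\Big( \sum_{j=0}^{k} \beta_j t_{n+j},\ \sum_{j=0}^{k} \beta_j y_{n+j} \Big). \]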

  12. Fast time- and frequency-domain finite-element methods for electromagnetic analysis

    NASA Astrophysics Data System (ADS)

    Lee, Woochan

    Fast electromagnetic analysis in time and frequency domain is of critical importance to the design of integrated circuits (IC) and other advanced engineering products and systems. Many IC structures constitute a very large scale problem in modeling and simulation, the size of which also continuously grows with the advancement of the processing technology. This results in numerical problems beyond the reach of even the most powerful existing computational resources. Unlike many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of the structure specialties to speed up the computation. Moreover, among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step for ensuring the stability of a time-domain simulation. Therefore, making explicit time-domain methods unconditionally stable is important to accelerate the computation. In addition to time-domain methods, frequency-domain methods have suffered from an indefinite system that makes an iterative solution difficult to converge fast. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structure specialty of on-chip circuits such as Manhattan geometry and layered permittivity is preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time. The second contribution is a new method for making an explicit time-domain finite-element method (TDFEM) unconditionally stable for general electromagnetic analysis. In this method, for a given time step, we find the unstable modes that are the root cause of instability, and deduct them directly from the system matrix resulting from a TDFEM based analysis. As a result, an explicit TDFEM simulation is made stable for an arbitrarily large time step irrespective of the space step. The third contribution is a new method for full-wave applications from low to very high frequencies in a TDFEM based on matrix exponential. In this method, we directly deduct the eigenmodes having large eigenvalues from the system matrix, thus achieving a significantly increased time step in the matrix exponential based TDFEM. The fourth contribution is a new method for transforming the indefinite system matrix of a frequency-domain FEM to a symmetric positive definite one. We deduct the non-positive definite component directly from the system matrix resulting from a frequency-domain FEM-based analysis. The resulting new representation of the finite-element operator ensures that an iterative solution converges in a small number of iterations. We then add back the non-positive definite component to synthesize the original solution with negligible cost.
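
    The second contribution's central step, deducting the unstable modes from the system matrix for a chosen time step, can be sketched with dense linear algebra (illustrative only: the stability bound assumes a leapfrog-type update, and a real TDFEM code would use sparse eigensolvers for the few offending modes):

        import numpy as np

        def deflate_unstable_modes(A, dt):
            """Deduct modes of the symmetric system matrix A violating the
            explicit stability bound dt**2 * lambda <= 4 (leapfrog-type
            update assumed); the deflated operator is stable for this dt."""
            lam, V = np.linalg.eigh(A)
            bad = lam > 4.0 / dt ** 2
            return A - (V[:, bad] * lam[bad]) @ V[:, bad].T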

  13. Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wada, Takao

    2014-07-01

    A particle motion considering thermophoretic force is simulated by using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The problem in thermophoresis simulation is the computation time, which is proportional to the collision frequency; the time step interval becomes very small when simulating the motion of a large particle. Thermophoretic forces calculated by the DSMC method have been reported, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model, which computes the collision between a particle and multiple molecules in a single collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, where the collision weight factor is the number of molecules colliding with a particle in one collision event. The collision weight factor permits a large time step interval, about a million times longer than the conventional time step interval of the DSMC method when the particle size is 1 μm; the computation time is therefore reduced by about a factor of one million. We simulate graphite particle motion under thermophoretic force using DSMC-Neutrals (Particle-PLUS neutral module), commercial software adopting the DSMC method, with the above collision weight factor. The particle is a sphere of 1 μm diameter, and particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results; note that Gallis' analytical result in the continuum limit coincides with Waldmann's result.

  14. Stability analysis of implicit time discretizations for the Compton-scattering Fokker-Planck equation

    NASA Astrophysics Data System (ADS)

    Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.; Morel, Jim E.

    2009-09-01

    The Fokker-Planck equation is a widely used approximation for modeling the Compton scattering of photons in high energy density applications. In this paper, we perform a stability analysis of three implicit time discretizations for the Compton-Scattering Fokker-Planck equation. Specifically, we examine (i) a Semi-Implicit (SI) scheme that employs backward-Euler differencing but evaluates temperature-dependent coefficients at their beginning-of-time-step values, (ii) a Fully Implicit (FI) discretization that instead evaluates temperature-dependent coefficients at their end-of-time-step values, and (iii) a Linearized Implicit (LI) scheme, which is developed by linearizing the temperature dependence of the FI discretization within each time step. Our stability analysis shows that the FI and LI schemes are unconditionally stable and cannot generate oscillatory solutions regardless of time-step size, whereas the SI discretization can suffer from instabilities and nonphysical oscillations for sufficiently large time steps. With the results of this analysis, we present time-step limits for the SI scheme that prevent undesirable behavior. We test the validity of our stability analysis and time-step limits with a set of numerical examples.
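
    The difference between the SI and FI discretizations is easiest to see on a scalar toy relaxation problem dT/dt = k(T) (T_eq - T), not the Fokker-Planck system itself: both use backward Euler, but SI freezes the temperature-dependent coefficient at its beginning-of-time-step value, while FI evaluates it at the end of the step. A hedged sketch:

        def si_step(T, dt, k, T_eq):
            """Semi-implicit: backward Euler with the temperature-dependent
            coefficient frozen at its beginning-of-step value."""
            return (T + dt * k(T) * T_eq) / (1.0 + dt * k(T))

        def fi_step(T, dt, k, T_eq, iters=20):
            """Fully implicit: coefficient at its end-of-step value,
            resolved by fixed-point iteration (toy stand-in for Newton)."""
            T_new = T
            for _ in range(iters):
                T_new = (T + dt * k(T_new) * T_eq) / (1.0 + dt * k(T_new))
            return T_new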

  15. Where did the time go? Friction evolves with slip following large velocity steps, normal stress steps, and (?) during long holds

    NASA Astrophysics Data System (ADS)

    Rubin, A. M.; Bhattacharya, P.; Tullis, T. E.; Okazaki, K.; Beeler, N. M.

    2016-12-01

    The popular constitutive formulations of rate-and-state friction offer two end-member views on whether friction evolves only with slip (Slip law state evolution) or with time even without slip (Aging law state evolution). While rate stepping experiments show support for the Slip law, laboratory observed frictional behavior of initially bare rock surfaces near zero slip rate has traditionally been interpreted to show support for time-dependent evolution of frictional strength. Such laboratory derived support for time-dependent evolution has been one of the motivations behind the Aging law being widely used to model earthquake cycles on natural faults. Through a combination of theoretical results and new experimental data on initially bare granite, we show stronger support for the other end-member view, i.e. that friction under a wide range of sliding conditions evolves only with slip. Our dataset is unique in that it combines up to 3.5 orders of magnitude rate steps, sequences of holds up to 10000 s, and 5% normal stress steps at order of magnitude different sliding rates during the same experimental run. The experiments were done on the Brown rotary shear apparatus using servo feedback, making the machine stiff enough to provide very large departures from steady-state while maintaining stable, quasi-static sliding. Across these diverse sliding conditions, and in particular for both large velocity step decreases and the longest holds, the data are much more consistent with the Slip law version of slip-dependence than the time-dependence formulated in the Aging law. The shear stress response to normal stress steps is also consistently better explained by the Slip law when paired with the Linker-Dieterich type response to normal stress perturbations. However, the remarkable symmetry and slip-dependence of the normal stress step increases and decreases suggest deficiencies in the Linker-Dieterich formulation that we will probe in future experiments. High quality measurements of interface compaction from the normal-stress steps suggest that the instantaneous changes in state and contact area are opposite in sign, indicating that state evolution might be fundamentally connected to contact quality, and not quantity alone.
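
    The two end-member evolution laws referred to above are conventionally written in terms of the state variable θ, slip speed V, and characteristic slip distance D_c:

    \[ \frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c} \ \text{(Aging law)}, \qquad \frac{d\theta}{dt} = -\frac{V\theta}{D_c}\,\ln\frac{V\theta}{D_c} \ \text{(Slip law)}, \]

    so the Aging law permits state evolution at V = 0 (dθ/dt = 1), while under the Slip law the state cannot change without slip.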

  16. The large discretization step method for time-dependent partial differential equations

    NASA Technical Reports Server (NTRS)

    Haras, Zigo; Taasan, Shlomo

    1995-01-01

    A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.

  17. Quadratic adaptive algorithm for solving cardiac action potential models.

    PubMed

    Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing

    2016-10-01

    An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, and sometimes causes the action potential to become distorted. In contrast, the proposed method chooses very fine time steps in the peak region but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set greater than 0.1 ms. In contrast, our method can adjust the time step size automatically and remains stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and further investigation is needed to determine whether the method is helpful during propagation of the action potential.
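
    One plausible reading of the quadratic step-size criterion (our assumption, not code from the paper): pick the largest dt for which the second-order Taylor estimate of the change in membrane potential stays within a tolerance, i.e. solve ½|V''| dt² + |V'| dt = tol for its positive root.

        import numpy as np

        def quadratic_dt(dV, d2V, tol=0.1, dt_min=1e-3, dt_max=0.5):
            """dt from 0.5*|V''|*dt**2 + |V'|*dt = tol (positive root),
            clipped to [dt_min, dt_max]; a guess at the paper's criterion."""
            a, b = 0.5 * abs(d2V), abs(dV)
            if a < 1e-12:                       # nearly linear: first-order bound
                dt = tol / max(b, 1e-12)
            else:
                dt = (-b + np.sqrt(b * b + 4.0 * a * tol)) / (2.0 * a)
            return float(np.clip(dt, dt_min, dt_max))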

  18. Asynchronous adaptive time step in quantitative cellular automata modeling

    PubMed Central

    Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan

    2004-01-01

    Background The behaviors of cells in metazoans are context dependent, thus large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and upon it, how to solve the heavy time consumption issue in simulation. Results Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing asynchronous adaptive time steps in simulation that can considerably improve efficiency without significantly sacrificing accuracy. An average speedup rate of 4–5 is achieved in the given example. Conclusions Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. A distributed, adaptive time step is a practical solution in a cellular automata environment. PMID:15222901

  19. Modifications to WRF's dynamical core to improve the treatment of moisture for large-eddy simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Heng; Endo, Satoshi; Wong, May

    Yamaguchi and Feingold (2012) note that the cloud fields in their Weather Research and Forecasting (WRF) large-eddy simulations (LESs) of marine stratocumulus exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (qv) in the pressure gradient calculation in the acoustic sub-stepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θm=θ(1+1.61qv), which allows consistent treatment of moisture in the calculation of pressure during the acoustic sub-steps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic sub-steps) are eliminated in both of the example stratocumulus cases. This modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.

  20. Modifications to WRF's dynamical core to improve the treatment of moisture for large-eddy simulations

    DOE PAGES

    Xiao, Heng; Endo, Satoshi; Wong, May; ...

    2015-10-29

    Yamaguchi and Feingold (2012) note that the cloud fields in their large-eddy simulations (LESs) of marine stratocumulus using the Weather Research and Forecasting (WRF) model exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (qv) in the pressure gradient calculation in the acoustic sub-stepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θm=θ(1+1.61qv), which allows consistent treatment of moisture in the calculation of pressure during the acoustic sub-steps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic sub-steps) are eliminated in both of the example stratocumulus cases. In conclusion, this modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.

  1. A Cascaded Approach for Correcting Ionospheric Contamination with Large Amplitude in HF Skywave Radars

    PubMed Central

    Wei, Yinsheng; Guo, Rujiang; Xu, Rongqing; Tang, Xiudong

    2014-01-01

    Ionospheric phase perturbation with large amplitude broadens the sea clutter's Bragg peaks until they overlap, and traditional decontamination methods based on filtering the Bragg peak perform poorly, which greatly limits the detection performance of HF skywave radars. In view of ionospheric phase perturbation with large amplitude, this paper proposes a cascaded approach based on an improved S-method to correct the ionospheric phase contamination. This approach consists of two correction steps. At the first step, a time-frequency distribution method based on the improved S-method is adopted, and an optimal detection method is designed to obtain a coarse estimate of the ionospheric modulation from the time-frequency distribution. At the second step, the phase gradient algorithm (PGA) is exploited to eliminate the residual contamination. Finally, measured data are used to verify the effectiveness of the method. Simulation results show that the time-frequency resolution of this method is high and is not affected by cross-term interference, that ionospheric phase perturbation with large amplitude can be corrected at low signal-to-noise ratio (SNR), and that the cascaded correction method performs well. PMID:24578656

  2. Impact of temporal resolution of inputs on hydrological model performance: An analysis based on 2400 flood events

    NASA Astrophysics Data System (ADS)

    Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-07-01

    Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.

  3. Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Martini, Michael C.

    2011-01-01

    A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge-Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include the J2 spherical harmonic for central body gravitation. The algorithm includes a step size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time. The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables, sets up initial conditions, and integrates; (2) a routine that calculates system reduced derivatives using recurrence relations for quotients and products; and (3) a routine that determines the step size and sums the series. The order of accuracy used in a trajectory calculation is arbitrary and can be set by the user. The algorithm directly calculates the motion of other planetary bodies and does not require ephemeris files (except to start the calculation). The code also runs with Taylor series and Runge-Kutta used interchangeably for different phases of a mission.
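
    The flavor of Taylor series integration is easy to show on a toy ODE. For y' = y², the Taylor coefficients of the solution obey a Cauchy-product recurrence, and the step size can be chosen directly from the decay of the coefficients, so no step is ever repeated. A minimal sketch (the SNAP implementation, with its auxiliary variables and general quotient/product recurrences, is far more elaborate):

        import numpy as np

        def taylor_step(y0, order=20, tol=1e-12):
            """One variable-step Taylor step for y' = y**2, y(0) = y0.
            (k+1)*c[k+1] = sum_i c[i]*c[k-i] is the Cauchy-product rule."""
            c = np.zeros(order + 1)
            c[0] = y0
            for k in range(order):
                c[k + 1] = np.dot(c[:k + 1], c[k::-1]) / (k + 1)
            # Size the step so the last retained term is below tol.
            h = (tol / (abs(c[order]) + 1e-300)) ** (1.0 / order)
            return np.polyval(c[::-1], h), h    # y(h) and the step taken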

  4. Large time-step stability of explicit one-dimensional advection schemes

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.

    1993-01-01

    There is a widespread belief that most explicit one-dimensional advection schemes need to satisfy the so-called 'CFL condition' - that the Courant number, c = uΔt/Δx, must be less than or equal to one, for stability in the von Neumann sense. This puts severe limitations on the time-step in high-speed, fine-grid calculations and is an impetus for the development of implicit schemes, which often require less restrictive time-step conditions for stability, but are more expensive per time-step. However, it turns out that, at least in one dimension, if explicit schemes are formulated in a consistent flux-based conservative finite-volume form, von Neumann stability analysis does not place any restriction on the allowable Courant number. Any explicit scheme that is stable for c < 1, with a complex amplitude ratio G(c), can be easily extended to arbitrarily large c. The complex amplitude ratio is then given by exp(-iNθ) G(Δc), where N is the integer part of c, and Δc = c - N (< 1); this is clearly stable. The CFL condition is, in fact, not a stability condition at all, but, rather, a 'range restriction' on the 'pieces' in a piece-wise polynomial interpolation. When a global view is taken of the interpolation, the need for a CFL condition evaporates. A number of well-known explicit advection schemes are considered and thus extended to large Δt. The analysis also includes a simple interpretation of (large Δt) total-variation-diminishing (TVD) constraints.
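
    The construction described here translates into a few lines of code: split the Courant number into its integer part N, which is an exact shift of the solution array on a periodic grid, and the fractional remainder Δc, to which the base scheme is applied. A sketch for first-order upwind with positive velocity:

        import numpy as np

        def advect_large_courant(u, c):
            """First-order upwind advection on a periodic grid for c >= 0,
            extended to c > 1: exact shift by N = floor(c), then upwind
            with the remainder dc = c - N < 1."""
            N = int(np.floor(c))
            dc = c - N
            u = np.roll(u, N)                            # integer-cell shift
            return (1.0 - dc) * u + dc * np.roll(u, 1)   # upwind on dc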

  5. Finite-difference modeling with variable grid-size and adaptive time-step in porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Wu, Guochen

    2014-04-01

    Forward modeling of elastic wave propagation in porous media is of great importance for understanding and interpreting the influences of rock properties on the characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid-size and time-step, and it incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist underground. To overcome this handicap, this paper develops a staggered-grid finite-difference scheme for elastic wave modeling in porous media that combines variable grid-size and adaptive time-step. Variable finite-difference coefficients and wavefield interpolation were used to realize the transition of wave propagation between regions of different grid-size. The accuracy and efficiency of the algorithm are shown by numerical examples. The proposed method achieves low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.

  6. Analysis on burnup step effect for evaluating reactor criticality and fuel breeding ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saputra, Geby; Purnama, Aditya Rizki; Permana, Sidik

    The criticality condition of a reactor is one of the important factors for evaluating reactor operation, and the nuclear fuel breeding ratio is another, indicating nuclear fuel sustainability. This study analyzes the effect of burnup steps and cycle operation steps on the evaluated criticality condition of the reactor as well as on the nuclear fuel breeding performance, or breeding ratio (BR). The burnup step is varied on a day basis from 10 days up to 800 days, and the cycle operation from 1 cycle up to 8 cycles. In addition, calculation efficiency has been investigated in terms of computation time, based on the variation of the computer processors used to run the analysis. The analysis uses a large fast breeder reactor design as the reference case and adopts the established reactor design code JOINT-FR. The results show that the criticality becomes higher for smaller burnup steps (days), while the breeding ratio becomes smaller. Some nuclides contribute to a better criticality estimate at smaller burnup steps because of their individual half-lives. The calculation time for different burnup steps reflects the cost of more detailed step calculations, although it is not directly proportional to the number of divisions of the burnup time step.

  7. Implicit time accurate simulation of unsteady flow

    NASA Astrophysics Data System (ADS)

    van Buuren, René; Kuerten, Hans; Geurts, Bernard J.

    2001-03-01

    Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. Using an explicit second-order Runge-Kutta scheme, a reference solution was determined for comparison with the implicit second-order Crank-Nicolson scheme. The time step in the explicit scheme is restricted by both temporal accuracy and stability requirements, whereas in the A-stable implicit scheme the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems may occur that are closely related to a highly complex structure of the basins of attraction of the iterative method.
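
    The pseudo-time iteration described above can be sketched on a scalar ODE du/dt = f(u): each physical Crank-Nicolson step is converged by marching its residual to zero in pseudo-time (a plain fixed-point version; the paper uses a quasi-Newton approach with a block Gauss-Seidel solver):

        def cn_pseudo_time_step(u_n, f, dt, dtau=0.1, iters=200):
            """One Crank-Nicolson step solved by pseudo-time marching:
            drive R(u) = (u - u_n)/dt - 0.5*(f(u) + f(u_n)) to zero."""
            u = u_n
            for _ in range(iters):
                res = (u - u_n) / dt - 0.5 * (f(u) + f(u_n))
                u = u - dtau * res              # pseudo-time update
            return u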

  8. On large time step TVD scheme for hyperbolic conservation laws and its efficiency evaluation

    NASA Astrophysics Data System (ADS)

    Qian, ZhanSen; Lee, Chun-Hian

    2012-08-01

    A large time step (LTS) TVD scheme originally proposed by Harten is modified and further developed in the present paper and applied to the Euler equations in multidimensional problems. By first revealing the drawbacks of Harten's original LTS TVD scheme and explaining the occurrence of the spurious oscillations, a modified formulation of its characteristic transformation is proposed and a high resolution, strongly robust LTS TVD scheme is formulated. The modified scheme is proven to be capable of taking larger time steps than the original one. Following the modified strategy, LTS TVD schemes for Yee's upwind TVD scheme and Yee-Roe-Davis's symmetric TVD scheme are constructed. The family of LTS schemes is then extended to multiple dimensions by a time-splitting procedure, and the associated boundary condition treatment suitable for the LTS scheme is also imposed. Numerical experiments on Sod's shock tube problem and on inviscid flows over the NACA0012 airfoil and the ONERA M6 wing are performed to validate the developed schemes. Computational efficiencies for the respective schemes under different CFL numbers are also evaluated and compared. The results reveal that the improvement is sizable compared to the respective single time step schemes, especially for CFL numbers ranging from 1.0 to 4.0.

  9. Biomechanical influences on balance recovery by stepping.

    PubMed

    Hsiao, E T; Robinovitch, S N

    1999-10-01

    Stepping represents a common means for balance recovery after a perturbation to upright posture. Yet little is known regarding the biomechanical factors which determine whether a step succeeds in preventing a fall. In the present study, we developed a simple pendulum-spring model of balance recovery by stepping, and used this to assess how step length and step contact time influence the effort (leg contact force) and feasibility of balance recovery by stepping. We then compared model predictions of step characteristics which minimize leg contact force to experimentally observed values over a range of perturbation strengths. At all perturbation levels, experimentally observed step execution times were higher than optimal, and step lengths were smaller than optimal. However, the predicted increase in leg contact force associated with these deviations was substantial only for large perturbations. Furthermore, increases in the strength of the perturbation caused subjects to take larger, quicker steps, which reduced their predicted leg contact force. We interpret these data to reflect young subjects' desire to minimize recovery effort, subject to neuromuscular constraints on step execution time and step length. Finally, our model predicts that successful balance recovery by stepping is governed by a coupling between step length, step execution time, and leg strength, so that the feasibility of balance recovery decreases unless declines in one capacity are offset by enhancements in the others. This suggests that one's risk for falls may be affected more by small but diffuse neuromuscular impairments than by larger impairment in a single motor capacity.

  10. Analysis of aggregated tick returns: Evidence for anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Weber, Philipp

    2007-01-01

    In order to investigate the origin of large price fluctuations, we analyze stock price changes of ten frequently traded NASDAQ stocks in the year 2002. Though the influence of the trading frequency on the aggregate return in a certain time interval is important, it cannot alone explain the heavy-tailed distribution of stock price changes. For this reason, we analyze intervals with a fixed number of trades in order to eliminate the influence of the trading frequency and investigate the relevance of other factors for the aggregate return. We show that in tick time the price follows a discrete diffusion process with a variable step width while the difference between the number of steps in positive and negative direction in an interval is Gaussian distributed. The step width is given by the return due to a single trade and is long-term correlated in tick time. Hence, its mean value can well characterize an interval of many trades and turns out to be an important determinant for large aggregate returns. We also present a statistical model reproducing the cumulative distribution of aggregate returns. For an accurate agreement with the empirical distribution, we also take into account asymmetries of the step widths in different directions together with cross correlations between these asymmetries and the mean step width as well as the signs of the steps.

  11. Stability with large step sizes for multistep discretizations of stiff ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Majda, George

    1986-01-01

    One-leg and multistep discretizations of variable-coefficient linear systems of ODEs having both slow and fast time scales are investigated analytically. The stability properties of these discretizations are obtained independent of ODE stiffness and compared. The results of numerical computations are presented in tables, and it is shown that for large step sizes the stability of one-leg methods is better than that of the corresponding linear multistep methods.

  12. Giant Steps in Cefalù

    NASA Astrophysics Data System (ADS)

    Jeffery, David J.; Mazzali, Paolo A.

    2007-08-01

    Giant steps is a technique to accelerate Monte Carlo radiative transfer in optically-thick cells (which are isotropic and homogeneous in matter properties and into which astrophysical atmospheres are divided) by greatly reducing the number of Monte Carlo steps needed to propagate photon packets through such cells. In an optically-thick cell, packets starting from any point (which can be regarded a point source) well away from the cell wall act essentially as packets diffusing from the point source in an infinite, isotropic, homogeneous atmosphere. One can replace many ordinary Monte Carlo steps that a packet diffusing from the point source takes by a randomly directed giant step whose length is slightly less than the distance to the nearest cell wall point from the point source. The giant step is assigned a time duration equal to the time for the RMS radius for a burst of packets diffusing from the point source to have reached the giant step length. We call assigning giant-step time durations this way RMS-radius (RMSR) synchronization. Propagating packets by series of giant steps in giant-steps random walks in the interiors of optically-thick cells constitutes the technique of giant steps. Giant steps effectively replaces the exact diffusion treatment of ordinary Monte Carlo radiative transfer in optically-thick cells by an approximate diffusion treatment. In this paper, we describe the basic idea of giant steps and report demonstration giant-steps flux calculations for the grey atmosphere. Speed-up factors of order 100 are obtained relative to ordinary Monte Carlo radiative transfer. In practical applications, speed-up factors of order ten and perhaps more are possible. The speed-up factor is likely to be significantly application-dependent and there is a trade-off between speed-up and accuracy. This paper and past work suggest that giant-steps error can probably be kept to a few percent by using sufficiently large boundary-layer optical depths while still maintaining large speed-up factors. Thus, giant steps can be characterized as a moderate accuracy radiative transfer technique. For many applications, the loss of some accuracy may be a tolerable price to pay for the speed-ups gained by using giant steps.
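
    A hedged sketch of a single giant step under the stated RMSR rule: jump in a random direction by slightly less than the distance d to the nearest cell-wall point, and charge the time for the RMS radius of a diffusing burst to reach d, t = d²/(6D); taking the photon diffusion coefficient as D = cλ/3 is our assumption for the sketch.

        import numpy as np

        def giant_step(pos, dist_to_wall, mfp, c=1.0, margin=0.99):
            """One giant step: random direction, length just under the
            wall distance, duration from <r**2> = 6*D*t with D = c*mfp/3."""
            d = margin * dist_to_wall
            mu = 2.0 * np.random.rand() - 1.0       # isotropic direction
            phi = 2.0 * np.pi * np.random.rand()
            s = np.sqrt(1.0 - mu * mu)
            step = d * np.array([s * np.cos(phi), s * np.sin(phi), mu])
            D = c * mfp / 3.0
            return pos + step, d * d / (6.0 * D)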

  13. Vigilance and feeding behaviour in large feeding flocks of laughing gulls, Larus atricilla, on Delaware Bay

    NASA Astrophysics Data System (ADS)

    Burger, Joanna; Gochfeld, Michael

    1991-02-01

    Laughing gulls ( Larus atricilla) forage on horseshoe crab ( Limulus polyphemus) eggs during May in Delaware Bay each year. They feed in dense flocks, and foraging rates vary with vigilance, bird density, number of steps and location in the flock, whereas time devoted to vigilance is explained by number of steps, density, location and feeding rates. The time devoted to vigilance decreases with increasing density, increasing foraging rates and decreasing aggression. Birds foraging on the edge of flocks take fewer pecks and more steps, and devote more time to vigilance than those in the intermediate or central parts of a flock.

  14. The Technique of Changing the Drive Method of Micro Step Drive and Sensorless Drive for Hybrid Stepping Motor

    NASA Astrophysics Data System (ADS)

    Yoneda, Makoto; Dohmeki, Hideo

    A position control system with the advantages of large torque, low vibration, and high resolution can be obtained by applying constant-current micro-step drive to a hybrid stepping motor. However, losses are large, because the current is held constant regardless of the load torque. Sensorless control, as used for permanent magnet motors, is an effective technique for realizing a high-efficiency position control system, but the control methods proposed so far aim at speed control. This paper therefore proposes switching between micro-step drive and sensorless drive. The change of drive method was verified by simulation and experiment. At no load, it was confirmed that no large speed change occurs at the switchover when the electrical angle is set and the integrator is reset to zero. Under load, a large speed change was observed; the proposed system avoids it by initializing the integrator with the estimated value, so that the drive method can be changed without producing a speed change. With this technique, a low-loss position control system that exploits the advantages of the hybrid stepping motor has been built.

  15. From mess to mass: a methodology for calculating storm event pollutant loads with their uncertainties, from continuous raw data time series.

    PubMed

    Métadier, M; Bertrand-Krajewski, J-L

    2011-01-01

    With the increasing implementation of continuous monitoring of both discharge and water quality in sewer systems, large databases are now available. In order to manage large amounts of data and calculate various variables and indicators of interest, it is necessary to apply automated methods for data processing. This paper deals with the processing of short time step turbidity time series to estimate TSS (Total Suspended Solids) and COD (Chemical Oxygen Demand) event loads in sewer systems during storm events, along with their associated uncertainties. The following steps are described: (i) sensor calibration, (ii) estimation of data uncertainties, (iii) correction of raw data, (iv) data pre-validation tests, (v) final validation, and (vi) calculation of TSS and COD event loads and estimation of their uncertainties. These steps have been implemented in an integrated software tool. Examples of results are given for a set of 33 storm events monitored in a separate stormwater sewer system.
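
    Step (vi) amounts to integrating concentration times discharge over the event and propagating the measurement uncertainties; a simplified sketch (independent Gaussian errors assumed, unlike the fuller error model such a tool would use):

        import numpy as np

        def event_load(c, q, dt_s):
            """Event load as the sum of C(t)*Q(t)*dt (C in g/m^3, Q in m^3/s)."""
            return np.sum(c * q) * dt_s

        def load_uncertainty(c, q, dt_s, c_sd, q_sd, n=10000, seed=0):
            """Monte Carlo propagation of measurement uncertainty to the load."""
            rng = np.random.default_rng(seed)
            loads = [event_load(rng.normal(c, c_sd), rng.normal(q, q_sd), dt_s)
                     for _ in range(n)]
            return np.mean(loads), np.std(loads)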

  16. Consistency of internal fluxes in a hydrological model running at multiple time steps

    NASA Astrophysics Data System (ADS)

    Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-04-01

    Improving hydrological models remains a difficult task and many ways can be explored, among which one can find the improvement of spatial representation, the search for more robust parametrization, the better formulation of some processes or the modification of model structures by trial-and-error procedure. Several past works indicate that model parameters and structure can be dependent on the modelling time step, and there is thus some rationale in investigating how a model behaves across various modelling time steps, to find solutions for improvements. Here we analyse the impact of data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, by using a large data set of 240 catchments. To this end, fine time step hydro-climatic information at sub-hourly resolution is used as input of a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to be run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance to the identification of the model components that should be improved. Our analysis indicates that the baseline model structure is to be modified at sub-daily time steps to warrant the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception model component, whose output flux showed the strongest sensitivity to modelling time step. The dependency of the optimal model complexity on time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7

  17. Exponential time differencing for Hodgkin–Huxley-like ODEs

    PubMed Central

    Börgers, Christoph; Nectow, Alexander R.

    2013-01-01

    Several authors have proposed the use of exponential time differencing (ETD) for Hodgkin–Huxley-like partial and ordinary differential equations (PDEs and ODEs). For Hodgkin–Huxley-like PDEs, ETD is attractive because it can deal effectively with the stiffness issues that diffusion gives rise to. However, large neuronal networks are often simulated assuming “space-clamped” neurons, i.e., using the Hodgkin–Huxley ODEs, in which there are no diffusion terms. Our goal is to clarify whether ETD is a good idea even in that case. We present a numerical comparison of first- and second-order ETD with standard explicit time-stepping schemes (Euler’s method, the midpoint method, and the classical fourth-order Runge–Kutta method). We find that in the standard schemes, the stable computation of the very rapid rising phase of the action potential often forces time steps of a small fraction of a millisecond. This can result in an expensive calculation yielding greater overall accuracy than needed. Although it is tempting at first to try to address this issue with adaptive or fully implicit time-stepping, we argue that neither is effective here. The main advantage of ETD for Hodgkin–Huxley-like systems of ODEs is that it allows underresolution of the rising phase of the action potential without causing instability, using time steps on the order of one millisecond. When high quantitative accuracy is not necessary and perhaps, because of modeling inaccuracies, not even useful, ETD allows much faster simulations than standard explicit time-stepping schemes. The second-order ETD scheme is found to be substantially more accurate than the first-order one even for large values of Δt. PMID:24058276
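
    For HH-like gating variables, which take the form dx/dt = (x_inf(V) - x)/tau(V), first-order ETD reduces to the exponential Euler update sketched below. The x_inf and tau curves are generic illustrative choices, not the paper's benchmark model.

        import numpy as np

        def x_inf(v):       # assumed steady-state activation curve
            return 1.0 / (1.0 + np.exp(-(v + 40.0) / 10.0))

        def tau(v):         # assumed voltage-dependent time constant (ms)
            return 1.0 + 4.0 * np.exp(-((v + 60.0) / 30.0) ** 2)

        def etd1_step(x, v, dt):
            # First-order ETD (exponential Euler): exact for frozen V over dt,
            # hence stable even when dt is large compared to tau.
            a = np.exp(-dt / tau(v))
            return x_inf(v) + (x - x_inf(v)) * a

        x, v, dt = 0.05, -65.0, 1.0   # dt ~ 1 ms, large for standard explicit schemes
        for _ in range(20):
            x = etd1_step(x, v, dt)
        print(f"gating variable after 20 ms: {x:.4f}")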

  18. Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo

    With the currently available methods of computational fluid dynamics (CFD), simulating full-scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the calculation cells must be small and the calculation must be transient with a small time-step size. For full-scale systems, these requirements lead to very large meshes and very long calculation times, making simulation difficult in practice. This study investigates the cell size and time-step size required for accurate simulations, and the filtering effects caused by coarser meshes and longer time steps. A modeling study of a full-scale CFB furnace is presented and the model results are compared with experimental data.

  19. Geometric multigrid for an implicit-time immersed boundary method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guy, Robert D.; Philip, Bobby; Griffith, Boyce E.

    2014-10-12

    The immersed boundary (IB) method is an approach to fluid-structure interaction that uses Lagrangian variables to describe the deformations and resulting forces of the structure and Eulerian variables to describe the motion and forces of the fluid. Explicit time stepping schemes for the IB method require solvers only for Eulerian equations, for which fast Cartesian grid solution methods are available. Such methods are relatively straightforward to develop and are widely used in practice but often require very small time steps to maintain stability. Implicit-time IB methods permit the stable use of large time steps, but efficient implementations of such methods require significantly more complex solvers that effectively treat both Lagrangian and Eulerian variables simultaneously. Moreover, several different approaches to solving the coupled Lagrangian-Eulerian equations have been proposed, but a complete understanding of this problem is still emerging. This paper presents a geometric multigrid method for an implicit-time discretization of the IB equations. This multigrid scheme uses a generalization of box relaxation that is shown to handle problems in which the physical stiffness of the structure is very large. Numerical examples are provided to illustrate the effectiveness and efficiency of the algorithms described herein. Finally, these tests show that using multigrid as a preconditioner for a Krylov method yields improvements in both robustness and efficiency as compared to using multigrid as a solver. They also demonstrate that with a time step 100–1000 times larger than that permitted by an explicit IB method, the multigrid-preconditioned implicit IB method is approximately 50–200 times more efficient than the explicit method.

  20. An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.; Barnes, D. C.

    2011-08-01

    This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom with regard to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.
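
    The energy-conservation property for arbitrary implicit time steps can be illustrated on a stripped-down single-particle analogue: a time-centered (Crank-Nicolson) push in a linear electrostatic field conserves the discrete energy to round-off for any dt. This is a sketch of the time-centering idea only, not the authors' full Vlasov-Ampère scheme; for this linear field the implicit system is solved in closed form rather than by the nonlinear iteration used in the paper.

        import numpy as np

        k = 1.0                      # linear restoring field E(x) = -k*x (analogue)
        x, v, dt = 1.0, 0.0, 5.0     # deliberately huge time step

        def cn_push(x, v, dt):
            # Time-centered (implicit midpoint) push:
            #   x_new = x + dt*(v + v_new)/2,  v_new = v - dt*k*(x + x_new)/2
            b = 0.25 * dt * dt * k
            xn = ((1.0 - b) * x + dt * v) / (1.0 + b)
            vn = v - dt * k * 0.5 * (x + xn)
            return xn, vn

        e0 = 0.5 * v**2 + 0.5 * k * x**2
        for _ in range(1000):
            x, v = cn_push(x, v, dt)
        e1 = 0.5 * v**2 + 0.5 * k * x**2
        print(f"relative energy drift after 1000 big steps: {abs(e1 - e0) / e0:.2e}")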

  1. Exact charge and energy conservation in implicit PIC with mapped computational meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Barnes, D. C.

    This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier energy-conserving explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom with regard to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.

  2. Voluntary stepping behavior under single- and dual-task conditions in chronic stroke survivors: A comparison between the involved and uninvolved legs.

    PubMed

    Melzer, Itshak; Goldring, Melissa; Melzer, Yehudit; Green, Elad; Tzedek, Irit

    2010-12-01

    If balance is lost, quick step execution can prevent falls. Research has shown that the speed of voluntary stepping predicts future falls in older adults. The aim of this study was to investigate voluntary stepping behavior, and to compare timing and leg push-off force-time parameters between the involved and uninvolved legs of stroke survivors under single- and dual-task conditions. We also aimed to compare these parameters between stroke survivors and healthy individuals in both task conditions. Ten stroke survivors performed a voluntary step execution test with their involved and uninvolved legs under two conditions: while focusing only on the stepping task, and while simultaneously performing a separate attention-demanding task. Temporal parameters related to step time were measured, including the duration of the step initiation phase, the preparatory phase, the swing phase, and the total step time. In addition, force-time parameters representing the push-off power during stepping were calculated from ground reaction data and compared with 10 healthy controls. The involved legs of stroke survivors had significantly slower stepping times than the uninvolved legs, due to increased swing-phase duration, under both single- and dual-task conditions. Under the dual task, stepping time increased significantly owing to a significant increase in the duration of step initiation. In general, the force-time parameters differed significantly between both legs of stroke survivors and those of healthy controls, with no significant effect of dual- versus single-task conditions in either group. The inability of stroke survivors to swing the involved leg quickly may be the most significant factor contributing to the large number of falls toward the paretic side. The results suggest that stroke survivors were unable to rapidly produce muscle force for fast actions; this may be the mechanism delaying execution of a fast step when balance is lost, increasing the likelihood of falls in stroke survivors.

  3. Fractal analysis of lateral movement in biomembranes.

    PubMed

    Gmachowski, Lech

    2018-04-01

    Lateral movement of a molecule in a biomembrane containing small compartments (0.23-μm diameter) and large ones (0.75 μm) is analyzed using a fractal description of its walk. The early-time dependence of the mean square displacement deviates from linear owing to the contribution of ballistic motion. In small compartments, walking molecules do not have sufficient time or space to develop the asymptotic relation, and the diffusion coefficient deduced from the experimental records is lower than that measured without restrictions. The model makes it possible to deduce the molecule's step parameters, namely the step length and step time, from the confined and unrestricted diffusion coefficients. This is also possible using experimental results for sub-diffusive transport. The transition from normal to anomalous diffusion does not affect the step parameters. Experimental literature data on molecular trajectories recorded at high time resolution appear to confirm the modeled value of the mean free path length of DOPE for Brownian and anomalous diffusion. Although the step length and time give the proper value of the diffusion coefficient, the DOPE speed calculated as their quotient is several orders of magnitude lower than the thermal speed. This is interpreted as a result of intermolecular interactions, as confirmed by the lateral diffusion of other molecules in different membranes. The step parameters are then used to analyze the problem of multiple visits in small compartments. Modeling of the diffusion exponent yields a smooth transition to normal diffusion on entering a large compartment, as observed in experiments.
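
    The link between the step parameters and the diffusion coefficient can be checked numerically: for a 2D walk of step length l and step time t_step, D = l²/(4·t_step), and the ensemble-averaged MSD grows as 4Dt. A sketch with assumed parameter values (not the membrane data analyzed in the paper):

        import numpy as np

        rng = np.random.default_rng(1)
        l, t_step = 0.05, 0.01        # assumed step length (um) and step time (s)
        n_steps, n_walkers = 1000, 2000

        # 2D random walk: fixed step length, uniformly random direction.
        theta = rng.uniform(0.0, 2.0 * np.pi, size=(n_walkers, n_steps))
        steps = l * np.stack([np.cos(theta), np.sin(theta)], axis=-1)
        pos = np.cumsum(steps, axis=1)

        msd = (pos ** 2).sum(axis=-1).mean(axis=0)    # ensemble-averaged MSD
        lags = np.arange(1, n_steps + 1) * t_step
        d_est = msd[-1] / (4.0 * lags[-1])            # slope-based estimate of D
        print(f"D from MSD: {d_est:.3e}; D = l^2/(4*t_step): {l**2/(4*t_step):.3e}")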

  4. Mutational Effects and Population Dynamics During Viral Adaptation Challenge Current Models

    PubMed Central

    Miller, Craig R.; Joyce, Paul; Wichman, Holly A.

    2011-01-01

    Adaptation in haploid organisms has been extensively modeled but little tested. Using a microvirid bacteriophage (ID11), we conducted serial passage adaptations at two bottleneck sizes (10⁴ and 10⁶), followed by fitness assays and whole-genome sequencing of 631 individual isolates. Extensive genetic variation was observed including 22 beneficial, several nearly neutral, and several deleterious mutations. In the three large bottleneck lines, up to eight different haplotypes were observed in samples of 23 genomes from the final time point. The small bottleneck lines were less diverse. The small bottleneck lines appeared to operate near the transition between isolated selective sweeps and conditions of complex dynamics (e.g., clonal interference). The large bottleneck lines exhibited extensive interference and less stochasticity, with multiple beneficial mutations establishing on a variety of backgrounds. Several leapfrog events occurred. The distribution of first-step adaptive mutations differed significantly from the distribution of second-steps, and a surprisingly large number of second-step beneficial mutations were observed on a highly fit first-step background. Furthermore, few first-step mutations appeared as second-steps and second-steps had substantially smaller selection coefficients. Collectively, the results indicate that the fitness landscape falls between the extremes of smooth and fully uncorrelated, violating the assumptions of many current mutational landscape models. PMID:21041559

  5. Cycle time and cost reduction in large-size optics production

    NASA Astrophysics Data System (ADS)

    Hallock, Bob; Shorey, Aric; Courtney, Tom

    2005-09-01

    Optical fabrication process steps have remained largely unchanged for decades. Raw glass blanks are rough-machined, generated to near net shape, loose-abrasive or fine bound-diamond ground, and then polished. This set of processes is sequential, and each operation removes the damage and micro-cracking induced by the prior step. One of the long-lead aspects of this process has been glass polishing, driven primarily by the need to remove relatively large volumes of glass, compared to the polishing removal rate, to ensure complete damage removal. The secondary time driver has been poor convergence to final figure and the corresponding polish-metrology cycles. The overall cycle time, and the resulting cost due to labor, equipment utilization and shop efficiency, increase, often significantly, when the optical prescription is aspheric. In addition to the long polishing cycle times, the duration of polishing is often very difficult to predict, given that current polishing processes are not deterministic. This paper describes a novel approach to large-optics finishing, relying on several innovative technologies that are presented and illustrated through a variety of examples. The cycle time reductions enabled by this approach promise significant cost and lead-time reductions for large optics. In addition, corresponding increases in throughput will require less capital expenditure per square meter of optic produced. The process, comparative cycle-time estimates, and preliminary results are discussed.

  6. A comparison of artificial compressibility and fractional step methods for incompressible flow computations

    NASA Technical Reports Server (NTRS)

    Chan, Daniel C.; Darian, Armen; Sindir, Munir

    1992-01-01

    We have applied and compared two commonly used numerical methods for the solution of the Navier-Stokes equations in terms of efficiency and accuracy. The artificial compressibility method augments the continuity equation with a transient pressure term and allows the modified equations to be solved as a coupled system. Due to its implicit nature, one can take a large temporal integration step at the expense of higher memory requirements and a larger operation count per step. Meanwhile, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. Its memory requirement and operation count per time step are low; however, the restriction on the size of the time-marching step is more severe. To explore the strengths and weaknesses of these two methods, we used them to compute a two-dimensional driven cavity flow at Reynolds numbers of 100 and 1000. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161, were used. The computations were considered converged once the L2-norm of the change in the dependent variables between two consecutive time steps had fallen below 10⁻⁵.

  7. Impurity effects in crystal growth from solutions: Steady states, transients and step bunch motion

    NASA Astrophysics Data System (ADS)

    Ranganathan, Madhav; Weeks, John D.

    2014-05-01

    We analyze a recently formulated model in which adsorbed impurities impede the motion of steps in crystals grown from solutions, while moving steps can remove or deactivate adjacent impurities. In this model, the chemical potential change of an atom on incorporation/desorption to/from a step is calculated for different step configurations and used in the dynamical simulation of step motion. The crucial difference between solution growth and vapor growth is related to the dependence of the driving force for growth of the main component on the size of the terrace in front of the step. This model has features resembling experiments in solution growth, which yields a dead zone with essentially no growth at low supersaturation and the motion of large coherent step bunches at larger supersaturation. The transient behavior shows a regime wherein steps bunch together and move coherently as the bunch size increases. The behavior at large line tension is reminiscent of the kink-poisoning mechanism of impurities observed in calcite growth. Our model unifies different impurity models and gives a picture of nonequilibrium dynamics that includes both steady states and time dependent behavior and shows similarities with models of disordered systems and the pinning/depinning transition.

  8. Step scaling and the Yang-Mills gradient flow

    NASA Astrophysics Data System (ADS)

    Lüscher, Martin

    2014-06-01

    The use of the Yang-Mills gradient flow in step-scaling studies of lattice QCD is expected to lead to results of unprecedented precision. Step scaling is usually based on the Schrödinger functional, where time ranges over an interval [0 , T] and all fields satisfy Dirichlet boundary conditions at time 0 and T. In these calculations, potentially important sources of systematic errors are boundary lattice effects and the infamous topology-freezing problem. The latter is here shown to be absent if Neumann instead of Dirichlet boundary conditions are imposed on the gauge field at time 0. Moreover, the expectation values of gauge-invariant local fields at positive flow time (and of other well localized observables) that reside in the center of the space-time volume are found to be largely insensitive to the boundary lattice effects.

  9. Effects of Turbulence Model and Numerical Time Steps on Von Karman Flow Behavior and Drag Accuracy of Circular Cylinder

    NASA Astrophysics Data System (ADS)

    Amalia, E.; Moelyadi, M. A.; Ihsan, M.

    2018-04-01

    The flow of air around a circular cylinder at a Reynolds number of 250,000 exhibits the von Kármán vortex street phenomenon, which can be captured well only with an appropriate turbulence model. In this study, several turbulence models available in ANSYS Fluent 16.0 were tested for their ability to simulate the von Kármán vortex street, namely k-epsilon, SST k-omega, Reynolds Stress, Detached Eddy Simulation (DES), and Large Eddy Simulation (LES). In addition, the effect of time-step size on the accuracy of the CFD simulation was examined. The simulations were carried out using two-dimensional and three-dimensional models and compared with experimental data. For the two-dimensional model, the von Kármán vortex street was captured successfully with the SST k-omega turbulence model; for the three-dimensional model, it was captured with the Reynolds Stress model. The time-step size affects both the smoothness of the drag coefficient curves over time and the running time of the simulation: the smaller the time step, the smoother the resulting drag coefficient curves, but the longer the computation time.

  10. Unsteady Crystal Growth Due to Step-Bunch Cascading

    NASA Technical Reports Server (NTRS)

    Vekilov, Peter G.; Lin, Hong; Rosenberger, Franz

    1997-01-01

    Based on our experimental findings of growth rate fluctuations during the crystallization of the protein lysozyme, we have developed a numerical model that combines diffusion in the bulk of a solution with diffusive transport to microscopic growth steps that propagate on a finite crystal facet. Nonlinearities in layer growth kinetics arising from step interaction by bulk and surface diffusion, and from step generation by surface nucleation, are taken into account. On evaluation of the model with properties characteristic of the solute transport, and of the generation and propagation of steps in the lysozyme system, growth rate fluctuations of the same magnitude and characteristic time as in the experiments are obtained. The fluctuation time scale is large compared to that of step generation. Variations of the governing parameters of the model reveal that both the nonlinearity in step kinetics and mixed transport-kinetics control of the crystallization process are necessary conditions for the fluctuations. On a microscopic scale, the fluctuations are associated with a morphological instability of the vicinal face, in which a step bunch triggers a cascade of new step bunches through the microscopic interfacial supersaturation distribution.

  11. Measuring the Daily Activity of Lying Down, Sitting, Standing and Stepping of Obese Children Using the ActivPALTM Activity Monitor.

    PubMed

    Wafa, Sharifah Wajihah; Aziz, Nur Nadzirah; Shahril, Mohd Razif; Halib, Hasmiza; Rahim, Marhasiyah; Janssen, Xanne

    2017-04-01

    This study describes patterns of objectively measured sitting, standing and stepping in obese children using the activPALTM monitor, and highlights differences in sedentary levels and patterns between weekdays and weekends. Sixty-five obese children, aged 9-11 years, were recruited from primary schools in Terengganu, Malaysia. Sitting, standing and stepping were objectively measured using an activPALTM accelerometer over a period of 4-7 days. The obese children spent an average of 69.6% of their day sitting/lying, 19.1% standing and 11.3% stepping. Weekdays and weekends differed significantly in total time spent sitting/lying, standing and stepping, as well as in step count and in the number and length of sedentary bouts (p < 0.05, respectively). The children spent a large proportion of their time being sedentary, and more so during weekends than weekdays. These sedentary behaviour patterns provide valuable information for designing and implementing strategies to decrease sedentary time among obese children, particularly during weekends.

  12. A quick response four decade logarithmic high-voltage stepping supply

    NASA Technical Reports Server (NTRS)

    Doong, H.

    1978-01-01

    An improved high-voltage stepping supply for space instrumentation, where low power consumption and a fast settling time between steps are required, is described. The supply, consuming an average power of 750 milliwatts, delivers a pair of mirror-image outputs with 64 logarithmic levels. It covers a four-decade range of ±2500 to ±0.29 volts, with an output stability of ±0.5 percent or ±20 millivolts over all line, load and temperature variations. The supply provides a typical step settling time of 1 millisecond, or 100 microseconds for the lower two decades. Its versatile design provides a quick-response staircase generator, as described, or a fixed voltage with the option to change levels as required over large dynamic ranges without circuit modification. The concept can be implemented up to ±5000 volts. With these design features, the high-voltage stepping supply should find numerous applications in charged-particle detection, electro-optical systems, and high-voltage scientific instruments.

  13. Effect of water hardness on cardiovascular mortality: an ecological time series approach.

    PubMed

    Lake, I R; Swift, L; Catling, L A; Abubakar, I; Sabel, C E; Hunter, P R

    2010-12-01

    Numerous studies have suggested an inverse relationship between drinking water hardness and cardiovascular disease. However, the weight of evidence is insufficient for the WHO to implement a health-based guideline for water hardness. This study followed WHO recommendations to assess the feasibility of using ecological time series data from areas exposed to step changes in water hardness to investigate this issue. Monthly time series of cardiovascular mortality data, subdivided by age and sex, were systematically collected from areas reported to have undergone step changes in water hardness, calcium and magnesium in England and Wales between 1981 and 2005. Time series methods were used to investigate the effect of water hardness changes on mortality. No evidence was found of an association between step changes in drinking water hardness or drinking water calcium and cardiovascular mortality. The lack of areas with large populations and a reasonable change in magnesium levels precludes a definitive conclusion about the impact of this cation. We use our results on the variability of the series to consider the data requirements (size of population, time of water hardness change) for such a study to have sufficient power. Only data from areas with large populations (>500,000) are likely to be able to detect a change of the size suggested by previous studies (rate ratio of 1.06). Ecological time series studies of populations exposed to changes in drinking water hardness may not be able to provide conclusive evidence on the links between water hardness and cardiovascular mortality unless very large populations are studied. Investigations of individuals may be more informative.

  14. The effect of large-scale model time step and multiscale coupling frequency on cloud climatology, vertical structure, and rainfall extremes in a superparameterized GCM

    DOE PAGES

    Yu, Sungduk; Pritchard, Michael S.

    2015-12-17

    The effect of global climate model (GCM) time step—which also controls how frequently global and embedded cloud resolving scales are coupled—is examined in the Superparameterized Community Atmosphere Model ver 3.0. Systematic bias reductions of time-mean shortwave cloud forcing (~10 W/m²) and longwave cloud forcing (~5 W/m²) occur as scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom-heavy deep convection as the global model time step is reduced, may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. Lastly, the results may also be useful for helping to tune them.

  15. The effect of large-scale model time step and multiscale coupling frequency on cloud climatology, vertical structure, and rainfall extremes in a superparameterized GCM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Sungduk; Pritchard, Michael S.

    The effect of global climate model (GCM) time step—which also controls how frequently global and embedded cloud resolving scales are coupled—is examined in the Superparameterized Community Atmosphere Model ver 3.0. Systematic bias reductions of time-mean shortwave cloud forcing (~10 W/m²) and longwave cloud forcing (~5 W/m²) occur as scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom-heavy deep convection as the global model time step is reduced, may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. Lastly, the results may also be useful for helping to tune them.

  16. A high-order time-parallel scheme for solving wave propagation problems via the direct construction of an approximate time-evolution operator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haut, T. S.; Babb, T.; Martinsson, P. G.

    2015-06-16

    Our manuscript demonstrates a technique for efficiently solving the classical wave equation, the shallow water equations, and, more generally, equations of the form ∂u/∂t = Lu, where L is a skew-Hermitian differential operator. The idea is to explicitly construct an approximation to the time-evolution operator exp(τL) for a relatively large time step τ. Recently developed techniques for approximating oscillatory scalar functions by rational functions, and accelerated algorithms for computing functions of discretized differential operators, are exploited. Principal advantages of the proposed method include: stability even for large time steps, the possibility to parallelize in time over many characteristic wavelengths, and large speed-ups over existing methods in situations where simulation over long times is required. Numerical examples involving the 2D rotating shallow water equations and the 2D wave equation in an inhomogeneous medium are presented, and the method is compared to the fourth-order Runge–Kutta (RK4) method and to the use of Chebyshev polynomials. The new method achieved high accuracy over long time intervals, at speeds that are orders of magnitude faster than both RK4 and the use of Chebyshev polynomials.
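
    The core idea — construct an approximation to exp(τL) once and reuse it for many large steps — can be sketched with a dense matrix exponential standing in for the paper's rational approximation. Below, L is a skew-symmetric centered-difference advection operator on a periodic 1D grid, so exp(τL) is orthogonal and the scheme is stable for any τ; the authors' construction is far more scalable than this dense stand-in.

        import numpy as np
        from scipy.linalg import expm

        n = 256
        x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        dx = x[1] - x[0]

        # Skew-symmetric centered-difference operator L = -c * d/dx, periodic.
        c = 1.0
        D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2.0 * dx)
        D[0, -1], D[-1, 0] = -1.0 / (2.0 * dx), 1.0 / (2.0 * dx)  # periodic wrap
        L = -c * D

        tau = 1.0                 # time step far beyond any explicit CFL limit
        E = expm(tau * L)         # dense stand-in for the approximate propagator

        u = np.exp(-40.0 * (x - np.pi) ** 2)   # initial Gaussian pulse
        for _ in range(10):       # advance 10 large steps to t = 10
            u = E @ u
        print("max|u| after 10 large steps:", np.abs(u).max())  # stays bounded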

  17. Remote visual analysis of large turbulence databases at multiple scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pulido, Jesus; Livescu, Daniel; Kanov, Kalin

    The remote analysis and visualization of raw large turbulence datasets is challenging. Current accurate direct numerical simulations (DNS) of turbulent flows generate datasets with billions of points per time-step and several thousand time-steps per simulation. Until recently, the analysis and visualization of such datasets was restricted to scientists with access to large supercomputers. The public Johns Hopkins Turbulence database simplifies access to multi-terabyte turbulence datasets and facilitates the computation of statistics and extraction of features through the use of commodity hardware. In this paper, we present a framework designed around wavelet-based compression for high-speed visualization of large datasets and methods supporting multi-resolution analysis of turbulence. By integrating common technologies, this framework enables remote access to tools available on supercomputers and over 230 terabytes of DNS data over the Web. Finally, the database toolset is expanded by providing access to exploratory data analysis tools, such as wavelet decomposition capabilities and coherent feature extraction.

  18. Remote visual analysis of large turbulence databases at multiple scales

    DOE PAGES

    Pulido, Jesus; Livescu, Daniel; Kanov, Kalin; ...

    2018-06-15

    The remote analysis and visualization of raw large turbulence datasets is challenging. Current accurate direct numerical simulations (DNS) of turbulent flows generate datasets with billions of points per time-step and several thousand time-steps per simulation. Until recently, the analysis and visualization of such datasets was restricted to scientists with access to large supercomputers. The public Johns Hopkins Turbulence database simplifies access to multi-terabyte turbulence datasets and facilitates the computation of statistics and extraction of features through the use of commodity hardware. In this paper, we present a framework designed around wavelet-based compression for high-speed visualization of large datasets and methods supporting multi-resolution analysis of turbulence. By integrating common technologies, this framework enables remote access to tools available on supercomputers and over 230 terabytes of DNS data over the Web. Finally, the database toolset is expanded by providing access to exploratory data analysis tools, such as wavelet decomposition capabilities and coherent feature extraction.
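
    A minimal sketch of the wavelet-based compression idea using PyWavelets, on a synthetic 1D signal: decompose, keep only the largest coefficients, reconstruct. The wavelet choice and threshold are illustrative, not the database's actual codec.

        import numpy as np
        import pywt

        rng = np.random.default_rng(2)
        x = np.linspace(0.0, 1.0, 4096)
        signal = (np.sin(30 * x) + 0.3 * np.sin(240 * x)
                  + 0.05 * rng.standard_normal(x.size))

        coeffs = pywt.wavedec(signal, "db4", level=6)    # multi-resolution analysis
        flat = np.concatenate([c.ravel() for c in coeffs])
        thresh = np.quantile(np.abs(flat), 0.95)         # keep largest 5% of coeffs
        coeffs = [pywt.threshold(c, thresh, mode="hard") for c in coeffs]

        recon = pywt.waverec(coeffs, "db4")[: signal.size]
        err = np.linalg.norm(recon - signal) / np.linalg.norm(signal)
        print(f"~20x fewer nonzero coefficients, relative L2 error: {err:.3f}")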

  19. Interactive Exploration and Analysis of Large-Scale Simulations Using Topology-Based Data Segmentation.

    PubMed

    Bremer, Peer-Timo; Weber, Gunther; Tierny, Julien; Pascucci, Valerio; Day, Marcus S; Bell, John B

    2011-09-01

    Large-scale simulations are increasingly being used to study complex scientific and engineering phenomena. As a result, advanced visualization and data analysis are also becoming an integral part of the scientific process. Often, a key step in extracting insight from these large simulations involves the definition, extraction, and evaluation of features in the space and time coordinates of the solution. However, in many applications these features involve a range of parameters and decisions that affect the quality and direction of the analysis: examples include particular level sets of a specific scalar field, or local inequalities between derived quantities. A critical step in the analysis is to understand how these arbitrary parameters and decisions impact the statistical properties of the features, since such a characterization helps to evaluate the conclusions of the analysis as a whole. We present a new topological framework that, in a single pass, extracts and encodes entire families of possible feature definitions as well as their statistical properties. For each time step we construct a hierarchical merge tree, a highly compact yet flexible feature representation. While this data structure is more than two orders of magnitude smaller than the raw simulation data, it allows us to extract a set of features for any given parameter selection in a postprocessing step. Furthermore, we augment the trees with additional attributes, making it possible to gather a large number of useful global, local, and conditional statistics that would otherwise be extremely difficult to compile. We also use this representation to create tracking graphs that describe the temporal evolution of the features over time. Our system provides a linked-view interface to explore the time evolution of the graph interactively alongside the segmentation, making it possible to perform extensive data analysis in a very efficient manner. We demonstrate our framework by extracting and analyzing burning cells from a large-scale turbulent combustion simulation; in particular, we show how the statistical analysis enabled by our techniques provides new insight into the combustion process.
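
    The essence of the single-pass construction can be sketched with a union-find sweep that records when superlevel-set features (components above a descending threshold) are born and merge. The 1D field and event bookkeeping below are a toy reduction of the paper's hierarchical merge tree, not its implementation.

        import numpy as np

        def merge_tree_1d(f):
            # Sweep values from high to low; each local max births a feature,
            # and already-swept neighbors merge components (union-find).
            n = len(f)
            parent = list(range(n))

            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]
                    i = parent[i]
                return i

            active = [False] * n
            events = []                      # (threshold value, "birth"/"merge")
            for i in sorted(range(n), key=lambda i: -f[i]):
                active[i] = True
                roots = {find(j) for j in (i - 1, i + 1) if 0 <= j < n and active[j]}
                if not roots:
                    events.append((f[i], "birth"))
                elif len(roots) == 2:
                    events.append((f[i], "merge"))
                for r in roots:
                    parent[r] = i
            return events

        f = np.array([0.1, 0.9, 0.3, 0.7, 0.2, 0.8, 0.0])
        for value, kind in merge_tree_1d(f):
            print(f"{kind:6s} at f = {value:.1f}")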

  20. SAR correlation technique - An algorithm for processing data with large range walk

    NASA Technical Reports Server (NTRS)

    Jin, M.; Wu, C.

    1983-01-01

    This paper presents an algorithm for synthetic aperture radar (SAR) azimuth correlation with extraneously large range migration effect which can not be accommodated by the existing frequency domain interpolation approach used in current SEASAT SAR processing. A mathematical model is first provided for the SAR point-target response in both the space (or time) and the frequency domain. A simple and efficient processing algorithm derived from the hybrid algorithm is then given. This processing algorithm enables azimuth correlation by two steps. The first step is a secondary range compression to handle the dispersion of the spectra of the azimuth response along range. The second step is the well-known frequency domain range migration correction approach for the azimuth compression. This secondary range compression can be processed simultaneously with range pulse compression. Simulation results provided here indicate that this processing algorithm yields a satisfactory compressed impulse response for SAR data with large range migration.

  1. Atomic force microscopic study of step bunching and macrostep formation during the growth of L-arginine phosphate monohydrate single crystals

    NASA Astrophysics Data System (ADS)

    Sangwal, K.; Torrent-Burgues, J.; Sanz, F.; Gorostiza, P.

    1997-02-01

    The experimental results of the formation of step bunches and macrosteps on the {100} face of L-arginine phosphate monohydrate crystals grown from aqueous solutions at different supersaturations studied by using atomic force microscopy are described and discussed. It was observed that (1) the step height does not remain constant with increasing time but fluctuates within a particular range of heights, which depends on the region of step bunches, (2) the maximum height and the slope of bunched steps increases with growth time as well as supersaturation used for growth, and that (3) the slope of steps of relatively small heights is usually low with a value of about 8° and does not depend on the region of formation of step bunches, but the slope of steps of large heights is up to 21°. Analysis of the experimental results showed that (1) at a particular value of supersaturation the ratio of the average step height to the average step spacing is a constant, suggesting that growth of the {100} face of L-arginine phosphate monohydrate crystals occurs by direct integration of growth entities to growth steps, and that (2) the formation of step bunches and macrosteps follows the dynamic theory of faceting, advanced by Vlachos et al.

  2. Hierarchical Engine for Large-scale Infrastructure Co-Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-04-24

    HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.
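
    The co-iteration idea — federates exchanging values at a shared time step until the coupled physics converges — can be sketched with two toy federates. The grid and load models and their coupling are made up for illustration; this is not the HELICS API.

        # Two hypothetical federates coupled at each time step: a "grid" federate
        # computing voltage from load, and a "load" federate whose demand depends
        # on voltage. They co-iterate to a fixed point before time advances.
        def grid_federate(load_kw):
            return 1.0 - 0.004 * load_kw          # p.u. voltage sags with load

        def load_federate(voltage_pu, t):
            base = 10.0 + 2.0 * (t % 24 >= 18)    # evening bump in demand (kW)
            return base * voltage_pu ** 1.5       # voltage-dependent load

        for t in range(0, 24, 6):                 # coarse co-simulation steps (h)
            v, p = 1.0, 10.0
            for it in range(50):                  # co-iterate within the step
                p_new = load_federate(v, t)
                v_new = grid_federate(p_new)
                if abs(p_new - p) < 1e-9 and abs(v_new - v) < 1e-9:
                    break
                v, p = v_new, p_new
            print(f"t={t:2d}h converged in {it:2d} iterations: "
                  f"V={v:.4f} pu, P={p:.3f} kW")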

  3. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique, feasible in Jacobi- and conjugate gradient-based iterative methods using iteration on data, is presented. In the new technique, the multiplication of a vector by a matrix is reorganized into three steps instead of the commonly used two. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. The performance of this program was assessed against other general solving programs via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software: programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively, and the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively. The good performance was due to fast computing time per iteration and quick convergence to the final solutions. The use of preconditioned conjugate gradient-based methods for solving large breeding-value problems is supported by our findings.
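
    A minimal Jacobi-preconditioned conjugate gradient solver for equations of the form A x = b (generic numpy sketch on a random SPD test matrix). The paper's contribution is the three-step iteration-on-data matrix-vector product, which builds the product record by record instead of forming A; here the product is the plain dense A @ p.

        import numpy as np

        def pcg(A, b, m_inv, tol=1e-10, max_iter=1000):
            # Preconditioned conjugate gradients with a diagonal (Jacobi)
            # preconditioner m_inv = 1/diag(A).
            x = np.zeros_like(b)
            r = b - A @ x
            z = m_inv * r
            p = z.copy()
            rz = r @ z
            for it in range(max_iter):
                Ap = A @ p                  # iteration on data would assemble
                alpha = rz / (p @ Ap)       # this product from the records
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    return x, it + 1
                z = m_inv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x, max_iter

        rng = np.random.default_rng(3)
        n = 200
        A = rng.standard_normal((n, n))
        A = A @ A.T + n * np.eye(n)         # symmetric positive definite test matrix
        b = rng.standard_normal(n)
        x, iters = pcg(A, b, 1.0 / np.diag(A))
        print(f"converged in {iters} iterations, "
              f"residual {np.linalg.norm(b - A @ x):.2e}")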

  4. Asynchronous collision integrators: Explicit treatment of unilateral contact with friction and nodal restraints

    PubMed Central

    Wolff, Sebastian; Bucher, Christian

    2013-01-01

    This article presents asynchronous collision integrators and a simple asynchronous method for treating nodal restraints. Asynchronous discretizations allow individual time-step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces, which is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response, in which velocities are modified to avoid penetrations. Although decomposition contact response requires solving a large system of linear equations (critical for the numerical efficiency of explicit time-stepping schemes) and needs special treatment of overconstraint and linear dependence among the contact constraints (arising, for example, from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robustly: only a single constraint involving a very small number of degrees of freedom is considered at once, leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. The contact of nodes that are subject to restraints needs special care; together with the aforementioned projection for restraints, a novel efficient solution scheme is presented. The collision integrator does not influence the critical time step, so the time step can be chosen independently of the underlying time-stepping scheme and may be fixed or time-adaptive. New demands on global collision detection are discussed, exemplified by position codes and node-to-segment integration. Numerical examples illustrate the convergence and efficiency of the new contact algorithm. PMID:23970806

  5. Rational reduction of periodic propagators for off-period observations.

    PubMed

    Blanton, Wyndham B; Logan, John W; Pines, Alexander

    2004-02-01

    Many common solid-state nuclear magnetic resonance problems take advantage of the periodicity of the underlying Hamiltonian to simplify the computation of an observation. Most of the time-domain methods used, however, require the time step between observations to be some integer or reciprocal-integer multiple of the period, thereby restricting the observation bandwidth. Calculations of off-period observations are usually reduced to brute force direct methods resulting in many demanding matrix multiplications. For large spin systems, the matrix multiplication becomes the limiting step. A simple method that can dramatically reduce the number of matrix multiplications required to calculate the time evolution when the observation time step is some rational fraction of the period of the Hamiltonian is presented. The algorithm implements two different optimization routines. One uses pattern matching and additional memory storage, while the other recursively generates the propagators via time shifting. The net result is a significant speed improvement for some types of time-domain calculations.
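
    The time-shifting idea can be sketched for a propagator with period T split into q slices: the q slice propagators are computed (and exponentiated) once, and the evolution for any observation step dt = p·T/q is assembled by cyclic reuse, avoiding a fresh propagator computation at every off-period observation time. The two-level Hamiltonian below is an assumed toy, not an NMR pulse sequence.

        import numpy as np
        from scipy.linalg import expm

        T, q, p = 1.0, 12, 5                  # observation step dt = p*T/q (rational)
        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sz = np.array([[1, 0], [0, -1]], dtype=complex)

        def H(t):                             # assumed periodic toy Hamiltonian
            return 0.5 * sz + 0.3 * np.cos(2 * np.pi * t / T) * sx

        dtau = T / q                          # q slice propagators, computed once
        slices = [expm(-1j * H((k + 0.5) * dtau) * dtau) for k in range(q)]

        def observe(n_obs):
            # Propagator after n_obs observation steps, reusing the q cached
            # slices cyclically instead of re-exponentiating at each time.
            U, k = np.eye(2, dtype=complex), 0
            for _ in range(n_obs * p):
                U = slices[k % q] @ U
                k += 1
            return U

        U = observe(7)
        print("unitarity check |det U| =", abs(np.linalg.det(U)))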

  6. Effective image differencing with convolutional neural networks for real-time transient hunting

    NASA Astrophysics Data System (ADS)

    Sedaghat, Nima; Mahabal, Ashish

    2018-06-01

    Large sky surveys are increasingly relying on image subtraction pipelines for real-time (and archival) transient detection. In this process one has to contend with a varying point-spread function (PSF) and small brightness variations in many sources, as well as artefacts resulting from saturated stars and, in general, matching errors. Very often the differencing is done with a reference image that is deeper than the individual images, and the attendant difference in noise characteristics can also lead to artefacts. We present here a deep-learning approach to transient detection that encapsulates all the steps of a traditional image-subtraction pipeline - image registration, background subtraction, noise removal, PSF matching and subtraction - in a single real-time convolutional network. Once trained, the method works lightning-fast and, given that it performs multiple steps in one go, the time saved and false positives eliminated for multi-CCD surveys like the Zwicky Transient Facility and the Large Synoptic Survey Telescope will be immense, as millions of subtractions will be needed per night.
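
    A minimal PyTorch sketch of the idea: a single convolutional network consumes the science and reference frames as two channels and emits a per-pixel transient-candidate map. The architecture and cutout shapes are illustrative stand-ins, not the authors' network.

        import torch
        import torch.nn as nn

        class DiffNet(nn.Module):
            # Toy stand-in for a one-shot differencing network: registration,
            # PSF matching and subtraction are left to the learned convolutions.
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(2, 16, kernel_size=5, padding=2),  # science + ref
                    nn.ReLU(),
                    nn.Conv2d(16, 16, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.Conv2d(16, 1, kernel_size=1),             # per-pixel score
                )

            def forward(self, science, reference):
                return self.net(torch.cat([science, reference], dim=1))

        model = DiffNet()
        science = torch.randn(1, 1, 64, 64)     # dummy 64x64 cutouts
        reference = torch.randn(1, 1, 64, 64)
        print(model(science, reference).shape)  # torch.Size([1, 1, 64, 64])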

  7. Two Independent Contributions to Step Variability during Over-Ground Human Walking

    PubMed Central

    Collins, Steven H.; Kuo, Arthur D.

    2013-01-01

    Human walking exhibits small variations in both step length and step width, some of which may be related to active balance control. Lateral balance is thought to require integrative sensorimotor control through adjustment of step width rather than length, contributing to greater variability in step width. Here we propose that step length variations are largely explained by the typical human preference for step length to increase with walking speed, which itself normally exhibits some slow and spontaneous fluctuation. In contrast, step width variations should have little relation to speed if they are produced more for lateral balance. As a test, we examined hundreds of overground walking steps by healthy young adults (N = 14, age < 40 yrs.). We found that slow fluctuations in self-selected walking speed (2.3% coefficient of variation) could explain most of the variance in step length (59%, P < 0.01). The residual variability not explained by speed was small (1.5% coefficient of variation), suggesting that step length is actually quite precise if not for the slow speed fluctuations. Step width varied over faster time scales and was independent of speed fluctuations, with variance 4.3 times greater than that for step length (P < 0.01) after accounting for the speed effect. That difference was further magnified by walking with eyes closed, which appears detrimental to control of lateral balance. Humans appear to modulate fore-aft foot placement in precise accordance with slow fluctuations in walking speed, whereas the variability of lateral foot placement appears more closely related to balance. Step variability is separable in both direction and time scale into balance- and speed-related components. The separation of factors not related to balance may reveal which aspects of walking are most critical for the nervous system to control. PMID:24015308
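
    The variance decomposition can be reproduced on synthetic step data: regress step length on a slowly drifting walking speed and compare the residual variance with the balance-related width variance. All numbers below are arbitrary stand-ins, not the study's measurements.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 500
        # Slow, spontaneous speed fluctuation modeled as a scaled random walk.
        speed = 1.3 * (1.0 + 0.023 * np.cumsum(rng.standard_normal(n)) / np.sqrt(n))
        step_len = 0.5 * speed + 0.008 * rng.standard_normal(n)   # tracks speed
        step_wid = 0.10 + 0.02 * rng.standard_normal(n)           # balance-driven

        # Fraction of step-length variance explained by the speed fluctuations.
        slope, intercept = np.polyfit(speed, step_len, 1)
        resid = step_len - (slope * speed + intercept)
        r2 = 1.0 - resid.var() / step_len.var()
        print(f"speed explains {100 * r2:.0f}% of step-length variance; "
              f"width/length residual variance ratio: {step_wid.var() / resid.var():.1f}")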

  8. Self-consistent predictor/corrector algorithms for stable and efficient integration of the time-dependent Kohn-Sham equation

    NASA Astrophysics Data System (ADS)

    Zhu, Ying; Herbert, John M.

    2018-01-01

    The "real time" formulation of time-dependent density functional theory (TDDFT) involves integration of the time-dependent Kohn-Sham (TDKS) equation in order to describe the time evolution of the electron density following a perturbation. This approach, which is complementary to the more traditional linear-response formulation of TDDFT, is more efficient for computation of broad-band spectra (including core-excited states) and for systems where the density of states is large. Integration of the TDKS equation is complicated by the time-dependent nature of the effective Hamiltonian, and we introduce several predictor/corrector algorithms to propagate the density matrix, one of which can be viewed as a self-consistent extension of the widely used modified-midpoint algorithm. The predictor/corrector algorithms facilitate larger time steps and are shown to be more efficient despite requiring more than one Fock build per time step, and furthermore can be used to detect a divergent simulation on-the-fly, which can then be halted or else the time step modified.

  9. Optimal space communications techniques. [discussion of video signals and delta modulation

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1974-01-01

    The encoding of video signals using the Song Adaptive Delta Modulator (Song ADM) is discussed. The video signals are characterized as a sequence of pulses having arbitrary height and width. Although the ADM is suited to tracking signals with fast rise times, it was found that the DM algorithm (which permits an exponential rise when estimating an input step) results in a large overshoot and an underdamped response to the step. An overshoot suppression algorithm that significantly reduces the ringing without affecting the rise time is presented, along with formulae for the rise time and the settling time. Channel errors and their effect on the DM-encoded bit stream were also investigated.

  10. Single-step scanner-based digital image correlation (SB-DIC) method for large deformation mapping in rubber

    NASA Astrophysics Data System (ADS)

    Goh, C. P.; Ismail, H.; Yen, K. S.; Ratnam, M. M.

    2017-01-01

    The incremental digital image correlation (DIC) method has been applied in the past to determine strain in materials that undergo large deformation, such as rubber. This method is, however, prone to cumulative errors, since the total displacement is determined by combining the displacements from numerous stages of the deformation. In this work, a method of mapping large strains in rubber using DIC in a single step, without the need for a series of deformation images, is proposed. The reference subsets were deformed using deformation factors obtained from the experimentally fitted mean stress-axial stretch ratio curve and the theoretical Poisson function. The deformed reference subsets were then correlated with the deformed image after loading. The recently developed scanner-based digital image correlation (SB-DIC) method was applied to dumbbell rubber specimens to obtain the in-plane displacement fields up to 350% axial strain. Comparison of the mean axial strains determined from the single-step SB-DIC method with those from the incremental SB-DIC method showed an average difference of 4.7%. Two rectangular rubber specimens containing circular and square holes were deformed and analysed using the proposed method. The resultant strain maps from the single-step SB-DIC method were compared with the results of finite element modeling (FEM). The comparison shows that the proposed single-step SB-DIC method can be used to map the strain distribution in large-deformation materials like rubber accurately and in a much shorter time than the incremental DIC method.

  11. Variation of the channel temperature in the transmission of lightning leader

    NASA Astrophysics Data System (ADS)

    Chang, Xuan; Yuan, Ping; Cen, Jianyong; Wang, Xuejuan

    2017-06-01

    Based on the time-resolved spectra of the lightning stepped-leader and dart-leader processes, the channel temperature, its evolution with time, and its variation along the channel height during transmission were analyzed. The results show that the stepped-leader tip has a slightly higher temperature than the trailing end, likely caused by the large amount of electric charge at the leader tip. In addition, both temperature and brightness are enhanced at the position of the channel node. The dart leader has a higher channel temperature than the stepped leader but a lower temperature than the return stroke. Meanwhile, the channel temperature of the dart leader increases markedly as it propagates toward the ground.

  12. A family of compact high order coupled time-space unconditionally stable vertical advection schemes

    NASA Astrophysics Data System (ADS)

    Lemarié, Florian; Debreu, Laurent

    2016-04-01

    Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e. most of the grid points are integrated with Courant numbers small compared to the Courant-Friedrichs-Lewy (CFL) condition, except for just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while remaining accurate across changes in Courant number. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e. mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical CFL restriction while avoiding the numerical inaccuracies associated with standard implicit advection schemes (i.e. large sensitivity of the solution to the Courant number, large phase delay, and possibly excess numerical damping with unphysical orientation). Most regional oceanic models have successfully used fourth-order compact schemes for vertical advection. In this talk we present a new general framework for deriving generic expressions for (one-step) coupled time and space high-order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that these schemes are unconditionally stable and retain very good accuracy even for large Courant numbers, at a very reasonable computational cost.

  13. Enhanced conformational sampling via novel variable transformations and very large time-step molecular dynamics

    NASA Astrophysics Data System (ADS)

    Tuckerman, Mark

    2006-03-01

    One of the computational grand challenge problems is to develop methodology capable of sampling conformational equilibria in systems with rough energy landscapes. If met, many important problems, most notably protein folding, could be significantly impacted. In this talk, two new approaches for addressing this problem will be presented. First, it will be shown how molecular dynamics can be combined with a novel variable transformation designed to warp configuration space in such a way that barriers are reduced and attractive basins stretched. This method rigorously preserves equilibrium properties while leading to very large enhancements in sampling efficiency. Extensions of this approach to the calculation/exploration of free energy surfaces will be discussed. Next, a new very large time-step molecular dynamics method will be introduced that overcomes the resonances which plague many molecular dynamics algorithms. The performance of the methods is demonstrated on a variety of systems including liquid water, long polymer chains, simple protein models, and oligopeptides.

  14. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) the high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  15. Development and verification of a real-time stochastic precipitation nowcasting system for urban hydrology in Belgium

    NASA Astrophysics Data System (ADS)

    Foresti, L.; Reyniers, M.; Seed, A.; Delobbe, L.

    2016-01-01

    The Short-Term Ensemble Prediction System (STEPS) is implemented in real-time at the Royal Meteorological Institute (RMI) of Belgium. The main idea behind STEPS is to quantify the forecast uncertainty by adding stochastic perturbations to the deterministic Lagrangian extrapolation of radar images. The stochastic perturbations are designed to account for the unpredictable precipitation growth and decay processes and to reproduce the dynamic scaling of precipitation fields, i.e., the observation that large-scale rainfall structures are more persistent and predictable than small-scale convective cells. This paper presents the development, adaptation and verification of the STEPS system for Belgium (STEPS-BE). STEPS-BE provides, in real time, 20-member ensemble precipitation nowcasts at 1 km and 5 min resolutions up to 2 h lead time using a 4 C-band radar composite as input. In the context of the PLURISK project, STEPS forecasts were generated to be used as input in sewer system hydraulic models for nowcasting urban inundations in the cities of Ghent and Leuven. Comprehensive forecast verification was performed in order to detect systematic biases over the given urban areas and to analyze the reliability of probabilistic forecasts for a set of case studies in 2013 and 2014. The forecast biases over the cities of Leuven and Ghent were found to be small, which is encouraging for future integration of STEPS nowcasts into the hydraulic models. Probabilistic forecasts of exceeding 0.5 mm h-1 are reliable up to 60-90 min lead time, while those for exceeding 5.0 mm h-1 are only reliable up to 30 min. The STEPS ensembles are slightly under-dispersive and represent only 75-90 % of the forecast errors.
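
    As a toy illustration of the basic STEPS idea (Lagrangian persistence plus stochastic growth-and-decay perturbations), the sketch below builds a small ensemble with numpy/scipy. It is not the operational STEPS algorithm: STEPS uses a multiplicative, scale-dependent cascade, while this sketch substitutes a single-scale additive Gaussian perturbation; all names and parameter values are illustrative.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def toy_ensemble_nowcast(rain, u, v, n_members=20, n_steps=24,
                               noise_sigma=3.0, noise_amp=0.15, rng=None):
          # advect the field with a frozen motion vector (Lagrangian persistence)
          # and add spatially correlated noise to mimic growth and decay
          rng = rng or np.random.default_rng(0)
          members = []
          for _ in range(n_members):
              f = rain.copy()
              for _ in range(n_steps):                       # e.g. 24 x 5 min = 2 h
                  f = np.roll(np.roll(f, u, axis=1), v, axis=0)  # integer-pixel advection
                  eps = gaussian_filter(rng.standard_normal(f.shape), noise_sigma)
                  f = np.clip(f + noise_amp * eps, 0.0, None)    # keep rain rates >= 0
              members.append(f)
          return np.stack(members)

      # exceedance probability of 0.5 mm/h at 2 h lead time from the ensemble
      rain0 = np.clip(np.random.default_rng(1).gamma(0.3, 2.0, size=(128, 128)), 0, None)
      ens = toy_ensemble_nowcast(rain0, u=2, v=1)
      p_exc = (ens > 0.5).mean(axis=0)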

  16. Development and verification of a real-time stochastic precipitation nowcasting system for urban hydrology in Belgium

    NASA Astrophysics Data System (ADS)

    Foresti, L.; Reyniers, M.; Seed, A.; Delobbe, L.

    2015-07-01

    The Short-Term Ensemble Prediction System (STEPS) is implemented in real-time at the Royal Meteorological Institute (RMI) of Belgium. The main idea behind STEPS is to quantify the forecast uncertainty by adding stochastic perturbations to the deterministic Lagrangian extrapolation of radar images. The stochastic perturbations are designed to account for the unpredictable precipitation growth and decay processes and to reproduce the dynamic scaling of precipitation fields, i.e. the observation that large-scale rainfall structures are more persistent and predictable than small-scale convective cells. This paper presents the development, adaptation and verification of the STEPS system for Belgium (STEPS-BE). STEPS-BE provides, in real time, 20-member ensemble precipitation nowcasts at 1 km and 5 min resolution up to 2 h lead time using a 4 C-band radar composite as input. In the context of the PLURISK project, STEPS forecasts were generated to be used as input in sewer system hydraulic models for nowcasting urban inundations in the cities of Ghent and Leuven. Comprehensive forecast verification was performed in order to detect systematic biases over the given urban areas and to analyze the reliability of probabilistic forecasts for a set of case studies in 2013 and 2014. The forecast biases over the cities of Leuven and Ghent were found to be small, which is encouraging for future integration of STEPS nowcasts into the hydraulic models. Probabilistic forecasts of exceeding 0.5 mm h-1 are reliable up to 60-90 min lead time, while those for exceeding 5.0 mm h-1 are only reliable up to 30 min. The STEPS ensembles are slightly under-dispersive and represent only 80-90 % of the forecast errors.

  17. A novel method to accurately locate and count large numbers of steps by photobleaching

    PubMed Central

    Tsekouras, Konstantinos; Custer, Thomas C.; Jashnsaz, Hossein; Walter, Nils G.; Pressé, Steve

    2016-01-01

    Photobleaching event counting is a single-molecule fluorescence technique that is increasingly being used to determine the stoichiometry of protein and RNA complexes composed of many subunits in vivo as well as in vitro. By tagging protein or RNA subunits with fluorophores, activating them, and subsequently observing as the fluorophores photobleach, one obtains information on the number of subunits in a complex. The noise properties in a photobleaching time trace depend on the number of active fluorescent subunits. Thus, as fluorophores stochastically photobleach, noise properties of the time trace change stochastically, and these varying noise properties have created a challenge in identifying photobleaching steps in a time trace. Although photobleaching steps are often detected by eye, this method only works for high individual fluorophore emission signal-to-noise ratios and small numbers of fluorophores. With filtering methods or currently available algorithms, it is possible to reliably identify photobleaching steps for up to 20–30 fluorophores and signal-to-noise ratios down to ∼1. Here we present a new Bayesian method of counting steps in photobleaching time traces that takes into account stochastic noise variation in addition to complications such as overlapping photobleaching events that may arise from fluorophore interactions, as well as on-off blinking. Our method is capable of detecting ≥50 photobleaching steps even for signal-to-noise ratios as low as 0.1, can find up to ≥500 steps for more favorable noise profiles, and is computationally inexpensive. PMID:27654946
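
    A toy forward model makes the core difficulty concrete: both the signal and the noise scale with the number of still-active fluorophores, so the noise statistics change at every photobleaching step. The sketch below is a hypothetical simulator of such a trace, not the authors' Bayesian counting method.

      import numpy as np

      def simulate_photobleach_trace(n_fluor=50, t_max=2000, k_bleach=2e-3,
                                     unit_intensity=1.0, snr=0.5, rng=None):
          # each fluorophore bleaches independently with rate k_bleach; the
          # noise standard deviation grows with the number of active emitters,
          # which is what makes step detection hard at low signal-to-noise
          rng = rng or np.random.default_rng(0)
          bleach_times = rng.exponential(1.0 / k_bleach, size=n_fluor)
          t = np.arange(t_max)
          active = (bleach_times[None, :] > t[:, None]).sum(axis=1)
          noise_sd = unit_intensity / snr * np.sqrt(np.maximum(active, 1))
          trace = active * unit_intensity + rng.normal(0.0, noise_sd)
          return t, active, trace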

  18. An approach to improving transporting velocity in the long-range ultrasonic transportation of micro-particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Jianxin; Mei, Deqing, E-mail: meidq-127@zju.edu.cn; Yang, Keji

    2014-08-14

    In existing ultrasonic transportation methods, the long-range transportation of micro-particles is always realized in a step-by-step way. Due to the substantial decrease of the driving force in each step, the transportation is slow and stair-stepping. To improve the transporting velocity, a non-stepping ultrasonic transportation approach is proposed. By quantitatively analyzing the acoustic potential well, an optimal region is defined as the position where the largest driving force is provided under the condition that the driving force is simultaneously the major component of the acoustic radiation force. To keep the micro-particle trapped in the optimal region during the whole transportation process, an approach of optimizing the phase-shifting velocity and phase-shifting step is adopted. Due to the stable and large driving force, the displacement of the micro-particle is an approximately linear function of time, instead of a stair-stepping function of time as in the existing step-by-step methods. An experimental setup is also developed to validate this approach. Long-range ultrasonic transportation of zirconium beads with high transporting velocity was realized. The experimental results demonstrated that this approach is an effective way to improve the transporting velocity in the long-range ultrasonic transportation of micro-particles.

  19. Time Resolved Stereo Particle Image Velocimetry Measurements of the Instabilities Downstream of a Backward-Facing Step in a Swept-Wing Boundary Layer

    NASA Technical Reports Server (NTRS)

    Eppink, Jenna L.; Yao, Chung-Sheng

    2017-01-01

    Time-resolved particle image velocimetry (TRPIV) measurements are performed downstream of a swept backward-facing step with a height of 49% of the boundary-layer thickness. The results agree well qualitatively with previously reported hotwire measurements, though the amplitudes of the fluctuating components measured using TRPIV are higher. Nonetheless, the low-amplitude instabilities in the flow are fairly well resolved using TRPIV. Proper orthogonal decomposition is used to study the development of the traveling cross flow and Tollmien-Schlichting (TS) instabilities downstream of the step and to study how they interact to form the large velocity spikes that ultimately lead to transition. A secondary mode within the traveling cross flow frequency band develops with a wavelength close to that of the stationary cross flow instability, so that at a certain point in the phase it causes an increase in the spanwise modulation initially caused by the stationary cross flow mode. This increased modulation leads to an increase in the amplitude of the TS mode, which, itself, is highly modulated through interactions with the stationary cross flow. When the traveling cross flow and TS modes align in time and space, the large velocity spikes occur. Thus, these three instabilities, which are individually of low amplitude when the spikes start to occur (U'rms/Ue <0.03), interact and combine to cause a large flow disturbance that eventually leads to transition.

  20. Sprint Running Performance and Technique Changes in Athletes During Periodized Training: An Elite Training Group Case Study.

    PubMed

    Bezodis, Ian N; Kerwin, David G; Cooper, Stephen-Mark; Salo, Aki I T

    2017-11-15

    To understand how training periodization influences sprint performance and key step characteristics over an extended training period in an elite sprint training group. Four sprinters were studied during five months of training. Step velocities, step lengths and step frequencies were measured from video of the maximum velocity phase of training sprints. Bootstrapped mean values were calculated for each athlete for each session, and 139 within-athlete, between-session comparisons were made with a repeated measures ANOVA. As training progressed, changes in step velocity remained linked to changes in step frequency. There were 71 between-session comparisons with a change in step velocity yielding at least a large effect size (>1.2), of which 73% had a correspondingly large change in step frequency in the same direction. Within-athlete mean session step length remained relatively constant throughout. Reductions in step velocity and frequency occurred during training phases of high volume lifting and running, with subsequent increases in step velocity and frequency happening during phases of low volume lifting and high intensity sprint work. The importance of step frequency over step length to the changes in performance within a training year was clearly evident for the sprinters studied. Understanding the magnitudes and timings of these changes in relation to the training program is important for coaches and athletes. The underpinning neuro-muscular mechanisms require further investigation, but are likely explained by an increase in force producing capability followed by an increase in the ability to produce that force rapidly.
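
    The bootstrapped session means can be sketched in a few lines. The code below is a generic illustration with made-up step values, not the study's data or analysis pipeline; it uses the identity that step velocity is the product of step length and step frequency.

      import numpy as np

      def bootstrap_session_mean(step_values, n_boot=10000, rng=None):
          # bootstrapped mean of a within-session step characteristic
          # (step velocity, length or frequency) with a 95% percentile CI
          rng = rng or np.random.default_rng(0)
          step_values = np.asarray(step_values, float)
          idx = rng.integers(0, step_values.size, size=(n_boot, step_values.size))
          boot_means = step_values[idx].mean(axis=1)
          return boot_means.mean(), np.percentile(boot_means, [2.5, 97.5])

      # step velocity = step length x step frequency (illustrative values only)
      lengths = np.array([2.10, 2.14, 2.08, 2.12])   # m
      freqs = np.array([4.60, 4.70, 4.65, 4.72])     # Hz
      mean_v, ci_v = bootstrap_session_mean(lengths * freqs)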

  1. Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks.

    PubMed

    Rangan, Aaditya V; Cai, David

    2007-02-01

    We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure of firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models, for example those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of single neuron spike-time via a polynomial interpolation as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in [Formula: see text] operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rate, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system. For networks operating in realistic dynamical regimes, such as strongly fluctuating, high-conductance states, our methods are designed to achieve statistical accuracy when very large time-steps are used. Moreover, our methods can also achieve trajectory-wise accuracy when small time-steps are used.
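
    The integrating-factor idea can be shown on a single conductance-based neuron. The sketch below is a deliberately simplified toy, not the paper's network method with spike-spike corrections and clustering: it holds the conductance fixed over one step, which makes the voltage update exact and therefore stable even in stiff, high-conductance states where forward Euler fails.

      import numpy as np

      def lif_exp_step(V, g_exc, dt, g_L=0.05, E_L=-70.0, E_exc=0.0,
                       V_thresh=-55.0, V_reset=-65.0):
          # one exponential-integrator step for C dV/dt = -g_L (V - E_L)
          # - g_exc (V - E_exc) with C = 1, holding g_exc constant over dt;
          # the update is exact for piecewise-constant conductance
          g_tot = g_L + g_exc
          V_inf = (g_L * E_L + g_exc * E_exc) / g_tot     # steady-state voltage
          V_new = V_inf + (V - V_inf) * np.exp(-g_tot * dt)
          spiked = V_new >= V_thresh
          return np.where(spiked, V_reset, V_new), spiked

      V = np.full(100, -70.0)                 # a population of 100 toy neurons
      for _ in range(1000):
          V, spiked = lif_exp_step(V, g_exc=0.5, dt=1.0)   # large, stable step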

  2. Parallel processing of real-time dynamic systems simulation on OSCAR (Optimally SCheduled Advanced multiprocessoR)

    NASA Technical Reports Server (NTRS)

    Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke

    1989-01-01

    Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, generally, the same calculations are repeated at every time step. However, the Do-all or Do-across techniques cannot be applied to the parallel processing of such simulations, since there exist data dependencies from the end of an iteration to the beginning of the next iteration, and furthermore data input and data output are required every sampling time period. Therefore, parallelism inside the calculation required for a single time step, or a large basic block which consists of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation and assigned to processors by using optimal static scheduling at compile time, in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to extract the advantageous features of static scheduling algorithms to the maximum extent.

  3. Long-term Outcomes After Stepping Down Asthma Controller Medications: A Claims-Based, Time-to-Event Analysis.

    PubMed

    Rank, Matthew A; Johnson, Ryan; Branda, Megan; Herrin, Jeph; van Houten, Holly; Gionfriddo, Michael R; Shah, Nilay D

    2015-09-01

    Long-term outcomes after stepping down asthma medications are not well described. This study was a retrospective time-to-event analysis of individuals diagnosed with asthma who stepped down their asthma controller medications using a US claims database spanning 2000 to 2012. Four-month intervals were established and a step-down event was defined by a ≥ 50% decrease in days-supplied of controller medications from one interval to the next; this definition is inclusive of step-down that occurred without health-care provider guidance or as a consequence of a medication adherence lapse. Asthma stability in the period prior to step-down was defined by not having an asthma exacerbation (inpatient visit, ED visit, or dispensing of a systemic corticosteroid linked to an asthma visit) and having fewer than two rescue inhaler claims in a 4-month period. The primary outcome in the period following step-down was time-to-first asthma exacerbation. Thirty-two percent of the 26,292 included individuals had an asthma exacerbation in the 24-month period following step-down of asthma controller medication, though only 7% had an ED visit or hospitalization for asthma. The length of asthma stability prior to stepping down asthma medication was strongly associated with the risk of an asthma exacerbation in the subsequent 24-month period: < 4 months' stability, 44%; 4 to 7 months, 34%; 8 to 11 months, 30%; and ≥ 12 months, 21% (P < .001). In a large, claims-based, real-world study setting, 32% of individuals have an asthma exacerbation in the 2 years following a step-down event.

  4. Age-related changes in gait adaptability in response to unpredictable obstacles and stepping targets.

    PubMed

    Caetano, Maria Joana D; Lord, Stephen R; Schoene, Daniel; Pelicioni, Paulo H S; Sturnieks, Daina L; Menant, Jasmine C

    2016-05-01

    A large proportion of falls in older people occur when walking. Limitations in gait adaptability might contribute to tripping, a frequently reported cause of falls in this group. To evaluate age-related changes in gait adaptability in response to obstacles or stepping targets presented at short notice, i.e. approximately two steps ahead. Fifty older adults (aged 74±7 years; 34 females) and 21 young adults (aged 26±4 years; 12 females) completed 3 usual gait speed (baseline) trials. They then completed the following randomly presented gait adaptability trials: obstacle avoidance, short stepping target, long stepping target and no target/obstacle (3 trials of each). Compared with the young adults, the older adults slowed significantly in the no target/obstacle trials relative to the baseline trials. They took more steps and spent more time in double support while approaching the obstacle and stepping targets, demonstrated poorer stepping accuracy and made more stepping errors (failed to hit the stepping targets/avoid the obstacle). The older adults also reduced the velocity of the two preceding steps and shortened the previous step in the long stepping target condition and in the obstacle avoidance condition. Compared with their younger counterparts, the older adults exhibited a more conservative adaptation strategy characterised by slow, short and multiple steps with longer time in double support. Even so, they demonstrated poorer stepping accuracy and made more stepping errors. This reduced gait adaptability may place older adults at increased risk of falling when negotiating unexpected hazards.

  5. Optimal variable-grid finite-difference modeling for porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Li, Haishan

    2014-12-01

    Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derived optimal staggered-grid finite difference schemes with variable grid-spacing and time-step for seismic modeling in porous media. FD operators with small grid-spacing and time-step are adopted for low-velocity or small-scale geological bodies, while FD operators with large grid-spacing and time-step are adopted for high-velocity or large-scale regions. The dispersion relations of the FD schemes were derived based on plane wave theory, then the FD coefficients were obtained using the Taylor expansion. Dispersion analysis and modeling results demonstrated that the proposed method has higher accuracy with lower computational cost for poroelastic wave simulation in heterogeneous reservoirs.
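
    The Taylor-expansion step for the FD coefficients is standard and easy to reproduce. The sketch below derives plain Taylor-series staggered-grid coefficients (the paper additionally constrains the coefficients with the dispersion relation, which this sketch omits); for M = 2 it recovers the familiar 9/8 and -1/24.

      import numpy as np

      def staggered_fd_coeffs(M):
          # coefficients c_m of the 2M-th order staggered-grid first derivative
          #   f'(x) ~ (1/h) * sum_m c_m [f(x+(2m-1)h/2) - f(x-(2m-1)h/2)]
          # matching Taylor terms gives sum_m c_m (2m-1)^(2k-1) = delta_{k,1}
          # for k = 1..M, a small linear system
          m = np.arange(1, M + 1)
          k = np.arange(1, M + 1)
          A = (2.0 * m - 1.0) ** (2.0 * k[:, None] - 1.0)
          rhs = np.zeros(M)
          rhs[0] = 1.0
          return np.linalg.solve(A, rhs)

      print(staggered_fd_coeffs(2))   # -> [ 1.125      -0.04166667]  i.e. 9/8, -1/24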

  6. Exoplanet Direct Imaging: Coronagraph Probe Mission Study EXO-C

    NASA Technical Reports Server (NTRS)

    Stapelfeldt, Karl R.

    2013-01-01

    A flagship mission for spectroscopy of ExoEarths is a long-term priority for space astrophysics (Astro2010). It requires 10(exp 10) contrast at 3 lambda/D separation (greater than 10,000 times beyond HST performance) and a large telescope of greater than 4 m aperture: a big step. A mission for spectroscopy of giant planets and imaging of disks requires 10(exp 9) contrast at 3 lambda/D (already demonstrated in the lab) and an approximately 1.5 m telescope; it should be much more affordable and is a good intermediate step. Various PIs have proposed many versions of the latter mission, 17 times since 1999, with no unified approach.

  7. Microphysical Timescales in Clouds and their Application in Cloud-Resolving Modeling

    NASA Technical Reports Server (NTRS)

    Zeng, Xiping; Tao, Wei-Kuo; Simpson, Joanne

    2007-01-01

    Independent prognostic variables in cloud-resolving modeling are chosen on the basis of an analysis of microphysical timescales in clouds versus the time step for numerical integration. Two of them are the moist entropy and the total mixing ratio of airborne water with no contributions from precipitating particles. As a result, temperature can be diagnosed easily from those prognostic variables, and cloud microphysics can be separated (or modularized) from moist thermodynamics. Numerical comparison experiments show that those prognostic variables work well even when a large time step (e.g., 10 s) is used for numerical integration.

  8. An efficient and reliable predictive method for fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-13

    In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: First, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  9. An efficient and reliable predictive method for fluidized bed simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-29

    In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: First, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  10. Simplified jet fuel reaction mechanism for lean burn combustion application

    NASA Technical Reports Server (NTRS)

    Lee, Chi-Ming; Kundu, Krishna; Ghorashi, Bahman

    1993-01-01

    Successful modeling of combustion and emissions in gas turbine engine combustors requires an adequate description of the reaction mechanism. Detailed mechanisms contain a large number of chemical species participating simultaneously in many elementary kinetic steps. Current computational fluid dynamic models must include fuel vaporization, fuel-air mixing, chemical reactions, and complicated boundary geometries. A five-step Jet-A fuel mechanism, which involves pyrolysis and subsequent oxidation of paraffin and aromatic compounds, is presented. This mechanism is verified by comparing with Jet-A fuel ignition delay time experimental data and species concentrations obtained from flametube experiments. This five-step mechanism appears to be better than the current one- and two-step mechanisms.

  11. Method of Lines Transpose an Implicit Vlasov Maxwell Solver for Plasmas

    DTIC Science & Technology

    2015-04-17

    boundary crossings should be rare. Numerical results for the Bennett pinch are given in Figure 9. In order to resolve large gradients near the center of the...contributing to the large error at the center of the beam due to large gradients there) and with the finite beam cut-off radius and the outflow boundary...usable time step size can be limited by the numerical accuracy of the method when there are large gradients (high-frequency content) in the solution. We

  12. Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, S.

    2002-07-01

    As a result of advancing TCP/IP-based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity-related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches. Therefore, the execution of the coupled codes usually requires significant CPU time when a complicated system is analyzed. The Leap Frog scheme has been used to reduce the run time. The extent of the decoupling is usually determined based on a trial and error process for a specific analysis. It is the intent of this paper to develop a set of general criteria which can be used to invoke an automatic Leap Frog algorithm. The algorithm will not only provide the run time reduction but also preserve the accuracy. The criteria will also serve as the basis of an automatic time step sub-cycle scheme when a sudden reactivity change is introduced and the thermal-hydraulic code is marching with a relatively large time step. (authors)
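
    A toy version of the sub-cycle idea, using one-delayed-group point kinetics as a stand-in for the neutronics code, is sketched below. The sub-cycle criterion (a cap on the reactivity change per sub-step, plus an explicit-stability cap) and all parameter values are hypothetical, not the criteria developed in the paper.

      import numpy as np

      def coupled_step(P, C, rho, drho, dt_th, beta=0.0065, lam=0.08,
                       Lambda=1e-4, drho_max=1e-4):
          # split the large thermal-hydraulic step dt_th into neutronics
          # sub-cycles: limit the reactivity change per sub-step (accuracy)
          # and keep the explicit update inside the prompt-mode stability limit
          n_sub = max(int(np.ceil(abs(drho) / drho_max)),
                      int(np.ceil(dt_th * (beta / Lambda) / 0.2)))
          dt = dt_th / n_sub
          for i in range(n_sub):
              r = rho + drho * (i + 1) / n_sub      # reactivity ramp within the step
              dP = ((r - beta) / Lambda * P + lam * C) * dt
              dC = (beta / Lambda * P - lam * C) * dt
              P, C = P + dP, C + dC
          return P, C, rho + drho

      P, C, rho = 1.0, 0.0065 / (0.08 * 1e-4), 0.0  # start at precursor equilibrium
      for _ in range(10):                           # slow reactivity insertion
          P, C, rho = coupled_step(P, C, rho, drho=5e-4, dt_th=0.02)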

  13. A multi-layer steganographic method based on audio time domain segmented and network steganography

    NASA Astrophysics Data System (ADS)

    Xue, Pengfei; Liu, Hanlin; Hu, Jingsong; Hu, Ronggui

    2018-05-01

    Both audio steganography and network steganography belong to modern steganography. Audio steganography has a large capacity; network steganography is difficult to detect or track. In this paper, a multi-layer steganographic method based on the collaboration of the two (MLS-ATDSS&NS) is proposed. MLS-ATDSS&NS is realized in two covert layers (an audio steganography layer and a network steganography layer) in two steps. A new audio time domain segmented steganography (ATDSS) method is proposed in step 1, and the collaboration method of ATDSS and NS is proposed in step 2. The experimental results showed that the advantage of MLS-ATDSS&NS over others is a better trade-off between capacity, anti-detectability and robustness, i.e. higher steganographic capacity, better anti-detectability and stronger robustness.

  14. Non-equilibrium calculations of atmospheric processes initiated by electron impact.

    NASA Astrophysics Data System (ADS)

    Campbell, L.; Brunger, M. J.

    2007-05-01

    Electron impact in the atmosphere produces ionisation, dissociation, electronic excitation and vibrational excitation of atoms and molecules. The products can then take part in chemical reactions, recombination with electrons, or radiative or collisional deactivation. While most such processes are fast, some longer--lived species do not reach equilibrium. The electron source (photoelectrons or auroral electrons) also varies over time and longer-lived species can move substantially in altitude by molecular, ambipolar or eddy diffusion. Hence non-equilibrium calculations are required in some circumstances. Such time-step calculations need to have sufficiently short steps so that the fastest processes are still calculated correctly, but this can lead to computation times that are too large. Hence techniques to allow for longer time steps by incorporating equilibrium calculations are described. Examples are given for results of atmospheric non-equilibrium calculations, including the populations of the vibrational levels of ground state N2, the electron density and its dependence on vibrationally excited N2, predictions of nitric oxide density, and detailed processes during short duration auroral events.
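
    The idea of folding equilibrium calculations into a longer time step can be reduced to a two-species toy: a short-lived species is assigned its instantaneous balance value each step, while only the long-lived species is integrated. The sketch below is a hypothetical illustration of that hybrid strategy, not the chemistry of the actual atmospheric model.

      import numpy as np

      def hybrid_step(n_slow, source, k_fast_loss, k_slow_loss, dt):
          # fast species: production/loss balance is reached well within one
          # step, so use its quasi-steady-state value instead of resolving it
          n_fast = source / k_fast_loss
          # slow species: integrate explicitly with the long step dt, fed by
          # the (equilibrated) fast species
          prod_slow = k_fast_loss * n_fast
          n_slow = n_slow + (prod_slow - k_slow_loss * n_slow) * dt
          return n_slow, n_fast

      n_slow, n_fast = 0.0, 0.0
      for _ in range(100):      # dt is set by the slow chemistry, not the fast one
          n_slow, n_fast = hybrid_step(n_slow, source=1.0, k_fast_loss=1e3,
                                       k_slow_loss=1e-2, dt=1.0)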

  15. Improvisation Begins with Exploration: Giving Students Time to Explore the Sounds They Can Make with Their Instruments and Voices Is the First Step to Helping Them Become Successful Improvisers

    ERIC Educational Resources Information Center

    Volz, Micah D.

    2005-01-01

    Improvisation can be difficult to teach in any music classroom, but it can be particularly problematic in large ensembles like band, chorus, or orchestra. John Kratus proposes seven levels of improvisation, with exploration as the first step in the development of improvisation skills. Through experiences in making sounds, children begin to develop…

  16. Core-shell TiO2@ZnO nanorods for efficient ultraviolet photodetection

    NASA Astrophysics Data System (ADS)

    Panigrahi, Shrabani; Basak, Durga

    2011-05-01

    Core-shell TiO2@ZnO nanorods (NRs) have been fabricated by a simple two-step method: growth of a ZnO NR array by an aqueous chemical technique, and then coating of the NRs with a solution of titanium isopropoxide [Ti(OC3H7)4] followed by a heating step to form the shell. The core-shell nanocomposites are composed of single-crystalline ZnO NRs coated with a thin TiO2 shell layer obtained by varying the number of coatings (one, three and five times). The ultraviolet (UV) emission intensity of the nanocomposite is largely quenched due to efficient electron-hole separation, which reduces band-to-band recombination. The UV photoconductivity of the core-shell structure with three TiO2 coatings is greatly enhanced due to photoelectron transfer between the core and the shell. The UV photosensitivity of the nanocomposite becomes four times larger, while the photocurrent decay during steady UV illumination is decreased by almost a factor of 7 compared to the as-grown ZnO NRs, indicating the high efficiency of these core-shell structures as UV sensors.

  17. PVT: An Efficient Computational Procedure to Speed up Next-generation Sequence Analysis

    PubMed Central

    2014-01-01

    Background High-throughput Next-Generation Sequencing (NGS) techniques are advancing genomics and molecular biology research. This technology generates substantially large data, which poses a major challenge to scientists seeking an efficient, cost- and time-effective solution to analyse such data. Further, for the different types of NGS data, there are certain common challenging steps involved in analysing those data. Spliced alignment is one such fundamental step in NGS data analysis which is extremely computationally intensive as well as time consuming. Serious problems exist even with the most widely used spliced alignment tools. TopHat is one such widely used spliced alignment tool which, although it supports multithreading, does not efficiently utilize computational resources in terms of CPU utilization and memory. Here we have introduced PVT (Pipelined Version of TopHat), where we take a modular approach by breaking TopHat's serial execution into a pipeline of multiple stages, thereby increasing the degree of parallelization and computational resource utilization. Thus we address the discrepancies in TopHat so as to analyze large NGS data efficiently. Results We analysed the SRA dataset (SRX026839 and SRX026838) consisting of single end reads and SRA data SRR1027730 consisting of paired-end reads. We used TopHat v2.0.8 to analyse these datasets and noted the CPU usage, memory footprint and execution time during spliced alignment. With this basic information, we designed PVT, a pipelined version of TopHat that removes the redundant computational steps during 'spliced alignment' and breaks the job into a pipeline of multiple stages (each comprising of different step(s)) to improve its resource utilization, thus reducing the execution time. Conclusions PVT provides an improvement over TopHat for spliced alignment of NGS data. PVT thus resulted in the reduction of the execution time to ~23% for the single end read dataset. Further, PVT designed for paired end reads showed an improved performance of ~41% over TopHat (for the chosen data) with respect to execution time. Moreover we propose PVT-Cloud, which implements the PVT pipeline in a cloud computing system. PMID:24894600
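
    The pipelining strategy itself is generic and can be sketched independently of TopHat. The toy below (hypothetical stage functions, not PVT's actual stage decomposition) wires stages together with queues so that stage i+1 can work on one chunk while stage i processes the next; on most platforms it must be run under an if __name__ == "__main__": guard, and stage functions must be top-level so they can be pickled.

      import multiprocessing as mp

      def stage(fn, q_in, q_out):
          # generic pipeline stage: pull a chunk, process it, push it on;
          # None is the end-of-stream marker
          for chunk in iter(q_in.get, None):
              q_out.put(fn(chunk))
          q_out.put(None)

      def run_pipeline(chunks, stage_fns):
          # one process per stage, connected by queues, so stages overlap in time
          queues = [mp.Queue() for _ in range(len(stage_fns) + 1)]
          procs = [mp.Process(target=stage, args=(fn, queues[i], queues[i + 1]))
                   for i, fn in enumerate(stage_fns)]
          for p in procs:
              p.start()
          for c in chunks:
              queues[0].put(c)
          queues[0].put(None)
          results = list(iter(queues[-1].get, None))
          for p in procs:
              p.join()
          return results

      def parse(chunk): return chunk.lower()     # placeholder stage functions,
      def align(chunk): return chunk[::-1]       # standing in for real pipeline work

      if __name__ == "__main__":
          print(run_pipeline(["CHUNK1", "CHUNK2", "CHUNK3"], [parse, align]))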

  18. PVT: an efficient computational procedure to speed up next-generation sequence analysis.

    PubMed

    Maji, Ranjan Kumar; Sarkar, Arijita; Khatua, Sunirmal; Dasgupta, Subhasis; Ghosh, Zhumur

    2014-06-04

    High-throughput Next-Generation Sequencing (NGS) techniques are advancing genomics and molecular biology research. This technology generates substantially large data, which poses a major challenge to scientists seeking an efficient, cost- and time-effective solution to analyse such data. Further, for the different types of NGS data, there are certain common challenging steps involved in analysing those data. Spliced alignment is one such fundamental step in NGS data analysis which is extremely computationally intensive as well as time consuming. Serious problems exist even with the most widely used spliced alignment tools. TopHat is one such widely used spliced alignment tool which, although it supports multithreading, does not efficiently utilize computational resources in terms of CPU utilization and memory. Here we have introduced PVT (Pipelined Version of TopHat), where we take a modular approach by breaking TopHat's serial execution into a pipeline of multiple stages, thereby increasing the degree of parallelization and computational resource utilization. Thus we address the discrepancies in TopHat so as to analyze large NGS data efficiently. We analysed the SRA dataset (SRX026839 and SRX026838) consisting of single end reads and SRA data SRR1027730 consisting of paired-end reads. We used TopHat v2.0.8 to analyse these datasets and noted the CPU usage, memory footprint and execution time during spliced alignment. With this basic information, we designed PVT, a pipelined version of TopHat that removes the redundant computational steps during 'spliced alignment' and breaks the job into a pipeline of multiple stages (each comprising of different step(s)) to improve its resource utilization, thus reducing the execution time. PVT provides an improvement over TopHat for spliced alignment of NGS data. PVT thus resulted in the reduction of the execution time to ~23% for the single end read dataset. Further, PVT designed for paired end reads showed an improved performance of ~41% over TopHat (for the chosen data) with respect to execution time. Moreover we propose PVT-Cloud, which implements the PVT pipeline in a cloud computing system.

  19. Effect of time step size and turbulence model on the open water hydrodynamic performance prediction of contra-rotating propellers

    NASA Astrophysics Data System (ADS)

    Wang, Zhan-zhi; Xiong, Ying

    2013-04-01

    Growing interest has been devoted to contra-rotating propellers (CRPs) due to their high propulsive efficiency, torque balance, low fuel consumption, low cavitation, low noise performance and low hull vibration. Compared with the single-screw system, open water performance prediction is more difficult because the forward and aft propellers interact with each other and generate a more complicated flow field around the CRP system. The current work focuses on the open water performance prediction of contra-rotating propellers by RANS and the sliding mesh method, considering the effect of computational time step size and turbulence model. The validation study has been performed on two sets of contra-rotating propellers developed by the David W. Taylor Naval Ship R&D Center. Compared with the experimental data, it shows that RANS with the sliding mesh method and the SST k-ω turbulence model has good precision in the open water performance prediction of contra-rotating propellers, and a small time step size can improve the level of accuracy for CRPs with the same blade number on the forward and aft propellers, while a relatively large time step size is a better choice for CRPs with different blade numbers.

  20. Eulerian Lagrangian Adaptive Fup Collocation Method for solving the conservative solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Gotovac, Hrvoje; Srzic, Veljko

    2014-05-01

    Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of available numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or all of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales, and unphysical numerical oscillations (e.g., Herrera et al, 2009; Bosso et al., 2012). In this work we present the Eulerian Lagrangian Adaptive Fup Collocation Method (ELAFCM), based on Fup basis functions and a collocation approach for spatial approximation, and explicit stabilized Runge-Kutta-Chebyshev temporal integration (public domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines, so that they are also compactly supported basis functions; they exactly describe algebraic polynomials and enable a multiresolution adaptive analysis (MRA). MRA is here performed via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy and a near minimum computational cost. FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones. According to our recent results, there is no need to solve a large linear system on the adaptive grid, because each Fup coefficient is obtained from predefined formulas equating the Fup expansion around the corresponding collocation point with a particular collocation operator based on a few surrounding solution values. Furthermore, each Fup coefficient can be obtained independently, which is perfectly suited for parallel processing. The adaptive grid in each time step is obtained from the solution of the last time step or the initial conditions, together with an advective Lagrangian step in the current time step according to the velocity field and continuous streamlines. On the other hand, we implement the explicit stabilized routine SERK2 for the dispersive Eulerian part of the solution in the current time step on the obtained spatial adaptive grid. The overall adaptive concept does not require solving large linear systems for the spatial and temporal approximation of conservative transport. Also, this new Eulerian-Lagrangian-Collocation scheme resolves all the mentioned numerical problems due to its adaptive nature and its ability to control numerical errors in space and time. The proposed method solves advection in a Lagrangian way, eliminating the problems of Eulerian methods, while the optimal collocation grid efficiently describes the solution and boundary conditions, eliminating the use of large numbers of particles and other problems of Lagrangian methods. Finally, numerical tests show that this approach enables not only an accurate velocity field, but also conservative transport even in highly heterogeneous porous media, resolving all spatial and temporal scales of the concentration field.

  1. Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing

    PubMed Central

    Zhang, Qianghui; Wu, Junjie; Li, Wenchao; Huang, Yulin; Yang, Jianyu; Yang, Haiguang

    2016-01-01

    Free of the constraints of orbital mechanics, weather conditions and minimum antenna area, synthetic aperture radar (SAR) carried on a near-space platform is more suitable for sustained large-scene imaging than its spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), a novel wide-swath imaging mode that allows the SAR beam to scan along the azimuth, can reduce the time of echo acquisition for a large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, two-step processing (TSP) is first adopted to eliminate the Doppler aliasing of the echo. Then, the data is focused in the two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging applications. PMID:27472341

  2. A New Family of Compact High Order Coupled Time-Space Unconditionally Stable Vertical Advection Schemes

    NASA Astrophysics Data System (ADS)

    Lemarié, F.; Debreu, L.

    2016-02-01

    Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time-step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e. most of the grid points are integrated with small Courant numbers, compared to the Courant-Friedrichs-Lewy (CFL) condition, except in just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while being robust to changes in Courant number in terms of accuracy. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e. mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical Courant-Friedrichs-Lewy (CFL) restriction, while avoiding the numerical inaccuracies associated with standard implicit advection schemes (i.e. large sensitivity of the solution to the Courant number, large phase delay, and possibly an excess of numerical damping with unphysical orientation). Most regional oceanic models have been successfully using fourth order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers while having a very reasonable computational cost. To our knowledge, no unconditionally stable scheme with such high order accuracy in time and space has been presented so far in the literature. Furthermore, we show how those schemes can be made monotonic without compromising their stability properties.

  3. A New Insight into the Mechanism of NADH Model Oxidation by Metal Ions in Non-Alkaline Media.

    PubMed

    Yang, Jin-Dong; Chen, Bao-Long; Zhu, Xiao-Qing

    2018-06-11

    For a long time, it has been controversial whether a three-step (e-H+-e) or a two-step (e-H•) mechanism governs the oxidations of NADH and its models by metal ions in non-alkaline media; the latter mechanism has been accepted by the majority of researchers. In this work, 1-benzyl-1,4-dihydronicotinamide (BNAH) and 1-phenyl-1,4-dihydronicotinamide (PNAH) are used as NADH models, and the ferrocenium (Fc+) metal ion as an electron acceptor. The kinetics of the oxidations of the NADH models by Fc+ in pure acetonitrile were monitored using UV-Vis absorption, and a quadratic relationship between kobs and the concentrations of the NADH models was found for the first time. The rate expression of the reactions developed according to the three-step mechanism is quite consistent with the quadratic curves. The rate constants, thermodynamic driving forces and KIEs of each elementary step of the reactions were estimated. All the results supported the three-step mechanism. The intrinsic kinetic barriers of the proton transfer from BNAH+• to BNAH and of the hydrogen atom transfer between BNAH+• and BNAH+• were estimated; the former is 11.8 kcal/mol, and the latter is larger than 24.3 kcal/mol. It is the large intrinsic kinetic barrier of the hydrogen atom transfer that makes the reactions follow the three-step rather than the two-step mechanism. Further investigation of the factors affecting the intrinsic kinetic barrier of chemical reactions indicated that the large intrinsic kinetic barrier of the hydrogen atom transfer originates from the repulsion of positive charges between BNAH+• and BNAH+•. The greatest contribution of this work is the discovery of the quadratic dependence of kobs on the concentrations of the NADH models, which is inconsistent with the conventional viewpoint of the "two-step mechanism" for the oxidations of NADH and its models by metal ions in non-alkaline media.
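
    The diagnostic observation is easy to test numerically: under the three-step rate law, kobs should contain both a linear and a quadratic term in the model concentration. The sketch below fits kobs = a*[BNAH] + b*[BNAH]^2 by least squares; the concentration/k_obs pairs are placeholders, not the paper's data.

      import numpy as np

      # placeholder concentration / k_obs pairs (illustrative only)
      conc = np.array([1e-3, 2e-3, 4e-3, 8e-3])          # [BNAH] in mol/L
      k_obs = np.array([0.012, 0.032, 0.096, 0.320])     # s^-1

      # fit k_obs = a*conc + b*conc^2 (no constant term); a significant
      # quadratic coefficient b is the signature that discriminates the
      # three-step from the two-step mechanism
      X = np.column_stack([conc, conc ** 2])
      (a, b), *_ = np.linalg.lstsq(X, k_obs, rcond=None)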

  4. Detecting trends in forest disturbance and recovery using yearly Landsat time series: 2. TimeSync — Tools for calibration and validation

    Treesearch

    Warren B. Cohen; Zhiqiang Yang; Robert Kennedy

    2010-01-01

    Availability of free, high quality Landsat data portends a new era in remote sensing change detection. Using dense (~annual) Landsat time series (LTS), we can now characterize vegetation change over large areas at an annual time step and at the spatial grain of anthropogenic disturbance. Additionally, we expect more accurate detection of subtle disturbances and...

  5. Fabrication of Large Bulk High Temperature Superconducting Articles

    NASA Technical Reports Server (NTRS)

    Koczor, Ronald (Inventor); Hiser, Robert A. (Inventor)

    2003-01-01

    A method of fabricating large bulk high temperature superconducting articles which comprises the steps of selecting predetermined sizes of crystalline superconducting materials and mixing these specific sizes of particles into a homogeneous mixture, which is then poured into a die. The die is placed in a press and pressurized to a predetermined pressure for a predetermined time, and the article is heat treated in a furnace at predetermined temperatures for a predetermined time. The article is left in the furnace to soak at predetermined temperatures for a predetermined period of time and is oxygenated by an oxygen source during the soaking period.

  6. Large-eddy simulation of a backward facing step flow using a least-squares spectral element method

    NASA Technical Reports Server (NTRS)

    Chan, Daniel C.; Mittal, Rajat

    1996-01-01

    We report preliminary results obtained from the large eddy simulation of a backward facing step at a Reynolds number of 5100. The numerical platform is based on a high order Legendre spectral element spatial discretization and a least squares time integration scheme. A non-reflective outflow boundary condition is in place to minimize the effect of downstream influence. A Smagorinsky model with Van Driest near-wall damping is used for sub-grid-scale modeling. Comparisons of mean velocity profiles and wall pressure show good agreement with benchmark data. More studies are needed to evaluate the sensitivity of this method to numerical parameters before it is applied to complex engineering problems.

  7. Simulation methods with extended stability for stiff biochemical kinetics.

    PubMed

    Rué, Pau; Villà-Freixa, Jordi; Burrage, Kevin

    2010-08-11

    With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, tau, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where tau can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called tau-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as tau grows. In this paper we extend Poisson tau-leap methods to a general class of Runge-Kutta (RK) tau-leap methods. We show that with the proper selection of the coefficients, the variance of the extended tau-leap can be well-behaved, leading to significantly larger step sizes. The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original tau-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
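
    For reference, the primitive that the paper generalizes is the basic Poisson tau-leap step, sketched below for a toy reversible isomerization. The RK extension in the paper wraps additional stages around this step to control the steady-state variance; the crude negativity guard here is exactly the kind of artifact the better methods avoid.

      import numpy as np

      def propensities(x, c):
          # reversible isomerization A <-> B: a1 = c1*A, a2 = c2*B
          return np.array([c[0] * x[0], c[1] * x[1]])

      def tau_leap(x, c, stoich, tau, n_steps, rng=None):
          # fire each reaction channel a Poisson number of times per step
          # instead of simulating every event as the SSA does
          rng = rng or np.random.default_rng(0)
          for _ in range(n_steps):
              a = propensities(x, c)
              k = rng.poisson(a * tau)               # events per channel
              x = np.maximum(x + stoich.T @ k, 0)    # crude negativity guard
          return x

      stoich = np.array([[-1, 1],    # A -> B
                         [1, -1]])   # B -> A
      x = tau_leap(np.array([1000, 0]), c=(1.0, 0.5), stoich=stoich,
                   tau=0.05, n_steps=200)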

  8. Large woody debris and flow resistance in step-pool channels, Cascade Range, Washington

    USGS Publications Warehouse

    Curran, Janet H.; Wohl, Ellen E.

    2003-01-01

    Total flow resistance, measured as Darcy-Weisbach f, in 20 step-pool channels with large woody debris (LWD) in Washington, ranged from 5 to 380 during summer low flows. Step risers in the study streams consist of either (1) large and relatively immobile woody debris, bedrock, or roots that form fixed, or “forced,” steps, or (2) smaller and relatively mobile wood or clasts, or a mixture of both, arranged across the channel by the stream. Flow resistance in step-pool channels may be partitioned into grain, form, and spill resistance. Grain resistance is calculated as a function of particle size, and form resistance is calculated as large woody debris drag. Combined, grain and form resistance account for less than 10% of the total flow resistance. We initially assumed that the substantial remaining portion is spill resistance attributable to steps. However, measured step characteristics could not explain between-reach variations in flow resistance. This suggests that other factors may be significant; the coefficient of variation of the hydraulic radius explained 43% of the variation in friction factors between streams, for example. Large woody debris generates form resistance on step treads and spill resistance at step risers. Because the form resistance of step-pool channels is relatively minor compared to spill resistance and because wood in steps accentuates spill resistance by increasing step height, we suggest that wood in step risers influences channel hydraulics more than wood elsewhere in the channel. Hence, the distribution and function, not just abundance, of large woody debris is critical in steep, step-pool channels.
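
    The quantity being partitioned has a simple closed form: for a reach with hydraulic radius R, friction slope S and mean velocity v, the Darcy-Weisbach friction factor is f = 8gRS/v^2. A one-line sketch with illustrative values (not the study's measurements):

      def darcy_weisbach_f(g, R, S, v):
          # reach-averaged Darcy-Weisbach friction factor f = 8 g R S / v^2,
          # with hydraulic radius R (m), friction slope S (-) and mean
          # velocity v (m/s), the standard definition of total flow resistance
          return 8.0 * g * R * S / v ** 2

      f = darcy_weisbach_f(g=9.81, R=0.15, S=0.08, v=0.35)   # ~7.7, within 5-380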

  9. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    NASA Astrophysics Data System (ADS)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation are sometimes unstable when forward modeling of seismic waves uses large time steps over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling by applying the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling with strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method overcomes the residual qSV wave problem of seismic modeling in anisotropic media and maintains the stability of the wavefield propagation for large time steps.
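
    The structure-preserving ingredient can be illustrated on a 1D toy. The sketch below applies a symplectic Stormer-Verlet (leapfrog) step to the acoustic wave equation in Hamiltonian form, with the spatial derivative evaluated spectrally; this substitutes a plain Fourier pseudospectral derivative for the paper's Fourier finite-difference operator, so it is an analogue, not the symplectic FFD method itself.

      import numpy as np

      def symplectic_fourier_step(u, p, c, k2, dt):
          # Stormer-Verlet step for u_t = p, p_t = c^2 u_xx, with u_xx
          # evaluated spectrally as ifft(-k^2 fft(u)); symplectic integrators
          # preserve the phase-space structure, which keeps long runs with
          # sizable time steps from drifting in energy
          def u_xx(f):
              return np.fft.ifft(-k2 * np.fft.fft(f)).real
          p = p + 0.5 * dt * c ** 2 * u_xx(u)    # half kick
          u = u + dt * p                         # drift
          p = p + 0.5 * dt * c ** 2 * u_xx(u)    # half kick
          return u, p

      n, L, c = 256, 2 * np.pi, 1.0
      x = np.linspace(0.0, L, n, endpoint=False)
      k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi
      u, p = np.exp(-40 * (x - np.pi) ** 2), np.zeros(n)
      for _ in range(1000):
          u, p = symplectic_fourier_step(u, p, c, k ** 2, dt=1e-3)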

  10. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
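
    The claim that collisions are never missed regardless of step size rests on detecting contacts within a step rather than only at its endpoints. Here is a minimal sketch of that idea, assuming straight-line motion of two point-like nodes over the step and an illustrative capture radius; the actual algorithm handles moving dislocation segments.

        import numpy as np

        # Step-size-independent collision detection: solve for the earliest
        # time within [0, dt] at which two nodes on linear trajectories first
        # come within the capture radius r.
        def first_contact_time(p1, v1, p2, v2, r, dt):
            dp, dv = p1 - p2, v1 - v2
            a = dv @ dv
            b = 2.0 * dp @ dv
            c = dp @ dp - r * r
            if c <= 0.0:
                return 0.0                   # already in contact
            disc = b * b - 4.0 * a * c
            if a == 0.0 or disc < 0.0:
                return None                  # never reaches capture radius
            t = (-b - np.sqrt(disc)) / (2.0 * a)
            return t if 0.0 <= t <= dt else None

        p1, v1 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
        p2, v2 = np.array([10.0, 0.5]), np.array([-1.0, 0.0])
        print(first_contact_time(p1, v1, p2, v2, r=0.6, dt=10.0))  # ~4.83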

  11. Advanced time integration algorithms for dislocation dynamics simulations of work hardening

    DOE PAGES

    Sills, Ryan B.; Aghaei, Amin; Cai, Wei

    2016-04-25

    Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.

  12. Simplified jet-A kinetic mechanism for combustor application

    NASA Technical Reports Server (NTRS)

    Lee, Chi-Ming; Kundu, Krishna; Ghorashi, Bahman

    1993-01-01

    Successful modeling of combustion and emissions in gas turbine engine combustors requires an adequate description of the reaction mechanism. For hydrocarbon oxidation, detailed mechanisms are only available for the simplest types of hydrocarbons such as methane, ethane, acetylene, and propane. These detailed mechanisms contain a large number of chemical species participating simultaneously in many elementary kinetic steps. Current computational fluid dynamic (CFD) models must include fuel vaporization, fuel-air mixing, chemical reactions, and complicated boundary geometries. Simulating these conditions requires a very sophisticated computer model with large memory capacity and long run times. Therefore, gas turbine combustion modeling has frequently been simplified by using global reaction mechanisms, which can predict only the quantities of interest: heat release rates, flame temperature, and emissions. Jet fuels are wide-boiling-range hydrocarbons with ranges extending through those of gasoline and kerosene. These fuels are chemically complex, often containing more than 300 components. Jet fuel typically can be characterized as containing 70 vol pct paraffin compounds and 25 vol pct aromatic compounds. A five-step Jet-A fuel mechanism which involves pyrolysis and subsequent oxidation of paraffin and aromatic compounds is presented here. The mechanism is verified by comparison with experimental Jet-A fuel ignition delay times and with species concentrations obtained from flametube experiments. This five-step mechanism appears to be better than the current one- and two-step mechanisms.

  13. Multiple time step integrators in ab initio molecular dynamics.

    PubMed

    Luehr, Nathan; Markland, Thomas E; Martínez, Todd J

    2014-02-28

    Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly improve the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
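
    The structure of such a multiple time step integrator can be sketched with a reference-force (RESPA-style) splitting: a cheap surrogate force is integrated with the small inner step, and the expensive correction is applied as impulses at the outer step. The toy forces and step counts below are illustrative stand-ins for the fragment- and range-separation splittings the paper describes.

        import numpy as np

        # RESPA-style splitting: expensive force minus cheap surrogate is the
        # slowly varying correction, applied only at the outer time step.
        def f_expensive(x):              # stands in for the full ab initio force
            return -x - 0.1 * x ** 3

        def f_cheap(x):                  # fast surrogate, evaluated every inner step
            return -x

        x, v = 1.0, 0.0
        dt, n_inner = 0.05, 5
        DT = n_inner * dt                # outer step = 5x inner step

        for _ in range(200):
            v += 0.5 * DT * (f_expensive(x) - f_cheap(x))   # slow half kick
            for _ in range(n_inner):                        # inner velocity Verlet
                v += 0.5 * dt * f_cheap(x)
                x += dt * v
                v += 0.5 * dt * f_cheap(x)
            v += 0.5 * DT * (f_expensive(x) - f_cheap(x))   # slow half kick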

  14. Regulation of step frequency in transtibial amputee endurance athletes using a running-specific prosthesis.

    PubMed

    Oudenhoven, Laura M; Boes, Judith M; Hak, Laura; Faber, Gert S; Houdijk, Han

    2017-01-25

    Running specific prostheses (RSP) are designed to replicate the spring-like behaviour of the human leg during running, by incorporating a real physical spring in the prosthesis. Leg stiffness is an important parameter in running as it is strongly related to step frequency and running economy. To be able to select a prosthesis that contributes to the required leg stiffness of the athlete, it needs to be known to what extent the behaviour of the prosthetic leg during running is dominated by the stiffness of the prosthesis or whether it can be regulated by adaptations of the residual joints. The aim of this study was to investigate whether and how athletes with an RSP could regulate leg stiffness during distance running at different step frequencies. Seven endurance runners with a unilateral transtibial amputation performed five running trials on a treadmill at a fixed speed, while different step frequencies were imposed (preferred step frequency (PSF) and -15%, -7.5%, +7.5% and +15% of PSF). Step time, ground contact time, flight time, leg stiffness and joint kinetics, among other variables, were measured for both legs. In the intact leg, increasing step frequency was accompanied by a decrease in both contact and flight time, while in the prosthetic leg contact time remained constant and only flight time decreased. Accordingly, leg stiffness increased in the intact leg, but not in the prosthetic leg. Although a substantial contribution of the residual leg to total leg stiffness was observed, this contribution did not change considerably with changing step frequency. Amputee athletes do not seem to be able to alter prosthetic leg stiffness to regulate step frequency during running. This invariant behaviour indicates that RSP stiffness has a large effect on total leg stiffness and therefore can have an important influence on running performance. Nevertheless, since prosthetic leg stiffness was considerably lower than stiffness of the RSP, compliance of the residual leg should not be ignored when selecting RSP stiffness. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model

    PubMed Central

    van Albada, Sacha J.; Rowley, Andrew G.; Senk, Johanna; Hopkins, Michael; Schmidt, Maximilian; Stokes, Alan B.; Lester, David R.; Diesmann, Markus; Furber, Steve B.

    2018-01-01

    The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80,000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption. The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks. PMID:29875620

  16. Nonlinear Maps for Design of Discrete Time Models of Neuronal Network Dynamics

    DTIC Science & Technology

    2016-02-29

    Performance/technical report for the period 02-01-2016 to 02-29-2016. ...neuronal model in the form of difference equations that generates neuronal states in discrete moments of time. In this approach, time step can be made...propose to use modern DSP ideas to develop new efficient approaches to the design of such discrete-time models for studies of large-scale neuronal...

  17. Connecting spatial and temporal scales of tropical precipitation in observations and the MetUM-GA6

    NASA Astrophysics Data System (ADS)

    Martin, Gill M.; Klingaman, Nicholas P.; Moise, Aurel F.

    2017-01-01

    This study analyses tropical rainfall variability (on a range of temporal and spatial scales) in a set of parallel Met Office Unified Model (MetUM) simulations at a range of horizontal resolutions, which are compared with two satellite-derived rainfall datasets. We focus on the shorter scales, i.e. from the native grid and time step of the model through sub-daily to seasonal, since previous studies have paid relatively little attention to sub-daily rainfall variability and how this feeds through to longer scales. We find that the behaviour of the deep convection parametrization in this model on the native grid and time step is largely independent of the grid-box size and time step length over which it operates. There is also little difference in the rainfall variability on larger/longer spatial/temporal scales. Tropical convection in the model on the native grid/time step is spatially and temporally intermittent, producing very large rainfall amounts interspersed with grid boxes/time steps of little or no rain. In contrast, switching off the deep convection parametrization, albeit at an unrealistic resolution for resolving tropical convection, results in very persistent (for limited periods), but very sporadic, rainfall. In both cases, spatial and temporal averaging smoothes out this intermittency. On the ~100 km scale, for oceanic regions, the spectra of 3-hourly and daily mean rainfall in the configurations with parametrized convection agree fairly well with those from satellite-derived rainfall estimates, while at ~10-day timescales the averages are overestimated, indicating a lack of intra-seasonal variability. Over tropical land the results are more varied, but the model often underestimates the daily mean rainfall (partly as a result of a poor diurnal cycle) but still lacks variability on intra-seasonal timescales. Ultimately, such work will shed light on how uncertainties in modelling small-/short-scale processes relate to uncertainty in climate change projections of rainfall distribution and variability, with a view to reducing such uncertainty through improved modelling of small-/short-scale processes.

  18. SQERTSS: Dynamic rank based throttling of transition probabilities in kinetic Monte Carlo simulations

    DOE PAGES

    Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; ...

    2017-06-09

    Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of “KMC stiffness” (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate amount of KMC steps/CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events, and increasing the probability of slow process events, allowing rate-limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. As shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.
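
    The throttling idea can be sketched crudely: bin processes into speed-ranks by event frequency and damp the rates of faster ranks so slow, rate-limiting events are sampled more often. The rates, ranking rule and throttle factors below are illustrative only; the SQERTSS criteria for when throttling is safe (quasi-equilibrated fast processes) are not reproduced here.

        import numpy as np

        # KMC event selection with a rank-based throttle on fast processes.
        rng = np.random.default_rng(1)
        rates = np.array([1e6, 1e6, 1e2, 1.0])   # two fast, one medium, one slow
        counts = np.zeros_like(rates)

        for _ in range(10000):
            ranks = np.floor(np.log10(rates / rates.min())).astype(int)
            throttle = 10.0 ** -(2 * ranks)       # stronger damping, faster rank
            w = rates * throttle
            i = rng.choice(len(rates), p=w / w.sum())
            counts[i] += 1
            # dt = -np.log(rng.random()) / w.sum()   # throttled (distorted) clock

        print(counts)                             # slow events now well sampled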

  19. SQERTSS: Dynamic rank based throttling of transition probabilities in kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; Savara, Aditya

    2017-10-01

    Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of "KMC stiffness" (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate amount of KMC steps/CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events, and increasing the probability of slow process events, allowing rate-limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. As shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.

  20. Simulations of precipitation using the Community Earth System Model (CESM): Sensitivity to microphysics time step

    NASA Astrophysics Data System (ADS)

    Murthi, A.; Menon, S.; Sednev, I.

    2011-12-01

    An inherent difficulty in the ability of global climate models to accurately simulate precipitation lies in the use of a large time step, Δt (usually 30 minutes), to solve the governing equations. Since microphysical processes are characterized by small time scales compared to Δt, finite difference approximations used to advance the microphysics equations suffer from numerical instability and large time truncation errors. With this in mind, the sensitivity of precipitation simulated by the atmospheric component of CESM, namely the Community Atmosphere Model (CAM 5.1), to the microphysics time step (τ) is investigated. Model integrations are carried out for a period of five years with a spin-up time of about six months for a horizontal resolution of 2.5 × 1.9 degrees and 30 levels in the vertical, with Δt = 1800 s. The control simulation with τ = 900 s is compared with one using τ = 300 s for accumulated precipitation and radiation budgets at the surface and top of the atmosphere (TOA), while keeping Δt fixed. Our choice of τ = 300 s is motivated by previous work on warm rain processes wherein it was shown that a value of τ around 300 s was necessary, but not sufficient, to ensure positive definiteness and numerical stability of the explicit time integration scheme used to integrate the microphysical equations. However, since the entire suite of microphysical processes is represented in our case, we suspect that this might impose additional restrictions on τ. The τ = 300 s case produces differences in large-scale accumulated rainfall from the τ = 900 s case as large as 200 mm over certain regions of the globe. The spatial patterns of total accumulated precipitation using τ = 300 s are in closer agreement with satellite-observed precipitation when compared to the τ = 900 s case. Differences are also seen in the radiation budget, with the τ = 300 (900) s cases producing surpluses that range between 1-3 W/m2 at both the TOA and surface in the global means. In order to gain some insight into the possible causes of the observed differences, future work will involve performing additional sensitivity tests using the single column model version of CAM 5.1 to gauge the effect of τ on calculations of source terms and mixing ratios used to calculate precipitation in the budget equations.

  1. A novel method to accurately locate and count large numbers of steps by photobleaching.

    PubMed

    Tsekouras, Konstantinos; Custer, Thomas C; Jashnsaz, Hossein; Walter, Nils G; Pressé, Steve

    2016-11-07

    Photobleaching event counting is a single-molecule fluorescence technique that is increasingly being used to determine the stoichiometry of protein and RNA complexes composed of many subunits in vivo as well as in vitro. By tagging protein or RNA subunits with fluorophores, activating them, and subsequently observing as the fluorophores photobleach, one obtains information on the number of subunits in a complex. The noise properties in a photobleaching time trace depend on the number of active fluorescent subunits. Thus, as fluorophores stochastically photobleach, noise properties of the time trace change stochastically, and these varying noise properties have created a challenge in identifying photobleaching steps in a time trace. Although photobleaching steps are often detected by eye, this method only works for high individual fluorophore emission signal-to-noise ratios and small numbers of fluorophores. With filtering methods or currently available algorithms, it is possible to reliably identify photobleaching steps for up to 20-30 fluorophores and signal-to-noise ratios down to ∼1. Here we present a new Bayesian method of counting steps in photobleaching time traces that takes into account stochastic noise variation in addition to complications such as overlapping photobleaching events that may arise from fluorophore interactions, as well as on-off blinking. Our method is capable of detecting ≥50 photobleaching steps even for signal-to-noise ratios as low as 0.1, can find up to ≥500 steps for more favorable noise profiles, and is computationally inexpensive. © 2016 Tsekouras et al. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
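
    The detection problem described above can be sketched by synthesizing a trace whose noise scales with the number of still-active fluorophores and locating steps with a naive change-point search. All parameters are illustrative, and this greedy least-squares segmentation is far simpler than the paper's Bayesian method, which also handles blinking and overlapping events.

        import numpy as np

        # Synthetic photobleaching trace: noise grows with the number of
        # active fluorophores, which is what makes step counting hard.
        rng = np.random.default_rng(2)
        n_fluor, brightness, k_bleach, T = 8, 100.0, 0.02, 600
        t_bleach = rng.exponential(1.0 / k_bleach, n_fluor)
        active = (np.arange(T)[:, None] < t_bleach[None, :]).sum(axis=1)
        trace = brightness * active \
            + rng.normal(0, 10 * np.sqrt(np.maximum(active, 1)), T)

        def split(seg, lo, hi, out, min_gain=5e3):
            """Greedy binary segmentation: add a step where it most cuts SSE."""
            y = seg[lo:hi]
            if len(y) < 10:
                return
            sse0 = ((y - y.mean()) ** 2).sum()
            best, arg = 0.0, None
            for i in range(5, len(y) - 5):
                sse = ((y[:i] - y[:i].mean()) ** 2).sum() \
                    + ((y[i:] - y[i:].mean()) ** 2).sum()
                if sse0 - sse > best:
                    best, arg = sse0 - sse, i
            if arg is not None and best > min_gain:
                out.append(lo + arg)
                split(seg, lo, lo + arg, out, min_gain)
                split(seg, lo + arg, hi, out, min_gain)

        steps = []
        split(trace, 0, T, steps)
        print(sorted(steps))     # estimated photobleaching event times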

  2. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.

  3. It pays to have a spring in your step

    PubMed Central

    Sawicki, Gregory S.; Lewis, Cara L.; Ferris, Daniel P.

    2010-01-01

    A large portion of the mechanical work required for walking comes from muscles and tendons crossing the ankle joint. By storing and releasing elastic energy in the Achilles tendon during each step, humans greatly enhance the efficiency of ankle joint work far beyond what is possible for work performed at the knee and hip joints. Humans produce mechanical work at the ankle joint during walking with an efficiency two to six times greater than isolated muscle efficiency. PMID:19550204

  4. Dynamic performance of MEMS deformable mirrors for use in an active/adaptive two-photon microscope

    NASA Astrophysics Data System (ADS)

    Zhang, Christian C.; Foster, Warren B.; Downey, Ryan D.; Arrasmith, Christopher L.; Dickensheets, David L.

    2016-03-01

    Active optics can facilitate two-photon microscopic imaging deep in tissue. We are investigating fast focus control mirrors used in concert with an aberration correction mirror to control the axial position of focus and system aberrations dynamically during scanning. With an adaptive training step, sample-induced aberrations may be compensated as well. If sufficiently fast and precise, active optics may be able to compensate under-corrected imaging optics as well as sample aberrations to maintain diffraction-limited performance throughout the field of view. Toward this end we have measured a Boston Micromachines Corporation Multi-DM 140 element deformable mirror, and a Revibro Optics electrostatic 4-zone focus control mirror to characterize dynamic performance. Tests for the Multi-DM included both step response and sinusoidal frequency sweeps of specific Zernike modes. For the step response we measured 10%-90% rise times for the target Zernike amplitude, and wavefront rms error settling times. Frequency sweeps identified the 3 dB bandwidth of the mirror when attempting to follow a sinusoidal amplitude trajectory for a specific Zernike mode. For five tested Zernike modes (defocus, spherical aberration, coma, astigmatism and trefoil) we find error settling times for mode amplitudes up to 400 nm to be less than 52 μs, and 3 dB frequencies range from 6.5 kHz to 10 kHz. The Revibro Optics mirror was tested for step response only, with an error settling time of 80 μs for a large 3 μm defocus step, and a settling time of only 18 μs for a 400 nm spherical aberration step. These response speeds are sufficient for intra-scan correction at scan rates typical of two-photon microscopy.
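
    The two figures of merit quoted above, 10%-90% rise time and error settling time, are straightforward to extract from a recorded step response. The sketch below applies them to a synthetic second-order response; the sampling rate, natural frequency, damping and step amplitude are illustrative assumptions, not measured mirror data.

        import numpy as np

        fs = 1e6                                   # assumed 1 MHz sampling
        t = np.arange(0, 2e-3, 1 / fs)
        wn, zeta, target = 2 * np.pi * 8e3, 0.6, 400e-9   # assumed dynamics
        wd = wn * np.sqrt(1 - zeta ** 2)
        y = target * (1 - np.exp(-zeta * wn * t) *
                      (np.cos(wd * t)
                       + zeta / np.sqrt(1 - zeta ** 2) * np.sin(wd * t)))

        def rise_time(t, y, lo=0.1, hi=0.9):
            yf = y[-1]                             # final (settled) value
            t10 = t[np.argmax(y >= lo * yf)]       # first 10% crossing
            t90 = t[np.argmax(y >= hi * yf)]       # first 90% crossing
            return t90 - t10

        def settling_time(t, y, tol=0.05):
            err = np.abs(y - y[-1]) > tol * abs(y[-1])
            return 0.0 if not err.any() else t[np.where(err)[0][-1] + 1]

        print(rise_time(t, y), settling_time(t, y))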

  5. Two Step Acceleration Process of Electrons in the Outer Van Allen Radiation Belt by Time Domain Electric Field Bursts and Large Amplitude Chorus Waves

    NASA Astrophysics Data System (ADS)

    Agapitov, O. V.; Mozer, F.; Artemyev, A.; Krasnoselskikh, V.; Lejosne, S.

    2014-12-01

    A huge number of different non-linear structures (double layers, electron holes, non-linear whistlers, etc.) have been observed by the electric field experiment on the Van Allen Probes in conjunction with relativistic electron acceleration in the Earth's outer radiation belt. These structures, found as short duration (~0.1 ms) quasi-periodic bursts of electric field in the high time resolution electric field waveform, have been called Time Domain Structures (TDS). They can quite effectively interact with radiation belt electrons. Due to the trapping of electrons into these non-linear structures, the electrons are accelerated up to ~10 keV and their pitch angles are changed, especially at low energies (~1 keV). Large amplitude electric field perturbations cause non-linear resonant trapping of electrons into the effective potential of the TDS, and these electrons are then accelerated in the non-homogeneous magnetic field. These locally accelerated electrons create the "seed population" of several-keV electrons that can be accelerated by coherent, large amplitude, upper band whistler waves to MeV energies in this two step acceleration process. All the elements of this chain acceleration mechanism have been observed by the Van Allen Probes.

  6. Direct numerical simulations of premixed autoignition in compressible uniformly-sheared turbulence

    NASA Astrophysics Data System (ADS)

    Towery, Colin; Darragh, Ryan; Poludnenko, Alexei; Hamlington, Peter

    2017-11-01

    High-speed combustion systems, such as scramjet engines, operate at high temperatures and pressures, extremely short combustor residence times, very high rates of shear stress, and intense turbulent mixing. As a result, the reacting flow can be premixed and have highly-compressible turbulence fluctuations. We investigate the effects of compressible turbulence on the ignition delay time, heat-release-rate (HRR) intermittency, and mode of autoignition of premixed hydrogen-air fuel in uniformly-sheared turbulence using new three-dimensional direct numerical simulations with a multi-step chemistry mechanism. We analyze autoignition in both the Eulerian and Lagrangian reference frames at eight different turbulence Mach numbers, Mat, spanning the quasi-isentropic, linear thermodynamic, and nonlinear compressibility regimes, with eddy shocklets appearing in the nonlinear regime. Results are compared to our previous study of premixed autoignition in isotropic turbulence at the same Mat and with a single-step reaction mechanism. This previous study found large decreases in delay times and large increases in HRR intermittency between the linear and nonlinear compressibility regimes, and that detonation waves could form in both regimes.

  7. QUICR-learning for Multi-Agent Coordination

    NASA Technical Reports Server (NTRS)

    Agogino, Adrian K.; Tumer, Kagan

    2006-01-01

    Coordinating multiple agents that need to perform a sequence of actions to maximize a system-level reward requires solving two distinct credit assignment problems. First, credit must be assigned for an action taken at time step t that results in a reward at a later time step t' > t. Second, credit must be assigned for the contribution of agent i to the overall system performance. The first credit assignment problem is typically addressed with temporal difference methods such as Q-learning. The second credit assignment problem is typically addressed by creating custom reward functions. To address both credit assignment problems simultaneously, we propose "Q Updates with Immediate Counterfactual Rewards-learning" (QUICR-learning), designed to improve both the convergence properties and performance of Q-learning in large multi-agent problems. QUICR-learning is based on previous work on single-time-step counterfactual rewards described by the collectives framework. Results on a traffic congestion problem show that QUICR-learning is significantly better than a Q-learner using collectives-based (single-time-step counterfactual) rewards. In addition, QUICR-learning provides significant gains over conventional and local Q-learning. Additional results on a multi-agent grid-world problem show that the improvements due to QUICR-learning are not domain specific and can provide up to a ten-fold increase in performance over existing methods.
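
    The counterfactual credit idea can be sketched in a stateless setting: each agent's immediate reward is the change in the global reward G when its own action is replaced by a fixed null action. The congestion-style G, agent counts and learning rate below are illustrative, not the paper's domains, and the full method combines this credit with ordinary temporal-difference updates over states.

        import numpy as np

        rng = np.random.default_rng(3)
        n_agents, n_actions, alpha, eps = 6, 3, 0.1, 0.1
        Q = np.zeros((n_agents, n_actions))
        NULL = 0                              # fixed counterfactual action

        def G(actions):                       # toy global reward: congestion penalty
            counts = np.bincount(actions, minlength=n_actions)
            return -float((counts ** 2).sum())

        for _ in range(5000):
            explore = rng.random(n_agents) < eps
            a = np.where(explore,
                         rng.integers(n_actions, size=n_agents),
                         Q.argmax(axis=1))
            g = G(a)
            for i in range(n_agents):
                cf = a.copy()
                cf[i] = NULL                  # counterfactual: agent i acts "null"
                r_i = g - G(cf)               # immediate per-agent credit
                Q[i, a[i]] += alpha * (r_i - Q[i, a[i]])

        print(Q.argmax(axis=1))               # agents tend to spread across actions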

  8. Fabrication of efficient planar perovskite solar cells using a one-step chemical vapor deposition method

    PubMed Central

    Tavakoli, Mohammad Mahdi; Gu, Leilei; Gao, Yuan; Reckmeier, Claas; He, Jin; Rogach, Andrey L.; Yao, Yan; Fan, Zhiyong

    2015-01-01

    Organometallic trihalide perovskites are promising materials for photovoltaic applications, which have demonstrated a rapid rise in photovoltaic performance in a short period of time. We report a facile one-step method to fabricate planar heterojunction perovskite solar cells by chemical vapor deposition (CVD), with a solar power conversion efficiency of up to 11.1%. We performed a systematic optimization of CVD parameters such as temperature and growth time to obtain high quality films of CH3NH3PbI3 and CH3NH3PbI3-xClx perovskite. Scanning electron microscopy and time-resolved photoluminescence data showed that the perovskite films have a large grain size of more than 1 micrometer, and carrier lifetimes of 10 ns and 120 ns for CH3NH3PbI3 and CH3NH3PbI3-xClx, respectively. This is the first demonstration of a highly efficient perovskite solar cell using one-step CVD, and there is likely room for significant improvement of device efficiency. PMID:26392200

  9. Protocol for Detection of Yersinia pestis in Environmental ...

    EPA Pesticide Factsheets

    This is the first open-access, detailed protocol available to all government departments and agencies, and their contractors, for detecting Yersinia pestis, the pathogen that causes plague, in multiple environmental sample types including water. Each analytical method includes a step-by-step sample processing procedure for each sample type. The protocol includes real-time PCR, traditional microbiological culture, and the Rapid Viability PCR (RV-PCR) analytical methods. For large-volume water samples it also includes an ultrafiltration-based sample concentration procedure. Because the protocol is available without restriction to all government departments and agencies, and their contractors, the nation will now have increased laboratory capacity to analyze a large number of samples during a wide-area plague incident.

  10. An accelerated hologram calculation using the wavefront recording plane method and wavelet transform

    NASA Astrophysics Data System (ADS)

    Arai, Daisuke; Shimobaba, Tomoyoshi; Nishitsuji, Takashi; Kakue, Takashi; Masuda, Nobuyuki; Ito, Tomoyoshi

    2017-06-01

    Fast hologram calculation methods are critical in real-time holography applications such as three-dimensional (3D) displays. We recently proposed a wavelet transform-based hologram calculation called WASABI. Even though WASABI can decrease the calculation time of a hologram from a point cloud, its calculation time increases with increasing propagation distance. We also proposed a wavefront recording plane (WRP) method. This is a two-step fast hologram calculation in which the first step calculates the superposition of light waves emitted from a point cloud in a virtual plane, and the second step performs a diffraction calculation from the virtual plane to the hologram plane. A drawback of the WRP method arises in the first step when the point cloud has a large number of object points and/or a wide distribution in the depth direction. In this paper, we propose a method combining WASABI and the WRP method in which the drawbacks of each can be complementarily solved. Using a consumer CPU, the proposed method succeeded in performing a hologram calculation with 2048 × 2048 pixels from a 3D object with one million points in approximately 0.4 s.
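
    The two-step WRP structure can be sketched directly: accumulate spherical wavelets from the point cloud on a virtual plane close to the object, then propagate that plane to the hologram with a single FFT-based angular-spectrum step. Wavelength, pixel pitch and geometry below are illustrative assumptions.

        import numpy as np

        wl, pitch, n = 532e-9, 8e-6, 512
        k = 2 * np.pi / wl
        xs = (np.arange(n) - n // 2) * pitch
        X, Y = np.meshgrid(xs, xs)

        # Step 1: superpose point-source fields on the WRP (points sit a
        # short distance behind the plane, so each sum is cheap in practice).
        pts = [(0.0, 0.0, 2e-4), (3e-4, -2e-4, 3e-4)]   # (x, y, depth)
        wrp = np.zeros((n, n), dtype=complex)
        for px, py, pz in pts:
            r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
            wrp += np.exp(1j * k * r) / r

        # Step 2: angular-spectrum propagation from WRP to hologram plane.
        z = 0.05
        fx = np.fft.fftfreq(n, d=pitch)
        FX, FY = np.meshgrid(fx, fx)
        kz = 2 * np.pi * np.sqrt(np.maximum(0.0, (1 / wl) ** 2 - FX ** 2 - FY ** 2))
        hologram = np.fft.ifft2(np.fft.fft2(wrp) * np.exp(1j * kz * z))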

  11. Core-shell TiO2@ZnO nanorods for efficient ultraviolet photodetection.

    PubMed

    Panigrahi, Shrabani; Basak, Durga

    2011-05-01

    Core-shell TiO(2)@ZnO nanorods (NRs) have been fabricated by a simple two-step method: growth of a ZnO NR array by an aqueous chemical technique, and then coating of the NRs with a solution of titanium isopropoxide [Ti(OC(3)H(7))(4)] followed by a heating step to form the shell. The core-shell nanocomposites are composed of single-crystalline ZnO NRs coated with a thin TiO(2) shell layer obtained by varying the number of coatings (one, three and five times). The ultraviolet (UV) emission intensity of the nanocomposite is largely quenched due to an efficient electron-hole separation reducing the band-to-band recombinations. The UV photoconductivity of the core-shell structure with three TiO(2) coatings is largely enhanced due to photoelectron transfer between the core and the shell. The UV photosensitivity of the nanocomposite becomes four times larger, while the photocurrent decay during steady UV illumination is decreased almost 7-fold compared to the as-grown ZnO NRs, indicating the high efficiency of these core-shell structures as UV sensors. © The Royal Society of Chemistry 2011

  12. An automated workflow for parallel processing of large multiview SPIM recordings

    PubMed Central

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-01-01

    Summary: Selective Plane Illumination Microscopy (SPIM) allows imaging of developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. Availability and implementation: The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26628585
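
    The embarrassingly parallel structure the pipeline exploits, one independent job per time point, can be sketched with a plain worker pool; in the published workflow the same mapping is expressed as snakemake rules dispatched to cluster jobs. The paths and the processing step here are hypothetical placeholders.

        from multiprocessing import Pool
        from pathlib import Path

        def process_timepoint(tp: int) -> str:
            raw = Path(f"raw/timepoint_{tp:04d}.tif")          # assumed layout
            out = Path(f"registered/timepoint_{tp:04d}.tif")
            # ... registration / fusion / deconvolution would run here ...
            return f"{raw} -> {out}"

        if __name__ == "__main__":
            with Pool(processes=8) as pool:                    # 8 local workers
                for msg in pool.imap_unordered(process_timepoint, range(240)):
                    print(msg)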

  13. An automated workflow for parallel processing of large multiview SPIM recordings.

    PubMed

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-04-01

    Selective Plane Illumination Microscopy (SPIM) allows imaging of developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. The code is distributed free and open source under the MIT license (http://opensource.org/licenses/MIT). The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  14. FASTPM: a new scheme for fast simulations of dark matter and haloes

    NASA Astrophysics Data System (ADS)

    Feng, Yu; Chu, Man-Yat; Seljak, Uroš; McDonald, Patrick

    2016-12-01

    We introduce FASTPM, a highly scalable approximated particle mesh (PM) N-body solver, which implements the PM scheme enforcing correct linear displacement (1LPT) evolution via modified kick and drift factors. Employing a two-dimensional domain decomposition scheme, FASTPM scales extremely well with a very large number of CPUs. In contrast to the Comoving-Lagrangian (COLA) approach, we do not need to split the force or separately track the 2LPT solution, reducing the code complexity and memory requirements. We compare FASTPM with different numbers of steps (Ns) and force resolution factors (B) against three benchmarks: halo mass function from a friends-of-friends halo finder; halo and dark matter power spectrum; and cross-correlation coefficient (or stochasticity), relative to a high-resolution TREEPM simulation. We show that the modified time stepping scheme reduces the halo stochasticity when compared to COLA with the same number of steps and force resolution. While increasing Ns and B improves the transfer function and cross-correlation coefficient, for many applications FASTPM achieves sufficient accuracy at low Ns and B. For example, an Ns = 10, B = 2 simulation provides a substantial saving (a factor of 10) of computing time relative to an Ns = 40, B = 3 simulation, yet the halo benchmarks are very similar at z = 0. We find that for abundance-matched haloes the stochasticity remains low even for Ns = 5. FASTPM compares well against less expensive schemes, being only 7 (4) times more expensive than a 2LPT initial condition generator for Ns = 10 (Ns = 5). Some of the applications where FASTPM can be useful are generating a large number of mocks, producing non-linear statistics where one varies a large number of nuisance or cosmological parameters, or serving as part of an initial conditions solver.
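
    Structurally, a PM solver like this advances particles with kick and drift operators in the scale factor a. The sketch below shows a plain kick-drift-kick step using standard prefactors for the momentum convention p = a^2 dx/dt (in units with H0 = 1); FASTPM's key change, not reproduced here, is to replace those prefactors with modified factors so that large steps reproduce linear (1LPT) growth exactly. The force solver is a stub.

        import numpy as np

        def pm_force(pos):
            return np.zeros_like(pos)      # placeholder for FFT-based PM gravity

        def E(a, om=0.3):                  # H(a)/H0 for flat LCDM (assumed)
            return np.sqrt(om / a ** 3 + (1.0 - om))

        pos = np.random.rand(4096, 3)      # comoving positions in box units
        p = np.zeros_like(pos)             # momenta, p = a^2 dx/dt
        a_steps = np.linspace(0.1, 1.0, 11)   # Ns = 10 large steps

        for a0, a1 in zip(a_steps[:-1], a_steps[1:]):
            da, ah = a1 - a0, 0.5 * (a0 + a1)
            p += pm_force(pos) * 0.5 * da / (a0 * E(a0))     # half kick
            pos += p * da / (ah ** 3 * E(ah))                # drift
            p += pm_force(pos) * 0.5 * da / (a1 * E(a1))     # half kick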

  15. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    PubMed

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges to comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well-performing optimization approach exists that could be applied to many models and would equate both run time and fit aspects. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole-brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with a different acquisition protocol. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade that initializes or fixes parameter values in a later optimization step using estimates from simpler models fitted in an earlier step further improved run time, fit, accuracy and precision compared to a single-step fit. This establishes and makes available standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and in combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
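
    The cascaded, gradient-free fitting strategy can be sketched with SciPy's Powell optimizer: a simple model is fitted first and its estimate seeds the more complex model. The toy one- and two-compartment exponential decays below are illustrative stand-ins, not NODDI or CHARMED.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)
        b = np.linspace(0, 3000, 30)               # assumed b-values (s/mm^2)
        signal = 0.7 * np.exp(-b * 1e-3) + 0.3 * np.exp(-b * 3e-4)
        signal += rng.normal(0, 0.01, b.size)      # synthetic noisy data

        def sse1(p):                               # simple 1-compartment model
            return ((signal - np.exp(-b * p[0])) ** 2).sum()

        def sse2(p):                               # richer 2-compartment model
            f, d1, d2 = p
            model = f * np.exp(-b * d1) + (1 - f) * np.exp(-b * d2)
            return ((signal - model) ** 2).sum()

        step1 = minimize(sse1, x0=[1e-3], method="Powell")
        d0 = step1.x[0]                            # seed from the simple fit
        step2 = minimize(sse2, x0=[0.5, 2 * d0, 0.5 * d0], method="Powell")
        print(step2.x)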

  16. Exshall: A Turkel-Zwas explicit large time-step FORTRAN program for solving the shallow-water equations in spherical coordinates

    NASA Astrophysics Data System (ADS)

    Navon, I. M.; Yu, Jian

    A FORTRAN computer program is presented and documented that applies the Turkel-Zwas explicit large time-step scheme to a hemispheric barotropic model with constraint restoration of integral invariants of the shallow-water equations. We then detail the algorithms embodied in the EXSHALL code, particularly algorithms related to the efficiency and stability of the T-Z scheme and the quadratic constraint restoration method, which is based on a variational approach. In particular we provide details about the high-latitude filtering, Shapiro filtering, and Robert filtering algorithms used in the code. We explain in detail the various subroutines in the EXSHALL code with emphasis on the algorithms implemented in the code, and present flowcharts of some major subroutines. Finally, we provide a visual example illustrating a 4-day run using real initial data, along with a sample printout and graphic isoline contours of the height field and velocity fields.

  17. Compressible, multiphase semi-implicit method with moment of fluid interface representation

    DOE PAGES

    Jemison, Matthew; Sussman, Mark; Arienti, Marco

    2014-09-16

    A unified method for simulating multiphase flows using an exactly mass, momentum, and energy conserving Cell-Integrated Semi-Lagrangian advection algorithm is presented. The deforming material boundaries are represented using the moment-of-fluid method. Our new algorithm uses a semi-implicit pressure update scheme that asymptotically preserves the standard incompressible pressure projection method in the limit of infinite sound speed. The asymptotically preserving attribute makes the new method applicable to compressible and incompressible flows including stiff materials; enabling large time steps characteristic of incompressible flow algorithms rather than the small time steps required by explicit methods. Moreover, shocks are captured and material discontinuities are tracked, without the aid of any approximate or exact Riemann solvers. As a result, simulations of underwater explosions and fluid jetting in one, two, and three dimensions are presented which illustrate the effectiveness of the new algorithm at efficiently computing multiphase flows containing shock waves and material discontinuities with large “impedance mismatch.”

  18. Cost Comparison of B-1B Non-Mission-Capable Drivers Using Finite Source Queueing with Spares

    DTIC Science & Technology

    2012-09-06

    Graduate research paper presented to the faculty. ...step into the lineup making large-number approximations unusable. Instead, a finite source queueing model including spares is incorporated. ...were reported as flying time accrued since last occurrence. Service time was given in both start-stop format and MX man-hours utilized.
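
    The finite-source-with-spares model named in the title is the classic machine repairman queue: M operating positions, S spares, and R repair channels, whose steady state follows from birth-death balance. The sketch below uses illustrative rates, not B-1B data.

        import numpy as np

        M, S, R = 10, 3, 2            # positions, spares, repair channels (assumed)
        lam, mu = 0.1, 0.5            # per-unit failure and repair rates (assumed)
        N = M + S                     # total units; state n = units in repair

        def birth(n):                 # failure rate: only operating units fail
            return lam * (M if n <= S else N - n)

        def death(n):                 # repair rate: limited by repair channels
            return mu * min(n, R)

        # Birth-death balance: p[n+1] = p[n] * birth(n) / death(n+1).
        p = np.ones(N + 1)
        for n in range(N):
            p[n + 1] = p[n] * birth(n) / death(n + 1)
        p /= p.sum()

        availability = p[: S + 1].sum()   # all M positions filled iff n <= S
        print(f"P(all positions filled) = {availability:.3f}")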

  19. A spectral radius scaling semi-implicit iterative time stepping method for reactive flow simulations with detailed chemistry

    NASA Astrophysics Data System (ADS)

    Xie, Qing; Xiao, Zhixiang; Ren, Zhuyin

    2018-09-01

    A spectral radius scaling semi-implicit time stepping scheme has been developed for simulating unsteady compressible reactive flows with detailed chemistry, in which the spectral radius in the LUSGS scheme is augmented to account for viscous/diffusive and reactive terms and a scalar matrix is proposed to approximate the chemical Jacobian using the minimum species destruction timescale. The performance of the semi-implicit scheme, together with a third-order explicit Runge-Kutta scheme and a Strang splitting scheme, has been investigated in auto-ignition and laminar premixed and nonpremixed flames of three representative fuels, namely hydrogen, methane, and n-heptane. Results show that the minimum species destruction time scale can well represent the smallest chemical time scale in reactive flows and the proposed scheme can significantly increase the allowable time steps in simulations. The scheme is stable when the time step is as large as 10 μs, which is about three to five orders of magnitude larger than the smallest time scales in the various tests considered. For the test flames considered, the semi-implicit scheme achieves second-order accuracy in time. Moreover, the errors in quantities of interest are smaller than those from the Strang splitting scheme, indicating the accuracy gained when the reaction and transport terms are solved in a coupled manner. Results also show that the relative efficiency of different schemes depends on the fuel mechanism and test flame. When the minimum time scale in reactive flows is governed by transport processes instead of chemical reactions, the proposed semi-implicit scheme is more efficient than the splitting scheme. Otherwise, the relative efficiency depends on the cost of sub-iterations for convergence within each time step and of the integration for the chemistry substep. The capability of the compressible reacting flow solver and the proposed semi-implicit scheme is then demonstrated by capturing hydrogen detonation waves. Finally, the performance of the proposed method is demonstrated in a two-dimensional hydrogen/air diffusion flame.

  20. Hardware design and implementation of fast DOA estimation method based on multicore DSP

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Zhao, Yingxiao; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-10-01

    In this paper, we present a high-speed real-time signal processing hardware platform based on a multicore digital signal processor (DSP). The real-time signal processing platform shows several excellent characteristics including high performance computing, low power consumption, large-capacity data storage and high speed data transmission, which make it able to meet the constraints of real-time direction of arrival (DOA) estimation. To reduce the high computational complexity of the DOA estimation algorithm, a novel real-valued MUSIC estimator is used. The algorithm is decomposed into several independent steps and the time consumption of each step is counted. Based on the statistics of the time consumption, we present a new parallel processing strategy to distribute the task of DOA estimation to different cores of the real-time signal processing hardware platform. Experimental results demonstrate that the high processing capability of the signal processing platform meets the constraint of real-time DOA estimation.
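
    For orientation, the baseline that the real-valued variant accelerates is the classic complex-valued MUSIC pseudospectrum for a uniform linear array, sketched below. Array geometry, SNR and source angles are illustrative assumptions.

        import numpy as np
        from scipy.signal import find_peaks

        rng = np.random.default_rng(5)
        m, d, wl, snaps = 8, 0.5, 1.0, 200       # sensors, spacing/wl, snapshots
        angles_true = np.deg2rad([-20.0, 35.0])  # two assumed sources

        def steer(theta):                        # ULA steering vectors
            return np.exp(-2j * np.pi * d / wl
                          * np.arange(m)[:, None] * np.sin(theta))

        A = steer(np.array(angles_true))
        s = rng.normal(size=(2, snaps)) + 1j * rng.normal(size=(2, snaps))
        noise = rng.normal(size=(m, snaps)) + 1j * rng.normal(size=(m, snaps))
        x = A @ s + 0.1 * noise

        Rxx = x @ x.conj().T / snaps             # sample covariance
        w, V = np.linalg.eigh(Rxx)               # ascending eigenvalues
        En = V[:, : m - 2]                       # noise subspace (2 sources)

        scan = np.deg2rad(np.linspace(-90, 90, 721))
        a = steer(scan)
        P = 1.0 / np.einsum("ik,ij,jk->k",
                            a.conj(), En @ En.conj().T, a).real
        peaks, _ = find_peaks(P)
        best = peaks[np.argsort(P[peaks])[-2:]]
        print(np.sort(np.rad2deg(scan[best])))   # ~[-20, 35]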

  1. Real-Time Visualization of an HPF-based CFD Simulation

    NASA Technical Reports Server (NTRS)

    Kremenetsky, Mark; Vaziri, Arsi; Haimes, Robert; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Current time-dependent CFD simulations produce very large multi-dimensional data sets at each time step. The visual analysis of computational results is traditionally performed by post-processing the static data on graphics workstations. We present results from an alternate approach in which we analyze the simulation data in situ on each processing node at the time of simulation. The locally analyzed results, usually more economical and in a reduced form, are then combined and sent back for visualization on a graphics workstation.

  2. Online Community Detection for Large Complex Networks

    PubMed Central

    Pan, Gang; Zhang, Wangsheng; Wu, Zhaohui; Li, Shijian

    2014-01-01

    Complex networks describe a wide range of systems in nature and society. To understand complex networks, it is crucial to investigate their community structure. In this paper, we develop an online community detection algorithm with linear time complexity for large complex networks. Our algorithm processes a network edge by edge in the order that the network is fed to the algorithm. If a new edge is added, it just updates the existing community structure in constant time, and does not need to re-compute the whole network. Therefore, it can efficiently process large networks in real time. Our algorithm optimizes expected modularity instead of modularity at each step to avoid poor performance. The experiments are carried out using 11 public data sets, and are measured by two criteria, modularity and NMI (Normalized Mutual Information). The results show that our algorithm's running time is less than that of the commonly used Louvain algorithm, while it gives competitive performance. PMID:25061683

  3. [The stamp technique for direct composite restoration].

    PubMed

    Perrin, Philippe; Zimmerli, Brigitte; Jacky, Daniel; Lussi, Adrian; Helbling, Christoph; Ramseyer, Simon

    2013-01-01

    The indications for direct resin composite restorations have nowadays been extended due to the development of modern resin materials with improved material properties. However, there are still some difficulties regarding the handling of resin composite material, especially in large restorations. The reconstruction of a functional and individual occlusion is difficult to achieve with direct application techniques. The aim of the present publication is to introduce a new "stamp" technique for placing large composite restorations. The procedure of this "stamp" technique is presented for three typical indications: large single-tooth restoration, occlusal rehabilitation of a compromised occlusal surface due to erosions, and direct fiber-reinforced fixed partial denture. A step-by-step description of the technique and clinical figures illustrate the method. Large single-tooth restorations can be built up with individual, two-piece silicone stamps. Large occlusal abrasive and/or erosive defects can be restored by copying the wax-up from the dental technician using the "stamp" technique. Even fiber-reinforced resin-bonded fixed partial dentures can be formed with this intraoral technique with more precision and within a shorter treatment time. The presented "stamp" technique facilitates the placement of large restorations with composite and can be recommended for clinical use.

  4. Closed-Loop Control of Complex Networks: A Trade-Off between Time and Energy

    NASA Astrophysics Data System (ADS)

    Sun, Yong-Zheng; Leng, Si-Yang; Lai, Ying-Cheng; Grebogi, Celso; Lin, Wei

    2017-11-01

    Controlling complex nonlinear networks is largely an unsolved problem at present. Existing works focus either on open-loop control strategies and their energy consumptions or on closed-loop control schemes with an infinite-time duration. We articulate a finite-time, closed-loop controller with an eye toward the physical and mathematical underpinnings of the trade-off between the control time and energy as well as their dependence on the network parameters and structure. The closed-loop controller is tested on a large number of real systems including stem cell differentiation, food webs, random ecosystems, and spiking neuronal networks. Our results represent a step forward in developing a rigorous and general framework to control nonlinear dynamical networks with a complex topology.

  5. Quantum transport with long-range steps on Watts-Strogatz networks

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Xu, Xin-Jian

    2016-07-01

    We study transport dynamics of quantum systems with long-range steps on the Watts-Strogatz network (WSN), which is generated by rewiring links of the regular ring. First, we probe physical systems modeled by the discrete nonlinear Schrödinger (DNLS) equation. Using the localized initial condition, we compute the time-averaged occupation probability of the initial site, which is related to the nonlinearity, the long-range steps and rewiring links. Self-trapping transitions occur at large (small) nonlinear parameters for coupling ε=-1 (1), as long-range interactions are intensified. The structure disorder induced by random rewiring, however, has dual effects for ε=-1 and inhibits the self-trapping behavior for ε=1. Second, we investigate continuous-time quantum walks (CTQW) on the regular ring ruled by the discrete linear Schrödinger (DLS) equation. It is found that only the presence of the long-range steps does not affect the efficiency of the coherent exciton transport, while only the allowance of random rewiring enhances the partial localization. If both factors are considered simultaneously, localization is greatly strengthened, and the transport becomes worse.

  6. Changes in the dielectric properties of a plant stem produced by the application of voltage steps

    NASA Astrophysics Data System (ADS)

    Hart, F. X.

    1983-03-01

    Time Domain Dielectric Spectroscopy (TDDS) provides a useful method for monitoring the physiological state of a biological system which may be changing with time. A voltage step is applied to a sample and the Fourier transform of the resulting current yields the variations of the conductance, capacitance and dielectric loss of the sample with frequency (the dielectric spectrum). An important question is whether the application of the voltage step itself can produce changes which obscure those of interest. Long-term monitoring of the dielectric properties of plant stems requires the use of needle electrodes with relatively large current densities and field strengths at the electrode-stem interface. Steady currents on the order of those used in TDDS have been observed to modify the distribution of plant growth hormones, to produce wounding at electrode sites, and to cause stem collapse. This paper presents the preliminary results of an investigation into the effects of the application of voltage steps on the observed dielectric spectrum of the stem of the plant Coleus.
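
    The TDDS measurement chain can be sketched numerically: apply a voltage step, record the transient current, and transform it to the complex admittance, whose real and imaginary parts give the conductance and capacitance spectra. The single-relaxation (Debye-like) test current and all parameter values below are illustrative assumptions.

        import numpy as np

        fs, T = 1e5, 1.0                        # sampling rate (Hz), record (s)
        t = np.arange(0, T, 1 / fs)
        V0, G0, C_inf, dC, tau = 1.0, 1e-6, 1e-9, 5e-10, 1e-3
        i_t = V0 * (G0 + (dC / tau) * np.exp(-t / tau))
        i_t[0] += V0 * C_inf * fs               # delta-like charging of C_inf

        i_ss = i_t[-1]                          # steady conduction current
        I = np.fft.rfft(i_t - i_ss) / fs        # transform the decaying part
        f = np.fft.rfftfreq(len(t), 1 / fs)
        w = 2 * np.pi * f[1:]
        Y = i_ss / V0 + 1j * w * I[1:] / V0     # step input: V(w) = V0/(jw)
        G, C = Y.real, Y.imag / w               # conductance, capacitance
        print(G[0], C[0])                       # ~G0 and ~C_inf + dC at low f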

  7. What Happens to Donated Blood?

    MedlinePlus

    ... freezers for up to one year. Step Five Distribution Blood is available to be shipped to hospitals 24 hours a day, 7 days a week. Hospitals typically keep some blood units on their shelves, but may call for more at any time, such as in case of large scale emergencies. ...

  8. Galerkin v. discrete-optimal projection in nonlinear model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir

    Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.
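
    For readers unfamiliar with the two projection viewpoints, the sketch below contrasts them on a small linear ODE with backward Euler time stepping: Galerkin projects the governing equations onto a basis V and then discretizes in time, while the discrete-optimal (LSPG-style) approach minimizes the time-discrete residual in a least-squares sense at each step. The test matrix and Krylov stand-in basis are random assumptions, not the GNAT setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, dt, steps = 200, 20, 1e-2, 100

A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))  # stable test matrix
x0 = rng.standard_normal(n)                          # initial condition

# Normalized Krylov vectors as a stand-in reduced basis (orthonormalized).
cols = [x0 / np.linalg.norm(x0)]
for _ in range(r - 1):
    v = A @ cols[-1]
    cols.append(v / np.linalg.norm(v))
V, _ = np.linalg.qr(np.column_stack(cols))

I_n, I_r = np.eye(n), np.eye(r)
Ar = V.T @ A @ V                                     # Galerkin reduced operator
W = (I_n - dt * A) @ V                               # time-discrete operator

x_full = x0.copy()
a_g = V.T @ x0                                       # Galerkin coordinates
a_do = V.T @ x0                                      # discrete-optimal coords

for _ in range(steps):
    x_full = np.linalg.solve(I_n - dt * A, x_full)   # full backward Euler
    a_g = np.linalg.solve(I_r - dt * Ar, a_g)        # project, then discretize
    a_do = np.linalg.lstsq(W, V @ a_do, rcond=None)[0]  # discretize, then minimize

print("Galerkin ROM error:        ", np.linalg.norm(V @ a_g - x_full))
print("discrete-optimal ROM error:", np.linalg.norm(V @ a_do - x_full))
```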

  9. Introduction to multifractal detrended fluctuation analysis in matlab.

    PubMed

    Ihlen, Espen A F

    2012-01-01

    Fractal structures are found in biomedical time series from a wide range of physiological phenomena. The multifractal spectrum identifies the deviations in fractal structure within time periods with large and small fluctuations. The present tutorial is an introduction to multifractal detrended fluctuation analysis (MFDFA) that estimates the multifractal spectrum of biomedical time series. The tutorial presents MFDFA step-by-step in an interactive Matlab session. All Matlab tools needed are available in the Introduction to MFDFA folder at the website www.ntnu.edu/inm/geri/software. MFDFA is introduced in Matlab code boxes where the reader can apply pieces of, or the entire, MFDFA to example time series. After introducing MFDFA, the tutorial discusses the best practice of MFDFA in biomedical signal processing. The main aim of the tutorial is to give the reader a simple self-contained guide to the implementation of MFDFA and interpretation of the resulting multifractal spectra.

  10. Introduction to Multifractal Detrended Fluctuation Analysis in Matlab

    PubMed Central

    Ihlen, Espen A. F.

    2012-01-01

    Fractal structures are found in biomedical time series from a wide range of physiological phenomena. The multifractal spectrum identifies the deviations in fractal structure within time periods with large and small fluctuations. The present tutorial is an introduction to multifractal detrended fluctuation analysis (MFDFA) that estimates the multifractal spectrum of biomedical time series. The tutorial presents MFDFA step-by-step in an interactive Matlab session. All Matlab tools needed are available in the Introduction to MFDFA folder at the website www.ntnu.edu/inm/geri/software. MFDFA is introduced in Matlab code boxes where the reader can apply pieces of, or the entire, MFDFA to example time series. After introducing MFDFA, the tutorial discusses the best practice of MFDFA in biomedical signal processing. The main aim of the tutorial is to give the reader a simple self-contained guide to the implementation of MFDFA and interpretation of the resulting multifractal spectra. PMID:22675302
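
    As a language-agnostic complement to the Matlab tutorial above, here is a compact sketch of the core MFDFA recipe (integrated profile, windowed detrending, q-order fluctuation functions, scaling exponents). The signal, scales, and q values are illustrative assumptions; Ihlen's Matlab code remains the reference implementation.

```python
import numpy as np

def mfdfa(x, scales, qs, order=1):
    """q-order fluctuation functions F_q(s) of time series x (basic MFDFA)."""
    profile = np.cumsum(x - np.mean(x))            # step 1: integrated profile
    Fq = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        rms = np.empty(n_seg)
        for v in range(n_seg):                     # step 2: detrend each window
            seg = profile[v*s:(v+1)*s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            rms[v] = np.sqrt(np.mean((seg - trend)**2))
        for i, q in enumerate(qs):                 # step 3: q-order averaging
            if q == 0:
                Fq[i, j] = np.exp(0.5 * np.mean(np.log(rms**2)))
            else:
                Fq[i, j] = np.mean(rms**q) ** (1.0 / q)
    return Fq

# Generalized Hurst exponents h(q) from the log-log slopes of F_q(s).
rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)                    # white noise: h(q) ~ 0.5
scales = np.array([16, 32, 64, 128, 256, 512])
qs = np.array([-5.0, -2.0, 0.0, 2.0, 5.0])
Fq = mfdfa(x, scales, qs)
for q, row in zip(qs, Fq):
    h = np.polyfit(np.log(scales), np.log(row), 1)[0]
    print(f"q = {q:+.0f}:  h(q) = {h:.2f}")
```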

  11. Re-evaluation of an Optimized Second Order Backward Difference (BDF2OPT) Scheme for Unsteady Flow Applications

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Carpenter, Mark H.; Lockard, David P.

    2009-01-01

    Recent experience in the application of an optimized, second-order, backward-difference (BDF2OPT) temporal scheme is reported. The primary focus of the work is on obtaining accurate solutions of the unsteady Reynolds-averaged Navier-Stokes equations over long periods of time for aerodynamic problems of interest. The baseline flow solver under consideration uses a particular BDF2OPT temporal scheme with a dual-time-stepping algorithm for advancing the flow solutions in time. Numerical difficulties are encountered with this scheme when the flow code is run for a large number of time steps, a behavior not seen with the standard second-order, backward-difference, temporal scheme. Based on a stability analysis, slight modifications to the BDF2OPT scheme are suggested. The performance and accuracy of this modified scheme is assessed by comparing the computational results with other numerical schemes and experimental data.
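
    For context, a standard BDF2 step for u' = f(u, t) solves (3u^{n+1} - 4u^n + u^{n-1})/(2*dt) = f(u^{n+1}, t^{n+1}); BDF2OPT blends in an additional time level with optimized coefficients. The sketch below implements plain BDF2 with a scalar Newton solve on a stiff test equation; it is a generic illustration, not the flow solver's dual-time-stepping implementation.

```python
import numpy as np

lam, dt, T = -50.0, 0.05, 3.0       # stiff rate, sizeable time step, end time

def f(u, t):                         # stiff Prothero-Robinson-type test problem
    return lam * (u - np.cos(t)) - np.sin(t)

def dfdu(u, t):
    return lam

# Exact solution of this test problem is u(t) = cos(t) for u(0) = 1.
steps = int(T / dt)
u_nm1 = 1.0                          # u^{n-1}
u_n = np.cos(dt)                     # bootstrap the second time level exactly
for n in range(1, steps):
    t_np1 = (n + 1) * dt
    u = u_n                          # Newton solve of the implicit BDF2 step:
    for _ in range(20):              # 3u - 4u^n + u^{n-1} - 2*dt*f(u) = 0
        res = 3*u - 4*u_n + u_nm1 - 2*dt*f(u, t_np1)
        du = -res / (3 - 2*dt*dfdu(u, t_np1))
        u += du
        if abs(du) < 1e-12:
            break
    u_nm1, u_n = u_n, u

print(f"BDF2 at t = {T}: {u_n:.6f}   exact: {np.cos(T):.6f}")
```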

  12. Automating the evaluation of flood damages: methodology and potential gains

    NASA Astrophysics Data System (ADS)

    Eleutério, Julian; Martinez, Edgar Daniel

    2010-05-01

    The evaluation of flood damage potential consists of three main steps: assessing and processing data, combining data and calculating potential damages. The first step consists of modelling hazard and assessing vulnerability. In general, this step of the evaluation demands more time and investments than the others. The second step of the evaluation consists of combining spatial data on hazard with spatial data on vulnerability. Geographic Information System (GIS) is a fundamental tool in the realization of this step. GIS software allows the simultaneous analysis of spatial and matrix data. The third step of the evaluation consists of calculating potential damages by means of damage-functions or contingent analysis. All steps demand time and expertise. However, the last two steps must be realized several times when comparing different management scenarios. In addition, uncertainty analysis and sensitivity test are made during the second and third steps of the evaluation. The feasibility of these steps could be relevant in the choice of the extent of the evaluation. Low feasibility could lead to choosing not to evaluate uncertainty or to limit the number of scenario comparisons. Several computer models have been developed over time in order to evaluate the flood risk. GIS software is largely used to realise flood risk analysis. The software is used to combine and process different types of data, and to visualise the risk and the evaluation results. The main advantages of using a GIS in these analyses are: the possibility of "easily" realising the analyses several times, in order to compare different scenarios and study uncertainty; the generation of datasets which could be used any time in future to support territorial decision making; the possibility of adding information over time to update the dataset and make other analyses. However, these analyses require personnel specialisation and time. The use of GIS software to evaluate the flood risk requires personnel with a double professional specialisation. The professional should be proficient in GIS software and in flood damage analysis (which is already a multidisciplinary field). Great effort is necessary in order to correctly evaluate flood damages, and the updating and the improvement of the evaluation over time become a difficult task. The automation of this process should bring great advance in flood management studies over time, especially for public utilities. This study has two specific objectives: (1) show the entire process of automation of the second and third steps of flood damage evaluations; and (2) analyse the induced potential gains in terms of time and expertise needed in the analysis. A programming language is used within GIS software in order to automate hazard and vulnerability data combination and potential damages calculation. We discuss the overall process of flood damage evaluation. The main result of this study is a computational tool which allows significant operational gains on flood loss analyses. We quantify these gains by means of a hypothetical example. The tool significantly reduces the time of analysis and the needs for expertise. An indirect gain is that sensitivity and cost-benefit analyses can be more easily realized.
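
    The second and third evaluation steps lend themselves to scripting. A minimal sketch of the combine-and-calculate logic on raster-like arrays: a hazard grid (water depth) and an exposure grid (asset value) are overlaid, and a depth-damage function converts them to losses per cell. The grids and damage curve are invented for illustration; in the study this is automated within GIS software over real datasets.

```python
import numpy as np

# Assumed example grids: water depth (m) and exposed asset value (EUR) per cell.
depth = np.array([[0.0, 0.2, 1.5],
                  [0.4, 2.5, 0.0],
                  [0.0, 0.8, 3.2]])
value = np.full(depth.shape, 100_000.0)

# Hypothetical depth-damage curve: fraction of value lost vs. water depth.
curve_depth = np.array([0.0, 0.5, 1.0, 2.0, 4.0])       # m
curve_frac  = np.array([0.0, 0.15, 0.35, 0.65, 1.0])    # damage fraction

def damage(depth_grid, value_grid):
    """Combine hazard and exposure grids via the depth-damage function."""
    frac = np.interp(depth_grid, curve_depth, curve_frac)
    return frac * value_grid

loss = damage(depth, value)
print("per-cell losses (EUR):\n", loss.round(0))
print("event total (EUR):", loss.sum().round(0))
# Re-running this for many hazard scenarios is what automation buys: the same
# combine/calculate code serves sensitivity tests and scenario comparisons.
```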

  13. Pin-Hole Free Perovskite Film for Solar Cells Application Prepared by Controlled Two-Step Spin-Coating Method

    NASA Astrophysics Data System (ADS)

    Bahtiar, A.; Rahmanita, S.; Inayatie, Y. D.

    2017-05-01

    The morphology of the perovskite film is a key factor in achieving high-performance perovskite solar cells. Perovskite films are commonly prepared by a two-step spin-coating method. However, pin-holes are frequently formed in perovskite films due to incomplete conversion of lead iodide (PbI2) into perovskite CH3NH3PbI3. Pin-holes in the perovskite film cause large hysteresis in the current-voltage curve of solar cells due to the large series resistance between the perovskite layer and the hole transport material. Moreover, the crystal structure and grain size of the perovskite crystal are other important parameters for achieving high-performance solar cells, and they are significantly affected by the preparation of the perovskite film. We studied the effect of preparing the perovskite film with controlled spin-coating parameters on the crystal structure and morphological properties of the film. We used the two-step spin-coating method with varied spinning speed, spinning time, and temperature of the spin-coating process to control the growth of the perovskite crystal, aiming to produce high-quality perovskite crystals that are pin-hole free and have large grain size. All experiments were performed in air at high humidity (above 80%). The best crystal structure, pin-hole free and with large crystal grain size, was obtained for the film prepared at room temperature with a spinning speed of 1000 rpm for 20 seconds and annealed at 100°C for 300 seconds.

  14. Step Right Up and Try Boo Goo

    ERIC Educational Resources Information Center

    Lowenstein, Arlene

    1972-01-01

    The author's class of largely non-college-bound students was given a practical lesson in the powers of persuasion by setting up a "Sell Bloo Goo" campaign in their school, the "bloo goo" being a harmless colored jelly which their schoolmates were eager to buy by the time it appeared on the market. (PD)

  15. Exclusively Visual Analysis of Classroom Group Interactions

    ERIC Educational Resources Information Center

    Tucker, Laura; Scherr, Rachel E.; Zickler, Todd; Mazur, Eric

    2016-01-01

    Large-scale audiovisual data that measure group learning are time consuming to collect and analyze. As an initial step towards scaling qualitative classroom observation, we qualitatively coded classroom video using an established coding scheme with and without its audio cues. We find that interrater reliability is as high when using visual data…

  16. Evaluating simulations of daily discharge from large watersheds using autoregression and an index of flashiness

    USDA-ARS?s Scientific Manuscript database

    Watershed models are calibrated to simulate stream discharge as accurately as possible. Modelers will often calculate model validation statistics on aggregate (often monthly) time periods, rather than the daily step at which models typically operate. This is because daily hydrologic data exhibit lar...

  17. Step-off, vertical electromagnetic responses of a deep resistivity layer buried in marine sediments

    NASA Astrophysics Data System (ADS)

    Jang, Hangilro; Jang, Hannuree; Lee, Ki Ha; Kim, Hee Joon

    2013-04-01

    A frequency-domain, marine controlled-source electromagnetic (CSEM) method has been applied successfully in deep water areas for detecting hydrocarbon (HC) reservoirs. However, a typical technique with horizontal transmitters and receivers requires large source-receiver separations with respect to the target depth. A time-domain EM system with vertical transmitters and receivers can be an alternative because vertical electric fields are sensitive to deep resistive layers. In this paper, a time-domain modelling code, with multiple source and receiver dipoles that are finite in length, has been written to investigate transient EM problems. With the use of this code, we calculate step-off responses for one-dimensional HC reservoir models. Although the vertical electric field has much smaller amplitude of signal than the horizontal field, vertical currents resulting from a vertical transmitter are sensitive to resistive layers. The modelling shows a significant difference between step-off responses of HC- and water-filled reservoirs, and the contrast can be recognized at late times at relatively short offsets. A maximum contrast occurs at more than 4 s, being delayed with the depth of the HC layer.

  18. Dokan Hydropower Reservoir Operation under Stochastic Conditions as Regards the Inflows and the Energy Demands

    NASA Astrophysics Data System (ADS)

    Izat Rashed, Ghamgeen

    2018-03-01

    This paper presented a way of obtaining operating rules on time steps for the management of a large reservoir with a peak hydropower plant associated with it. The rules take the form of non-linear regression equations which link a decision variable (here the water volume in the reservoir at the end of the time step) to several parameters influencing it. The paper considered the Dokan hydroelectric development, KR-Iraq, for which operation data are available. It was shown that both the monthly average inflows and the monthly power demands are random variables. A deterministic dynamic programming model minimizing the total sum of squared differences between the demanded energy and the generated energy is run with a multitude of annual scenarios of inflows and monthly required energies. The operating rules obtained allow efficient and safe management of the operation, provided the inflow and the energy demand for the next time step are forecast with reasonable accuracy.

  19. Operation of Dokan Reservoir under Stochastic Conditions as Regards the Inflows and the Energy Demands

    NASA Astrophysics Data System (ADS)

    Rashed, G. I.

    2018-02-01

    This paper presented a way of obtaining operating rules on time steps for the management of a large reservoir with a peak hydropower plant associated with it. The rules take the form of non-linear regression equations which link a decision variable (here the water volume in the reservoir at the end of the time step) to several parameters influencing it. The paper considered the Dokan hydroelectric development, KR-Iraq, for which operation data are available. It was shown that both the monthly average inflows and the monthly power demands are random variables. A deterministic dynamic programming model minimizing the total sum of squared differences between the demanded energy and the generated energy is run with a multitude of annual scenarios of inflows and monthly required energies. The operating rules obtained allow efficient and safe management of the operation, provided the inflow and the energy demand for the next time step are forecast with reasonable accuracy.
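
    A minimal sketch of the rule-fitting idea described in the two records above: given a record of optimal operation (here synthetic stand-in data), regress the end-of-step storage on the quantities known at the start of the time step. The data, regressors, and polynomial form are assumptions; the papers fit non-linear regression equations to dynamic programming output for Dokan.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 240                                   # 20 years of monthly time steps

# Stand-in "optimal operation" data (in a real study: the DP model output).
inflow = rng.gamma(3.0, 50.0, n)          # monthly inflow volume
demand = 80.0 + 30.0 * rng.random(n)      # monthly energy demand
v_start = 500.0 + 100.0 * rng.random(n)   # storage at start of the step
v_end = (0.9*v_start + 0.8*inflow - 1.5*demand
         + 5.0*rng.standard_normal(n))    # decision variable (synthetic)

# Operating rule as regression: v_end ~ [1, v_start, inflow, demand, inflow^2]
X = np.column_stack([np.ones(n), v_start, inflow, demand, inflow**2])
coef, *_ = np.linalg.lstsq(X, v_end, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((v_end - pred)**2) / np.sum((v_end - v_end.mean())**2)
print("rule coefficients:", coef.round(3))
print("R^2 of the operating rule:", round(r2, 3))
```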

  20. A point implicit time integration technique for slow transient flow problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.

    2015-05-01

    We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly, and the rest of the information related to the same or other variables is handled explicitly. The method does not require implicit iteration; instead it advances the solution in time in a similar spirit to explicit methods, except that it involves a few additional function evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
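
    The core point-implicit idea can be shown on a scalar equation: treat the stiff local term in the variable being advanced implicitly and everything else explicitly, so each update is a single algebraic step with no global iteration. A hypothetical sketch for u' = -k(u - g(t)) with a slow forcing g:

```python
import numpy as np

k, dt, T = 1000.0, 0.1, 2.0        # very stiff relaxation, large time step
g = lambda t: np.sin(t)            # slow forcing the solution relaxes toward

def point_implicit(u0):
    """u' = -k*(u - g(t)); the -k*u term is implicit, the forcing explicit."""
    u, t = u0, 0.0
    while t < T:
        t += dt
        u = (u + dt * k * g(t)) / (1.0 + dt * k)   # one algebraic update
    return u

def explicit_euler(u0):
    u, t = u0, 0.0
    while t < T:
        u += dt * (-k * (u - g(t)))                # unstable: |1 - k*dt| >> 1
        t += dt
    return u

print("point-implicit:", point_implicit(0.0))      # tracks g(T) ~ sin(2)
print("explicit Euler:", explicit_euler(0.0))      # blows up at this dt
```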

  1. On the performance of exponential integrators for problems in magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Einkemmer, Lukas; Tokman, Mayya; Loffeld, John

    2017-02-01

    Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying large scale behavior of laboratory and astrophysical plasmas. In many problems numerical solution of MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK-type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step, variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.
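
    A minimal sketch of the simplest exponential integrator (exponential Euler, rather than the EPIRK schemes used in the paper) on a stiff semilinear system u' = Au + N(u): u^{n+1} = exp(A*dt) u^n + dt*phi1(A*dt) N(u^n), with phi1(z) = (exp(z) - 1)/z. The test system, a 1-D heat equation with a weak reaction term, is an assumption for illustration; SciPy supplies the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm, solve

n, dt, steps = 50, 0.1, 50
h = 1.0 / (n + 1)

# Stiff linear part: 1-D Laplacian; mild nonlinearity N(u) = u - u**3.
A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
N = lambda u: u - u**3

E = expm(dt * A)                                   # exp(A dt)
phi1 = solve(dt * A, E - np.eye(n))                # phi1(A dt) = (e^z - 1)/z

x = np.linspace(h, 1 - h, n)
u = np.sin(np.pi * x)                              # initial condition

for _ in range(steps):                             # exponential Euler steps:
    u = E @ u + dt * (phi1 @ N(u))                 # stiff part treated exactly

print("max|u| after integration:", np.abs(u).max())
```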

  2. Implicit method for the computation of unsteady flows on unstructured grids

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, V.; Mavriplis, D. J.

    1995-01-01

    An implicit method for the computation of unsteady flows on unstructured grids is presented. Following a finite difference approximation for the time derivative, the resulting nonlinear system of equations is solved at each time step by using an agglomeration multigrid procedure. The method allows for arbitrarily large time steps and is efficient in terms of computational effort and storage. Inviscid and viscous unsteady flows are computed to validate the procedure. The issue of the mass matrix which arises with vertex-centered finite volume schemes is addressed. The present formulation allows the mass matrix to be inverted indirectly. A mesh point movement and reconnection procedure is described that allows the grids to evolve with the motion of bodies. As an example of flow over bodies in relative motion, flow over a multi-element airfoil system undergoing deployment is computed.

  3. Implicit unified gas-kinetic scheme for steady state solutions in all flow regimes

    NASA Astrophysics Data System (ADS)

    Zhu, Yajun; Zhong, Chengwen; Xu, Kun

    2016-06-01

    This paper presents an implicit unified gas-kinetic scheme (UGKS) for non-equilibrium steady state flow computation. The UGKS is a direct modeling method for flow simulation in all regimes with the updates of both macroscopic flow variables and microscopic gas distribution function. By solving the macroscopic equations implicitly, a predicted equilibrium state can be obtained first through iterations. With the newly predicted equilibrium state, the evolution equation of the gas distribution function and the corresponding collision term can be discretized in a fully implicit way for fast convergence through iterations as well. The lower-upper symmetric Gauss-Seidel (LU-SGS) factorization method is implemented to solve both macroscopic and microscopic equations, which improves the efficiency of the scheme. Since the UGKS is a direct modeling method and its physical solution depends on the mesh resolution and the local time step, a physical time step needs to be fixed before using an implicit iterative technique with a pseudo-time marching step. Therefore, the physical time step in the current implicit scheme is determined by the same way as that in the explicit UGKS for capturing the physical solution in all flow regimes, but the convergence to a steady state speeds up through the adoption of a numerical time step with large CFL number. Many numerical test cases in different flow regimes from low speed to hypersonic ones, such as the Couette flow, cavity flow, and the flow passing over a cylinder, are computed to validate the current implicit method. The overall efficiency of the implicit UGKS can be improved by one or two orders of magnitude in comparison with the explicit one.

  4. Contact-aware simulations of particulate Stokesian suspensions

    NASA Astrophysics Data System (ADS)

    Lu, Libin; Rahimian, Abtin; Zorin, Denis

    2017-10-01

    We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to be able to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves as well as a large number of discretization points are required to avoid non-physical contact and intersections between particles, which lead to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the continuous formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.

  5. The effect of cane use on the compensatory step following posterior perturbations.

    PubMed

    Hall, Courtney D; Jensen, Jody L

    2004-08-01

    The compensatory step is a critical component of the balance response and is impaired in older fallers. The purpose of this research was to examine whether utilization of a cane modified the compensatory step response following external posterior perturbations. A single-subject withdrawal design was employed. Single-subject statistical analysis--the standard deviation band-width method--supplemented visual analysis of the data. Four older adults (range: 73-83 years) with balance impairment who habitually use a cane completed this study. Subjects received a series of sudden backward pulls that were large enough to elicit compensatory stepping. We examined the following variables both with and without cane use: timing of cane loading relative to step initiation and center of mass acceleration, stability margin, center of mass excursion and velocity, and step length and width. No participant loaded the cane prior to initiation of the first compensatory step. There was no effect of cane use on the stability margin, nor was there an effect of cane use on center of mass excursion or velocity, or step length or width. These data suggest that cane use does not necessarily improve balance recovery following an external posterior perturbation when the individual is forced to rely on compensatory stepping. Instead, these data suggest that the strongest factor in modifying step characteristics is experience with the perturbation.

  6. Scattering Matrix Elements for the Nonadiabatic Collision

    DTIC Science & Technology

    2010-12-01

    ... orthogonality relationship expressed in (77). This technique, known as the Channel Packet Method (CPM), is laid out by Weeks and Tannor [2... time and energy are Fourier transform pairs, and share the same relationship as the coordinate/momentum pairs: ΔE = 2π/(t_max - t_min) (99). As... elements, will exhibit ringing. Selection of an inappropriately large time step introduces an erroneous phase shift in the correlation function. This

  7. Establishment of a low recycling state with full density control by active pumping of the closed helical divertor at LHD

    NASA Astrophysics Data System (ADS)

    Motojima, G.; Masuzaki, S.; Tanaka, H.; Morisaki, T.; Sakamoto, R.; Murase, T.; Tsuchibushi, Y.; Kobayashi, M.; Schmitz, O.; Shoji, M.; Tokitani, M.; Yamada, H.; Takeiri, Y.; The LHD Experiment Group

    2018-01-01

    Superior control of particle recycling, and hence full governance of plasma density, has been established in the Large Helical Device (LHD) using greatly enhanced active pumping of the closed helical divertor (CHD). In-vessel cryo-sorption pumping systems inside the CHD in five out of ten inner toroidal divertor sections have been developed and installed step by step in the LHD. The total effective pumping speed obtained was 67 ± 5 m3 s-1 in hydrogen, which is approximately seven times larger than previously obtained. As a result, a low recycling state was observed with CHD pumping for the first time in LHD, featuring excellent density control even under intense pellet fueling conditions. A global particle confinement time (τp*) is used for comparison of operation with and without the CHD pumping. The τp* was evaluated from the density decay after the fueling of hydrogen pellet injection or gas puffing in NBI plasmas. A reliably low base density before the fueling and a short τp* after the fueling were obtained during the CHD pumping, demonstrating for the first time full control of the particle balance with active pumping in the CHD.

  8. A Three-Pool Model Dissecting Readily Releasable Pool Replenishment at the Calyx of Held

    PubMed Central

    Guo, Jun; Ge, Jian-long; Hao, Mei; Sun, Zhi-cheng; Wu, Xin-sheng; Zhu, Jian-bing; Wang, Wei; Yao, Pan-tong; Lin, Wei; Xue, Lei

    2015-01-01

    Although vesicle replenishment is critical in maintaining exo-endocytosis recycling, the underlying mechanisms are not well understood. Previous studies have shown that both rapid and slow endocytosis recycle into a very large recycling pool instead of within the readily releasable pool (RRP), and the time course of RRP replenishment is slowed down by more intense stimulation. This finding contradicts the calcium/calmodulin-dependence of RRP replenishment. Here we address this issue and report a three-pool model for RRP replenishment at a central synapse. Both rapid and slow endocytosis provide vesicles to a large reserve pool (RP) ~42.3 times the RRP size. When moving from the RP to the RRP, vesicles entered an intermediate pool (IP) ~2.7 times the RRP size with slow RP-IP kinetics and fast IP-RRP kinetics, which was responsible for the well-established slow and rapid components of RRP replenishment. Depletion of the IP caused the slower RRP replenishment observed after intense stimulation. These results establish, for the first time, a realistic cycling model with all parameters measured, revealing the contribution of each cycling step in synaptic transmission. The results call for modification of the current view of the vesicle recycling steps and their roles. PMID:25825223
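
    The three-pool scheme (RP -> IP -> RRP) can be written as mass-action kinetics and integrated directly. The sketch below uses the relative pool sizes reported above (RP ~ 42.3x and IP ~ 2.7x the RRP) but invented rate constants and an invented stimulation-driven release term, so it illustrates the structure of the model rather than the fitted parameters.

```python
import numpy as np

# Pool sizes relative to RRP = 1 (ratios from the abstract); the rate
# constants and the release term are assumptions for illustration.
RP0, IP0, RRP0 = 42.3, 2.7, 1.0
k_rp_ip, k_ip_rrp = 0.05, 1.0       # slow RP->IP, fast IP->RRP (per second)
release_rate = 2.0                  # stimulation-driven release from the RRP

def deriv(y, releasing):
    rp, ip, rrp = y
    j1 = k_rp_ip * rp * max(0.0, 1 - ip / IP0)       # RP -> IP (saturable)
    j2 = k_ip_rrp * ip * max(0.0, 1 - rrp / RRP0)    # IP -> RRP (saturable)
    out = release_rate * rrp if releasing else 0.0   # exocytosis during stim
    return np.array([-j1, j1 - j2, j2 - out])

dt, t_stim, t_end = 1e-3, 2.0, 12.0
y = np.array([RP0, IP0, RRP0])
rrp_trace = []
for i in range(int(t_end / dt)):                     # forward Euler suffices
    t = i * dt
    y = y + dt * deriv(y, releasing=(t < t_stim))
    rrp_trace.append(y[2])

# After stimulation depletes the RRP, refilling shows a fast (IP->RRP) and a
# slow (RP->IP, via the depleted IP) component, echoing the two measured
# components of RRP replenishment.
print(f"RRP at end of stimulation:  {rrp_trace[int(t_stim/dt) - 1]:.2f}")
print(f"RRP 10 s after stimulation: {rrp_trace[-1]:.2f}")
```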

  9. Finite element solution of nonlinear eddy current problems with periodic excitation and its industrial applications

    PubMed Central

    Bíró, Oszkár; Koczka, Gergely; Preis, Kurt

    2014-01-01

    An efficient finite element method to take account of the nonlinearity of the magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge and node based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic and the steady-state periodic solution is of interest only. This is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method and, in the latter one, discrete Fourier transformation will be shown to lead to a discrete harmonic balance method. Due to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear, therefore, a special nonlinear iteration technique, the fixed-point method is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method with the effect of the presence of higher harmonics on the losses investigated. Finally a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer. PMID:24829517

  10. Finite element solution of nonlinear eddy current problems with periodic excitation and its industrial applications.

    PubMed

    Bíró, Oszkár; Koczka, Gergely; Preis, Kurt

    2014-05-01

    An efficient finite element method to take account of the nonlinearity of the magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge and node based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic and the steady-state periodic solution is of interest only. This is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method and, in the latter one, discrete Fourier transformation will be shown to lead to a discrete harmonic balance method. Due to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear, therefore, a special nonlinear iteration technique, the fixed-point method is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method with the effect of the presence of higher harmonics on the losses investigated. Finally a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer.
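
    A minimal sketch of the discrete harmonic balance idea with fixed-point linearization, on a scalar analogue of the eddy-current problem: find the periodic steady state of u' + g(u) = s(t) for a nonlinear "material law" g. Choosing a fixed linear coefficient nu decouples the discrete harmonics within each iteration, just as the fixed-point permeability does in the field problem; g, s, and nu here are invented stand-ins.

```python
import numpy as np

Nt, T = 64, 2 * np.pi                  # samples per period, period length
t = np.arange(Nt) * (T / Nt)
s = np.cos(t)                          # periodic excitation
g = lambda u: u + u**3                 # nonlinear "material" law (assumed)
nu = 4.0                               # fixed-point coefficient, nu >~ max g'

w = 2j * np.pi * np.fft.rfftfreq(Nt, T / Nt)   # i*omega_m for each harmonic

u = np.zeros(Nt)
for it in range(200):                  # fixed-point (harmonic balance) loop
    r = s - g(u) + nu * u              # move the nonlinearity to the right side
    U = np.fft.rfft(r) / (w + nu)      # harmonics decouple: (i w_m + nu) U_m = R_m
    u_new = np.fft.irfft(U, Nt)
    if np.max(np.abs(u_new - u)) < 1e-12:
        break
    u = u_new

res = np.max(np.abs(np.fft.irfft(w * np.fft.rfft(u), Nt) + g(u) - s))
print(f"converged in {it + 1} iterations, residual of u' + g(u) - s: {res:.2e}")
```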

  11. Steps in Solution Growth: Revised Gibbs-Thomson Law, Turbulence and Morphological Stability

    NASA Technical Reports Server (NTRS)

    Chernov, A. A.; Rashkovich, L. N.; Vekilov, P. G.

    2004-01-01

    Two groups of new phenomena revealed by AFM and high-resolution optical interferometry on crystal faces growing from solutions will be discussed. 1. Spacing between strongly polygonized spiral steps with low (less than 10^-2) kink density on lysozyme and K-biphthalate does not follow the Burton-Cabrera-Frank theory. The critical length of the yet immobile first short step segment adjacent to a pinning defect (dislocation, stacking fault) is many times longer than that following from the step free energy. The low-kink-density steps are typical of many growth conditions and materials, including low-temperature gas phase epitaxy and MBE. 2. The step bunching pattern on the approx. 1 cm long {110} KDP face growing from the turbulent solution flow (Re ≡ 10^4, solution flow rate approx. 1 m/s) suggests that the step bunch height does not increase infinitely as the bunch path on the crystal face rises, as is usually observed on large KDP crystals. The mechanism controlling the maximal bunch width and height is based on the drag of the solution depleted by the step bunch down the solution stream. It includes splitting, coagulation and interlacing of bunches.

  12. paraGSEA: a scalable approach for large-scale gene expression profiling

    PubMed Central

    Peng, Shaoliang; Yang, Shunyun

    2017-01-01

    More studies have been conducted using gene expression similarity to identify functional connections among genes, diseases and drugs. Gene Set Enrichment Analysis (GSEA) is a powerful analytical method for interpreting gene expression data. However, due to its enormous computational overhead in the estimation-of-significance-level step and the multiple-hypothesis-testing step, its computational scalability and efficiency are poor on large-scale datasets. We proposed paraGSEA for efficient large-scale transcriptome data analysis. By optimization, the overall time complexity of paraGSEA is reduced from O(mn) to O(m+n), where m is the length of the gene sets and n is the length of the gene expression profiles, which contributes a more than 100-fold increase in performance compared with other popular GSEA implementations such as GSEA-P, SAM-GS and GSEA2. By further parallelization, a near-linear speed-up is gained on both workstations and clusters in an efficient manner with high scalability and performance on large-scale datasets. The analysis time of the whole LINCS phase I dataset (GSE92742) was reduced to nearly half an hour on a 1000-node cluster on Tianhe-2, or within 120 hours on a 96-core workstation. The source code of paraGSEA is licensed under the GPLv3 and available at http://github.com/ysycloud/paraGSEA. PMID:28973463
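
    The kernel that paraGSEA accelerates is the weighted Kolmogorov-Smirnov-style running sum of standard GSEA. A compact sketch of that enrichment-score computation on a pre-ranked profile follows (random demo data; the significance-estimation and multiple-testing steps, where the paper's savings mainly arise, are omitted):

```python
import numpy as np

def enrichment_score(ranked_genes, correlations, gene_set, p=1.0):
    """Weighted KS running-sum enrichment score of standard GSEA."""
    in_set = np.isin(ranked_genes, list(gene_set))
    w = np.abs(correlations) ** p
    hit = np.where(in_set, w, 0.0)
    hit = hit / hit.sum()                       # P_hit increments
    miss = np.where(~in_set, 1.0, 0.0)
    miss = miss / miss.sum()                    # P_miss increments
    running = np.cumsum(hit - miss)
    return running[np.argmax(np.abs(running))]  # signed extreme deviation

rng = np.random.default_rng(3)
n = 1000
genes = np.array([f"g{i}" for i in range(n)])
corr = np.sort(rng.standard_normal(n))[::-1]    # ranking metric, descending
gene_set = {f"g{i}" for i in range(0, 60, 2)}   # set enriched near the top

print("ES =", round(enrichment_score(genes, corr, gene_set), 3))
```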

  13. Control Software for Piezo Stepping Actuators

    NASA Technical Reports Server (NTRS)

    Shields, Joel F.

    2013-01-01

    A control system has been developed for the Space Interferometer Mission (SIM) piezo stepping actuator. Piezo stepping actuators are novel because they offer extreme dynamic range (centimeter stroke with nanometer resolution) with power, thermal, mass, and volume advantages over existing motorized actuation technology. These advantages come with the added benefit of greatly reduced complexity in the support electronics. The piezo stepping actuator consists of three fully redundant sets of piezoelectric transducers (PZTs), two sets of brake PZTs, and one set of extension PZTs. These PZTs are used to grasp and move a runner attached to the optic to be moved. By proper cycling of the two brake and extension PZTs, both forward and backward moves of the runner can be achieved. Each brake can be configured for either a power-on or power-off state. For SIM, the brakes and gate of the mechanism are configured in such a manner that, at the end of the step, the actuator is in a parked or power-off state. The control software uses asynchronous sampling of an optical encoder to monitor the position of the runner. These samples are timed to coincide with the end of the previous move, which may consist of a variable number of steps. This sampling technique linearizes the device by avoiding input saturation of the actuator and makes latencies of the plant vanish. The software also estimates, in real time, the scale factor of the device and a disturbance caused by cycling of the brakes. These estimates are used to actively cancel the brake disturbance. The control system also includes feedback and feedforward elements that regulate the position of the runner to a given reference position. Convergence time for small- and medium-sized reference positions (less than 200 microns) to within 10 nanometers can be achieved in under 10 seconds. Convergence times for large moves (greater than 1 millimeter) are limited by the step rate.

  14. Aerial robot intelligent control method based on back-stepping

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Xue, Qian

    2018-05-01

    The aerial robot is characterized by strong nonlinearity, high coupling, and parameter uncertainty, so a self-adaptive back-stepping control method based on a neural network is proposed in this paper. The uncertain part of the aerial robot model is compensated online by a Cerebellar Model Articulation Controller neural network, and robust control terms are designed to overcome the uncertainty error of the system during online learning. At the same time, a particle swarm algorithm is used to optimize and fix the parameters so as to improve the dynamic performance, and the control law is obtained by the recursion of back-stepping regression. Simulation results show that the designed control law has the desired attitude tracking performance and good robustness in case of uncertainties and large errors in the model parameters.

  15. Toward a virtual building laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klems, J.H.; Finlayson, E.U.; Olsen, T.H.

    1999-03-01

    In order to achieve in a timely manner the large energy and dollar savings technically possible through improvements in building energy efficiency, it will be necessary to solve the problem of design failure risk. The most economical method of doing this would be to learn to calculate building performance with sufficient detail, accuracy and reliability to avoid design failure. Existing building simulation models (BSMs) are a large step in this direction, but are still not capable of this level of modeling. Developments in computational fluid dynamics (CFD) techniques now allow one to construct a road map from present BSMs to a complete building physical model. The most useful first step is a building interior model (BIM) that would allow prediction of local conditions affecting occupant health and comfort. To provide reliable prediction a BIM must incorporate the correct physical boundary conditions on a building interior. Doing so raises a number of specific technical problems and research questions. The solution of these within a context useful for building research and design is not likely to result from other research on CFD, which is directed toward the solution of different types of problems. A six-step plan for incorporating the correct boundary conditions within the context of the model problem of a large atrium has been outlined. A promising strategy for constructing a BIM is the overset grid technique for representing a building space in a CFD calculation. This technique promises to adapt well to building design and allows a step-by-step approach. A state-of-the-art CFD computer code using this technique has been adapted to the problem and can form the departure point for this research.

  16. Impurity transport in fractal media in the presence of a degrading diffusion barrier

    NASA Astrophysics Data System (ADS)

    Kondratenko, P. S.; Leonov, K. V.

    2017-08-01

    We have analyzed the transport regimes and the asymptotic forms of the impurity concentration in a randomly inhomogeneous fractal medium in the case when an impurity source is surrounded by a weakly permeable degrading barrier. The systematization of transport regimes depends on the relation between the time t0 of emergence of impurity from the barrier and the time t* corresponding to the beginning of degradation. For t0 < t*, degradation processes are immaterial. In the opposite situation, when t0 > t*, the results on time intervals t < t* can be formally reduced to the problem with a stationary barrier. The characteristics of regimes with t* < t < t0 depend on the scenario of barrier degradation. For an exponentially fast scenario, the interval t* < t < t0 is very narrow, and the transport regime occurring over time intervals t < t* passes almost jumpwise to the regime of the problem without a barrier. In the slow power-law scenario, the transport over the long time interval t* < t < t0 occurs in a new regime, which is faster than in the problem with a stationary barrier but slower than in the problem without a barrier. The asymptotic form of the concentration at large distances from the source over time intervals t < t0 has two steps, while for t > t0 it has only one step. The more remote step for t < t0 and the single step for t > t0 coincide with the asymptotic form in the problem without a barrier.

  17. A training approach to improve stepping automaticity while dual-tasking in Parkinson's disease: A prospective pilot study.

    PubMed

    Chomiak, Taylor; Watts, Alexander; Meyer, Nicole; Pereira, Fernando V; Hu, Bin

    2017-02-01

    Deficits in motor movement automaticity in Parkinson's disease (PD), especially during multitasking, are early and consistent hallmarks of cognitive function decline, which increases fall risk and reduces quality of life. This study aimed to test the feasibility and potential efficacy of a wearable sensor-enabled technological platform designed for an in-home music-contingent stepping-in-place (SIP) training program to improve step automaticity during dual-tasking (DT). This was a 4-week prospective intervention pilot study. The intervention uses a sensor system and algorithm that runs off the iPod Touch which calculates step height (SH) in real-time. These measurements were then used to trigger auditory (treatment group, music; control group, radio podcast) playback in real-time through wireless headphones upon maintenance of repeated large amplitude stepping. With small steps or shuffling, auditory playback stops, thus allowing participants to use anticipatory motor control to regain positive feedback. Eleven participants were recruited from an ongoing trial (Trial Number: ISRCTN06023392). Fear of falling (FES-I), general cognitive functioning (MoCA), self-reported freezing of gait (FOG-Q), and DT step automaticity were evaluated. While we found no significant effect of training on FES-I, MoCA, or FOG-Q, we did observe a significant group (music vs podcast) by training interaction in DT step automaticity (P<0.01). Wearable device technology can be used to enable musically-contingent SIP training to increase motor automaticity for people living with PD. The training approach described here can be implemented at home to meet the growing demand for self-management of symptoms by patients.

  18. Time Investment in Drug Supply Problems by Flemish Community Pharmacies.

    PubMed

    De Weerdt, Elfi; Simoens, Steven; Casteels, Minne; Huys, Isabelle

    2017-01-01

    Introduction: Drug supply problems are a well-known issue for pharmacies. Community and hospital pharmacies do everything they can to minimize the impact on patients. This study aims to quantify the time spent by Flemish community pharmacies on drug supply problems. Materials and Methods: During 18 weeks, employees of 25 community pharmacies filled in a template with the total time spent on drug supply problems. The template listed all the steps community pharmacies could undertake to manage drug supply problems. Results: Considering the median over the study period, the median time spent on drug supply problems was 25 min per week, with a minimum of 14 min per week and a maximum of 38 min per week. After calculating the median for each pharmacy, large differences were observed between pharmacies: about 25% spent less than 15 min per week and one-fifth spent more than 1 h per week. The steps on which community pharmacists spent most time were: (i) "check missing products from orders," (ii) "contact wholesaler/manufacturers regarding potential drug shortages," and (iii) "communicating to patients." These three steps account for about 50% of the total time spent on drug supply problems during the study period. Conclusion: Community pharmacies spend about half an hour per week on drug supply problems. Although 25 min per week does not seem that much, the time spent is not delineated and community pharmacists are constantly confronted with drug supply problems.

  19. Transport and dielectric properties of water and the influence of coarse-graining: Comparing BMW, SPC/E, and TIP3P models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braun, Daniel; Boresch, Stefan; Steinhauser, Othmar

    Long-term molecular dynamics simulations are used to compare the single particle dipole reorientation time, the diffusion constant, the viscosity, and the frequency-dependent dielectric constant of the coarse-grained big multipole water (BMW) model to two common atomistic three-point water models, SPC/E and TIP3P. In particular, the agreement between the calculated viscosity of BMW and the experimental viscosity of water is satisfactory. We also discuss contradictory values for the static dielectric properties reported in the literature. Employing molecular hydrodynamics, we show that the viscosity can be computed from single particle dynamics, circumventing the slow convergence of the standard approaches. Furthermore, our data indicate that the Kivelson relation connecting single particle and collective reorientation time holds true for all systems investigated. Since simulations with coarse-grained force fields often employ extremely large time steps, we also investigate the influence of the time step on dynamical properties. We observe a systematic acceleration of system dynamics when increasing the time step. Carefully monitoring energy/temperature conservation is found to be a sufficient criterion for the reliable calculation of dynamical properties. By contrast, recommended criteria based on the ratio of fluctuations of total vs. kinetic energy are not sensitive enough.

  20. Three-dimensional time dependent computation of turbulent flow

    NASA Technical Reports Server (NTRS)

    Kwak, D.; Reynolds, W. C.; Ferziger, J. H.

    1975-01-01

    The three-dimensional, primitive equations of motion are solved numerically for the case of isotropic box turbulence and the distortion of homogeneous turbulence by irrotational plane strain at large Reynolds numbers. A Gaussian filter is applied to the governing equations to define the large scale field. This gives rise to additional second order computed scale stresses (Leonard stresses). The residual stresses are simulated through an eddy viscosity. Uniform grids are used, with a fourth order differencing scheme in space and a second order Adams-Bashforth predictor for explicit time stepping. The results are compared to the experiments and statistical information extracted from the computer generated data.

  1. IMPLEMENTATION OF THE IMPROVED QUASI-STATIC METHOD IN RATTLESNAKE/MOOSE FOR TIME-DEPENDENT RADIATION TRANSPORT MODELLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zachary M. Prince; Jean C. Ragusa; Yaqi Wang

    Because of the recent interest in reactor transient modeling and the restart of the Transient Reactor Test (TREAT) Facility, there has been a need for more efficient, robust methods in computational frameworks. This is the impetus for implementing the Improved Quasi-Static method (IQS) in the RATTLESNAKE/MOOSE framework. IQS has been implemented with CFEM diffusion by factorizing the flux into a time-dependent amplitude and a spatially- and weakly time-dependent shape. The shape evaluation is very similar to a flux diffusion solve and is computed at large (macro) time steps, while the amplitude evaluation is a PRKE solve, whose parameters depend on the shape, computed at small (micro) time steps. IQS has been tested with a custom one-dimensional example and the TWIGL ramp benchmark. These examples prove it to be a viable and effective method for highly transient cases. More complex cases are intended to be applied to further test the method and its implementation.
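
    The amplitude-level piece of IQS is the point reactor kinetics equations (PRKE), advanced on micro steps between macro-step shape solves. Below is a self-contained sketch of that macro/micro loop with one delayed-neutron group and a hypothetical reactivity ramp; in full IQS the shape solve would refresh the PRKE parameters at each macro step, whereas here they are simply frozen, and all numbers are invented.

```python
import numpy as np

beta, lam, Lambda = 0.0065, 0.08, 1e-4    # delayed fraction, decay, gen. time
rho_of_t = lambda t: min(0.5, t) * 0.003  # assumed reactivity ramp, then hold

dt_macro, dt_micro, T = 0.1, 1e-4, 1.0
p, C = 1.0, beta / (lam * Lambda)         # steady-state initial condition

t = 0.0
while t < T - 1e-12:
    # Macro step: in full IQS, the shape solve would update the PRKE
    # parameters here; this sketch simply freezes rho over the macro step.
    rho = rho_of_t(t)
    for _ in range(int(dt_macro / dt_micro)):   # micro steps: PRKE advance
        dp = ((rho - beta) / Lambda) * p + lam * C
        dC = (beta / Lambda) * p - lam * C
        p += dt_micro * dp                      # explicit Euler on micro steps
        C += dt_micro * dC
        t += dt_micro
    print(f"t = {t:4.2f} s   amplitude p = {p:8.4f}")
```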

  2. A coupled weather generator - rainfall-runoff approach on hourly time steps for flood risk analysis

    NASA Astrophysics Data System (ADS)

    Winter, Benjamin; Schneeberger, Klaus; Dung Nguyen, Viet; Vorogushyn, Sergiy; Huttenlau, Matthias; Merz, Bruno; Stötter, Johann

    2017-04-01

    The evaluation of potential monetary damage of flooding is an essential part of flood risk management. One possibility to estimate the monetary risk is to analyze long time series of observed flood events and their corresponding damages. In reality, however, only few flood events are documented. This limitation can be overcome by the generation of a set of synthetic, physically and spatially plausible flood events and, subsequently, the estimation of the resulting monetary damages. In the present work, a set of synthetic flood events is generated by continuous rainfall-runoff simulation in combination with a coupled weather generator and temporal disaggregation procedure for the study area of Vorarlberg (Austria). Most flood risk studies focus on daily time steps; however, the mesoscale alpine study area is characterized by short concentration times, leading to large differences between daily mean and daily maximum discharge. Accordingly, an hourly time step is needed for the simulations. The hourly meteorological input for the rainfall-runoff model is generated in a two-step approach. A synthetic daily dataset is generated by a multivariate and multisite weather generator and subsequently disaggregated to hourly time steps with a k-Nearest-Neighbor model. Following the event generation procedure, the negative consequences of flooding are analyzed. The corresponding flood damage for each synthetic event is estimated by combining the synthetic discharge at representative points of the river network with a loss probability relation for each community in the study area. The loss probability relation is based on exposure and susceptibility analyses on a single object basis (residential buildings) for certain return periods. For these impact analyses official inundation maps of the study area are used. Finally, by analyzing the total event time series of damages, the expected annual damage or the losses associated with a certain probability of occurrence can be estimated for the entire study area.

  3. A Semi-Implicit, Three-Dimensional Model for Estuarine Circulation

    USGS Publications Warehouse

    Smith, Peter E.

    2006-01-01

    A semi-implicit, finite-difference method for the numerical solution of the three-dimensional equations for circulation in estuaries is presented and tested. The method uses a three-time-level, leapfrog-trapezoidal scheme that is essentially second-order accurate in the spatial and temporal numerical approximations. The three-time-level scheme is shown to be preferred over a two-time-level scheme, especially for problems with strong nonlinearities. The stability of the semi-implicit scheme is free from any time-step limitation related to the terms describing vertical diffusion and the propagation of the surface gravity waves. The scheme does not rely on any form of vertical/horizontal mode-splitting to treat the vertical diffusion implicitly. At each time step, the numerical method uses a double-sweep method to transform a large number of small tridiagonal equation systems and then uses the preconditioned conjugate-gradient method to solve a single, large, five-diagonal equation system for the water surface elevation. The governing equations for the multi-level scheme are prepared in a conservative form by integrating them over the height of each horizontal layer. The layer-integrated volumetric transports replace velocities as the dependent variables so that the depth-integrated continuity equation that is used in the solution for the water surface elevation is linear. Volumetric transports are computed explicitly from the momentum equations. The resulting method is mass conservative, efficient, and numerically accurate.
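
    The "double-sweep method" mentioned above is the classical Thomas algorithm for tridiagonal systems: one forward-elimination sweep followed by one back-substitution sweep, O(n) per system, which is what makes the many small vertical solves cheap. A generic sketch (not the USGS implementation):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal.

    One forward-elimination sweep, then one back-substitution sweep; O(n).
    """
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # backward sweep
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Quick check against a dense solve on a diagonally dominant random system.
rng = np.random.default_rng(4)
n = 10
a = rng.random(n); a[0] = 0.0                   # sub-diagonal (a[0] unused)
c = rng.random(n); c[-1] = 0.0                  # super-diagonal (c[-1] unused)
b = 4.0 + rng.random(n)                         # dominant main diagonal
d = rng.random(n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print("max error:", np.abs(thomas(a, b, c, d) - np.linalg.solve(A, d)).max())
```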

  4. Heat exchanger life extension via in-situ reconditioning

    DOEpatents

    Holcomb, David E.; Muralidharan, Govindarajan

    2016-06-28

    A method of in-situ reconditioning a heat exchanger includes the steps of: providing an in-service heat exchanger comprising a precipitate-strengthened alloy wherein at least one mechanical property of the heat exchanger is degraded by coarsening of the precipitate, the in-service heat exchanger containing a molten salt working heat exchange fluid; deactivating the heat exchanger from service in-situ; in a solution-annealing step, in-situ heating the heat exchanger and molten salt working heat exchange fluid contained therein to a temperature and for a time period sufficient to dissolve the coarsened precipitate; in a quenching step, flowing the molten salt working heat-exchange fluid through the heat exchanger in-situ to cool the alloy and retain a supersaturated solid solution while preventing formation of large precipitates; and in an aging step, further varying the temperature of the flowing molten salt working heat-exchange fluid to re-precipitate the dissolved precipitate.

  5. Lightning electromagnetic radiation field spectra in the interval from 0.2 to 20 MHz

    NASA Technical Reports Server (NTRS)

    Willett, J. C.; Bailey, J. C.; Leteinturier, C.; Krider, E. P.

    1990-01-01

    New Fourier transforms of wideband time-domain electric fields (E) produced by lightning are presented. The fields, recorded at the Kennedy Space Center during the summers of 1985 and 1987, were captured in such a way that several different events in each lightning flash could be analyzed. Average HF spectral amplitudes for first return strokes, stepped-leader steps, and 'characteristic pulses' are given for significantly more events, at closer ranges, and with better spectral resolution than in previous literature reports. The method of recording gives less bias toward the first large event in the flash and thus yields a large sample of a wide variety of lightning processes. As a result, reliable composite spectral amplitudes are obtained for a number of different processes in cloud-to-ground lightning over the frequency interval from 0.2 to 20 MHz.

  6. The Effect of Forward-Facing Steps on Stationary Crossflow Instability Growth and Breakdown

    NASA Technical Reports Server (NTRS)

    Eppink, Jenna L.

    2018-01-01

    The effect of a forward-facing step on stationary crossflow transition was studied using standard stereo particle image velocimetry (PIV) and time-resolved PIV. Step heights ranging from 53% to 71% of the boundary-layer thickness were studied in detail. Steps above a critical height of approximately 60% of the boundary-layer thickness had a significant impact on the stationary crossflow growth downstream of the step. For the critical cases, the stationary crossflow amplitude grew suddenly downstream of the step, decayed over a short region, then grew again. The adverse pressure gradient upstream of the step resulted in a region of crossflow reversal. A secondary set of vortices, rotating in the opposite direction to the primary vortices, developed underneath the uplifted primary vortices. The wall-normal velocity disturbance (V') created by these secondary vortices impinged on the step and is believed to feed into the strong vortex that developed downstream of the step. A large negative-crossflow region formed over a short distance downstream of the step, due to a sharp inboard curvature of the streamlines near the wall. For the larger step heights, a crossflow-reversal region formed just downstream of the strong negative-crossflow region. This crossflow-reversal region is believed to play an important role in the growth of the stationary crossflow vortices downstream of the step, and may be a good indicator of the critical forward-facing step height.

  7. Modeling myosin VI stepping dynamics

    NASA Astrophysics Data System (ADS)

    Tehver, Riina

    Myosin VI is a molecular motor that transports intracellular cargo and also acts as an anchor. The motor has been measured to have unusually large step-size variation: it has been reported to make long forward steps and short inchworm-like forward steps, as well as to step backwards. We have been developing a model that incorporates this diverse stepping behavior in a consistent framework. Our model allows us to predict the dynamics of the motor under different conditions and to investigate the evolutionary advantages of the large step-size variation.

  8. Expectation of an upcoming large postural perturbation influences the recovery stepping response and outcome.

    PubMed

    Pater, Mackenzie L; Rosenblatt, Noah J; Grabiner, Mark D

    2015-01-01

    Tripping during locomotion, the leading cause of falls in older adults, generally occurs without prior warning and often while performing a secondary task. Prior warning can alter the state of physiological preparedness and beneficially influence the response to the perturbation. Previous studies have examined how altering the initial "preparedness" for an upcoming perturbation can affect kinematic responses following small disturbances that did not require a stepping response to restore dynamic stability. The purpose of this study was to examine how expectation affected fall outcome and recovery response kinematics following a large, treadmill-delivered perturbation simulating a trip and requiring at least one recovery step to avoid a fall. Following the perturbation, 47% of subjects fell when they were not expecting the perturbation, whereas 12% fell when they were aware that the perturbation would occur "sometime in the next minute". The between-group differences were accompanied by slower reaction times in the non-expecting group (p < 0.01). Slower reaction times were associated with kinematics that have previously been shown to increase the likelihood of falling following a laboratory-induced trip. The results demonstrate the importance of considering the context under which recovery responses are assessed and, further, give insight into the context in which task-specific perturbation training is administered. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. 24 CFR 55.20 - Decision making process.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... the affected public is largely non-English speaking. In addition, all notices must be published in an..., including public notices and an examination of practicable alternatives. The steps to be followed in the... public at the earliest possible time of a proposal to consider an action in a floodplain (or in the 500...

  10. An algorithm for fast elastic wave simulation using a vectorized finite difference operator

    NASA Astrophysics Data System (ADS)

    Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna

    2018-07-01

    Modern geophysical imaging techniques exploit the full wavefield information, which can be simulated numerically. These numerical simulations are computationally expensive due to several factors, such as a large number of time steps and nodes, large derivative stencils, and large model sizes. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we introduce a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. It allows all nodes of a variable defined on the staggered grid to be updated in a manner similar to the collocated-grid scheme, thereby reducing the computational run time considerably. Here we demonstrate an application of this operator to simulate seismic wave propagation in elastic media (the Marmousi model) by discretizing the equations on a staggered grid. We have compared the performance of this operator in three programming languages, which reveals that it can increase the execution speed by a factor of at least 2-3 for FORTRAN and MATLAB, and by nearly 100 for Python. We have further carried out various tests in MATLAB to analyze the effect of model size and the number of time steps on total simulation run time. We find that there is an additional, though small, computational overhead for each step, which depends on the total number of time steps used in the simulation. A MATLAB code package, 'FDwave', for the proposed simulation scheme is available upon request.
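
    The vectorized-operator idea is that a staggered-grid difference can be written as one whole-array expression rather than a loop over nodes. A schematic sketch in NumPy (illustrative names, not the FDwave code):

        import numpy as np

        def dxf(f, dx):
            # Forward difference toward the staggered x-neighbor,
            # evaluated for every node in a single array expression.
            return (f[1:, :] - f[:-1, :]) / dx

        def dzf(f, dz):
            return (f[:, 1:] - f[:, :-1]) / dz

        # A velocity update of a staggered-grid elastic scheme then
        # reads, schematically:
        # vx[1:, :-1] += (dt / rho) * (dxf(sxx, dx)[:, :-1]
        #                              + dzf(sxz, dz)[1:, :])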

  11. Integrated process development-a robust, rapid method for inclusion body harvesting and processing at the microscale level.

    PubMed

    Walther, Cornelia; Kellner, Martin; Berkemeyer, Matthias; Brocard, Cécile; Dürauer, Astrid

    2017-10-21

    Escherichia coli stores large amounts of highly pure product within inclusion bodies (IBs). To take advantage of this beneficial feature, after cell disintegration, the first step toward optimal product recovery is efficient IB preparation. This step is also important in evaluating upstream optimization and process development, due to the potential impact of bioprocessing conditions on product quality and on the nanoscale properties of IBs. Proper IB preparation is often neglected, because laboratory-scale methods require large amounts of material and labor. Miniaturization and parallelization can accelerate analyses of individual processing steps and provide a deeper understanding of up- and downstream processing interdependencies. Consequently, reproducible, predictive microscale methods are in demand. In the present study, we complemented a recently established high-throughput cell disruption method with a microscale method for preparing purified IBs. This preparation provided results comparable to laboratory-scale IB processing regarding impurity depletion and product loss. Furthermore, with this method we performed a "design of experiments" study to demonstrate the influence of fermentation conditions on the performance of subsequent downstream steps and on product quality. We showed that this approach provided a 300-fold reduction in material consumption for each fermentation condition and a 24-fold reduction in processing time for 24 samples.

  12. From Coexpression to Coregulation: An Approach to Inferring Transcriptional Regulation Among Gene Classes from Large-Scale Expression Data

    NASA Technical Reports Server (NTRS)

    Mjolsness, Eric; Castano, Rebecca; Mann, Tobias; Wold, Barbara

    2000-01-01

    We provide preliminary evidence that existing algorithms for inferring small-scale gene regulation networks from gene expression data can be adapted to large-scale gene expression data coming from hybridization microarrays. The essential steps are (1) clustering many genes by their expression time-course data into a minimal set of clusters of co-expressed genes, (2) theoretically modeling the various conditions under which the time-courses are measured using a continuous-time analog recurrent neural network for the cluster mean time-courses, (3) fitting such a regulatory model to the cluster mean time-courses by simulated annealing with weight decay, and (4) analysing several such fits for commonalities in the circuit parameter sets, including the connection matrices. This procedure can be used to assess the adequacy of existing and future gene expression time-course data sets for determining transcriptional regulatory relationships such as coregulation.

  13. Diffusion with stochastic resetting at power-law times.

    PubMed

    Nagar, Apoorva; Gupta, Shamik

    2016-06-01

    What happens when a continuously evolving stochastic process is interrupted with large changes at random intervals τ distributed as a power law, ∼ τ^{-(1+α)} with α > 0? Modeling the stochastic process by diffusion and the large changes as abrupt resets to the initial condition, we obtain exact closed-form expressions for both static and dynamic quantities, while accounting for the strong correlations implied by a power law. Our results show that the resulting dynamics exhibits a spectrum of rich long-time behavior, from an ever-spreading spatial distribution for α < 1, to one that is time independent for α > 1. The dynamics has strong consequences on the time to reach a distant target for the first time; we specifically show that there exists an optimal α that minimizes the mean time to reach the target, thereby offering a step towards a viable strategy to locate targets in a crowded environment.
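
    A Monte Carlo sketch of the model, with diffusion punctuated by resets to the origin at Pareto-distributed intervals (parameter values are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)

        def diffuse_with_resetting(alpha, t_max, dt=1e-3, D=0.5):
            # Waiting times tau >= dt follow a power law ~ tau^{-(1+alpha)},
            # sampled by inverting the Pareto CDF: tau = dt * U**(-1/alpha).
            x, t = 0.0, 0.0
            next_reset = dt * rng.random() ** (-1.0 / alpha)
            while t < t_max:
                x += np.sqrt(2.0 * D * dt) * rng.standard_normal()
                t += dt
                if t >= next_reset:
                    x = 0.0  # abrupt reset to the initial condition
                    next_reset = t + dt * rng.random() ** (-1.0 / alpha)
            return x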

  14. Ship Production Symposium Held in Seattle, Washington on August 24-26, 1988 (The National Shipbuilding Research Program)

    DTIC Science & Technology

    1988-08-01

    functional area in which one of the brothers was clearly in charge was engineering. Nat was the Chief Engineer largely because John was blind from the age of...work package that straddles a bulkhead during hot work on the bulkhead, knowing full well that later in time, zones that coincide with the...take the natural step of employing these concepts in large-scale repair work. Decreasing work the Marine Industry always fans the flames of the age

  15. A marker-free system for the analysis of movement disabilities.

    PubMed

    Legrand, L; Marzani, F; Dusserre, L

    1998-01-01

    A major step toward improving the treatment of disabled persons may be achieved by using motion analysis equipment. We are developing such a system. It allows the analysis of planar human motion (e.g., gait) without tracking markers. The system is composed of one fixed camera which acquires an image sequence of a human in motion. The processing is then divided into two steps: first, a large number of pixels belonging to the boundaries of the human body are extracted at each acquisition time. Second, a two-dimensional model of the human body, based on tapered superquadrics, is successively matched to the sets of pixels previously extracted; a specific fuzzy clustering process is used for this purpose. Moreover, an optical flow procedure predicts the model location at each acquisition time from its location at the previous time. Finally, we present some results of this process applied to a leg in motion.

  16. Optimization of Advanced ACTPol Transition Edge Sensor Bolometer Operation Using R(T,I) Transition Measurements

    NASA Astrophysics Data System (ADS)

    Salatino, Maria

    2017-06-01

    In current submillimeter and millimeter cosmology experiments, the focal planes are populated by kilopixel arrays of transition edge sensors (TESs). Varying incoming power loads require frequent rebiasing of the TESs through standard current-voltage (IV) acquisition. The time required to perform IVs on such large arrays, and the resulting transient heating of the bath, reduce sky observation time. We explore a bias-step method that significantly reduces the time required for the rebiasing process. It exploits the detectors' responses to the injection of a small square-wave signal on top of the dc bias current, together with knowledge of the shape of the detector transition R(T,I). The method has been tested on two detector arrays of the Atacama Cosmology Telescope (ACT). In this paper, we focus on the first step of the method: estimating the TES resistance as a percentage of its normal resistance (%Rn).

  17. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    NASA Astrophysics Data System (ADS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F.; Neese, Frank

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate previous implementation.

  18. Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction

    DOE PAGES

    Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir

    2016-10-20

    Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform an optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.
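
    The Galerkin/LSPG distinction can be made concrete on a single backward-Euler step of a linear system dx/dt = Ax, where the time-discrete residual is linear in the reduced coordinates. All matrices below are synthetic illustrations, not from the paper:

        import numpy as np

        rng = np.random.default_rng(1)
        n, k, dt = 50, 5, 0.1
        A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
        Phi, _ = np.linalg.qr(rng.standard_normal((n, k)))  # reduced basis
        q0 = Phi.T @ rng.standard_normal(n)                 # current state

        # Time-discrete residual r(q) = (I - dt*A) @ Phi @ q - Phi @ q0.
        J = (np.eye(n) - dt * A) @ Phi

        # Galerkin: enforce Phi^T r(q) = 0 (Phi has orthonormal columns,
        # so the right-hand side Phi^T Phi q0 reduces to q0).
        q_galerkin = np.linalg.solve(Phi.T @ J, q0)

        # LSPG: minimize ||r(q)||_2 at the time-discrete level.
        q_lspg = np.linalg.lstsq(J, Phi @ q0, rcond=None)[0]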

  20. Implementation of a web-based medication tracking system in a large academic medical center.

    PubMed

    Calabrese, Sam V; Williams, Jonathan P

    2012-10-01

    Pharmacy workflow efficiencies achieved through the use of an electronic medication-tracking system are described. Medication dispensing turnaround times at the inpatient pharmacy of a large hospital were evaluated before and after transition from manual medication tracking to a Web-based tracking process involving sequential bar-code scanning and real-time monitoring of medication status. The transition was carried out in three phases: (1) a workflow analysis, including the identification of optimal points for medication scanning with hand-held wireless devices, (2) the phased implementation of an automated solution and associated hardware at a central dispensing pharmacy and three satellite locations, and (3) postimplementation data collection to evaluate the impact of the new tracking system and areas for improvement. Relative to the manual tracking method, electronic medication tracking allowed the capture of far more data points, enabling the pharmacy team to delineate the time required for each step of the medication dispensing process and to identify the steps most likely to involve delays. A comparison of baseline and postimplementation data showed substantial reductions in overall medication turnaround times with the use of the Web-based tracking system (time reductions of 45% and 22% at the central and satellite sites, respectively). In addition to more accurate projections and documentation of turnaround times, the Web-based tracking system has facilitated quality-improvement initiatives. Implementation of an electronic tracking system for monitoring the delivery of medications provided a comprehensive mechanism for calculating turnaround times and allowed the pharmacy to identify bottlenecks within the medication distribution system. Altering processes removed these bottlenecks and decreased delivery turnaround times.

  1. Assessing performance of an Electronic Health Record (EHR) using Cognitive Task Analysis.

    PubMed

    Saitwal, Himali; Feng, Xuan; Walji, Muhammad; Patel, Vimla; Zhang, Jiajie

    2010-07-01

    Many Electronic Health Record (EHR) systems fail to provide user-friendly interfaces due to the lack of systematic consideration of human-centered computing issues. Such interfaces can be improved to provide easy-to-use, easy-to-learn, and error-resistant EHR systems to users. The objective was to evaluate the usability of an EHR system and suggest areas of improvement in the user interface. The user interface of the AHLTA (Armed Forces Health Longitudinal Technology Application) was analyzed using the Cognitive Task Analysis (CTA) method called GOMS (Goals, Operators, Methods, and Selection rules) and an associated technique called KLM (Keystroke Level Model). The GOMS method was used to evaluate the AHLTA user interface by classifying each step of a given task as a mental (internal) or physical (external) operator. This analysis was performed by two analysts independently, and the inter-rater reliability was computed to verify the reliability of the GOMS method. Further evaluation was performed using KLM to estimate the execution time required to perform a given task through application of its standard set of operators. The results are based on the analysis of 14 prototypical tasks performed by AHLTA users. They show that on average a user needs to go through 106 steps to complete a task. To perform all 14 tasks, a user would spend about 22 min (independent of system response time) on data entry, of which 11 min are spent on the more effortful mental operators. The inter-rater reliability across all 14 tasks was 0.8 (kappa), indicating good reliability of the method. This paper empirically identifies the following findings related to the performance of AHLTA: (1) a large average number of steps to complete common tasks, (2) high average execution times, and (3) a large percentage of mental operators. The user interface can be improved by reducing (a) the total number of steps and (b) the percentage of mental effort required for the tasks. 2010 Elsevier Ireland Ltd. All rights reserved.
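
    KLM execution-time estimates of this kind are simple sums of standard operator times. A sketch using the textbook values from Card, Moran, and Newell (the encoding of any particular AHLTA task below is hypothetical):

        # Standard KLM operator times in seconds: K keystroke, P point,
        # H home hands on device, M mental preparation, B button press.
        KLM_TIMES = {"K": 0.2, "P": 1.1, "H": 0.4, "M": 1.35, "B": 0.1}

        def klm_estimate(sequence):
            """Estimated execution time for a coded operator sequence."""
            return sum(KLM_TIMES[op] for op in sequence)

        # e.g., think, point at a field, click, type ten characters:
        # klm_estimate("MPB" + "K" * 10)  -> 4.55 seconds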

  2. Scholarly Context Adrift: Three out of Four URI References Lead to Changed Content

    PubMed Central

    Tobin, Richard; Grover, Claire

    2016-01-01

    Increasingly, scholarly articles contain URI references to “web at large” resources including project web sites, scholarly wikis, ontologies, online debates, presentations, blogs, and videos. Authors reference such resources to provide essential context for the research they report on. A reader who visits a web at large resource by following a URI reference in an article, some time after its publication, is led to believe that the resource’s content is representative of what the author originally referenced. However, due to the dynamic nature of the web, that may very well not be the case. We reuse a dataset from a previous study in which several authors of this paper were involved, and investigate to what extent the textual content of web at large resources referenced in a vast collection of Science, Technology, and Medicine (STM) articles published between 1997 and 2012 has remained stable since the publication of the referencing article. We do so in a two-step approach that relies on various well-established similarity measures to compare textual content. In a first step, we use 19 web archives to find snapshots of referenced web at large resources that have textual content that is representative of the state of the resource around the time of publication of the referencing paper. We find that representative snapshots exist for about 30% of all URI references. In a second step, we compare the textual content of representative snapshots with that of their live web counterparts. We find that for over 75% of references the content has drifted away from what it was when referenced. These results raise significant concerns regarding the long term integrity of the web-based scholarly record and call for the deployment of techniques to combat these problems. PMID:27911955
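
    One of the well-established similarity measures alluded to is Jaccard similarity over word shingles; a minimal sketch (the study combined several measures, not only this one):

        def jaccard_similarity(text_a, text_b, k=5):
            """Jaccard similarity over k-word shingles of two texts."""
            def shingles(text):
                words = text.lower().split()
                return {tuple(words[i:i + k])
                        for i in range(len(words) - k + 1)}
            a, b = shingles(text_a), shingles(text_b)
            return len(a & b) / len(a | b) if a | b else 1.0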

  3. A training approach to improve stepping automaticity while dual-tasking in Parkinson's disease

    PubMed Central

    Chomiak, Taylor; Watts, Alexander; Meyer, Nicole; Pereira, Fernando V.; Hu, Bin

    2017-01-01

    Background: Deficits in motor movement automaticity in Parkinson's disease (PD), especially during multitasking, are early and consistent hallmarks of cognitive function decline, which increases fall risk and reduces quality of life. This study aimed to test the feasibility and potential efficacy of a wearable sensor-enabled technological platform designed for an in-home music-contingent stepping-in-place (SIP) training program to improve step automaticity during dual-tasking (DT). Methods: This was a 4-week prospective intervention pilot study. The intervention uses a sensor system and algorithm that runs on an iPod Touch and calculates step height (SH) in real time. These measurements were used to trigger auditory playback (treatment group, music; control group, radio podcast) in real time through wireless headphones upon maintenance of repeated large-amplitude stepping. With small steps or shuffling, auditory playback stops, thus allowing participants to use anticipatory motor control to regain positive feedback. Eleven participants were recruited from an ongoing trial (Trial Number: ISRCTN06023392). Fear of falling (FES-I), general cognitive functioning (MoCA), self-reported freezing of gait (FOG-Q), and DT step automaticity were evaluated. Results: While we found no significant effect of training on FES-I, MoCA, or FOG-Q, we did observe a significant group (music vs podcast) by training interaction in DT step automaticity (P<0.01). Conclusion: Wearable device technology can be used to enable music-contingent SIP training to increase motor automaticity for people living with PD. The training approach described here can be implemented at home to meet the growing demand for self-management of symptoms by patients. PMID:28151878

  4. Fully implicit adaptive mesh refinement solver for 2D MHD

    NASA Astrophysics Data System (ADS)

    Philip, B.; Chacon, L.; Pernice, M.

    2008-11-01

    Application of implicit adaptive mesh refinement (AMR) to simulate resistive magnetohydrodynamics is described. Solving this challenging multi-scale, multi-physics problem can improve understanding of reconnection in magnetically-confined plasmas. AMR is employed to resolve extremely thin current sheets, essential for an accurate macroscopic description. Implicit time stepping allows us to accurately follow the dynamical time scale of the developing magnetic field, without being restricted by fast Alfvén time scales. At each time step, the large-scale system of nonlinear equations is solved by a Jacobian-free Newton-Krylov method together with a physics-based preconditioner. Each block within the preconditioner is solved optimally using the Fast Adaptive Composite grid method, which can be considered a multiplicative Schwarz method on AMR grids. We will demonstrate the excellent accuracy and efficiency properties of the method with several challenging reduced MHD applications, including tearing, island coalescence, and tilt instabilities. B. Philip, L. Chacón, M. Pernice, J. Comput. Phys., in press (2008).
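
    The Jacobian-free Newton-Krylov kernel can be sketched in a few lines; this generic version omits the physics-based preconditioner that the entry identifies as essential for performance:

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def jfnk_step(F, u, eps=1e-7):
            # The Jacobian-vector product J @ v is approximated by a
            # finite difference of the residual, so the Jacobian is
            # never formed or stored.
            r = F(u)
            J = LinearOperator((u.size, u.size),
                               matvec=lambda v: (F(u + eps * v) - r) / eps)
            du, info = gmres(J, -r)  # Krylov solve for the Newton update
            return u + du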

  5. DSP-Based dual-polarity mass spectrum pattern recognition for bio-detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riot, V; Coffee, K; Gard, E

    2006-04-21

    The Bio-Aerosol Mass Spectrometry (BAMS) instrument analyzes single aerosol particles using a dual-polarity time-of-flight mass spectrometer, simultaneously recording spectra of thirty thousand to a hundred thousand points for each polarity. We describe here a real-time pattern recognition algorithm developed at Lawrence Livermore National Laboratory that has been implemented on a nine-Digital-Signal-Processor (DSP) system from Signatec Incorporated. The algorithm first preprocesses the raw time-of-flight data independently through an adaptive baseline removal routine. The next step is a polarity-dependent calibration to a mass-to-charge representation, reducing the data to about five hundred to a thousand channels per polarity. The last step is identification, using a pattern recognition algorithm based on a library of known particle signatures, including threat agents and background particles. The identification step integrates the two polarities for a final determination using a score-based rule tree. This algorithm, operating on multiple channels per polarity and on multiple polarities, is well suited for parallel real-time processing. It has been implemented on the PMP8A from Signatec Incorporated, a computer-based board that can interface directly to the two one-giga-sample digitizers (PDA1000 from Signatec Incorporated) used to record the two polarities of time-of-flight data. By using optimized data separation, pipelining, and parallel processing across the nine DSPs, it is possible to achieve a processing speed of up to a thousand particles per second while maintaining the recognition rate observed in a non-real-time implementation. This embedded system has allowed the BAMS technology to improve its throughput, and therefore its sensitivity, while maintaining a large dynamic range (number of channels and two polarities) and thus the system's specificity for bio-detection.

  6. Single-crossover recombination in discrete time.

    PubMed

    von Wangenheim, Ute; Baake, Ellen; Baake, Michael

    2010-05-01

    Modelling the process of recombination leads to a large coupled nonlinear dynamical system. Here, we consider a particular case of recombination in discrete time, allowing only for single crossovers. While the analogous dynamics in continuous time admits a closed solution (Baake and Baake in Can J Math 55:3-41, 2003), this no longer works for discrete time. A more general model (i.e. without the restriction to single crossovers) has been studied before (Bennett in Ann Hum Genet 18:311-317, 1954; Dawson in Theor Popul Biol 58:1-20, 2000; Linear Algebra Appl 348:115-137, 2002) and was solved algorithmically by means of Haldane linearisation. Using the special formalism introduced by Baake and Baake (Can J Math 55:3-41, 2003), we obtain further insight into the single-crossover dynamics and the particular difficulties that arise in discrete time. We then transform the equations to a solvable system in a two-step procedure: linearisation followed by diagonalisation. Still, the coefficients of the second step must be determined in a recursive manner, but once this is done for a given system, they allow for an explicit solution valid for all times.

  7. Advancing parabolic operators in thermodynamic MHD models: Explicit super time-stepping versus implicit schemes with Krylov solvers

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.; Mikić, Z.; Linker, J. A.; Lionello, R.

    2017-05-01

    We explore the performance and advantages/disadvantages of using unconditionally stable explicit super time-stepping (STS) algorithms versus implicit schemes with Krylov solvers for integrating parabolic operators in thermodynamic MHD models of the solar corona. Specifically, we compare the second-order Runge-Kutta Legendre (RKL2) STS method with the implicit backward Euler scheme computed using the preconditioned conjugate gradient (PCG) solver with both a point-Jacobi and a non-overlapping domain decomposition ILU0 preconditioner. The algorithms are used to integrate anisotropic Spitzer thermal conduction and artificial kinematic viscosity at time steps much larger than classic explicit stability criteria allow. A key component of the comparison is the use of an established MHD model (MAS) to compute a real-world simulation on a large HPC cluster. Special attention is placed on the parallel scaling of the algorithms. It is shown that, for a specific problem and model, the RKL2 method is comparable to or surpasses the implicit method with PCG solvers in performance and scaling, but suffers from some accuracy limitations. These limitations, and the applicability of RKL methods, are briefly discussed.
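
    For reference, the implicit alternative compared here reduces, in its simplest form, to a sparse symmetric-positive-definite solve per step. A one-dimensional sketch with an unpreconditioned conjugate-gradient solve (far simpler than the MAS solver, for illustration only):

        import numpy as np
        from scipy.sparse import diags, identity
        from scipy.sparse.linalg import cg

        def backward_euler_heat(u, kappa, dx, dt):
            # One unconditionally stable step of u_t = kappa * u_xx with
            # zero Dirichlet boundaries: solve (I - dt*kappa*L) u_new = u,
            # so dt may far exceed the explicit limit dx**2 / (2*kappa).
            n = u.size
            L = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
            A = identity(n) - dt * kappa * L
            u_new, info = cg(A, u)  # A is SPD, so CG applies
            return u_new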

  8. Large-eddy simulations with wall models

    NASA Technical Reports Server (NTRS)

    Cabot, W.

    1995-01-01

    The near-wall viscous and buffer regions of wall-bounded flows generally require a large expenditure of computational resources to be resolved adequately, even in large-eddy simulation (LES). Often as much as 50% of the grid points in a computational domain are devoted to these regions. The dense grids that this implies also generally require small time steps for numerical stability and/or accuracy. It is commonly assumed that the inner wall layers are near equilibrium, so that the standard logarithmic law can be applied as the boundary condition for the wall stress well away from the wall, for example, in the logarithmic region, obviating the need to expend large amounts of grid points and computational time in this region. This approach is commonly employed in LES of planetary boundary layers, and it has also been used for some simple engineering flows. In order to calculate accurately a wall-bounded flow with coarse wall resolution, one requires the wall stress as a boundary condition. The goal of this work is to determine the extent to which equilibrium and boundary layer assumptions are valid in the near-wall regions, to develop models for the inner layer based on such assumptions, and to test these modeling ideas in some relatively simple flows with different pressure gradients, such as channel flow and flow over a backward-facing step. Ultimately, models that perform adequately in these situations will be applied to more complex flow configurations, such as an airfoil.
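
    In equilibrium wall models of this kind, the wall-stress boundary condition comes from inverting the logarithmic law for the friction velocity. A minimal sketch (the log-law constants and the fixed-point iteration are illustrative choices, not the paper's model):

        import numpy as np

        KAPPA, B = 0.41, 5.2  # assumed log-law constants

        def wall_stress(u, y, nu, rho, iters=20):
            # Invert u/u_tau = ln(y*u_tau/nu)/KAPPA + B for u_tau, with
            # u sampled at height y in the logarithmic region, then
            # return the wall shear stress rho * u_tau**2.
            u_tau = np.sqrt(nu * u / y)  # crude initial guess
            for _ in range(iters):
                u_tau = KAPPA * u / (np.log(y * u_tau / nu) + KAPPA * B)
            return rho * u_tau**2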

  9. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware

    PubMed Central

    Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.

    2016-01-01

    SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10^4 neurons and 5.1 × 10^7 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061
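
    The event-based reformulation rests on a generic trick: a state variable that decays exponentially need not be updated every time step, because the accumulated decay can be applied lazily when a spike event arrives. A sketch of that bookkeeping (not the BCPNN equations themselves):

        import numpy as np

        def decay_trace(z, t_last, t_now, tau):
            # Apply the decay accumulated since the last event in one
            # multiplication instead of once per simulation time step.
            return z * np.exp(-(t_now - t_last) / tau)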

  10. A progress report on estuary modeling by the finite-element method

    USGS Publications Warehouse

    Gray, William G.

    1978-01-01

    Various schemes are investigated for finite-element modeling of two-dimensional surface-water flows. The first schemes investigated combine finite-element spatial discretization with split-step time stepping schemes that have been found useful in finite-difference computations. Because of the large number of numerical integrations performed in space and the large sparse matrices solved, these finite-element schemes were found to be economically uncompetitive with finite-difference schemes. A very promising leapfrog scheme is proposed which, when combined with a novel very fast spatial integration procedure, eliminates the need to solve any matrices at all. Additional problems attacked included proper propagation of waves and proper specification of the normal flow-boundary condition. This report indicates work in progress and does not come to a definitive conclusion as to the best approach for finite-element modeling of surface-water problems. The results presented represent findings obtained between September 1973 and July 1976. (Woodard-USGS)

  11. "Silicon millefeuille": From a silicon wafer to multiple thin crystalline films in a single step

    NASA Astrophysics Data System (ADS)

    Hernández, David; Trifonov, Trifon; Garín, Moisés; Alcubilla, Ramon

    2013-04-01

    During the last few years, many techniques have been developed to obtain thin crystalline films from commercial silicon ingots. Large market applications are foreseen in the photovoltaic field, where important cost reductions are predicted, and also in advanced microelectronics technologies such as three-dimensional integration, system on foil, or silicon interposers [Dross et al., Prog. Photovoltaics 20, 770-784 (2012); R. Brendel, Thin Film Crystalline Silicon Solar Cells (Wiley-VCH, Weinheim, Germany 2003); J. N. Burghartz, Ultra-Thin Chip Technology and Applications (Springer Science + Business Media, NY, USA, 2010)]. Existing methods produce silicon layers one at a time: once one thin film is obtained, the complete process is repeated to obtain the next. Here, we describe a technology that, from a single crystalline silicon wafer, produces a large number of crystalline films with controlled thickness in a single technological step.

  12. Underground structure pattern and multi-AO reaction with step-feed concept for upgrading a large wastewater treatment plant

    NASA Astrophysics Data System (ADS)

    Peng, Yi; Zhang, Jie; Li, Dong

    2018-03-01

    A large wastewater treatment plant (WWTP) built around a US treatment technology could no longer meet the new demands of the urban environment or the need for reclaimed water in China. A multi-AO reaction process (anaerobic/oxic/anoxic/oxic/anoxic/oxic) WWTP with an underground structure was therefore proposed for the upgrade project. Four main new technologies were applied: (1) multi-AO reaction with step-feed technology; (2) deodorization; (3) new energy-saving technology, such as a water-source heat pump and an optical-fiber lighting system; (4) dependable support of the old WWTP's effluent quality during the new WWTP's construction. After construction, the upgraded WWTP had saved two-thirds of the land occupation, increased treatment capacity by 80%, and tightened the effluent standard more than twofold. Moreover, it has become a benchmark for turning an ecological liability into a positive asset.

  13. Risk as a Resource - A New Paradigm

    NASA Technical Reports Server (NTRS)

    Gindorf, Thomas E.

    1996-01-01

    NASA must change dramatically because of the current United States federal budget climate. The American people and their elected officials have mandated a smaller, more efficient and effective government. For the past decade, NASA's budget had grown at or slightly above the rate of inflation. In that era, taking all steps to avoid the risk of failure was the rule. Spacecraft development was characterized by extensive analyses, numerous reviews, and multiple conservative tests. This methodology was consistent with the long available schedules for developing hardware and software for very large, billion dollar spacecraft. Those days are over. The time when every identifiable step was taken to avoid risk is being replaced by a new paradigm which manages risk in much the same way as other resources (schedule, performance, or dollars) are managed. While success is paramount to survival, it can no longer be bought with a large growing NASA budget.

  14. Communicate with Parents in 7 Simple Steps

    ERIC Educational Resources Information Center

    Balsley, Jessica

    2011-01-01

    The relationship between art teachers and students' families is a complex one. Because they typically see a large volume of students in a short amount of time, art teachers are rarely the first line of communication when it comes to connecting with families on a regular basis. Most often, communication only occurs when the teacher has great news…

  15. Embedded, real-time UAV control for improved, image-based 3D scene reconstruction

    Treesearch

    Jean Liénard; Andre Vogs; Demetrios Gatziolis; Nikolay Strigul

    2016-01-01

    Unmanned Aerial Vehicles (UAVs) are already broadly employed for 3D modeling of large objects such as trees and monuments via photogrammetry. The usual workflow includes two distinct steps: image acquisition with a UAV and computationally demanding post-flight image processing. Insufficient feature overlap across images is a common shortcoming in post-flight image...

  16. Unmanned Air Vehicle - Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fred Oppel, SNL 06134

    2013-04-17

    This package contains modules that model the mobility of airborne systems such as helicopters and fixed-wing aircraft. The package currently models first-order physics: basically a velocity integrator. UAV mobility uses an internal clock to maintain stable, high-fidelity simulations over large time steps. This package depends on interfaces that reside in the Mobility package.
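
    A "velocity integrator" in this sense is just first-order kinematics; a hypothetical sketch of such an update (not the package's code):

        def advance(position, velocity, dt):
            # Exact for piecewise-constant velocity, which is why the
            # update stays stable and accurate over large time steps.
            return tuple(p + v * dt for p, v in zip(position, velocity))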

  17. Analysis of Cocaine, Heroin, and their Metabolites in Saliva

    DTIC Science & Technology

    1990-07-10

    in Table 1. Table 1 - HPLC Conditions. Column: Alltech/Applied Science Econosphere C8, 250 x 4.6 mm; Solvent A: 0.1M ammonium acetate; Solvent B: 10:90...concentration step is quite time-consuming and often results in large losses of sample. LC/MS has the advantage over GC/MS of allowing the analysis of

  18. An optimal control strategy for hybrid actuator systems: Application to an artificial muscle with electric motor assist.

    PubMed

    Ishihara, Koji; Morimoto, Jun

    2018-03-01

    Humans use multiple muscles to generate joint movements such as an elbow motion. With multiple lightweight and compliant actuators, joint movements can also be generated efficiently. Similarly, robots can use multiple actuators to efficiently generate a one-degree-of-freedom movement, for which the desired joint torque must be properly distributed to each actuator. One approach to this torque distribution problem is an optimal control method. However, solving the optimal control problem at each control time step has not been deemed practical due to its large computational burden. In this paper, we propose a computationally efficient method to derive an optimal control strategy for a hybrid actuation system composed of multiple actuators, where each actuator has different dynamical properties. We investigated a singularly perturbed formulation of the hybrid actuator model that subdivides the original large-scale control problem into smaller subproblems, so that the optimal control outputs for each actuator can be derived at each control time step, and applied our proposed method to our pneumatic-electric hybrid actuator system. Our method derived a torque distribution strategy for the hybrid actuator while overcoming the difficulty of solving real-time optimal control problems. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  19. Time Investment in Drug Supply Problems by Flemish Community Pharmacies

    PubMed Central

    De Weerdt, Elfi; Simoens, Steven; Casteels, Minne; Huys, Isabelle

    2017-01-01

    Introduction: Drug supply problems are a familiar difficulty for pharmacies. Community and hospital pharmacies do everything they can to minimize the impact on patients. This study aims to quantify the time spent by Flemish community pharmacies on drug supply problems. Materials and Methods: During 18 weeks, employees of 25 community pharmacies filled in a template with the total time spent on drug supply problems. The template listed all the steps community pharmacies could undertake to manage drug supply problems. Results: Considering the median over the study period, the median time spent on drug supply problems was 25 min per week, with a minimum of 14 min per week and a maximum of 38 min per week. After calculating the median for each pharmacy, large differences were observed between pharmacies: about 25% spent less than 15 min per week and one-fifth spent more than 1 h per week. The steps on which community pharmacists spent the most time were: (i) "check missing products from orders," (ii) "contact wholesaler/manufacturers regarding potential drug shortages," and (iii) "communicating to patients." These three steps account for about 50% of the total time spent on drug supply problems during the study period. Conclusion: Community pharmacies spend about half an hour per week on drug supply problems. Although 25 min per week does not seem like much, the time spent is not delineated and community pharmacists are constantly confronted with drug supply problems. PMID:28878679

  20. Discrete-time neural network for fast solving large linear L1 estimation problems and its application to image restoration.

    PubMed

    Xia, Youshen; Sun, Changyin; Zheng, Wei Xing

    2012-05-01

    There is growing interest in solving linear L1 estimation problems because of the sparsity of their solutions and their robustness against non-Gaussian noise. This paper proposes a discrete-time neural network which can solve large linear L1 estimation problems quickly. The proposed neural network has a fixed computational step length and is proved to be globally convergent to an optimal solution. The proposed neural network is then applied efficiently to image restoration. Numerical results show that the proposed neural network is not only effective in solving degenerate problems resulting from the non-unique solutions of linear L1 estimation problems, but also needs much less computational time than related algorithms for both linear L1 estimation and image restoration problems.
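
    For context, linear L1 estimation is classically approached with iteratively reweighted least squares; the sketch below is that generic baseline, not the paper's neural network, whose contribution is a fixed step length with guaranteed global convergence:

        import numpy as np

        def l1_estimate(A, b, iters=50, eps=1e-8):
            # IRLS for min ||A @ x - b||_1: reweight residuals by
            # 1/|r_i| and solve the weighted normal equations.
            x = np.linalg.lstsq(A, b, rcond=None)[0]
            for _ in range(iters):
                w = 1.0 / np.maximum(np.abs(A @ x - b), eps)
                Aw = A * w[:, None]
                x = np.linalg.solve(A.T @ Aw, A.T @ (w * b))
            return x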

  1. Discrete transparent boundary conditions for the mixed KDV-BBM equation

    NASA Astrophysics Data System (ADS)

    Besse, Christophe; Noble, Pascal; Sanchez, David

    2017-09-01

    In this paper, we consider artificial boundary conditions for the linearized mixed Korteweg-de Vries (KDV) and Benjamin-Bona-Mahony (BBM) equation, which models water waves in the small-amplitude, large-wavelength regime. Continuous (respectively discrete) artificial boundary conditions involve operators that are non-local in time, which in turn requires computing time convolutions and inverting the Laplace transform of an analytic function (respectively the Z-transform of a holomorphic function). In this paper, we propose a new, stable, and fairly general strategy to carry out this crucial step in the design of transparent boundary conditions. For large-time simulations, we also introduce a methodology based on the asymptotic expansion of the coefficients involved in exact direct transparent boundary conditions. We illustrate the accuracy of our methods for Gaussian and wave-packet initial data.

  2. An asymptotic-preserving Lagrangian algorithm for the time-dependent anisotropic heat transport equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon, Luis; del-Castillo-Negrete, Diego; Hauck, Cory D.

    2014-09-01

    We propose a Lagrangian numerical algorithm for a time-dependent, anisotropic temperature transport equation in magnetized plasmas in the large guide field regime. The approach is based on an analytical integral formal solution of the parallel (i.e., along the magnetic field) transport equation with sources, and it is able to accommodate both local and non-local parallel heat flux closures. The numerical implementation is based on an operator-split formulation with two straightforward steps: a perpendicular transport step (including sources), and a Lagrangian (field-line integral) parallel transport step. Algorithmically, the first step is amenable to the use of modern iterative methods, while the second step has a fixed cost per degree of freedom (and is therefore scalable). Accuracy-wise, the approach is free from the numerical pollution introduced by the discrete parallel transport term when the perpendicular-to-parallel transport coefficient ratio χ⊥/χ∥ becomes arbitrarily small, and is shown to capture the correct limiting solution when ε = χ⊥L∥²/(χ∥L⊥²) → 0 (with L∥ and L⊥ the parallel and perpendicular diffusion length scales, respectively). Therefore, the approach is asymptotic-preserving. We demonstrate the capabilities of the scheme with several numerical experiments with varying magnetic field complexity in two dimensions, including the case of transport across a magnetic island.

  3. The influence of age on gait parameters during the transition from a wide to a narrow pathway.

    PubMed

    Shkuratova, Nataliya; Taylor, Nicholas

    2008-06-01

    The ability to negotiate pathways of different widths is a prerequisite of daily living. However, only a few studies have investigated changes in gait parameters in response to walking on narrow pathways. The aim of this study is to examine the influence of age on gait adjustments during the transition from a wide to a narrow pathway. Two-group repeated-measures design. Gait laboratory. Twenty healthy older participants (mean [M] = 74.3 years, standard deviation [SD] = 7.2 years); 20 healthy young participants (M = 26.6 years, SD = 6.1 years). Making the transition from walking on a wide pathway (68 cm) to walking on a narrow pathway (15 cm). Step length, step time, step width, double support time and base of support. Healthy older participants were able to make the transition from a wide to a narrow pathway successfully. There was only one significant interaction, between age and base of support (p < 0.003). Older adults decreased their base of support only when negotiating the transition step, while young participants started decreasing their base of support prior to negotiating the transition step (p < 0.01). Adjustments to the transition from a wide to a narrow pathway are largely unaffected by normal ageing. Difficulties in making the transition to a narrow pathway during walking should not be attributed to normal age-related changes. (c) 2008 John Wiley & Sons, Ltd.

  4. Testing a stepped care model for binge-eating disorder: a two-step randomized controlled trial.

    PubMed

    Tasca, Giorgio A; Koszycki, Diana; Brugnera, Agostino; Chyurlia, Livia; Hammond, Nicole; Francis, Kylie; Ritchie, Kerri; Ivanova, Iryna; Proulx, Genevieve; Wilson, Brian; Beaulac, Julie; Bissada, Hany; Beasley, Erin; Mcquaid, Nancy; Grenon, Renee; Fortin-Langelier, Benjamin; Compare, Angelo; Balfour, Louise

    2018-05-24

    A stepped care approach involves patients first receiving low-intensity treatment followed by higher-intensity treatment. This two-step randomized controlled trial investigated the efficacy of a sequential stepped care approach for the psychological treatment of binge-eating disorder (BED). In the first step, all participants with BED (n = 135) received unguided self-help (USH) based on a cognitive-behavioral therapy model. In the second step, participants who remained in the trial were randomized either to 16 weeks of group psychodynamic-interpersonal psychotherapy (GPIP) (n = 39) or to a no-treatment control condition (n = 46). Outcomes were assessed for USH in step 1, and then for step 2 up to 6 months post-treatment, using multilevel regression slope discontinuity models. In the first step, USH resulted in large and statistically significant reductions in the frequency of binge eating. Statistically significant moderate to large reductions in eating-disorder cognitions were also noted. In the second step, there was no difference in the change in frequency of binge eating between GPIP and the control condition. Compared with controls, GPIP resulted in significant and large improvements in attachment avoidance and interpersonal problems. The findings indicated that the second step of a stepped care approach did not significantly reduce binge-eating symptoms beyond the effects of USH alone. The study provided some evidence that the second step may reduce factors known to maintain binge eating in the long run, such as attachment avoidance and interpersonal problems.

  5. Negative refraction and planar focusing based on parity-time symmetric metasurfaces.

    PubMed

    Fleury, Romain; Sounas, Dimitrios L; Alù, Andrea

    2014-07-11

    We introduce a new mechanism to realize negative refraction and planar focusing using a pair of parity-time symmetric metasurfaces. In contrast to existing solutions that achieve these effects with negative-index metamaterials or phase conjugating surfaces, the proposed parity-time symmetric lens enables loss-free, all-angle negative refraction and planar focusing in free space, without relying on bulk metamaterials or nonlinear effects. This concept may represent a pivotal step towards loss-free negative refraction and highly efficient planar focusing by exploiting the largely uncharted scattering properties of parity-time symmetric systems.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell E. Feder and Mahmoud Z. Youssef

    Neutronics analyses to find nuclear heating rates and personnel dose rates were conducted in support of the integration of diagnostics into the ITER upper port plugs. Simplified shielding models of the visible-infrared diagnostic and of a large-aperture diagnostic were incorporated into the ITER global CAD model. Results for these systems are representative of typical designs with maximum shielding and a small aperture (Vis-IR) and minimal shielding with a large aperture. The neutronics discrete-ordinates code ATTILA® and SEVERIAN® (the ATTILA parallel-processing version) were used. Material properties and the 500 MW D-T volume source were taken from the ITER "Brand Model" MCNP benchmark model. A biased quadrature set equivalent to Sn=32 and a scattering degree of Pn=3 were used, along with a 46-neutron and 21-gamma FENDL energy subgrouping. Total nuclear heating (neutron plus gamma heating) in the upper port plugs ranged between 380 and 350 kW for the Vis-IR and large-aperture cases. The large-aperture model exhibited lower total heating but much higher peak volumetric heating on the upper port plug structure. Personnel dose rates are calculated in a three-step process involving a neutron-only transport calculation, the generation of activation volume sources at pre-defined time steps, and gamma transport analyses for selected time steps. ANSI-ANS 6.1.1 1977 flux-to-dose conversion factors were used. Dose rates were evaluated for one full year of 500 MW D-T operation, comprising 3000 1800-second pulses. After one year the machine is shut down for maintenance, and personnel are permitted to access the diagnostic interspace after two weeks if dose rates are below 100 μSv/hr. Dose rates in the visible-IR diagnostic model after one day of shutdown were 130 μSv/hr but fell below the limit to 90 μSv/hr two weeks later. The large-aperture style shielding model exhibited higher and more persistent dose rates: after one day the dose rate was 230 μSv/hr, and it was still 120 μSv/hr four weeks later.

  7. Future aircraft networks and schedules

    NASA Astrophysics Data System (ADS)

    Shu, Yan

    2011-07-01

    Because of the importance of air transportation scheduling, the emergence of small aircraft and the vision of future fuel-efficient aircraft, this thesis has focused on the study of aircraft scheduling and network design involving multiple types of aircraft and flight services. It develops models and solution algorithms for the schedule design problem and analyzes the computational results. First, based on the current development of small aircraft and on-demand flight services, this thesis expands a business model for integrating on-demand flight services with the traditional scheduled flight services. This thesis proposes a three-step approach to the design of aircraft schedules and networks from scratch under the model. In the first step, both a frequency assignment model for scheduled flights that incorporates a passenger path choice model and a frequency assignment model for on-demand flights that incorporates a passenger mode choice model are created. In the second step, a rough fleet assignment model that determines a set of flight legs, each of which is assigned an aircraft type and a rough departure time, is constructed. In the third step, a timetable model that determines an exact departure time for each flight leg is developed. Based on the models proposed in the three steps, this thesis creates schedule design instances that involve almost all the major airports and markets in the United States. The instances of the frequency assignment model created in this thesis are large-scale non-convex mixed-integer programming problems, and this dissertation develops an overall network structure and proposes iterative algorithms for solving these instances. The instances of both the rough fleet assignment model and the timetable model created in this thesis are large-scale mixed-integer programming problems, and this dissertation develops subproblem schemes for solving these instances. Based on these solution algorithms, this dissertation also presents computational results of these large-scale instances. To validate the models and solution algorithms developed, this thesis also compares the daily flight schedules that it designs with the schedules of the existing airlines. Furthermore, it creates instances that represent different economic and fuel-price conditions and derives schedules under these different conditions. In addition, it discusses the implications of using new aircraft in future flight schedules. Finally, future research in three areas (model, computational method, and simulation for validation) is proposed.

  8. On-line LC-MS approach combining collision-induced dissociation (CID), electron-transfer dissociation (ETD), and CID of an isolated charge-reduced species for the trace-level characterization of proteins with post-translational modifications.

    PubMed

    Wu, Shiaw-Lin; Hühmer, Andreas F R; Hao, Zhiqi; Karger, Barry L

    2007-11-01

    We have expanded our recent on-line LC-MS platform for large peptide analysis to combine collision-induced dissociation (CID), electron-transfer dissociation (ETD), and CID of an isolated charge-reduced (CRCID) species derived from ETD to determine sites of phosphorylation and glycosylation modifications, as well as the sequence of large peptide fragments (i.e., 2000-10,000 Da) from complex proteins, such as beta-casein, epidermal growth factor receptor (EGFR), and tissue plasminogen activator (t-PA) at the low-femtomole level. The incorporation of an additional CID activation step for a charge-reduced species, isolated from ETD fragment ions, improved ETD fragmentation when precursor ions with high m/z (approximately >1000) were automatically selected for fragmentation. Specifically, the identification of the exact phosphorylation sites was strengthened by the extensive coverage of the peptide sequence with a near-continuous product ion series. The identification of N-linked glycosylation sites in EGFR and an O-linked glycosylation site in t-PA was also improved through the enhanced identification of the peptide backbone sequence of the glycosylated precursors. The new strategy is a good starting survey scan to characterize enzymatic peptide mixtures over a broad range of masses using LC-MS with data-dependent acquisition, as the three activation steps can provide complementary information to each other. In general, large peptides can be extensively characterized by the ETD and CRCID steps, including sites of modification from the generated, near-continuous product ion series, supplemented by the CID-MS2 step. At the same time, small peptides (e.g.,

  9. The effects of age and step length on joint kinematics and kinetics of large out-and-back steps.

    PubMed

    Schulz, Brian W; Ashton-Miller, James A; Alexander, Neil B

    2008-06-01

    Maximum step length (MSL) is a clinical test that has been shown to correlate with age, various measures of fall risk, and knee and hip joint extension speed, strength, and power capacities, but little is known about the kinematics and kinetics of the large out-and-back step utilized. Body motions and ground reaction forces were recorded for 11 unimpaired younger and 10 older women while attaining maximum step length. Joint kinematics and kinetics were calculated using inverse dynamics. The effects of age group and step length on the biomechanics of these large out-and-back steps were determined. Maximum step length was 40% greater in the younger than in the older women (P<0.0001). Peak knee and hip, but not ankle, angle, velocity, moment, and power were generally greater for younger women and longer steps. After controlling for age group, step length generally explained significant additional variance in hip and torso kinematics and kinetics (incremental R2=0.09-0.37). The young reached their peak knee extension moment immediately after landing of the step out, while the old reached their peak knee extension moment just before the return step liftoff (P=0.03). Maximum step length is strongly associated with hip kinematics and kinetics. Delays in peak knee extension moment that appear to be unrelated to step length may indicate a reduced ability of older women to rapidly apply force to the ground with the stepping leg and thus arrest the momentum of a fall.

  10. The effects of age and step length on joint kinematics and kinetics of large out-and-back steps

    PubMed Central

    Schulz, Brian W.; Ashton-Miller, James A.; Alexander, Neil B.

    2008-01-01

    Background Maximum Step Length is a clinical test that has been shown to correlate with age, various measures of fall risk, and knee and hip joint extension speed, strength, and power capacities, but little is known about the kinematics and kinetics of the large out-and-back step utilized. Methods Body motions and ground reaction forces were recorded for 11 unimpaired younger and 10 older women while attaining Maximum Step Length. Joint kinematics and kinetics were calculated using inverse dynamics. The effects of age group and step length on the biomechanics of these large out-and-back steps were determined. Findings Maximum Step Length was 40% greater in the younger than in the older women (p<0.0001). Peak knee and hip, but not ankle, angle, velocity, moment, and power were generally greater for younger women and longer steps. After controlling for age group, step length generally explained significant additional variance in hip and torso kinematics and kinetics (incremental R2=0.09–0.37). The young reached their peak knee extension moment immediately after landing of the step out, while the old reached their peak knee extension moment just before the return step lift off (p=0.03). Interpretation Maximum Step Length is strongly associated with hip kinematics and kinetics. Delays in peak knee extension moment that appear to be unrelated to step length may indicate a reduced ability of older women to rapidly apply force to the ground with the stepping leg and thus arrest the momentum of a fall. PMID:18308435

  11. Single-particle stochastic heat engine.

    PubMed

    Rana, Shubhashis; Pal, P S; Saha, Arnab; Jayannavar, A M

    2014-10-01

    We have performed an extensive analysis of a single-particle stochastic heat engine constructed by manipulating a Brownian particle in a time-dependent harmonic potential. The cycle consists of two isothermal steps at different temperatures and two adiabatic steps similar to that of a Carnot engine. The engine shows qualitative differences in inertial and overdamped regimes. All the thermodynamic quantities, including efficiency, exhibit strong fluctuations in a time periodic steady state. The fluctuations of stochastic efficiency dominate over the mean values even in the quasistatic regime. Interestingly, our system acts as an engine provided the temperature difference between the two reservoirs is greater than a finite critical value, which in turn depends on the cycle time and other system parameters. This is supported by our analytical results carried out in the quasistatic regime. Our system works more reliably as an engine for large cycle times. By studying various model systems, we observe that the operational characteristics are model dependent. Our results clearly rule out any universal relation between efficiency at maximum power and temperature of the baths. We have also verified fluctuation relations for heat engines in time periodic steady state.

  12. Automated segmentation of linear time-frequency representations of marine-mammal sounds.

    PubMed

    Dadouchi, Florian; Gervaise, Cedric; Ioana, Cornel; Huillery, Julien; Mars, Jérôme I

    2013-09-01

    Many marine mammals produce highly nonlinear frequency modulations. Determining the time-frequency support of these sounds offers various applications, which include recognition, localization, and density estimation. This study introduces a low parameterized automated spectrogram segmentation method that is based on a theoretical probabilistic framework. In the first step, the background noise in the spectrogram is fitted with a Chi-squared distribution and thresholded using a Neyman-Pearson approach. In the second step, the number of false detections in time-frequency regions is modeled as a binomial distribution, and then through a Neyman-Pearson strategy, the time-frequency bins are gathered into regions of interest. The proposed method is validated on real data of large sequences of whistles from common dolphins, collected in the Bay of Biscay (France). The proposed method is also compared with two alternative approaches: the first is smoothing and thresholding of the spectrogram; the second is thresholding of the spectrogram followed by the use of morphological operators to gather the time-frequency bins and to remove false positives. This method is shown to increase the probability of detection for the same probability of false alarms.
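
    As an illustration of the bin-level first step, the sketch below applies a Neyman-Pearson threshold to a power spectrogram under a scaled chi-squared noise model; the function name, the robust scale estimate, and the false-alarm level are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def detect_bins(spec, pfa=1e-3, df=2):
        """Step 1 of the segmentation: Neyman-Pearson threshold on a power
        spectrogram whose background noise is modeled as scaled chi-squared;
        the scale is estimated robustly from the median bin power."""
        scale = np.median(spec) / chi2.median(df)
        thresh = scale * chi2.ppf(1.0 - pfa, df)   # P(bin > thresh | noise) = pfa
        return spec > thresh                        # boolean detection mask
    ```

    The second, region-level step would then test counts of detected bins in time-frequency neighborhoods against a binomial null; that part is omitted here.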

  13. Re-Organizing Earth Observation Data Storage to Support Temporal Analysis of Big Data

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher

    2017-01-01

    The Earth Observing System Data and Information System archives many datasets that are critical to understanding long-term variations in Earth science properties. Thus, some of these are large, multi-decadal datasets. Yet the challenge in long time series analysis comes less from the sheer volume than the data organization, which is typically one (or a small number of) time steps per file. The overhead of opening and inventorying complex, API-driven data formats such as Hierarchical Data Format introduces a small latency at each time step, which nonetheless adds up for datasets with O(10^6) single-timestep files. Several approaches to reorganizing the data can mitigate this overhead by an order of magnitude: pre-aggregating data along the time axis (time-chunking); storing the data in a highly distributed file system; or storing data in distributed columnar databases. Storing a second copy of the data incurs extra costs, so some selection criteria must be employed, which would be driven by expected or actual usage by the end user community, balanced against the extra cost.

  14. Re-organizing Earth Observation Data Storage to Support Temporal Analysis of Big Data

    NASA Astrophysics Data System (ADS)

    Lynnes, C.

    2017-12-01

    The Earth Observing System Data and Information System archives many datasets that are critical to understanding long-term variations in Earth science properties. Thus, some of these are large, multi-decadal datasets. Yet the challenge in long time series analysis comes less from the sheer volume than the data organization, which is typically one (or a small number of) time steps per file. The overhead of opening and inventorying complex, API-driven data formats such as Hierarchical Data Format introduces a small latency at each time step, which nonetheless adds up for datasets with O(10^6) single-timestep files. Several approaches to reorganizing the data can mitigate this overhead by an order of magnitude: pre-aggregating data along the time axis (time-chunking); storing the data in a highly distributed file system; or storing data in distributed columnar databases. Storing a second copy of the data incurs extra costs, so some selection criteria must be employed, which would be driven by expected or actual usage by the end user community, balanced against the extra cost.
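
    A minimal sketch of the time-chunking idea, assuming the granules are netCDF files sharing a `time` coordinate and 3-D (time, y, x) variables; file names and chunk sizes are illustrative assumptions.

    ```python
    import xarray as xr

    # Open many single-time-step granules lazily and concatenate along time.
    ds = xr.open_mfdataset("granules/*.nc", combine="by_coords")

    # Write one pre-aggregated copy so that a long time-series read at a
    # single grid point touches a few chunks instead of ~10^6 files.
    encoding = {v: {"chunksizes": (1024, 32, 32)}   # (time, y, x) per variable
                for v in ds.data_vars}
    ds.to_netcdf("time_chunked.nc", encoding=encoding)
    ```

    The trade-off noted above still applies: this writes a second copy of the data, so the chunk shape should be driven by expected access patterns.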

  15. A Large number of fast cosmological simulations

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Kazin, E.; Blake, C.

    2014-01-01

    Mock galaxy catalogs are essential tools for analyzing large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We performed 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density-field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement set by the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes with 216 computing cores. We completed the 3600 simulations in a reasonable computation time of 200k core-hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.

  16. LYSO-based precision timing detectors with SiPM readout

    NASA Astrophysics Data System (ADS)

    Bornheim, A.; Hassanshahi, M. H.; Griffioen, M.; Mao, J.; Mangu, A.; Peña, C.; Spiropulu, M.; Xie, S.; Zhang, Z.

    2018-07-01

    Particle detectors based on scintillation light are particularly well suited for precision timing applications with resolutions of a few tens of picoseconds. The large primary signal and the initial rise time of the scintillation light result in very favorable signal-to-noise conditions with fast signals. In this paper we describe timing studies using a LYSO-based sampling calorimeter with wavelength-shifting capillary light extraction and silicon photomultipliers as photosensors. We study the contributions of various steps of the signal generation to the total time resolution, and demonstrate its feasibility as a radiation-hard technology for calorimeters at high intensity hadron colliders.

  17. Numerical simulation of pseudoelastic shape memory alloys using the large time increment method

    NASA Astrophysics Data System (ADS)

    Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad

    2017-04-01

    The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. The 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation with the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted with those obtained using step-by-step incremental integration.

  18. Model predictive control design for polytopic uncertain systems by synthesising multi-step prediction scenarios

    NASA Astrophysics Data System (ADS)

    Lu, Jianbo; Xi, Yugeng; Li, Dewei; Xu, Yuli; Gan, Zhongxue

    2018-01-01

    A common objective of model predictive control (MPC) design is a large initial feasible region, low online computational burden, and satisfactory control performance of the resulting algorithm. It is well known that interpolation-based MPC can achieve a favourable trade-off among these different aspects. However, the existing results are usually based on fixed prediction scenarios, which inevitably limits the performance of the obtained algorithms. By replacing the fixed prediction scenarios with time-varying multi-step prediction scenarios, this paper provides a new insight into improving existing MPC designs. The adopted control law is a combination of predetermined multi-step feedback control laws, based on which two MPC algorithms with guaranteed recursive feasibility and asymptotic stability are presented. The efficacy of the proposed algorithms is illustrated by a numerical example.

  19. Targeting dopa-sensitive and dopa-resistant gait dysfunction in Parkinson's disease: selective responses to internal and external cues.

    PubMed

    Rochester, Lynn; Baker, Katherine; Nieuwboer, Alice; Burn, David

    2011-02-15

    Independence of certain gait characteristics from dopamine replacement therapies highlights its complex pathophysiology in Parkinson's disease (PD). We explored the effect of two different cue strategies on gait characteristics in relation to their response to dopaminergic medications. Fifty people with PD (age 69.22 ± 6.6 years) were studied. Participants walked with and without cues presented in a randomized order. Cue strategies were: (1) internal cue (attention to increase step length) and (2) external cue (auditory cue with instruction to take large step to the beat). Testing was carried out twice at home (on and off medication). Gait was measured using a Stride Analyzer (B&L Engineering). Gait outcomes were walking speed, stride length, step frequency, and coefficient of variation (CV) of stride time and double limb support duration (DLS). Walking speed, stride length, and stride time CV improved on dopaminergic medications, whereas step frequency and DLS CV did not. Internal and external cues increased stride length and walking speed (on and off dopaminergic medications). Only the external cue significantly improved stride time CV and DLS CV, whereas the internal cue had no effect (on and off dopaminergic medications). Internal and external cues selectively modify gait characteristics in relation to the type of gait disturbance and its dopa-responsiveness. Although internal (attention) and external cues target dopaminergic gait dysfunction (stride length), only external cues target stride to stride fluctuations in gait. Despite an overlap with dopaminergic pathways, external cues may effectively address nondopaminergic gait dysfunction and potentially increase mobility and reduce gait instability and falls. Copyright © 2010 Movement Disorder Society.

  20. One-dimensional model of interacting-step fluctuations on vicinal surfaces: Analytical formulas and kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Patrone, Paul N.; Einstein, T. L.; Margetis, Dionisios

    2010-12-01

    We study analytically and numerically a one-dimensional model of interacting line defects (steps) fluctuating on a vicinal crystal. Our goal is to formulate and validate analytical techniques for approximately solving systems of coupled nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. In our analytical approach, the starting point is the Burton-Cabrera-Frank (BCF) model by which step motion is driven by diffusion of adsorbed atoms on terraces and atom attachment-detachment at steps. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. By including Gaussian white noise to the equations of motion for terrace widths, we formulate large systems of SDEs under different choices of diffusion coefficients for the noise. We simplify this description via (i) perturbation theory and linearization of the step interactions and, alternatively, (ii) a mean-field (MF) approximation whereby widths of adjacent terraces are replaced by a self-consistent field but nonlinearities in step interactions are retained. We derive simplified formulas for the time-dependent terrace-width distribution (TWD) and its steady-state limit. Our MF analytical predictions for the TWD compare favorably with kinetic Monte Carlo simulations under the addition of a suitably conservative white noise in the BCF equations.
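
    For readers unfamiliar with integrating such systems numerically, the sketch below shows a generic Euler-Maruyama scheme for coupled SDEs; the drift and noise-amplitude callables are placeholders, not the BCF step interactions themselves.

    ```python
    import numpy as np

    def euler_maruyama(drift, noise_amp, w0, dt, n_steps, rng):
        """Generic Euler-Maruyama integrator for coupled SDEs
        dw = drift(w) dt + noise_amp(w) dW; drift and noise_amp are
        placeholder callables standing in for the model's terms."""
        w = np.asarray(w0, float).copy()
        for _ in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dt), size=w.shape)
            w = w + drift(w) * dt + noise_amp(w) * dW
        return w
    ```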

  1. Nudging and predictability in regional climate modelling: investigation in a nested quasi-geostrophic model

    NASA Astrophysics Data System (ADS)

    Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas

    2010-05-01

    In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane driven at its boundaries by the "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is close to the predictability time. For spectral nudging, the optimum nudging time should in principle tend to zero, since the best large-scale dynamics is supposed to be given by the driving fields; however, because the driving large-scale fields are generally available at much lower frequency than the model time step (e.g., 6-hourly analyses) with basic interpolation between them, the optimum nudging time differs from zero, while remaining smaller than the predictability time.
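
    The relaxation form underlying both nudging variants is compact; the sketch below shows one explicit step of Newtonian (indiscriminate) nudging, with all names illustrative. Spectral nudging would apply the same relaxation term only to the large-scale (low-wavenumber) part of the difference.

    ```python
    def nudged_step(u, u_drive, dt, tau, f):
        """One explicit step of du/dt = f(u) - (u - u_drive)/tau: Newtonian
        (indiscriminate) nudging relaxes the model state toward the driving
        fields on the nudging time scale tau (all names illustrative)."""
        return u + dt * (f(u) - (u - u_drive) / tau)
    ```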

  2. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints in the Spectral-Element Solver Nek5000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schanen, Michel; Marin, Oana; Zhang, Hong

    Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
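
    A highly simplified, serial sketch of the two-level idea follows: a sparse "disk" tier of checkpoints plus recomputation to refill a memory-tier segment during the reverse sweep. This illustrates the storage/recomputation balance under stated assumptions; it is not the asynchronous Nek5000 implementation, and all names are hypothetical.

    ```python
    # Assumptions: step(u) advances the state one time step; adjoint_step(u, ubar)
    # back-propagates the adjoint through the step that started from state u.
    # The dict stands in for the disk tier.

    def forward(u0, n_steps, K, step):
        disk = {0: u0}                      # sparse, "slow" checkpoint tier
        u = u0
        for i in range(1, n_steps + 1):
            u = step(u)
            if i % K == 0:
                disk[i] = u
        return u, disk

    def adjoint(ubar, n_steps, K, step, adjoint_step, disk):
        i = n_steps
        while i > 0:
            base = (i - 1) // K * K         # nearest checkpoint before step i
            seg, u = [disk[base]], disk[base]
            for _ in range(base + 1, i):    # refill memory tier by recomputing
                u = step(u)
                seg.append(u)
            for j in range(i, base, -1):    # reverse sweep over the segment
                ubar = adjoint_step(seg[j - 1 - base], ubar)
            i = base
        return ubar
    ```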

  3. Differential exocytosis from human endothelial cells evoked by high intracellular Ca2+ concentration

    PubMed Central

    Zupančič, G; Ogden, D; Magnus, C J; Wheeler-Jones, C; Carter, T D

    2002-01-01

    Endothelial cells secrete a range of procoagulant, anticoagulant and inflammatory proteins by exocytosis to regulate blood clotting and local immune responses. The mechanisms regulating vesicular exocytosis were studied in human umbilical vein endothelial cells (HUVEC) with high-resolution membrane capacitance (Cm) measurements. The total whole-cell Cm and the amplitudes and times of discrete femtofarad (fF)-sized Cm steps due to exocytosis and endocytosis were monitored simultaneously. Intracellular calcium concentration [Ca2+]i was elevated by intracellular photolysis of calcium-DM-nitrophen to evoke secretion and monitored with the low-affinity Ca2+ indicator furaptra. Sustained elevation of [Ca2+]i to > 20 μM evoked large, slow increases in Cm of up to 5 pF in 1-2 min. Exocytotic and endocytotic steps of amplitude 0.5-110 fF were resolved, and accounted on average for ≈33 % of the total Cm change. A prominent component of Cm steps of 2.5-9.0 fF was seen and could be attributed to exocytosis of von-Willebrand-factor-containing Weibel-Palade bodies (WPb), based on the near-identical match between the distribution of capacitance step amplitudes and estimates of WPb capacitance calculated from morphometry, and on the absence of 2.5-9.0 fF Cm steps in cells deficient in WPb. WPb secretion was delayed on average by 23 s after [Ca2+]i elevation, whereas total Cm increased immediately due to the secretion of small, non-WPb granules. The results show that following a large increase of [Ca2+]i, corresponding to strong stimulation, small vesicular components are immediately available for secretion, whereas the large WPb undergo exocytosis only after a delay. The presence of events of magnitude 9-110 fF also provides evidence of compound secretion of WPb due to prior fusion of individual granules. PMID:12411520

  4. Regional hydrologic response of loblolly pine to air temperature and precipitation changes

    Treesearch

    Steven G. McNulty; James M. Vose; Wayne T. Swank

    1997-01-01

    Large deviations in average annual air temperatures and total annual precipitation were observed across the Southern United States during the last 50 years, and these fluctuations could become even larger during the next century. The authors used PnET-IIS, a monthly time-step forest process model that uses soil, vegetation, and climate inputs to assess the influence of...

  5. What's Your Game Plan?: Developing Library Games Can Help Students Master Information Skills

    ERIC Educational Resources Information Center

    Siderius, Jennifer A.

    2011-01-01

    Stepping into a school library today reveals the dramatic changes in educational games since the author's elementary school days. Many current school libraries now boast computer- and video-based games, as well as geocaching, big games, or large-scale scavenger hunts that pit teams against each other in timed races to find clues about a…

  6. The Use of Novel Camtasia Videos to Improve Performance of At-Risk Students in Undergraduate Physiology Courses

    ERIC Educational Resources Information Center

    Miller, Cynthia J.

    2014-01-01

    Students in undergraduate physiology courses often have difficulty understanding complex, multi-step processes, and these concepts consume a large portion of class time. For this pilot study, it was hypothesized that online multimedia resources may improve student performance in a high-risk population and reduce the in-class workload. A narrated…

  7. An efficient, explicit finite-rate algorithm to compute flows in chemical nonequilibrium

    NASA Technical Reports Server (NTRS)

    Palmer, Grant

    1989-01-01

    An explicit finite-rate code was developed to compute hypersonic viscous chemically reacting flows about three-dimensional bodies. Equations describing the finite-rate chemical reactions were fully coupled to the gas dynamic equations using a new coupling technique. The new technique maintains stability in the explicit finite-rate formulation while permitting relatively large global time steps.

  8. An algorithm for deciding the number of clusters and validating using simulated data with application to exploring crop population structure

    USDA-ARS?s Scientific Manuscript database

    A first step in exploring population structure in crop plants and other organisms is to define the number of subpopulations that exist for a given data set. The genetic marker data sets being generated have become increasingly large over time and commonly are the high-dimension, low sample size (HDL...

  9. Hyperbolic conservation laws and numerical methods

    NASA Technical Reports Server (NTRS)

    Leveque, Randall J.

    1990-01-01

    The mathematical structure of hyperbolic systems and the scalar equation case of conservation laws are discussed. Linear and nonlinear systems and the Riemann problem for the Euler equations are also studied. The numerical methods for conservation laws are presented in a nonstandard manner which leads to large-time-step generalizations and computations on irregular grids. The solution of conservation laws with stiff source terms is examined.

  10. Rigorous Proof of the Boltzmann-Gibbs Distribution of Money on Connected Graphs

    NASA Astrophysics Data System (ADS)

    Lanchier, Nicolas

    2017-04-01

    Models in econophysics, i.e., the emerging field of statistical physics that applies the main concepts of traditional physics to economics, typically consist of large systems of economic agents who are characterized by the amount of money they have. In the simplest model, at each time step, one agent gives one dollar to another agent, with both agents being chosen independently and uniformly at random from the system. Numerical simulations of this model suggest that, at least when the number of agents and the average amount of money per agent are large, the distribution of money converges to an exponential distribution reminiscent of the Boltzmann-Gibbs distribution of energy in physics. The main objective of this paper is to give a rigorous proof of this result and show that the convergence to the exponential distribution holds more generally when the economic agents are located on the vertices of a connected graph and interact locally with their neighbors rather than globally with all the other agents. We also study a closely related model where, at each time step, agents buy with a probability proportional to the amount of money they have, and prove that in this case the limiting distribution of money is Poissonian.
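
    A minimal simulation of the simplest model is easy to write; the sketch below adopts one common convention (an agent with no money cannot give) and illustrative parameters.

    ```python
    import random
    from collections import Counter

    def simulate(n_agents=500, avg_money=10, steps=200_000, seed=1):
        """One-dollar exchange model: at each time step a randomly chosen
        giver hands one dollar to a randomly chosen taker. Convention
        assumed here: an agent with no money cannot give."""
        rng = random.Random(seed)
        money = [avg_money] * n_agents
        for _ in range(steps):
            i, j = rng.randrange(n_agents), rng.randrange(n_agents)
            if money[i] > 0:
                money[i] -= 1
                money[j] += 1
        return Counter(money)

    # The histogram of money per agent should approach an exponential
    # (Boltzmann-Gibbs) shape as the system size and run length grow.
    print(sorted(simulate().items())[:10])
    ```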

  11. Practice Makes Perfect?: Effective Practice Instruction in Large Ensembles

    ERIC Educational Resources Information Center

    Prichard, Stephanie

    2012-01-01

    Helping young musicians learn how to practice effectively is a challenge faced by all music educators. This article presents a system of individual music practice instruction that can be seamlessly integrated within large-ensemble rehearsals. Using a step-by-step approach, large-ensemble conductors can teach students to identify and isolate…

  12. Possibility of the real-time dynamic strain field monitoring deduced from GNSS data: case study of the 2016 Kumamoto earthquake sequence

    NASA Astrophysics Data System (ADS)

    Ohta, Y.; Ohzono, M.; Takahashi, H.; Kawamoto, S.; Hino, R.

    2017-12-01

    A large and destructive earthquake (Mjma 7.3) occurred on April 15, 2016 in the Kumamoto region, southwestern Japan. This earthquake was followed approximately 32 s later by an M 6 earthquake in the central Oita region, whose hypocenter was located 80 km northeast of the hypocenter of the Kumamoto mainshock. This triggered earthquake also had many aftershocks in and around the Oita region. It is important to understand how such chains of triggered earthquakes occur. We used the 1 Hz dual-frequency phase and range data from GEONET in Kyushu island. The data were processed using GIPSY-OASIS (version 6.4). We adopted a kinematic PPP strategy for the coordinate estimation. The reference GPS satellite orbit and 5 s clock information were obtained using the CODE product. We also applied a simple sidereal filter technique to the estimated time series. Based on the obtained 1 Hz GNSS time series, we estimated the areal strain and principal strain fields using the method of Shen et al. (1996). For the assessment of the dynamic strain, we first calculated the averaged absolute value of the areal strain field between 60 and 85 s after the origin time of the Kumamoto mainshock, which was used as the "reference" static strain field. Second, we estimated the absolute value of the areal strain at each time step. Finally, we calculated the strain ratio at each time step relative to the "reference". This procedure extracts the spatial and temporal characteristics of the dynamic strain at each time step. The extracted strain ratio clearly shows these characteristics: focusing on the region of the triggered Oita earthquake, the timing of the maximum dynamic strain ratio at the epicenter corresponds to the origin time of the triggered event. This strongly suggests that the large dynamic strain may have triggered the Oita event. The epicenter of the triggered earthquake is located within a geothermal region, where crustal materials are more sensitive to stress perturbations and earthquakes are more easily triggered than in other regions. Our results also suggest that real-time strain field monitoring may provide useful information for assessing the possibility of remotely triggered earthquakes in the future.
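
    The strain-ratio procedure reduces to a few lines; the sketch below assumes the areal strain at one site is given as a time series, with the 60-85 s reference window from above (names illustrative).

    ```python
    import numpy as np

    def strain_ratio(areal_strain, t, ref_window=(60.0, 85.0)):
        """Dynamic strain ratio at one site: absolute areal strain at each
        time step divided by its mean over the post-mainshock reference
        window (60-85 s here, as in the procedure above)."""
        mask = (t >= ref_window[0]) & (t <= ref_window[1])
        ref = np.abs(areal_strain[mask]).mean()
        return np.abs(areal_strain) / ref
    ```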

  13. Learning Traffic as Images: A Deep Convolutional Neural Network for Large-Scale Transportation Network Speed Prediction.

    PubMed

    Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng

    2017-04-10

    This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.

  14. Learning Traffic as Images: A Deep Convolutional Neural Network for Large-Scale Transportation Network Speed Prediction

    PubMed Central

    Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng

    2017-01-01

    This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks. PMID:28394270
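
    A minimal sketch of the "traffic as images" idea: a small convolutional network mapping a one-channel time-space matrix of past speeds to next-interval speeds per segment. This is an illustrative architecture (layer sizes, window length and segment count are assumptions), not the authors' network.

    ```python
    import torch
    import torch.nn as nn

    class SpeedCNN(nn.Module):
        """CNN over a time-space 'image' of past speeds:
        input (batch, 1, T time steps, S segments) -> speed per segment."""
        def __init__(self, n_segments, t_window):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.head = nn.Linear(32 * t_window * n_segments, n_segments)

        def forward(self, x):
            z = self.features(x).flatten(1)   # abstract traffic features
            return self.head(z)               # network-wide speed prediction

    model = SpeedCNN(n_segments=64, t_window=12)
    x = torch.randn(8, 1, 12, 64)             # fake batch of time-space matrices
    print(model(x).shape)                     # torch.Size([8, 64])
    ```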

  15. Shear-rate dependence of the viscosity of the Lennard-Jones liquid at the triple point

    NASA Astrophysics Data System (ADS)

    Ferrario, M.; Ciccotti, G.; Holian, B. L.; Ryckaert, J. P.

    1991-11-01

    High-precision molecular-dynamics (MD) data are reported for the shear viscosity η of the Lennard-Jones liquid at its triple point, as a function of the shear rate ɛ˙ for a large system (N=2048). The Green-Kubo (GK) value η(ɛ˙=0) = 3.24 ± 0.04 is estimated from a run of 3.6×10^6 steps (40 ns). We find no numerical evidence of a t^(-3/2) long-time tail for the GK integrand (stress-stress time-correlation function). From our nonequilibrium MD results, obtained both at small and large values of ɛ˙, a consistent picture emerges that supports an analytical (quadratic at low shear rate) dependence of the viscosity on ɛ˙.

  16. Method of Simulating Flow-Through Area of a Pressure Regulator

    NASA Technical Reports Server (NTRS)

    Hass, Neal E. (Inventor); Schallhorn, Paul A. (Inventor)

    2011-01-01

    The flow-through area of a pressure regulator positioned in a branch of a simulated fluid flow network is generated. A target pressure is defined downstream of the pressure regulator. A projected flow-through area is generated as a non-linear function of (i) target pressure, (ii) flow-through area of the pressure regulator for a current time step and a previous time step, and (iii) pressure at the downstream location for the current time step and previous time step. A simulated flow-through area for the next time step is generated as a sum of (i) flow-through area for the current time step, and (ii) a difference between the projected flow-through area and the flow-through area for the current time step multiplied by a user-defined rate control parameter. These steps are repeated for a sequence of time steps until the pressure at the downstream location is approximately equal to the target pressure.
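
    The final update rule reduces to a one-line relaxation; a sketch with illustrative names follows (the non-linear projection producing the projected area is left abstract, as in the description above).

    ```python
    def update_area(a_curr, a_proj, rate):
        """One time step of the relaxation described above: move the
        simulated flow-through area toward the projected area by the
        user-defined rate control parameter (names illustrative)."""
        return a_curr + rate * (a_proj - a_curr)

    # a_proj would come from the (unspecified) non-linear projection based on
    # current/previous areas and downstream pressures; the update repeats
    # until the downstream pressure approximately equals the target.
    ```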

  17. Visualizing multiattribute Web transactions using a freeze technique

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Cotting, Daniel; Dayal, Umeshwar; Machiraju, Vijay; Garg, Pankaj

    2003-05-01

    Web transactions are multidimensional and have a number of attributes: client, URL, response times, and numbers of messages. One of the key questions is how to simultaneously lay out in a graph the multiple relationships, such as the relationships between the web client response times and URLs in a web access application. In this paper, we describe a freeze technique to enhance a physics-based visualization system for web transactions. The idea is to freeze one set of objects before laying out the next set of objects during the construction of the graph. As a result, we substantially reduce the force computation time. This technique consists of three steps: automated classification, a freeze operation, and a graph layout. These three steps are iterated until the final graph is generated. This iterated-freeze technique has been prototyped in several e-service applications at Hewlett Packard Laboratories. It has been used to visually analyze large volumes of service and sales transactions at online web sites.

  18. The tropopause cold trap in the Australian Monsoon during STEP/AMEX 1987

    NASA Technical Reports Server (NTRS)

    Selkirk, Henry B.

    1993-01-01

    The relationship between deep convection and tropopause cold trap conditions is examined for the tropical northern Australia region during the 1986-87 summer monsoon season, emphasizing the Australia Monsoon Experiment (AMEX) period when the NASA Stratosphere-Troposphere Exchange Project (STEP) was being conducted. The factors related to the spatial and temporal variability of the cold point potential temperature (CPPT) are investigated. A framework is developed for describing the relationships among surface average equivalent potential temperature in the surface layer (AEPTSL) the height of deep convection, and stratosphere-troposphere exchange. The time-mean pattern of convection, large-scale circulation, and surface AEPTSL in the Australian monsoon and the evolution of the convective environment during the monsoon period and the extended transition season which preceded it are described. The time-mean fields of cold point level variables are examined and the statistical relationships between mean CPPT, surface AEPTSL, and deep convection are described. Day-to-day variations of CPPT are examined in terms of these time mean relationships.

  19. Reactive molecular dynamics simulation of solid nitromethane impact on (010) surfaces induced and nonimpact thermal decomposition.

    PubMed

    Guo, Feng; Cheng, Xin-lu; Zhang, Hong

    2012-04-12

    Whether the first step in the decomposition of nitromethane is proton dissociation or C-N bond scission is a controversial issue. We applied reactive force field (ReaxFF) molecular dynamics to probe the initial decomposition mechanisms of nitromethane. By comparing impact on (010) surfaces with nonimpact (heating only) simulations of nitromethane, we found that proton dissociation is the first step of the pyrolysis of nitromethane, and that the C-N bond decomposes on the same time scale in impact simulations, whereas in the nonimpact simulation C-N bond dissociation takes place at a later time. At the end of these simulations, a large number of clusters are formed. By analyzing the trajectories, we discuss the role of the hydrogen bond in the initial stage of nitromethane decomposition, the intermediates observed early in the simulations, and the formation of clusters consisting of C-N-C-N chain/ring structures.

  20. Robust numerical solution of the reservoir routing equation

    NASA Astrophysics Data System (ADS)

    Fiorentini, Marcello; Orlandini, Stefano

    2013-09-01

    The robustness of numerical methods for the solution of the reservoir routing equation is evaluated. The methods considered in this study are: (1) the Laurenson-Pilgrim method, (2) the fourth-order Runge-Kutta method, and (3) the fixed order Cash-Karp method. Method (1) is unable to handle nonmonotonic outflow rating curves. Method (2) is found to fail under critical conditions occurring, especially at the end of inflow recession limbs, when large time steps (greater than 12 min in this application) are used. Method (3) is computationally intensive and does not overcome the limitations of method (2). The limitations of method (2) can be efficiently overcome by reducing the time step in the critical phases of the simulation so as to ensure that the water level remains inside the domains of the storage function and the outflow rating curve. The incorporation of a simple backstepping procedure implementing this control into method (2) yields a robust and accurate reservoir routing method that can be safely used in distributed time-continuous catchment models.
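
    A compact sketch of method (2) with the backstepping control described above: the RK4 step is retried with a halved time step whenever the updated storage leaves the valid domain. Function and variable names are illustrative, and the routing equation is written as dS/dt = I(t) - Q(S).

    ```python
    def rk4_step(f, t, s, dt):
        k1 = f(t, s)
        k2 = f(t + dt / 2, s + dt * k1 / 2)
        k3 = f(t + dt / 2, s + dt * k2 / 2)
        k4 = f(t + dt, s + dt * k3)
        return s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

    def route(inflow, outflow, s0, t_end, dt, s_lo, s_hi):
        """RK4 routing of dS/dt = I(t) - Q(S) with backstepping: retry with
        a halved step whenever the update leaves the valid storage domain."""
        f = lambda t, s: inflow(t) - outflow(s)
        t, s, out = 0.0, s0, [(0.0, s0)]
        while t < t_end:
            h = min(dt, t_end - t)
            s_new = rk4_step(f, t, s, h)
            while not (s_lo <= s_new <= s_hi) and h > 1e-6:
                h /= 2.0                      # backstep: halve and retry
                s_new = rk4_step(f, t, s, h)
            t, s = t + h, s_new
            out.append((t, s))
        return out
    ```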

  1. Structured Overlapping Grid Simulations of Contra-rotating Open Rotor Noise

    NASA Technical Reports Server (NTRS)

    Housman, Jeffrey A.; Kiris, Cetin C.

    2015-01-01

    Computational simulations using structured overlapping grids with the Launch Ascent and Vehicle Aerodynamics (LAVA) solver framework are presented for predicting tonal noise generated by a contra-rotating open rotor (CROR) propulsion system. A coupled Computational Fluid Dynamics (CFD) and Computational AeroAcoustics (CAA) numerical approach is applied. Three-dimensional time-accurate hybrid Reynolds Averaged Navier-Stokes/Large Eddy Simulation (RANS/LES) CFD simulations are performed in the inertial frame, including dynamic moving grids, using a higher-order accurate finite difference discretization on structured overlapping grids. A higher-order accurate free-stream preserving metric discretization with discrete enforcement of the Geometric Conservation Law (GCL) on moving curvilinear grids is used to create an accurate, efficient, and stable numerical scheme. The aeroacoustic analysis is based on a permeable surface Ffowcs Williams-Hawkings (FW-H) approach, evaluated in the frequency domain. A time-step sensitivity study was performed using only the forward row of blades to determine an adequate time-step. The numerical approach is validated against existing wind tunnel measurements.

  2. Lanczos eigensolution method for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1991-01-01

    The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as: the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplies. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time for the panel problem executed on a CONVEX computer of 181.6 seconds was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem with 17,000 degrees of freedom was on the Cray Y-MP using an average of 3.63 processors.
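
    For reference, the core iteration is short; the sketch below is a textbook Lanczos tridiagonalization (without the reorthogonalization a production structural-analysis code would need), not the optimized implementation described above.

    ```python
    import numpy as np

    def lanczos(A, v0, m):
        """Textbook Lanczos iteration: builds alpha (diagonal) and beta
        (off-diagonal) of a tridiagonal matrix T approximating V^T A V."""
        n = len(v0)
        V = np.zeros((m + 1, n))
        alpha, beta = np.zeros(m), np.zeros(m)
        V[0] = v0 / np.linalg.norm(v0)
        for j in range(m):
            w = A @ V[j] - (beta[j - 1] * V[j - 1] if j > 0 else 0.0)
            alpha[j] = w @ V[j]
            w -= alpha[j] * V[j]
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:             # invariant subspace found
                break
            V[j + 1] = w / beta[j]
        # Extreme eigenvalues of A are approximated by those of T, e.g. via
        # scipy.linalg.eigh_tridiagonal(alpha, beta[:-1]).
        return alpha, beta
    ```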

  3. A Vertically Lagrangian Finite-Volume Dynamical Core for Global Models

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann

    2003-01-01

    A finite-volume dynamical core with a terrain-following Lagrangian control-volume discretization is described. The vertically Lagrangian discretization reduces the dimensionality of the physical problem from three to two with the resulting dynamical system closely resembling that of the shallow water dynamical system. The 2D horizontal-to-Lagrangian-surface transport and dynamical processes are then discretized using the genuinely conservative flux-form semi-Lagrangian algorithm. Time marching is split- explicit, with large-time-step for scalar transport, and small fractional time step for the Lagrangian dynamics, which permits the accurate propagation of fast waves. A mass, momentum, and total energy conserving algorithm is developed for mapping the state variables periodically from the floating Lagrangian control-volume to an Eulerian terrain-following coordinate for dealing with physical parameterizations and to prevent severe distortion of the Lagrangian surfaces. Deterministic baroclinic wave growth tests and long-term integrations using the Held-Suarez forcing are presented. Impact of the monotonicity constraint is discussed.

  4. Microfluidic Remote Loading for Rapid Single-Step Liposomal Drug Preparation

    PubMed Central

    Hood, R.R.; Vreeland, W. N.; DeVoe, D.L.

    2014-01-01

    Microfluidic-directed formation of liposomes is combined with in-line sample purification and remote drug loading for single step, continuous-flow synthesis of nanoscale vesicles containing high concentrations of stably loaded drug compounds. Using an on-chip microdialysis element, the system enables rapid formation of large transmembrane pH and ion gradients, followed by immediate introduction of amphipathic drug for real-time remote loading into the liposomes. The microfluidic process enables in-line formation of drug-laden liposomes with drug:lipid molar ratios of up to 1.3, and a total on-chip residence time of approximately 3 min, representing a significant improvement over conventional bulk-scale methods which require hours to days for combined liposome synthesis and remote drug loading. The microfluidic platform may be further optimized to support real-time generation of purified liposomal drug formulations with high concentrations of drugs and minimal reagent waste for effective liposomal drug preparation at or near the point of care. PMID:25003823

  5. Associations of office workers' objectively assessed occupational sitting, standing and stepping time with musculoskeletal symptoms.

    PubMed

    Coenen, Pieter; Healy, Genevieve N; Winkler, Elisabeth A H; Dunstan, David W; Owen, Neville; Moodie, Marj; LaMontagne, Anthony D; Eakin, Elizabeth A; O'Sullivan, Peter B; Straker, Leon M

    2018-04-22

    We examined the association of musculoskeletal symptoms (MSS) with workplace sitting, standing and stepping time, as well as sitting and standing time accumulation (i.e. usual bout duration of these activities), measured objectively with the activPAL3 monitor. Using baseline data from the Stand Up Victoria trial (216 office workers, 14 workplaces), cross-sectional associations of occupational activities with self-reported MSS (low-back, upper and lower extremity symptoms in the last three months) were examined using probit regression, correcting for clustering and adjusting for confounders. Sitting bout duration was significantly (p < 0.05) associated, non-linearly, with MSS, such that those in the middle tertile displayed the highest prevalence of upper extremity symptoms. Other associations were non-significant but sometimes involved large differences in symptom prevalence (e.g. 38%) by activity. Though causation is unclear, these non-linear associations suggest that sitting and its alternatives (i.e. standing and stepping) interact with MSS and this should be considered when designing safe work systems. Practitioner summary: We studied associations of objectively assessed occupational activities with musculoskeletal symptoms in office workers. Workers who accumulated longer sitting bouts reported fewer upper extremity symptoms. Total activity duration was not significantly associated with musculoskeletal symptoms. We underline the importance of considering total volumes and patterns of activity time in musculoskeletal research.

  6. Multi-time-step ahead daily and hourly intermittent reservoir inflow prediction by artificial intelligent techniques using lumped and distributed data

    NASA Astrophysics Data System (ADS)

    Jothiprakash, V.; Magar, R. B.

    2012-07-01

    In this study, artificial intelligence (AI) techniques such as artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS) and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step-ahead intermittent reservoir inflow. To illustrate the applicability of AI techniques, the intermittent Koyna river watershed in Maharashtra, India is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various types of time-series, cause-effect and combined models are developed with lumped and distributed input data. Further, the model performance was evaluated using various performance criteria. From the results, it is found that the performance of LGP models is superior to ANN and ANFIS models, especially in predicting the peak inflows for both daily and hourly time steps. A detailed comparison of the overall performance indicated that the combined input model (combination of rainfall and inflow) performed better with both lumped and distributed input data. The lumped input data models performed slightly better because, apart from reduced noise in the data, they benefited from the better techniques and their training approach, appropriate selection of network architecture, required inputs, and training-testing ratios of the data set. The slightly poorer performance of the distributed data is due to large variations and a smaller number of observed values.

  7. Measuring masses of large biomolecules and bioparticles using mass spectrometric techniques.

    PubMed

    Peng, Wen-Ping; Chou, Szu-Wei; Patil, Avinash A

    2014-07-21

    Large biomolecules and bioparticles play a vital role in biology, chemistry, biomedical science and physics. Mass is a critical parameter for the characterization of large biomolecules and bioparticles. To achieve mass analysis, choosing a suitable ion source is the first step, and the instruments for detecting ions (mass analyzers and detectors) should also be considered. Abundant mass spectrometric techniques have been proposed to determine the masses of large biomolecules and bioparticles, and these techniques can be divided into two categories. The first category measures the mass (or size) of intact particles, including single particle quadrupole ion trap mass spectrometry, cell mass spectrometry, charge detection mass spectrometry and differential mobility mass analysis; the second category aims to measure the mass and tandem mass of biomolecular ions, including quadrupole ion trap mass spectrometry, time-of-flight mass spectrometry, quadrupole orthogonal time-of-flight mass spectrometry and orbitrap mass spectrometry. Moreover, algorithms for the mass and stoichiometry assignment of electrospray mass spectra are developed to obtain accurate structure information and subunit combinations.

  8. SWAP-Assembler 2: Optimization of De Novo Genome Assembler at Large Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Jintao; Seo, Sangmin; Balaji, Pavan

    2016-08-16

    In this paper, we analyze and optimize the most time-consuming steps of the SWAP-Assembler, a parallel genome assembler, so that it can scale to a large number of cores for huge genomes with sizes of sequencing data ranging from terabytes to petabytes. According to the performance analysis results, the most time-consuming steps are input parallelization, k-mer graph construction, and graph simplification (edge merging). For input parallelization, the input data is divided into virtual fragments of nearly equal size, and the start and end positions of each fragment are automatically aligned to the beginnings of reads. In k-mer graph construction, in order to improve communication efficiency, the message size is kept constant between any two processes by proportionally increasing the number of nucleotides to the number of processes in the input parallelization step for each round. The memory usage is also decreased because only a small part of the input data is processed in each round. With graph simplification, the communication protocol reduces the number of communication loops from four to two and decreases the idle communication time. The optimized assembler is denoted SWAP-Assembler 2 (SWAP2). In our experiments using a 1000 Genomes project dataset of 4 terabytes (the largest dataset ever used for assembling) on the supercomputer Mira, the results show that SWAP2 scales to 131,072 cores with an efficiency of 40%. We also compared our work with both the HipMer assembler and the SWAP-Assembler. On the Yanhuang dataset of 300 gigabytes, SWAP2 shows a 3X speedup and 4X better scalability compared with the HipMer assembler and is 45 times faster than the SWAP-Assembler. The SWAP2 software is available at https://sourceforge.net/projects/swapassembler.

  9. Accurate seismic phase identification and arrival time picking of glacial icequakes

    NASA Astrophysics Data System (ADS)

    Jones, G. A.; Doyle, S. H.; Dow, C.; Kulessa, B.; Hubbard, A.

    2010-12-01

    A catastrophic lake drainage event was monitored continuously using an array of six 4.5-Hz, three-component geophones in the Russell Glacier catchment, Western Greenland. Many thousands of events and arrival time phases (e.g., P- or S-wave) were recorded, often with events occurring simultaneously but at different locations. In addition, different styles of seismic events were identified from 'classical' tectonic earthquakes to tremors usually observed in volcanic regions. The presence of such a diverse and large dataset provides insight into the complex system of lake drainage. One of the most fundamental steps in seismology is the accurate identification of a seismic event and its associated arrival times. However, the collection of such a large and complex dataset makes the manual identification of a seismic event and picking of the arrival time phases time-consuming, with variable results. To overcome the issues of consistency and manpower, a number of different methods have been developed including short-term and long-term averages, spectrograms, wavelets, polarisation analyses, higher order statistics and auto-regressive techniques. Here we propose an automated procedure which establishes the phase type and accurately determines the arrival times. The procedure combines a number of different automated methods to achieve this, and is applied to the recently acquired lake drainage data. Accurate identification of events and their arrival time phases is the first step in gaining a greater understanding of the extent of the deformation and the mechanism of such drainage events. A good knowledge of the propagation pathway of lake drainage meltwater through a glacier will have significant consequences for interpretation of glacial and ice sheet dynamics.
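
    Among the automated methods listed, the short-term/long-term average (STA/LTA) trigger is the easiest to sketch. The version below, with made-up window lengths and a synthetic trace, shows the basic idea; it is not the combined procedure proposed in the abstract.

        import numpy as np

        def sta_lta_pick(trace, fs, sta_win=0.05, lta_win=1.0, threshold=3.0):
            """Classic STA/LTA trigger: return the first sample where the
            short-term/long-term energy ratio exceeds `threshold`."""
            ns, nl = int(sta_win * fs), int(lta_win * fs)
            energy = trace.astype(float) ** 2
            csum = np.concatenate(([0.0], np.cumsum(energy)))
            for i in range(nl, len(trace) - ns):
                sta = (csum[i + ns] - csum[i]) / ns      # short window ahead
                lta = (csum[i] - csum[i - nl]) / nl      # long window behind
                if lta > 0 and sta / lta > threshold:
                    return i                              # candidate onset
            return None

        # Hypothetical icequake: noise with an arrival at 2.0 s
        fs = 500.0
        rng = np.random.default_rng(1)
        x = rng.normal(0, 1, int(4 * fs))
        x[int(2 * fs):] += 10 * rng.normal(0, 1, int(2 * fs))
        print(sta_lta_pick(x, fs))   # near sample 1000, i.e. ~2.0 s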

  10. Nanostructuring of sapphire using time-modulated nanosecond laser pulses

    NASA Astrophysics Data System (ADS)

    Lorenz, P.; Zagoranskiy, I.; Ehrhardt, M.; Bayer, L.; Zimmer, K.

    2017-02-01

    The nanostructuring of dielectric surfaces using laser radiation is still a challenge. The IPSM-LIFE (laser-induced front side etching using in-situ pre-structured metal layer) method allows easy, large-area and fast laser nanostructuring of dielectrics. In IPSM-LIFE, a metal-covered dielectric is irradiated, and the structuring is assisted by a self-organized deformation process of the molten metal layer. IPSM-LIFE can be divided into two steps: STEP 1: The irradiation of thin metal layers on dielectric surfaces results in a melting and nanostructuring process of the metal layer and partially of the dielectric surface. STEP 2: A subsequent high laser fluence treatment of the metal nanostructures results in a structuring of the dielectric surface. In this study a sapphire substrate Al2O3(1-102) was covered with a thin 10 nm molybdenum layer and irradiated by an infrared laser with an adjustable time-dependent pulse form with a time resolution of 1 ns (wavelength λ = 1064 nm, pulse duration Δtp = 1 - 600 ns, Gaussian beam profile). The laser treatment allows the fabrication of different surface structures in the sapphire surface due to a pattern transfer process. The resultant structures were investigated by scanning electron microscopy (SEM). The process was simulated and the simulation results were compared with experimental results.

  11. Optimal pre-scheduling of problem remappings

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Saltz, Joel H.

    1987-01-01

    A large class of scientific computational problems can be characterized as a sequence of steps where a significant amount of computation occurs each step, but the work performed at each step is not necessarily identical. Two good examples of this type of computation are: (1) regridding methods which change the problem discretization during the course of the computation, and (2) methods for solving sparse triangular systems of linear equations. Recent work has investigated a means of mapping such computations onto parallel processors; the method defines a family of static mappings with differing degrees of importance placed on the conflicting goals of good load balance and low communication/synchronization overhead. The performance tradeoffs are controllable by adjusting the parameters of the mapping method. To achieve good performance it may be necessary to dynamically change these parameters at run-time, but such changes can impose additional costs. If the computation's behavior can be determined prior to its execution, it is possible to construct an optimal parameter schedule using a low-order-polynomial-time dynamic programming algorithm. Since the latter can be expensive, the effect of a linear-time scheduling heuristic is studied on one of the model problems, and the heuristic is shown to be effective and nearly optimal.

  12. On the performance of updating Stochastic Dynamic Programming policy using Ensemble Streamflow Prediction in a snow-covered region

    NASA Astrophysics Data System (ADS)

    Martin, A.; Pascal, C.; Leconte, R.

    2014-12-01

    Stochastic Dynamic Programming (SDP) is known to be an effective technique to find the optimal operating policy of hydropower systems. In order to improve the performance of SDP, this project evaluates the impact of re-updating the policy at every time step by using Ensemble Streamflow Prediction (ESP). We present a case study of the Kemano hydropower system on the Nechako River in British Columbia, Canada. Managed by Rio Tinto Alcan (RTA), this system is subject to large streamflow volumes in spring due to the significant snow accumulation during the winter season. Therefore, the operating policy should not only maximize production but also minimize the risk of flooding. The hydrological behavior of the system is simulated with CEQUEAU, a distributed and deterministic hydrological model developed by the Institut national de la recherche scientifique - Eau, Terre et Environnement (INRS-ETE) in Quebec, Canada. At each decision time step, CEQUEAU is used to generate ESP scenarios based on historical meteorological sequences and the current state of the hydrological model. These scenarios are used in the SDP to optimize the new release policy for the next time steps. This routine is then repeated over the entire simulation period. Results are compared with those obtained by using SDP on historical inflow scenarios.

  13. Methods, systems and devices for detecting and locating ferromagnetic objects

    DOEpatents

    Roybal, Lyle Gene [Idaho Falls, ID; Kotter, Dale Kent [Shelley, ID; Rohrbaugh, David Thomas [Idaho Falls, ID; Spencer, David Frazer [Idaho Falls, ID

    2010-01-26

    Methods for detecting and locating ferromagnetic objects in a security screening system. One method includes a step of acquiring magnetic data that includes magnetic field gradients detected during a period of time. Another step includes representing the magnetic data as a function of the period of time. Another step includes converting the magnetic data to being represented as a function of frequency. Another method includes a step of sensing a magnetic field for a period of time. Another step includes detecting a gradient within the magnetic field during the period of time. Another step includes identifying a peak value of the gradient detected during the period of time. Another step includes identifying a portion of time within the period of time that represents when the peak value occurs. Another step includes configuring the portion of time over the period of time to represent a ratio.

  14. Hurdles to Overcome to Model Carrington Class Events

    NASA Astrophysics Data System (ADS)

    Engel, M.; Henderson, M. G.; Jordanova, V. K.; Morley, S.

    2017-12-01

    Large geomagnetic storms pose a threat to both space- and ground-based infrastructure. In order to help mitigate that threat, a better understanding of the specifics of these storms is required. Various computer models are being used around the world to analyze the magnetospheric environment; however, they are largely inadequate for analyzing the large and extreme storm time environments. Here we report on the first steps towards expanding and robustifying the RAM-SCB inner magnetospheric model, used in conjunction with BATS-R-US and the Space Weather Modeling Framework, in order to simulate storms with Dst < -400 nT. These results will then be used to help expand our modelling capabilities towards including Carrington-class events.

  15. Mapping land cover through time with the Rapid Land Cover Mapper—Documentation and user manual

    USGS Publications Warehouse

    Cotillon, Suzanne E.; Mathis, Melissa L.

    2017-02-15

    The Rapid Land Cover Mapper is an Esri ArcGIS® Desktop add-in, which was created as an alternative to automated or semiautomated mapping methods. Based on a manual photo interpretation technique, the tool facilitates mapping over large areas and through time, and produces time-series raster maps and associated statistics that characterize the changing landscapes. The Rapid Land Cover Mapper add-in can be used with any imagery source to map various themes (for instance, land cover, soils, or forest) at any chosen mapping resolution. The user manual contains all essential information for the user to make full use of the Rapid Land Cover Mapper add-in. This manual includes a description of the add-in functions and capabilities, and step-by-step procedures for using the add-in. The Rapid Land Cover Mapper add-in was successfully used by the U.S. Geological Survey West Africa Land Use Dynamics team to accurately map land use and land cover in 17 West African countries through time (1975, 2000, and 2013).

  16. Capturing Pressure Oscillations in Numerical Simulations of Internal Combustion Engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gubba, Sreenivasa Rao; Jupudi, Ravichandra S.; Pasunurthi, Shyam Sundar

    In an earlier publication, the authors compared numerical predictions of the mean cylinder pressure of diesel and dual-fuel combustion, to that of measured pressure data from a medium-speed, large-bore engine. In these earlier comparisons, measured data from a flush-mounted in-cylinder pressure transducer showed notable and repeatable pressure oscillations which were not evident in the mean cylinder pressure predictions from computational fluid dynamics (CFD). In this paper, the authors present a methodology for predicting and reporting the local cylinder pressure consistent with that of a measurement location. Such predictions for large-bore, medium-speed engine operation demonstrate pressure oscillations in accordance with those measured. The temporal occurrences of notable pressure oscillations were during the start of combustion and around the time of maximum cylinder pressure. With appropriate resolutions in time steps and mesh sizes, the local cell static pressure predicted for the transducer location showed oscillations in both diesel and dual-fuel combustion modes which agreed with those observed in the experimental data. Fast Fourier transform (FFT) analysis on both experimental and calculated pressure traces revealed that the CFD predictions successfully captured both the amplitude and frequency range of the oscillations. Furthermore, resolving propagating pressure waves with the smaller time steps and grid sizes necessary to achieve these results required a significant increase in computer resources.
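
    The FFT check described above is easy to picture: the sketch below recovers the ringing frequency from a synthetic cylinder-pressure trace. The sampling rate, pulse shape, and 6 kHz ringing are invented stand-ins for the measured data.

        import numpy as np

        fs = 100_000.0                            # sampling rate, Hz (assumed)
        t = np.arange(0, 0.02, 1 / fs)            # 20 ms around peak pressure
        mean_p = 150e5 * np.exp(-((t - 0.01) / 0.004) ** 2)   # slow mean rise, Pa
        ringing = 2e5 * np.sin(2 * np.pi * 6000 * t) * (t > 0.01)
        p = mean_p + ringing

        spectrum = np.abs(np.fft.rfft(p - p.mean()))
        freqs = np.fft.rfftfreq(len(p), 1 / fs)
        mask = freqs > 1000.0                     # skip the slow mean-pressure part
        print(freqs[mask][np.argmax(spectrum[mask])])   # 6000.0 Hz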

  17. Quantitative measurement of vitamin K2 (menaquinones) in various fermented dairy products using a reliable high-performance liquid chromatography method.

    PubMed

    Manoury, E; Jourdon, K; Boyaval, P; Fourcassié, P

    2013-03-01

    We evaluated menaquinone contents in a large set of 62 fermented dairy product samples by using a new liquid chromatography method for accurate quantification of lipo-soluble vitamin K2, including distribution of individual menaquinones. The method used a simple and rapid purification step to remove matrix components in various fermented dairy products 3 times faster than a reference preparation step. Moreover, the chromatography elution time was significantly shortened and resolution and efficiency were optimized. We observed wide diversity of vitamin K2 contents in the set of fermented dairy products, from undetectable to 1,100 ng/g of product, and a remarkable diversity of menaquinone forms among products. These observations relate to the main microorganism species currently used in the different fermented product technologies. The major form in this large set of fermented dairy products was menaquinone (MK)-9, and contents of MK-9 and MK-8 forms were correlated, that of MK-9 being around 4 times that of MK-8, suggesting that microorganisms able to produce MK-9 also produce MK-8. This was not the case for the other menaquinones, which were produced independently of each other. Finally, no obvious link was established between MK-9 content and fat content or pH of the fermented dairy products. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  18. A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nomura, K; Seymour, R; Wang, W

    2009-02-17

    A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based on a hybrid implementation combining message passing and critical-section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node, multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e., petaflops·day of computing) is estimated as NT = 2.14 (e.g., N = 2.14 million atoms for T = 1 microsecond).

  19. Capturing Pressure Oscillations in Numerical Simulations of Internal Combustion Engines

    DOE PAGES

    Gubba, Sreenivasa Rao; Jupudi, Ravichandra S.; Pasunurthi, Shyam Sundar; ...

    2018-04-09

    In an earlier publication, the authors compared numerical predictions of the mean cylinder pressure of diesel and dual-fuel combustion, to that of measured pressure data from a medium-speed, large-bore engine. In these earlier comparisons, measured data from a flush-mounted in-cylinder pressure transducer showed notable and repeatable pressure oscillations which were not evident in the mean cylinder pressure predictions from computational fluid dynamics (CFD). In this paper, the authors present a methodology for predicting and reporting the local cylinder pressure consistent with that of a measurement location. Such predictions for large-bore, medium-speed engine operation demonstrate pressure oscillations in accordance with those measured. The temporal occurrences of notable pressure oscillations were during the start of combustion and around the time of maximum cylinder pressure. With appropriate resolutions in time steps and mesh sizes, the local cell static pressure predicted for the transducer location showed oscillations in both diesel and dual-fuel combustion modes which agreed with those observed in the experimental data. Fast Fourier transform (FFT) analysis on both experimental and calculated pressure traces revealed that the CFD predictions successfully captured both the amplitude and frequency range of the oscillations. Furthermore, resolving propagating pressure waves with the smaller time steps and grid sizes necessary to achieve these results required a significant increase in computer resources.

  20. Data Reorganization for Optimal Time Series Data Access, Analysis, and Visualization

    NASA Astrophysics Data System (ADS)

    Rui, H.; Teng, W. L.; Strub, R.; Vollmer, B.

    2012-12-01

    The way data are archived is often not optimal for their access by many user communities (e.g., hydrological), particularly if the data volumes and/or number of data files are large. The number of data records of a non-static data set generally increases with time. Therefore, most data sets are commonly archived by time steps, one step per file, often containing multiple variables. However, many research and application efforts need time series data for a given geographical location or area, i.e., a data organization that is orthogonal to the way the data are archived. The retrieval of a time series of the entire temporal coverage of a data set for a single variable at a single data point, in an optimal way, is an important and longstanding challenge, especially for large science data sets (i.e., with volumes greater than 100 GB). Two examples of such large data sets are the North American Land Data Assimilation System (NLDAS) and Global Land Data Assimilation System (GLDAS), archived at the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC; Hydrology Data Holdings Portal, http://disc.sci.gsfc.nasa.gov/hydrology/data-holdings). To date, the NLDAS data set, hourly 0.125x0.125° from Jan. 1, 1979 to present, has a total volume greater than 3 TB (compressed). The GLDAS data set, 3-hourly and monthly 0.25x0.25° and 1.0x1.0° from Jan. 1948 to present, has a total volume greater than 1 TB (compressed). Both data sets are accessible, in the archived time step format, via several convenient methods, including Mirador search and download (http://mirador.gsfc.nasa.gov/), GrADS Data Server (GDS; http://hydro1.sci.gsfc.nasa.gov/dods/), direct FTP (ftp://hydro1.sci.gsfc.nasa.gov/data/s4pa/), and Giovanni Online Visualization and Analysis (http://disc.sci.gsfc.nasa.gov/giovanni). However, users who need long time series currently have no efficient way to retrieve them. Continuing a longstanding tradition of facilitating data access, analysis, and visualization that contribute to knowledge discovery from large science data sets, the GES DISC recently began a NASA ACCESS-funded project to, in part, optimally reorganize selected large data sets for access and use by the hydrological user community. This presentation discusses the following aspects of the project: (1) explorations of approaches, such as database and file system; (2) findings for each approach, such as limitations and concerns, and pros and cons; (3) implementation of reorganizing data via the file system approach, including data processing (parameter and spatial subsetting), metadata and file structure of reorganized time series data (true "Data Rod," single variable, single grid point, and entire data range per file), and production and quality control. The reorganized time series data will be integrated into several broadly used data tools, such as NASA Giovanni and those provided by CUAHSI HIS (http://his.cuahsi.org/) and EPA BASINS (http://water.epa.gov/scitech/datait/models/basins/), as well as accessible via direct FTP, along with documentation and sample reading software. The data reorganization is initially, as part of the project, applied to selected popular hydrology-related parameters, with other parameters to be added, as resources permit.
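
    The "data rod" idea at the heart of the reorganization can be shown in a few lines: many per-time-step grids become one contiguous time series per grid point for a single variable. The shapes and in-memory arrays below are simplified stand-ins for the archived files.

        import numpy as np

        nt, ny, nx = 1000, 4, 5                 # hypothetical archive extent
        steps = [np.random.rand(ny, nx).astype(np.float32) for _ in range(nt)]

        cube = np.stack(steps)                  # (time, y, x), as archived
        rods = cube.reshape(nt, -1).T.copy()    # (point, time): one rod per row

        j, i = 2, 3                             # a single grid point of interest
        rod = rods[j * nx + i]                  # whole record in one contiguous read
        assert np.allclose(rod, cube[:, j, i])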

  1. Unsteady Analysis of Separated Aerodynamic Flows Using an Unstructured Multigrid Algorithm

    NASA Technical Reports Server (NTRS)

    Pelaez, Juan; Mavriplis, Dimitri J.; Kandil, Osama

    2001-01-01

    An implicit method for the computation of unsteady flows on unstructured grids is presented. The resulting nonlinear system of equations is solved at each time step using an agglomeration multigrid procedure. The method allows for arbitrarily large time steps and is efficient in terms of computational effort and storage. Validation of the code using a one-equation turbulence model is performed for the well-known case of flow over a cylinder. A Detached Eddy Simulation model is also implemented and its performance compared to the one-equation Spalart-Allmaras Reynolds Averaged Navier-Stokes (RANS) turbulence model. Validation cases using DES and RANS include flow over a sphere and flow over a NACA 0012 wing including massive stall regimes. The project was driven by the ultimate goal of computing separated flows of aerodynamic interest, such as massive stall or flows over complex non-streamlined geometries.

  2. Application of symbolic/numeric matrix solution techniques to the NASTRAN program

    NASA Technical Reports Server (NTRS)

    Buturla, E. M.; Burroughs, S. H.

    1977-01-01

    The matrix-solving algorithm of any finite element program is extremely important, since solution of the matrix equations requires a large amount of elapsed time due to null calculations and excessive input/output operations. An alternate method of solving the matrix equations is presented. A symbolic processing step followed by numeric solution yields the solution very rapidly and is especially useful for nonlinear problems.

  3. On the development of efficient algorithms for three dimensional fluid flow

    NASA Technical Reports Server (NTRS)

    Maccormack, R. W.

    1988-01-01

    The difficulties of constructing efficient algorithms for three-dimensional flow are discussed. Reasonable candidates are analyzed and tested, and most are found to have obvious shortcomings. Yet, there is promise that an efficient class of algorithms exists between the severely time-step-size-limited explicit or approximately factored algorithms and the computationally intensive direct inversion of large sparse matrices by Gaussian elimination.

  4. Exponential integration algorithms applied to viscoplasticity

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.; Walker, Kevin P.

    1991-01-01

    Four linear exponential integration algorithms (two implicit, one explicit, and one predictor/corrector) are applied to a viscoplastic model to assess their capabilities. Viscoplasticity comprises a system of coupled, nonlinear, stiff, first-order ordinary differential equations which are a challenge to integrate by any means. Two of the algorithms (the predictor/corrector and one of the implicit schemes) give outstanding results, even for very large time steps.
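
    For a constant-coefficient linear system, the simplest member of this family, the exponential Euler step, can be written down directly; it is exact for such systems, which is why the very large step below causes no trouble. This is a generic textbook sketch, not the viscoplastic model of the paper.

        import numpy as np
        from scipy.linalg import expm, solve

        # One exponential-Euler step for y' = A y + b:
        # y(t+h) = y + h * phi1(hA) (A y + b), phi1(z) = (e^z - 1)/z.
        def exp_euler_step(A, b, y, h):
            phi1 = solve(A, (expm(h * A) - np.eye(len(y))) / h)  # A^-1 (e^{hA}-I)/h
            return y + h * phi1 @ (A @ y + b)

        # Stiff test problem integrated with a step far beyond explicit limits:
        A = np.array([[-1000.0, 0.0], [0.0, -0.5]])
        b = np.array([1000.0, 0.5])
        y = np.zeros(2)
        for _ in range(10):
            y = exp_euler_step(A, b, y, h=5.0)
        print(y)   # approaches the equilibrium -A^-1 b = [1, 1]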

  5. Space station, 1959 to . .

    NASA Astrophysics Data System (ADS)

    Butler, G. V.

    1981-04-01

    Early space station designs are considered, taking into account Herman Oberth's first space station, the London Daily Mail Study, the first major space station design developed during the moon mission, and the Manned Orbiting Laboratory Program of DOD. Attention is given to Skylab, new space station studies, the Shuttle and Spacelab, communication satellites, solar power satellites, a 30 meter diameter radiometer for geological measurements and agricultural assessments, the mining of the moon, and questions of international cooperation. It is thought to be very probable that there will be very large space stations at some time in the future. However, for the more immediate future a step-by-step development that will start with Spacelab stations of 3-4 men is envisaged.

  6. OGS#PETSc approach for robust and efficient simulations of strongly coupled hydrothermal processes in EGS reservoirs

    NASA Astrophysics Data System (ADS)

    Watanabe, Norihiro; Blucher, Guido; Cacace, Mauro; Kolditz, Olaf

    2016-04-01

    A robust and computationally efficient solution is important for 3D modelling of EGS reservoirs. This is particularly the case when the reservoir model includes hydraulic conduits such as induced or natural fractures, fault zones, and wellbore open-hole sections. The existence of such hydraulic conduits results in heterogeneous flow fields and in a strengthened coupling between fluid flow and heat transport processes via temperature dependent fluid properties (e.g. density and viscosity). A commonly employed partitioned solution (or operator-splitting solution) may not robustly work for such strongly coupled problems, its applicability being limited by small time step sizes (e.g. 5-10 days) whereas the processes have to be simulated for 10-100 years. To overcome this limitation, an alternative approach is desired which can guarantee a robust solution of the coupled problem with minor constraints on time step sizes. In this work, we present a Newton-Raphson based monolithic coupling approach implemented in the OpenGeoSys simulator (OGS) combined with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library. The PETSc library is used for both linear and nonlinear solvers as well as MPI-based parallel computations. The suggested method has been tested by application to the 3D reservoir site of Groß Schönebeck, in northern Germany. Results show that the exact Newton-Raphson approach can also be limited to small time step sizes (e.g. one day) due to slight oscillations in the temperature field. The use of a line search technique and modification of the Jacobian matrix were necessary to achieve robust convergence of the nonlinear solution. For the studied example, the proposed monolithic approach worked even with a very large time step size of 3.5 years.
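
    The globalized Newton iteration described here, a full Newton step damped by backtracking until the residual norm decreases, can be sketched on a toy coupled system; the residual, Jacobian, and tolerances below are illustrative, not OGS code.

        import numpy as np

        def newton_line_search(residual, jacobian, x, tol=1e-10, max_it=50):
            """Newton-Raphson with a simple backtracking line search."""
            for _ in range(max_it):
                r = residual(x)
                if np.linalg.norm(r) < tol:
                    return x
                dx = np.linalg.solve(jacobian(x), -r)
                alpha = 1.0
                while (np.linalg.norm(residual(x + alpha * dx))
                       >= np.linalg.norm(r)) and alpha > 1e-4:
                    alpha *= 0.5              # damp until the residual drops
                x = x + alpha * dx
            return x

        # Toy strongly coupled pair, loosely mimicking temperature-dependent flow:
        f = lambda x: np.array([x[0] + np.exp(-x[1]) - 2.0,
                                x[1] + x[0] ** 2 - 4.0])
        J = lambda x: np.array([[1.0, -np.exp(-x[1])],
                                [2.0 * x[0], 1.0]])
        print(newton_line_search(f, J, np.array([0.0, 0.0])))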

  7. Stability analysis of Eulerian-Lagrangian methods for the one-dimensional shallow-water equations

    USGS Publications Warehouse

    Casulli, V.; Cheng, R.T.

    1990-01-01

    In this paper stability and error analyses are discussed for some finite difference methods when applied to the one-dimensional shallow-water equations. Two finite difference formulations, which are based on a combined Eulerian-Lagrangian approach, are discussed. In the first part of this paper the results of numerical analyses for an explicit Eulerian-Lagrangian method (ELM) have shown that the method is unconditionally stable. This method, which is a generalized fixed grid method of characteristics, covers the Courant-Isaacson-Rees method as a special case. Some artificial viscosity is introduced by this scheme. However, because the method is unconditionally stable, the artificial viscosity can be brought under control either by reducing the spatial increment or by increasing the size of time step. The second part of the paper discusses a class of semi-implicit finite difference methods for the one-dimensional shallow-water equations. This method, when the Eulerian-Lagrangian approach is used for the convective terms, is also unconditionally stable and highly accurate for small space increments or large time steps. The semi-implicit methods seem to be more computationally efficient than the explicit ELM; at each time step a single tridiagonal system of linear equations is solved. The combined explicit and implicit ELM is best used in formulating a solution strategy for solving a network of interconnected channels. The explicit ELM is used at channel junctions for each time step. The semi-implicit method is then applied to the interior points in each channel segment. Following this solution strategy, the channel network problem can be reduced to a set of independent one-dimensional open-channel flow problems. Numerical results support properties given by the stability and error analyses. © 1990.
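
    The Eulerian-Lagrangian treatment of advection underlying both schemes is compact enough to sketch: each grid value is pulled from an upstream departure point found by backtracking along the velocity, which is what removes the Courant-number restriction. The 1D periodic setting and linear interpolation below are illustrative simplifications.

        import numpy as np

        def semi_lagrangian_step(c, u, dx, dt):
            """Advect field c by backtracking to departure points (periodic)."""
            n = len(c)
            x = np.arange(n) * dx
            xd = (x - u * dt) % (n * dx)          # departure points
            i = np.floor(xd / dx).astype(int)
            w = xd / dx - i                        # linear interpolation weight
            return (1 - w) * c[i] + w * c[(i + 1) % n]

        n, dx, u = 200, 1.0, 1.0
        c = np.exp(-0.5 * ((np.arange(n) * dx - 50) / 5.0) ** 2)
        c_new = semi_lagrangian_step(c, u, dx, dt=40.0)   # Courant number 40
        print(np.argmax(c), np.argmax(c_new))             # 50 -> 90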

  8. Scalable asynchronous execution of cellular automata

    NASA Astrophysics Data System (ADS)

    Folino, Gianluigi; Giordano, Andrea; Mastroianni, Carlo

    2016-10-01

    The performance and scalability of cellular automata, when executed on parallel/distributed machines, are limited by the necessity of synchronizing all the nodes at each time step, i.e., a node can execute only after the execution of the previous step at all the other nodes. However, these synchronization requirements can be relaxed: a node can execute one step after synchronizing only with the adjacent nodes. In this fashion, different nodes can execute different time steps. This can be notably advantageous in many novel and increasingly popular applications of cellular automata, such as smart city applications, simulation of natural phenomena, etc., in which the execution times can be different and variable, due to the heterogeneity of machines and/or data and/or executed functions. Indeed, a longer execution time at a node does not slow down the execution at all the other nodes but only at the neighboring nodes. This is particularly advantageous when the nodes that act as bottlenecks vary during the application execution. The goal of the paper is to analyze the benefits that can be achieved with the described asynchronous implementation of cellular automata, when compared to the classical all-to-all synchronization pattern. The performance and scalability have been evaluated through a Petri net model, as this model is very useful to represent the synchronization barrier among nodes. We examined the usual case in which the territory is partitioned into a number of regions, and the computation associated with a region is assigned to a computing node. We considered both the cases of mono-dimensional and two-dimensional partitioning. The results show that the advantage obtained through the asynchronous execution, when compared to the all-to-all synchronous approach is notable, and it can be as large as 90% in terms of speedup.
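
    The relaxed synchronization rule is simple to state in code: a node may advance to its next step as soon as every neighbor has completed the node's current step, so neighboring clocks never differ by more than one. The ring topology and random readiness below are illustrative assumptions.

        import random

        n_nodes = 8
        clock = [0] * n_nodes                    # completed steps per node

        def neighbors(i):
            return [(i - 1) % n_nodes, (i + 1) % n_nodes]

        random.seed(2)
        for _ in range(100):
            i = random.randrange(n_nodes)        # a node that happens to be ready
            if all(clock[j] >= clock[i] for j in neighbors(i)):
                clock[i] += 1                    # execute one local step
        print(clock)   # nodes legitimately sit at different time steps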

  9. A two-step database search method improves sensitivity in peptide sequence matches for metaproteomics and proteogenomics studies.

    PubMed

    Jagtap, Pratik; Goslinga, Jill; Kooren, Joel A; McGowan, Thomas; Wroblewski, Matthew S; Seymour, Sean L; Griffin, Timothy J

    2013-04-01

    Large databases (>10^6 sequences) used in metaproteomic and proteogenomic studies present challenges in matching peptide sequences to MS/MS data using database-search programs. Most notably, strict filtering to avoid false-positive matches leads to more false negatives, thus constraining the number of peptide matches. To address this challenge, we developed a two-step method wherein matches derived from a primary search against a large database were used to create a smaller subset database. The second search was performed against a target-decoy version of this subset database merged with a host database. High confidence peptide sequence matches were then used to infer protein identities. Applying our two-step method for both metaproteomic and proteogenomic analysis resulted in twice the number of high confidence peptide sequence matches in each case, as compared to the conventional one-step method. The two-step method captured almost all of the same peptides matched by the one-step method, with a majority of the additional matches being false negatives from the one-step method. Furthermore, the two-step method improved results regardless of the database search program used. Our results show that our two-step method maximizes the peptide matching sensitivity for applications requiring large databases, especially valuable for proteogenomics and metaproteomics studies. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
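
    A minimal sketch of the subset-database construction between the two search steps: keep every protein matched in the primary search and append decoys (here, reversed sequences, one common choice). The FASTA parser and identifiers are hypothetical stand-ins, not the authors' tooling.

        def parse_fasta(path):
            header, seq = None, []
            with open(path) as fh:
                for line in fh:
                    if line.startswith(">"):
                        if header:
                            yield header, "".join(seq)
                        header, seq = line[1:].strip(), []
                    else:
                        seq.append(line.strip())
                if header:
                    yield header, "".join(seq)

        def write_subset_with_decoys(big_db, primary_hits, out_path):
            """Keep proteins hit in the primary search; add reversed decoys."""
            hits = set(primary_hits)               # protein IDs from step one
            with open(out_path, "w") as out:
                for header, seq in parse_fasta(big_db):
                    if header.split()[0] in hits:
                        out.write(f">{header}\n{seq}\n")
                        out.write(f">DECOY_{header}\n{seq[::-1]}\n")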

  10. Deciding to Decide: How Decisions Are Made and How Some Forces Affect the Process.

    PubMed

    McConnell, Charles R

    There is a decision-making pattern that applies in all situations, large or small, although in small decisions, the steps are not especially evident. The steps are gathering information, analyzing information and creating alternatives, selecting and implementing an alternative, and following up on implementation. The amount of effort applied in any decision situation should be consistent with the potential consequences of the decision. Essentially, all decisions are subject to certain limitations or constraints, forces, or circumstances that limit one's range of choices. Follow-up on implementation is the phase of decision making most often neglected, yet it is frequently the phase that determines success or failure. Risk and uncertainty are always present in a decision situation, and the application of human judgment is always necessary. In addition, there are often emotional forces at work that can at times unwittingly steer one away from that which is best or most workable under the circumstances and toward a suboptimal result based largely on the desires of the decision maker.

  11. A Low Cost Approach to the Design of Autopilot for Hypersonic Glider

    NASA Astrophysics Data System (ADS)

    Liang, Wang; Weihua, Zhang; Ke, Peng; Donghui, Wang

    2017-12-01

    This paper proposes a novel integrated guidance and control (IGC) approach to improve the autopilot design with low cost for a hypersonic glider in the dive and pull-up phase. The main objective is robust and adaptive tracking of flight path angle (FPA) under severe flight scenarios. Firstly, the nonlinear IGC model is developed with second-order actuator dynamics. Then the adaptive command filtered back-stepping control is implemented to deal with the large aerodynamic coefficient uncertainties, control surface uncertainties and unmatched time-varying disturbances. For the autopilot, a back-stepping sliding mode control is designed to track the control surface deflection, and a nonlinear differentiator is used to avoid directly differentiating the control input. Through a series of 6-DOF numerical simulations, it is shown that the proposed scheme successfully cancels out the large uncertainties and disturbances in tracking different kinds of FPA trajectory. The contribution of this paper lies in the nonlinear integrated design of the guidance and control system for a hypersonic glider.

  12. An easy-to-use calculating machine to simulate steady state and non-steady-state preparative separations by multiple dual mode counter-current chromatography with semi-continuous loading of feed mixtures.

    PubMed

    Kostanyan, Artak E; Shishilov, Oleg N

    2018-06-01

    Multiple dual mode counter-current chromatography (MDM CCC) separation processes with semi-continuous large sample loading consist of a succession of two counter-current steps: with "x" phase (first step) and "y" phase (second step) flow periods. A feed mixture dissolved in the "x" phase is continuously loaded into a CCC machine at the beginning of the first step of each cycle over a constant time with the volumetric rate equal to the flow rate of the pure "x" phase. An easy-to-use calculating machine is developed to simulate the chromatograms and the amounts of solutes eluted with the phases at each cycle for steady-state (the duration of the flow periods of the phases is kept constant for all the cycles) and non-steady-state (with variable duration of alternating phase elution steps) separations. Using the calculating machine, the separation of mixtures containing up to five components can be simulated and designed. Examples of the application of the calculating machine for the simulation of MDM CCC processes are discussed. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Two-step sensitivity testing of parametrized and regionalized life cycle assessments: methodology and case study.

    PubMed

    Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie

    2013-06-04

    Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.
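
    The screening step can be illustrated compactly: elementary effects perturb one factor at a time and average the absolute responses, with mu* serving as the screening measure. The radial one-at-a-time design, toy model, and settings below are assumptions for illustration, not the case-study model.

        import numpy as np

        def elementary_effects(model, k, n_base=50, delta=0.1, seed=0):
            """mu* screening measure: mean absolute one-at-a-time effect,
            computed from random base points (a radial EE design)."""
            rng = np.random.default_rng(seed)
            effects = [[] for _ in range(k)]
            for _ in range(n_base):
                x = rng.uniform(0, 1 - delta, size=k)
                y0 = model(x)
                for i in range(k):               # perturb one factor at a time
                    x2 = x.copy()
                    x2[i] += delta
                    effects[i].append((model(x2) - y0) / delta)
            return [float(np.mean(np.abs(e))) for e in effects]

        model = lambda x: x[0] + 10 * x[1] ** 2 + 0.01 * x[2]  # toy nonlinear model
        print(elementary_effects(model, 3))   # the quadratic factor dominates

    Factors with large mu* would then go on to the more expensive contribution-to-variance test described in the abstract.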

  14. Numerical solution of the incompressible Navier-Stokes equations. Ph.D. Thesis - Stanford Univ., Mar. 1989

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.

    1990-01-01

    The current work is initiated in an effort to obtain an efficient, accurate, and robust algorithm for the numerical solution of the incompressible Navier-Stokes equations in two- and three-dimensional generalized curvilinear coordinates for both steady-state and time-dependent flow problems. This is accomplished with the use of the method of artificial compressibility and a high-order flux-difference splitting technique for the differencing of the convective terms. Time accuracy is obtained in the numerical solutions by subiterating the equations in pseudo-time for each physical time step. The system of equations is solved with a line-relaxation scheme which allows the use of very large pseudo-time steps leading to fast convergence for steady-state problems as well as for the subiterations of time-dependent problems. Numerous laminar test flow problems are computed and presented with a comparison against analytically known solutions or experimental results. These include the flow in a driven cavity, the flow over a backward-facing step, the steady and unsteady flow over a circular cylinder, flow over an oscillating plate, flow through a one-dimensional inviscid channel with oscillating back pressure, the steady-state flow through a square duct with a 90 degree bend, and the flow through an artificial heart configuration with moving boundaries. An adequate comparison with the analytical or experimental results is obtained in all cases. Numerical comparisons of the upwind differencing with central differencing plus artificial dissipation indicates that the upwind differencing provides a much more robust algorithm, which requires significantly less computing time. The time-dependent problems require on the order of 10 to 20 subiterations, indicating that the elliptical nature of the problem does require a substantial amount of computing effort.
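
    The subiteration idea can be sketched on a scalar model problem: within each physical step, march in pseudo-time until the unsteady residual vanishes, recovering a backward-Euler solution of the physical step. The thesis uses implicit line relaxation on the full equations; the explicit pseudo-time marching and model equation below are simplifications.

        import math

        # Dual time stepping for dy/dt = f(y): pseudo-time iterations drive
        # the unsteady residual R(y) = (y - y_n)/dt - f(y) to zero.
        def f(y):
            return -50.0 * (y - math.cos(y))     # stiff model "flow" operator

        y, dt = 0.0, 0.1
        for _ in range(20):                      # physical time steps
            y_n = y
            for _ in range(100):                 # pseudo-time subiterations
                R = (y - y_n) / dt - f(y)
                if abs(R) < 1e-12:
                    break
                y -= 0.01 * R                    # explicit pseudo-time step
        print(y)   # settles at the steady state y = cos(y) ≈ 0.7391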

  15. Extraction of hyaluronic acid (HA) from rooster comb and characterization using flow field-flow fractionation (FlFFF) coupled with multiangle light scattering (MALS).

    PubMed

    Kang, Dong Young; Kim, Won-Suk; Heo, In Sook; Park, Young Hun; Lee, Seungho

    2010-11-01

    Hyaluronic acid (HA) was extracted on a relatively large scale from rooster comb using a method similar to that reported previously. The extraction method was modified to simplify it and to reduce time and cost in order to accommodate a large-scale extraction. Five hundred grams of frozen rooster combs yielded about 500 mg of dried HA. Extracted HA was characterized using asymmetrical flow field-flow fractionation (AsFlFFF) coupled online to a multiangle light scattering detector and a refractive index detector to determine the molecular size, molecular weight (MW) distribution, and molecular conformation of HA. For characterization of HA, AsFlFFF was operated by a simplified two-step procedure, instead of the conventional three-step procedure, where the first two steps (sample loading and focusing) were combined into one to avoid the adsorption of viscous HA onto the channel membrane. The simplified two-step AsFlFFF yielded reasonably good separations of HA molecules based on their MWs. The weight average MW (Mw) and the average root-mean-square (RMS) radius of HA extracted from rooster comb were 1.20×10^6 and 94.7 nm, respectively. When the sample solution was filtered through a 0.45 μm disposable syringe filter, they were reduced down to 3.8×10^5 and 50.1 nm, respectively. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. A Physics-driven Neural Networks-based Simulation System (PhyNNeSS) for multimodal interactive virtual environments involving nonlinear deformable objects

    PubMed Central

    De, Suvranu; Deo, Dhannanjay; Sankaranarayanan, Ganesh; Arikatla, Venkata S.

    2012-01-01

    Background: While an update rate of 30 Hz is considered adequate for real time graphics, a much higher update rate of about 1 kHz is necessary for haptics. Physics-based modeling of deformable objects, especially when large nonlinear deformations and complex nonlinear material properties are involved, at these very high rates is one of the most challenging tasks in the development of real time simulation systems. While some specialized solutions exist, there is no general solution for arbitrary nonlinearities. Methods: In this work we present PhyNNeSS - a Physics-driven Neural Networks-based Simulation System - to address this long-standing technical challenge. The first step is an off-line pre-computation step in which a database is generated by applying carefully prescribed displacements to each node of the finite element models of the deformable objects. In the next step, the data is condensed into a set of coefficients describing neurons of a Radial Basis Function network (RBFN). During real-time computation, these neural networks are used to reconstruct the deformation fields as well as the interaction forces. Results: We present realistic simulation examples from interactive surgical simulation with real time force feedback. As an example, we have developed a deformable human stomach model and a Penrose-drain model used in the Fundamentals of Laparoscopic Surgery (FLS) training tool box. Conclusions: A unique computational modeling system has been developed that is capable of simulating the response of nonlinear deformable objects in real time. The method distinguishes itself from previous efforts in that a systematic physics-based pre-computational step allows training of neural networks which may be used in real time simulations. We show, through careful error analysis, that the scheme is scalable, with the accuracy being controlled by the number of neurons used in the simulation. PhyNNeSS has been integrated into SoFMIS (Software Framework for Multimodal Interactive Simulation) for general use. PMID:22629108
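
    The pre-computation/reconstruction split can be mimicked in a few lines: fit Gaussian RBF coefficients offline to sampled input-response pairs, then evaluate the small network at interactive rates. The stand-in "finite element" response, center count, and kernel width below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(3)
        forces = rng.uniform(-1, 1, size=(200, 3))            # training inputs
        disp = np.sin(forces).sum(axis=1, keepdims=True)      # stand-in FE response

        centers = forces[rng.choice(200, size=30, replace=False)]
        width = 0.5

        def design(X):
            """Gaussian RBF basis evaluated at rows of X."""
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * width ** 2))

        W, *_ = np.linalg.lstsq(design(forces), disp, rcond=None)  # offline fit

        query = np.array([[0.2, -0.4, 0.1]])                  # real-time step:
        print(design(query) @ W, np.sin(query).sum())         # cheap evaluation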

  17. StePS: Stereographically Projected Cosmological Simulations

    NASA Astrophysics Data System (ADS)

    Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László

    2018-05-01

    StePS (Stereographically Projected Cosmological Simulations) compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to simulate the evolution of the large-scale structure. This eliminates the need for periodic boundary conditions, which are a numerical convenience unsupported by observation and which modifies the law of force on large scales in an unrealistic fashion. StePS uses stereographic projection for space compactification and naive O(N²) force calculation; this arrives at a correlation function of the same quality more quickly than standard (tree or P3M) algorithms with similar spatial and mass resolution. The N² force calculation is easy to adapt to modern graphics cards, hence StePS can function as a high-speed prediction tool for modern large-scale surveys.

  18. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...

  19. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...

  20. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...

  1. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... treatment steps to small, medium-size and large water systems. 141.81 Section 141.81 Protection of... to small, medium-size and large water systems. (a) Systems shall complete the applicable corrosion...) or (b)(3) of this section. (2) A small system (serving ≤3300 persons) and a medium-size system...

  2. One-step process of hydrothermal and alkaline treatment of wheat straw for improving the enzymatic saccharification.

    PubMed

    Sun, Shaolong; Zhang, Lidan; Liu, Fang; Fan, Xiaolin; Sun, Run-Cang

    2018-01-01

    To increase the production of bioethanol, a two-step process based on hydrothermal and dilute alkaline treatment was applied to reduce the natural resistance of biomass. However, the process required a large amount of water and a long operation time due to the solid/liquid separation before the alkaline treatment, which reduced the net economic profit of bioethanol production. Therefore, four one-step processes, differing in the order of hydrothermal and alkaline treatment, were developed to enhance the glucose concentration obtained from wheat straw by enzymatic saccharification. The aim of the present study was to systematically evaluate the effects of the different one-step processes by analyzing the physicochemical properties (composition, structural change, crystallinity, surface morphology, and BET surface area) and enzymatic saccharification of the treated substrates. In this study, hemicelluloses and lignins were removed from wheat straw and the morphologic structures were destroyed to various extents during the four one-step processes, which were favorable for cellulase absorption on cellulose. A positive correlation was also observed between the crystallinity and enzymatic saccharification rate of the substrate under the conditions given. The surface area of the substrate was positively related to the concentration of glucose in this study. As compared to the control (3.0 g/L) and the treated substrates (11.2-14.6 g/L) obtained by the other three one-step processes, the substrate treated by the one-step process based on successive hydrothermal and alkaline treatment had a maximum glucose concentration of 18.6 g/L, which was due to the high cellulose concentration and surface area of the substrate, accompanied by the removal of large amounts of lignins and hemicelluloses. The present study demonstrated that the order of hydrothermal and alkaline treatment had significant effects on the physicochemical properties and enzymatic saccharification of wheat straw. The one-step process based on successive hydrothermal and alkaline treatment is a simple and economically feasible method for the production of glucose, which will be further converted into bioethanol.

  3. Middleware Case Study: MeDICi

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wynne, Adam S.

    2011-05-05

    In many application domains in science and engineering, data produced by sensors, instruments and networks is naturally processed by software applications structured as a pipeline. Pipelines comprise a sequence of software components that progressively process discrete units of data to produce a desired outcome. For example, in a Web crawler that is extracting semantics from text on Web sites, the first stage in the pipeline might be to remove all HTML tags to leave only the raw text of the document. The second step may parse the raw text to break it down into its constituent grammatical parts, such as nouns, verbs and so on. Subsequent steps may look for names of people or places, interesting events or times so documents can be sequenced on a time line. Each of these steps can be written as a specialized program that works in isolation with other steps in the pipeline. In many applications, simple linear software pipelines are sufficient. However, more complex applications require topologies that contain forks and joins, creating pipelines comprising branches where parallel execution is desirable. It is also increasingly common for pipelines to process very large files or high volume data streams which impose end-to-end performance constraints. Additionally, processes in a pipeline may have specific execution requirements and hence need to be distributed as services across a heterogeneous computing and data management infrastructure. From a software engineering perspective, these more complex pipelines become problematic to implement. While simple linear pipelines can be built using minimal infrastructure such as scripting languages, complex topologies and large, high volume data processing requires suitable abstractions, run-time infrastructures and development tools to construct pipelines with the desired qualities-of-service and flexibility to evolve to handle new requirements. The above summarizes the reasons we created the MeDICi Integration Framework (MIF), which is designed for creating high-performance, scalable and modifiable software pipelines. MIF exploits a low friction, robust, open source middleware platform and extends it with component and service-based programmatic interfaces that make implementing complex pipelines simple. The MIF run-time automatically handles queues between pipeline elements in order to handle request bursts, and automatically executes multiple instances of pipeline elements to increase pipeline throughput. Distributed pipeline elements are supported using a range of configurable communications protocols, and the MIF interfaces provide efficient mechanisms for moving data directly between two distributed pipeline elements.
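
    A minimal sketch of the pipeline abstraction described above, with stages joined by queues so that bursts buffer and stages run concurrently; it is illustrative only (MIF itself extends an open source middleware platform rather than raw threads).

        import threading, queue

        def stage(fn, inbox, outbox):
            """Run one pipeline element until a None 'poison pill' arrives."""
            while True:
                item = inbox.get()
                if item is None:
                    outbox.put(None)             # propagate shutdown downstream
                    break
                outbox.put(fn(item))

        strip_tags = lambda doc: doc.replace("<p>", "").replace("</p>", "")
        tokenize = lambda doc: doc.split()

        q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
        threading.Thread(target=stage, args=(strip_tags, q1, q2)).start()
        threading.Thread(target=stage, args=(tokenize, q2, q3)).start()

        q1.put("<p>Large time steps matter</p>")
        q1.put(None)
        while (item := q3.get()) is not None:
            print(item)    # ['Large', 'time', 'steps', 'matter']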

  4. Why Lévy Foraging does not need to be 'unshackled' from Optimal Foraging Theory. Comment on "Liberating Lévy walk research from the shackles of optimal foraging" by A.M. Reynolds

    NASA Astrophysics Data System (ADS)

    Humphries, Nicolas E.

    2015-09-01

    The comprehensive review of Lévy patterns observed in the moves and pauses of a vast array of organisms by Reynolds [1] makes clear a need to attempt to unify phenomena to understand how organism movement may have evolved. However, I would contend that the research on Lévy 'movement patterns' we detect in time series of animal movements has to a large extent been misunderstood. The statistical techniques, such as Maximum Likelihood Estimation, used to detect these patterns look only at the statistical distribution of move step-lengths and not at the actual pattern, or structure, of the movement path. The path structure is lost altogether when move step-lengths are sorted prior to analysis. Likewise, the simulated movement paths, with step-lengths drawn from a truncated power law distribution in order to test characteristics of the path, such as foraging efficiency, in no way match the actual paths, or trajectories, of real animals. These statistical distributions are, therefore, null models of searching or foraging activity. What has proved surprising about these step-length distributions is the extent to which they improve the efficiency of random searches over simple Brownian motion. It has been shown unequivocally that a power law distribution of move step lengths is more efficient, in terms of prey items located per unit distance travelled, than any other distribution of move step-lengths so far tested (up to 3 times better than Brownian), and over a range of prey field densities spanning more than 4 orders of magnitude [2].
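
    The null model referred to here, a truncated power law of move step-lengths p(l) ~ l^(-mu) on [lmin, lmax], is straightforward to sample by inverting its CDF; the exponent and truncation bounds below are arbitrary illustrative choices.

        import numpy as np

        def levy_steps(n, mu=2.0, lmin=1.0, lmax=1000.0, seed=0):
            """Draw step lengths from a truncated power law by inverse CDF."""
            u = np.random.default_rng(seed).uniform(size=n)
            a = lmin ** (1 - mu)
            b = lmax ** (1 - mu)
            return (a + u * (b - a)) ** (1 / (1 - mu))

        steps = levy_steps(10000)
        print(steps.min(), np.median(steps), steps.max())
        # many short steps, occasional very long ones (contrast with Brownian)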

  5. Timing Calibration of the NEMO Optical Sensors

    NASA Astrophysics Data System (ADS)

    Circella, M.; de Marzo, C.; Megna, R.; Ruppi, M.

    2006-04-01

    This paper describes the timing calibration system for the NEMO underwater neutrino telescope. The NEMO Project aims at the construction of a km3 detector, equipped with a large number of photomultipliers, in the Mediterranean Sea. We foresee a redundant system to perform the time calibration of our apparatus: 1) A two-step procedure for measuring the offsets in the time measurements of the NEMO optical sensors, so as to measure separately the time delay for the synchronization signals to reach the offshore electronics and the response time of the photomultipliers to calibration signals delivered from optical pulsers through an optical fibre distribution system; 2) an all-optical procedure for measuring the differences in the time offsets of the different optical modules illuminated by calibration pulses. Such a system can be extended to work for a very large apparatus, even for complex arrangements of widely spaced sensors. The NEMO prototyping activities ongoing at a test site off the coast of Sicily will allow the system described in this work to be operated and tested in situ next year.

  6. Does a microprocessor-controlled prosthetic knee affect stair ascent strategies in persons with transfemoral amputation?

    PubMed

    Aldridge Whitehead, Jennifer M; Wolf, Erik J; Scoville, Charles R; Wilken, Jason M

    2014-10-01

    Stair ascent can be difficult for individuals with transfemoral amputation because of the loss of knee function. Most individuals with transfemoral amputation use either a step-to-step (nonreciprocal, advancing one stair at a time) or skip-step strategy (nonreciprocal, advancing two stairs at a time), rather than a step-over-step (reciprocal) strategy, because step-to-step and skip-step allow the leading intact limb to do the majority of work. A new microprocessor-controlled knee (Ottobock X2®) uses flexion/extension resistance to allow step-over-step stair ascent. We compared self-selected stair ascent strategies between conventional and X2® prosthetic knees, examined between-limb differences, and differentiated stair ascent mechanics between X2® users and individuals without amputation. We also determined which factors are associated with differences in knee position during initial contact and swing within X2® users. Fourteen individuals with transfemoral amputation participated in stair ascent sessions while using conventional and X2® knees. Ten individuals without amputation also completed a stair ascent session. Lower-extremity stair ascent joint angles, moment, and powers and ground reaction forces were calculated using inverse dynamics during self-selected strategy and cadence and controlled cadence using a step-over-step strategy. One individual with amputation self-selected a step-over-step strategy while using a conventional knee, while 10 individuals self-selected a step-over-step strategy while using X2® knees. Individuals with amputation used greater prosthetic knee flexion during initial contact (32.5°, p = 0.003) and swing (68.2°, p = 0.001) with higher intersubject variability while using X2® knees compared to conventional knees (initial contact: 1.6°, swing: 6.2°). The increased prosthetic knee flexion while using X2® knees normalized knee kinematics to individuals without amputation during swing (88.4°, p = 0.179) but not during initial contact (65.7°, p = 0.002). Prosthetic knee flexion during initial contact and swing were positively correlated with prosthetic limb hip power during pull-up (r = 0.641, p = 0.046) and push-up/early swing (r = 0.993, p < 0.001), respectively. Participants with transfemoral amputation were more likely to self-select a step-over-step strategy similar to individuals without amputation while using X2® knees than conventional prostheses. Additionally, the increased prosthetic knee flexion used with X2® knees placed large power demands on the hip during pull-up and push-up/early swing. A modified strategy that uses less knee flexion can be used to allow step-over-step ascent in individuals with less hip strength.

  7. A Semi-implicit Method for Time Accurate Simulation of Compressible Flow

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles D.; Moin, Parviz

    2001-11-01

    A semi-implicit method for time accurate simulation of compressible flow is presented. The method avoids the acoustic CFL limitation, allowing a time step restricted only by the convective velocity. Centered discretization in both time and space allows the method to achieve zero artificial attenuation of acoustic waves. The method is an extension of the standard low Mach number pressure correction method to the compressible Navier-Stokes equations, and the main feature of the method is the solution of a Helmholtz type pressure correction equation similar to that of Demirdžić et al. (Int. J. Num. Meth. Fluids, Vol. 16, pp. 1029-1050, 1993). The method is attractive for simulation of acoustic combustion instabilities in practical combustors. In these flows, the Mach number is low; therefore the time step allowed by the convective CFL limitation is significantly larger than that allowed by the acoustic CFL limitation, resulting in significant efficiency gains. Also, the method's property of zero artificial attenuation of acoustic waves is important for accurate simulation of the interaction between acoustic waves and the combustion process. The method has been implemented in a large eddy simulation code, and results from several test cases will be presented.
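
    To make the efficiency argument concrete, the following sketch (our illustration, not the authors' code; the flow and grid parameters are invented) compares the acoustic and convective CFL time-step limits for a representative low Mach number flow:

    ```python
    import numpy as np

    gamma, R = 1.4, 287.0          # air
    T, u = 300.0, 10.0             # temperature [K], convective velocity [m/s]
    c = np.sqrt(gamma * R * T)     # speed of sound, ~347 m/s
    dx, cfl = 1e-3, 0.5            # grid spacing [m], Courant number

    dt_acoustic = cfl * dx / (abs(u) + c)   # explicit schemes: acoustic CFL
    dt_convective = cfl * dx / abs(u)       # semi-implicit: convective CFL only

    print(f"Mach number:           {u / c:.3f}")
    print(f"acoustic-limited dt:   {dt_acoustic:.3e} s")
    print(f"convective-limited dt: {dt_convective:.3e} s")
    print(f"speedup factor:        {dt_convective / dt_acoustic:.1f}x")
    ```

    At Mach 0.03 the convective limit is roughly 35 times larger than the acoustic one, which is the kind of gain the semi-implicit formulation targets.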

  8. Finite difference time domain calculation of transients in antennas with nonlinear loads

    NASA Technical Reports Server (NTRS)

    Luebbers, Raymond J.; Beggs, John H.; Kunz, Karl S.; Chamberlin, Kent

    1991-01-01

    Determining transient electromagnetic fields in antennas with nonlinear loads is a challenging problem. Typical methods used involve calculating frequency domain parameters at a large number of different frequencies, then applying Fourier transform methods plus nonlinear equation solution techniques. If the antenna is simple enough that the open circuit time domain voltage can be determined independently of the effects of the nonlinear load on the antenna's current, time stepping methods can be applied in a straightforward way. Here, transient fields for antennas with more general geometries are calculated directly using Finite Difference Time Domain (FDTD) methods. In each FDTD cell which contains a nonlinear load, a nonlinear equation is solved at each time step. As a test case, the transient current in a long dipole antenna with a nonlinear load excited by a pulsed plane wave is computed using this approach. The results agree well with both calculated and measured results previously published. The approach given here extends the applicability of the FDTD method to problems involving scattering from targets, including nonlinear loads and materials, and to coupling between antennas containing nonlinear loads. It may also be extended to propagation through nonlinear materials.
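
    As a rough, zero-dimensional illustration of the per-cell nonlinear solve described above (not the paper's FDTD code; the diode model and pulse parameters are invented), the sketch below applies a Newton iteration at each time step to a capacitive cell terminated by a diode load:

    ```python
    import numpy as np

    Is, Vt = 1e-12, 0.0259        # diode saturation current [A], thermal voltage [V]
    C, dt = 1e-12, 1e-12          # cell load capacitance [F], time step [s]

    def newton_update(V_old, I_drive, tol=1e-12, max_iter=50):
        """Backward-Euler update: C*(V - V_old)/dt + Is*(exp(V/Vt) - 1) = I_drive."""
        V = V_old
        for _ in range(max_iter):
            f = C * (V - V_old) / dt + Is * (np.exp(V / Vt) - 1.0) - I_drive
            df = C / dt + (Is / Vt) * np.exp(V / Vt)     # analytic Jacobian
            dV = -f / df
            V += dV
            if abs(dV) < tol:
                break
        return V

    # Drive the loaded cell with a Gaussian current pulse (plane-wave surrogate).
    t = np.arange(0.0, 200e-12, dt)
    I_src = 1e-3 * np.exp(-(((t - 50e-12) / 10e-12) ** 2))
    V = np.zeros_like(t)
    for n in range(1, len(t)):
        V[n] = newton_update(V[n - 1], I_src[n])
    print(f"peak load voltage: {V.max():.3f} V (clamped by the diode)")
    ```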

  9. DYCAST: A finite element program for the crash analysis of structures

    NASA Technical Reports Server (NTRS)

    Pifko, A. B.; Winter, R.; Ogilvie, P.

    1987-01-01

    DYCAST is a nonlinear structural dynamic finite element computer code developed for crash simulation. The element library contains stringers, beams, membrane skin triangles, plate bending triangles and spring elements. Changes in structural stiffness due to plasticity and very large deflections are accounted for. Material nonlinearities are accommodated by one of three options: elastic-perfectly plastic, elastic-linear hardening plastic, or elastic-nonlinear hardening plastic of the Ramberg-Osgood type. Geometric nonlinearities are handled in an updated Lagrangian formulation by reforming the structure into its deformed shape after small time increments while accumulating deformations, strains, and forces. The nonlinearities due to combined loadings are maintained, and stiffness variation due to structural failures is computed. Numerical time integrators available are fixed-step central difference, modified Adams, Newmark-beta, and Wilson-theta. The last three have a variable time step capability, which is controlled internally by a solution convergence error measure. Other features include: multiple time-load history tables to subject the structure to time dependent loading; gravity loading; initial pitch, roll, yaw, and translation of the structural model with respect to the global system; a bandwidth optimizer as a pre-processor; and deformed plots and graphics as post-processors.
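
    Of the implicit integrators listed, Newmark-beta is the most widely known; a minimal single-degree-of-freedom sketch (assumed parameters, not DYCAST itself) shows the predictor-corrector structure:

    ```python
    import numpy as np

    m, c, k = 1.0, 0.1, 100.0               # mass, damping, stiffness
    beta, gamma = 0.25, 0.5                 # average-acceleration variant
    dt, n_steps = 0.01, 500

    f = lambda t: 1.0 if t < 0.5 else 0.0   # step load, then free vibration
    x, v = 0.0, 0.0
    a = (f(0.0) - c * v - k * x) / m        # initial acceleration

    hist = []
    for n in range(n_steps):
        t = (n + 1) * dt
        # predictors from the known state
        x_p = x + dt * v + dt**2 * (0.5 - beta) * a
        v_p = v + dt * (1.0 - gamma) * a
        # solve the equation of motion at t_{n+1} for the new acceleration
        a = (f(t) - c * v_p - k * x_p) / (m + gamma * dt * c + beta * dt**2 * k)
        x = x_p + beta * dt**2 * a
        v = v_p + gamma * dt * a
        hist.append(x)
    print(f"displacement at t = 5 s: {hist[-1]:+.4f}")
    ```

    With beta = 1/4 and gamma = 1/2 (average acceleration), the scheme is unconditionally stable for linear problems, which is why such integrators tolerate the larger, internally adapted time steps mentioned above.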

  10. Delivery of high intensity beams with large clad step-index fibers for engine ignition

    NASA Astrophysics Data System (ADS)

    Joshi, Sachin; Wilvert, Nick; Yalin, Azer P.

    2012-09-01

    We show, for the first time, that step-index silica fibers with a large clad (400 μm core and 720 μm clad) can be used to transmit nanosecond duration pulses in a way that allows reliable (consistent) spark formation in atmospheric pressure air by the focused output light from the fiber. The high intensity (>100 GW/cm2) of the focused output light is due to the combination of high output power (typical of fibers of this core size) with high output beam quality (better than that typical of fibers of this core size). The high output beam quality, which enables tight focusing, is due to the large clad which suppresses microbending-induced diffusion of modal power to higher order modes owing to the increased rigidity of the core-clad interface. We also show that extending the pulse duration provides a means to increase the delivered pulse energy (>20 mJ delivered for 50 ns pulses) without causing fiber damage. Based on this ability to deliver high energy sparks, we report the first reliable laser ignition of a natural gas engine including startup under typical procedures using silica fiber optics for pulse delivery.

  11. A Coarse-to-Fine Geometric Scale-Invariant Feature Transform for Large Size High Resolution Satellite Image Registration

    PubMed Central

    Chang, Xueli; Du, Siliang; Li, Yingying; Fang, Shenghui

    2018-01-01

    Large size high resolution (HR) satellite image matching is a challenging task due to local distortion, repetitive structures, intensity changes and low efficiency. In this paper, a novel matching approach is proposed for large size HR satellite image registration, based on a coarse-to-fine strategy and a geometric scale-invariant feature transform (SIFT). In the coarse matching step, a robust matching method, scale-restrict (SR) SIFT, is applied at a low resolution level. The matching results provide geometric constraints which are then used to guide block division and geometric SIFT in the fine matching step. The block matching method overcomes the memory problem. In geometric SIFT, area constraints help validate the candidate matches and decrease the search complexity. To further improve the matching efficiency, the proposed matching method is parallelized using OpenMP. Finally, the sensing image is rectified to the coordinate system of the reference image via a Triangulated Irregular Network (TIN) transformation. Experiments are designed to test the performance of the proposed matching method. The experimental results show that the proposed method decreases the matching time and increases the number of matching points while maintaining high registration accuracy. PMID:29702589
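
    A much-simplified sketch of the coarse-to-fine idea (standard OpenCV SIFT calls; this is not the authors' SR-SIFT/geometric-SIFT implementation, and the file names, scale factor and block size are assumptions):

    ```python
    import cv2
    import numpy as np

    def match_sift(img1, img2, ratio=0.75):
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        if d1 is None or d2 is None:
            return np.empty((0, 2), np.float32), np.empty((0, 2), np.float32)
        pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        p1 = np.float32([k1[m.queryIdx].pt for m in good])
        p2 = np.float32([k2[m.trainIdx].pt for m in good])
        return p1, p2

    ref = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE)
    sen = cv2.imread("sensing.tif", cv2.IMREAD_GRAYSCALE)

    # Coarse step: match at 1/8 resolution to get a global geometric constraint.
    s = 8
    p1, p2 = match_sift(cv2.resize(ref, None, fx=1/s, fy=1/s),
                        cv2.resize(sen, None, fx=1/s, fy=1/s))
    A, inliers = cv2.estimateAffinePartial2D(p2 * s, p1 * s,
                                             method=cv2.RANSAC,
                                             ransacReprojThreshold=5.0)

    # Fine step: match full-resolution blocks; the coarse transform A predicts
    # where each sensing-image block lands, shrinking the search area.
    bs = 1024
    for y in range(0, sen.shape[0] - bs, bs):
        for x in range(0, sen.shape[1] - bs, bs):
            cx, cy = A @ np.array([x + bs / 2, y + bs / 2, 1.0])
            x0, y0 = max(int(cx - bs / 2), 0), max(int(cy - bs / 2), 0)
            ref_blk = ref[y0:y0 + bs, x0:x0 + bs]
            sen_blk = sen[y:y + bs, x:x + bs]
            if ref_blk.size and sen_blk.size:
                q1, q2 = match_sift(ref_blk, sen_blk)  # block-local fine matches
    ```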

  12. Application of novel Modified Biological Aerated Filter (MBAF) as a promising post-treatment for water reuse: Modification in configuration and backwashing process.

    PubMed

    Nikoonahad, Ali; Ghaneian, Mohammad Taghi; Mahvi, Amir Hossein; Ehrampoush, Mohammad Hassan; Ebrahimi, Ali Asghar; Lotfi, Mohammad Hassan; Salamehnejad, Sima

    2017-12-01

    Biological Aerated Filter (BAF) reactors, owing to their plentiful biomass, high shock tolerance, high efficiency, good filtration, availability and lack of need for large land areas, are of great importance in advanced wastewater treatment. Therefore, in this study, Polystyrene Coated by Sand (PCS) was produced as a novel media and its application in a modified down-flow BAF structure for advanced wastewater treatment was assessed in two steps. In step one, the backwash effluent did not return to the system, while in step two the backwash effluent was returned to increase the water reuse efficiency. The backwash process was also studied through three methods: Top Backwashing (TB), Bottom Backwashing (BB), and Top and Bottom Backwashing Simultaneously (TBBS). The results showed that return of the backwash effluent had no significant effect on the BAF effluent quality. In the second step, similar to the first one with slight differences, the residual average concentrations of TSS, BOD5, and COD at the effluent were about 2.5, 8.2, and 25.5 mg/L, respectively. Additionally, in step two, the mean volume of disposal sludge/volume of treated water (vds/vtw) decreased considerably, to about 0.088%. In other words, water reuse increased to more than 99.91%. The backwash times for the TB and BB methods were 65 and 35 min, respectively; in the TBBS method this decreased to 25 min. The concentrations of most effluent parameters in this system are in concordance with the 2012 EPA Agriculture Standards, even for irrigation of non-processed agricultural crops and livestock water consumption. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Transfer effects of step training on stepping performance in untrained directions in older adults: A randomized controlled trial.

    PubMed

    Okubo, Yoshiro; Menant, Jasmine; Udyavar, Manasa; Brodie, Matthew A; Barry, Benjamin K; Lord, Stephen R; L Sturnieks, Daina

    2017-05-01

    Although step training improves the ability to step quickly, some home-based step training systems train limited stepping directions and may cause harm by reducing stepping performance in untrained directions. This study examines the possible transfer effects of step training on stepping performance in untrained directions in older people. Fifty-four older adults were randomized into: forward step training (FT); lateral plus forward step training (FLT); or no training (NT) groups. FT and FLT participants undertook a 15-min training session involving 200 step repetitions. Prior to and post training, choice stepping reaction time and stepping kinematics in untrained, diagonal and lateral directions were assessed. Significant interactions of group and time (pre/post-assessment) were evident for the first step after training, indicating negative (delayed response time) and positive (faster peak stepping speed) transfer effects in the diagonal direction in the FT group. However, when the second to the fifth steps after training were included in the analysis, there were no significant interactions of group and time for measures in the diagonal stepping direction. Step training only in the forward direction improved stepping speed but may acutely slow response times in the untrained diagonal direction. However, this acute effect appears to dissipate after a few repeated step trials. Step training in both forward and lateral directions appears to induce no negative transfer effects in diagonal stepping. These findings suggest home-based step training systems present low risk of harm through negative transfer effects in untrained stepping directions. ANZCTR 369066. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Kinetic method for the large-scale analysis of the binding mechanism of histone deacetylase inhibitors.

    PubMed

    Meyners, Christian; Baud, Matthias G J; Fuchter, Matthew J; Meyer-Almes, Franz-Josef

    2014-09-01

    Performing kinetic studies on protein-ligand interactions provides important information on complex formation and dissociation. Besides kinetic parameters such as association rates and residence times, kinetic experiments also reveal insights into reaction mechanisms. Exploiting intrinsic tryptophan fluorescence, a parallelized high-throughput Förster resonance energy transfer (FRET)-based reporter displacement assay with very low protein consumption was developed to enable the large-scale kinetic characterization of the binding of ligands to recombinant human histone deacetylases (HDACs) and a bacterial histone deacetylase-like amidohydrolase (HDAH) from Bordetella/Alcaligenes. For the binding of trichostatin A (TSA), suberoylanilide hydroxamic acid (SAHA), and two other SAHA derivatives to HDAH, two different modes of action, simple one-step binding and a two-step mechanism comprising initial binding and induced fit, were verified. In contrast to HDAH, all compounds bound to human HDAC1, HDAC6, and HDAC8 through a two-step mechanism. A quantitative view of the inhibitor-HDAC systems revealed two types of interaction, fast binding and slow dissociation. We argue that the relationship between quantitative kinetic and mechanistic information and the chemical structures of compounds will serve as a valuable tool for drug optimization. Copyright © 2014 Elsevier Inc. All rights reserved.
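
    The two mechanisms can be made concrete with a small simulation (generic mass-action rate constants, not values fitted in the paper): under induced fit, the bound signal develops on two timescales, which is what distinguishes the mechanisms in kinetic data.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    P0, L0 = 1e-8, 1e-7            # protein and ligand concentrations [M]
    k1, km1 = 1e6, 1e-2            # encounter-complex association/dissociation
    k2, km2 = 5e-3, 1e-4           # induced-fit isomerization rates [1/s]

    def two_step(t, y):
        P, PLs, PL = y             # free protein, encounter complex, final complex
        L = L0 - PLs - PL          # free ligand by mass balance
        return [-k1 * P * L + km1 * PLs,
                k1 * P * L - (km1 + k2) * PLs + km2 * PL,
                k2 * PLs - km2 * PL]

    sol = solve_ivp(two_step, [0, 2000], [P0, 0.0, 0.0], dense_output=True)
    for ti in np.linspace(0, 2000, 5):
        PLs, PL = sol.sol(ti)[1], sol.sol(ti)[2]
        print(f"t = {ti:7.1f} s   bound fraction = {(PLs + PL) / P0:.3f}")
    ```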

  15. Salient in space, salient in time: Fixation probability predicts fixation duration during natural scene viewing.

    PubMed

    Einhäuser, Wolfgang; Nuthmann, Antje

    2016-09-01

    During natural scene viewing, humans typically attend and fixate selected locations for about 200-400 ms. Two variables characterize such "overt" attention: the probability of a location being fixated, and the fixation's duration. Both variables have been widely researched, but little is known about their relation. We use a two-step approach to investigate the relation between fixation probability and duration. In the first step, we use a large corpus of fixation data. We demonstrate that fixation probability (empirical salience) predicts fixation duration across different observers and tasks. Linear mixed-effects modeling shows that this relation is explained neither by joint dependencies on simple image features (luminance, contrast, edge density) nor by spatial biases (central bias). In the second step, we experimentally manipulate some of these features. We find that fixation probability from the corpus data still predicts fixation duration for this new set of experimental data. This holds even if stimuli are deprived of low-level image features, as long as higher level scene structure remains intact. Together, this shows a robust relation between fixation duration and probability, which does not depend on simple image features. Moreover, the study exemplifies the combination of empirical research on a large corpus of data with targeted experimental manipulations.
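
    A hedged illustration of the kind of mixed-effects analysis described (the column names and input file are invented; the study's actual corpus is not reproduced here):

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per fixation: its duration, the empirical salience (fixation
    # probability) of its location, simple image features, and the observer.
    df = pd.read_csv("fixations.csv")

    # Fixed effects: salience plus low-level features; random intercept per observer.
    model = smf.mixedlm(
        "duration ~ salience + luminance + contrast + edge_density",
        data=df,
        groups=df["observer"],
    )
    print(model.fit().summary())
    ```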

  16. Fast and slow responses of Southern Ocean sea surface temperature to SAM in coupled climate models

    NASA Astrophysics Data System (ADS)

    Kostov, Yavor; Marshall, John; Hausmann, Ute; Armour, Kyle C.; Ferreira, David; Holland, Marika M.

    2017-03-01

    We investigate how sea surface temperatures (SSTs) around Antarctica respond to the Southern Annular Mode (SAM) on multiple timescales. To that end we examine the relationship between SAM and SST within unperturbed preindustrial control simulations of coupled general circulation models (GCMs) included in the Coupled Model Intercomparison Project phase 5 (CMIP5). We develop a technique to extract the response of the Southern Ocean SST (55°S-70°S) to a hypothetical step increase in the SAM index. We demonstrate that in many GCMs, the expected SST step response function is nonmonotonic in time. Following a shift to a positive SAM anomaly, an initial cooling regime can transition into surface warming around Antarctica. However, there are large differences across the CMIP5 ensemble. In some models the step response function never changes sign and cooling persists, while in other GCMs the SST anomaly crosses over from negative to positive values only 3 years after a step increase in the SAM. This intermodel diversity can be related to differences in the models' climatological thermal ocean stratification in the region of seasonal sea ice around Antarctica. Exploiting this relationship, we use observational data for the time-mean meridional and vertical temperature gradients to constrain the real Southern Ocean response to SAM on fast and slow timescales.
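
    The extraction technique can be sketched generically (a stand-in with synthetic noise, not the authors' method or data): regress SST on lagged SAM to estimate an impulse response, then accumulate it into the response to a hypothetical unit step in SAM.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, n_lags = 3000, 240                    # months of control run, lags (20 yr)
    sam = rng.standard_normal(n)             # stand-in for the model SAM index
    sst = rng.standard_normal(n)             # stand-in for Southern Ocean SST

    # Design matrix of lagged SAM values: column j holds SAM at lag j months.
    X = np.column_stack([np.roll(sam, j) for j in range(n_lags)])[n_lags:]
    y = sst[n_lags:]

    impulse, *_ = np.linalg.lstsq(X, y, rcond=None)   # impulse response per lag
    step_response = np.cumsum(impulse)                # response to a unit SAM step

    print("SST anomaly 1, 5, 20 years after a unit SAM step:")
    for yr in (1, 5, 20):
        print(f"  year {yr:2d}: {step_response[12 * yr - 1]:+.3f}")
    ```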

  17. The role of cool-flame chemistry in quasi-steady combustion and extinction of n-heptane droplets

    NASA Astrophysics Data System (ADS)

    Paczko, Guenter; Peters, Norbert; Seshadri, Kalyanasundaram; Williams, Forman Arthur

    2014-09-01

    Experiments on the combustion of large n-heptane droplets, performed by the National Aeronautics and Space Administration aboard the International Space Station, revealed a second stage of continued quasi-steady burning, supported by low-temperature chemistry, that follows radiative extinction of the first stage of burning, which is supported by normal hot-flame chemistry. The second stage of combustion experienced diffusive extinction, after which a large vapour cloud was observed to form around the droplet. In the present work, a 770-step reduced chemical-kinetic mechanism and a new 62-step skeletal chemical-kinetic mechanism, developed as an extension of an earlier 56-step mechanism, are employed to calculate the droplet burning rates, flame structures, and extinction diameters for this cool-flame regime. The calculations are performed for quasi-steady burning with the mixture fraction as the independent variable, which is then related to the physical variables of droplet combustion. The predictions with the new mechanism, which agree well with measured autoignition times, reveal that, in decreasing order of abundance, H2O, CO, H2O2, CH2O, and C2H4 are the principal reaction products during the low-temperature stage and that, during this stage, there is substantial leakage of n-heptane and O2 through the flame, and very little production of CO2; the mechanism contains no soot chemistry. The fuel leakage has been suggested to be the source of the observed vapour cloud that forms after flame extinction. While the new skeletal chemical-kinetic mechanism facilitates understanding of the chemical kinetics and predicts ignition times well, its predicted droplet diameters at extinction are appreciably larger than observed experimentally, but predictions with the 770-step reduced chemical-kinetic mechanism are in reasonably good agreement with experiment. The computations show how the key ketohydroperoxide compounds control the diffusion-flame structure and its extinction.

  18. An Automated Method of Scanning Probe Microscopy (SPM) Data Analysis and Reactive Site Tracking for Mineral-Water Interface Reactions Observed at the Nanometer Scale

    NASA Astrophysics Data System (ADS)

    Campbell, B. D.; Higgins, S. R.

    2008-12-01

    Developing a method for bridging the gap between macroscopic and microscopic measurements of reaction kinetics at the mineral-water interface has important implications in geological and chemical fields. Investigating these reactions on the nanometer scale with SPM is often limited by image analysis and data extraction due to the large quantity of data usually obtained in SPM experiments. Here we present a computer algorithm for automated analysis of mineral-water interface reactions. This algorithm automates the analysis of sequential SPM images by identifying the kinetically active surface sites (i.e., step edges), and by tracking the displacement of these sites from image to image. The step edge positions in each image are readily identified and tracked through time by a standard edge detection algorithm followed by statistical analysis on the Hough Transform of the edge-mapped image. By quantifying this displacement as a function of time, the rate of step edge displacement is determined. Furthermore, the total edge length, also determined from analysis of the Hough Transform, combined with the computed step speed, yields the surface area normalized rate of the reaction. The algorithm was applied to a study of the spiral growth of the calcite(104) surface from supersaturated solutions, yielding results almost 20 times faster than performing this analysis by hand, with results being statistically similar for both analysis methods. This advance in analysis of kinetic data from SPM images will facilitate the building of experimental databases on the microscopic kinetics of mineral-water interface reactions.
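
    A compact sketch of such a pipeline (standard OpenCV calls; the authors' actual thresholds, calibration and Hough-based statistics are not reproduced, and the file names and scale factors are invented):

    ```python
    import cv2
    import numpy as np

    def step_edge_position(frame):
        """Return the mean perpendicular distance (rho) of detected step edges."""
        edges = cv2.Canny(frame, 50, 150)
        lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=100)
        if lines is None:
            return None
        return lines[:, 0, 0].mean()   # distances of detected lines from origin

    # Sequential SPM images (assumed filenames), acquired dt_s seconds apart.
    dt_s, nm_per_px = 30.0, 2.0
    positions = []
    for i in range(100):
        frame = cv2.imread(f"spm_{i:03d}.png", cv2.IMREAD_GRAYSCALE)
        positions.append(step_edge_position(frame))

    pos = np.array([p for p in positions if p is not None])
    step_speed = np.gradient(pos * nm_per_px, dt_s)   # nm/s between frames
    print(f"mean step speed: {step_speed.mean():.3f} nm/s")
    ```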

  19. Real-time traffic sign detection and recognition

    NASA Astrophysics Data System (ADS)

    Herbschleb, Ernst; de With, Peter H. N.

    2009-01-01

    The continuous growth of imaging databases increasingly requires analysis tools for the extraction of features. In this paper, a new architecture for the detection of traffic signs is proposed. The architecture is designed to process a large database with tens of millions of images at resolutions up to 4,800×2,400 pixels. Because of the size of the database, both high reliability and high throughput are required. The novel architecture consists of a three-stage algorithm with multiple steps per stage, combining both color and specific spatial information. The first stage contains an area-limitation step which is performance-critical for both the detection rate and the overall processing time. The second stage locates suggestions for traffic signs using recently published feature processing. The third stage contains a validation step to enhance the reliability of the algorithm. During this stage, the traffic signs are recognized. Experiments show a convincing detection rate of 99%. With respect to computational speed, the throughput for line-of-sight images of 800×600 pixels is 35 Hz and for panorama images it is 4 Hz. Our novel architecture outperforms existing algorithms with respect to both detection rate and throughput.

  20. A Navier-Stokes Chimera Code on the Connection Machine CM-5: Design and Performance

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1994-01-01

    We have implemented a three-dimensional compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the 'chimera' approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. A parallel machine like the CM-5 is well-suited for finite-difference methods on structured grids. The regular pattern of connections of a structured mesh maps well onto the architecture of the machine. So the first design choice, finite differences on a structured mesh, is natural. We use centered differences in space, with added artificial dissipation terms. When numerically solving the Navier-Stokes equations, there are liable to be some mesh cells near a solid body that are small in at least one direction. This mesh cell geometry can impose a very severe CFL (Courant-Friedrichs-Lewy) condition on the time step for explicit time-stepping methods. Thus, though explicit time-stepping is well-suited to the architecture of the machine, we have adopted implicit time-stepping. We have further taken the approximate factorization approach. This creates the need to solve large banded linear systems and creates the first possible barrier to an efficient algorithm. To overcome this first possible barrier we have considered two options. The first is just to solve the banded linear systems with data spread over the whole machine, using whatever fast method is available. This option is adequate for solving scalar tridiagonal systems, but for scalar pentadiagonal or block tridiagonal systems it is somewhat slower than desired. The second option is to 'transpose' the flow and geometry variables as part of the time-stepping process: Start with x-lines of data in-processor. Form explicit terms in x, then transpose so y-lines of data are in-processor. Form explicit terms in y, then transpose so z-lines are in processor. Form explicit terms in z, then solve linear systems in the z-direction. Transpose to the y-direction, then solve linear systems in the y-direction. Finally transpose to the x direction and solve linear systems in the x-direction. This strategy avoids inter-processor communication when differencing and solving linear systems, but requires a large amount of communication when doing the transposes. The transpose method is more efficient than the non-transpose strategy when dealing with scalar pentadiagonal or block tridiagonal systems. For handling geometrically complex problems the chimera strategy was adopted. For multiple zone cases we compute on each zone sequentially (using the whole parallel machine), then send the chimera interpolation data to a distributed data structure (array) laid out over the whole machine. This information transfer implies an irregular communication pattern, and is the second possible barrier to an efficient algorithm. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran. We make use of the Connection Machine Scientific Software Library (CMSSL) for the linear solver and array transpose operations.
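
    The transpose strategy can be illustrated with a serial NumPy stand-in (our sketch; the CM-5 code distributed these arrays across processors, which NumPy does not): each direction's line solves are brought into the fast axis by transposition.

    ```python
    import numpy as np
    from scipy.linalg import solve_banded

    n = 64
    rhs = np.random.rand(n, n, n)            # right-hand side on an n^3 grid

    # Same scalar tridiagonal operator (1, -4, 1) in each direction, for brevity.
    ab = np.zeros((3, n))
    ab[0, 1:], ab[1, :], ab[2, :-1] = 1.0, -4.0, 1.0

    def solve_lines(f):
        """Solve the tridiagonal system along the LAST axis for every line."""
        shp = f.shape
        flat = f.reshape(-1, shp[-1]).T      # (n, n_lines): one column per line
        return solve_banded((1, 1), ab, flat).T.reshape(shp)

    x = solve_lines(rhs)                                         # z-lines
    x = solve_lines(x.transpose(0, 2, 1)).transpose(0, 2, 1)     # y-direction
    x = solve_lines(x.transpose(2, 1, 0)).transpose(2, 1, 0)     # x-direction
    print("approximate-factorization sweep complete:", x.shape)
    ```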

  1. Lead Selenide Colloidal Quantum Dot Solar Cells Achieving High Open-Circuit Voltage with One-Step Deposition Strategy.

    PubMed

    Zhang, Yaohong; Wu, Guohua; Ding, Chao; Liu, Feng; Yao, Yingfang; Zhou, Yong; Wu, Congping; Nakazawa, Naoki; Huang, Qingxun; Toyoda, Taro; Wang, Ruixiang; Hayase, Shuzi; Zou, Zhigang; Shen, Qing

    2018-06-18

    Lead selenide (PbSe) colloidal quantum dots (CQDs) are considered a strong candidate for high-efficiency colloidal quantum dot solar cells (CQDSCs) due to their efficient multiple exciton generation. However, currently, even the best PbSe CQDSCs display an open-circuit voltage (Voc) of only about 0.530 V. Here, we introduce a solution-phase ligand exchange method to prepare PbI2-capped PbSe (PbSe-PbI2) CQD inks, and for the first time, the absorber layer of PbSe CQDSCs was deposited in one step using these PbSe-PbI2 CQD inks. The one-step-deposited PbSe CQD absorber layer exhibits a fast charge transfer rate, reduced energy funneling, and low trap-assisted recombination. The champion large-area (active area 0.35 cm2) PbSe CQDSCs fabricated with one-step PbSe CQDs achieve a power conversion efficiency (PCE) of 6.0% and a Voc of 0.616 V, which is the highest Voc among PbSe CQDSCs reported to date.

  2. Martian stepped-delta formation by rapid water release.

    PubMed

    Kraal, Erin R; van Dijk, Maurits; Postma, George; Kleinhans, Maarten G

    2008-02-21

    Deltas and alluvial fans preserved on the surface of Mars provide an important record of surface water flow. Understanding how surface water flow could have produced the observed morphology is fundamental to understanding the history of water on Mars. To date, morphological studies have provided only minimum time estimates for the longevity of martian hydrologic events, which range from decades to millions of years. Here we use sand flume studies to show that the distinct morphology of martian stepped (terraced) deltas could only have originated from a single basin-filling event on a timescale of tens of years. Stepped deltas therefore provide a minimum and maximum constraint on the duration and magnitude of some surface flows on Mars. We estimate that the amount of water required to fill the basin and deposit the delta is comparable to the amount of water discharged by large terrestrial rivers, such as the Mississippi. The massive discharge, short timescale, and the associated short canyon lengths favour the hypothesis that stepped fans are terraced delta deposits draped over an alluvial fan and formed by water released suddenly from subsurface storage.

  3. A combined optical, SEM and STM study of growth spirals on the polytypic cadmium iodide crystals

    NASA Astrophysics Data System (ADS)

    Singh, Rajendra; Samanta, S. B.; Narlikar, A. V.; Trigunayat, G. C.

    2000-05-01

    Some novel results of a combined sequential study of growth spirals on the basal surface of the richly polytypic CdI2 crystals by optical microscopy, scanning electron microscopy (SEM) and scanning tunneling microscopy (STM) are presented and discussed. Under the high resolution and magnification achieved in the scanning electron microscope, the growth steps of large heights seen in the optical micrographs are found to have a large number of additional steps of smaller heights existing between any two adjacent large height growth steps. When further seen by a scanning tunneling microscope, which provides still higher resolution, sequences of unit substeps, each of height equal to the unit cell height of the underlying polytype, are revealed to exist on the surface. Several large steps also lie between the unit steps, with heights equal to an integral multiple of either the unit cell height of the underlying polytype or the thickness of a molecular sheet I-Cd-I. It is suggested that initially a giant screw dislocation may form by brittle fracture of the crystal platelet, which may gradually decompose into numerous unit dislocations during subsequent crystal growth.

  4. Optimization of the Army’s Fast Neutron Moderator for Radiography

    DTIC Science & Technology

    2013-02-26

    thermal neutron flux from a commercially available high-energy D-T neutron generator. This paper details the steps taken to increase exposure rates...experiment was to have increased thermal neutron flux rates and shorter exposure times than previously achieved. Additional technology developments...This reduced the thermalizing efficiency of the moderator at higher energies, resulted in a large loss of neutron flux at the image plane, and

  5. Trend assessment: applications for hydrology and climate research

    NASA Astrophysics Data System (ADS)

    Kallache, M.; Rust, H. W.; Kropp, J.

    2005-02-01

    The assessment of trends in climatology and hydrology still is a matter of debate. Capturing typical properties of time series, like trends, is highly relevant for the discussion of potential impacts of global warming or flood occurrences. It provides indicators for the separation of anthropogenic signals and natural forcing factors by distinguishing between deterministic trends and stochastic variability. In this contribution river run-off data from gauges in Southern Germany are analysed regarding their trend behaviour by combining a deterministic trend component and a stochastic model part in a semi-parametric approach. In this way the trade-off between trend and autocorrelation structure can be considered explicitly. A test for a significant trend is introduced via three steps: First, a stochastic fractional ARIMA model, which is able to reproduce short-term as well as long-term correlations, is fitted to the empirical data. In a second step, wavelet analysis is used to separate the variability of small and large time-scales assuming that the trend component is part of the latter. Finally, a comparison of the overall variability to that restricted to small scales results in a test for a trend. The extraction of the large-scale behaviour by wavelet analysis provides a clue concerning the shape of the trend.
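
    The wavelet step can be sketched with PyWavelets (an illustration under assumed inputs; the paper's basis choice and the FARIMA-based null distribution are not reproduced):

    ```python
    import numpy as np
    import pywt

    runoff = np.loadtxt("gauge_runoff.txt")       # assumed daily run-off series

    level = 6
    coeffs = pywt.wavedec(runoff, "db4", level=level)

    # Reconstruct the large-scale part (approximation coefficients only):
    # the candidate trend component lives here.
    large = coeffs[:1] + [np.zeros_like(c) for c in coeffs[1:]]
    trend_part = pywt.waverec(large, "db4")[: len(runoff)]
    small_part = runoff - trend_part

    ratio = runoff.var() / small_part.var()
    print(f"total / small-scale variance ratio: {ratio:.2f}")
    # A ratio well above what the fitted FARIMA null model allows at the
    # large scales indicates a deterministic trend.
    ```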

  6. Assessment of automatic ligand building in ARP/wARP.

    PubMed

    Evrard, Guillaume X; Langer, Gerrit G; Perrakis, Anastassis; Lamzin, Victor S

    2007-01-01

    The efficiency of the ligand-building module of ARP/wARP version 6.1 has been assessed through extensive tests on a large variety of protein-ligand complexes from the PDB, as available from the Uppsala Electron Density Server. Ligand building in ARP/wARP involves two main steps: automatic identification of the location of the ligand and the actual construction of its atomic model. The first step is most successful for large ligands. The second step, ligand construction, is more powerful with X-ray data at high resolution and ligands of small to medium size. Both steps are successful for ligands with low to moderate atomic displacement parameters. The results highlight the strengths and weaknesses of both the method of ligand building and the large-scale validation procedure and help to identify means of further improvement.

  7. Spectral Collocation Time-Domain Modeling of Diffractive Optical Elements

    NASA Astrophysics Data System (ADS)

    Hesthaven, J. S.; Dinesen, P. G.; Lynov, J. P.

    1999-11-01

    A spectral collocation multi-domain scheme is developed for the accurate and efficient time-domain solution of Maxwell's equations within multi-layered diffractive optical elements. Special attention is paid to the modeling of out-of-plane waveguide couplers. Emphasis is given to the proper construction of high-order schemes with the ability to handle very general problems of considerable geometric and material complexity. Central questions regarding efficient absorbing boundary conditions and time-stepping issues are also addressed. The efficacy of the overall scheme for the time-domain modeling of electrically large, and computationally challenging, problems is illustrated by solving a number of plane as well as non-plane waveguide problems.
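
    The core building block of such schemes is the collocation differentiation matrix; a standard Chebyshev version (after Trefethen's well-known cheb function, not the authors' Maxwell solver) looks like this:

    ```python
    import numpy as np

    def cheb(N):
        """Chebyshev points and differentiation matrix on [-1, 1]."""
        if N == 0:
            return np.zeros((1, 1)), np.array([1.0])
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        X = np.tile(x, (N + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))          # fix the diagonal (negative row sums)
        return D, x

    D, x = cheb(16)
    u = np.exp(x) * np.sin(5 * x)
    du_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
    print(f"max derivative error: {np.abs(D @ u - du_exact).max():.2e}")
    ```

    The spectral accuracy visible here (errors shrinking exponentially with N) is what allows such schemes to model electrically large problems efficiently.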

  8. Finite-difference fluid dynamics computer mathematical models for the design and interpretation of experiments for space flight. [atmospheric general circulation experiment, convection in a float zone, and the Bridgman-Stockbarger crystal growing system

    NASA Technical Reports Server (NTRS)

    Roberts, G. O.; Fowlis, W. W.; Miller, T. L.

    1984-01-01

    Numerical methods are used to design a spherical baroclinic flow model experiment of the large scale atmospheric flow for Spacelab. The dielectric simulation of radial gravity is only dominant in a low gravity environment. Computer codes are developed to study the processes at work in crystal growing systems which are also candidates for space flight. Crystalline materials rarely achieve their potential properties because of imperfections and component concentration variations. Thermosolutal convection in the liquid melt can be the cause of these imperfections. Such convection is suppressed in a low gravity environment. Two and three dimensional finite difference codes are being used for this work. Nonuniform meshes and implicit iterative methods are used. The iterative method for steady solutions is based on time stepping but has the options of different time steps for velocity and temperature and of a time step varying smoothly with position according to specified powers of the mesh spacings. This allows for more rapid convergence. The code being developed for the crystal growth studies allows for growth of the crystal at the solid-liquid interface. The moving interface is followed using finite differences; shape variations are permitted. For convenience in applying finite differences in the solid and liquid, a time dependent coordinate transformation is used to make this interface a coordinate surface.

  9. Smoothing-based compressed state Kalman filter for joint state-parameter estimation: Applications in reservoir characterization and CO2 storage monitoring

    NASA Astrophysics Data System (ADS)

    Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.

    2017-08-01

    The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step-ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one-step-ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a nonensemble covariance compression scheme that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step-ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that for the same computational cost, combining one-step-ahead smoothing and nonensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.
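
    The one-step-ahead idea can be demonstrated on a toy linear-Gaussian problem (a scalar AR(1) model, not sCSKF, and without the covariance compression): augmenting the state with its previous value makes a standard Kalman filter correct the state one step back in time with each new observation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    a, Q, R = 0.95, 0.1, 0.5          # AR(1) dynamics, process/observation noise

    # Augmented state z_k = [x_k, x_{k-1}]: filtering it jointly means every
    # new observation also re-estimates the state one step back in time.
    Aa = np.array([[a,   0.0],
                   [1.0, 0.0]])
    Ha = np.array([[1.0, 0.0]])
    Qa = np.diag([Q, 0.0])

    x_true, z, P = 0.0, np.zeros(2), np.eye(2)
    for k in range(200):
        x_true = a * x_true + rng.normal(0.0, np.sqrt(Q))
        y = x_true + rng.normal(0.0, np.sqrt(R))
        z, P = Aa @ z, Aa @ P @ Aa.T + Qa                 # predict
        S = (Ha @ P @ Ha.T + R).item()                    # innovation variance
        K = P @ Ha.T / S                                  # Kalman gain (2x1)
        z = z + (K * (y - (Ha @ z).item())).ravel()       # corrects x_k AND x_{k-1}
        P = (np.eye(2) - K @ Ha) @ P
    print(f"filtered x_k:    {z[0]:+.3f}")
    print(f"smoothed x_k-1:  {z[1]:+.3f}  (already uses the observation at k)")
    ```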

  10. A pre-breeding screening program for transgenic boars based on fluorescence in situ hybridization assay.

    PubMed

    Bou, Gerelchimeg; Sun, Mingju; Lv, Ming; Zhu, Jiang; Li, Hui; Wang, Juan; Li, Lu; Liu, Zhongfeng; Zheng, Zhong; He, Wenteng; Kong, Qingran; Liu, Zhonghua

    2014-08-01

    For efficient transgenic herd expansion, only the transgenic animals that possess the ability to transmit the transgene to the next generation are considered for breeding. However, for transgenic pigs, which practically lack a pre-breeding screening program, time, labor and money are always wasted maintaining non-transgenic pigs and pigs with low or null transgene transmission, and on the related fruitless gestations. Developing a pre-breeding screening program would make transgenic herd expansion more economical and efficient. In this technical report, we propose a three-step pre-breeding screening program for transgenic boars, simply combining the fluorescence in situ hybridization (FISH) assay with the common pre-breeding screening workflow. In the first step of screening, combined with general transgenic phenotype analysis, FISH is used to identify transgenic boars. In the second step of screening, combined with a conventional semen test, FISH is used to detect transgenic sperm, and thus to identify the individuals producing high quality semen and transgenic sperm. In the third step of screening, FISH is used to assess in vitro fertilization embryos, thus finally identifying the individuals with the ability to produce transgenic embryos. Through this three-step screening, the non-transgenic boars and the boars with no ability to produce transgenic sperm or transgenic embryos are eliminated; therefore, only those boars that can produce transgenic offspring are maintained and used for breeding and herd expansion. This is the first time a systematic pre-breeding screening program has been proposed for transgenic pigs. The program might also be applied to other transgenic large animals, providing an economical and efficient strategy for herd expansion.

  11. Elite sprinting: are athletes individually step-frequency or step-length reliant?

    PubMed

    Salo, Aki I T; Bezodis, Ian N; Batterham, Alan M; Kerwin, David G

    2011-06-01

    The aim of this study was to investigate the step characteristics among the very best 100-m sprinters in the world to understand whether the elite athletes are individually more reliant on step frequency (SF) or step length (SL). A total of 52 male elite-level 100-m races were recorded from publicly available television broadcasts, with 11 analyzed athletes performing in 10 or more races. For each run of each athlete, the average SF and SL over the whole 100-m distance was analyzed. To determine any SF or SL reliance for an individual athlete, the 90% confidence interval (CI) for the difference between the SF-time versus SL-time relationships was derived using a criterion nonparametric bootstrapping technique. Athletes performed these races with various combinations of SF and SL reliance. Athlete A10 yielded the highest positive CI difference (SL reliance), with a value of 1.05 (CI = 0.50-1.53). The largest negative difference (SF reliance) occurred for athlete A11 as -0.60, with the CI range of -1.20 to 0.03. Previous studies have generally identified only one of these variables to be the main reason for faster running velocities. However, this study showed that there is a large variation of performance patterns among the elite athletes and, overall, SF or SL reliance is a highly individual occurrence. It is proposed that athletes should take this reliance into account in their training, with SF-reliant athletes needing to keep their neural system ready for fast leg turnover and SL-reliant athletes requiring more concentration on maintaining strength levels.
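
    A sketch of the bootstrap construction (synthetic race data and a generic sign convention, not the broadcast measurements or the paper's exact statistic):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    # One athlete's races: 100-m time [s], mean step frequency [Hz], step length [m].
    times = np.array([9.92, 9.95, 9.99, 10.02, 9.90, 9.94, 10.05, 9.97, 9.93, 10.00])
    sf    = np.array([4.72, 4.70, 4.66, 4.64, 4.75, 4.71, 4.60, 4.68, 4.73, 4.65])
    sl    = np.array([2.13, 2.14, 2.15, 2.15, 2.12, 2.13, 2.16, 2.14, 2.12, 2.15])

    def reliance_diff(idx):
        """|corr(SF, time)| - |corr(SL, time)| over the resampled races."""
        r_sf = np.corrcoef(sf[idx], times[idx])[0, 1]
        r_sl = np.corrcoef(sl[idx], times[idx])[0, 1]
        return abs(r_sf) - abs(r_sl)

    n = len(times)
    boot = [reliance_diff(rng.integers(0, n, n)) for _ in range(10000)]
    lo, hi = np.percentile(boot, [5, 95])                 # 90% CI
    print(f"90% CI for |r_SF| - |r_SL|: [{lo:+.2f}, {hi:+.2f}]")
    print("positive -> step-frequency reliant; negative -> step-length reliant")
    ```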

  12. Declines in the dissolved organic carbon (DOC) concentration and flux from the UK

    NASA Astrophysics Data System (ADS)

    Worrall, Fred; Howden, Nicholas J. K.; Burt, Tim P.; Bartlett, Rebecca

    2018-01-01

    Increased concentrations of dissolved organic carbon (DOC) have been reported for many catchments across the northern hemisphere. Hypotheses to explain the increase have varied (e.g., increasing air temperature or recovery from acidification), but one test of the alternative hypotheses is the trend over the most recent decade, with the competing hypotheses predicting: a continuing increase; a rate of increase declining with time; or even a decrease in concentration. In this study, records of DOC concentration in non-tidal rivers across the UK were examined for the period 2003-2012. The study found that: Of the 62 decade-long concentration trends that could be examined, 3 showed a significant increase, 17 experienced no significant change and 42 showed a significant decrease; in 28 of the 42 significant decreases, a significant step change was apparent, with the step change being a decrease in concentration in every case. Of the 118 sites where annual flux and concentration records were available from 1974, 28 showed a significant step change down in flux and 52 showed a step down in concentration. The modal year of the step changes was 2000, with no step changes observed before 1982. At the UK national scale, DOC flux peaked in 2005 at 1354 ktonnes C/yr (5.55 tonnes C/km2/yr) but has declined since. The study suggests that there is a disconnection between DOC records from large catchments at their tidal limits and complementary records from headwater catchments, which means that mechanisms believed to be driving increases in DOC concentrations in headwaters will not necessarily be those controlling trends in DOC concentration further downstream. We propose that the changes identified here have been driven by changes in in-stream processing and changes brought about by the Urban Waste Water Treatment Directive. Therefore, signals identified in headwater catchments may bear little relation to those observed in large rivers much further downstream and vice versa.

  13. Parallel Dynamics Simulation Using a Krylov-Schwarz Linear Solution Scheme

    DOE PAGES

    Abhyankar, Shrirang; Constantinescu, Emil M.; Smith, Barry F.; ...

    2016-11-07

    Fast dynamics simulation of large-scale power systems is a computational challenge because of the need to solve a large set of stiff, nonlinear differential-algebraic equations at every time step. The main bottleneck in dynamic simulations is the solution of a linear system during each nonlinear iteration of Newton's method. In this paper, we present a parallel Krylov-Schwarz linear solution scheme that uses the Krylov subspace-based iterative linear solver GMRES with an overlapping restricted additive Schwarz preconditioner. As a result, performance tests of the proposed Krylov-Schwarz scheme for several large test cases ranging from 2,000 to 20,000 buses, including a real utility network, show good scalability on different computing architectures.
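
    A small serial analogue of the Krylov-Schwarz combination (SciPy GMRES with a block-Jacobi, i.e. non-overlapping Schwarz, preconditioner; the matrix is a stand-in, not a power-system Jacobian, and the authors' implementation is parallel and overlapping):

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 2000
    # A stiff-ish 1D operator as a stand-in for the Newton-step Jacobian.
    A = sp.diags([-1.0, 2.2, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    # Block-Jacobi Schwarz preconditioner: factor each subdomain block once,
    # then apply the subdomain solves inside the preconditioner.
    n_sub = 8
    bounds = np.linspace(0, n, n_sub + 1, dtype=int)
    blocks = [spla.splu(A[s:e, s:e]) for s, e in zip(bounds[:-1], bounds[1:])]

    def precond(r):
        z = np.empty_like(r)
        for i, lu in enumerate(blocks):
            s, e = bounds[i], bounds[i + 1]
            z[s:e] = lu.solve(r[s:e])
        return z

    M = spla.LinearOperator((n, n), matvec=precond)
    x, info = spla.gmres(A, b, M=M)
    print("converged:", info == 0, " residual:", np.linalg.norm(b - A @ x))
    ```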

  14. Parallel Dynamics Simulation Using a Krylov-Schwarz Linear Solution Scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhyankar, Shrirang; Constantinescu, Emil M.; Smith, Barry F.

    Fast dynamics simulation of large-scale power systems is a computational challenge because of the need to solve a large set of stiff, nonlinear differential-algebraic equations at every time step. The main bottleneck in dynamic simulations is the solution of a linear system during each nonlinear iteration of Newton's method. In this paper, we present a parallel Krylov-Schwarz linear solution scheme that uses the Krylov subspace-based iterative linear solver GMRES with an overlapping restricted additive Schwarz preconditioner. As a result, performance tests of the proposed Krylov-Schwarz scheme for several large test cases ranging from 2,000 to 20,000 buses, including a real utility network, show good scalability on different computing architectures.

  15. Time versus energy minimization migration strategy varies with body size and season in long-distance migratory shorebirds.

    PubMed

    Zhao, Meijuan; Christie, Maureen; Coleman, Jonathan; Hassell, Chris; Gosbell, Ken; Lisovski, Simeon; Minton, Clive; Klaassen, Marcel

    2017-01-01

    Migrants have been hypothesised to use different migration strategies between seasons: a time-minimization strategy during their pre-breeding migration towards the breeding grounds and an energy-minimization strategy during their post-breeding migration towards the wintering grounds. Besides season, we propose body size as a key factor in shaping migratory behaviour. Specifically, given that body size is expected to correlate negatively with maximum migration speed and that large birds tend to use more time to complete their annual life-history events (such as moult, breeding and migration), we hypothesise that large-sized species are time stressed all year round. Consequently, large birds are not only likely to adopt a time-minimization strategy during pre-breeding migration, but also during post-breeding migration, to guarantee a timely arrival at both the non-breeding (i.e. wintering) and breeding grounds. We tested this idea using individual tracks across six long-distance migratory shorebird species (family Scolopacidae) along the East Asian-Australasian Flyway varying in size from 50 g to 750 g lean body mass. Migration performance was compared between pre- and post-breeding migration using four quantifiable migratory behaviours that serve to distinguish between a time- and energy-minimization strategy, including migration speed, number of staging sites, total migration distance and step length from one site to the next. During pre- and post-breeding migration, the shorebirds generally covered similar distances, but they tended to migrate faster, used fewer staging sites, and tended to use longer step lengths during pre-breeding migration. These seasonal differences are consistent with the prediction that a time-minimization strategy is used during pre-breeding migration, whereas an energy-minimization strategy is used during post-breeding migration. However, there was also a tendency for the seasonal difference in migration speed to progressively disappear with an increase in body size, supporting our hypothesis that larger species tend to use time-minimization strategies during both pre- and post-breeding migration. Our study highlights that body size plays an important role in shaping migratory behaviour. Larger migratory bird species are potentially time constrained during not only the pre- but also the post-breeding migration. Conservation of their habitats during both seasons may thus be crucial for averting further population declines.

  16. Single-step method for β-galactosidase assays in Escherichia coli using a 96-well microplate reader.

    PubMed

    Schaefer, Jorrit; Jovanovic, Goran; Kotta-Loizou, Ioly; Buck, Martin

    2016-06-15

    Historically, the lacZ gene is one of the most universally used reporters of gene expression in molecular biology. Its activity can be quantified using an artificial substrate, o-nitrophenyl-β-D-galactopyranoside (ONPG). However, the traditional method for measuring LacZ activity (first described by J. H. Miller in 1972) can be challenging for a large number of samples, is prone to variability, and involves hazardous compounds for lysis (e.g., chloroform, toluene). Here we describe a single-step assay using a 96-well microplate reader with a proven alternative cell permeabilization method. This modified protocol reduces handling time by 90%. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Clustering procedures for the optimal selection of data sets from multiple crystals in macromolecular crystallography.

    PubMed

    Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L; Armour, Wes; Waterman, David G; Iwata, So; Evans, Gwyndaf

    2013-08-01

    The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein.
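
    BLEND's actual decision criteria are more elaborate, but the core idea of grouping data sets before merging can be sketched by hierarchical clustering on unit-cell parameters (the example values below are invented):

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # One row per data set: unit-cell parameters a, b, c (angstroms) from indexing.
    cells = np.array([
        [78.1, 78.1, 37.0],
        [78.3, 78.3, 37.1],
        [77.9, 77.9, 36.9],
        [80.5, 80.5, 38.2],   # an outlier crystal form
        [78.2, 78.2, 37.0],
    ])

    Z = linkage(cells, method="ward")
    labels = fcluster(Z, t=1.5, criterion="distance")
    print("cluster label per data set:", labels)
    # Data sets sharing a label are candidates for merging into one set of
    # intensities before scaling.
    ```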

  18. Study of residue type defect formation mechanism and the effect of advanced defect reduction (ADR) rinse process

    NASA Astrophysics Data System (ADS)

    Arima, Hiroshi; Yoshida, Yuichi; Yoshihara, Kosuke; Shibata, Tsuyoshi; Kushida, Yuki; Nakagawa, Hiroki; Nishimura, Yukio; Yamaguchi, Yoshikazu

    2009-03-01

    The residue type defect is one of the yield detractors in the lithography process. It is known that the occurrence of residue type defects depends on the resist development process and that the defect count can be reduced by an optimized rinsing condition. However, defect formation is also affected by the resist materials and substrate conditions. Therefore, it is necessary to optimize the development process conditions for each mask level. Those optimization steps require a large amount of time and effort. The formation mechanism is investigated from the viewpoint of both material and process. Defect formation is affected by the resist material type, substrate condition and development process condition (D.I.W. rinse step). An optimized resist formulation and a new rinse technology significantly reduce the residue type defect.

  19. Clustering procedures for the optimal selection of data sets from multiple crystals in macromolecular crystallography

    PubMed Central

    Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L.; Armour, Wes; Waterman, David G.; Iwata, So; Evans, Gwyndaf

    2013-01-01

    The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein. PMID:23897484

  20. Unique characteristics of motor adaptation during walking in young children.

    PubMed

    Musselman, Kristin E; Patrick, Susan K; Vasudevan, Erin V L; Bastian, Amy J; Yang, Jaynie F

    2011-05-01

    Children show precocious ability in the learning of languages; is this the case with motor learning? We used split-belt walking to probe motor adaptation (a form of motor learning) in children. Data from 27 children (ages 8-36 mo) were compared with those from 10 adults. Children walked with the treadmill belts at the same speed (tied belt), followed by walking with the belts moving at different speeds (split belt) for 8-10 min, followed again by tied-belt walking (postsplit). Initial asymmetries in temporal coordination (i.e., double support time) induced by split-belt walking were slowly reduced, with most children showing an aftereffect (i.e., asymmetry in the opposite direction to the initial) in the early postsplit period, indicative of learning. In contrast, asymmetries in spatial coordination (i.e., center of oscillation) persisted during split-belt walking and no aftereffect was seen. Step length, a measure of both spatial and temporal coordination, showed intermediate effects. The time course of learning in double support and step length was slower in children than in adults. Moreover, there was a significant negative correlation between the size of the initial asymmetry during early split-belt walking (called error) and the aftereffect for step length. Hence, children may have more difficulty learning when the errors are large. The findings further suggest that the mechanisms controlling temporal and spatial adaptation are different and mature at different times.

  1. Real‐time monitoring and control of the load phase of a protein A capture step

    PubMed Central

    Rüdt, Matthias; Brestrich, Nina; Rolinger, Laura

    2016-01-01

    ABSTRACT The load phase in preparative Protein A capture steps is commonly not controlled in real‐time. The load volume is generally based on an offline quantification of the monoclonal antibody (mAb) prior to loading and on a conservative column capacity determined by resin lifetime studies. While this results in a reduced productivity in batch mode, the bottleneck of suitable real‐time analytics has to be overcome in order to enable continuous mAb purification. In this study, Partial Least Squares Regression (PLS) modeling on UV/Vis absorption spectra was applied to quantify mAb in the effluent of a Protein A capture step during the load phase. A PLS model based on several breakthrough curves with variable mAb titers in the HCCF was successfully calibrated. The PLS model predicted the mAb concentrations in the effluent of a validation experiment with a root mean square error (RMSE) of 0.06 mg/mL. The information was applied to automatically terminate the load phase when a product breakthrough of 1.5 mg/mL was reached. In a second part of the study, the sensitivity of the method was further increased by only considering small mAb concentrations in the calibration and by subtracting an impurity background signal. The resulting PLS model exhibited a RMSE of prediction of 0.01 mg/mL and was successfully applied to terminate the load phase when a product breakthrough of 0.15 mg/mL was achieved. The proposed method hence has potential for the real‐time monitoring and control of capture steps at large-scale production. This might enhance the resin capacity utilization, eliminate time‐consuming offline analytics, and contribute to the realization of continuous processing. Biotechnol. Bioeng. 2017;114: 368–373. © 2016 The Authors. Biotechnology and Bioengineering published by Wiley Periodicals, Inc. PMID:27543789
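
    The monitoring step can be sketched with scikit-learn's PLS implementation (the calibration files are assumed; the 1.5 mg/mL set point follows the abstract):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Calibration: UV/Vis absorption spectra recorded during breakthrough runs
    # (rows = time points, columns = wavelengths) with offline mAb references.
    X_cal = np.load("breakthrough_spectra.npy")
    y_cal = np.load("offline_mab_conc.npy")        # mg/mL

    pls = PLSRegression(n_components=5)
    pls.fit(X_cal, y_cal)

    def control_load_phase(spectrum, set_point=1.5):
        """Predict effluent mAb from one spectrum; True -> terminate loading."""
        c = float(pls.predict(spectrum.reshape(1, -1)).ravel()[0])
        return c, c >= set_point

    c, stop = control_load_phase(X_cal[-1])
    print(f"predicted effluent mAb: {c:.2f} mg/mL, terminate load: {stop}")
    ```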

  2. On the large eddy simulation of turbulent flows in complex geometry

    NASA Technical Reports Server (NTRS)

    Ghosal, Sandip

    1993-01-01

    Application of the method of Large Eddy Simulation (LES) to a turbulent flow consists of three separate steps. First, a filtering operation is performed on the Navier-Stokes equations to remove the small spatial scales. The resulting equations that describe the space time evolution of the 'large eddies' contain the subgrid-scale (sgs) stress tensor that describes the effect of the unresolved small scales on the resolved scales. The second step is the replacement of the sgs stress tensor by some expression involving the large scales - this is the problem of 'subgrid-scale modeling'. The final step is the numerical simulation of the resulting 'closed' equations for the large scale fields on a grid small enough to resolve the smallest of the large eddies, but still much larger than the fine scale structures at the Kolmogorov length. In dividing a turbulent flow field into 'large' and 'small' eddies, one presumes that a cut-off length delta can be sensibly chosen such that all fluctuations on a scale larger than delta are 'large eddies' and the remainder constitute the 'small scale' fluctuations. Typically, delta would be a length scale characterizing the smallest structures of interest in the flow. In an inhomogeneous flow, the 'sensible choice' for delta may vary significantly over the flow domain. For example, in a wall bounded turbulent flow, most statistical averages of interest vary much more rapidly with position near the wall than far away from it. Further, there are dynamically important organized structures near the wall on a scale much smaller than the boundary layer thickness. Therefore, the minimum size of eddies that need to be resolved is smaller near the wall. In general, for the LES of inhomogeneous flows, the width of the filtering kernel delta must be considered to be a function of position. If a filtering operation with a nonuniform filter width is performed on the Navier-Stokes equations, one does not in general get the standard large eddy equations. The complication is caused by the fact that a filtering operation with a nonuniform filter width in general does not commute with the operation of differentiation. This is one of the issues that we have looked at in detail as it is basic to any attempt at applying LES to complex geometry flows. Our principal findings are summarized.
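
    The commutation problem mentioned in the summary can be stated compactly. With a filter kernel G of position-dependent width Δ(x) (standard definitions; our notation, not necessarily the report's):

    ```latex
    \begin{align}
      \bar{u}(x) &= \int G\!\big(x - y;\, \Delta(x)\big)\, u(y)\, \mathrm{d}y, \\
      \overline{\frac{\partial u}{\partial x}}
       &= \frac{\partial \bar{u}}{\partial x}
        - \frac{\mathrm{d}\Delta}{\mathrm{d}x}
          \int \frac{\partial G}{\partial \Delta}\big(x - y;\, \Delta(x)\big)\, u(y)\, \mathrm{d}y.
    \end{align}
    ```

    The extra term vanishes only when dΔ/dx = 0, i.e. for a uniform filter width, which is why a nonuniform filter does not in general yield the standard large eddy equations.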

  3. Lagrangian coherent structures during combustion instability in a premixed-flame backward-step combustor.

    PubMed

    Sampath, Ramgopal; Mathur, Manikandan; Chakravarthy, Satyanarayanan R

    2016-12-01

    This paper quantitatively examines the occurrence of large-scale coherent structures in the flow field during combustion instability in comparison with the flow-combustion-acoustic system when it is stable. For this purpose, the features in the recirculation zone of the confined flow past a backward-facing step are studied in terms of Lagrangian coherent structures. The experiments are conducted at a Reynolds number of 18600 and an equivalence ratio of 0.9 of the premixed fuel-air mixture for two combustor lengths, the long duct corresponding to instability and the short one to the stable case. Simultaneous measurements of the velocity field using time-resolved particle image velocimetry and the CH* chemiluminescence of the flame along with pressure time traces are obtained. The extracted ridges of the finite-time Lyapunov exponent (FTLE) fields delineate dynamically distinct regions of the flow field. The presence of large-scale vortical structures and their modulation over different time instants are well captured by the FTLE ridges for the long combustor where high-amplitude acoustic oscillations are self-excited. In contrast, small-scale vortices signifying Kelvin-Helmholtz instability are observed in the short duct case. Saddle-type flow features are found to separate the distinct flow structures for both combustor lengths. The FTLE ridges are found to align with the flame boundaries in the upstream regions, whereas farther downstream, the alignment is weaker due to dilatation of the flow by the flame's heat release. Specifically, the FTLE ridges encompass the flame curl-up for both combustor lengths, and thus act as surrogate flame boundaries. The flame is found to propagate upstream from an earlier vortex roll-up to a newer one along the backward-time FTLE ridge connecting the two structures.
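
    For reference, the FTLE field whose ridges are used here is conventionally computed from the flow map that advects fluid particles from time t_0 to t_0 + T (standard definition; backward-time ridges use T < 0):

    ```latex
    \begin{equation}
      \sigma_{t_0}^{t_0+T}(\mathbf{x}) \;=\; \frac{1}{|T|}\,
        \ln \sqrt{\lambda_{\max}\!\left(
          \big[\nabla \Phi_{t_0}^{t_0+T}(\mathbf{x})\big]^{\top}
          \nabla \Phi_{t_0}^{t_0+T}(\mathbf{x}) \right)} .
    \end{equation}
    ```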

  4. Evaluating the benefits of digital pathology implementation: Time savings in laboratory logistics.

    PubMed

    Baidoshvili, Alexi; Bucur, Anca; van Leeuwen, Jasper; van der Laak, Jeroen; Kluin, Philip; van Diest, Paul J

    2018-06-20

    The benefits of digital pathology for workflow improvement and thereby cost savings in pathology, at least partly outweighing investment costs, are increasingly recognized. Successful implementations in a variety of scenarios start to demonstrate cost benefits of digital pathology for both research and routine diagnostics, contributing to a sound business case encouraging further adoption. To further support new adopters, there is still a need for detailed assessment of the impact this technology has on the relevant pathology workflows with emphasis on time saving. To assess the impact of digital pathology adoption on logistic laboratory tasks (i.e. not including pathologists' time for diagnosis making) in LabPON, a large regional pathology laboratory in The Netherlands. To quantify the benefits of digitization we analyzed the differences between the traditional analog and new digital workflows, carried out detailed measurements of all relevant steps in key analog and digital processes, and compared time spent. We modeled and assessed the logistic savings in five workflows: (1) Routine diagnosis, (2) Multi-disciplinary meeting, (3) External revision requests, (4) Extra stainings and (5) External consultation. On average over 19 working hours were saved on a typical day by working digitally, with the highest savings in routine diagnosis and multi-disciplinary meeting workflows. By working digitally, a significant amount of time could be saved in a large regional pathology lab with a typical case mix. We also present the data in each workflow per task and concrete logistic steps to allow extrapolation to the context and case mix of other laboratories. This article is protected by copyright. All rights reserved.

  5. QuickRNASeq lifts large-scale RNA-seq data analyses to the next level of automation and interactive visualization.

    PubMed

    Zhao, Shanrong; Xi, Li; Quan, Jie; Xi, Hualin; Zhang, Ying; von Schack, David; Vincent, Michael; Zhang, Baohong

    2016-01-08

    RNA sequencing (RNA-seq), a next-generation sequencing technique for transcriptome profiling, is being increasingly used, in part driven by the decreasing cost of sequencing. Nevertheless, the analysis of the massive amounts of data generated by large-scale RNA-seq remains a challenge. Multiple algorithms pertinent to basic analyses have been developed, and there is an increasing need to automate the use of these tools so as to obtain results in an efficient and user-friendly manner. Increased automation and improved visualization of the results will help make the results and findings of the analyses readily available to experimental scientists. By combining the best open source tools developed for RNA-seq data analyses and the most advanced web 2.0 technologies, we have implemented QuickRNASeq, a pipeline for large-scale RNA-seq data analyses and visualization. The QuickRNASeq workflow consists of three main steps. In Step #1, each individual sample is processed, including mapping RNA-seq reads to a reference genome, counting the numbers of mapped reads, quality control of the aligned reads, and SNP (single nucleotide polymorphism) calling. Step #1 is computationally intensive, and can be processed in parallel. In Step #2, the results from individual samples are merged, and an integrated and interactive project report is generated. All analysis results in the report are accessible via a single HTML entry webpage. Step #3 is the data interpretation and presentation step. The rich visualization features implemented here allow end users to interactively explore the results of RNA-seq data analyses, and to gain more insights into RNA-seq datasets. In addition, we used a real world dataset to demonstrate the simplicity and efficiency of QuickRNASeq in RNA-seq data analyses and interactive visualizations. The seamless integration of automated capabilities with interactive visualizations in QuickRNASeq is not available in other published RNA-seq pipelines. The high degree of automation and interactivity in QuickRNASeq leads to a substantial reduction in the time and effort required prior to further downstream analyses and interpretation of the analysis findings. QuickRNASeq advances primary RNA-seq data analyses to the next level of automation, and is mature for public release and adoption.

  6. Isosurface Extraction in Time-Varying Fields Using a Temporal Hierarchical Index Tree

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Gerald-Yamasaki, Michael (Technical Monitor)

    1998-01-01

    Many high-performance isosurface extraction algorithms have been proposed in the past several years as a result of intensive research efforts. When applying these algorithms to large-scale time-varying fields, the storage overhead incurred from storing the search index often becomes overwhelming. This paper proposes an algorithm for locating isosurface cells in time-varying fields. We devise a new data structure, called the Temporal Hierarchical Index Tree, which utilizes the temporal coherence that exists in a time-varying field and adaptively coalesces the cells' extreme values over time; the resulting extreme values are then used to create the isosurface cell search index. For a typical time-varying scalar data set, not only does this temporal hierarchical index tree require much less storage space, but the amount of I/O required to access the indices from the disk at different time steps is also substantially reduced. We illustrate the utility and speed of our algorithm with data from several large-scale time-varying CFD simulations. Our algorithm can achieve more than 80% of disk-space savings when compared with the existing techniques, while the isosurface extraction time is nearly optimal.
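
    A minimal sketch of the coalescing idea behind such a temporal index (data layout, tolerance rule, and node format are hypothetical, not the paper's exact structure): per-cell (min, max) intervals are merged over a span of time steps when the per-step intervals barely differ, and the span is split in time otherwise.

    ```python
    def build(cell_minmax, t0, t1, tol):
        """cell_minmax[t] = (lo, hi) extreme values of one cell at time step t."""
        lo = min(v[0] for v in cell_minmax[t0:t1 + 1])
        hi = max(v[1] for v in cell_minmax[t0:t1 + 1])
        # How much tighter the per-step intervals are than the coalesced one.
        slack = max((v[0] - lo) + (hi - v[1]) for v in cell_minmax[t0:t1 + 1])
        if t0 == t1 or slack <= tol * (hi - lo + 1e-12):
            return {"span": (t0, t1), "range": (lo, hi)}     # coalesced node
        mid = (t0 + t1) // 2                                 # split the time span
        return {"span": (t0, t1),
                "kids": [build(cell_minmax, t0, mid, tol),
                         build(cell_minmax, mid + 1, t1, tol)]}

    def may_contain(node, iso, t):
        """True if this cell may intersect the isosurface `iso` at time step t."""
        if "range" in node:
            lo, hi = node["range"]
            return lo <= iso <= hi
        kid = node["kids"][0] if t <= node["kids"][0]["span"][1] else node["kids"][1]
        return may_contain(kid, iso, t)
    ```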

  7. Teaching with Real-time Earthquake Data in jAmaSeis

    NASA Astrophysics Data System (ADS)

    Bravo, T. K.; Coleman, B.; Taber, J.

    2011-12-01

    Earthquakes can capture the attention of students and inspire them to explore the Earth. The Incorporated Research Institutions in Seismology (IRIS) and Moravian College are collaborating to develop cross-platform software (jAmaSeis) that enables students to access real-time earthquake waveform data. Users can record their own data from several different types of educational seismometers, and they can obtain data in real time from other jAmaSeis users nationwide. Additionally, the ability to stream data from the IRIS Data Management Center (DMC) is under development. Once real-time data are obtained, users of jAmaSeis can study seismological concepts in the classroom. The user interface of the software is carefully designed to lead students through the steps of interrogating seismic data following a large earthquake. Users can process data to determine characteristics of seismograms such as time of occurrence, distance from the epicenter to the station, magnitude, and location (via triangulation). Along the way, the software provides graphical clues to assist student interpretations. In addition to the inherent pedagogical features of the software, IRIS provides pre-packaged data and instructional activities to help students learn the analysis steps. After using these activities, students can apply their skills to interpret seismic waves from their own real-time data.

  8. The Relaxation of Vicinal (001) with ZigZag [110] Steps

    NASA Astrophysics Data System (ADS)

    Hawkins, Micah; Hamouda, Ajmi Bh; González-Cabrera, Diego Luis; Einstein, Theodore L.

    2012-02-01

    This talk presents a kinetic Monte Carlo study of the relaxation dynamics of [110] steps on a vicinal (001) simple cubic surface. This system is interesting because [110] steps have different elementary excitation energetics and favor step diffusion more than close-packed [100] steps. In this talk we show how this leads to relaxation dynamics with greater fluctuations on a shorter time scale for [110] steps, with 2-bond-breaking processes being rate-determining, in contrast to 3-bond-breaking processes for [100] steps. The existence of a steady state is shown via the convergence of terrace-width distributions at times much longer than the relaxation time. In this time regime, excellent fits to the modified generalized Wigner distribution (as well as to the Berry-Robnik model when steps can overlap) were obtained. Also, step-position correlation function data show a diffusion-limited increase for small distances along the step, as well as greater average step displacement for zigzag steps compared to straight steps at somewhat longer distances along the step. Work supported by NSF-MRSEC Grant DMR 05-20471 as well as a DOE-CMCSN Grant.
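
    For reference, the generalized Wigner surmise commonly fitted to terrace-width distributions has the form below, where s is the terrace width normalized by its mean and the constants a and b follow from normalization and unit mean (this generic form is assumed here; the talk's 'modified' variant may differ):

    ```latex
    \begin{equation}
      P(s) \;=\; a\, s^{\varrho}\, e^{-b s^{2}} .
    \end{equation}
    ```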

  9. Integrating viscoelastic mass spring dampers into position-based dynamics to simulate soft tissue deformation in real time

    PubMed Central

    Lu, Yuhua; Liu, Qian

    2018-01-01

    We propose a novel method to simulate soft tissue deformation for virtual surgery applications. The method considers the mechanical properties of soft tissue, such as its viscoelasticity, nonlinearity and incompressibility; its speed, stability and accuracy also meet the requirements for a surgery simulator. Modifying the traditional equation for mass spring dampers (MSD) introduces nonlinearity and viscoelasticity into the calculation of elastic force. Then, the elastic force is used in the constraint projection step for naturally reducing constraint potential. The node position is enforced by the combined spring force and constraint conservative force through Newton's second law. We conduct a comparison study of conventional MSD and position-based dynamics for our new integrating method. Our approach enables stable, fast and large step simulation by freely controlling visual effects based on nonlinearity, viscoelasticity and incompressibility. We implement a laparoscopic cholecystectomy simulator to demonstrate the practicality of our method, in which liver and gallbladder deformation can be simulated in real time. Our method is an appropriate choice for the development of real-time virtual surgery applications. PMID:29515870
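
    A minimal sketch of this kind of coupling, combining a spring-damper force step with position-based distance-constraint projection (all parameters, the data layout, and the iteration count are hypothetical; this is not the authors' exact formulation):

    ```python
    import numpy as np

    def step(x, v, edges, rest, k, c, m, dt, iters=4):
        """x, v: (N, 3) positions/velocities; edges: list of (i, j) index pairs;
        rest: rest lengths; k, c, m: stiffness, damping, nodal mass (stand-ins)."""
        f = np.zeros_like(x)
        for (i, j), L0 in zip(edges, rest):         # spring-damper forces
            d = x[j] - x[i]
            L = np.linalg.norm(d) + 1e-12
            n = d / L
            rel = float(np.dot(v[j] - v[i], n))     # relative speed along spring
            fs = (k * (L - L0) + c * rel) * n       # elastic + viscous part
            f[i] += fs
            f[j] -= fs
        v = v + dt * f / m                          # Newton's second law
        p = x + dt * v                              # predicted positions
        for _ in range(iters):                      # PBD distance projection
            for (i, j), L0 in zip(edges, rest):
                d = p[j] - p[i]
                L = np.linalg.norm(d) + 1e-12
                corr = 0.5 * (L - L0) * d / L
                p[i] += corr
                p[j] -= corr
        v = (p - x) / dt                            # PBD velocity update
        return p, v
    ```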

  10. Step-and-Repeat Nanoimprint-, Photo- and Laser Lithography from One Customised CNC Machine.

    PubMed

    Greer, Andrew Im; Della-Rosa, Benoit; Khokhar, Ali Z; Gadegaard, Nikolaj

    2016-12-01

    The conversion of a computer numerical control machine into a nanoimprint step-and-repeat tool with additional laser- and photolithography capacity is documented here. All three processes, each demonstrated on a variety of photoresists, are performed successfully and analysed so as to enable the reader to relate their known lithography process(es) to the findings. Using the converted tool, 1 cm² of nanopattern may be exposed in 6 s, over 3300 times faster than the electron beam equivalent. Nanoimprint tools are commercially available, but these can cost around 1000 times more than this customised computer numerical control (CNC) machine. The converted equipment facilitates rapid production and large area micro- and nanoscale research on small grants, ultimately enabling faster and more diverse growth in this field of science. In comparison to commercial tools, this converted CNC also boasts capacity to handle larger substrates, temperature control and active force control, up to ten times more curing dose and compactness. Actual devices are fabricated using the machine including an expanded nanotopographic array and microfluidic PDMS Y-channel mixers.

  11. Computational Study of Axisymmetric Off-Design Nozzle Flows

    NASA Technical Reports Server (NTRS)

    DalBello, Teryn; Georgiadis, Nicholas; Yoder, Dennis; Keith, Theo

    2003-01-01

    Computational Fluid Dynamics (CFD) analyses of axisymmetric circular-arc boattail nozzles operating off-design at transonic Mach numbers have been completed. These computations span the very difficult transonic flight regime with shock-induced separations and strong adverse pressure gradients. External afterbody and internal nozzle pressure distributions computed with the Wind code are compared with experimental data. A range of turbulence models was examined, including the Explicit Algebraic Stress model. Computations have been completed at freestream Mach numbers of 0.9 and 1.2, and nozzle pressure ratios (NPR) of 4 and 6. Calculations completed with variable time-stepping (steady-state) did not converge to a true steady-state solution. Calculations obtained using constant time-stepping (time-accurate) indicate smaller variations in flow properties compared with the steady-state solutions. The failure to converge to a steady-state solution was the result of using variable time-stepping with large-scale separations present in the flow. Nevertheless, time-averaged boattail surface pressure coefficients and internal nozzle pressures show reasonable agreement with experimental data. The SST turbulence model demonstrates the best overall agreement with experimental data.

  12. Step-and-Repeat Nanoimprint-, Photo- and Laser Lithography from One Customised CNC Machine

    NASA Astrophysics Data System (ADS)

    Greer, Andrew IM; Della-Rosa, Benoit; Khokhar, Ali Z.; Gadegaard, Nikolaj

    2016-03-01

    The conversion of a computer numerical control machine into a nanoimprint step-and-repeat tool with additional laser- and photolithography capacity is documented here. All three processes, each demonstrated on a variety of photoresists, are performed successfully and analysed so as to enable the reader to relate their known lithography process(es) to the findings. Using the converted tool, 1 cm² of nanopattern may be exposed in 6 s, over 3300 times faster than the electron beam equivalent. Nanoimprint tools are commercially available, but these can cost around 1000 times more than this customised computer numerical control (CNC) machine. The converted equipment facilitates rapid production and large area micro- and nanoscale research on small grants, ultimately enabling faster and more diverse growth in this field of science. In comparison to commercial tools, this converted CNC also boasts capacity to handle larger substrates, temperature control and active force control, up to ten times more curing dose and compactness. Actual devices are fabricated using the machine including an expanded nanotopographic array and microfluidic PDMS Y-channel mixers.

  13. Integrating viscoelastic mass spring dampers into position-based dynamics to simulate soft tissue deformation in real time.

    PubMed

    Xu, Lang; Lu, Yuhua; Liu, Qian

    2018-02-01

    We propose a novel method to simulate soft tissue deformation for virtual surgery applications. The method considers the mechanical properties of soft tissue, such as its viscoelasticity, nonlinearity and incompressibility; its speed, stability and accuracy also meet the requirements for a surgery simulator. Modifying the traditional equation for mass spring dampers (MSD) introduces nonlinearity and viscoelasticity into the calculation of elastic force. Then, the elastic force is used in the constraint projection step for naturally reducing constraint potential. The node position is enforced by the combined spring force and constraint conservative force through Newton's second law. We conduct a comparison study of conventional MSD and position-based dynamics for our new integrating method. Our approach enables stable, fast and large step simulation by freely controlling visual effects based on nonlinearity, viscoelasticity and incompressibility. We implement a laparoscopic cholecystectomy simulator to demonstrate the practicality of our method, in which liver and gallbladder deformation can be simulated in real time. Our method is an appropriate choice for the development of real-time virtual surgery applications.

  14. Construction of large signaling pathways using an adaptive perturbation approach with phosphoproteomic data.

    PubMed

    Melas, Ioannis N; Mitsos, Alexander; Messinis, Dimitris E; Weiss, Thomas S; Saez-Rodriguez, Julio; Alexopoulos, Leonidas G

    2012-04-01

    Construction of large and cell-specific signaling pathways is essential to understand information processing under normal and pathological conditions. On this front, gene-based approaches offer the advantage of large pathway exploration, whereas phosphoproteomic approaches offer a more reliable view of pathway activities but are applicable to small pathway sizes. In this paper, we demonstrate an experimentally adaptive approach to construct large signaling pathways from phosphoproteomic data within a 3-day time frame. Our approach, taking advantage of the fast turnaround time of the xMAP technology, is carried out in four steps: (i) screen optimal pathway inducers, (ii) select the responsive ones, (iii) combine them in a combinatorial fashion to construct a phosphoproteomic dataset, and (iv) optimize a reduced generic pathway via an Integer Linear Programming formulation. As a case study, we uncover novel players and their corresponding pathways in primary human hepatocytes by interrogating the signal transduction downstream of 81 receptors of interest and constructing a detailed model for the responsive part of the network comprising 177 species (of which 14 are measured) and 365 interactions.

  15. Adsorption dynamics of methyl violet onto granulated mesoporous carbon: Facile synthesis and adsorption kinetics.

    PubMed

    Kim, Yohan; Bae, Jiyeol; Park, Hosik; Suh, Jeong-Kwon; You, Young-Woo; Choi, Heechul

    2016-09-15

    A new and facile one-step synthesis method for preparing granulated mesoporous carbon (GMC) with three-dimensional spherical mesoporous symmetry was developed to remove large-molecular-weight organic compounds from the aqueous phase. GMC is synthesized in a single step from as-synthesized mesoporous carbon particles and organic binders through a simple and economical approach involving simultaneous calcination and carbonization. Characterization results obtained from SEM, XRD, and surface and porosity analysis indicate that the synthesized GMC has physical properties similar to those of the powdered mesoporous carbon and maintains the Brunauer-Emmett-Teller (BET) surface area and pore volume, because the new synthesis method prevents the collapse of the pores during granulation. Batch adsorption experiments revealed that GMC has a substantial adsorption capacity (202.8 mg/g) for the removal of methyl violet, a target large-molecular-weight contaminant, from the aqueous phase. The mechanisms and dynamics of adsorption on GMC were also fully examined, revealing that surface diffusion is the rate-limiting step in the adsorption process. In terms of the surface diffusion coefficient, adsorption on GMC is about three times faster than on granular activated carbon. This is the first study, to the best of our knowledge, to synthesize GMC as an adsorbent for water purification by a facile granulation method and to investigate its adsorption kinetics and characteristics. This study introduces a new and simple method for the synthesis of GMC and reveals its adsorption characteristics for large molecular compounds in water treatment. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Building regional early flood warning systems by AI techniques

    NASA Astrophysics Data System (ADS)

    Chang, F. J.; Chang, L. C.; Amin, M. Z. B. M.

    2017-12-01

    Building an early flood warning system is essential to protect residents against flood hazards and to take action to mitigate losses. This study implements AI technology for forecasting multi-step-ahead regional flood inundation maps during storm events. The methodology includes three major schemes: (1) configuring the self-organizing map (SOM) to categorize a large number of regional inundation maps into a meaningful topology; (2) building dynamic neural networks to forecast multi-step-ahead average inundated depths (AID); and (3) adjusting the weights of the selected neuron in the constructed SOM based on the forecasted AID to obtain real-time regional inundation maps. The proposed models are trained and tested on a large number of inundation data sets collected in the regions of the river basin with the most frequent and serious flooding. The results show that the SOM topological relationships between individual neurons and their neighbouring neurons are visible and clearly distinguishable, and that the hybrid model can continuously provide multi-step-ahead regional inundation maps with high resolution during storm events, with relatively small RMSE values and high R² compared with numerical simulation data sets. The computing time is only a few seconds, which enables real-time regional flood inundation forecasting and early flood warning. We demonstrate that the proposed hybrid ANN-based model has a robust and reliable predictive ability and can be used for early warning to mitigate flood disasters.
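
    A rough sketch of how the three schemes could fit together, using the third-party minisom package; the data shapes, SOM size, training parameters, and the scaling rule are all assumptions, not the authors' implementation:

    ```python
    import numpy as np
    from minisom import MiniSom

    # Stand-in data: 500 flattened 40x40 regional inundation maps.
    maps = np.random.rand(500, 40 * 40)

    # Scheme (1): organize the inundation maps on an 8x8 SOM topology.
    som = MiniSom(8, 8, maps.shape[1], sigma=1.5, learning_rate=0.5)
    som.train_random(maps, 5000)

    def forecast_inundation(current_map, aid_forecast):
        """Schemes (2)+(3): given a multi-step-ahead AID forecast produced by a
        separate dynamic network, scale the best-matching neuron's weight map."""
        w = som.get_weights()[som.winner(current_map)]   # representative map
        scale = aid_forecast / max(float(w.mean()), 1e-9)
        return scale * w
    ```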

  17. On the high frequency transfer of mechanical stimuli from the surface of the head to the macular neuroepithelium of the mouse.

    PubMed

    Jones, Timothy A; Lee, Choongheon; Gaines, G Christopher; Grant, J W Wally

    2015-04-01

    Vestibular macular sensors are activated by a shearing motion between the otoconial membrane and the underlying receptor epithelium. Shearing motion and sensory activation in response to an externally induced head motion do not occur instantaneously. The mechanically reactive elastic and inertial properties of the intervening tissue introduce temporal constraints on the transfer of the stimulus to sensors. Treating the otoconial sensory apparatus as an overdamped second-order mechanical system, we measured the governing long time constant (T_L) for stimulus transfer from the head surface to the epithelium. This provided the basis to estimate the corresponding upper cutoff of the frequency response curve for mouse otoconial organs. A velocity step excitation was used as the forcing function. Hypothetically, the onset of the mechanical response to a step excitation follows an exponential rise of the form V_shear = U(1 - e^(-t/T_L)), where U is the applied shearing velocity step amplitude. The response time of the otoconial apparatus was estimated based on the activation threshold of macular neural responses to step stimuli having durations between 0.1 and 2.0 ms. Twenty adult C57BL/6J mice were evaluated. Animals were anesthetized. The head was secured to a shaker platform using a non-invasive head clip or implanted skull screws. The shaker was driven to produce a theoretical forcing step velocity excitation at the otoconial organ. Vestibular sensory evoked potentials (VsEPs) were recorded to measure the threshold for macular neural activation. The duration of the applied step motion was reduced systematically from 2 to 0.1 ms and the response threshold was determined for each duration (nine durations). Hypothetically, the threshold of activation will increase according to the decrease in velocity transfer occurring at shorter step durations. The relationship between neural threshold and stimulus step duration was characterized. Activation threshold increased exponentially as velocity step duration decreased below 1.0 ms. The time constants associated with the exponential curve were T_L = 0.50 ms for the head clip coupling and T_L = 0.79 ms for the skull screw preparation. These corresponded to upper -3 dB frequency cutoff points of approximately 318 and 201 Hz, respectively. T_L ranged from 224 to 379 across individual animals using the head clip coupling. The findings were consistent with a second-order mass-spring mechanical system. Threshold data were also fitted to underdamped models post hoc. The underdamped fits suggested natural resonance frequencies on the order of 278 to 448 Hz, as well as the idea that macular systems in mammals are less damped than generally acknowledged. Although estimated indirectly, it is argued that these time constants reflect largely if not entirely the mechanics of transfer to the sensory apparatus. The estimated governing time constant of 0.50 ms for composite data predicts a high frequency cutoff of at least 318 Hz for the intact otoconial apparatus of the mouse.
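
    The cutoff figures quoted above follow from the first-order low-pass relation between a time constant and its -3 dB frequency:

    ```latex
    \begin{equation}
      f_{-3\,\mathrm{dB}} = \frac{1}{2\pi T_L}, \qquad
      \frac{1}{2\pi \times 0.50\ \mathrm{ms}} \approx 318\ \mathrm{Hz}, \qquad
      \frac{1}{2\pi \times 0.79\ \mathrm{ms}} \approx 201\ \mathrm{Hz}.
    \end{equation}
    ```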

  18. Do not Lose Your Students in Large Lectures: A Five-Step Paper-Based Model to Foster Students’ Participation

    PubMed Central

    Aburahma, Mona Hassan

    2015-01-01

    Like most of the pharmacy colleges in developing countries with high population growth, public pharmacy colleges in Egypt are experiencing a significant increase in students’ enrollment annually due to the large youth population, accompanied by the keenness of students to join pharmacy colleges as a step to a better future career. In this context, large lectures represent a popular approach for teaching the students as economic and logistic constraints prevent splitting them into smaller groups. Nevertheless, the impact of large lectures in relation to student learning has been widely questioned due to their educational limitations, which are related to the passive role the students maintain in lectures. Despite the reported feebleness underlying large lectures and lecturing in general, large lectures will likely continue to be taught in the same format in these countries. Accordingly, to soften the negative impacts of large lectures, this article describes a simple and feasible 5-step paper-based model to transform lectures from a passive information delivery space into an active learning environment. This model mainly suits educational establishments with financial constraints; nevertheless, it can be applied in lectures presented in any educational environment to improve active participation of students. The components and the expected advantages of employing the 5-step paper-based model in large lectures as well as its limitations and ways to overcome them are presented briefly. The impact of applying this model on students’ engagement and learning is currently being investigated. PMID:28975906

  19. A transient response analysis of the space shuttle vehicle during liftoff

    NASA Technical Reports Server (NTRS)

    Brunty, J. A.

    1990-01-01

    A proposed transient response method is formulated for the liftoff analysis of the space shuttle vehicles. It uses a power series approximation with unknown coefficients for the interface forces between the space shuttle and the mobile launch platform. This allows the equations of motion of the two structures to be solved separately, with the unknown coefficients determined at the end of each step. These coefficients are obtained by enforcing the interface compatibility conditions between the two structures. Once the unknown coefficients are determined, the total response is computed for that time step. The method is validated by a numerical example of a cantilevered beam and by the liftoff analysis of the space shuttle vehicles. The proposed method is compared to an iterative transient response analysis method used by Martin Marietta for their space shuttle liftoff analysis. It is shown that the proposed method uses less computer time than the iterative method and does not require as small a time step for integration. The space shuttle vehicle model is reduced using two different types of component mode synthesis (CMS) methods, the Lanczos method and the Craig and Bampton CMS method. By varying the cutoff frequency in the Craig and Bampton method, it was shown that the space shuttle interface loads can be computed with reasonable accuracy. Both the Lanczos CMS method and the Craig and Bampton CMS method give similar results. A substantial amount of computer time is saved using the Lanczos CMS method over the Craig and Bampton method. However, when trying to compute a large number of Lanczos vectors, input/output computer time increased, raising the overall computer time. The application of several liftoff release mechanisms that can be adapted to the proposed method is discussed.
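
    Schematically, the interface force over one integration step is represented as a truncated power series whose coefficients a_k are fixed by enforcing the interface compatibility conditions at the end of the step (notation assumed):

    ```latex
    \begin{equation}
      F(t) \;\approx\; \sum_{k=0}^{n} a_k\, (t - t_m)^k ,
      \qquad t_m \le t \le t_m + \Delta t .
    \end{equation}
    ```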

  20. Real-time fluorescence ligase chain reaction for sensitive detection of single nucleotide polymorphism based on fluorescence resonance energy transfer.

    PubMed

    Sun, Yueying; Lu, Xiaohui; Su, Fengxia; Wang, Limei; Liu, Chenghui; Duan, Xinrui; Li, Zhengping

    2015-12-15

    Most practical methods for the detection of single nucleotide polymorphisms (SNPs) need at least two steps: amplification (usually by PCR) and detection of the SNP using the amplification products. Ligase chain reaction (LCR) can integrate amplification and allele discrimination in one step. However, the detection of LCR products still remains a great challenge for highly sensitive and quantitative SNP detection. Herein, a simple but robust strategy for real-time fluorescence LCR has been developed for highly sensitive and quantitative SNP detection. A pair of LCR probes are first labeled with a fluorophore and a quencher, respectively. When the pair of LCR probes are ligated in LCR, the fluorophore is brought close to the quencher, and thus the fluorescence is specifically quenched by fluorescence resonance energy transfer (FRET). The decrease in fluorescence intensity resulting from FRET can be monitored in real time during the LCR process. With the proposed real-time fluorescence LCR assay, 10 aM DNA targets or 100 pg genomic DNA can be accurately determined, and as low as 0.1% mutant DNA can be detected in the presence of a large excess of wild-type DNA, indicating high sensitivity and specificity. The real-time measurement does not require a separate detection step after LCR and gives a wide dynamic range for the detection of DNA targets (from 10 aM to 1 pM). As LCR has been widely used for the detection of SNPs, DNA methylation, mRNA and microRNA, the real-time fluorescence LCR assay shows great potential for various genetic analyses. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Time-dependent rheological behavior of natural polysaccharide xanthan gum solutions in interrupted shear and step-incremental/reductional shear flow fields

    NASA Astrophysics Data System (ADS)

    Lee, Ji-Seok; Song, Ki-Won

    2015-11-01

    The objective of the present study is to systematically elucidate the time-dependent rheological behavior of concentrated xanthan gum systems in complicated step-shear flow fields. Using a strain-controlled rheometer (ARES), the step-shear flow behavior of a concentrated xanthan gum model solution has been experimentally investigated in interrupted shear flow fields with various combinations of shear rates, shearing times and rest times, and in step-incremental and step-reductional shear flow fields with various shearing times. The main findings obtained from this study are summarized as follows. (i) In interrupted shear flow fields, the shear stress increases sharply until reaching a maximum at an early stage of shearing, after which the stress decays towards a steady state as the shearing time increases, in both start-up shear flow fields. The shear stress drops suddenly immediately after the imposed shear rate is stopped, and then decays slowly during the rest time. (ii) As the rest time increases, the difference in the maximum stress values between the two start-up shear flow fields decreases, whereas the shearing time exerts only a slight influence on this behavior. (iii) In step-incremental shear flow fields, after passing through the maximum stress, structural destruction causes a stress decay towards a steady state as the shearing time increases in each step shear flow region. The time needed to reach the maximum stress is shortened as the step-increased shear rate becomes larger. (iv) In step-reductional shear flow fields, after passing through the minimum stress, structural recovery induces a stress growth towards an equilibrium state as the shearing time increases in each step shear flow region. The time needed to reach the minimum stress is lengthened as the step-decreased shear rate becomes smaller.

  2. Migration mechanisms of a faceted grain boundary

    NASA Astrophysics Data System (ADS)

    Hadian, R.; Grabowski, B.; Finnis, M. W.; Neugebauer, J.

    2018-04-01

    We report molecular dynamics simulations and their analysis for a mixed tilt and twist grain boundary vicinal to the Σ7 symmetric tilt boundary of the type {123} in aluminum. When minimized in energy at 0 K, a grain boundary of this type exhibits nanofacets that contain kinks. We observe that at the higher temperatures of migration simulations, given extended annealing times, it is energetically favorable for these nanofacets to coalesce into a large terrace-facet structure. Therefore, we initiate the simulations from such a structure and study how the boundary migrates as a function of applied driving force and temperature. We find the migration of a faceted boundary can be described in terms of the flow of steps. The migration is dominated at lower driving forces by the collective motion of the steps incorporated in the facet, and at higher driving forces by step detachment from the terrace-facet junction and propagation of steps across the terraces. The velocity of steps on terraces is faster than their velocity when incorporated in the facet, and very much faster than the velocity of the facet profile itself, which is almost stationary. A simple kinetic Monte Carlo model matches the broad kinematic features revealed by the molecular dynamics. Since the mechanisms seem likely to be very general on kinked grain-boundary planes, the step-flow description is a promising approach to more quantitative modeling of general grain boundaries.

  3. Angular measurements of the dynein ring reveal a stepping mechanism dependent on a flexible stalk

    PubMed Central

    Lippert, Lisa G.; Dadosh, Tali; Hadden, Jodi A.; Karnawat, Vishakha; Diroll, Benjamin T.; Murray, Christopher B.; Holzbaur, Erika L. F.; Schulten, Klaus; Reck-Peterson, Samara L.; Goldman, Yale E.

    2017-01-01

    The force-generating mechanism of dynein differs from the force-generating mechanisms of other cytoskeletal motors. To examine the structural dynamics of dynein’s stepping mechanism in real time, we used polarized total internal reflection fluorescence microscopy with nanometer accuracy localization to track the orientation and position of single motors. By measuring the polarized emission of individual quantum nanorods coupled to the dynein ring, we determined the angular position of the ring and found that it rotates relative to the microtubule (MT) while walking. Surprisingly, the observed rotations were small, averaging only 8.3°, and were only weakly correlated with steps. Measurements at two independent labeling positions on opposite sides of the ring showed similar small rotations. Our results are inconsistent with a classic power-stroke mechanism, and instead support a flexible stalk model in which interhead strain rotates the rings through bending and hinging of the stalk. Mechanical compliances of the stalk and hinge determined based on a 3.3-μs molecular dynamics simulation account for the degree of ring rotation observed experimentally. Together, these observations demonstrate that the stepping mechanism of dynein is fundamentally different from the stepping mechanisms of other well-studied MT motors, because it is characterized by constant small-scale fluctuations of a large but flexible structure fully consistent with the variable stepping pattern observed as dynein moves along the MT. PMID:28533393

  4. Gemi: PCR Primers Prediction from Multiple Alignments

    PubMed Central

    Sobhy, Haitham; Colson, Philippe

    2012-01-01

    Designing primers and probes for polymerase chain reaction (PCR) is a preliminary and critical step that requires the identification of highly conserved regions in a given set of sequences. This task can be challenging if the targeted sequences display a high level of diversity, as frequently encountered in microbiologic studies. We developed Gemi, an automated, fast, and easy-to-use bioinformatics tool with a user-friendly interface to design primers and probes based on multiple aligned sequences. This tool can be used for the purpose of real-time and conventional PCR and can deal efficiently with large sets of sequences of a large size. PMID:23316117

  5. Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach

    PubMed Central

    Tian, Yuan; Guan, Tao; Wang, Cheng

    2010-01-01

    To produce a realistic augmentation in Augmented Reality, the correct relative positions of real objects and virtual objects are very important. In this paper, we propose a novel real-time occlusion handling method based on an object tracking approach. Our method is divided into three steps: selection of the occluding object, object tracking and occlusion handling. The user selects the occluding object using an interactive segmentation method. The contour of the selected object is then tracked in the subsequent frames in real-time. In the occlusion handling step, all the pixels on the tracked object are redrawn on the unprocessed augmented image to produce a new synthesized image in which the relative position between the real and virtual object is correct. The proposed method has several advantages. First, it is robust and stable, since it remains effective when the camera is moved through large changes of viewing angles and volumes or when the object and the background have similar colors. Second, it is fast, since the real object can be tracked in real-time. Last, a smoothing technique provides seamless merging between the augmented and virtual object. Several experiments are provided to validate the performance of the proposed method. PMID:22319278
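
    The final compositing step lends itself to a simple masked copy. A minimal sketch (array names and the mask source are assumptions; in the paper's pipeline the mask would be rasterized from the tracked object contour):

    ```python
    import numpy as np

    def compose(real_frame, augmented_frame, mask):
        """Redraw the occluding object's pixels over the augmented frame.

        real_frame, augmented_frame: (H, W, 3) images;
        mask: boolean (H, W) array, True where the tracked real object is.
        """
        out = augmented_frame.copy()
        out[mask] = real_frame[mask]   # real object ends up in front of virtual one
        return out
    ```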

  6. A Semi-implicit Method for Resolution of Acoustic Waves in Low Mach Number Flows

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles D.; Moin, Parviz

    2002-09-01

    A semi-implicit numerical method for time accurate simulation of compressible flow is presented. By extending the low Mach number pressure correction method, a Helmholtz equation for pressure is obtained in the case of compressible flow. The method avoids the acoustic CFL limitation, allowing a time step restricted only by the convective velocity, resulting in significant efficiency gains. Use of a discretization that is centered in both time and space results in zero artificial damping of acoustic waves. The method is attractive for problems in which Mach numbers are low, and the acoustic waves of most interest are those having low frequency, such as acoustic combustion instabilities. Both of these characteristics suggest the use of time steps larger than those allowable by an acoustic CFL limitation. In some cases it may be desirable to include a small amount of numerical dissipation to eliminate oscillations due to small-wavelength, high-frequency, acoustic modes, which are not of interest; therefore, a provision for doing this in a controlled manner is included in the method. Results of the method for several model problems are presented, and the performance of the method in a large eddy simulation is examined.
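
    Schematically, extending the pressure-correction step to compressible flow turns the pressure Poisson equation into a Helmholtz equation of the form below, with sound speed c and time step Δt (an assumed generic form; the exact coefficients depend on the discretization):

    ```latex
    \begin{equation}
      \nabla^{2} p^{\,n+1} \;-\; \frac{1}{c^{2}\,\Delta t^{2}}\, p^{\,n+1}
      \;=\; \mathrm{RHS}\big(u^{n}, p^{n}\big) .
    \end{equation}
    ```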

  7. An approximation method for improving dynamic network model fitting.

    PubMed

    Carnegie, Nicole Bohme; Krivitsky, Pavel N; Hunter, David R; Goodreau, Steven M

    There has been a great deal of interest recently in the modeling and simulation of dynamic networks, i.e., networks that change over time. One promising model is the separable temporal exponential-family random graph model (ERGM) of Krivitsky and Handcock, which treats the formation and dissolution of ties in parallel at each time step as independent ERGMs. However, the computational cost of fitting these models can be substantial, particularly for large, sparse networks. Fitting cross-sectional models for observations of a network at a single point in time, while still a non-negligible computational burden, is much easier. This paper examines model fitting when the available data consist of independent measures of cross-sectional network structure and the duration of relationships under the assumption of stationarity. We introduce a simple approximation to the dynamic parameters for sparse networks with relationships of moderate or long duration, and show that the approximation method works best in precisely those cases where parameter estimation is most likely to fail: networks with very little change at each time step. We consider a variety of cases: Bernoulli formation and dissolution of ties, independent-tie formation and Bernoulli dissolution, independent-tie formation and dissolution, and dependent-tie formation models.
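
    To illustrate the style of approximation involved (a sketch under the assumption of Bernoulli dissolution with geometrically distributed tie durations of mean D; not necessarily the authors' exact estimator): a tie then survives each step with probability 1 - 1/D, so the dissolution-model edge parameter is approximately

    ```latex
    \begin{equation}
      \theta^{-} \;\approx\; \operatorname{logit}\!\left(1 - \frac{1}{D}\right) \;=\; \log\,(D - 1).
    \end{equation}
    ```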

  8. Efficient Grammar Induction Algorithm with Parse Forests from Real Corpora

    NASA Astrophysics Data System (ADS)

    Kurihara, Kenichi; Kameya, Yoshitaka; Sato, Taisuke

    The task of inducing grammar structures has received a great deal of attention, for different reasons: to use grammar induction as the first stage in building large treebanks, or to build better language models. However, grammar induction has inherent computational complexity. To overcome it, some grammar induction algorithms add new production rules incrementally, refining the grammar while keeping the computational complexity low. In this paper, we propose a new efficient grammar induction algorithm. Although our algorithm is similar to algorithms that learn a grammar incrementally, it uses the graphical EM algorithm instead of the Inside-Outside algorithm. We report results of learning experiments in terms of learning speed. The results show that our algorithm learns a grammar in constant time regardless of the size of the grammar. Since our algorithm decreases syntactic ambiguities in each step, it reduces the time required for learning. This constant-time learning considerably affects learning time for larger grammars. We also report results of evaluating criteria for choosing nonterminals. Our algorithm refines a grammar based on a nonterminal in each step. Since there can be several criteria for deciding which nonterminal is best, we evaluate them by learning experiments.

  9. Accuracy of an unstructured-grid upwind-Euler algorithm for the ONERA M6 wing

    NASA Technical Reports Server (NTRS)

    Batina, John T.

    1991-01-01

    Improved algorithms for the solution of the three-dimensional, time-dependent Euler equations are presented for aerodynamic analysis involving unstructured dynamic meshes. The improvements are recent developments to the spatial and temporal discretizations used by unstructured-grid flow solvers. The spatial discretization involves a flux-split approach that is naturally dissipative and captures shock waves sharply, with at most one grid point within the shock structure. The temporal discretization involves either an explicit time-integration scheme using a multistage Runge-Kutta procedure or an implicit time-integration scheme using a Gauss-Seidel relaxation procedure, which is computationally efficient for either steady or unsteady flow problems. With the implicit Gauss-Seidel procedure, very large time steps may be used for rapid convergence to steady state, and the step size for unsteady cases may be selected for temporal accuracy rather than for numerical stability. Steady flow results are presented for both the NACA 0012 airfoil and the Office National d'Etudes et de Recherches Aerospatiales M6 wing to demonstrate applications of the new Euler solvers. The paper presents a description of the Euler solvers along with results and comparisons that assess the capability.

  10. A finite-volume ELLAM for three-dimensional solute-transport modeling

    USGS Publications Warehouse

    Russell, T.F.; Heberton, C.I.; Konikow, Leonard F.; Hornberger, G.Z.

    2003-01-01

    A three-dimensional finite-volume ELLAM method has been developed, tested, and successfully implemented as part of the U.S. Geological Survey (USGS) MODFLOW-2000 ground water modeling package. It is included as a solver option for the Ground Water Transport process. The FVELLAM uses space-time finite volumes oriented along the streamlines of the flow field to solve an integral form of the solute-transport equation, thus combining local and global mass conservation with the advantages of Eulerian-Lagrangian characteristic methods. The USGS FVELLAM code simulates solute transport in flowing ground water for a single dissolved solute constituent and represents the processes of advective transport, hydrodynamic dispersion, mixing from fluid sources, retardation, and decay. Implicit time discretization of the dispersive and source/sink terms is combined with a Lagrangian treatment of advection, in which forward tracking moves mass to the new time level, distributing mass among destination cells using approximate indicator functions. This allows the use of large transport time increments (large Courant numbers) with accurate results, even for advection-dominated systems (large Peclet numbers). Four test cases, including comparisons with analytical solutions and benchmarking against other numerical codes, are presented that indicate that the FVELLAM can usually yield excellent results, even if relatively few transport time steps are used, although the quality of the results is problem-dependent.
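
    For reference, the two dimensionless groups invoked above, with velocity v, time step Δt, grid spacing Δx, and dispersion coefficient D (standard grid-based definitions):

    ```latex
    \begin{equation}
      \mathrm{Cr} = \frac{v\,\Delta t}{\Delta x}, \qquad
      \mathrm{Pe} = \frac{v\,\Delta x}{D}.
    \end{equation}
    ```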

  11. A Method for Large Eddy Simulation of Acoustic Combustion Instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles; Moin, Parviz

    2002-11-01

    A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low Mach number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustic combustion instabilities, since these flows are typically at low Mach number, and the acoustic frequencies of interest are usually low. Both of these characteristics suggest the use of larger time steps than those allowed by an acoustic CFL condition. The turbulent combustion model used is the Combined Conserved Scalar/Level Set Flamelet model of Duchamp de Lageneste and Pitsch for partially premixed combustion. Comparison of LES results to the experiments of Besson et al. will be presented.

  12. Limit theorems for Lévy walks in d dimensions: rare and bulk fluctuations

    NASA Astrophysics Data System (ADS)

    Fouxon, Itzhak; Denisov, Sergey; Zaburdaev, Vasily; Barkai, Eli

    2017-04-01

    We consider super-diffusive Lévy walks in d ≥ 2 dimensions when the duration of a single step, i.e. a ballistic motion performed by a walker, is governed by a power-law tailed distribution of infinite variance and finite mean. We demonstrate that the probability density function (PDF) of the coordinate of the random walker has two different scaling limits at large times. One limit describes the bulk of the PDF. It is the d-dimensional generalization of the one-dimensional Lévy distribution and is the counterpart of the central limit theorem (CLT) for random walks with finite dispersion. In contrast with the one-dimensional Lévy distribution and the CLT, this distribution does not have a universal shape. The PDF reflects the anisotropy of the single-step statistics however large the time is. The other scaling limit, the so-called ‘infinite density’, describes the tail of the PDF, which determines the second (dispersion) and higher moments of the PDF. This limit repeats the angular structure of the PDF of velocity in one step. A typical realization of the walk consists of anomalous diffusive motion (described by the anisotropic d-dimensional Lévy distribution) interspersed with long ballistic flights (described by the infinite density). The long flights are rare, but due to them the coordinate increases so much that their contribution determines the dispersion. We illustrate the concept by considering two types of Lévy walks, with isotropic and anisotropic distributions of velocities. Furthermore, we show that for isotropic but otherwise arbitrary velocity distributions the d-dimensional process can be reduced to a one-dimensional Lévy walk. We briefly discuss the consequences of non-universality for the d > 1 dimensional fractional diffusion equation, in particular the non-uniqueness of the fractional Laplacian.
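
    The single-step duration law behind these results is a power-law tail with finite mean but infinite variance, which pins the tail exponent between 1 and 2 (standard notation, assumed here):

    ```latex
    \begin{equation}
      \psi(\tau) \;\sim\; \tau^{-(1+\alpha)}, \qquad 1 < \alpha < 2 .
    \end{equation}
    ```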

  13. Lightning channel current persists between strokes

    NASA Astrophysics Data System (ADS)

    Wendel, JoAnna

    2014-09-01

    The usual cloud-to-ground lightning occurs when a large negative charge contained in a "stepped leader" travels down toward the Earth's surface. It then meets a positive charge that comes up tens of meters from the ground, resulting in a powerful neutralizing explosion that begins the first return stroke of the lightning flash. The entire flash lasts only a few hundred milliseconds, but during that time, multiple subsequent stroke-return stroke sequences usually occur.

  14. An algorithm for solving the perturbed gas dynamic equations

    NASA Technical Reports Server (NTRS)

    Davis, Sanford

    1993-01-01

    The present application of a compact, higher-order central-difference approximation to the linearized Euler equations illustrates the multimodal character of these equations by means of computations for acoustic, vortical, and entropy waves. Such dissipationless central-difference methods are shown to propagate waves exhibiting excellent phase and amplitude resolution using relatively large time steps; they can be applied to wave problems governed by systems of first-order partial differential equations.
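
    As an example of the class of schemes referred to, the classical fourth-order Padé compact approximation to the first derivative on a uniform grid with spacing h (the paper's exact operator may differ):

    ```latex
    \begin{equation}
      f'_{i-1} + 4\,f'_{i} + f'_{i+1} \;=\; \frac{3}{h}\,\big(f_{i+1} - f_{i-1}\big).
    \end{equation}
    ```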

  15. High-throughput screening of chromatographic separations: IV. Ion-exchange.

    PubMed

    Kelley, Brian D; Switzer, Mary; Bastek, Patrick; Kramarczyk, Jack F; Molnar, Kathleen; Yu, Tianning; Coffman, Jon

    2008-08-01

    Ion-exchange (IEX) chromatography steps are widely applied in protein purification processes because of their high capacity, selectivity, robust operation, and well-understood principles. Optimization of IEX steps typically involves resin screening and selection of the pH and counterion concentrations of the load, wash, and elution steps. Time and material constraints associated with operating laboratory columns often preclude evaluating more than 20-50 conditions during early stages of process development. To overcome this limitation, a high-throughput screening (HTS) system employing a robotic liquid handling system and 96-well filterplates was used to evaluate various operating conditions for IEX steps for monoclonal antibody (mAb) purification. A screening study for an adsorptive cation-exchange step evaluated eight different resins. Sodium chloride concentrations defining the operating boundaries of product binding and elution were established at four different pH levels for each resin. Adsorption isotherms were measured for 24 different pH and salt combinations for a single resin. An anion-exchange flowthrough step was then examined, generating data on mAb adsorption for 48 different combinations of pH and counterion concentration for three different resins. The mAb partition coefficients were calculated and used to estimate the characteristic charge of the resin-protein interaction. Host cell protein and residual Protein A impurity levels were also measured, providing information on selectivity within this operating window. The HTS system shows promise for accelerating process development of IEX steps, enabling rapid acquisition of large datasets addressing the performance of the chromatography step under many different operating conditions. (c) 2008 Wiley Periodicals, Inc.
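
    One common way to turn such partitioning data into a characteristic charge z is the stoichiometric-displacement form, fitting the slope of the measured partition coefficient K_p against counterion concentration on log-log axes (an assumed textbook model, with equilibrium constant K_e; not necessarily the authors' exact treatment):

    ```latex
    \begin{equation}
      \log K_p \;=\; \log K_e \;-\; z\,\log c_{\mathrm{salt}} .
    \end{equation}
    ```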

  16. The relationship between communication scores from the USMLE Step 2 Clinical Skills examination and communication ratings for first-year internal medicine residents.

    PubMed

    Winward, Marcia L; Lipner, Rebecca S; Johnston, Mary M; Cuddy, Monica M; Clauser, Brian E

    2013-05-01

    This study extends available evidence about the relationship between scores on the Step 2 Clinical Skills (CS) component of the United States Medical Licensing Examination and subsequent performance in residency. It focuses on the relationship between Step 2 CS communication and interpersonal skills scores and communication skills ratings that residency directors assign to residents in their first postgraduate year of internal medicine training. It represents the first large-scale evaluation of the extent to which Step 2 CS communication and interpersonal skills scores can be extrapolated to examinee performance in supervised practice. Hierarchical linear modeling techniques were used to examine the relationships among examinee characteristics, residency program characteristics, and residency-director-provided ratings. The sample comprised 6,306 examinees from 238 internal medicine residency programs who completed Step 2 CS for the first time in 2005 and received ratings during their first year of internal medicine residency training. Although the relationship is modest, Step 2 CS communication and interpersonal skills scores predict communication skills ratings for first-year internal medicine residents after accounting for other factors. The results of this study make a reasonable case that Step 2 CS communication and interpersonal skills scores provide useful information for predicting the level of communication skill that examinees will display in their first year of internal medicine residency training. This finding demonstrates some level of extrapolation from the testing context to behavior in supervised practice, thus providing validity-related evidence for using Step 2 CS communication and interpersonal skills scores in high-stakes decisions.

  17. Surface settling in partially filled containers upon step reduction in gravity

    NASA Technical Reports Server (NTRS)

    Weislogel, Mark M.; Ross, Howard D.

    1990-01-01

    A large literature exists concerning the equilibrium configurations of free liquid/gas surfaces in reduced gravity environments. Such conditions generally yield surfaces of constant curvature meeting the container wall at a particular (contact) angle. The time required to reach and stabilize about this configuration is less studied for the case of sudden changes in gravity level, e.g. from normal- to low-gravity, as can occur in many drop tower experiments. The particular interest here was to determine the total reorientation time for such surfaces (mainly in cylinders), primarily as a function of contact angle and kinematic viscosity, in order to aid in the design of drop tower experiments. Tests spanning a large parametric range were performed and, based on an accompanying scale analysis, the complete data set was correlated. The results of other investigations are included for comparison.

  18. Development and integration of block operations for data invariant automation of digital preprocessing and analysis of biological and biomedical Raman spectra.

    PubMed

    Schulze, H Georg; Turner, Robin F B

    2015-06-01

    High-throughput information extraction from large numbers of Raman spectra is becoming an increasingly taxing problem due to the proliferation of new applications enabled using advances in instrumentation. Fortunately, in many of these applications, the entire process can be automated, yielding reproducibly good results with significant time and cost savings. Information extraction consists of two stages, preprocessing and analysis. We focus here on the preprocessing stage, which typically involves several steps, such as calibration, background subtraction, baseline flattening, artifact removal, smoothing, and so on, before the resulting spectra can be further analyzed. Because the results of some of these steps can affect the performance of subsequent ones, attention must be given to the sequencing of steps, the compatibility of these sequences, and the propensity of each step to generate spectral distortions. We outline here important considerations to effect full automation of Raman spectral preprocessing: what is considered full automation; putative general principles to effect full automation; the proper sequencing of processing and analysis steps; conflicts and circularities arising from sequencing; and the need for, and approaches to, preprocessing quality control. These considerations are discussed and illustrated with biological and biomedical examples reflecting both successful and faulty preprocessing.
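
    The sequencing concern is easy to make concrete. A minimal pipeline, assuming generic choices (median-filter despiking, a polynomial baseline, Savitzky-Golay smoothing; these are not the authors' block operations) orders the steps so that cosmic-ray spikes are removed before they can distort the fitted baseline:

    ```python
    import numpy as np
    from scipy.signal import medfilt, savgol_filter

    def preprocess(spectrum, wavenumbers, baseline_order=5):
        """One plausible preprocessing sequence: despike -> flatten -> smooth.
        Order matters: spikes left in place would bias the baseline fit."""
        despiked = medfilt(spectrum, kernel_size=5)        # artifact removal
        coeffs = np.polyfit(wavenumbers, despiked, baseline_order)
        baseline = np.polyval(coeffs, wavenumbers)         # crude polynomial baseline
        flattened = despiked - baseline
        return savgol_filter(flattened, window_length=11, polyorder=3)
    ```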

  19. Postseismic rebound in fault step-overs caused by pore fluid flow

    USGS Publications Warehouse

    Peltzer, G.; Rosen, P.; Rogez, F.; Hudnut, K.

    1996-01-01

    Near-field strain induced by large crustal earthquakes results in changes in pore fluid pressure that dissipate with time and produce surface deformation. Synthetic aperture radar (SAR) interferometry revealed several centimeters of postseismic uplift in pull-apart structures and subsidence in a compressive jog along the Landers, California, 1992 earthquake surface rupture, with a relaxation time of 270 ± 45 days. Such a postseismic rebound may be explained by the transition of the Poisson's ratio of the deformed volumes of rock from undrained to drained conditions as pore fluid flow allows pore pressure to return to hydrostatic equilibrium.
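
    The decay toward hydrostatic equilibrium described here is commonly modeled as first-order; a sketch of that form, with the paper's relaxation time (the exponential functional form is an assumption, not quoted from the source):

    ```latex
    % First-order rebound toward the drained state (assumed functional form),
    % with u_infinity the total postseismic uplift (or subsidence) amplitude:
    \[
    u(t) = u_\infty \left( 1 - e^{-t/\tau} \right), \qquad \tau = 270 \pm 45\ \text{days}.
    \]
    ```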

  20. Cosmetic surgery in times of recession: macroeconomics for plastic surgeons.

    PubMed

    Krieger, Lloyd M

    2002-10-01

    Periods of economic downturn place special demands on the plastic surgeon whose practice involves a large amount of cosmetic surgery. When determining strategy during difficult economic times, it is useful to understand the macroeconomic background of these downturns and to draw lessons from businesses in other service industries. Business cycles and monetary policy determine the overall environment in which plastic surgery is practiced. Plastic surgeons can take both defensive and proactive steps to maintain their profits during recessions and to prepare for the inevitable upturn. Care should also be taken when selecting pricing strategy during economic slowdowns.

  1. The role of particle jamming on the formation and stability of step-pool morphology: insight from a reduced-complexity model

    NASA Astrophysics Data System (ADS)

    Saletti, M.; Molnar, P.; Hassan, M. A.

    2017-12-01

    Granular processes have been recognized as key drivers in earth surface dynamics, especially in steep landscapes because of the large size of sediment found in channels. In this work we focus on step-pool morphologies, studying the effect of particle jamming on step formation. Starting from the jammed-state hypothesis, we assume that grains generate steps because of particle jamming and that those steps are inherently more stable because of additional force chains in the transversal direction. We test this hypothesis with a particle-based reduced-complexity model, CAST2, where sediment is organized in patches and the entrainment, transport, and deposition of grains depend on flow stage and local topography through simplified phenomenological rules. The model operates with two grain sizes: fine grains, which can be mobilized by both large and moderate flows, and coarse grains, which are mobile only during large floods. First, we identify the minimum set of processes necessary to generate and maintain steps in a numerical channel: (a) occurrence of floods, (b) particle jamming, (c) low sediment supply, and (d) presence of sediment with different entrainment probabilities. Numerical results are compared with field observations collected in different step-pool channels in terms of step density, a variable that captures the proportion of the channel occupied by steps. Not only do the longitudinal profiles of numerical channels display step sequences similar to those observed in real step-pool streams, but the values of step density are also very similar when all the processes mentioned above are considered. Moreover, with CAST2 it is possible to run long simulations with repeated flood events, to test the effect of flood frequency on step formation. Numerical results indicate that larger step densities occur in systems more frequently perturbed by floods than in systems with a lower flood frequency. Our results highlight the important interactions between external hydrological forcing and internal geomorphic adjustment (e.g. jamming) in the response of step-pool streams, showing the potential of reduced-complexity models in fluvial geomorphology.
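
    A toy version of a jammed-state entrainment rule, with made-up probabilities (CAST2's actual rules differ), illustrates how coarse lateral neighbours can stabilise a step:

    ```python
    def entrainment_prob(is_coarse, n_coarse_neighbours, flood):
        """Toy jamming rule: a grain's entrainment probability drops when its
        lateral neighbours are also coarse, mimicking transverse force chains.
        All probability values here are illustrative, not CAST2 parameters."""
        if is_coarse and not flood:
            return 0.0                              # coarse grains need large floods
        base = 0.02 if is_coarse else 0.2           # coarse grains move rarely anyway
        return base * 0.5 ** n_coarse_neighbours    # jamming stabilises steps

    # Example: a jammed coarse grain (two coarse neighbours) during a flood
    print(entrainment_prob(True, 2, flood=True))    # -> 0.005
    ```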

  2. Errors in Postural Preparation Lead to Increased Choice Reaction Times for Step Initiation in Older Adults

    PubMed Central

    Nutt, John G.; Horak, Fay B.

    2011-01-01

    Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431

  3. A Novel Porcine Model for Future Studies of Cell-enriched Fat Grafting

    PubMed Central

    Sørensen, Celine L.; Vester-Glowinski, Peter V.; Herly, Mikkel; Kurbegovic, Sorel; Ørholt, Mathias; Svalgaard, Jesper D.; Kølle, Stig-Frederik T.; Kristensen, Annemarie T.; Talman, Maj-Lis M.; Drzewiecki, Krzysztof T.; Fischer-Nielsen, Anne

    2018-01-01

    Background: Cell-enriched fat grafting has shown promising results for improving graft survival, although many questions remain unanswered. A large animal model is crucial for bridging the gap between rodent studies and human trials. We present a step-by-step approach to using the Göttingen minipig as a model for future studies of cell-enriched large-volume fat grafting. Methods: Fat grafting was performed as bolus injections and structural fat grafting. Graft retention was assessed by magnetic resonance imaging after 120 days. The stromal vascular fraction (SVF) was isolated from excised fat and liposuctioned fat from different anatomical sites and analyzed. Porcine adipose-derived stem/stromal cells (ASCs) were cultured in different growth supplements, and population doubling time, maximum cell yield, expression of surface markers, and differentiation potential were investigated. Results: Structural fat grafting in the breast and subcutaneous bolus grafting in the abdomen revealed average graft retention of 53.55% and 15.28%, respectively, which are similar to human reports. Liposuction yielded fewer SVF cells than fat excision, and abdominal fat had the most SVF cells/g fat, with SVF yields similar to humans. Additionally, we demonstrated that porcine ASCs can be readily isolated and expanded in culture in allogeneic porcine platelet lysate and fetal bovine serum and that the use of 10% porcine platelet lysate or 20% fetal bovine serum resulted in population doubling time, maximum cell yield, surface marker profile, and trilineage differentiation that were comparable with humans. Conclusions: The Göttingen minipig is a feasible and cost-effective large animal model for future translational studies of cell-enriched fat grafting. PMID:29876178

  4. Dwell-time algorithm for polishing large optics.

    PubMed

    Wang, Chunjin; Yang, Wei; Wang, Zhenzhong; Yang, Xu; Hu, Chenlin; Zhong, Bo; Guo, Yinbiao; Xu, Qiao

    2014-07-20

    The calculation of the dwell time plays a crucial role in polishing precision large optics. Although some studies exist, it remains a challenge to develop a calculation algorithm that is absolutely stable, together with a high convergence ratio and fast solution speed, even for extremely large mirrors. To this end, we introduce a self-adaptive iterative algorithm to calculate the dwell time in this paper. Simulations were conducted in bonnet polishing (BP) to test the performance of this method on a real 430 mm × 430 mm fused silica part with an initial surface error of PV=1741.29 nm, RMS=433.204 nm. The final surface residual error in the clear aperture after two simulation steps turned out to be PV=11.7 nm, RMS=0.5 nm. The results confirm that this method is stable and has a high convergence ratio and fast solution speed, even with an ordinary computer. Notably, the solution time is usually just a few seconds, even for a 1000 mm × 1000 mm part. Hence, we believe that this method is well suited for polishing large optics. Not only can it be applied to BP, but also to other subaperture deterministic polishing processes.
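
    A generic iterative dwell-time scheme of this family (illustrative only; the paper's self-adaptive algorithm is not reproduced here) updates the dwell map until its convolution with the tool influence function matches the desired removal:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def dwell_time(error_map, tif, n_iter=50, gain=1.0):
        """Iterative dwell-time solver. tif is the tool influence function
        (material removed per unit dwell time); error_map is the desired
        removal. The dwell map is kept non-negative at every iteration."""
        dwell = np.zeros_like(error_map)
        peak = tif.max()
        for _ in range(n_iter):
            removal = fftconvolve(dwell, tif, mode="same")
            residual = error_map - removal
            dwell = np.maximum(dwell + gain * residual / peak, 0.0)
        return dwell
    ```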

  5. Are randomly grown graphs really random?

    PubMed

    Callaway, D S; Hopcroft, J E; Kleinberg, J M; Newman, M E; Strogatz, S H

    2001-10-01

    We analyze a minimal model of a growing network. At each time step, a new vertex is added; then, with probability delta, two vertices are chosen uniformly at random and joined by an undirected edge. This process is repeated for t time steps. In the limit of large t, the resulting graph displays surprisingly rich characteristics. In particular, a giant component emerges in an infinite-order phase transition at delta=1/8. At the transition, the average component size jumps discontinuously but remains finite. In contrast, a static random graph with the same degree distribution exhibits a second-order phase transition at delta=1/4, and the average component size diverges there. These dramatic differences between grown and static random graphs stem from a positive correlation between the degrees of connected vertices in the grown graph: older vertices tend to have higher degree, and to link with other high-degree vertices, merely by virtue of their age. We conclude that grown graphs, however randomly they are constructed, are fundamentally different from their static random graph counterparts.
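
    The growth model is simple enough to simulate directly from the description above; a union-find sketch that tracks the largest component:

    ```python
    import random

    def grow_graph(t, delta, seed=0):
        """Grow a graph for t steps: add a vertex each step; with probability
        delta join two uniformly random vertices. Components are tracked with
        union-find; returns the size of the largest component."""
        random.seed(seed)
        parent = []

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]   # path halving
                i = parent[i]
            return i

        for step in range(t):
            parent.append(step)                 # new isolated vertex
            if step > 0 and random.random() < delta:
                a = find(random.randrange(step + 1))
                b = find(random.randrange(step + 1))
                parent[a] = b                   # join two random vertices
        sizes = {}
        for v in range(t):
            r = find(v)
            sizes[r] = sizes.get(r, 0) + 1
        return max(sizes.values())

    print(grow_graph(100_000, delta=0.2))       # above the delta = 1/8 transition
    ```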

  6. Full allogeneic fusion of embryos in a holothuroid echinoderm.

    PubMed

    Gianasi, Bruno L; Hamel, Jean-François; Mercier, Annie

    2018-05-30

    Whole-body chimaeras (organisms composed of genetically distinct cells) have been directly observed in modular/colonial organisms (e.g. corals, sponges, ascidians); whereas in unitary deuterostomes (including mammals) they have only been detected indirectly through molecular analysis. Here, we document for the first time the step-by-step development of whole-body chimaeras in the holothuroid Cucumaria frondosa, a unitary deuterostome belonging to the phylum Echinodermata. To the best of our knowledge, this is the most derived unitary metazoan in which direct investigation of zygote fusibility has been undertaken. Fusion occurred among hatched blastulae, never during earlier (unhatched) or later (larval) stages. The fully fused chimaeric propagules were two to five times larger than non-chimaeric embryos. Fusion was positively correlated with propagule density and facilitated by the natural tendency of early embryos to agglomerate. The discovery of natural chimaerism in a unitary deuterostome that possesses large externally fertilized eggs provides a framework to explore key aspects of evolutionary biology, histocompatibility and cell transplantation in biomedical research. © 2018 The Author(s).

  7. Pushing particles in extreme fields

    NASA Astrophysics Data System (ADS)

    Gordon, Daniel F.; Hafizi, Bahman; Palastro, John

    2017-03-01

    The update of the particle momentum in an electromagnetic simulation typically employs the Boris scheme, which has the advantage that the magnetic field strictly performs no work on the particle. In an extreme field, however, it is found that onerously small time steps are required to maintain accuracy. One reason for this is that the operator splitting scheme fails. In particular, even if the electric field impulse and magnetic field rotation are computed exactly, a large error remains. The problem can be analyzed for the case of constant, but arbitrarily polarized and independent electric and magnetic fields. The error can be expressed in terms of exponentials of nested commutators of the generators of boosts and rotations. To second order in the field, the Boris scheme causes the error to vanish, but to third order in the field, there is an error that has to be controlled by decreasing the time step. This paper introduces a scheme that avoids this problem entirely, while respecting the property that magnetic fields cannot change the particle energy.
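
    For reference, the standard Boris update that the paper takes as its baseline splits the momentum step into two half electric impulses around an energy-conserving magnetic rotation; a non-relativistic sketch:

    ```python
    import numpy as np

    def boris_push(v, E, B, q, m, dt):
        """Standard (non-relativistic) Boris velocity update: half an electric
        kick, a pure magnetic rotation, then the second half kick. The
        rotation does no work, so |v| is unchanged when E = 0."""
        h = q * dt / (2.0 * m)
        v_minus = v + h * E                       # first half electric kick
        t = h * B                                 # rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)   # full magnetic rotation
        return v_plus + h * E                     # second half electric kick
    ```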

  8. Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions

    NASA Astrophysics Data System (ADS)

    Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.

    2016-09-01

    Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.
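
    The expensive kernel is the matrix-function/vector product. A standard Arnoldi (Krylov projection) approximation of exp(A)v, the approach whose projection-step growth KSS methods are proposed to avoid, looks like this:

    ```python
    import numpy as np
    from scipy.linalg import expm

    def krylov_expv(A, v, m=30):
        """Approximate exp(A) @ v via an m-step Arnoldi process:
        exp(A) v ~ beta * V_m exp(H_m) e_1."""
        n = len(v)
        beta = np.linalg.norm(v)
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        V[:, 0] = v / beta
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):                # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-12:               # happy breakdown
                m = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        return beta * V[:, :m] @ expm(H[:m, :m])[:, 0]
    ```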

  9. Alternative Attitude Commanding and Control for Precise Spacecraft Landing

    NASA Technical Reports Server (NTRS)

    Singh, Gurkirpal

    2004-01-01

    A report proposes an alternative method of control for precision landing on a remote planet. In the traditional method, the attitude of a spacecraft is required to track a commanded translational acceleration vector, which is generated at each time step by solving a two-point boundary value problem. No requirement of continuity is imposed on the acceleration. The translational acceleration does not necessarily vary smoothly. Tracking of a non-smooth acceleration causes the vehicle attitude to exhibit undesirable transients and poor pointing stability behavior. In the alternative method, the two-point boundary value problem is not solved at each time step. A smooth reference position profile is computed. The profile is recomputed only when the control errors get sufficiently large. The nominal attitude is still required to track the smooth reference acceleration command. A steering logic is proposed that controls the position and velocity errors about the reference profile by perturbing the attitude slightly about the nominal attitude. The overall pointing behavior is therefore smooth, greatly reducing the degree of pointing instability.

  10. Sensitivity of the High-Resolution WAM Model with Respect to Time Step

    NASA Astrophysics Data System (ADS)

    Kasemets, K.; Soomere, T.

    The northern part of the Baltic Proper and its subbasins (the Bothnian Sea, the Gulf of Finland, Moonsund) serve as a challenge for wave modellers. Unlike the southern and eastern parts of the Baltic Sea, their coasts are highly irregular and contain many peculiarities with characteristic horizontal scales of the order of a few kilometres. For example, the northern coast of the Gulf of Finland is extremely ragged and contains a huge number of small islands. Its southern coast is more or less regular but has a cliff up to 50 m high that is frequently covered by tall forests. The area also contains numerous banks with water depths of only a couple of metres that may essentially modify wave properties nearby owing to topographical effects. This feature suggests that a high-resolution wave model should be applied to the region in question, with a horizontal resolution of the order of 1 km or even less. According to the Courant-Friedrichs-Lewy criterion, the integration time step for such models must be of the order of a few tens of seconds. A high-resolution WAM model turns out to be fairly sensitive to the particular choice of the time step. In our experiments, a medium-resolution model for the whole Baltic Sea was used, with a horizontal resolution of 3 miles (3' along latitudes and 6' along longitudes) and an angular resolution of 12 directions. The model was run with a steady wind blowing at 20 m/s from different directions and with two time steps (1 and 3 minutes). For most wind directions, the rms difference of significant wave heights calculated with the different time steps did not exceed 10 cm and was typically of the order of a few percent. The difference arose within a few tens of minutes and generally did not increase in further computations. However, in the case of a north wind, the difference increased nearly monotonically and reached 25-35 cm (10-15%) within three hours of integration, whereas the mean significant wave height over the whole Baltic Sea was 2.4 m (1-minute time step) and 2.04 m (3-minute time step), respectively. The most probable reason for such a difference is that the WAM model with a relatively large time step poorly describes wave field evolution in the Åland area, with its extremely ragged bottom topography and coastline. Earlier studies have reported that the WAM model frequently underestimates wave heights in the northern Baltic Proper by 20-30% in the case of strong north storms (Tuomi et al., Report Series of the Finnish Institute of Marine Research, 1999). The described results suggest that part of this underestimation may be removed through a proper choice of the time step.
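
    The "few tens of seconds" follows from the Courant-Friedrichs-Lewy bound; a worked example with illustrative numbers (deep-water group speed of a T = 20 s component, roughly the fastest energy carrier resolved by such models):

    ```latex
    % CFL bound for advecting wave energy on a ~1 km grid (illustrative numbers):
    \[
    c_g = \frac{g T}{4\pi} = \frac{9.81 \times 20}{4\pi} \approx 15.6\ \mathrm{m\,s^{-1}},
    \qquad
    \Delta t \le \frac{\Delta x}{c_g} \approx \frac{1000\ \mathrm{m}}{15.6\ \mathrm{m\,s^{-1}}} \approx 64\ \mathrm{s}.
    \]
    ```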

  11. Promise and problems in using stress triggering models for time-dependent earthquake hazard assessment

    NASA Astrophysics Data System (ADS)

    Cocco, M.

    2001-12-01

    Earthquake stress changes can promote failures on favorably oriented faults and modify the seismicity pattern over broad regions around the causative faults. Because the induced stress perturbations modify the rate of production of earthquakes, they alter the probability of seismic events in a specified time window. Comparing the Coulomb stress changes with seismicity rate changes and aftershock patterns can statistically test the role of stress transfer in earthquake occurrence. The interaction probability may represent a further tool to test the stress trigger or shadow model. The probability model, which incorporates stress transfer, has the main advantage of including the contributions of the induced stress perturbation (a static step in its present formulation), the loading rate, and the fault constitutive properties. Because the mechanical conditions of the secondary faults at the time of application of the induced load are largely unknown, stress triggering can only be tested on fault populations and not on single earthquake pairs with a specified time delay. The interaction probability can represent the most suitable tool to test the interaction between large-magnitude earthquakes. Despite these important implications and the stimulating perspectives, there are problems in understanding earthquake interaction that should motivate future research but at the same time limit its immediate social applications. One major limitation is that we are unable to predict how, and whether, the induced stress perturbations modify the ratio of small to large-magnitude earthquakes. In other words, we cannot distinguish between a change in this ratio in favor of small events or of large-magnitude earthquakes, because the interaction probability is independent of magnitude. Another problem concerns the reconstruction of the stressing history. The interaction probability model is based on the response to a static step; however, we know that other processes contribute to the stressing history perturbing the faults (such as dynamic stress changes, or post-seismic stress changes caused by viscoelastic relaxation or fluid flow). If, for instance, we believe that dynamic stress changes can trigger aftershocks or earthquakes years after the passage of the seismic waves through the fault, the prospect of calculating interaction probability is untenable. It is therefore clear that we have learned a great deal about earthquake interaction by incorporating fault constitutive properties, helping to resolve existing controversies but leaving open questions for future research.

  12. Temporal interpolation alters motion in fMRI scans: Magnitudes and consequences for artifact detection.

    PubMed

    Power, Jonathan D; Plitt, Mark; Kundu, Prantik; Bandettini, Peter A; Martin, Alex

    2017-01-01

    Head motion can be estimated at any point of fMRI image processing. Processing steps involving temporal interpolation (e.g., slice time correction or outlier replacement) often precede motion estimation in the literature. From first principles it can be anticipated that temporal interpolation will alter head motion in a scan. Here we demonstrate this effect and its consequences in five large fMRI datasets. Estimated head motion was reduced by 10-50% or more following temporal interpolation, and reductions were often visible to the naked eye. Such reductions make the data seem to be of improved quality. Such reductions also degrade the sensitivity of analyses aimed at detecting motion-related artifact and can cause a dataset with artifact to falsely appear artifact-free. These reduced motion estimates will be particularly problematic for studies needing estimates of motion in time, such as studies of dynamics. Based on these findings, it is sensible to obtain motion estimates prior to any image processing (regardless of subsequent processing steps and the actual timing of motion correction procedures, which need not be changed). We also find that outlier replacement procedures change signals almost entirely during times of motion and therefore have notable similarities to motion-targeting censoring strategies (which withhold or replace signals entirely during times of motion).
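
    One common motion summary to compute before any interpolation is framewise displacement; the sketch below uses the widely used 50 mm-radius convention for converting rotations to millimetres (a common choice, not prescribed by this abstract):

    ```python
    import numpy as np

    def framewise_displacement(params, radius_mm=50.0):
        """Framewise displacement from 6 rigid-body motion parameters per
        volume (3 translations in mm, 3 rotations in radians). Rotations are
        converted to arc length on a 50 mm sphere. Computing this before
        temporal interpolation avoids the artificially reduced estimates
        described above."""
        diffs = np.abs(np.diff(params, axis=0))
        diffs[:, 3:] *= radius_mm                 # radians -> mm of arc
        return diffs.sum(axis=1)

    motion = np.random.default_rng(0).normal(0, 0.05, size=(200, 6))
    print(framewise_displacement(motion)[:5])
    ```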

  13. "We'll call you when the results are in": Preferences for how medical test results are delivered.

    PubMed

    Dooley, Michael D; Burreal, Shay; Sweeny, Kate

    2017-02-01

    Whether healthy or sick, adults undergo frequent medical testing; however, no guidelines currently exist as to how patients are informed of their medical test results. This short report provides an initial look at how healthcare professionals deliver medical test results and patient preferences regarding these procedures. We specifically focus on two options for delivery of results: (1) open-ended timing, in which patients are contacted without warning when test results become available; or (2) closed-ended timing, in which patients are provided with a specific day and time when they will learn their test results. Participants who underwent a recent medical test indicated which delivery method their healthcare professional provided and their preferred method. Findings demonstrate a large discrepancy between actual and preferred timing, stemming from a general trend towards providing open-ended timing, whereas patient preferences were evenly split between the two options. This study provides a first step in understanding the merits of two options for delivering medical test results to patients and suggests an opportunity to improve patient care. The findings from this study provide first steps toward the development of guidelines for delivering test results in ways that maximize the quality of patient care. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. Individual-based modelling of population growth and diffusion in discrete time.

    PubMed

    Tkachenko, Natalie; Weissmann, John D; Petersen, Wesley P; Lake, George; Zollikofer, Christoph P E; Callegari, Simone

    2017-01-01

    Individual-based models (IBMs) of human populations capture spatio-temporal dynamics using rules that govern the birth, behavior, and death of individuals. We explore a stochastic IBM of logistic growth-diffusion with constant time steps and independent, simultaneous actions of birth, death, and movement that approaches the Fisher-Kolmogorov model in the continuum limit. This model is well-suited to parallelization on high-performance computers. We explore its emergent properties with analytical approximations and numerical simulations in parameter ranges relevant to human population dynamics and ecology, and reproduce continuous-time results in the limit of small transition probabilities. Our model predicts that the population density and dispersal speed are affected by fluctuations in the number of individuals. The discrete-time model displays novel properties owing to the binomial character of the fluctuations: in certain regimes of the growth model, a decrease in time step size drives the system away from the continuum limit. These effects are especially important at local population sizes of <50 individuals, which largely correspond to group sizes of hunter-gatherers. As an application scenario, we model the late Pleistocene dispersal of Homo sapiens into the Americas, and discuss the agreement of model-based estimates of first-arrival dates with archaeological dates as a function of IBM parameter settings.
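
    A minimal sketch of one discrete time step of such an IBM on a periodic 1-D lattice, with illustrative rates (in the spirit of the model, not the authors' parameterisation or code); the binomial draws are the source of the fluctuation effects discussed above:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def step(pop, b=0.1, d=0.05, m=0.2, K=50):
        """One time step: binomial logistic births, binomial deaths, and
        binomial movement to a random neighbouring cell. For simplicity,
        deaths are resolved before movement here."""
        births = rng.binomial(pop, np.clip(b * (1.0 - pop / K), 0.0, 1.0))
        survivors = pop - rng.binomial(pop, d)
        movers = rng.binomial(survivors, m)
        left = rng.binomial(movers, 0.5)
        right = movers - left
        arrivals = np.roll(left, -1) + np.roll(right, 1)   # periodic lattice
        return survivors - movers + arrivals + births

    pop = np.zeros(200, dtype=int)
    pop[100] = 20                       # seed a small population mid-domain
    for _ in range(500):
        pop = step(pop)
    print(pop.sum(), (pop > 0).sum())   # total size and occupied range
    ```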

  15. Temporal interpolation alters motion in fMRI scans: Magnitudes and consequences for artifact detection

    PubMed Central

    Plitt, Mark; Kundu, Prantik; Bandettini, Peter A.; Martin, Alex

    2017-01-01

    Head motion can be estimated at any point of fMRI image processing. Processing steps involving temporal interpolation (e.g., slice time correction or outlier replacement) often precede motion estimation in the literature. From first principles it can be anticipated that temporal interpolation will alter head motion in a scan. Here we demonstrate this effect and its consequences in five large fMRI datasets. Estimated head motion was reduced by 10–50% or more following temporal interpolation, and reductions were often visible to the naked eye. Such reductions make the data seem to be of improved quality. Such reductions also degrade the sensitivity of analyses aimed at detecting motion-related artifact and can cause a dataset with artifact to falsely appear artifact-free. These reduced motion estimates will be particularly problematic for studies needing estimates of motion in time, such as studies of dynamics. Based on these findings, it is sensible to obtain motion estimates prior to any image processing (regardless of subsequent processing steps and the actual timing of motion correction procedures, which need not be changed). We also find that outlier replacement procedures change signals almost entirely during times of motion and therefore have notable similarities to motion-targeting censoring strategies (which withhold or replace signals entirely during times of motion). PMID:28880888

  16. Numerical Issues Associated with Compensating and Competing Processes in Climate Models: an Example from ECHAM-HAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Hui; Rasch, Philip J.; Zhang, Kai

    2013-06-26

    The purpose of this paper is to draw attention to the need for appropriate numerical techniques to represent process interactions in climate models. In two versions of the ECHAM-HAM model, different time integration methods are used to solve the sulfuric acid (H2SO4) gas evolution equation, which lead to substantially different results in the H2SO4 gas concentration and the aerosol nucleation rate. Using convergence tests and sensitivity simulations performed with various time stepping schemes, it is confirmed that numerical errors in the second model version are significantly smaller than those in version one. The use of sequential operator splitting in combination with a long time step is identified as the main reason for the large systematic biases in the old model. The remaining errors in version two in the nucleation rate, related to the competition between condensation and nucleation, have a clear impact on the simulated concentration of cloud condensation nuclei in the lower troposphere. These errors can be significantly reduced by employing an implicit solver that handles production, condensation and nucleation at the same time. Lessons learned in this work underline the need for more caution when treating multi-time-scale problems involving compensating and competing processes, a common occurrence in current climate models.
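
    The splitting bias is easy to reproduce on a toy version of the problem: constant production P competing with first-order loss of time scale tau (parameter values below are illustrative, not ECHAM-HAM's):

    ```python
    import numpy as np

    # dc/dt = P - c/tau has the exact solution c(t) = P*tau + (c0 - P*tau)*exp(-t/tau)
    P, tau, c0, dt = 1.0e4, 600.0, 0.0, 1800.0   # production, loss time, IC, long step

    exact = P * tau + (c0 - P * tau) * np.exp(-dt / tau)

    # Sequential operator splitting over one long step:
    c = c0 + P * dt                  # step 1: apply all production first...
    c *= np.exp(-dt / tau)           # step 2: ...then apply the loss to the total
    print(exact, c)                  # splitting badly underestimates c when dt >> tau
    ```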

  17. Optimal Padding for the Two-Dimensional Fast Fourier Transform

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.; Aronstein, David L.; Smith, Jeffrey S.

    2011-01-01

    One-dimensional Fast Fourier Transform (FFT) operations work fastest on grids whose size is divisible by a power of two. Because of this, padding grids (that are not already sized to a power of two) so that their size is the next highest power of two can speed up operations. While this works well for one-dimensional grids, it does not work well for two-dimensional grids, where certain pad sizes work better than others. Therefore, the need exists to generalize a strategy for determining optimal pad sizes. There are three steps in the FFT algorithm. The first is to perform a one-dimensional transform on each row in the grid. The second step is to transpose the resulting matrix. The third step is to perform a one-dimensional transform on each row in the resulting grid. Steps one and three both benefit from padding the row to the next highest power of two, but the second step needs a novel approach. An algorithm was developed that strikes a balance between optimizing the grid pad size with prime factors that are small (which are optimal for one-dimensional operations) and with prime factors that are large (which are optimal for two-dimensional operations). This algorithm optimizes based on average run times and is not fine-tuned for any specific application. It increases the frequency with which processor-requested data is found in the set-associative processor cache; cache retrievals are 4-10 times faster than conventional memory retrievals. The tested implementation of the algorithm resulted in faster execution times on all platforms tested, but with varying grid sizes, because different computer architectures process commands differently. The test grid was 512 × 512. Using a 540 × 540 grid on a Pentium V processor, the code ran 30 percent faster. On a PowerPC, a 256 × 256 grid worked best. A Core2Duo computer preferred either a 1040 × 1040 (15 percent faster) or a 1008 × 1008 (30 percent faster) grid. Many industries can benefit from this algorithm, including optics, image processing, signal processing, and engineering applications.
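
    A minimal pad-size search that enforces small prime factors, one ingredient of the strategy described above (the flight algorithm also balances 1-D against 2-D costs, which this sketch omits):

    ```python
    def next_fast_len(n, max_prime=7):
        """Smallest size >= n whose prime factors are all <= max_prime,
        in the same spirit as scipy.fft.next_fast_len."""
        def smooth(m):
            for p in (2, 3, 5, 7):
                if p > max_prime:
                    break
                while m % p == 0:
                    m //= p
            return m == 1

        while not smooth(n):
            n += 1
        return n

    print(next_fast_len(513))   # -> 525 = 3 * 5^2 * 7, instead of padding to 1024
    ```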

  18. Disordered Nanohole Patterns in Metal-Insulator Multilayer for Ultra-broadband Light Absorption: Atomic Layer Deposition for Lithography-Free, Highly Repeatable, Large-Scale Multilayer Growth.

    PubMed

    Ghobadi, Amir; Hajian, Hodjat; Dereshgi, Sina Abedini; Bozok, Berkay; Butun, Bayram; Ozbay, Ekmel

    2017-11-08

    In this paper, we demonstrate a facile, lithography-free, and large-scale-compatible fabrication route to synthesize an ultra-broadband, wide-angle perfect absorber based on a metal-insulator-metal-insulator (MIMI) stack design. We first conduct a simulation and theoretical modeling approach to study the impact of different geometries on overall stack absorption. Then, a Pt-Al2O3 multilayer is fabricated using a single atomic layer deposition (ALD) step that offers high repeatability and simplicity in the fabrication step. In the best case, we obtain an absorption bandwidth (BW) of 600 nm covering the range of 400 nm-1000 nm. A substantial improvement in the absorption BW is attained by incorporating a plasmonic design into the middle Pt layer. Our characterization results demonstrate that the best configuration has absorption over 0.9 across a wavelength span of 400 nm-1490 nm, with a BW that is 1.8 times broader than that of the planar design. Moreover, the proposed structure retains high absorption at angles as wide as 70°. The results presented here can serve as a beacon for future performance-enhanced multilayer designs, where a simple fabrication step can boost the overall device response without changing its overall thickness and fabrication simplicity.

  19. Time domain simulation of the response of geometrically nonlinear panels subjected to random loading

    NASA Technical Reports Server (NTRS)

    Moyer, E. Thomas, Jr.

    1988-01-01

    The response of composite panels subjected to random pressure loads large enough to cause geometrically nonlinear responses is studied. A time domain simulation is employed to solve the equations of motion. An adaptive time stepping algorithm is employed to minimize intermittent transients. A modified algorithm for the prediction of response spectral density is presented which predicts smooth spectral peaks for discrete time histories. Results are presented for a number of input pressure levels and damping coefficients. Response distributions are calculated and compared with the analytical solution of the Fokker-Planck equations. RMS response is reported as a function of input pressure level and damping coefficient. Spectral densities are calculated for a number of examples.

  20. Real-Time Aerodynamic Flow and Data Visualization in an Interactive Virtual Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2005-01-01

    Significant advances have been made in non-intrusive flow field diagnostics in the past decade. Camera-based techniques are now capable of determining physical quantities such as surface deformation, surface pressure and temperature, flow velocities, and molecular species concentration. In each case, extracting the pertinent information from the large volume of acquired data requires powerful and efficient data visualization tools. The additional requirement for real-time visualization is fueled by an increased emphasis on minimizing test time in expensive facilities. This paper addresses a capability titled LiveView3D, which is the first step in the development of an in-depth, real-time data visualization and analysis tool for use in aerospace testing facilities.

  1. AlInAsSb/GaSb staircase avalanche photodiode

    NASA Astrophysics Data System (ADS)

    Ren, Min; Maddox, Scott; Chen, Yaojia; Woodson, Madison; Campbell, Joe C.; Bank, Seth

    2016-02-01

    Over 30 years ago, Capasso and co-workers [IEEE Trans. Electron Devices 30, 381 (1982)] proposed the staircase avalanche photodetector (APD) as a solid-state analog of the photomultiplier tube. In this structure, electron multiplication occurs deterministically at steps in the conduction band profile, which function as the dynodes of a photomultiplier tube, leading to low excess multiplication noise. Unlike traditional APDs, the origin of staircase gain is band engineering rather than large applied electric fields. Unfortunately, the materials available at the time, principally AlxGa1-xAs/GaAs, did not offer sufficiently large conduction band offsets and energy separations between the direct and indirect valleys to realize the full potential of the staircase gain mechanism. Here, we report a true staircase APD operation using alloys of a rather underexplored material, AlxIn1-xAsySb1-y, lattice-matched to GaSb. Single step "staircase" devices exhibited a constant gain of ~2×, over a broad range of applied bias, operating temperature, and excitation wavelengths/intensities, consistent with Monte Carlo calculations.

  2. Single-Step Seeded-Growth of Graphene Nanoribbons (GNRs) via Plasma-Enhanced Chemical Vapor Deposition (PECVD)

    NASA Astrophysics Data System (ADS)

    Hsu, C.-C.; Yang, K.; Tseng, W.-S.; Li, Yiliang; Li, Yilun; Tour, J. M.; Yeh, N.-C.

    One of the main challenges in the fabrication of GNRs is achieving large-scale, low-cost production with high quality. Current techniques, including lithography and unzipping of carbon nanotubes, are not suitable for mass production. We have recently developed a single-step PECVD growth process for high-quality graphene sheets without any active heating. By adding substituted aromatics as seeding molecules, we are able to rapidly grow GNRs vertically on various transition-metal substrates. The morphology and electrical properties of the GNRs depend on growth parameters such as the growth time, gas flow, and species of the seeding molecules. In addition, all GNRs exhibit strong infrared and optical absorption. From studies of the Raman spectra, scanning electron microscope images, and x-ray/ultraviolet photoelectron spectra of these GNRs as functions of the growth parameters, we propose a model for the growth mechanism. Our findings suggest that our approach opens up a pathway to large-scale, inexpensive production of GNRs for applications in supercapacitors and solar cells. This work was supported by the Grubstake Award and NSF through IQIM at Caltech.

  3. Solid-Phase Extraction (SPE): Principles and Applications in Food Samples.

    PubMed

    Ötles, Semih; Kartal, Canan

    2016-01-01

    Solid-Phase Extraction (SPE) is a sample preparation method practiced in numerous application fields owing to its many advantages over other traditional methods. SPE was invented as an alternative to liquid/liquid extraction and eliminated multiple disadvantages, such as the use of large amounts of solvent, extended operation times and procedure steps, potential sources of error, and high cost. Moreover, SPE can optionally be combined with other analytical methods and sample preparation techniques. The SPE technique is a useful tool for many purposes through its versatility. Isolation, concentration, purification, and clean-up are the main approaches in the practice of this method. Food structures represent a complicated matrix and can take different physical states, such as solid, viscous, or liquid. Therefore, the sample preparation step plays a particularly important role in the determination of specific compounds in foods. SPE offers many opportunities not only for the analysis of a large diversity of food samples but also for optimization and advances. This review aims to provide a comprehensive overview of the basic principles of SPE and its applications for many analytes in food matrices.

  4. Real-Time Imaging System for the OpenPET

    NASA Astrophysics Data System (ADS)

    Tashima, Hideaki; Yoshida, Eiji; Kinouchi, Shoko; Nishikido, Fumihiko; Inadama, Naoko; Murayama, Hideo; Suga, Mikio; Haneishi, Hideaki; Yamaya, Taiga

    2012-02-01

    The OpenPET and its real-time imaging capability have great potential for real-time tumor tracking in medical procedures such as biopsy and radiation therapy. For the real-time imaging system, we intend to use the one-pass list-mode dynamic row-action maximum likelihood algorithm (DRAMA) and implement it using general-purpose computing on graphics processing units (GPGPU) techniques. However, it is difficult to make consistent reconstructions in real time because the amount of list-mode data acquired in PET scans may be large depending on the level of radioactivity, and the reconstruction speed depends on the amount of list-mode data. In this study, we developed a system to control the data used in the reconstruction step while retaining quantitative performance. In the proposed system, the data transfer control system limits the event counts to be used in the reconstruction step according to the reconstruction speed, and the reconstructed images are properly intensified by using the ratio of the used counts to the total counts. We implemented the system on a small OpenPET prototype system and evaluated its real-time tracking ability by displaying reconstructed images in which the intensity was compensated. The intensity of the displayed images correlated properly with the original count rate, and a frame rate of 2 frames per second was achieved with an average delay time of 2.1 s.
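
    The compensation described amounts to rescaling by the inverse of the kept-event fraction; a minimal sketch with hypothetical names (the actual system applies this inside the GPGPU reconstruction pipeline):

    ```python
    def compensate(image, used_counts, total_counts):
        """Rescale a reconstruction built from a subset of list-mode events so
        its intensity tracks the true count rate."""
        return image * (total_counts / used_counts)
    ```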

  5. Progressive mechanical indentation of large-format Li-ion cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hsin; Kumar, Abhishek; Simunovic, Srdjan

    We used large-format Li-ion cells to study the mechanical responses of single cells of thickness 6.5 mm and stacks of three cells under compressive loading. We carried out various sequences of increasing-depth indentations using a 1.0 inch (25.4 mm) diameter steel ball with a steel plate as a rigid support surface. The indentation depths were between 0.025″ and 0.250″, with main indentation increment tests in 0.025″ steps. Increment steps of 0.100″ and 0.005″ were used to pinpoint the onset of the internal short, which occurred between 0.245″ and 0.250″. The indented cells were disassembled and inspected for internal damage. Load vs. time curves were compared with the developed computer models. Separator thinning leading to the short circuit was simulated using both isotropic and anisotropic mechanical properties. This study shows that separators behave differently when tested as a single layer vs. a stack in a typical pouch cell. The collective responses of the multiple layers must be taken into account in failure analysis. A model that resolves the details of the individual internal cell components was able to simulate the internal deformation of the large-format cells and the onset of failure, assumed to coincide with the onset of internal short circuit.

  6. Large-area one-step assembly of three-dimensional porous metal micro/nanocages by ethanol-assisted femtosecond laser irradiation for enhanced antireflection and hydrophobicity.

    PubMed

    Li, Guoqiang; Li, Jiawen; Zhang, Chenchu; Hu, Yanlei; Li, Xiaohong; Chu, Jiaru; Huang, Wenhao; Wu, Dong

    2015-01-14

    The capability to realize 2D-3D controllable metallic micro/nanostructures is of key importance for various fields such as plasmonics, electronics, bioscience, and chemistry due to unique properties such as electromagnetic field enhancement, catalysis, photoemission, and conductivity. However, most present techniques are limited to low dimensions (1D-2D), small areas, or a single function. Here we report the assembly of self-organized three-dimensional (3D) porous metal micro/nanocage arrays on a nickel surface by ethanol-assisted femtosecond laser irradiation. The underlying formation mechanism was investigated via a series of femtosecond laser irradiations at exposure times from 5 to 30 ms. We also demonstrate the ability to control the size of the micro/nanocage arrays from 0.8 to 2 μm by varying the laser pulse energy. This method features rapidness (∼10 min), simplicity (a one-step process), and ease of large-area (4 cm² or more) fabrication. The 3D cage-like micro/nanostructures exhibit not only improved antireflection, from 80% to 7%, but also enhanced hydrophobicity, from 98.5° to 142°, without surface modification. This simple technique for 3D large-area controllable metal microstructures will find great potential applications in optoelectronics, physics, and chemistry.

  7. A computational study of the use of an optimization-based method for simulating large multibody systems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petra, C.; Gavrea, B.; Anitescu, M.

    2009-01-01

    The present work aims at comparing the performance of several quadratic programming (QP) solvers for simulating large-scale frictional rigid-body systems. Traditional time-stepping schemes for simulation of multibody systems are formulated as linear complementarity problems (LCPs) with copositive matrices. Such LCPs are generally solved by means of Lemke-type algorithms, and solvers such as the PATH solver have proved to be robust. However, for large systems, the PATH solver or any other pivotal algorithm becomes impractical from a computational point of view. The convex relaxation proposed by one of the authors allows the formulation of the integration step as a QP, for which a wide variety of state-of-the-art solvers are available. In what follows we report the results obtained solving that subproblem when using the QP solvers MOSEK, OOQP, TRON, and BLMVM. OOQP is presented with both the symmetric indefinite solver MA27 and our Cholesky reformulation using the CHOLMOD package. We investigate computational performance and address the correctness of the results from a modeling point of view. We conclude that the OOQP solver, particularly with the CHOLMOD linear algebra solver, has predictable performance and memory use patterns and is far more competitive for these problems than are the other solvers.

  8. A Novel Approach of Indexing and Retrieving Spatial Polygons for Efficient Spatial Region Queries

    NASA Astrophysics Data System (ADS)

    Zhao, J. H.; Wang, X. Z.; Wang, F. Y.; Shen, Z. H.; Zhou, Y. C.; Wang, Y. L.

    2017-10-01

    Spatial region queries are increasingly widely used in web-based applications, so mechanisms to provide efficient query processing over geospatial data are essential. However, due to the massive geospatial data volume, heavy geometric computation, and high access concurrency, it is difficult to respond in real time. Spatial indexes are usually used in this situation. In this paper, based on the k-d tree, we introduce a distributed KD-Tree (DKD-Tree) suitable for polygon data, and a two-step query algorithm. The spatial index construction is recursive and iterative, and the query is an in-memory process. Both the index and query methods can be processed in parallel, and are implemented on HDFS, Spark, and Redis. Experiments on a large volume of remote sensing image metadata have been carried out, and the advantages of our method are investigated by comparison with spatial region queries executed on PostgreSQL and PostGIS. Results show that our approach not only greatly improves the efficiency of spatial region queries, but also has good scalability. Moreover, the two-step spatial range query algorithm can also save cluster resources to support a large number of concurrent queries. Therefore, this method is very useful when building large geographic information systems.
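
    The two-step (filter-refine) pattern can be sketched on a single machine with shapely for the exact predicate (the paper's system distributes step one across HDFS/Spark/Redis; this is an illustration of the query logic only):

    ```python
    from shapely.geometry import Polygon, box

    def region_query(polygons, query_rect):
        """Step 1: cheap bounding-box filter (the role of the k-d tree index).
        Step 2: exact geometric predicate on the surviving candidates only."""
        qx0, qy0, qx1, qy1 = query_rect
        q = box(qx0, qy0, qx1, qy1)
        candidates = [p for p in polygons
                      if not (p.bounds[2] < qx0 or p.bounds[0] > qx1 or
                              p.bounds[3] < qy0 or p.bounds[1] > qy1)]
        return [p for p in candidates if p.intersects(q)]   # exact refinement

    tiles = [Polygon([(i, 0), (i + 1, 0), (i + 1, 1), (i, 1)]) for i in range(10)]
    print(len(region_query(tiles, (2.5, 0.2, 4.5, 0.8))))   # -> 3
    ```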

  9. Progressive mechanical indentation of large-format Li-ion cells

    NASA Astrophysics Data System (ADS)

    Wang, Hsin; Kumar, Abhishek; Simunovic, Srdjan; Allu, Srikanth; Kalnaus, Sergiy; Turner, John A.; Helmers, Jacob C.; Rules, Evan T.; Winchester, Clinton S.; Gorney, Philip

    2017-02-01

    Large-format Li-ion cells were used to study the mechanical responses of single cells of thickness 6.5 mm and stacks of three cells under compressive loading. Various sequences of increasing-depth indentations were carried out using a 1.0 inch (25.4 mm) diameter steel ball with a steel plate as a rigid support surface. The indentation depths were between 0.025″ and 0.250″, with main indentation increment tests in 0.025″ steps. Increment steps of 0.100″ and 0.005″ were used to pinpoint the onset of the internal short, which occurred between 0.245″ and 0.250″. The indented cells were disassembled and inspected for internal damage. Load vs. time curves were compared with the developed computer models. Separator thinning leading to the short circuit was simulated using both isotropic and anisotropic mechanical properties. Our study shows that separators behave differently when tested as a single layer vs. a stack in a typical pouch cell. The collective responses of the multiple layers must be taken into account in failure analysis. A model that resolves the details of the individual internal cell components was able to simulate the internal deformation of the large-format cells and the onset of failure, assumed to coincide with the onset of internal short circuit.

  10. Progressive mechanical indentation of large-format Li-ion cells

    DOE PAGES

    Wang, Hsin; Kumar, Abhishek; Simunovic, Srdjan; ...

    2016-12-07

    We used large-format Li-ion cells to study the mechanical responses of single cells of thickness 6.5 mm and stacks of three cells under compressive loading. We carried out various sequences of increasing-depth indentations using a 1.0 inch (25.4 mm) diameter steel ball with a steel plate as a rigid support surface. The indentation depths were between 0.025″ and 0.250″, with main indentation increment tests in 0.025″ steps. Increment steps of 0.100″ and 0.005″ were used to pinpoint the onset of the internal short, which occurred between 0.245″ and 0.250″. The indented cells were disassembled and inspected for internal damage. Load vs. time curves were compared with the developed computer models. Separator thinning leading to the short circuit was simulated using both isotropic and anisotropic mechanical properties. This study shows that separators behave differently when tested as a single layer vs. a stack in a typical pouch cell. The collective responses of the multiple layers must be taken into account in failure analysis. A model that resolves the details of the individual internal cell components was able to simulate the internal deformation of the large-format cells and the onset of failure, assumed to coincide with the onset of internal short circuit.

  11. A Social Network Analysis of 140 Community‐Academic Partnerships for Health: Examining the Healthier Wisconsin Partnership Program

    PubMed Central

    Ahmed, Syed M.; Maurana, Cheryl A.; DeFino, Mia C.; Brewer, Devon D.

    2015-01-01

    Introduction: Social Network Analysis (SNA) provides an important, underutilized approach to evaluating Community-Academic Partnerships for Health (CAPHs). This study examines administrative data from 140 CAPHs funded by the Healthier Wisconsin Partnership Program (HWPP). Methods: Funder data were normalized to maximize the number of interconnections between funded projects and 318 non-redundant community partner organizations in a dual-mode analysis examining the period 2003–2013. Two strategic planning periods, 2003–2008 vs. 2009–2014, allowed temporal comparison. Results: Connectivity of the network was largely unchanged over time, with most projects and partner organizations connected to a single large component in both time periods. Inter-partner ties formed in HWPP projects were transient. Most community partners were only involved in projects during one strategic time period. Community organizations participating in both time periods were involved in significantly more projects during the first time period than partners participating in the first time period only (Cohen's d = 0.93). Discussion: This approach represents a significant step toward using objective (non-survey) data for large clusters of health partnerships and has implications for translational science in community settings. Considerations for government, funders, and communities are offered. Examining partnerships within health priority areas, orphaned projects, and faculty ties to these networks are areas for future research. PMID:25974413

  12. Feasibility study of consolidation by direct compaction and friction stir processing of commercially pure titanium powder

    NASA Astrophysics Data System (ADS)

    Nichols, Leannah M.

    Commercially pure titanium can take up to six months to manufacture into a six-inch-diameter ingot, which can then be shipped to be melted and shaped into other useful components. The potential applications of this corrosion-resistant, lightweight, strong metal are endless, yet so is the manufacturing processing time. At a cost of around $80 per pound for certain grades of titanium powder, the everyday consumer cannot afford to use titanium in the many ways it is beneficial, simply because the number of processing steps consumes too much time, energy, and labor. In this research, the steps from raw powder to final part are proposed to be reduced from 4-8 steps to only 2, utilizing a new technology that may even improve the titanium properties while reducing the number of manufacturing steps. The two-step procedure involves selecting a cylindrical or rectangular die and punch to compress a small amount of commercially pure titanium into a compact strong enough to be transported to the friction stir welder for consolidation. Friction stir welding, invented in 1991 in the United Kingdom, uses a tool similar to a drill bit that approaches the sample and gradually plunges into the material at a rotation rate of between 100 and 2,100 RPM. In the second step, the friction stir welder processes the titanium powder, held in a tight holder, to consolidate it into a harder titanium form. The resulting samples were cut to expose the cross section and then ground, polished, and cleaned for observation and testing using scanning electron microscopy (SEM), energy-dispersive spectroscopy (EDS), and a Vickers microhardness tester. The thicker the sample, the harder the consolidated result: peak hardness was two to three times that of the original solid commercially pure titanium, with a peak Vickers value of 435.9 and an overall average of 251.13. The combined SEM and EDS results showed that mixing of the sample-holder material, titanium, and tool material was minor, supporting the feasibility of this approach. This study should be continued to lessen the labor, energy, and cost of titanium production and thereby allow titanium to be used more efficiently in many applications across many industries.

  13. Equivalent model construction for a non-linear dynamic system based on an element-wise stiffness evaluation procedure and reduced analysis of the equivalent system

    NASA Astrophysics Data System (ADS)

    Kim, Euiyoung; Cho, Maenghyo

    2017-11-01

    In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
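
    To illustrate the stiffness-evaluation idea behind STEP, here is a minimal single-degree-of-freedom sketch: polynomial internal-force coefficients are recovered from sampled displacement-force pairs by least squares. The coefficients and sampling below are invented for illustration; the actual E-STEP performs this fit element-by-element over full finite element models, with quadratic and cubic coupling terms between degrees of freedom.

      import numpy as np

      rng = np.random.default_rng(0)
      k1, k2, k3 = 4.0, -1.5, 0.8           # "true" element coefficients (made up)
      u = rng.uniform(-1.0, 1.0, 50)        # prescribed displacement samples
      f = k1*u + k2*u**2 + k3*u**3          # sampled internal forces

      A = np.column_stack([u, u**2, u**3])  # polynomial basis of the force model
      coef, *_ = np.linalg.lstsq(A, f, rcond=None)
      print(coef)                           # recovers ~ [4.0, -1.5, 0.8]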

  14. Attitude-Independent Magnetometer Calibration for Spin-Stabilized Spacecraft

    NASA Technical Reports Server (NTRS)

    Natanson, Gregory

    2005-01-01

    The paper describes a three-step estimator to calibrate a Three-Axis Magnetometer (TAM) using TAM and slit Sun or star sensor measurements. In the first step, the Calibration Utility forms a loss function from the residuals of the magnitude of the geomagnetic field. This loss function is minimized with respect to biases, scale factors, and nonorthogonality corrections. The second step minimizes residuals of the projection of the geomagnetic field onto the spin axis under the assumption that spacecraft nutation has been suppressed by a nutation damper. Minimization is done with respect to various directions of the body spin axis in the TAM frame. The direction of the spin axis in the inertial coordinate system required for the residual computation is assumed to be unchanged with time. It is either determined independently using other sensors or included in the estimation parameters. In both cases all estimation parameters can be found using simple analytical formulas derived in the paper. The last step is to minimize a third loss function formed by residuals of the dot product between the geomagnetic field and Sun or star vector with respect to the misalignment angle about the body spin axis. The method is illustrated by calibrating TAM for the Fast Auroral Snapshot Explorer (FAST) using in-flight TAM and Sun sensor data. The estimated parameters include magnetic biases, scale factors, and misalignment angles of the spin axis in the TAM frame. Estimation of the misalignment angle about the spin axis was inconclusive since (at least for the selected time interval) the Sun vector was about 15 degrees from the direction of the spin axis; as a result residuals of the dot product between the geomagnetic field and Sun vectors were to a large extent minimized as a by-product of the second step.
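
    A minimal sketch of the first step, assuming synthetic data and a diagonal correction (biases and scale factors only, omitting the nonorthogonality terms): the loss is built from residuals of the field magnitude, which is independent of attitude, and minimized with a standard least-squares solver.

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(1)
      true_b = np.array([30.0, -15.0, 5.0])     # nT biases (made up)
      true_s = np.array([1.02, 0.98, 1.01])     # scale factors (made up)
      B = rng.normal(size=(200, 3)) * 2e4       # "true" field vectors, nT
      meas = B / true_s + true_b                # distorted TAM readings
      Bref = np.linalg.norm(B, axis=1)          # reference field magnitudes

      def residuals(p):
          b, s = p[:3], p[3:]                   # biases and scale factors
          return np.linalg.norm((meas - b) * s, axis=1) - Bref

      sol = least_squares(residuals, np.r_[np.zeros(3), np.ones(3)])
      print(sol.x)                              # ~ [30, -15, 5, 1.02, 0.98, 1.01]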

  15. Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions

    PubMed Central

    Rose, Johann Christian; Kicherer, Anna; Wieland, Markus; Klingbeil, Lasse; Töpfer, Reinhard; Kuhlmann, Heiner

    2016-01-01

    In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time consuming, subjective and prone to human errors. In recent years, research therefore has focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity and to reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle consisting of a camera system, a real-time-kinematic GPS system for positioning, as well as hardware for vehicle control, image storage and acquisition is used to visually capture a whole vine row canopy with georeferenced RGB images. In the first post-processing step, these images were used within a multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. A classification algorithm is then used in the second step to automatically classify the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects is gathered using the classification results obtaining the quantity of grape bunches, berries and the berry diameter. PMID:27983669

  16. Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions.

    PubMed

    Rose, Johann Christian; Kicherer, Anna; Wieland, Markus; Klingbeil, Lasse; Töpfer, Reinhard; Kuhlmann, Heiner

    2016-12-15

    In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time consuming, subjective and prone to human errors. In recent years, research therefore has focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity and to reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle consisting of a camera system, a real-time-kinematic GPS system for positioning, as well as hardware for vehicle control, image storage and acquisition is used to visually capture a whole vine row canopy with georeferenced RGB images. In the first post-processing step, these images were used within a multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. A classification algorithm is then used in the second step to automatically classify the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects is gathered using the classification results obtaining the quantity of grape bunches, berries and the berry diameter.

  17. Controlled cell-seeding methodologies: a first step toward clinically relevant bone tissue engineering strategies.

    PubMed

    Impens, Saartje; Chen, Yantian; Mullens, Steven; Luyten, Frank; Schrooten, Jan

    2010-12-01

    The repair of large and complex bone defects could be helped by a cell-based bone tissue engineering strategy. A reliable and consistent cell-seeding methodology is a mandatory step in bringing bone tissue engineering into the clinic. However, optimization of the cell-seeding step is only relevant when it can be reliably evaluated. The cell seeding efficiency (CSE) plays a fundamental role herein. Results showed that cell lysis and the definition used to determine the CSE played a key role in quantifying the CSE. The definition of CSE should therefore be consistent and unambiguous. The study of the influence of five drop-seeding-related parameters within the studied test conditions showed that (i) the cell density and (ii) the seeding vessel did not significantly affect the CSE, whereas (iii) the ratio of seeding medium volume to free scaffold volume (MFR), (iv) the seeding time, and (v) the scaffold morphology did. Prolonging the incubation time increased the CSE up to a plateau value at 4 h. Increasing the MFR or the permeability by changing the morphology of the scaffolds significantly reduced the CSE. These results confirm that cell seeding optimization is needed and that an evidence-based selection of the seeding conditions is favored.
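
    For illustration only, one commonly used way to compute CSE is from the cells not retained on the scaffold. This is an assumed definition, not necessarily the one used above; the paper's point is precisely that such definitional choices (and uncounted lysed cells) change the reported value.

      def cse(cells_seeded, cells_unattached):
          """One common CSE definition (assumed here):
          cells retained on the scaffold / cells seeded * 100 %."""
          return 100.0 * (cells_seeded - cells_unattached) / cells_seeded

      print(cse(1_000_000, 250_000))   # 75.0 % for these hypothetical counts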

  18. Microwave pyrolysis using self-generated pyrolysis gas as activating agent: An innovative single-step approach to convert waste palm shell into activated carbon

    NASA Astrophysics Data System (ADS)

    Yek, Peter Nai Yuh; Keey Liew, Rock; Shahril Osman, Mohammad; Chung Wong, Chee; Lam, Su Shiung

    2017-11-01

    Waste palm shell (WPS) is a biomass residue largely available from palm oil industries. An innovative microwave pyrolysis method was developed to produce biochar from WPS while the pyrolysis gas generated as another product is simultaneously used as activating agent to transform the biochar into waste palm shell activated carbon (WPSAC), thus allowing carbonization and activation to be performed simultaneously in a single-step approach. The pyrolysis method was investigated over a range of process temperatures and feedstock amounts with emphasis on the yield and composition of the WPSAC obtained. The WPSAC was tested as a dye adsorbent in removing methylene blue. This pyrolysis approach provided a fast heating rate (37.5 °C/min) and short process time (20 min) in transforming WPS into WPSAC, recording a product yield of 40 wt%. The WPSAC showed a high BET surface area (≥ 1200 m²/g), low ash content (< 5 wt%), and high pore volume (≥ 0.54 cm³/g), and accordingly recorded a high adsorption efficiency of 440 mg of dye/g. The desirable process features (fast heating rate, short process time) and the recovery of WPSAC suggest the exceptional promise of the single-step microwave pyrolysis approach for producing high-grade WPSAC from WPS.

  19. The general alcoholics anonymous tools of recovery: the adoption of 12-step practices and beliefs.

    PubMed

    Greenfield, Brenna L; Tonigan, J Scott

    2013-09-01

    Working the 12 steps is widely prescribed for Alcoholics Anonymous (AA) members, although the relative merits of different methods for measuring step work have received minimal attention and even less is known about how step work predicts later substance use. The current study (1) compared endorsements of step work on a face-valid or direct measure, the Alcoholics Anonymous Inventory (AAI), with an indirect measure of step work, the General Alcoholics Anonymous Tools of Recovery (GAATOR); (2) evaluated the underlying factor structure of the GAATOR and changes in step work over time; (3) examined changes in the endorsement of step work over time; and (4) investigated how, if at all, 12-step work predicted later substance use. New AA affiliates (N = 130) completed assessments at intake, 3, 6, and 9 months. Significantly more participants endorsed step work on the GAATOR than on the AAI for nine of the 12 steps. An exploratory factor analysis revealed a two-factor structure for the GAATOR comprising behavioral step work and spiritual step work. Behavioral step work did not change over time, but was predicted by having a sponsor, while spiritual step work decreased over time and increases were predicted by attending 12-step meetings or treatment. Behavioral step work did not prospectively predict substance use. In contrast, spiritual step work predicted percent days abstinent. Behavioral step work and spiritual step work appear to be conceptually distinct components of step work that have distinct predictors and unique impacts on outcomes. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  20. Investigating children's physical activity and sedentary behavior using ecological momentary assessment with mobile phones.

    PubMed

    Dunton, Genevieve F; Liao, Yue; Intille, Stephen S; Spruijt-Metz, Donna; Pentz, Maryann

    2011-06-01

    The risk of obesity during childhood can be significantly reduced through increased physical activity and decreased sedentary behavior. Recent technological advances have created opportunities for the real-time measurement of these behaviors. Mobile phones are ubiquitous and easy to use, and thus have the capacity to collect data from large numbers of people. The present study tested the feasibility, acceptability, and validity of an electronic ecological momentary assessment (EMA) protocol using electronic surveys administered on the display screen of mobile phones to assess children's physical activity and sedentary behaviors. A total of 121 children (ages 9-13, 51% male, 38% at risk for overweight/overweight) participated in EMA monitoring from Friday afternoon to Monday evening during children's nonschool time, with 3-7 surveys/day. Items assessed current activity (e.g., watching TV/movies, playing video games, active play/sports/exercising). Children simultaneously wore an Actigraph GT2M accelerometer. EMA survey responses were time-matched to total step counts and minutes of moderate-to-vigorous physical activity (MVPA) occurring in the 30 min before each EMA survey prompt. No significant differences between answered and unanswered EMA surveys were found for total steps or MVPA. Step counts and the likelihood of 5+ min of MVPA were significantly higher during EMA-reported physical activity (active play/sports/exercising) vs. sedentary behaviors (reading/computer/homework, watching TV/movies, playing video games, riding in a car) (P < 0.001). Findings generally support the acceptability and validity of a 4-day EMA protocol using mobile phones to measure physical activity and sedentary behavior in children during leisure time.
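
    A sketch of the time-matching step with pandas, under the assumption of per-minute accelerometer epochs and hypothetical prompt times: step counts are summed over the 30 minutes preceding each EMA prompt, mirroring the windowing described above.

      import pandas as pd

      acc = pd.DataFrame({                  # hypothetical per-minute epochs
          "time": pd.date_range("2011-06-03 16:00", periods=360, freq="min"),
          "steps": 20,
      })
      prompts = pd.to_datetime(["2011-06-03 17:15", "2011-06-03 19:40"])

      window = pd.Timedelta(minutes=30)
      pre_prompt_steps = [
          acc.loc[acc["time"].between(t - window, t), "steps"].sum()
          for t in prompts
      ]
      print(pre_prompt_steps)               # total steps in each 30-min window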

  1. Two-step chlorination: A new approach to disinfection of a primary sewage effluent.

    PubMed

    Li, Yu; Yang, Mengting; Zhang, Xiangru; Jiang, Jingyi; Liu, Jiaqi; Yau, Cie Fu; Graham, Nigel J D; Li, Xiaoyan

    2017-01-01

    Sewage disinfection aims at inactivating pathogenic microorganisms and preventing the transmission of waterborne diseases. Chlorination is extensively applied for disinfecting sewage effluents. The objective of achieving a disinfection goal and reducing disinfectant consumption and operational costs remains a challenge in sewage treatment. In this study, we have demonstrated that, for the same chlorine dosage, a two-step addition of chlorine (two-step chlorination) was significantly more efficient in disinfecting a primary sewage effluent than a one-step addition of chlorine (one-step chlorination), and shown how the two-step chlorination was optimized with respect to time interval and dosage ratio. Two-step chlorination of the sewage effluent attained its highest disinfection efficiency at a time interval of 19 s and a dosage ratio of 5:1. Compared to one-step chlorination, two-step chlorination enhanced the disinfection efficiency by up to 0.81- or even 1.02-log for two different chlorine doses and contact times. An empirical relationship involving disinfection efficiency, time interval and dosage ratio was obtained by best fitting. Mechanisms (including a higher overall Ct value, an intensive synergistic effect, and a shorter recovery time) were proposed for the higher disinfection efficiency of two-step chlorination in the sewage effluent disinfection. Annual chlorine consumption costs in one-step and two-step chlorination of the primary sewage effluent were estimated. Compared to one-step chlorination, two-step chlorination reduced the cost by up to 16.7%. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Reflecting on explanatory ability: A mechanism for detecting gaps in causal knowledge.

    PubMed

    Johnson, Dan R; Murphy, Meredith P; Messer, Riley M

    2016-05-01

    People frequently overestimate their understanding, with a particularly large blind spot for gaps in their causal knowledge. We introduce a metacognitive approach to reducing overestimation, termed reflecting on explanatory ability (REA), which is briefly thinking about how well one could explain something in a mechanistic, step-by-step, causally connected manner. Nine experiments demonstrated that engaging in REA just before estimating one's understanding substantially reduced overestimation. Moreover, REA reduced overestimation with nearly the same potency as generating full explanations, but did so 20 times faster (although only for high-complexity objects). REA substantially reduced overestimation by inducing participants to quickly evaluate an object's inherent causal complexity (Experiments 4-7). REA also reduced overestimation by fostering step-by-step, causally connected processing (Experiments 2 and 3). Alternative explanations for REA's effects were ruled out, including a general conservatism account (Experiments 4 and 5) and a covert explanation account (Experiment 8). REA's overestimation-reduction effect generalized beyond objects (Experiments 1-8) to sociopolitical policies (Experiment 9). REA efficiently detects gaps in our causal knowledge, with implications for improving self-directed learning, enhancing self-insight into vocational and academic abilities, and even reducing extremist attitudes. (c) 2016 APA, all rights reserved.

  3. Behavioral preference in sequential decision-making and its association with anxiety.

    PubMed

    Zhang, Dandan; Gu, Ruolei

    2018-06-01

    In daily life, people often make consecutive decisions before the ultimate goal is reached (i.e., sequential decision-making). However, this kind of decision-making has been largely overlooked in the literature. The current study investigated whether behavioral preference would change during sequential decisions, and the neural processes underlying the potential changes. For this purpose, we revised the classic balloon analogue risk task and recorded the electroencephalograph (EEG) signals associated with each step of decision-making. Independent component analysis performed on EEG data revealed that four EEG components elicited by periodic feedback in the current step predicted participants' decisions (gamble vs. no gamble) in the next step. In order of time sequence, these components were: bilateral occipital alpha rhythm, bilateral frontal theta rhythm, middle frontal theta rhythm, and bilateral sensorimotor mu rhythm. According to the information flows between these EEG oscillations, we proposed a brain model that describes the temporal dynamics of sequential decision-making. Finally, we found that the tendency to gamble (as well as the power intensity of bilateral frontal theta rhythms) was sensitive to the individual level of trait anxiety in certain steps, which may help understand the role of emotion in decision-making. © 2018 Wiley Periodicals, Inc.

  4. The REAL process--a process for recycling sludge from water works.

    PubMed

    Stendahl, K; Färm, C; Fritzdorf, H

    2006-01-01

    In order to produce drinking water, coagulants, such as aluminium salts, are widely used for precipitation and separation of impurities from raw water. The residual from the process is sludge, which presents a disposal problem. The REAL process is a method for recycling the aluminium from the sludge. In a first step, the aluminium hydroxide is dissolved in sulphuric acid. In a second step, ultrafiltration separates all suspended matter and large molecules, leaving a concentrate of 15-20% dry solids. The permeate contains the trivalent aluminium ions together with 30-50% of the organic contaminants. In a third step, by concentrating the permeate in a nanofilter, the concentration of aluminium becomes high enough to, in a fourth step, be precipitated with potassium sulphate to form a pure crystal: potassium aluminium sulphate. The potassium aluminium sulphate is comparable to standard aluminium sulphate. The process gives a residual in the form of a concentrate from the ultrafiltration, representing a few per cent of the incoming volume. This paper presents the results from a long-term pilot-scale continuous test run at Västerås water works in Sweden, as well as calculations of costs for full-scale operations.

  5. Design and Processing of a Novel Chaos-Based Stepped Frequency Synthesized Wideband Radar Signal.

    PubMed

    Zeng, Tao; Chang, Shaoqiang; Fan, Huayu; Liu, Quanhua

    2018-03-26

    The linear stepped frequency and linear frequency shift keying (FSK) signals have been widely used in radar systems. However, such linear modulation signals suffer from range-Doppler coupling, which degrades radar multi-target resolution. Moreover, a fixed frequency-hopping or frequency-coded sequence can be easily predicted by an interception receiver in electronic countermeasures (ECM) environments, which limits radar anti-jamming performance. In addition, single FSK modulation reduces the radar's low probability of intercept (LPI) performance, since it cannot achieve a large time-bandwidth product. To solve these problems, we propose a novel chaos-based stepped frequency (CSF) synthesized wideband signal in this paper. The signal introduces chaotic frequency hopping between the coherent stepped frequency pulses, and adopts a chaotic frequency shift keying (CFSK) and phase shift keying (PSK) composite coded modulation within a subpulse, called CSF-CFSK/PSK. Correspondingly, a processing method for the signal has been proposed. According to our theoretical analyses and simulations, the proposed signal and processing method achieve better multi-target resolution and LPI performance. Furthermore, the flexible modulation increases robustness against identification by the interception receiver and improves the anti-jamming performance of the radar.
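
    The following sketch shows only the core idea of replacing a fixed, predictable hopping pattern with a chaotic one, here using a logistic map quantized onto N frequency steps. The carrier and step values are made up, and the actual CSF-CFSK/PSK waveform adds coded subpulse modulation on top of this.

      import numpy as np

      def logistic_hops(n_pulses, n_steps, x0=0.37, r=3.99):
          x, hops = x0, []
          for _ in range(n_pulses):
              x = r * x * (1.0 - x)            # chaotic logistic-map iterate
              hops.append(int(x * n_steps))    # quantize to a step index
          return hops

      f0, df = 10.0e9, 2.0e6                   # base frequency and step (made up)
      freqs = [f0 + k * df for k in logistic_hops(16, 64)]
      print(freqs)                             # hard-to-predict hop sequence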

  6. Solving satisfiability problems using a novel microarray-based DNA computer.

    PubMed

    Lin, Che-Hsin; Cheng, Hsiao-Ping; Yang, Chang-Biau; Yang, Chia-Ning

    2007-01-01

    An algorithm based on a modified sticker model, accompanied by an advanced MEMS-based microarray technology, is demonstrated to solve the SAT problem, which has long served as a benchmark in DNA computing. Unlike conventional DNA computing algorithms, which need an initial data pool covering correct and incorrect answers and then execute a series of separation procedures to destroy the unwanted ones, we build solutions in parts, satisfying one clause in each step, and eventually solve the entire Boolean formula step by step. No time-consuming sample preparation procedures or delicate sample-application equipment were required for the computing process. Moreover, experimental results show that the bound DNA sequences can withstand the chemical solutions used during the computing processes, such that the proposed method should be useful in dealing with large-scale problems.
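
    A software analogue of the clause-by-clause construction (a sketch of the idea, not the wet-lab protocol): instead of filtering a complete initial pool, partial assignments are extended so that each clause is satisfied in its own step. The three-variable formula below is a hypothetical example.

      from itertools import product

      # Formula (x1 or not x2) and (x2 or x3), one clause per computing step.
      clauses = [[(1, True), (2, False)], [(2, True), (3, True)]]

      solutions = [{}]                          # pool of partial assignments
      for clause in clauses:
          nxt = []
          for partial in solutions:
              free = [v for v, _ in clause if v not in partial]
              for bits in product([False, True], repeat=len(free)):
                  cand = {**partial, **dict(zip(free, bits))}
                  if any(cand[v] == want for v, want in clause):
                      nxt.append(cand)          # keep only clause-satisfying pools
          solutions = nxt
      print(len(solutions), "assignments satisfy both clauses")  # 4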

  7. SLIDE - a web-based tool for interactive visualization of large-scale -omics data.

    PubMed

    Ghosh, Soumita; Datta, Abhik; Tan, Kaisen; Choi, Hyungwon

    2018-06-28

    Data visualization is often regarded as a post hoc step for verifying statistically significant results in the analysis of high-throughput data sets. This common practice leaves a large amount of raw data behind, from which more information can be extracted. However, existing solutions do not provide capabilities to explore large-scale raw datasets using biologically sensible queries, nor do they allow real-time customization of graphics based on user interaction. To address these drawbacks, we have designed an open-source, web-based tool called Systems-Level Interactive Data Exploration, or SLIDE, to visualize large-scale -omics data interactively. SLIDE's interface makes it easier for scientists to explore quantitative expression data at multiple resolutions on a single screen. SLIDE is publicly available under the BSD license, both as an online version and as a stand-alone version, at https://github.com/soumitag/SLIDE. Supplementary information is available at Bioinformatics online.

  8. Observations of the Dynamic Connectivity of the Non-Wetting Phase During Steady State Flow at the Pore Scale Using 3D X-ray Microtomography

    NASA Astrophysics Data System (ADS)

    Reynolds, C. A.; Menke, H. P.; Blunt, M. J.; Krevor, S. C.

    2015-12-01

    We observe a new type of non-wetting phase flow using time-resolved pore scale imaging. The traditional conceptual model of drainage involves a non-wetting phase invading a porous medium saturated with a wetting phase, either as a fixed, connected flow path through the centres of pores or as discrete ganglia which move individually through the pore space, depending on the capillary number. We observe a new type of flow behaviour at low capillary number in which the flow of the non-wetting phase occurs through networks of persistent ganglia that occupy the large pores but continuously rearrange their connectivity (Figure 1). Disconnections and reconnections occur randomly to provide short-lived pseudo-steady state flow paths between pores. This process is distinctly different from the notion of flowing ganglia which coalesce and break up. The size distribution of ganglia is dependent on the capillary number. Experiments were performed by co-injecting N2 and 25 wt% KI brine into a Bentheimer sandstone core (4 mm diameter, 35 mm length) at 50°C and 10 MPa. Drainage was performed at three flow rates (0.04, 0.3 and 1 ml/min) at a constant fractional flow of 0.5, and the variation in ganglia populations and connectivity was observed. We obtained images of the pore space during steady state flow with a time resolution of 43 s over 1-2 hours. Experiments were performed at the Diamond Light Source synchrotron. Figure 1. The position of N2 in the pore space during steady state flow, summed over 40 time steps. White indicates that N2 occupies the space over >38 time steps and red <5 time steps.

  9. Nonlinear time-series-based adaptive control applications

    NASA Technical Reports Server (NTRS)

    Mohler, R. R.; Rajkumar, V.; Zakrzewski, R. R.

    1991-01-01

    A control design methodology based on a nonlinear time-series reference model is presented. It is indicated by highly nonlinear simulations that such designs successfully stabilize troublesome aircraft maneuvers undergoing large changes in angle of attack as well as large electric power transients due to line faults. In both applications, the nonlinear controller was significantly better than the corresponding linear adaptive controller. For the electric power network, a flexible AC transmission system with series capacitor power feedback control is studied. A bilinear autoregressive moving average reference model is identified from system data, and the feedback control is manipulated according to a desired reference state. The control is optimized according to a predictive one-step quadratic performance index. A similar algorithm is derived for control of rapid changes in aircraft angle of attack over a normally unstable flight regime. In the latter case, however, a generalization of a bilinear time-series model reference includes quadratic and cubic terms in angle of attack.
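
    A scalar sketch of the one-step predictive control idea with a bilinear reference model y[k+1] = a*y[k] + b*u[k] + c*y[k]*u[k]: minimizing the quadratic index J = (y[k+1] - r)^2 + rho*u^2 has a closed-form control. The coefficients below are made up; in the paper they would come from a model identified from system data.

      a, b, c, rho = 0.9, 0.5, 0.2, 0.01

      def control(y, r):
          g = b + c * y                     # input gain depends on the state
          # Minimizer of (a*y + g*u - r)**2 + rho*u**2 over u:
          return g * (r - a * y) / (g * g + rho)

      y, r = 0.0, 1.0
      for k in range(20):
          u = control(y, r)
          y = a * y + b * u + c * y * u     # plant assumed to match the model
      print(round(y, 4))                    # y approaches the reference r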

  10. Energy-momentum conserving higher-order time integration of nonlinear dynamics of finite elastic fiber-reinforced continua

    NASA Astrophysics Data System (ADS)

    Erler, Norbert; Groß, Michael

    2015-05-01

    For many years, the relevance of fibre-reinforced polymers has been steadily increasing in fields of engineering, especially in the aircraft and automotive industries. Due to their high strength in the fibre direction, combined with the possibility of lightweight construction, these composites are replacing more and more traditional materials such as metals. Fibre-reinforced polymers are often manufactured from glass or carbon fibres as attachment parts, or from steel or nylon cord as force transmission parts. Attachment parts are mostly subjected to small strains, but force transmission parts usually suffer large deformations in at least one direction. Here, a geometrically nonlinear formulation is necessary. Typical examples are helicopter rotor blades, where the fibres have the function of stabilizing the structure in order to counteract large centrifugal forces. For long-run analyses of rotor blade deformations, we have to apply numerically stable time integrators for anisotropic materials. This paper presents higher-order accurate and numerically stable time stepping schemes for nonlinear elastic fibre-reinforced continua with anisotropic stress behaviour.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naughton, M.J.; Bourke, W.; Browning, G.L.

    The convergence of spectral model numerical solutions of the global shallow-water equations is examined as a function of the time step and the spectral truncation. The contributions to the errors due to the spatial and temporal discretizations are separately identified and compared. Numerical convergence experiments are performed with the inviscid equations from smooth (Rossby-Haurwitz wave) and observed (R45 atmospheric analysis) initial conditions, and also with the diffusive shallow-water equations. Results are compared with the forced inviscid shallow-water equations case studied by Browning et al. Reduction of the time discretization error by the removal of fast waves from the solution using initialization is shown. The effects of forcing and diffusion on the convergence are discussed. Time truncation errors are found to dominate when a feature is large scale and well resolved; spatial truncation errors dominate for small-scale features and also for large scales after the small scales have affected them. Possible implications of these results for global atmospheric modeling are discussed. 31 refs., 14 figs., 4 tabs.

  12. Flowfield predictions for multiple body launch vehicles

    NASA Technical Reports Server (NTRS)

    Deese, Jerry E.; Pavish, D. L.; Johnson, Jerry G.; Agarwal, Ramesh K.; Soni, Bharat K.

    1992-01-01

    A method is developed for simulating inviscid and viscous flow around multicomponent launch vehicles. Grids are generated by the GENIE general-purpose grid-generation code, and the flow solver is a finite-volume Runge-Kutta time-stepping method. Turbulence effects are simulated using the Baldwin-Lomax (1978) turbulence model. Calculations are presented for three multibody launch vehicle configurations: one with two small-diameter solid motors, one with nine small-diameter solid motors, and one with three large-diameter solid motors.

  13. Aerospace vehicle design, spacecraft section. Volume 2

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The next major step in the evolution of the space program is the exploration of the planet Mars. In preparation for this, much research is needed on the problem of surveying the planet surface. An aircraft appears to be a viable solution because it can carry men and equipment large distances in a short period of time as compared with ground transportation. The problems and design of an aircraft which would be able to survey the planet Mars are examined.

  14. Verlet scheme non-conservativeness for simulation of spherical particles collisional dynamics and method of its compensation

    NASA Astrophysics Data System (ADS)

    Savin, Andrei V.; Smirnov, Petr G.

    2018-05-01

    Simulation of the collisional dynamics of a large ensemble of monodisperse particles by the discrete element method is considered. The Verlet scheme is used for integration of the equations of motion. The finite-difference scheme is found to be non-conservative to a degree that depends on the time step, which is equivalent to the appearance of a purely numerical energy source during the collision process. A compensation method for this source is proposed and tested.
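
    A one-dimensional toy version of the effect, assuming a linear contact spring engaged only during overlap: velocity Verlet gains or loses kinetic energy across a bounce because contact starts and ends between time grid points, and the discrepancy depends on the step size. All parameters are made up.

      def bounce(dt, k=1.0e4, m=1.0, x=1.0, v=-1.0, t_end=3.0):
          def force(x):
              return -k * x if x < 0.0 else 0.0  # contact spring in overlap only
          a = force(x) / m
          for _ in range(int(t_end / dt)):
              x += v * dt + 0.5 * a * dt * dt    # velocity Verlet: position
              a_new = force(x) / m
              v += 0.5 * (a + a_new) * dt        # velocity Verlet: velocity
              a = a_new
          return 0.5 * m * v * v                 # kinetic energy after the bounce

      for dt in (1e-2, 1e-3, 1e-4):
          print(dt, bounce(dt))                  # drifts from the exact 0.5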

  15. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    NASA Astrophysics Data System (ADS)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics well suited for GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distributions performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generations, show that the hierarchical time step achieves a speedup by a factor of around 3-5 compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single-precision peak performance of the GPU.
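
    A schematic of the hierarchical (block) time-step bookkeeping, with a placeholder force law and made-up particle data: each particle advances with dt_max / 2**level, and only the blocks that are due get updated on a given sub-step, which is where the speedup over a shared time step comes from. GOTHIC's tree gravity and GPU scheduling are not shown.

      import numpy as np

      n, dt_max, levels = 1000, 0.01, 3
      rng = np.random.default_rng(2)
      x = rng.normal(size=n)
      v = rng.normal(size=n)
      level = rng.integers(0, levels + 1, size=n)  # 0 = slowest, 3 = fastest

      def accel(x):                                # placeholder force law
          return -x

      for nsub in range(2 ** levels):              # one full dt_max cycle
          for lv in range(levels + 1):
              if nsub % (2 ** (levels - lv)) == 0: # is this block due now?
                  sel = level == lv
                  dt = dt_max / 2 ** lv
                  v[sel] += accel(x[sel]) * dt     # kick-drift, illustrative only
                  x[sel] += v[sel] * dt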

  16. Development of a flash flood warning system based on real-time radar data and process-based erosion modelling

    NASA Astrophysics Data System (ADS)

    Schindewolf, Marcus; Kaiser, Andreas; Buchholtz, Arno; Schmidt, Jürgen

    2017-04-01

    Extreme rainfall events and resulting flash floods led to massive devastation in Germany during spring 2016. The study presented aims at the development of an early warning system that allows the simulation and assessment of negative effects on infrastructure using radar-based heavy rainfall predictions, which serve as input data for the process-based soil loss and deposition model EROSION 3D. Our approach enables a detailed identification of runoff and sediment fluxes in agriculturally used landscapes. In a first step, documented historical events were analyzed concerning the accordance of measured radar rainfall and large-scale erosion risk maps. A second step focused on small-scale erosion monitoring via UAV of the source areas of heavy flooding events and a model reconstruction of the processes involved. In all examples, damage was caused to local infrastructure. Both analyses are promising for detecting runoff and sediment delivering areas even at high temporal and spatial resolution. Results prove the important role of late-covering crops such as maize, sugar beet or potatoes in runoff generation. While e.g. winter wheat positively affects extensive runoff generation on undulating landscapes, massive soil loss and thus muddy flows are observed and depicted in model results. Future research aims at large-scale model parameterization and application in real time, uncertainty estimation of precipitation forecasts and interface development.

  17. Quantum computation in the analysis of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Gomez, Richard B.; Ghoshal, Debabrata; Jayanna, Anil

    2004-08-01

    Recent research on the topic of quantum computation provides us with some quantum algorithms with higher efficiency and speedup compared to their classical counterparts. In this paper, it is our intent to provide the results of our investigation of several applications of such quantum algorithms (especially Grover's search algorithm) in the analysis of hyperspectral data. We found many parallels with Grover's method in existing data processing work that makes use of classical spectral matching algorithms. Our efforts also included the study of several methods dealing with hyperspectral image analysis where classical computation methods involving large data sets could be replaced with quantum computation methods. The crux of the problem in computation involving a hyperspectral image data cube is to convert the large amount of data in high-dimensional space into real information. Currently, using the classical model, different time-consuming methods and steps are necessary to analyze these data, including animation, the minimum noise fraction transform, the pixel purity index algorithm, N-dimensional scatter plots, and identification of endmember spectra. If a quantum model of computation involving hyperspectral image data can be developed and formalized, it is highly likely that information retrieval from hyperspectral image data cubes would be a much easier process and the final information content would be much more meaningful and timely. In this case, dimensionality would not be a curse, but a blessing.
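
    To make the claimed speedup concrete, here is a state-vector simulation of Grover's search with numpy: about pi/4*sqrt(N) iterations amplify one marked index among N, which in this setting could stand for a best-matching library spectrum. The marked index is arbitrary; no quantum hardware is implied.

      import numpy as np

      n, marked = 10, 613                       # N = 1024 items, made-up target
      N = 2 ** n
      psi = np.full(N, 1.0 / np.sqrt(N))        # uniform superposition

      iters = int(np.floor(np.pi / 4 * np.sqrt(N)))
      for _ in range(iters):
          psi[marked] *= -1.0                   # oracle: flip marked amplitude
          psi = 2.0 * psi.mean() - psi          # diffusion: invert about the mean
      print(np.argmax(psi**2), psi[marked]**2)  # marked index, probability ~ 1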

  18. Efficiency analysis of numerical integrations for finite element substructure in real-time hybrid simulation

    NASA Astrophysics Data System (ADS)

    Wang, Jinting; Lu, Liqiao; Zhu, Fei

    2018-01-01

    Finite element (FE) analysis is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure. In this way, the task execution time (TET) decreases, so that the scale of the numerical substructure model can increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes obvious as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even under a large time step and a large time delay.
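
    A single-degree-of-freedom sketch of the central difference method, assuming made-up mass, damping, stiffness and load: with diagonal (here scalar) damping the update needs no system solve, which is why CDM is attractive for real-time hybrid simulation in that case.

      import math

      # m*u'' + c*u' + k*u = f(t), explicit CDM update for scalar damping.
      m, c, k = 1.0, 0.1, 100.0
      dt = 0.001                               # well below the stability limit 2/omega_n
      f = lambda t: math.sin(5.0 * t)

      u_prev, u = 0.0, 0.0                     # displacement at t-dt and t (at rest)
      for n in range(5000):
          t = n * dt
          rhs = dt**2 * (f(t) - k * u) + 2 * m * u - (m - 0.5 * dt * c) * u_prev
          u_prev, u = u, rhs / (m + 0.5 * dt * c)
      print(u)                                 # displacement after 5 s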

  19. Efficiency and flexibility using implicit methods within atmosphere dycores

    NASA Astrophysics Data System (ADS)

    Evans, K. J.; Archibald, R.; Norman, M. R.; Gardner, D. J.; Woodward, C. S.; Worley, P.; Taylor, M.

    2016-12-01

    A suite of explicit and implicit methods is evaluated for a range of configurations of the shallow water dynamical core within the spectral-element Community Atmosphere Model (CAM-SE) to explore their relative computational performance. The configurations are designed to explore the attributes of each method under different but relevant model usage scenarios, including varied spectral order within an element, static regional refinement, and scaling to large problem sizes. The limitations and benefits of using explicit versus implicit methods, with different discretizations and parameters, are discussed in light of trade-offs such as MPI communication, memory, and inherent efficiency bottlenecks. For the regionally refined shallow water configurations, the implicit BDF2 method is about as efficient as an explicit Runge-Kutta method, without including a preconditioner. Performance of the implicit methods with the residual function executed on a GPU is also presented; there is a speedup for the residual relative to a CPU, but overwhelming transfer costs motivate moving more of the solver to the device. Given the performance behavior of implicit methods within the shallow water dynamical core, the recommendation for future work using implicit solvers is conditional, based on scale separation and the stiffness of the problem. The strong growth of linear iterations with increasing resolution or time step size is the main bottleneck to computational efficiency. Within the hydrostatic dynamical core of CAM-SE, we present results utilizing approximate block factorization preconditioners implemented using the Trilinos library of solvers. They reduce the cost of linear system solves and improve parallel scalability. We provide a summary of the remaining efficiency considerations within the preconditioner and utilization of the GPU, as well as a discussion of the benefits of a time stepping method that provides converged and stable solutions for a much wider range of time step sizes. As more complex model components, for example new physics and aerosols, are connected in the model, having flexibility in the time stepping will enable more options for combining and resolving multiple scales of behavior.
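
    A scalar sketch of an implicit BDF2 step on a stiff test equation, with a Newton solve per step: in the dynamical core the analogous solve is a large preconditioned linear/nonlinear system, which is where the linear-iteration growth discussed above bites. Parameters are illustrative only.

      # BDF2: y[n+1] = (4*y[n] - y[n-1])/3 + (2*dt/3)*f(y[n+1]) on y' = -lam*y.
      lam = 1.0e3                               # stiffness, far beyond explicit limits
      f = lambda y: -lam * y
      df = lambda y: -lam

      dt = 0.01
      y_prev = 1.0
      y = y_prev / (1.0 + lam * dt)             # backward-Euler startup step

      for _ in range(100):
          rhs = (4.0 * y - y_prev) / 3.0
          g = y                                 # Newton solve for the new value
          for _ in range(5):
              F = g - rhs - (2.0 * dt / 3.0) * f(g)
              g -= F / (1.0 - (2.0 * dt / 3.0) * df(g))
          y_prev, y = y, g
      print(y)                                  # decays stably despite large dt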

  20. A modular approach to intensity-modulated arc therapy optimization with noncoplanar trajectories

    NASA Astrophysics Data System (ADS)

    Papp, Dávid; Bortfeld, Thomas; Unkelbach, Jan

    2015-07-01

    Utilizing noncoplanar beam angles in volumetric modulated arc therapy (VMAT) has the potential to combine the benefits of arc therapy, such as short treatment times, with the benefits of noncoplanar intensity modulated radiotherapy (IMRT) plans, such as improved organ sparing. Recently, vendors introduced treatment machines that allow for simultaneous couch and gantry motion during beam delivery to make noncoplanar VMAT treatments possible. Our aim is to provide a reliable optimization method for noncoplanar isocentric arc therapy plan optimization. The proposed solution is modular in the sense that it can incorporate different existing beam angle selection and coplanar arc therapy optimization methods. Treatment planning is performed in three steps. First, a number of promising noncoplanar beam directions are selected using an iterative beam selection heuristic; these beams serve as anchor points of the arc therapy trajectory. In the second step, continuous gantry/couch angle trajectories are optimized using a simple combinatorial optimization model to define a beam trajectory that efficiently visits each of the anchor points. Treatment time is controlled by limiting the time the beam needs to trace the prescribed trajectory. In the third and final step, an optimal arc therapy plan is found along the prescribed beam trajectory. In principle any existing arc therapy optimization method could be incorporated into this step; for this work we use a sliding window VMAT algorithm. The approach is demonstrated using two particularly challenging cases. The first one is a lung SBRT patient whose planning goals could not be satisfied with fewer than nine noncoplanar IMRT fields when the patient was treated in the clinic. The second one is a brain tumor patient, where the target volume overlaps with the optic nerves and the chiasm and it is directly adjacent to the brainstem. Both cases illustrate that the large number of angles utilized by isocentric noncoplanar VMAT plans can help improve dose conformity, homogeneity, and organ sparing simultaneously using the same beam trajectory length and delivery time as a coplanar VMAT plan.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, F.; Banks, J. W.; Henshaw, W. D.

    We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid spacings and time steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially, and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids, and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
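
    A much-simplified sketch of the partitioned idea for two 1D slabs: each domain takes an implicit backward-Euler step, while the interface row imposes a generalized Robin condition whose right-hand side is built from the other domain's latest values. The Robin weight alpha is hand-picked here, whereas CHAMP derives it from a local stability optimization; the grids and material values are made up.

      import numpy as np

      m, h, dt = 20, 0.025, 1.0e-3
      kL, kR = 1.0, 5.0                     # conductivities (illustrative)
      alpha = 10.0                          # Robin weight, hand-picked here
      TL = np.zeros(m + 1); TL[0] = 1.0     # left slab, hot outer wall
      TR = np.zeros(m + 1)                  # right slab, cold outer wall

      def step(T, k, g):
          """Backward-Euler step; node 0 is a held Dirichlet wall and the
          last node carries the Robin row k*(T[-1]-T[-2])/h + alpha*T[-1] = g."""
          n = len(T)
          A = np.zeros((n, n)); b = T.copy()
          A[0, 0] = 1.0                     # keep the outer-wall temperature
          r = dt * k / h**2
          for i in range(1, n - 1):         # interior rows: (I - dt*k*D2) T = T_old
              A[i, i - 1] = A[i, i + 1] = -r
              A[i, i] = 1.0 + 2.0 * r
          A[-1, -2], A[-1, -1] = -k / h, k / h + alpha
          b[-1] = g
          return np.linalg.solve(A, b)

      for _ in range(2000):
          # Robin data for each domain from the other domain's current state
          gL = kR * (TR[1] - TR[0]) / h + alpha * TR[0]
          gR = -kL * (TL[-1] - TL[-2]) / h + alpha * TL[-1]
          TL, TR = step(TL, kL, gL), step(TR[::-1], kR, gR)[::-1]
      print(TL[-1], TR[0])                  # interface temperatures should agree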

  2. The prevalence of upright non-stepping time in comparison to stepping time in 11-13 year old school children across seasons.

    PubMed

    McCrorie, P Rw; Duncan, E; Granat, M H; Stansfield, B W

    2012-11-01

    Evidence suggests that behaviours such as standing are beneficial for our health. Unfortunately, little is known of the prevalence of this state, its importance in relation to time spent stepping or variation across seasons. The aim of this study was to quantify, in young adolescents, the prevalence and seasonal changes in time spent upright and not stepping (UNSt(time)) as well as time spent upright and stepping (USt(time)), and their contribution to overall upright time (U(time)). Thirty-three adolescents (12.2 ± 0.3 y) wore the activPAL activity monitor during four school days on two occasions: November/December (winter) and May/June (summer). UNSt(time) contributed 60% of daily U(time) at winter (Mean = 196 min) and 53% at summer (Mean = 171 min); a significant seasonal effect, p < 0.001. USt(time) was significantly greater in summer compared to winter (153 min versus 131 min, p < 0.001). The effects in UNSt(time) could be explained through significant seasonal differences during the school hours (09:00-16:00), whereas the effects in USt(time) could be explained through significant seasonal differences in the evening period (16:00-22:00). Adolescents spent a greater amount of time upright and not stepping than they did stepping, in both winter and summer. The observed seasonal effects for both UNSt(time) and USt(time) provide important information for behaviour change intervention programs.

  3. Large scale crystallization of protein pharmaceuticals in microgravity via temperature change

    NASA Technical Reports Server (NTRS)

    Long, Marianna M.

    1992-01-01

    The major objective of this research effort is the temperature driven growth of protein crystals in large batches in the microgravity environment of space. Pharmaceutical houses are developing protein products for patient care, for example, human insulin, human growth hormone, interferons, and tissue plasminogen activator or TPA, the clot buster for heart attack victims. Except for insulin, these are very high value products; they are extremely potent in small quantities and have a great value per gram of material. It is feasible that microgravity crystallization can be a cost recoverable, economically sound final processing step in their manufacture. Large scale protein crystal growth in microgravity has significant advantages from the basic science and the applied science standpoints. Crystal growth can proceed unhindered due to lack of surface effects. Dynamic control is possible and relatively easy. The method has the potential to yield large quantities of pure crystalline product. Crystallization is a time honored procedure for purifying organic materials and microgravity crystallization could be the final step to remove trace impurities from high value protein pharmaceuticals. In addition, microgravity grown crystals could be the final formulation for those medicines that need to be administered in a timed release fashion. Long lasting insulin, insulin lente, is such a product. Also crystalline protein pharmaceuticals are more stable for long-term storage. Temperature, as the initiation step, has certain advantages. Again, dynamic control of the crystallization process is possible and easy. A temperature step is non-invasive and is the most subtle way to control protein solubility and therefore crystallization. Seeding is not necessary. Changes in protein and precipitant concentrations and pH are not necessary. Finally, this method represents a new way to crystallize proteins in space that takes advantage of the unique microgravity environment. The results from two flights showed that the hardware performed perfectly, many crystals were produced, and they were much larger than their ground grown controls. Morphometric analysis was done on over 4,000 crystals to establish crystal size, size distribution, and relative size. Space grown crystals were remarkably larger than their earth grown counterparts and crystal size was a function of PCF volume. That size distribution for the space grown crystals was a function of PCF volume may indicate that ultimate size was a function of temperature gradient. Since the insulin protein concentration was very low, 0.4 mg/ml, the size distribution could also be following the total amount of protein in each of the PCF's. X-ray analysis showed that the bigger space grown insulin crystals diffracted to higher resolution than their ground grown controls. When the data were normalized for size, they still indicated that the space crystals were better than the ground crystals.

  4. Immediate Effects of Clock-Turn Strategy on the Pattern and Performance of Narrow Turning in Persons With Parkinson Disease.

    PubMed

    Yang, Wen-Chieh; Hsu, Wei-Li; Wu, Ruey-Meei; Lin, Kwan-Hwa

    2016-10-01

    Turning difficulty is common in people with Parkinson disease (PD). The clock-turn strategy is a cognitive movement strategy to improve turning performance in people with PD, although its effects are unverified. Therefore, this study aimed to investigate the effects of the clock-turn strategy on the pattern of turning steps, turning performance, and freezing of gait during narrow turning, and how these effects were influenced by concurrent performance of a cognitive task (dual task). Twenty-five people with PD were randomly assigned to the clock-turn or usual-turn group. Participants performed the Timed Up and Go test with and without a concurrent cognitive task during the medication OFF period. The clock-turn group performed the Timed Up and Go test using the clock-turn strategy, whereas participants in the usual-turn group performed in their usual manner. Measurements were taken during the 180° turn of the Timed Up and Go test. The pattern of turning steps was evaluated by step time variability and step time asymmetry. Turning performance was evaluated by turning time and number of turning steps. The number and duration of freezing-of-gait episodes were calculated by video review. The clock-turn group had lower step time variability and step time asymmetry than the usual-turn group. Furthermore, the clock-turn group turned faster with fewer freezing-of-gait episodes than the usual-turn group. Dual task increased the step time variability and step time asymmetry in both groups but did not affect turning performance and freezing severity. The clock-turn strategy reduces turning time and freezing of gait during turning, probably by lowering step time variability and asymmetry. Dual task compromises the effects of the clock-turn strategy, suggesting a competition for attentional resources. Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, http://links.lww.com/JNPT/A141).

  5. Trajectory errors of different numerical integration schemes diagnosed with the MPTRAC advection module driven by ECMWF operational analyses

    NASA Astrophysics Data System (ADS)

    Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars

    2018-02-01

    The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. The selection of the integration scheme and the appropriate time step should possibly take into account the typical altitude ranges as well as the total length of the simulations to achieve the most efficient simulations. In summary, we recommend the third-order Runge-Kutta method with a time step of 170 s or the midpoint scheme with a time step of 100 s for efficient simulations of up to 10 days of simulation time for the specific ECMWF high-resolution data set considered in this study. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.
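
    A sketch of how such a truncation-error comparison works: one trajectory is advected through a synthetic rotating wind field with the midpoint scheme at a coarse step and compared against a fine-step fourth-order Runge-Kutta reference, mirroring how transport deviations are quantified above. The wind field and time steps are illustrative; the real study uses ECMWF winds interpolated in space and time.

      import numpy as np

      def wind(x, t):                            # solid-body rotation (synthetic)
          return np.array([-x[1], x[0]]) * 1e-4

      def midpoint(x, t, dt):
          return x + dt * wind(x + 0.5 * dt * wind(x, t), t + 0.5 * dt)

      def rk4(x, t, dt):
          k1 = wind(x, t)
          k2 = wind(x + 0.5 * dt * k1, t + 0.5 * dt)
          k3 = wind(x + 0.5 * dt * k2, t + 0.5 * dt)
          k4 = wind(x + dt * k3, t + dt)
          return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

      T = 10 * 86400                             # ten days of simulation time
      xm = np.array([1000.0, 0.0])
      for n in range(T // 100):                  # midpoint, dt = 100 s
          xm = midpoint(xm, n * 100.0, 100.0)
      xr = np.array([1000.0, 0.0])
      for n in range(T // 10):                   # fine-step RK4 reference
          xr = rk4(xr, n * 10.0, 10.0)
      print(np.linalg.norm(xm - xr))             # transport deviation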

  6. Effect of a perturbation-based balance training program on compensatory stepping and grasping reactions in older adults: a randomized controlled trial.

    PubMed

    Mansfield, Avril; Peters, Amy L; Liu, Barbara A; Maki, Brian E

    2010-04-01

    Compensatory stepping and grasping reactions are prevalent responses to sudden loss of balance and play a critical role in preventing falls. The ability to execute these reactions effectively is impaired in older adults. The purpose of this study was to evaluate a perturbation-based balance training program designed to target specific age-related impairments in compensatory stepping and grasping balance recovery reactions. This was a double-blind randomized controlled trial. The study was conducted at research laboratories in a large urban hospital. Thirty community-dwelling older adults (aged 64-80 years) with a recent history of falls or self-reported instability participated in the study. Participants were randomly assigned to receive either a 6-week perturbation-based (motion platform) balance training program or a 6-week control program involving flexibility and relaxation training. Features of balance reactions targeted by the perturbation-based program were: (1) multi-step reactions, (2) extra lateral steps following anteroposterior perturbations, (3) foot collisions following lateral perturbations, and (4) time to complete grasping reactions. The reactions were evoked during testing by highly unpredictable surface translation and cable pull perturbations, both of which differed from the perturbations used during training. Compared with the control program, the perturbation-based training led to greater reductions in the frequency of multi-step reactions and foot collisions that were statistically significant for surface translations but not cable pulls. The perturbation group also showed a significantly greater reduction in handrail contact time compared with the control group for cable pulls, and a possible trend in this direction for surface translations. Further work is needed to determine whether a maintenance program is needed to retain the training benefits and to assess whether these benefits reduce fall risk in daily life. Perturbation-based training shows promise as an effective intervention to improve the ability of older adults to prevent themselves from falling when they lose their balance.

  7. Comparison of step-by-step kinematics in repeated 30m sprints in female soccer players.

    PubMed

    van den Tillaar, Roland

    2018-01-04

    The aim of this study was to compare kinematics in repeated 30-m sprints in female soccer players. Seventeen subjects performed seven 30-m sprints every 30 s in one session. Kinematics were measured with an infrared contact mat and laser gun, and running times with an electronic timing device. The main findings were that sprint times increased over the repeated-sprint-ability test. The main changes in kinematics over the repeated sprints were increased contact time and decreased step frequency, while no change in step length was observed. Within each sprint, step velocity increased in almost every step until the 14th step, which occurred at around 22 m. After this, the velocity was stable until the last step, when it decreased. This within-sprint increase in step velocity was mainly caused by increasing step length and decreasing contact times. It was concluded that the fatigue induced by repeated 30-m sprints in female soccer players resulted in decreased step frequency and increased contact time. Employing this approach in combination with a laser gun and infrared mat for 30 m makes it very easy to analyse running kinematics in repeated sprints in training. This extra information gives the athlete, coach and sports scientist the opportunity to give more detailed feedback and to target these changes in kinematics better to enhance repeated-sprint performance.

  8. Time-Series Forecast Modeling on High-Bandwidth Network Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Wucherl; Sim, Alex

    With the increasing number of geographically distributed scientific collaborations and the growing sizes of scientific data, it has become challenging for users to achieve the best possible network performance on a shared network. In this paper, we have developed a model to forecast expected bandwidth utilization on high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and scheduling of data movements on high-bandwidth networks to accommodate ever-increasing data volume for large-scale scientific data applications. A univariate time-series forecast model is developed with the Seasonal decomposition of Time series by Loess (STL) and the AutoRegressive Integrated Moving Average (ARIMA) on Simple Network Management Protocol (SNMP) path utilization measurement data. Compared with a traditional approach such as the Box-Jenkins methodology for training the ARIMA model, our forecast model reduces computation time by up to 92.6%. It also shows resilience against abrupt network usage changes. Finally, our forecast model produces a large number of multi-step forecasts, and the forecast errors are within the mean absolute deviation (MAD) of the monitored measurements.
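
    A minimal sketch of the STL-then-ARIMA workflow described above, using statsmodels; the synthetic hourly series, the daily period of 24, the ARIMA order (1, 0, 1), and the add-back of the last observed seasonal cycle are illustrative assumptions, not the paper's exact configuration.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.tsa.seasonal import STL

    # Hypothetical hourly SNMP path-utilization series with a daily cycle.
    idx = pd.date_range("2016-01-01", periods=24 * 28, freq="h")
    y = pd.Series(np.sin(np.arange(len(idx)) * 2 * np.pi / 24)
                  + 0.1 * np.random.randn(len(idx)), index=idx)

    # Step 1: strip the seasonal component with STL (period = 24 hours).
    stl = STL(y, period=24).fit()
    deseasonalized = y - stl.seasonal

    # Step 2: fit a low-order ARIMA to the deseasonalized series and
    # produce a multi-step forecast.
    model = ARIMA(deseasonalized, order=(1, 0, 1)).fit()
    forecast = model.forecast(steps=24)

    # Step 3: add the last observed seasonal cycle back in.
    final = forecast.to_numpy() + stl.seasonal.iloc[-24:].to_numpy()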

  9. Time-Series Forecast Modeling on High-Bandwidth Network Measurements

    DOE PAGES

    Yoo, Wucherl; Sim, Alex

    2016-06-24

    With the increasing number of geographically distributed scientific collaborations and the growing sizes of scientific data, it has become challenging for users to achieve the best possible network performance on a shared network. In this paper, we have developed a model to forecast expected bandwidth utilization on high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and scheduling of data movements on high-bandwidth networks to accommodate ever-increasing data volume for large-scale scientific data applications. A univariate time-series forecast model is developed with the Seasonal decomposition of Time series by Loess (STL) and the AutoRegressive Integrated Moving Average (ARIMA) on Simple Network Management Protocol (SNMP) path utilization measurement data. Compared with a traditional approach such as the Box-Jenkins methodology for training the ARIMA model, our forecast model reduces computation time by up to 92.6%. It also shows resilience against abrupt network usage changes. Finally, our forecast model produces a large number of multi-step forecasts, and the forecast errors are within the mean absolute deviation (MAD) of the monitored measurements.

  10. Visualization of nanocrystal breathing modes at extreme strains

    NASA Astrophysics Data System (ADS)

    Szilagyi, Erzsi; Wittenberg, Joshua S.; Miller, Timothy A.; Lutker, Katie; Quirin, Florian; Lemke, Henrik; Zhu, Diling; Chollet, Matthieu; Robinson, Joseph; Wen, Haidan; Sokolowski-Tinten, Klaus; Lindenberg, Aaron M.

    2015-03-01

    Nanoscale dimensions in materials lead to unique electronic and structural properties with applications ranging from site-specific drug delivery to anodes for lithium-ion batteries. These functional properties often involve large-amplitude strains and structural modifications, and thus require an understanding of the dynamics of these processes. Here we use femtosecond X-ray scattering techniques to visualize, in real time and with atomic-scale resolution, light-induced anisotropic strains in nanocrystal spheres and rods. Strains at the percent level are observed in CdS and CdSe samples, associated with a rapid expansion followed by contraction along the nanosphere or nanorod radial direction driven by a transient carrier-induced stress. These morphological changes occur simultaneously with the first steps in the melting transition on hundreds of femtosecond timescales. This work represents the first direct real-time probe of the dynamics of these large-amplitude strains and shape changes in few-nanometre-scale particles.

  11. Commensurability-driven structural defects in double emulsions produced with two-step microfluidic techniques.

    PubMed

    Schmit, Alexandre; Salkin, Louis; Courbin, Laurent; Panizza, Pascal

    2014-07-14

    The combination of two drop makers such as flow-focusing geometries or T-junctions is commonly used in microfluidics to fabricate monodisperse double emulsions and novel fluid-based materials. Here we investigate the physics of the encapsulation of small droplets inside large drops that is at the core of such processes. The number of droplets per drop, studied over time for large sequences of consecutive drops, reveals that the dynamics of these systems are complex: we find a succession of well-defined elementary patterns and defects. We present a simple model based on a discrete approach that predicts the nature of these patterns and their non-trivial scheme of arrangement in a sequence as a function of the ratio of the two timescales of the problem, the production times of droplets and drops. Experiments validate our model as they concur very well with predictions.
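
    The flavour of such a discrete model can be captured in a few lines (a toy construction of our own, not the authors' exact model): droplet k, produced with period tau_droplet, is encapsulated in whichever drop is being formed at its arrival time, so the droplets-per-drop sequence depends only on the ratio of the two production times. A rational ratio yields a periodic elementary pattern; detuning it slightly introduces occasional defect drops carrying one extra or one missing droplet.

    def droplets_per_drop(tau_droplet, tau_drop, n_drops=20):
        # Drop j is being formed during [j * tau_drop, (j + 1) * tau_drop);
        # droplet k arrives at k * tau_droplet and joins the current drop.
        counts = [0] * n_drops
        k = 0
        while k * tau_droplet < n_drops * tau_drop:
            j = int(k * tau_droplet // tau_drop)
            counts[j] += 1
            k += 1
        return counts

    print(droplets_per_drop(1.0, 2.5))    # periodic pattern 3,2,3,2,...
    print(droplets_per_drop(1.0, 2.45))   # same pattern with sparse defects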

  12. GPU-Acceleration of Sequence Homology Searches with Database Subsequence Clustering.

    PubMed

    Suzuki, Shuji; Kakuta, Masanori; Ishida, Takashi; Akiyama, Yutaka

    2016-01-01

    Sequence homology searches are used in various fields and require large amounts of computation time, especially for metagenomic analysis, owing to the large number of queries and the database size. To accelerate such analyses, graphics processing units (GPUs) are widely used as a low-cost, high-performance computing platform. We therefore mapped the time-consuming steps of GHOSTZ, a state-of-the-art homology search algorithm for protein sequences, onto a GPU and implemented it as GHOSTZ-GPU. In addition, we optimized memory access for GPU calculations and for communication between the CPU and GPU. In an evaluation test involving metagenomic data, GHOSTZ-GPU with 12 CPU threads and 1 GPU was approximately 3.0- to 4.1-fold faster than GHOSTZ with 12 CPU threads. Moreover, GHOSTZ-GPU with 12 CPU threads and 3 GPUs was approximately 5.8- to 7.7-fold faster than GHOSTZ with 12 CPU threads.

  13. A dynamic measure of controllability and observability for the placement of actuators and sensors on large space structures

    NASA Technical Reports Server (NTRS)

    Vandervelde, W. E.; Carignan, C. R.

    1982-01-01

    The degree of controllability of a large space structure is found by a four-step procedure: (1) finding the minimum control energy for driving the system from a given initial state to the origin in the prescribed time; (2) finding the region of initial states which can be driven to the origin with constrained control energy and time using an optimal control strategy; (3) scaling the axes so that a unit displacement in every direction is equally important to control; and (4) finding a linear measure of the weighted "volume" of the ellipsoid in the equicontrol space. For observability, the error covariance must be reduced toward zero using measurements optimally, and the criterion must be standardized by the magnitude of tolerable errors. The results obtained using these methods are applied to the vibration modes of a free-free beam.
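
    A hedged numerical sketch of steps (1) through (4): the finite-horizon controllability Gramian provides the minimum-energy and reachable-ellipsoid machinery, the matrix S plays the role of the axis scaling, and det(W)^(1/2n) serves as the volume-like measure of the ellipsoid. The damped-oscillator system and the choice S = I are our own illustrative assumptions, not the paper's data.

    import numpy as np
    from scipy.linalg import expm

    def controllability_gramian(A, B, T, n_steps=200):
        # W(T) = integral over [0, T] of e^{At} B B^T e^{A^T t} dt,
        # approximated here with the trapezoidal rule.
        n = A.shape[0]
        W = np.zeros((n, n))
        prev_val, prev_t = None, 0.0
        for t in np.linspace(0.0, T, n_steps):
            eAt = expm(A * t)
            val = eAt @ B @ B.T @ eAt.T
            if prev_val is not None:
                W += 0.5 * (t - prev_t) * (val + prev_val)
            prev_val, prev_t = val, t
        return W

    def degree_of_controllability(A, B, T, S):
        # Steps (3)-(4): scale states so unit displacements are equally
        # important, then take a volume-like measure of the ellipsoid of
        # states reachable with unit control energy.
        W = S @ controllability_gramian(A, B, T) @ S.T
        n = A.shape[0]
        return np.linalg.det(W) ** (1.0 / (2 * n))

    # Lightly damped oscillator standing in for one structural vibration mode.
    A = np.array([[0.0, 1.0], [-1.0, -0.02]])
    B = np.array([[0.0], [1.0]])
    print(degree_of_controllability(A, B, T=10.0, S=np.eye(2)))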

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell Feder and Mahmoud Z. Yousef

    Neutronics analyses to find nuclear heating rates and personnel dose rates were conducted in support of the integration of diagnostics into the ITER upper port plugs. Simplified shielding models of the visible-infrared diagnostic and of the ECH heating system were incorporated into the ITER global CAD model. Results for these systems are representative of typical designs with maximum shielding and a small aperture (Vis-IR) and minimal shielding with a large aperture (ECH). The neutronics discrete-ordinates code ATTILA® and SEVERIAN® (the ATTILA parallel processing version) were used. Material properties and the 500 MW D-T volume source were taken from the ITER "Brand Model" MCNP benchmark model. A biased quadrature set equivalent to Sn=32 and a scattering degree of Pn=3 were used along with a 46-neutron and 21-gamma FENDL energy subgrouping. Total nuclear heating (neutron plus gamma heating) in the upper port plugs ranged between 350 and 380 kW for the Vis-IR and ECH cases. The ECH or large-aperture model exhibited lower total heating but much higher peak volumetric heating on the upper port plug structure. Personnel dose rates are calculated in a three-step process involving a neutron-only transport calculation, the generation of activation volume sources at pre-defined time steps, and finally gamma transport analyses for selected time steps. ANSI/ANS-6.1.1-1977 flux-to-dose conversion factors were used. Dose rates were evaluated for 1 full year of 500 MW D-T operation, which comprises 3000 1800-second pulses. After one year the machine is shut down for maintenance and personnel are permitted to access the diagnostic interspace after 2 weeks if dose rates are below 100 μSv/hr. Dose rates in the visible-IR diagnostic model after one day of shutdown were 130 μSv/hr but fell below the limit to 90 μSv/hr 2 weeks later. The large-aperture or ECH-style shielding model exhibited higher and more persistent dose rates. After 1 day the dose rate was 230 μSv/hr but was still at 120 μSv/hr 4 weeks later.

  15. Simulator for multilevel optimization research

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Young, K. C.

    1986-01-01

    A computer program designed to simulate and improve multilevel optimization techniques is described. By using simple analytic functions to represent complex engineering analyses, the simulator can generate and test a large variety of multilevel decomposition strategies in a relatively short time. This type of research is an essential step toward routine optimization of large aerospace systems. The paper discusses the types of optimization problems handled by the simulator and gives input and output listings and plots for a sample problem. It also describes multilevel implementation techniques which have value beyond the present computer program. Thus, this document serves as a user's manual for the simulator and as a guide for building future multilevel optimization applications.

  16. 40 CFR 141.81 - Applicability of corrosion control treatment steps to small, medium-size and large water systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    Title 40 (Protection of Environment), § 141.81: Applicability of corrosion control treatment steps to small, medium-size and large water systems. Environmental Protection Agency (continued), Water Programs (continued), National Primary Drinking Water Regulations, Control of Lead and Copper...

  17. Comparison of the Screening Tests for Gestational Diabetes Mellitus between "One-Step" and "Two-Step" Methods among Thai Pregnant Women.

    PubMed

    Luewan, Suchaya; Bootchaingam, Phenphan; Tongsong, Theera

    2018-01-01

    To compare the prevalence and pregnancy outcomes of GDM between those screened by the "one-step" (75 g GTT) and "two-step" (100 g GTT) methods. A prospective study was conducted on singleton pregnancies at low or average risk of GDM. All were screened between 24 and 28 weeks, using the one-step or two-step method based on patients' preference. The primary outcome was the prevalence of GDM, and secondary outcomes included birthweight, gestational age, rates of preterm birth, small/large-for-gestational age, low Apgar scores, cesarean section, and pregnancy-induced hypertension. A total of 648 women were screened: 278 in the one-step group and 370 in the two-step group. The prevalence of GDM was significantly higher in the one-step group (32.0% versus 10.3%). Baseline characteristics and pregnancy outcomes in both groups were comparable. However, mean birthweight was significantly higher among pregnancies with GDM diagnosed by the two-step approach (3204 ± 555 versus 3009 ± 666 g; p = 0.022). Likewise, the rate of large-for-date infants tended to be higher in the two-step group, but the difference was not significant. The one-step approach is associated with a very high prevalence of GDM among the Thai population, without clear evidence of better outcomes. Thus, this approach may not be appropriate for screening in a busy antenatal care clinic like our setting or other centers in developing countries.

  18. Fluid flow and convective transport of solutes within the intervertebral disc.

    PubMed

    Ferguson, Stephen J; Ito, Keita; Nolte, Lutz P

    2004-02-01

    Previous experimental and analytical studies of solute transport in the intervertebral disc have demonstrated that for small molecules diffusive transport alone fulfils the nutritional needs of disc cells. It has been often suggested that fluid flow into and within the disc may enhance the transport of larger molecules. The goal of the study was to predict the influence of load-induced interstitial fluid flow on mass transport in the intervertebral disc. An iterative procedure was used to predict the convective transport of physiologically relevant molecules within the disc. An axisymmetric, poroelastic finite-element structural model of the disc was developed. The diurnal loading was divided into discrete time steps. At each time step, the fluid flow within the disc due to compression or swelling was calculated. A sequentially coupled diffusion/convection model was then employed to calculate solute transport, with a constant concentration of solute being provided at the vascularised endplates and outer annulus. Loading was simulated for a complete diurnal cycle, and the relative convective and diffusive transport was compared for solutes with molecular weights ranging from 400 Da to 40 kDa. Consistent with previous studies, fluid flow did not enhance the transport of low-weight solutes. During swelling, interstitial fluid flow increased the unidirectional penetration of large solutes by approximately 100%. Due to the bi-directional temporal nature of disc loading, however, the net effect of convective transport over a full diurnal cycle was more limited (30% increase). Further study is required to determine the significance of large solutes and the timing of their delivery for disc physiology.
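
    The sequential coupling described above can be sketched in one spatial dimension (a strong simplification for illustration: a prescribed sinusoidal fluid velocity stands in for the poroelastic model, and all parameter values are order-of-magnitude guesses rather than the paper's values).

    import numpy as np

    nx, L = 100, 0.02                  # grid points, domain size [m]
    dx = L / (nx - 1)
    dt = 60.0                          # time step [s]
    D = 1e-11                          # diffusivity of a large solute [m^2/s]
    c = np.zeros(nx)
    c[0] = 1.0                         # constant supply at vascularised boundary

    for step in range(int(24 * 3600 / dt)):         # one diurnal cycle
        t = step * dt
        # Step A: the flow model would supply the velocity at this time step;
        # here, compression (day) and swelling (night) are imposed directly.
        v = 2e-8 * np.sin(2 * np.pi * t / 86400.0)  # fluid velocity [m/s]
        # Step B: explicit convection-diffusion update (upwind + central).
        conv = np.where(v > 0,
                        v * (c[1:-1] - c[:-2]) / dx,
                        v * (c[2:] - c[1:-1]) / dx)
        diff = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        c[1:-1] += dt * (diff - conv)
        c[0] = 1.0                                  # boundary condition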

  19. Melatonin: a universal time messenger.

    PubMed

    Erren, Thomas C; Reiter, Russel J

    2015-01-01

    Temporal organization plays a key role in humans, and presumably all species on Earth. A core building block of the chronobiological architecture is the master clock, located in the suprachiasmatic nuclei [SCN], which organizes "when" things happen in sub-cellular biochemistry, cells, organs and organisms, including humans. Conceptually, time messaging should follow a five-step cascade. While abundant evidence suggests how steps 1 through 4 work, step 5, "how is central time information transmitted throughout the body?", awaits elucidation. Step 1: Light provides information on environmental (external) time; Step 2: Ocular interfaces between light and biological (internal) time are intrinsically photosensitive retinal ganglion cells [ipRGCs] and rods and cones; Step 3: Via the retinohypothalamic tract, external time information reaches the light-dependent master clock in the brain, viz the SCN; Step 4: The SCN translate environmental time information into biological time and distribute this information to numerous brain structures via a melanopsin-based network. Step 5: Melatonin, we propose, transmits, or is a messenger of, internal time information to all parts of the body to allow temporal organization which is orchestrated by the SCN. Key reasons why we expect melatonin to have such a role include: First, melatonin, as the chemical expression of darkness, is centrally involved in time- and timing-related processes such as encoding clock and calendar information in the brain; Second, melatonin travels throughout the body without limits and is thus a ubiquitous molecule. The chemical conservation of melatonin in all tested species could make this molecule a candidate for a universal time messenger, possibly constituting a legacy of an all-embracing evolutionary history.

  20. High-latitude geomagnetic disturbances during ascending solar cycle 24

    NASA Astrophysics Data System (ADS)

    Peitso, Pyry; Tanskanen, Eija; Stolle, Claudia; Berthou Lauritsen, Nynne; Matzka, Jürgen

    2015-04-01

    High-latitude regions are very convenient for the study of several space weather phenomena such as substorms. Large geographic coverage as well as long time series of data are essential due to the global nature of space weather and the long duration of solar cycles. We will examine geomagnetic activity in Greenland from magnetic field measurements taken by DTU (Technical University of Denmark) magnetometers during the years 2010 to 2014. The study uses data from 13 magnetometer stations located on the east coast of Greenland and one located on the west coast. The original measurements are in one-second resolution, thus the amount of data is quite large. The magnetic field H component (positive direction towards the magnetic north) was used throughout the study. Data processing will be described from the calibration of original measurements to the plotting of long time series. Calibration consists of determining the quiet hour of a given day and subtracting the average of that hour from all the time steps of the day. This normalizes the measurements and allows for better comparison between different time steps. In addition to the full time line of measurements, daily, monthly and yearly averages will be provided for all stations. Differential calculations on the change of the H component will also be made available for the duration of the full data set. Envelope curve plots will be presented for the duration of the time line. Geomagnetic conditions during winter and summer will be compared to examine seasonal variation. Finally, the measured activity will be compared to NOAA (National Oceanic and Atmospheric Administration) issued geomagnetic space weather alerts from 2010 to 2014. Calculations and plotting of measurement data were done with MATLAB. The M_map toolbox was used for plotting of the maps featured in the study (http://www2.ocgy.ubc.ca/~rich/map.html). The study was conducted as a part of the ReSoLVE (Research on Solar Long-term Variability and Effects) Center of Excellence.
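
    The calibration step lends itself to a compact sketch (our interpretation: the "quiet hour" is taken here as the hour with the smallest variance of H, an assumption rather than the authors' stated criterion).

    import numpy as np
    import pandas as pd

    def normalize_by_quiet_hour(h: pd.Series) -> pd.Series:
        # Per day: find the quietest hour, then subtract the mean H of that
        # hour from every sample of the day.
        out = []
        for day, day_data in h.groupby(h.index.date):
            hourly = day_data.groupby(day_data.index.hour)
            quiet_hour = hourly.var().idxmin()
            baseline = hourly.mean()[quiet_hour]
            out.append(day_data - baseline)
        return pd.concat(out)

    # Synthetic one-second H-component data for two days.
    idx = pd.date_range("2014-01-01", periods=2 * 86400, freq="s")
    h = pd.Series(20000 + 0.01 * np.random.randn(len(idx)).cumsum(), index=idx)
    h_norm = normalize_by_quiet_hour(h)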

  1. jInv: A Modular and Scalable Framework for Electromagnetic Inverse Problems

    NASA Astrophysics Data System (ADS)

    Belliveau, P. T.; Haber, E.

    2016-12-01

    Inversion is a key tool in the interpretation of geophysical electromagnetic (EM) data. Three-dimensional (3D) EM inversion is very computationally expensive and practical software for inverting large 3D EM surveys must be able to take advantage of high performance computing (HPC) resources. It has traditionally been difficult to achieve those goals in a high level dynamic programming environment that allows rapid development and testing of new algorithms, which is important in a research setting. With those goals in mind, we have developed jInv, a framework for PDE constrained parameter estimation problems. jInv provides optimization and regularization routines, a framework for user defined forward problems, and interfaces to several direct and iterative solvers for sparse linear systems. The forward modeling framework provides finite volume discretizations of differential operators on rectangular tensor product meshes and tetrahedral unstructured meshes that can be used to easily construct forward modeling and sensitivity routines for forward problems described by partial differential equations. jInv is written in the emerging programming language Julia. Julia is a dynamic language targeted at the computational science community with a focus on high performance and native support for parallel programming. We have developed frequency and time-domain EM forward modeling and sensitivity routines for jInv. We will illustrate its capabilities and performance with two synthetic time-domain EM inversion examples. First, in airborne surveys, which use many sources, we achieve distributed memory parallelism by decoupling the forward and inverse meshes and performing forward modeling for each source on small, locally refined meshes. Secondly, we invert grounded source time-domain data from a gradient array style induced polarization survey using a novel time-stepping technique that allows us to compute data from different time-steps in parallel. These examples both show that it is possible to invert large scale 3D time-domain EM datasets within a modular, extensible framework written in a high-level, easy to use programming language.

  2. A Variable Step-Size Proportionate Affine Projection Algorithm for Identification of Sparse Impulse Response

    NASA Astrophysics Data System (ADS)

    Liu, Ligang; Fukumoto, Masahiro; Saiki, Sachio; Zhang, Shiyong

    2009-12-01

    Proportionate adaptive algorithms have been proposed recently to accelerate convergence in the identification of sparse impulse responses. When the excitation signal is colored, speech in particular, proportionate NLMS algorithms converge slowly. The proportionate affine projection algorithm (PAPA) is expected to solve this problem by using more information in the input signals. However, its steady-state performance is limited by the constant step-size parameter. In this article we propose a variable step-size PAPA based on canceling the a posteriori estimation error. This results in high convergence speed, using a large step size when the identification error is large, and then considerably decreases the steady-state misalignment, using a small step size after the adaptive filter has converged. Simulation results show that the proposed approach can greatly improve the steady-state misalignment without sacrificing the fast convergence of PAPA.
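
    A simplified sketch of the idea (proportionate NLMS, the projection-order-1 special case of PAPA, with an error-driven variable step size; this illustrates the large-step/small-step behaviour but is not the authors' exact a-posteriori-error-cancelling update).

    import numpy as np

    def vss_pnlms(x, d, L=64, delta=1e-2, rho=0.01):
        w = np.zeros(L)
        mu_max, mu_min = 1.0, 0.01
        err_pow = []
        for n in range(L - 1, len(x)):
            u = x[n - L + 1:n + 1][::-1]     # regressor, newest sample first
            e = d[n] - w @ u                 # a priori estimation error
            g = np.abs(w) + rho              # proportionate gains: larger
            g /= g.sum()                     # coefficients adapt faster
            G = g * u
            err_pow.append(e * e)
            p = np.mean(err_pow[-100:])      # smoothed error power
            mu = mu_min + (mu_max - mu_min) * p / (p + delta)
            w += mu * e * G / (u @ G + delta)
        return w

    # Identify a sparse "echo path" from white-noise excitation.
    rng = np.random.default_rng(0)
    h = np.zeros(64)
    h[[5, 20]] = [1.0, -0.5]
    x = rng.standard_normal(5000)
    d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
    w = vss_pnlms(x, d)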

  3. Rapid Two-Step Procedure for Large-Scale Purification of Pediocin-Like Bacteriocins and Other Cationic Antimicrobial Peptides from Complex Culture Medium

    PubMed Central

    Uteng, Marianne; Hauge, Håvard Hildeng; Brondz, Ilia; Nissen-Meyer, Jon; Fimland, Gunnar

    2002-01-01

    A rapid and simple two-step procedure suitable for both small- and large-scale purification of pediocin-like bacteriocins and other cationic peptides has been developed. In the first step, the bacterial culture was applied directly on a cation-exchange column (1-ml cation exchanger per 100-ml cell culture). Bacteria and anionic compounds passed through the column, and cationic bacteriocins were subsequently eluted with 1 M NaCl. In the second step, the bacteriocin fraction was applied on a low-pressure, reverse-phase column and the bacteriocins were detected as major optical density peaks upon elution with propanol. More than 80% of the activity that was initially in the culture supernatant was recovered in both purification steps, and the final bacteriocin preparation was more than 90% pure as judged by analytical reverse-phase chromatography and capillary electrophoresis. PMID:11823243

  4. Flow resistance dynamics in step‐pool stream channels: 1. Large woody debris and controls on total resistance

    USGS Publications Warehouse

    Wilcox, Andrew C.; Wohl, Ellen E.

    2006-01-01

    Flow resistance dynamics in step‐pool channels were investigated through physical modeling using a laboratory flume. Variables contributing to flow resistance in step‐pool channels were manipulated in order to measure the effects of various large woody debris (LWD) configurations, steps, grains, discharge, and slope on total flow resistance. This entailed nearly 400 flume runs, organized into a series of factorial experiments. Factorial analyses of variance indicated significant two‐way and three‐way interaction effects between steps, grains, and LWD, illustrating the complexity of flow resistance in these channels. Interactions between steps and LWD resulted in substantially greater flow resistance for steps with LWD than for steps lacking LWD. LWD position contributed to these interactions, whereby LWD pieces located near the lip of steps, analogous to step‐forming debris in natural channels, increased the effective height of steps and created substantially higher flow resistance than pieces located farther upstream on step treads. Step geometry and LWD density and orientation also had highly significant effects on flow resistance. Flow resistance dynamics and the resistance effect of bed roughness configurations were strongly discharge‐dependent; discharge had both highly significant main effects on resistance and highly significant interactions with all other variables.

  5. Direct observation of the myosin Va recovery stroke that contributes to unidirectional stepping along actin.

    PubMed

    Shiroguchi, Katsuyuki; Chin, Harvey F; Hannemann, Diane E; Muneyuki, Eiro; De La Cruz, Enrique M; Kinosita, Kazuhiko

    2011-04-01

    Myosins are ATP-driven linear molecular motors that work as cellular force generators, transporters, and force sensors. These functions are driven by large-scale nucleotide-dependent conformational changes, termed "strokes"; the "power stroke" is the force-generating swinging of the myosin light chain-binding "neck" domain relative to the motor domain "head" while bound to actin; the "recovery stroke" is the necessary initial motion that primes, or "cocks," myosin while detached from actin. Myosin Va is a processive dimer that steps unidirectionally along actin following a "hand over hand" mechanism in which the trailing head detaches and steps forward ∼72 nm. Despite large rotational Brownian motion of the detached head about a free joint adjoining the two necks, unidirectional stepping is achieved, in part by the power stroke of the attached head that moves the joint forward. However, the power stroke alone cannot fully account for preferential forward site binding since the orientation and angle stability of the detached head, which is determined by the properties of the recovery stroke, dictate actin binding site accessibility. Here, we directly observe the recovery stroke dynamics and fluctuations of myosin Va using a novel, transient caged ATP-controlling system that maintains constant ATP levels through stepwise UV-pulse sequences of varying intensity. We immobilized the neck of monomeric myosin Va on a surface and observed real-time motions of bead(s) attached site-specifically to the head. ATP induces a transient swing of the neck to the post-recovery stroke conformation, where it remains for ∼40 s, until ATP hydrolysis products are released. Angle distributions indicate that the post-recovery stroke conformation is stabilized by ≥5 kBT of energy. The high kinetic and energetic stability of the post-recovery stroke conformation favors preferential binding of the detached head to a forward site 72 nm away. Thus, the recovery stroke contributes to unidirectional stepping of myosin Va.

  6. Ambulance Clinical Triage for Acute Stroke Treatment: Paramedic Triage Algorithm for Large Vessel Occlusion.

    PubMed

    Zhao, Henry; Pesavento, Lauren; Coote, Skye; Rodrigues, Edrich; Salvaris, Patrick; Smith, Karen; Bernard, Stephen; Stephenson, Michael; Churilov, Leonid; Yassi, Nawaf; Davis, Stephen M; Campbell, Bruce C V

    2018-04-01

    Clinical triage scales for prehospital recognition of large vessel occlusion (LVO) are limited by low specificity when applied by paramedics. We created the 3-step ambulance clinical triage for acute stroke treatment (ACT-FAST) as the first algorithmic LVO identification tool, designed to improve specificity by recognizing only severe clinical syndromes and optimizing paramedic usability and reliability. The ACT-FAST algorithm consists of (1) unilateral arm drift to stretcher <10 seconds, (2) severe language deficit (if right arm is weak) or gaze deviation/hemineglect assessed by simple shoulder tap test (if left arm is weak), and (3) eligibility and stroke mimic screen. ACT-FAST examination steps were retrospectively validated, and then prospectively validated by paramedics transporting culturally and linguistically diverse patients with suspected stroke in the emergency department, for the identification of internal carotid or proximal middle cerebral artery occlusion. The diagnostic performance of the full ACT-FAST algorithm was then validated for patients accepted for thrombectomy. In retrospective (n=565) and prospective paramedic (n=104) validation, ACT-FAST displayed higher overall accuracy and specificity, when compared with existing LVO triage scales. Agreement of ACT-FAST between paramedics and doctors was excellent (κ=0.91; 95% confidence interval, 0.79-1.0). The full ACT-FAST algorithm (n=60) assessed by paramedics showed high overall accuracy (91.7%), sensitivity (85.7%), specificity (93.5%), and positive predictive value (80%) for recognition of endovascular-eligible LVO. The 3-step ACT-FAST algorithm shows higher specificity and reliability than existing scales for clinical LVO recognition, despite requiring just 2 examination steps. The inclusion of an eligibility step allowed recognition of endovascular-eligible patients with high accuracy. Using a sequential algorithmic approach eliminates scoring confusion and reduces assessment time. Future studies will test whether field application of ACT-FAST by paramedics to bypass suspected patients with LVO directly to endovascular-capable centers can reduce delays to endovascular thrombectomy. © 2018 American Heart Association, Inc.
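
    The algorithmic structure lends itself to a direct schematic encoding (argument names are our own; this is a reading aid, not a clinical tool).

    def act_fast(arm_drift_under_10s: bool,
                 weak_side: str,
                 severe_language_deficit: bool,
                 gaze_deviation_or_neglect: bool,
                 passes_eligibility_and_mimic_screen: bool) -> bool:
        # Step 1: unilateral arm drift to stretcher in under 10 s.
        if not arm_drift_under_10s:
            return False
        # Step 2: severe language deficit (right arm weak) or gaze
        # deviation/hemineglect via shoulder-tap test (left arm weak).
        if weak_side == "right" and not severe_language_deficit:
            return False
        if weak_side == "left" and not gaze_deviation_or_neglect:
            return False
        # Step 3: eligibility and stroke-mimic screen.
        return passes_eligibility_and_mimic_screen

    # Right arm weak with severe aphasia, screen passed: LVO suspected.
    print(act_fast(True, "right", True, False, True))   # True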

  7. A study of the impact of freezing on the lyophilization of a concentrated formulation with a high fill depth.

    PubMed

    Liu, Jinsong; Viverette, Todd; Virgin, Marlin; Anderson, Mitch; Paresh, Dalal

    2005-01-01

    The objective of this study was to evaluate the impact of freezing on the lyophilization of a concentrated formulation with a high fill depth. A model system consisting of a 15-mL fill of 15% (w/w) sulfobutylether 7-beta-cyclodextrin (SBECD) solution in a 30-mL vial was selected for this study. Various freezing methods including single-step freezing, two-step freezing with a super-cooling holding, annealing, vacuum-induced freezing, changing ice habit using tert-butyl-alcohol (TBA), ice nucleation with silver iodide (AgI), as well as combinations of some of the methods, were used in the lyophilization of this model system. This work demonstrated that the freezing process had a significant impact on primary drying rate and product quality of a concentrated formulation with a high fill depth. Annealing, vacuum-induced freezing, and addition of either TBA or an ice nucleating agent (AgI) to the formulation accelerated the subsequent ice sublimation process. Two-step freezing or addition of TBA improved the product quality by eliminating vertical heterogeneity within the cake. The combination of two-step freezing in conjunction with an annealing step was shown to be a method of choice for freezing in the lyophilization of a product with a high fill depth. In addition to being an effective method of freezing, it is most applicable for scaling up. An alternative approach is to add a certain amount of TBA to the formulation, if the TBA-formulation interaction or regulatory concerns can be demonstrated as not being an issue. An evaluation of vial size performed in this study showed that although utilizing large-diameter vials to reduce the fill depth can greatly shorten the cycle time of a single batch, it will substantially decrease the product throughput in a large-scale freeze-dryer.

  8. Comparison of step-by-step kinematics of resisted, assisted and unloaded 20-m sprint runs.

    PubMed

    van den Tillaar, Roland; Gamble, Paul

    2018-03-26

    This investigation examined step-by-step kinematics of sprint running acceleration. Using a randomised counterbalanced approach, 37 female team handball players (age 17.8 ± 1.6 years, body mass 69.6 ± 9.1 kg, height 1.74 ± 0.06 m) performed resisted, assisted and unloaded 20-m sprints within a single session. 20-m sprint times and step velocity, as well as step length, step frequency, contact and flight times of each step, were evaluated for each condition with a laser gun and an infrared mat. Almost all measured parameters were altered for each step under the resisted and assisted sprint conditions (η² ≥ 0.28). The exception was step frequency, which did not differ between assisted and normal sprints. Contact time, flight time and step frequency at almost each step were different between the 'fast' and 'slow' sub-groups (η² ≥ 0.22). Nevertheless, overall both groups responded similarly to the respective sprint conditions. No significant differences in step length were observed between groups for the respective condition. It is possible that continued exposure to assisted sprinting might allow the female team-sports players studied to adapt their coordination to the 'over-speed' condition and increase step frequency. It is notable that step-by-step kinematics in these sprints were easy to obtain using relatively inexpensive equipment with possibilities of direct feedback.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoang Duc, Albert K., E-mail: albert.hoangduc.ucl@gmail.com; McClelland, Jamie; Modat, Marc

    Purpose: The aim of this study was to assess whether clinically acceptable segmentations of organs at risk (OARs) in head and neck cancer can be obtained automatically and efficiently using the novel "similarity and truth estimation for propagated segmentations" (STEPS) algorithm compared to the traditional "simultaneous truth and performance level estimation" (STAPLE) algorithm. Methods: First, 6 OARs were contoured by 2 radiation oncologists in a dataset of 100 patients with head and neck cancer on planning computed tomography images. Each image in the dataset was then automatically segmented with STAPLE and STEPS using those manual contours. The Dice similarity coefficient (DSC) was then used to compare the accuracy of these automatic methods. Second, in a blind experiment, three trained physicians graded manual and automatic segmentations into one of the following three grades: clinically acceptable as determined by universal delineation guidelines (grade A), reasonably acceptable for clinical practice upon manual editing (grade B), and not acceptable (grade C). Finally, STEPS segmentations graded B were selected and one of the physicians manually edited them to grade A. Editing time was recorded. Results: Significant improvements in DSC can be seen when using the STEPS algorithm on large structures such as the brainstem, spinal canal, and left/right parotid compared to the STAPLE algorithm (all p < 0.001). In addition, across all three trained physicians, manual and STEPS segmentation grades were not significantly different for the brainstem, spinal canal, parotid (right/left), and optic chiasm (all p > 0.100). In contrast, STEPS segmentation grades were lower for the eyes (p < 0.001). Across all OARs and all physicians, STEPS produced segmentations graded as well as manual contouring at a rate of 83%, giving a lower bound on this rate of 80% with 95% confidence. The reduction in manual interaction time was on average 61% and 93% when automatic segmentations did and did not, respectively, require manual editing. Conclusions: The STEPS algorithm showed better performance than the STAPLE algorithm in segmenting OARs for radiotherapy of the head and neck. It can automatically produce clinically acceptable segmentations of OARs, with results as relevant as manual contouring for the brainstem, spinal canal, the parotids (left/right), and optic chiasm. A substantial reduction in manual labor was achieved when using STEPS even when manual editing was necessary.
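
    For reference, the accuracy metric used above is simple to compute; a minimal implementation of the Dice similarity coefficient on binary masks (our own sketch):

    import numpy as np

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        # DSC = 2 * |A intersect B| / (|A| + |B|); 1.0 for identical masks.
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    # Two overlapping toy "organ" masks.
    m1 = np.zeros((64, 64), bool); m1[10:40, 10:40] = True
    m2 = np.zeros((64, 64), bool); m2[15:45, 15:45] = True
    print(round(dice(m1, m2), 3))   # about 0.694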

  10. Estimating heterotrophic respiration at large scales: challenges, approaches, and next steps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond-Lamberty, Benjamin; Epron, Daniel; Harden, Jennifer W.

    2016-06-27

    Heterotrophic respiration (HR), the aerobic and anaerobic processes mineralizing organic matter, is a key carbon flux but one impossible to measure at scales significantly larger than small experimental plots. This impedes our ability to understand carbon and nutrient cycles, benchmark models, or reliably upscale point measurements. Given that a new generation of highly mechanistic, genomic-specific global models is not imminent, we suggest that a useful step to improve this situation would be the development of "Decomposition Functional Types" (DFTs). Analogous to plant functional types (PFTs), DFTs would abstract and capture important differences in HR metabolism and flux dynamics, allowing models to efficiently group and vary these characteristics across space and time. We argue that DFTs should be initially informed by top-down expert opinion, but ultimately developed using bottom-up, data-driven analyses, and provide specific examples of potential dependent and independent variables that could be used. We present and discuss an example clustering analysis to show how model-produced annual HR can be broken into distinct groups associated with global variability in biotic and abiotic factors, and demonstrate that these groups are distinct from already-existing PFTs. A similar analysis, incorporating observational data, could form a basis for future DFTs. Finally, we suggest next steps and critical priorities: collection and synthesis of existing data; more in-depth analyses combining open data with high-performance computing; rigorous testing of analytical results; and planning by the global modeling community for decoupling decomposition from fixed site data. These are all critical steps to build a foundation for DFTs in global models, thus providing the ecological and climate change communities with robust, scalable estimates of HR at large scales.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakel, Allen J.; Conner, Cliff; Quigley, Kevin

    One of the missions of the Reduced Enrichment for Research and Test Reactors (RERTR) program (and now the National Nuclear Security Administration's Material Management and Minimization program) is to facilitate the use of low enriched uranium (LEU) targets for 99Mo production. The conversion from highly enriched uranium (HEU) to LEU targets will require five to six times more uranium to produce an equivalent amount of 99Mo. The work discussed here addresses the technical challenges encountered in the treatment of uranyl nitrate hexahydrate (UNH)/nitric acid solutions remaining after the dissolution of LEU targets. Specifically, the focus of this work is the calcination of the uranium waste from 99Mo production using LEU foil targets and the Modified Cintichem Process. Work with our calciner system showed that a high furnace temperature, a large vent tube, and a mechanical shield are beneficial for calciner operation. One- and two-step direct calcination processes were evaluated. The high-temperature one-step process led to contamination of the calciner system. The two-step direct calcination process operated stably and resulted in a relatively large amount of material in the calciner cup. Chemically assisted calcination using peroxide was rejected for further work due to the difficulty in handling the products. Chemically assisted calcination using formic acid was rejected due to unstable operation. Chemically assisted calcination using oxalic acid was recommended, although a better understanding of its chemistry is needed. Overall, this work showed that the two-step direct calcination and the in-cup oxalic acid processes are the best approaches for the treatment of the UNH/nitric acid waste solutions remaining from the dissolution of LEU targets for 99Mo production.

  12. The General Alcoholics Anonymous Tools of Recovery: The Adoption of 12-Step Practices and Beliefs

    PubMed Central

    Greenfield, Brenna L.; Tonigan, J. Scott

    2013-01-01

    Working the 12 steps is widely prescribed for Alcoholics Anonymous (AA) members, although the relative merits of different methods for measuring step-work have received minimal attention and even less is known about how step-work predicts later substance use. The current study (1) compared endorsements of step-work on a face-valid or direct measure, the Alcoholics Anonymous Inventory (AAI), with an indirect measure of step-work, the General Alcoholics Anonymous Tools of Recovery (GAATOR), (2) evaluated the underlying factor structure of the GAATOR and changes in step-work over time, (3) examined changes in the endorsement of step-work over time, and (4) investigated how, if at all, 12-step-work predicted later substance use. New AA affiliates (N = 130) completed assessments at intake, 3, 6, and 9 months. Significantly more participants endorsed step-work on the GAATOR than on the AAI for nine of the 12 steps. An exploratory factor analysis revealed a two-factor structure for the GAATOR comprising Behavioral Step-Work and Spiritual Step-Work. Behavioral Step-Work did not change over time, but was predicted by having a sponsor, while Spiritual Step-Work decreased over time and increases were predicted by attending 12-step meetings or treatment. Behavioral Step-Work did not prospectively predict substance use. In contrast, Spiritual Step-Work predicted percent days abstinent, an effect that is consistent with recent work on the mediating effects of spiritual growth, AA, and increased abstinence. Behavioral and Spiritual Step-Work appear to be conceptually distinct components of step-work that have distinct predictors and unique impacts on outcomes. PMID:22867293

  13. Measuring border delay and crossing times at the US-Mexico border : part II. Step-by-step guidelines for implementing a radio frequency identification (RFID) system to measure border crossing and wait times.

    DOT National Transportation Integrated Search

    2012-06-01

    The purpose of these step-by-step guidelines is to assist in planning, designing, and deploying a system that uses radio frequency identification (RFID) technology to measure the time needed for commercial vehicles to complete the northbound border c...

  14. A Discrete Constraint for Entropy Conservation and Sound Waves in Cloud-Resolving Modeling

    NASA Technical Reports Server (NTRS)

    Zeng, Xi-Ping; Tao, Wei-Kuo; Simpson, Joanne

    2003-01-01

    Ideal cloud-resolving models accumulate little error. When their domain is so large that synoptic large-scale circulations are accommodated, they can be used for the simulation of the interaction between convective clouds and the large-scale circulations. This paper sets up a framework for such models, using moist entropy as a prognostic variable and employing conservative numerical schemes. The models possess no accumulative errors of thermodynamic variables when they comply with a discrete constraint on entropy conservation and sound waves. Put differently, the discrete constraint is related to the correct representation of the large-scale convergence and advection of moist entropy. Since air density is involved in entropy conservation and sound waves, the challenge is how to compute sound waves efficiently under the constraint. To address the challenge, a compensation method is introduced on the basis of a reference isothermal atmosphere whose governing equations are solved analytically. Stability analysis and numerical experiments show that the method allows the models to integrate efficiently with a large time step.

  15. Sampling large landscapes with small-scale stratification-User's Manual

    USGS Publications Warehouse

    Bart, Jonathan

    2011-01-01

    This manual explains procedures for partitioning a large landscape into plots, assigning the plots to strata, and selecting plots in each stratum to be surveyed. These steps are referred to as the "sampling large landscapes (SLL) process." We assume that users of the manual have a moderate knowledge of ArcGIS and Microsoft® Excel. The manual is written for a single user but in many cases, some steps will be carried out by a biologist designing the survey and some steps will be carried out by a quantitative assistant. Thus, the manual essentially may be passed back and forth between these users. The SLL process primarily has been used to survey birds, and we refer to birds as subjects of the counts. The process, however, could be used to count any objects.

  16. Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples

    NASA Astrophysics Data System (ADS)

    Petit, Johan; Lallemant, Lucile

    2017-05-01

    In transparent ceramics processing, the green body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, the water concentration gradient induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increased cracking probability. Thanks to the optimization of the drying step, large-size spinel samples were obtained.

  17. Mass imbalances in EPANET water-quality simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Michael J.; Janke, Robert; Taxon, Thomas N.

    EPANET is widely employed to simulate water quality in water distribution systems. However, the time-driven simulation approach used to determine concentrations of water-quality constituents provides accurate results, in general, only for small water-quality time steps; use of an adequately short time step may not be feasible. Overly long time steps can yield errors in concentrations and result in situations in which constituent mass is not conserved. Mass may not be conserved even when EPANET gives no errors or warnings. This paper explains how such imbalances can occur and provides examples of such cases; it also presents a preliminary event-driven approach that conserves mass with a water-quality time step that is as long as the hydraulic time step. Results obtained using the current approach converge, or tend to converge, to those obtained using the new approach as the water-quality time step decreases. Improving the water-quality routing algorithm used in EPANET could eliminate mass imbalances and related errors in estimated concentrations.
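
    The mechanism can be illustrated with a toy example (our own construction, not EPANET's actual routing code): sampling the outlet concentration once per water-quality step and holding it constant over the step mis-states the delivered mass whenever a concentration pulse is shorter than the step.

    Q = 1.0            # flow [L/s]
    c_pulse = 10.0     # pulse concentration [mg/L]
    t_pulse = 30.0     # pulse duration [s]
    true_mass = Q * c_pulse * t_pulse            # 300 mg actually leaves

    def sampled_mass(dt_wq, t_end=600.0, pulse_start=100.0):
        mass, t = 0.0, 0.0
        while t < t_end:
            # Outlet concentration sampled at the start of the step and
            # held constant for the whole step.
            c = c_pulse if pulse_start <= t < pulse_start + t_pulse else 0.0
            mass += Q * c * dt_wq
            t += dt_wq
        return mass

    for dt in (5.0, 60.0, 300.0):
        print(dt, sampled_mass(dt))   # 300 mg at small steps; 600 or 0 at large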

  18. Tracking vortices in superconductors: Extracting singularities from a discretized complex scalar field evolving in time

    DOE PAGES

    Phillips, Carolyn L.; Guo, Hanqi; Peterka, Tom; ...

    2016-02-19

    In type-II superconductors, the dynamics of magnetic flux vortices determine their transport properties. In the Ginzburg-Landau theory, vortices correspond to topological defects in the complex order parameter field. Earlier, we introduced a method for extracting vortices from the discretized complex order parameter field generated by a large-scale simulation of vortex matter. With this method, at a fixed time step, each vortex [simplistically, a one-dimensional (1D) curve in 3D space] can be represented as a connected graph extracted from the discretized field. Here we extend this method as a function of time as well. A vortex now corresponds to a 2D space-time sheet embedded in 4D space time that can be represented as a connected graph extracted from the discretized field over both space and time. Vortices that interact by merging or splitting correspond to disappearance and appearance of holes in the connected graph in the time direction. This method of tracking vortices, which makes no assumptions about the scale or behavior of the vortices, can track the vortices with a resolution as good as the discretization of the temporally evolving complex scalar field. In addition, even details of the trajectory between time steps can be reconstructed from the connected graph. With this form of vortex tracking, the details of vortex dynamics in a model of superconducting materials can be understood in greater detail than previously possible.

  19. Large field of view, fast and low dose multimodal phase-contrast imaging at high x-ray energy.

    PubMed

    Astolfo, Alberto; Endrizzi, Marco; Vittoria, Fabio A; Diemoz, Paul C; Price, Benjamin; Haig, Ian; Olivo, Alessandro

    2017-05-19

    X-ray phase contrast imaging (XPCI) is an innovative imaging technique which extends the contrast capabilities of 'conventional' absorption based x-ray systems. However, so far all XPCI implementations have suffered from one or more of the following limitations: low x-ray energies, small field of view (FOV) and long acquisition times. Those limitations relegated XPCI to a 'research-only' technique with an uncertain future in terms of large scale, high impact applications. We recently succeeded in designing, realizing and testing an XPCI system, which achieves significant steps toward simultaneously overcoming these limitations. Our system combines, for the first time, large FOV, high energy and fast scanning. Importantly, it is capable of providing high image quality at low x-ray doses, compatible with or even below those currently used in medical imaging. This extends the use of XPCI to areas which were unpractical or even inaccessible to previous XPCI solutions. We expect this will enable a long overdue translation into application fields such as security screening, industrial inspections and large FOV medical radiography - all with the inherent advantages of the XPCI multimodality.

  20. Highly Efficient Large-Scale Lentiviral Vector Concentration by Tandem Tangential Flow Filtration

    PubMed Central

    Cooper, Aaron R.; Patel, Sanjeet; Senadheera, Shantha; Plath, Kathrin; Kohn, Donald B.; Hollis, Roger P.

    2014-01-01

    Large-scale lentiviral vector (LV) concentration can be inefficient and time consuming, often involving multiple rounds of filtration and centrifugation. This report describes a simpler method using two tangential flow filtration (TFF) steps to concentrate liter-scale volumes of LV supernatant, achieving in excess of 2000-fold concentration in less than 3 hours with very high recovery (>97%). Large volumes of LV supernatant can be produced easily through the use of multi-layer flasks, each having a 1720 cm² surface area and producing ~560 mL of supernatant per flask. Combining the use of such flasks and TFF greatly simplifies large-scale production of LV. As a demonstration, the method is used to produce a very high titer LV (>10^10 TU/mL) and transduce primary human CD34+ hematopoietic stem/progenitor cells at high final vector concentrations with no overt toxicity. A complex LV (STEMCCA) for induced pluripotent stem cell generation is also concentrated from low initial titer and used to transduce and reprogram primary human fibroblasts with no overt toxicity. Additionally, a generalized and simple multiplexed real-time PCR assay is described for lentiviral vector titer and copy number determination. PMID:21784103

  1. In vitro and in vivo testing of a novel recessed-step catheter for reflux-free convection-enhanced drug delivery to the brain.

    PubMed

    Gill, T; Barua, N U; Woolley, M; Bienemann, A S; Johnson, D E; S O'Sullivan; Murray, G; Fennelly, C; Lewis, O; Irving, C; Wyatt, M J; Moore, P; Gill, S S

    2013-09-30

    The optimisation of convection-enhanced drug delivery (CED) to the brain is fundamentally reliant on minimising drug reflux. The aim of this study was to evaluate the performance of a novel reflux-resistant CED catheter incorporating a recessed step and to compare its performance to previously described stepped catheters. The in vitro performance of the recessed-step catheter was compared to a conventional "one-step" catheter with a single transition in outer diameter (OD) at the catheter tip, and a "two-step" design comprising two distal transitions in OD. The volumes of distribution and reflux were compared by performing infusions of trypan blue into agarose gels. The in vivo performance of the recessed-step catheter was then analysed in a large animal model by performing infusions of 0.2% gadolinium-DTPA in Large White/Landrace pigs. The recessed-step catheter demonstrated significantly higher volumes of distribution than the one-step and two-step catheters (p = 0.0001, one-way ANOVA). No reflux was detected until more than 100 μl had been delivered via the recessed-step catheter, whilst reflux was detected after infusion of only 25 μl via the two non-recessed catheters. The recessed-step design also showed superior reflux resistance to a conventional one-step catheter in vivo. Reflux-free infusions were achieved in the thalamus, putamen and white matter at a maximum infusion rate of 5 μl/min using the recessed-step design. The novel recessed-step catheter described in this study shows significant potential for the achievement of predictable high-volume, high-flow-rate infusions whilst minimising the risk of reflux. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. A novel grid-based mesoscopic model for evacuation dynamics

    NASA Astrophysics Data System (ADS)

    Shi, Meng; Lee, Eric Wai Ming; Ma, Yi

    2018-05-01

    This study presents a novel grid-based mesoscopic model for evacuation dynamics. In this model, the evacuation space is discretised into larger cells than those used in microscopic models. This approach directly computes the dynamic changes of crowd densities in cells over the course of an evacuation. The density flow is driven by the density-speed correlation. The computation is faster than in traditional cellular automata evacuation models, which determine density by computing the movements of each pedestrian. To demonstrate the feasibility of this model, we apply it to a series of practical scenarios and conduct a parameter sensitivity study of the effect of changes in the time step δ. The simulation results show that within the valid range of δ, changing δ has only a minor impact on the simulation. The model also makes it possible to directly acquire key information such as bottleneck areas from a time-varied dynamic density map, even when a relatively large time step is adopted. We use the commercial software AnyLogic to evaluate the model. The result shows that the mesoscopic model is more efficient than the microscopic model and provides more in-situ details (e.g., pedestrian movement patterns) than macroscopic models.
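
    A one-dimensional sketch of such a mesoscopic update (our own illustration; the linear density-speed relation and all parameter values are assumptions): cell densities evolve through fluxes given by the density-speed correlation, rather than by moving individual pedestrians.

    import numpy as np

    def speed(rho, v_max=1.5, rho_max=5.0):
        # Density-speed correlation: walking speed drops as a cell fills.
        return v_max * np.maximum(0.0, 1.0 - rho / rho_max)

    dx, dt = 1.0, 0.25          # cell size [m], time step [s]
    rho = np.zeros(50)
    rho[:10] = 4.0              # crowd initially packed near the entrance

    for _ in range(400):
        flux = rho * speed(rho)                  # people per second moving right
        out = np.minimum(flux * dt / dx, rho)    # cannot move more than present
        rho -= out
        rho[1:] += out[:-1]                      # what leaves cell i enters i+1;
                                                 # outflow from the last cell exits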

  3. Implicit flux-split Euler schemes for unsteady aerodynamic analysis involving unstructured dynamic meshes

    NASA Technical Reports Server (NTRS)

    Batina, John T.

    1990-01-01

    Improved algorithms for the solution of the time-dependent Euler equations are presented for unsteady aerodynamic analysis involving unstructured dynamic meshes. The improvements were recently made to the spatial and temporal discretizations used by unstructured-grid flow solvers. The spatial discretization involves a flux-split approach which is naturally dissipative and captures shock waves sharply with at most one grid point within the shock structure. The temporal discretization involves an implicit time-integration scheme using a Gauss-Seidel relaxation procedure which is computationally efficient for either steady or unsteady flow problems. For example, very large time steps may be used for rapid convergence to steady state, and the step size for unsteady cases may be selected for temporal accuracy rather than for numerical stability. Steady and unsteady flow results are presented for the NACA 0012 airfoil to demonstrate applications of the new Euler solvers. The unsteady results were obtained for the airfoil pitching harmonically about the quarter chord. The resulting instantaneous pressure distributions and lift and moment coefficients during a cycle of motion compare well with experimental data. The paper presents a description of the Euler solvers along with results and comparisons which assess the capability.
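
    To illustrate why implicit time integration with Gauss-Seidel relaxation tolerates very large time steps, the sketch below applies backward-Euler upwind ("flux-split") differencing to 1D linear advection at a CFL number of 5, far above the explicit limit of 1. This is a hedged model problem, not the paper's unstructured-mesh Euler solver.

```python
import numpy as np

# Backward-Euler, first-order upwind discretisation of u_t + a u_x = 0:
#   (1 + nu) u_i^{n+1} - nu u_{i-1}^{n+1} = u_i^n,   nu = a*dt/dx
# solved by a forward Gauss-Seidel sweep. For this 1D upwind system the matrix
# is lower bidiagonal, so one forward sweep solves it exactly; on unstructured
# meshes several sweeps per time step would be used.
a, dx, dt = 1.0, 0.01, 0.05
nu = a * dt / dx                        # CFL number = 5 (explicit limit is 1)
n = 200
x = np.arange(n) * dx
u = np.exp(-((x - 0.5) ** 2) / 0.005)   # Gaussian pulse

for _ in range(20):                     # 20 large implicit steps
    rhs = u.copy()
    for i in range(1, n):               # forward Gauss-Seidel sweep
        u[i] = (rhs[i] + nu * u[i - 1]) / (1.0 + nu)
print(f"peak after advection: {u.max():.3f} near x = {x[u.argmax()]:.2f}")
```

    The scheme stays stable at five times the explicit CFL limit; the price, visible in the reduced peak, is the added dissipation of a first-order implicit step.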

  4. Four decades of implicit Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan B.

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. We also consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.
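
    For a concrete sense of the time-step dependence discussed above, the sketch below evaluates the Fleck factor f = 1/(1 + αβcΔtσ_p) that governs the IMC split between effective absorption and effective scattering: larger Δt drives f down and effective scattering up. The parameter values are arbitrary illustrations, not taken from the review.

```python
# Illustrative evaluation of the Fleck factor. Larger effective scattering
# (1 - f) is what stabilizes large time steps but also underlies the
# maximum-principle issues mentioned above. All values are assumed.
c = 3.0e10       # speed of light (cm/s)
alpha = 1.0      # time-centering parameter (fully implicit)
beta = 4.0       # beta = 4*a*T^3 / (rho*c_v), assumed value
sigma_p = 0.5    # Planck-mean opacity (1/cm), assumed

for dt in [1e-12, 1e-10, 1e-8]:
    f = 1.0 / (1.0 + alpha * beta * c * dt * sigma_p)
    print(f"dt = {dt:.0e} s  ->  Fleck factor f = {f:.4f}, "
          f"effective scattering = {1 - f:.4f}")
```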

  5. Multiple magnetization steps and plateaus across the antiferromagnetic to ferromagnetic transition in La1−xCexFe12B6: Time delay of the metamagnetic transitions

    NASA Astrophysics Data System (ADS)

    Diop, L. V. B.; Isnard, O.

    2018-01-01

    The effects of cerium substitution on the structural and magnetic properties of the La1−xCexFe12B6 (0 ≤ x ≤ 0.175) series of compounds have been studied. All of the compounds exhibit an antiferromagnetic ground state below the Néel temperature TN ≈ 36 K. Both antiferromagnetic and paramagnetic states can be transformed into the ferromagnetic state irreversibly and reversibly depending on the magnitude of the applied magnetic field, the temperature, and the direction of their changes. Of particular interest is the low-temperature magnetization process. This process is discontinuous and involves unexpected huge metamagnetic transitions consisting of a succession of sharp magnetization steps separated by plateaus, giving rise to an unusual avalanche-like behavior. At constant temperature and magnetic field, the evolution of the magnetization with time displays a spectacular spontaneous jump after a long incubation time. La1−xCexFe12B6 compounds exhibit a unique combination of exceptional features such as large thermal hysteresis, giant magnetization jumps, and remarkably huge magnetic hysteresis for the field-induced first-order metamagnetic transition.

  6. Reduced-order aeroelastic model for limit-cycle oscillations in vortex-dominated unsteady airfoil flows

    NASA Astrophysics Data System (ADS)

    Suresh Babu, Arun Vishnu; Ramesh, Kiran; Gopalarathnam, Ashok

    2017-11-01

    In previous research, Ramesh et al. (JFM, 2014) developed a low-order discrete vortex method for modeling unsteady airfoil flows with intermittent leading-edge vortex (LEV) shedding using a leading-edge suction parameter (LESP). LEV shedding is initiated using discrete vortices (DVs) whenever the LESP exceeds a critical value. In subsequent research, the method was successfully employed by Ramesh et al. (JFS, 2015) to predict aeroelastic limit-cycle oscillations in airfoil flows dominated by intermittent LEV shedding. When applied to flows that require a large number of time steps, the computational cost increases due to the growing vortex count. In this research, we apply an amalgamation strategy to actively control the DV count and thereby reduce simulation time. A pair each of LEVs and trailing-edge vortices (TEVs) is amalgamated at every time step. The ideal pairs for amalgamation are identified based on the requirement that the flowfield in the vicinity of the airfoil is least affected (Spalart, 1988). Instead of placing the amalgamated vortex at the centroid, we place it at an optimal location to ensure that the leading-edge suction and the airfoil bound circulation are conserved. Results of the initial study are promising.
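
    A minimal sketch of the amalgamation step is given below: two discrete vortices are replaced by one carrying their combined circulation. The centroid placement and the simplified pair-selection cost are assumptions for illustration; the study instead selects pairs via a Spalart-type near-field criterion and solves for an optimal (non-centroid) placement that conserves leading-edge suction and bound circulation.

```python
import numpy as np

# Merge two discrete vortices into one that carries the total circulation.
# Centroid placement is the classical choice; the paper instead optimises the
# location to conserve leading-edge suction and airfoil bound circulation.
def amalgamate(z1, g1, z2, g2):
    g = g1 + g2
    z = (g1 * z1 + g2 * z2) / g          # circulation-weighted centroid
    return z, g

def best_pair(zs, gs):
    """Pick the pair whose merger least perturbs the near field (simplified
    Spalart-style cost: product of strengths times separation)."""
    n = len(zs)
    cost = lambda i, j: abs(gs[i] * gs[j]) * abs(zs[i] - zs[j])
    return min(((i, j) for i in range(n) for j in range(i + 1, n)),
               key=lambda p: cost(*p))

zs = np.array([0.10 + 0.20j, 0.12 + 0.22j, 1.00 + 1.00j])  # positions (complex)
gs = np.array([0.05, 0.04, 0.50])                          # circulations
i, j = best_pair(zs, gs)
z_new, g_new = amalgamate(zs[i], gs[i], zs[j], gs[j])
print(f"merged vortices {i} and {j}: z = {complex(z_new):.3f}, "
      f"gamma = {g_new:.3f}")
```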

  7. Molecular simulation of small Knudsen number flows

    NASA Astrophysics Data System (ADS)

    Fei, Fei; Fan, Jing

    2012-11-01

    The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Like the DSMC method, however, that approach suffers from statistical noise. To address this problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint and to obtain the flow velocity and temperature by sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ∼ 10⁻³-10⁻⁴ have been investigated. It is shown that the IP calculations are not only accurate but also efficient because they make it possible to use a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.
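
    The sketch below conveys the flavour of the Fokker-Planck alternative mentioned above: pairwise collisions are replaced by an Ornstein-Uhlenbeck (Langevin) velocity update that can be integrated exactly over the step, so the time step need not resolve the mean collision time. The relaxation form and all parameters are assumptions for illustration and do not reproduce the cited models in detail.

```python
import numpy as np

# Particle velocities relax toward the local bulk velocity via an exact
# Ornstein-Uhlenbeck update, valid even when dt >> tau (mean collision time).
rng = np.random.default_rng(0)
kT_over_m = 1.0      # temperature scale, assumed
tau = 1e-3           # relaxation (mean collision) time, assumed
dt = 1e-2            # time step 10x larger than tau
u_bulk = 0.5         # local bulk velocity, assumed

v = rng.normal(0.0, 2.0, size=100_000)   # initially too-hot velocity sample
a = np.exp(-dt / tau)                    # exact OU decay factor over one step
for _ in range(10):
    noise = rng.normal(size=v.size)
    v = u_bulk + (v - u_bulk) * a + np.sqrt(kT_over_m * (1 - a**2)) * noise
print(v.mean(), v.var())   # relaxes to the bulk velocity and equilibrium variance
```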

  8. An exact and efficient first passage time algorithm for reaction-diffusion processes on a 2D-lattice

    NASA Astrophysics Data System (ADS)

    Bezzola, Andri; Bales, Benjamin B.; Alkire, Richard C.; Petzold, Linda R.

    2014-01-01

    We present an exact and efficient algorithm for reaction-diffusion-nucleation processes on a 2D lattice. The algorithm makes use of first passage time (FPT) to replace the computationally intensive simulation of diffusion hops in kinetic Monte Carlo (KMC) by larger jumps when particles are far away from step-edges or other particles. Our approach computes exact probability distributions of jump times and target locations in a closed-form formula, based on the eigenvectors and eigenvalues of the corresponding 1D transition matrix, maintaining atomic-scale resolution of the resulting shapes of deposit islands. We have applied our method to three test cases of electrodeposition: pure diffusional aggregation for large ranges of diffusivity rates and for simulation domain sizes of up to 4096×4096 sites, the effect of diffusivity on island shapes and sizes in combination with KMC edge diffusion, and the calculation of an exclusion zone in front of a step-edge, confirming statistical equivalence to standard KMC simulations. The algorithm achieves significant speedup compared to standard KMC for cases where particles diffuse over long distances before nucleating with other particles or being captured by larger islands.
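
    The essence of the FPT construction can be shown in 1D: for a walker between two absorbing boundaries, the exit-time distribution follows from the eigendecomposition of the interior transition matrix, and a single uniform random number then samples a whole first-passage jump in place of many individual hops. This is a simplified illustration under assumed symmetric hop rates, not the paper's full 2D algorithm.

```python
import numpy as np

# Survival probability S(t) = 1^T Q^t e0 for a symmetric random walk on an
# interval with absorbing ends, evaluated from the eigendecomposition of the
# tridiagonal, substochastic interior transition matrix Q.
n = 9                                           # interior lattice sites
Q = 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))    # hop left/right with prob 1/2
w, V = np.linalg.eigh(Q)                        # eigenvalues and eigenvectors
e0 = np.zeros(n); e0[n // 2] = 1.0              # walker starts at the centre
c = V.T @ e0                                    # expansion coefficients

def survival(t):
    """Probability the walker has not yet exited after t hops."""
    return float(np.ones(n) @ (V @ (w**t * c)))

# inverse-CDF sampling: one uniform number yields the whole exit time
u, t = np.random.rand(), 0
while survival(t) > u:
    t += 1
print("sampled first passage time:", t, "hops")
```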

  9. Implicit flux-split Euler schemes for unsteady aerodynamic analysis involving unstructured dynamic meshes

    NASA Technical Reports Server (NTRS)

    Batina, John T.

    1990-01-01

    Improved algorithms for the solution of the time-dependent Euler equations are presented for unsteady aerodynamic analysis involving unstructured dynamic meshes. Improvements were recently made to the spatial and temporal discretizations used by unstructured-grid flow solvers. The spatial discretization involves a flux-split approach which is naturally dissipative and captures shock waves sharply with at most one grid point within the shock structure. The temporal discretization involves an implicit time-integration scheme using a Gauss-Seidel relaxation procedure which is computationally efficient for either steady or unsteady flow problems. For example, very large time steps may be used for rapid convergence to steady state, and the step size for unsteady cases may be selected for temporal accuracy rather than for numerical stability. Steady and unsteady flow results are presented for the NACA 0012 airfoil to demonstrate applications of the new Euler solvers. The unsteady results were obtained for the airfoil pitching harmonically about the quarter chord. The resulting instantaneous pressure distributions and lift and moment coefficients during a cycle of motion compare well with experimental data. A description of the Euler solvers is presented along with results and comparisons which assess the capability.

  10. Four decades of implicit Monte Carlo

    DOE PAGES

    Wollaber, Allan B.

    2016-02-23

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. We also consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.

  11. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large numbers of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need to couple two different temporal scales, given that in hydrosystem modelling monthly simulation steps are typically adopted, yet a faithful representation of the energy balance (i.e. energy production vs. demand) requires a much finer resolution (e.g. hourly). Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of great length, in order to assess the system performance in terms of reliability and risk with satisfactory accuracy. To address these issues, we propose an effective and efficient modelling framework, the key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem at each individual time step and solving each local sub-problem with very fast linear network programming algorithms; and (c) the substantial decrease of the required number of function evaluations for detecting the optimal management policy, using an innovative, surrogate-assisted global optimization approach.
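
    As a toy illustration of point (b), the sketch below poses a single time step's combined water-energy allocation as a small linear program (splitting a reservoir release between supply and hydropower) that a fast LP or network solver dispatches inside the outer simulation loop. All variable names and coefficients are invented for the example, not taken from the authors' framework.

```python
from scipy.optimize import linprog

# One time step's allocation as a tiny LP. Variables:
#   x = [water_to_supply, water_to_power, energy_deficit]
# Maximise supplied water while heavily penalising unmet energy demand.
release_max = 100.0      # available release this step (hm^3), assumed
demand_supply = 40.0     # municipal water demand (hm^3), assumed
energy_target = 150.0    # energy demand (GWh), assumed
gwh_per_hm3 = 2.5        # hydropower conversion factor, assumed

res = linprog(
    c=[-1.0, 0.0, 10.0],               # minimise: -supply + 10 * deficit
    A_ub=[[1.0, 1.0, 0.0]],            # total release cannot exceed the limit
    b_ub=[release_max],
    A_eq=[[0.0, gwh_per_hm3, 1.0]],    # hydropower output + deficit = target
    b_eq=[energy_target],
    bounds=[(0, demand_supply), (0, None), (0, None)],
)
print(res.x)   # e.g. [40, 60, 0]: full supply, 60 hm^3 to power, no deficit
```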

  12. Improving efficiency and safety in external beam radiation therapy treatment delivery using a Kaizen approach.

    PubMed

    Kapur, Ajay; Adair, Nilda; O'Brien, Mildred; Naparstek, Nikoleta; Cangelosi, Thomas; Zuvic, Petrina; Joseph, Sherin; Meier, Jason; Bloom, Beatrice; Potters, Louis

    Modern external beam radiation therapy treatment delivery processes potentially increase the number of tasks to be performed by therapists and thus opportunities for errors, yet the need to treat a large number of patients daily requires a balanced allocation of time per treatment slot. The goal of this work was to streamline the underlying workflow in such time-interval constrained processes to enhance both execution efficiency and active safety surveillance using a Kaizen approach. A Kaizen project was initiated by mapping the workflow within each treatment slot for 3 Varian TrueBeam linear accelerators. More than 90 steps were identified, and average execution times for each were measured. The time-consuming steps were stratified into a 2 × 2 matrix arranged by potential workflow improvement versus the level of corrective effort required. A work plan was created to launch initiatives with high potential for workflow improvement but modest effort to implement. Time spent on safety surveillance and average durations of treatment slots were used to assess corresponding workflow improvements. Three initiatives were implemented to mitigate unnecessary therapist motion, overprocessing of data, and wait time for data transfer defects, respectively. A fourth initiative was implemented to make the division of labor by treating therapists as well as peer review more explicit. The average duration of treatment slots reduced by 6.7% in the 9 months following implementation of the initiatives (P = .001). A reduction of 21% in duration of treatment slots was observed on 1 of the machines (P < .001). Time spent on safety reviews remained the same (20% of the allocated interval), but the peer review component increased. The Kaizen approach has the potential to improve operational efficiency and safety with quick turnaround in radiation therapy practice by addressing non-value-adding steps characteristic of individual department workflows. Higher effort opportunities are identified to guide continual downstream quality improvements. Copyright © 2017 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.

  13. Girls' physical activity levels during organized sports in Australia.

    PubMed

    Guagliano, Justin M; Rosenkranz, Richard R; Kolt, Gregory S

    2013-01-01

    The primary aim of this study was to objectively examine the physical activity (PA) levels of girls during organized sports (OS) and to compare the levels between games and practices for the same participants. The secondary aims of this study were to document lesson context and coach behavior during practices and games. Participants were 94 girls recruited from 10 teams in three OS (netball, basketball, and soccer) from the western suburbs of Sydney. Each participant wore an ActiGraph GT3X monitor for the duration of one practice and one game. The System for Observing Fitness Instruction Time was concurrently used to document lesson context and coach behavior. Girls spent a significantly higher percentage of time in moderate-to-vigorous PA (MVPA) during practices compared with games (33.8% vs 30.6%; t = 2.94, P < 0.05). Girls spent approximately 20 min·h⁻¹ in MVPA during practices and approximately 18 min·h⁻¹ in MVPA during games. An average of 2957 and 2702 steps per hour were accumulated during practices and games, respectively. However, girls spent roughly two-thirds of their OS time in light PA or sedentary. On the basis of the System for Observing Fitness Instruction Time findings, coaches spent a large proportion of practice time in management (15.0%) and knowledge delivery (18.5%). An average of 13.0 and 15.8 occurrences per hour in which coaches promoted PA were observed during games and practices, respectively. For every hour of game play or practice time, girls accumulated approximately one third of the recommended 60 min of MVPA and approximately one quarter of the 12,000 steps that girls are recommended to accumulate daily. For this population, OS seems to make a substantial contribution to the recommended amounts of MVPA and steps for participating girls. OS alone, however, does not provide amounts of PA sufficient to meet daily recommendations for adolescent girls.

  14. "Stepped care": a health technology solution for delivering cognitive behavioral therapy as a first line insomnia treatment.

    PubMed

    Espie, Colin A

    2009-12-01

    There is a large body of evidence that Cognitive Behavioral Therapy for insomnia (CBT) is an effective treatment for persistent insomnia. However, despite two decades of research it is still not readily available, and there are no immediate signs that this situation is about to change. This paper proposes that a service delivery model, based on "stepped care" principles, would enable this relatively scarce healthcare expertise to be applied in a cost-effective way to achieve optimal development of CBT services and best clinical care. The research evidence on methods of delivering CBT, and the associated clinical leadership roles, is reviewed. On this basis, self-administered CBT is posited as the "entry level" treatment for stepped care, with manualized, small-group CBT delivered by nurses at the next level. Overall, a hierarchy comprising five levels of CBT stepped care is suggested. Allocation to a particular level should reflect assessed need, which in turn represents increased resource requirements in terms of time, cost and expertise. Stepped care models must also be capable of "referring" people upstream where there is an incomplete therapeutic response to a lower-level intervention. Ultimately, the challenge is for CBT to be delivered competently and effectively in diversified formats on a whole-population basis. That is, it needs to become "scalable". This will require a robust approach to clinical governance.

  15. Developing stepped care treatment for depression (STEPS): study protocol for a pilot randomised controlled trial.

    PubMed

    Hill, Jacqueline J; Kuyken, Willem; Richards, David A

    2014-11-20

    Stepped care is recommended and implemented as a means to organise depression treatment. Compared with alternative systems, it is assumed to achieve equivalent clinical effects and greater efficiency. However, no trials have examined these assumptions. A fully powered trial of stepped care compared with intensive psychological therapy is required but a number of methodological and procedural uncertainties associated with the conduct of a large trial need to be addressed first. STEPS (Developing stepped care treatment for depression) is a mixed methods study to address uncertainties associated with a large-scale evaluation of stepped care compared with high-intensity psychological therapy alone for the treatment of depression. We will conduct a pilot randomised controlled trial with an embedded process study. Quantitative trial data on recruitment, retention and the pathway of patients through treatment will be used to assess feasibility. Outcome data on the effects of stepped care compared with high-intensity therapy alone will inform a sample size calculation for a definitive trial. Qualitative interviews will be undertaken to explore what people think of our trial methods and procedures and the stepped care intervention. A minimum of 60 patients with Major Depressive Disorder will be recruited from an Improving Access to Psychological Therapies service and randomly allocated to receive stepped care or intensive psychological therapy alone. All treatments will be delivered at clinic facilities within the University of Exeter. Quantitative patient-related data on depressive symptoms, worry and anxiety and quality of life will be collected at baseline and 6 months. The pilot trial and interviews will be undertaken concurrently. Quantitative and qualitative data will be analysed separately and then integrated. The outcomes of this study will inform the design of a fully powered randomised controlled trial to evaluate the effectiveness and efficiency of stepped care. Qualitative data on stepped care will be of immediate interest to patients, clinicians, service managers, policy makers and guideline developers. A more informed understanding of the feasibility of a large trial will be obtained than would be possible from a purely quantitative (or qualitative) design. Current Controlled Trials ISRCTN66346646 registered on 2 July 2014.

  16. Cross-current leaching of indium from end-of-life LCD panels.

    PubMed

    Rocchetti, Laura; Amato, Alessia; Fonti, Viviana; Ubaldini, Stefano; De Michelis, Ida; Kopacek, Bernd; Vegliò, Francesco; Beolchini, Francesca

    2015-08-01

    Indium is a critical element mainly produced as a by-product of zinc mining, and it is largely used in the production of liquid crystal display (LCD) panels. End-of-life LCDs represent a possible source of indium in the field of urban mining. In the present paper, we apply, for the first time, cross-current leaching to mobilize indium from end-of-life LCD panels. We carried out a series of treatments to leach indium. The best leaching conditions were 2 M sulfuric acid at 80 °C for 10 min, which allowed us to completely mobilize indium. Taking into account the low indium content of end-of-life LCDs, about 100 ppm, a single leaching step is not cost-effective. We tested 6 steps of cross-current leaching: in the first step indium leaching was complete, in the second step it was in the range of 85-90%, and by the sixth step it was about 50-55%. The indium concentration in the leachate was about 35 mg/L after the first leaching step, almost 2-fold higher at the second step and about 3-fold higher at the fifth step. We then hypothesized scaling the cross-current leaching process up to 10 steps, followed by cementation with zinc to recover indium. In this simulation, the indium recovery process was advantageous from both an economic and an environmental point of view. Indeed, cross-current leaching made it possible to concentrate indium, save reagents, and reduce CO2 emissions (with 10 steps we assessed that the emission of about 90 kg CO2-eq. could be avoided) thanks to the recovery of indium. This new strategy represents a useful approach for the secondary production of indium from waste LCD panels. Copyright © 2015 Elsevier Ltd. All rights reserved.
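
    A back-of-the-envelope sketch of the cross-current mass balance follows: the same leachate contacts a fresh batch of panel powder at each step, so indium accumulates while the per-step recovery declines. The linear efficiency decay is an assumption loosely inspired by, not fitted to, the reported values.

```python
# Toy cross-current leaching model: the leachate is reused on fresh solid at
# every step; per-step recovery is assumed to decay linearly from 100% toward
# the ~50% reported around step 6. Illustrative only.
in_per_batch = 35.0                  # mg/L contributed at 100% recovery (reported)
conc = 0.0
for step in range(1, 11):            # hypothesized 10-step scale-up
    efficiency = max(0.5, 1.0 - 0.1 * (step - 1))
    conc += efficiency * in_per_batch
    print(f"step {step:2d}: recovery {efficiency:4.0%}, In = {conc:6.1f} mg/L")
```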

  17. Time series analysis of Mexico City subsidence constrained by radar interferometry

    NASA Astrophysics Data System (ADS)

    Doin, Marie-Pierre; Lopez-Quiroz, Penelope; Yan, Yajing; Bascou, Pascale; Pinel, Virginie

    2010-05-01

    In Mexico City, subsidence rates reach up to 40 cm/yr, mainly due to soil compaction driven by the over-exploitation of the Mexico Basin aquifer. The Mexico Valley, an endoreic basin surrounded by mountains, was in the past covered by large lakes. After the Spanish conquest, the lakes almost completely disappeared, being progressively replaced by the buildings of the current Mexican capital. The simplified hydrogeologic structure includes a superficial 50 to 300 m thick lacustrine aquitard overlying a thicker aquifer made of alluvial deposits. The aquitard layer plays a crucial role in the subsidence process due to the extremely high compressibility of its clay deposits, separated by a less compressible sand layer in which the biggest buildings of the city are anchored. The aquifer over-exploitation leads to a large-scale 30 m drawdown of its piezometric level, inducing downward water flow in the clays and yielding compaction and subsidence. In order to quantitatively link subsidence to water pumping, the Mexico City subsidence needs to be mapped and analyzed through space and time. We map its spatial and temporal patterns by differential radar interferometry, using 38 ENVISAT images acquired between the end of 2002 and the beginning of 2007. We employ both a Permanent Scatterer (PS) and a small baseline (SBAS) approach. The main difficulty lies in the severe unwrapping problems, mostly due to the high deformation rate. We develop a specific SBAS approach based on 71 differential interferograms with a perpendicular baseline smaller than 500 m and a temporal baseline smaller than 9 months, forming a redundant network linking all images: (1) To help the unwrapping step, we use the fact that the deformation shape is stable for similar time intervals during the studied period. As a result, a stack of the five best interferograms can be used to reduce the number of fringes in wrapped interferograms. (2) Based on the redundancy of the interferometric database, we quantify the unwrapping errors for each pixel and show that they are strongly decreased by iterating the unwrapping process. (3) Finally, we present a new algorithm for time series analysis that differs from classical SVD decomposition and is best suited to the present database. Accurate deformation time series are then derived over the metropolitan area of the city with a spatial resolution of 30 × 30 m. We also use the Gamma-PS software on the same data set. The phase differences are unwrapped within small patches with respect to a reference point chosen in each patch, whose phase is in turn unwrapped relative to a reference point common to the whole area of interest. After removing the modelled contribution of the linear displacement rate and DEM error, residual interferograms presenting unwrapping errors because of a strong residual orbital ramp or atmospheric phase screen are spatially unwrapped by a minimum cost-flow algorithm. The next steps are to estimate and remove the residual orbital ramp and to apply a temporal low-pass filter to remove atmospheric contributions. The step-by-step comparison of the SBAS and PS approaches shows the complementarity of the two methods. The SBAS analysis provides subsidence rates with mm/yr accuracy over the whole basin, together with the nonlinear temporal behavior of the subsidence, albeit at the expense of some spatial regularization. The PS method provides locally accurate, point-wise deformation rates, but fails in this case to yield a good large-scale map or the nonlinear temporal behavior of the subsidence. We conclude that the relative contrast in subsidence between individual buildings and infrastructure must be relatively small, on average of the order of 5 mm/yr.
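
    The core of any SBAS-type time series inversion can be written in a few lines: each unwrapped interferogram constrains the phase difference between two acquisition dates, and stacking all pairs gives a per-pixel linear system solved in the least-squares sense. The sketch below shows this textbook baseline with invented dates and values; the paper's algorithm is a tailored alternative to plain SVD and is not reproduced here.

```python
import numpy as np

# Each interferogram (i, j) measures phi[j] - phi[i]; stacking gives A*phi = d,
# solved per pixel relative to the first date (phi[0] = 0 by convention).
dates = [0, 35, 70, 105, 140]                               # epochs (days)
pairs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]    # small-baseline net
d = np.array([-0.8, -1.7, -0.9, -1.6, -1.9, -1.0])          # unwrapped phase (rad)

A = np.zeros((len(pairs), len(dates) - 1))   # unknowns: phase after date 0
for k, (i, j) in enumerate(pairs):
    if j > 0: A[k, j - 1] += 1.0
    if i > 0: A[k, i - 1] -= 1.0

phi, *_ = np.linalg.lstsq(A, d, rcond=None)  # least-squares phase history
print(np.concatenate([[0.0], phi]))          # time series relative to date 0
```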

  18. Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok K.; Ravindran, S. S.

    2017-01-01

    Fluid and thermal transients found in rocket propulsion systems, such as propellant feedline systems, are complex processes involving fast phases followed by slow phases. Their time-accurate computation therefore requires the use of short time steps initially, followed by much larger time steps. Yet there are instances that involve fast-slow-fast phases. In this paper, we present a feedback-control-based adaptive time stepping algorithm and discuss its use in network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and by feedback. In order to demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chilldown in pipelines. Our comparison and validation demonstrate the accuracy and efficiency of this adaptive strategy.
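
    A minimal sketch of the feedback idea follows: after every step, the relative change of a monitored key variable is compared with a target, and the next step size is rescaled accordingly within growth and shrink limits, so the fast initial transient gets small steps and the slow phase automatically gets large ones. The implicit-Euler test problem and all tuning constants are illustrative assumptions, not the paper's algorithm.

```python
import math

# Monitored variable y obeys y' = -lam*(y - cos t): a fast transient toward
# cos(t) followed by slow tracking. Implicit Euler is used so stability does
# not constrain dt; the feedback law alone sets the step size.
lam, target = 50.0, 1e-3     # stiffness and target relative change per step
t, y, dt = 0.0, 0.0, 1e-4    # start with a deliberately tiny step
steps = 0
while t < 2.0:
    t_new = t + dt
    y_new = (y + lam * dt * math.cos(t_new)) / (1.0 + lam * dt)  # implicit Euler
    change = abs(y_new - y) / max(abs(y_new), 1e-12)
    # feedback: rescale dt toward the target change, clamped to [0.5x, 2x]
    dt *= min(2.0, max(0.5, math.sqrt(target / max(change, 1e-15))))
    t, y, steps = t_new, y_new, steps + 1
print(f"{steps} steps; final dt = {dt:.2e} (grown from 1e-4 in the slow phase)")
```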

  19. The stepping behavior analysis of pedestrians from different age groups via a single-file experiment

    NASA Astrophysics Data System (ADS)

    Cao, Shuchao; Zhang, Jun; Song, Weiguo; Shi, Chang'an; Zhang, Ruifang

    2018-03-01

    The stepping behavior of pedestrians from different age compositions in a single-file experiment is investigated in this paper. The relations between step length, step width and stepping time are analyzed using a step measurement method based on the curvature of the trajectory. The relations of velocity-step width, velocity-step length and velocity-stepping time for different age groups are discussed and compared with previous studies. Finally, the effects of pedestrian gender and height on the stepping laws and fundamental diagrams are analyzed. The study is helpful for understanding the dynamics of pedestrian movement; it also offers experimental data for developing a microscopic model of pedestrian movement that accounts for stepping behavior.
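
    The sketch below illustrates the curvature-based step measurement on a synthetic trajectory: in single-file motion the body sways laterally once per step, so local maxima of the discrete curvature mark step boundaries, from which step length and stepping time follow. The synthetic sway and the simple peak test are assumptions, not the paper's exact procedure.

```python
import numpy as np

# Synthetic head trajectory: steady forward motion plus lateral sway that
# completes one full cycle every two steps (assumed rate: 1.8 steps/s).
fps = 25.0
t = np.arange(0.0, 10.0, 1.0 / fps)
x = 1.0 * t                                # forward position (m)
y = 0.03 * np.sin(2 * np.pi * 0.9 * t)     # lateral sway (m)

# discrete curvature of the parametric trajectory (x(t), y(t))
dx, dy = np.gradient(x), np.gradient(y)
ddx, ddy = np.gradient(dx), np.gradient(dy)
kappa = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# curvature maxima (at the sway extremes, one per step) mark step boundaries
peaks = [i for i in range(1, len(kappa) - 1)
         if kappa[i] > kappa[i - 1] and kappa[i] > kappa[i + 1]]
stepping_time = np.diff(t[peaks])
step_length = np.hypot(np.diff(x[peaks]), np.diff(y[peaks]))
print(f"mean stepping time: {stepping_time.mean():.2f} s, "
      f"mean step length: {step_length.mean():.2f} m")
```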

  20. An Advanced simulation Code for Modeling Inductive Output Tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thuc Bui; R. Lawrence Ives

    2012-04-27

    During the Phase I program, CCR completed several major building blocks for a 3D large-signal inductive output tube (IOT) code using modern computer languages and programming techniques. These included a 3D, time-harmonic Helmholtz field solver with a fully functional graphical user interface (GUI), automeshing and adaptivity. Other building blocks included an improved electrostatic Poisson solver with temporal boundary conditions to provide temporal fields for the time-stepping particle pusher, as well as the self electric field caused by time-varying space charge. The magnetostatic field solver was also updated to solve for the self magnetic field caused by the time-changing current density in the output cavity gap. The goal function to optimize an IOT cavity was also formulated, and the optimization methodologies were investigated.
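
    The time-stepping particle pusher mentioned above can be sketched in a few lines: a kick-drift update of an electron's velocity and position driven by a sampled gap field. The sinusoidal field stands in for the Helmholtz/Poisson solver output, and all values are illustrative assumptions, not CCR's code.

```python
import math

# Kick-drift pusher for one non-relativistic electron in a sinusoidal cavity
# gap field. A real IOT code would interpolate solver fields in space and
# time, push many macro-particles, and include space-charge forces.
Q_M = -1.759e11          # electron charge-to-mass ratio (C/kg)
DT = 1e-12               # push time step (s), assumed
E0, FREQ = 1e5, 700e6    # gap field amplitude (V/m), drive frequency (Hz), assumed

z, v = 0.0, 1.0e7        # initial position (m) and velocity (m/s)
for n in range(1000):
    e_field = E0 * math.sin(2 * math.pi * FREQ * n * DT)
    v += Q_M * e_field * DT      # kick: accelerate in the sampled field
    z += v * DT                  # drift: advance the position
print(f"after 1 ns: z = {z * 1e3:.3f} mm, v = {v:.3e} m/s")
```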
