Adaptive time steps in trajectory surface hopping simulations.
Spörkel, Lasse; Thiel, Walter
2016-05-21
Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling. PMID:27208937
Automatic multirate methods for ordinary differential equations. [Adaptive time steps
Gear, C.W.
1980-01-01
A study is made of the application of integration methods in which different step sizes are used for different members of a system of equations. Such methods can result in savings if the cost of derivative evaluation is high or if the system is sparse; however, the estimation and control of errors are very difficult and can lead to high overheads. Three approaches are discussed, and it is shown that the least intuitive is the most promising.
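The multirate idea can be shown with a minimal sketch (the two-variable toy system, the function name, and all parameter values below are illustrative assumptions, not taken from Gear's paper): the fast component takes several small substeps for every single large step of the slow component.

```python
def multirate_euler(y_fast, y_slow, t_end, dt, m):
    """Forward-Euler multirate integration of a toy two-variable system
    (fast: y_fast' = -50*y_fast; slow: y_slow' = -y_slow + y_fast).
    The fast component takes m substeps of size dt/m for every single
    large step dt of the slow component."""
    t = 0.0
    while t < t_end - 1e-12:
        # slow component: one large step, using the current fast value
        y_slow_new = y_slow + dt * (-y_slow + y_fast)
        # fast component: m small substeps, keeping it stable even when
        # dt alone would violate the forward-Euler stability limit
        h = dt / m
        for _ in range(m):
            y_fast = y_fast + h * (-50.0 * y_fast)
        y_slow = y_slow_new
        t += dt
    return y_fast, y_slow
```

With dt = 0.1 the fast equation alone would be unstable under forward Euler; ten substeps restore stability while the slow equation still pays only one evaluation per large step.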
An adaptive time-stepping strategy for solving the phase field crystal model
Zhang, Zhengru; Ma, Yuan; Qiao, Zhonghua
2013-09-15
In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. Numerical simulation of the PFC model requires a long time to reach steady state, so a large-time-stepping method is necessary. Unconditionally energy-stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can not only resolve the steady-state solution but also capture the dynamical development of the solution efficiently and accurately. The numerical experiments demonstrate that CPU time is significantly reduced for long-time simulations.
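A hedged sketch of this kind of energy-based step control, applied to a toy gradient flow rather than the actual PFC equations (the function name and the specific rule dt = dt_max / sqrt(1 + alpha*|dE/dt|^2), with a dt_min floor, are illustrative assumptions in the spirit of the abstract):

```python
import math

def adaptive_energy_steps(y0, t_end, dt_min=1e-3, dt_max=0.5, alpha=100.0):
    """Forward-Euler integration of the toy gradient flow y' = -E'(y),
    E(y) = y**2/2. The step shrinks while the energy changes quickly and
    relaxes toward dt_max as the solution approaches steady state."""
    y, t, steps = y0, 0.0, 0
    while t < t_end - 1e-12:
        dEdt = -y * y                       # dE/dt along the gradient flow
        dt = max(dt_min, dt_max / math.sqrt(1.0 + alpha * dEdt * dEdt))
        dt = min(dt, t_end - t)             # do not overshoot t_end
        y += dt * (-y)
        t += dt
        steps += 1
    return y, steps
```

Early in the transient the energy changes quickly and the step stays small; near steady state the step relaxes toward dt_max, so far fewer steps are needed than a uniform fine step would take.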
An Adaptive Fourier Filter for Relaxing Time Stepping Constraints for Explicit Solvers
Gelb, Anne; Archibald, Richard K
2015-01-01
Filtering is necessary to stabilize piecewise smooth solutions. The resulting diffusion stabilizes the method, but may fail to resolve the solution near discontinuities. Moreover, high order filtering still requires cost prohibitive time stepping. This paper introduces an adaptive filter that controls spurious modes of the solution, but is not unnecessarily diffusive. Consequently we are able to stabilize the solution with larger time steps, but also take advantage of the accuracy of a high order filter.
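The kind of spectral filter being tuned here can be sketched minimally (the exponential filter form and parameter values below are standard textbook choices, not the paper's adaptive filter itself):

```python
import math

def exp_filter(coeffs, alpha=36.0, p=8):
    """Exponential spectral filter sigma(eta) = exp(-alpha * eta**(2*p))
    applied to spectral coefficients ordered by mode number: low modes
    pass nearly untouched, the highest mode is damped to roundoff level.
    Lowering the order p makes the filter stronger (more diffusive)."""
    n = len(coeffs) - 1
    return [c * math.exp(-alpha * (k / n) ** (2 * p)) for k, c in enumerate(coeffs)]
```

An adaptive filter in this spirit would vary p (or alpha) locally, filtering hard only where spurious modes appear near discontinuities and staying high-order elsewhere.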
NASA Astrophysics Data System (ADS)
Hirthe, E. M.; Graf, T.
2012-04-01
Fluid density variations occur due to changes in the solute concentration, temperature and pressure of groundwater. Examples are interaction between freshwater and seawater, radioactive waste disposal, groundwater contamination, and geothermal energy production. The physical coupling between flow and transport introduces non-linearity in the governing mathematical equations, such that solving variable-density flow problems typically requires very long computational time. Computational efficiency can be attained through the use of adaptive time-stepping schemes. The aim of this work is therefore to apply a non-iterative adaptive time-stepping scheme based on local truncation error in variable-density flow problems. That new scheme is implemented into the code of the HydroGeoSphere model (Therrien et al., 2011). The new time-stepping scheme is applied to the Elder (1967) and the Shikaze et al. (1998) problem of free convection in porous and fractured-porous media, respectively. Numerical simulations demonstrate that non-iterative time-stepping based on local truncation error control fully automates the time step size and efficiently limits the temporal discretization error to the user-defined tolerance. Results of the Elder problem show that the new time-stepping scheme presented here is significantly more efficient than uniform time-stepping when high accuracy is required. Results of the Shikaze problem reveal that the new scheme is considerably faster than conventional time-stepping where time step sizes are either constant or controlled by absolute head/concentration changes. Future research will focus on the application of the new time-stepping scheme to variable-density flow in complex real-world fractured-porous rock.
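A simplified sketch of non-iterative step control from a local truncation error estimate, in the spirit of the scheme described above (the Euler/trapezoid predictor-corrector pair and the square-root step update are the generic second-order recipe; all names are illustrative):

```python
def lte_adaptive(f, y0, t_end, dt0=0.1, tol=1e-4, safety=0.9):
    """Non-iterative adaptive stepping from a local truncation error (LTE)
    estimate: a forward-Euler predictor plus a trapezoidal corrector give
    a second-order result and a free error estimate; the NEXT step size is
    scaled by (tol/err)**0.5 and the current step is never re-solved."""
    t, y, dt = 0.0, y0, dt0
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        k1 = f(t, y)
        y_pred = y + dt * k1                  # first-order predictor
        k2 = f(t + dt, y_pred)
        y_corr = y + 0.5 * dt * (k1 + k2)     # second-order corrector
        err = abs(y_corr - y_pred)            # ~ local truncation error
        y, t = y_corr, t + dt
        dt = dt * safety * (tol / max(err, 1e-15)) ** 0.5
    return y
```

Because no step is repeated, the cost per step is fixed; the error control is purely feed-forward via the next step size, which is exactly what makes the scheme "non-iterative."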
Adaptive time stepping algorithm for Lagrangian transport models: Theory and idealised test cases
NASA Astrophysics Data System (ADS)
Shah, Syed Hyder Ali Muttaqi; Heemink, Arnold Willem; Gräwe, Ulf; Deleersnijder, Eric
2013-08-01
Random walk simulations have an excellent potential in marine and oceanic modelling. This is essentially due to their relative simplicity and their ability to represent advective transport without being plagued by the deficiencies of Eulerian methods. The physical and mathematical foundations of random walk modelling of turbulent diffusion have become solid over the years. Random walk models rest on the theory of stochastic differential equations. Unfortunately, the latter and the related numerical aspects have not attracted much attention in the oceanic modelling community. The main goal of this paper is to help bridge the gap by developing an efficient adaptive time stepping algorithm for random walk models. Its performance is examined on two idealised test cases of turbulent dispersion: (i) pycnocline crossing and (ii) non-flat isopycnal diffusion, which are inspired by shallow-sea dynamics and large-scale ocean transport processes, respectively. The numerical results of the adaptive time stepping algorithm are compared with the fixed-time-increment Milstein scheme, showing that the adaptive time stepping algorithm for Lagrangian random walk models is more efficient than its fixed-step-size counterpart without any loss in accuracy.
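A hedged sketch of an adaptive-step random walk with a pycnocline-like diffusivity profile (the diffusivity function, the step-size heuristic based on D/|D'|, and all names are illustrative assumptions, not the paper's algorithm):

```python
import math, random

def adaptive_random_walk(x0, t_end, rng, dt_max=1e-2, dt_min=1e-5):
    """1-D vertical random walk dX = D'(X) dt + sqrt(2 D(X)) dW with a
    pycnocline-like diffusivity minimum at mid-depth. The step size is
    reduced where D varies rapidly relative to its magnitude, so the
    drift-correction term D'(X) stays resolved; elsewhere dt_max is used."""
    def D(x):                       # smooth diffusivity, minimum at x = 0.5
        return 1e-3 + 1e-2 * (x - 0.5) ** 2
    def dD(x):
        return 2e-2 * (x - 0.5)
    t, x = 0.0, x0
    while t < t_end - 1e-12:
        dt = max(dt_min, min(dt_max, 1e-2 * D(x) / (abs(dD(x)) + 1e-12)))
        dt = min(dt, t_end - t)
        x += dD(x) * dt + math.sqrt(2.0 * D(x) * dt) * rng.gauss(0.0, 1.0)
        x = min(max(x, 0.0), 1.0)   # clamp the particle to the water column
        t += dt
    return x
```

Without the drift term D'(X) dt, particles artificially accumulate where diffusivity is low; resolving that term is precisely where a fixed large step fails and an adaptive step helps.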
Exponential time-differencing with embedded Runge–Kutta adaptive step control
Whalen, P.; Brio, M.; Moloney, J.V.
2015-01-01
We have presented the first embedded Runge–Kutta exponential time-differencing (RKETD) methods of fourth order with third order embedding and fifth order with third order embedding for non-Rosenbrock type nonlinear systems. A procedure for constructing RKETD methods that accounts for both order conditions and stability is outlined. In our stability analysis, the fast time scale is represented by a full linear operator in contrast to particular scalar cases considered before. An effective time-stepping strategy based on reducing both ETD function evaluations and rejected steps is described. Comparisons of performance with adaptive-stepping integrating factor (IF) are carried out on a set of canonical partial differential equations: the shock-fronts of Burgers equation, interacting KdV solitons, KS controlled chaos, and critical collapse of two-dimensional NLS.
Multi time-step wavefront reconstruction for tomographic adaptive-optics systems.
Ono, Yoshito H; Akiyama, Masayuki; Oya, Shin; Lardiére, Olivier; Andersen, David R; Correia, Carlos; Jackson, Kate; Bradley, Colin
2016-04-01
In tomographic adaptive-optics (AO) systems, errors due to tomographic wavefront reconstruction limit the performance and angular size of the scientific field of view (FoV), where AO correction is effective. We propose a multi time-step tomographic wavefront reconstruction method to reduce the tomographic error by using measurements from both the current and previous time steps simultaneously. We further outline the method to feed the reconstructor with both wind speed and direction of each turbulence layer. An end-to-end numerical simulation, assuming a multi-object AO (MOAO) system on a 30 m aperture telescope, shows that the multi time-step reconstruction increases the Strehl ratio (SR) over a scientific FoV of 10 arc min in diameter by a factor of 1.5-1.8 when compared to the classical tomographic reconstructor, depending on the guide star asterism and with perfect knowledge of wind speeds and directions. We also evaluate the multi time-step reconstruction method and the wind estimation method on the RAVEN demonstrator under laboratory conditions. The wind speeds and directions at multiple atmospheric layers are measured successfully in the laboratory experiment by our wind estimation method with errors below 2 ms^{-1}. With these wind estimates, the multi time-step reconstructor increases the SR value by a factor of 1.2-1.5, which is consistent with a prediction from the end-to-end numerical simulation. PMID:27140785
Tremblay, Jean Christophe; Carrington, Tucker Jr.
2004-12-15
If the Hamiltonian is time dependent, it is common to solve the time-dependent Schrödinger equation by dividing the propagation interval into slices and using an approximate matrix exponential (e.g., split operator, Chebyshev, Lanczos) within each slice. We show that a preconditioned adaptive-step-size Runge-Kutta method can be much more efficient. For a chirped laser pulse designed to favor the dissociation of HF, the preconditioned adaptive-step-size Runge-Kutta method is about an order of magnitude more efficient than the time-sliced method.
Numerical simulation of diffusion MRI signals using an adaptive time-stepping method.
Li, Jing-Rebecca; Calhoun, Donna; Poupon, Cyril; Le Bihan, Denis
2014-01-20
The effect on the MRI signal of water diffusion in biological tissues in the presence of applied magnetic field gradient pulses can be modelled by a multiple compartment Bloch-Torrey partial differential equation. We present a method for the numerical solution of this equation by coupling a standard Cartesian spatial discretization with an adaptive time discretization. The time discretization is done using the explicit Runge-Kutta-Chebyshev method, which is more efficient than the forward Euler time discretization for diffusive-type problems. We use this approach to simulate the diffusion MRI signal from the extra-cylindrical compartment in a tissue model of the brain gray matter consisting of cylindrical and spherical cells and illustrate the effect of cell membrane permeability. PMID:24351275
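The appeal of Runge-Kutta-Chebyshev (RKC) methods for diffusive problems can be sketched with the classical first-order, undamped s-stage RKC step (a textbook construction, shown here as an illustration; it is not the authors' production solver, which uses the damped second-order family):

```python
def rkc1_step(f, u, dt, s):
    """One step of the s-stage first-order (undamped) Runge-Kutta-Chebyshev
    method. The internal stages follow the Chebyshev three-term recurrence,
    stretching the real stability interval to about 2*s**2 (versus 2 for
    forward Euler) at the cost of s right-hand-side evaluations."""
    w1 = 1.0 / (s * s)                     # T_s(w0)/T_s'(w0) at w0 = 1
    y_prev2 = u
    y_prev = u + w1 * dt * f(u)            # first internal stage
    for _ in range(2, s + 1):
        y = 2.0 * y_prev - y_prev2 + 2.0 * w1 * dt * f(y_prev)
        y_prev2, y_prev = y_prev, y
    return y_prev
```

The quadratic growth of the stability interval with the stage count s is what makes RKC cheaper than forward Euler for stiff diffusive operators: doubling s quadruples the admissible time step while only doubling the work.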
NASA Astrophysics Data System (ADS)
Hirthe, Eugenia M.; Graf, Thomas
2012-12-01
The automatic non-iterative second-order time-stepping scheme based on the temporal truncation error proposed by Kavetski et al. [Kavetski D, Binning P, Sloan SW. Non-iterative time-stepping schemes with adaptive truncation error control for the solution of Richards equation. Water Resour Res 2002;38(10):1211, http://dx.doi.org/10.1029/2001WR000720.] is implemented into the code of the HydroGeoSphere model. This time-stepping scheme is applied for the first time to the low-Rayleigh-number thermal Elder problem of free convection in porous media [van Reeuwijk M, Mathias SA, Simmons CT, Ward JD. Insights from a pseudospectral approach to the Elder problem. Water Resour Res 2009;45:W04416, http://dx.doi.org/10.1029/2008WR007421.], and to the solutal [Shikaze SG, Sudicky EA, Schwartz FW. Density-dependent solute transport in discretely-fractured geological media: is prediction possible? J Contam Hydrol 1998;34:273-91] problem of free convection in fractured-porous media. Numerical simulations demonstrate that the proposed scheme efficiently limits the temporal truncation error to a user-defined tolerance by controlling the time-step size. The non-iterative second-order time-stepping scheme can be applied to (i) thermal and solutal variable-density flow problems, (ii) linear and non-linear density functions, and (iii) problems including porous and fractured-porous media.
NASA Astrophysics Data System (ADS)
MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.
2015-09-01
Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives, in which all previous time points contribute to the current iteration. In general, numerical approaches that truncate part of the system history, while efficient, can suffer from large errors and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy but with fewer points actually calculated, greatly improving computational efficiency.
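A sketch of the Grünwald-Letnikov (GL) sum with a simple adaptive memory: the grouping rule below (keep every recent point, then sample older history with a stride, each sample carrying its group's summed weight) is one plain reading of the idea, not the authors' exact scheme, and all names are illustrative.

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov binomial weights via the standard recurrence
    w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1)/k)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_deriv_full(f, t, alpha, h):
    """Full-history GL fractional derivative: every past point is used."""
    n = int(round(t / h))
    w = gl_weights(alpha, n)
    return sum(w[k] * f(t - k * h) for k in range(n + 1)) / h ** alpha

def gl_deriv_adaptive(f, t, alpha, h, recent=100, stride=10):
    """Adaptive-memory variant: the most recent points are all used; older
    history is sampled every `stride` points, each sample carrying the
    summed weight of its group (valid when f is smooth back there)."""
    n = int(round(t / h))
    w = gl_weights(alpha, n)
    total = sum(w[k] * f(t - k * h) for k in range(min(recent, n) + 1))
    k = recent + 1
    while k <= n:
        group = range(k, min(k + stride, n + 1))
        wsum = sum(w[j] for j in group)
        mid = group[len(group) // 2]       # group representative point
        total += wsum * f(t - mid * h)
        k += stride
    return total / h ** alpha
```

Because the GL weights decay smoothly (roughly like k to the power -(1+alpha)), distant history can be coarsened this way with little loss while far fewer function values are touched.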
The ESO adaptive optics real-time computer platform: a step toward the future
NASA Astrophysics Data System (ADS)
Fedrigo, Enrico; Donaldson, Robert; Soenke, Christian; Hubin, Norbert N.
2004-10-01
ESO now operates several AO systems at the Paranal observatory. Most of them are the outcome of different and independent efforts, resulting in incompatible systems with all the attendant problems of maintenance and evolution. At the same time, industry now offers powerful embedded computers and new standard technologies that enable the construction of massive real-time parallel computers, with a technology roadmap that looks extremely promising. The ESO AO Platform initiative aims to take this unique opportunity, gathering all the experience accumulated so far in building and operating AO systems and the recent advances offered by industry, to define and build a standard hardware and software platform able to run every near-future AO system of the VLT, with an eye towards OWL. We review the key technologies that enable the design of a common AO-RTC and discuss the main choices of the AO Platform initiative.
NASA Astrophysics Data System (ADS)
Zhou, Di; Zhang, Yong-An; Duan, Guang-Ren
The two-step filter has been combined with a modified Sage-Husa time-varying measurement-noise statistical estimator, which estimates the covariance of the measurement noise on line, to generate an adaptive two-step filter. In many practical applications, such as bearings-only guidance, some model parameters and the process noise covariance are also unknown a priori. Based on the adaptive two-step filter, we utilize multiple models in the first-step filtering as well as in the time update of the second-step filtering to handle the uncertainties of model parameters and process noise covariance. In each time step of the multiple-model filtering, probabilistic weights for the first-step state estimates from the different models, and their associated covariance matrices, are computed according to Bayes’ rule. The weighted sum of the first-step state estimates and that of the associated covariance matrices are taken as the ultimate estimate and covariance of the first-step state, and are used as measurement information for the measurement update of the second-step state. Thus there is still only one iteration process and no appreciable increase in computational burden. A motion-tracking sliding-mode guidance law is presented for missiles with non-negligible delays in actual acceleration. This guidance law guarantees guidance accuracy and is able to enhance observability in bearings-only tracking. In bearings-only cases, the multiple-model adaptive two-step filter is applied to the motion-tracking sliding-mode guidance law, supplying relative range, relative velocity, and target acceleration information. In simulation experiments satisfactory filtering and guidance results are obtained, even when the filter encounters unknown target maneuvers and unknown time-varying measurement noise covariance, and the guidance law has to deal with a large time lag in acceleration.
Toggweiler, Matthias; Adelmann, Andreas; Arbenz, Peter; Yang, Jianjun
2014-09-15
We show that adaptive time stepping in particle accelerator simulation is an enhancement for certain problems. The new algorithm has been implemented in the OPAL (Object Oriented Parallel Accelerator Library) framework. The idea is to adjust the frequency of costly self-field calculations, which are needed to model Coulomb interaction (space charge) effects. In analogy to a Kepler orbit simulation that requires a higher time step resolution at the close encounter, we propose to choose the time step based on the magnitude of the space charge forces. Inspired by geometric integration techniques, our algorithm chooses the time step proportional to a function of the current phase space state, instead of calculating a local error estimate like a conventional adaptive procedure. Building on recent work, a more profound argument is given on how exactly the time step should be chosen. An intermediate algorithm, initially built to allow a clearer analysis by introducing separate time steps for external-field and self-field integration, turned out to be useful on its own for a large class of problems.
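The Kepler analogy invites a tiny sketch: a velocity-Verlet integrator whose step size is a function of the current phase-space state through the force magnitude (the exponent 0.75 and the tolerance eps below are illustrative choices, not OPAL's actual rule):

```python
import math

def kepler_adaptive(t_end, eps=1e-2):
    """2-D Kepler orbit (GM = 1) advanced with velocity-Verlet steps whose
    size depends on the current phase-space state via the force magnitude,
    dt = eps / |F|**0.75, so close encounters are resolved more finely."""
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0      # circular orbit of radius 1
    def acc(x, y):
        r3 = (x * x + y * y) ** 1.5
        return -x / r3, -y / r3
    t = 0.0
    ax, ay = acc(x, y)
    while t < t_end - 1e-12:
        dt = min(eps / math.hypot(ax, ay) ** 0.75, t_end - t)
        x += vx * dt + 0.5 * ax * dt * dt
        y += vy * dt + 0.5 * ay * dt * dt
        ax_new, ay_new = acc(x, y)
        vx += 0.5 * (ax + ax_new) * dt
        vy += 0.5 * (ay + ay_new) * dt
        ax, ay = ax_new, ay_new
        t += dt
    return x, y
```

On an eccentric orbit this rule concentrates steps near perihelion, which is the analogue of triggering more frequent self-field evaluations where space-charge forces are strong.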
Finite-difference modeling with variable grid-size and adaptive time-step in porous media
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yin, Xingyao; Wu, Guochen
2014-04-01
Forward modeling of elastic wave propagation in porous media has great importance for understanding and interpreting the influence of rock properties on characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid-size and time-step; it incurs a high computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist underground. To overcome this handicap, this paper develops a staggered-grid finite-difference scheme for elastic wave modeling in porous media that combines variable grid-size and time-step. Variable finite-difference coefficients and wavefield interpolation are used to realize the transition of wave propagation between regions of different grid-size. The accuracy and efficiency of the algorithm are shown by numerical examples. The proposed method achieves low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.
Continuous-time adaptive critics.
Hanselmann, Thomas; Noakes, Lyle; Zaknich, Anthony
2007-05-01
A continuous-time formulation of an adaptive critic design (ACD) is investigated. Connections to the discrete case are made, where backpropagation through time (BPTT) and real-time recurrent learning (RTRL) are prevalent. Practical benefits are that this framework fits in well with plant descriptions given by differential equations and that any standard integration routine with adaptive step-size does an adaptive sampling for free. A second-order actor adaptation using Newton's method is established for fast actor convergence for a general plant and critic. Also, a fast critic update for concurrent actor-critic training is introduced to immediately apply necessary adjustments of critic parameters induced by actor updates to keep the Bellman optimality correct to first-order approximation after actor changes. Thus, critic and actor updates may be performed at the same time until some substantial error build up in the Bellman optimality or temporal difference equation, when a traditional critic training needs to be performed and then another interval of concurrent actor-critic training may resume. PMID:17526332
Grief: Difficult Times, Simple Steps.
ERIC Educational Resources Information Center
Waszak, Emily Lane
This guide presents techniques to assist others in coping with the loss of a loved one. Using the language of a layperson, the book contains more than 100 tips for caregivers or loved ones. A simple step is presented on each page, followed by reasons and instructions for each step. Chapters include: "What to Say"; "Helpful Things to Do"; "Dealing…
Secondary tasks impair adaptation to step and gradual visual displacements
Galea, J.M.; Sami, S.; Albert, N.B.; Miall, R.C.
2016-01-01
Performing two competing tasks can result in dividing cognitive resources between the tasks and impaired motor adaptation. In previous work we have reported impaired learning when participants had to switch from one visual displacement adaptation task to another. Here we examined whether or not a secondary task had a similar effect on adaptation to a visual displacement. The resource-dividing task involved simultaneously adapting to a step visual displacement whilst vocally shadowing an auditory stimulus. The switching task required participants to adapt to opposing visual displacements in an alternating manner with the left and right hands. We found that both manipulations had a detrimental effect on adaptation rate. We then integrated these tasks and found the combination caused a greater decrease in adaptation rate than either manipulation in isolation. Experiment 2 showed that adaptation to a gradually imposed visual displacement was influenced in a similar manner to step adaptation. Therefore, although gradual adaptation involves minimal awareness, it can still be disrupted by a cognitively demanding secondary task. We propose that awareness and cognitive resource can be regarded as qualitatively different, but that awareness may be a marker of the amount of resource required. For example, large errors are both noticed and require substantial cognitive resource to correct. However, a lack of awareness does not mean an adaptation task will be resistant to interference from a resource-consuming secondary task. PMID:20101396
NASA Astrophysics Data System (ADS)
Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken
2015-04-01
We investigate the operational utility of fine time step hydro-climatic information using a large catchment data set. The originality of this data set lies in the availability of precipitation data from the 6-minute rain gauges of Météo-France, and in the size of the catchment set (217 French catchments in total). The rainfall-runoff model used (GR4) has been adapted to hourly and sub-hourly time steps (up to 6-minute) from the daily time step version (Perrin et al., 2003). The model is applied at different time steps ranging from 6-minute to 1 day (6-, 12-, 30-minute, 1-, 3-, 6-, 12-hour and 1 day) and the evolution of model performance for each catchment is evaluated at the daily time step by aggregation of model outputs. Three classes of behavior are found according to the trend of model performance as the time step becomes finer: (i) catchments presenting an improvement of model performance; (ii) catchments with a model performance insensitive to the time step; (iii) catchments for which the performance even deteriorates as the time step becomes finer. The reasons behind these different trends are investigated from a hydrological point of view, by relating the model sensitivity to data at finer time step to catchment descriptors. References: Perrin, C., C. Michel and V. Andréassian (2003), "Improvement of a parsimonious model for streamflow simulation", Journal of Hydrology, 279(1-4): 275-289.
Simulating system dynamics with arbitrary time step
NASA Astrophysics Data System (ADS)
Kantorovich, L.
2007-02-01
We suggest a dynamic simulation method that allows efficient and realistic modeling of kinetic processes, such as atomic diffusion, in which time has its actual meaning. Our method is similar in spirit to widely used kinetic Monte Carlo (KMC) techniques; however, in our approach, the time step can be chosen arbitrarily. This has an advantage in some cases, e.g., when the transition rates change sufficiently fast over the period of the KMC time step (e.g., due to time dependence of some external factors influencing kinetics, such as a moving scanning probe microscopy tip or an external time-dependent field), or when the clock time is set by some external conditions and it is convenient to use equal time steps instead of the random choice of the KMC algorithm in order to build up probability distribution functions. We show that an arbitrary choice of the time step can be afforded by building up the complete list of events, including the “residence site” and multihop transitions. The idea of the method is illustrated in a simple “toy” model of a finite one-dimensional lattice of potential wells with unequal jump rates to either side, which can be studied analytically. We show that our general kinetics method reproduces exactly the solution of the corresponding master equations for any choice of the time step. The final kinetics also matches the standard KMC, and this allows better understanding of that algorithm, in which the time step is chosen in a certain way and the system always advances by a single hop.
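A hedged sketch of equal-time-step kinetics including the residence event (single hops only; the complete event list described above also includes multihop transitions, which matter when R*dt is not small, so this simplification is an assumption of the sketch):

```python
import math, random

def fixed_step_kinetics(rate_right, rate_left, t_end, dt, rng):
    """Walker on a 1-D lattice advanced with a user-chosen, equal time
    step: each tick it stays put with the residence probability
    exp(-R*dt), otherwise it hops left or right in proportion to the
    rates. Multihop events within one dt are neglected here."""
    R = rate_right + rate_left
    p_stay = math.exp(-R * dt)
    p_right = (rate_right / R) * (1.0 - p_stay)
    x = 0
    for _ in range(int(round(t_end / dt))):
        u = rng.random()
        if u < p_stay:
            continue                       # residence: no hop this tick
        x += 1 if u < p_stay + p_right else -1
    return x
```

Because every tick has the same duration, histograms of position versus clock time come out on a uniform time grid directly, which is the practical convenience the abstract points to.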
Hagler, Kylee J; Rice, Samara L; Muñoz, Rosa E; Salvador, Julie G; Forcehimes, Alyssa A; Bogenschutz, Michael P
2015-01-01
Most U.S. healthcare professionals encourage mutual-help group involvement as an adjunct to treatment or aftercare for individuals with substance use disorders, yet there are multiple challenges in engaging in these community groups. Dually diagnosed individuals (DDIs) may face additional challenges in affiliating with mutual-help groups. Twelve-step facilitation for DDIs (TSF-DD), a manualized treatment to facilitate mutual-help group involvement, was developed to help patients engage in Double Trouble in Recovery (DTR), a mutual-help group tailored to DDIs. Given the promising role that TSF-DD and DTR may have for increasing abstinence while managing psychiatric symptoms, the aim of the current study was to systematically examine reasons for TSF-DD and DTR attendance from the perspective of DDIs using focus group data. Participants were a subset (n = 15) of individuals diagnosed with an alcohol use disorder as well as a major depressive, bipolar, or psychotic disorder who participated in a parent study testing the efficacy of TSF-DD for increasing mutual-help group involvement and reducing alcohol use. Analyses of focus group data revealed that participants construed DTR and TSF-DD as helpful tools in the understanding and management of their disorders. Relative to other mutual-help groups in which participants reported feeling ostracized because of their dual diagnoses, participants reported that it was beneficial to learn about dual disorders in a safe and accepting environment. Participants also expressed aspects that they disliked. Results from this study yield helpful empirical recommendations to healthcare professionals seeking to increase DDIs' participation in DTR or other mutual-help groups. PMID:26340570
Large step structure measurement by using white light interferometry based on adaptive scanning
NASA Astrophysics Data System (ADS)
Bian, Yan; Guo, Tong; Li, Feng; Wang, Siming; Fu, Xing; Hu, Xiaotang
2013-01-01
As an important measuring technique, white light scanning interferometry can realize non-contact, fast and highly accurate measurement. However, when measuring large step structures, white light scanning interferometry suffers from long measurement times and low signal utilization. In this paper, an adaptive scanning technique is proposed to measure large step structures with improved efficiency. This technique can be realized in two ways: the pre-configuration mode and the auto-focusing mode. During the scanning process, image collection is limited to the coherence area, and in other positions the motion is speeded up. The adaptive scanning is driven by the nano-measuring machine (NMM), which reaches nanometer accuracy and is controlled by the measurement software. The testing result of a 100 μm step height shows that adaptive scanning can improve the measuring efficiency dramatically compared with conventional fixed-step scanning while keeping the same high accuracy.
Extrapolated implicit-explicit time stepping.
Constantinescu, E. M.; Sandu, A.; Mathematics and Computer Science; Virginia Polytechnic Inst. and State Univ.
2010-01-01
This paper constructs extrapolated implicit-explicit time stepping methods that allow one to efficiently solve problems with both stiff and nonstiff components. The proposed methods are based on Euler steps and can provide very high order discretizations of ODEs, index-1 DAEs, and PDEs in the method-of-lines framework. Implicit-explicit schemes based on extrapolation are simple to construct, easy to implement, and straightforward to parallelize. This work establishes the existence of perturbed asymptotic expansions of global errors, explains the convergence orders of these methods, and studies their linear stability properties. Numerical results with stiff ODE, DAE, and PDE test problems confirm the theoretical findings and illustrate the potential of these methods to solve multiphysics multiscale problems.
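A minimal sketch of the two ingredients, IMEX Euler steps plus global Richardson extrapolation (the splitting below, with a linear stiff term treated implicitly and the rest explicitly, is an illustrative toy, not the paper's general construction; all names are assumptions):

```python
import math

def imex_euler(lam, g, u0, t_end, n):
    """n IMEX-Euler steps for u' = lam*u + g(t, u): the stiff linear term
    is implicit (a single division per step), the nonstiff g is explicit."""
    dt, u, t = t_end / n, u0, 0.0
    for _ in range(n):
        u = (u + dt * g(t, u)) / (1.0 - dt * lam)
        t += dt
    return u

def imex_extrapolated(lam, g, u0, t_end, n):
    """Global Richardson extrapolation of IMEX Euler: combining the n-step
    and 2n-step solutions cancels the leading O(dt) error term."""
    coarse = imex_euler(lam, g, u0, t_end, n)
    fine = imex_euler(lam, g, u0, t_end, 2 * n)
    return 2.0 * fine - coarse
```

Higher-order variants combine more step-size sequences in the same way; the appeal noted in the abstract is that each sequence is just a run of simple Euler steps, so the sweeps parallelize trivially.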
Sensory adaptation for timing perception
Roseboom, Warrick; Linares, Daniel; Nishida, Shin'ya
2015-01-01
Recent sensory experience modifies subjective timing perception. For example, when visual events repeatedly lead auditory events, such as when the sound and video tracks of a movie are out of sync, subsequent vision-leads-audio presentations are reported as more simultaneous. This phenomenon could provide insights into the fundamental problem of how timing is represented in the brain, but the underlying mechanisms are poorly understood. Here, we show that the effect of recent experience on timing perception is not just subjective; recent sensory experience also modifies relative timing discrimination. This result indicates that recent sensory history alters the encoding of relative timing in sensory areas, excluding explanations of the subjective phenomenon based only on decision-level changes. The pattern of changes in timing discrimination suggests the existence of two sensory components, similar to those previously reported for visual spatial attributes: a lateral shift in the nonlinear transducer that maps relative timing into perceptual relative timing and an increase in transducer slope around the exposed timing. The existence of these components would suggest that previous explanations of how recent experience may change the sensory encoding of timing, such as changes in sensory latencies or simple implementations of neural population codes, cannot account for the effect of sensory adaptation on timing perception. PMID:25788590
Simulating stochastic dynamics using large time steps.
Corradini, O; Faccioli, P; Orland, H
2009-12-01
We present an approach to investigate the long-time stochastic dynamics of multidimensional classical systems in contact with a heat bath. When the potential energy landscape is rugged, the kinetics displays a decoupling of short and long time scales, and both molecular dynamics and Monte Carlo (MC) simulations are generally inefficient. Using a field-theoretic approach, we perform analytically the average over the short-time stochastic fluctuations. In this way, we obtain an effective theory which generates the same long-time dynamics as the original theory but has a lower time-resolution power. This approach is used to develop an improved version of the MC algorithm, which is particularly suitable for investigating the dynamics of rare conformational transitions. In the specific case of molecular systems at room temperature, we show that the elementary integration time steps used to simulate the effective theory can be chosen a factor of approximately 100 larger than those used in the original theory. Our results are illustrated and tested on a simple system characterized by a rugged energy landscape. PMID:20365123
Projection Operator: A Step Towards Certification of Adaptive Controllers
NASA Technical Reports Server (NTRS)
Larchev, Gregory V.; Campbell, Stefan F.; Kaneshige, John T.
2010-01-01
One of the major barriers to wider use of adaptive controllers in commercial aviation is the lack of appropriate certification procedures. In order to be certified by the Federal Aviation Administration (FAA), an aircraft controller is expected to meet a set of guidelines on functionality and reliability while not negatively impacting other systems or the safety of aircraft operations. Due to their inherently time-variant and non-linear behavior, adaptive controllers cannot be certified via the metrics used for conventional linear controllers, such as gain and phase margin. The Projection Operator is a robustness augmentation technique that bounds the output of a non-linear adaptive controller while conforming to the Lyapunov stability rules. It can also be used to limit the control authority of the adaptive component so that the said control authority can be arbitrarily close to that of a linear controller. In this paper we present the results of applying the Projection Operator to a Model-Reference Adaptive Controller (MRAC), varying the amount of control authority, and comparing the controller's performance and stability characteristics with those of a linear controller. We also show how adjusting Projection Operator parameters can make it easier for the controller to satisfy the certification guidelines by enabling a tradeoff between the controller's performance and robustness.
Empirical versus time stepping with embedded error control for density-driven flow in porous media
NASA Astrophysics Data System (ADS)
Younes, Anis; Ackerer, Philippe
2010-08-01
Modeling density-driven flow in porous media may require very long computational times due to the nonlinear coupling between the flow and transport equations. Time stepping schemes are often used to adapt the time step size in order to reduce the computational cost of the simulation. In this work, the empirical time stepping scheme, which adapts the time step size according to the performance of the iterative nonlinear solver, is compared to an adaptive time stepping scheme where the time step length is controlled by the temporal truncation error. Results of simulations of the Elder problem show that (1) the empirical time stepping scheme can lead to inaccurate results even with a small convergence criterion, (2) accurate results are obtained when the time step size selection is based on truncation error control, (3) a noniterative scheme with proper time step management can be faster and lead to a more accurate solution than the standard iterative procedure with empirical time stepping, and (4) the temporal truncation error can have a significant effect on the results and can be considered one of the reasons for the differences observed in the Elder numerical results.
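As a toy illustration of truncation-error-based step control (not the Elder-problem solver of the paper), the following sketch adapts the step size of an explicit Euler integrator by comparing one full step against two half steps; the test equation, tolerances, and growth/shrink factors are illustrative assumptions.

```python
import math

def adaptive_euler(f, y0, t_end, dt0=0.1, tol=1e-4):
    """Explicit Euler with step-doubling error control: the difference
    between one full step and two half steps estimates the local
    truncation error and drives the step-size adaptation."""
    t, y, dt = 0.0, y0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        y_full = y + dt * f(t, y)
        y_half = y + 0.5 * dt * f(t, y)
        y_two = y_half + 0.5 * dt * f(t + 0.5 * dt, y_half)
        err = abs(y_two - y_full)
        if err <= tol:
            t += dt
            y = 2.0 * y_two - y_full      # locally extrapolated update
            if err < 0.25 * tol:
                dt *= 1.5                 # error comfortably small: grow
        else:
            dt *= 0.5                     # reject the step: shrink
    return y

y = adaptive_euler(lambda t, y: -y, 1.0, 1.0)
assert abs(y - math.exp(-1.0)) < 1e-3
```

The empirical alternative the paper criticizes would instead adjust `dt` based on how many nonlinear-solver iterations the previous step needed, with no direct link to the temporal truncation error.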
Telepresence, time delay, and adaptation
NASA Technical Reports Server (NTRS)
Held, Richard; Durlach, Nathaniel
1989-01-01
Displays are now used extensively throughout society. More and more time is spent watching television, movies, computer screens, etc. Furthermore, in an increasing number of cases, the observer interacts with the display and plays the role of operator as well as observer. To a large extent, normal behavior in the normal environment can also be thought of in these same terms. Taking liberties with Shakespeare, it might be said that all the world's a display and all the individuals in it are operators in and on the display. Within this general context of interactive display systems, a discussion is begun with a conceptual overview of a particular class of such systems, namely, teleoperator systems. The notion of telepresence is considered, along with the factors that limit telepresence, including decorrelations between (1) the motor output of the teleoperator as sensed directly via the kinesthetic/tactual system, and (2) the motor output of the teleoperator as sensed indirectly via feedback from the slave robot, i.e., via a visual display of the motor actions of the slave robot. Finally, the deleterious effect of time delay (a particular decorrelation) on sensory-motor adaptation (an important phenomenon related to telepresence) is examined.
Time to pause before the next step
Siemon, R.E.
1998-12-31
Many scientists, who have staunchly supported ITER for years, are coming to realize it is time to further rethink fusion energy's development strategy. Specifically, as was suggested by Grant Logan and Dale Meade, and in keeping with the restructuring of 1996, a theme of better, cheaper, faster fusion would serve the program more effectively than "demonstrating controlled ignition...and integrated testing of the high-heat-flux and nuclear components required to utilize fusion energy...", which are the important ingredients of ITER's objectives. The author has personally shifted his view for a mixture of technical and political reasons. On the technical side, he senses that through advanced tokamak research, spherical tokamak research, and advanced stellarator work, scientists are coming to a new understanding that might make a burning-plasma device significantly smaller and less expensive. Thus waiting for a few years, even ten years, seems prudent. Scientifically, there is fascinating physics to be learned through studies of burning plasma on a tokamak. And clearly if one wishes to study burning plasma physics in a sustained plasma, there is no other configuration with an adequate database on which to proceed. But what is the urgency of moving towards an ITER-like step focused on burning plasma? Some of the arguments put forward, and the counterarguments, are discussed here.
Seven Steps to On-Time Delivery.
ERIC Educational Resources Information Center
Konchar, Mark; Sanvido, Victor
1999-01-01
Describes seven steps to consider when making project-delivery decisions that include defining the school district's goals and profile, selecting the project-delivery system and procurement method, selecting the project team and contract type, and developing and confirming the facility program. Concluding comments address the district review of…
Collocation and Galerkin Time-Stepping Methods
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2011-01-01
We study the numerical solutions of ordinary differential equations by one-step methods where the solution at tn is known and that at t(sub n+1) is to be calculated. The approaches employed are collocation, continuous Galerkin (CG) and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t(sub n+1)), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
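A minimal numerical sketch of the Radau IIA connection, assuming the standard 2-stage Butcher tableau and a scalar linear test problem (not code from the report): the implicit stage equations are solved by simple fixed-point iteration, which suffices for a non-stiff scalar demo.

```python
import math

# 2-stage Radau IIA tableau (3rd order); the quadrature points include t_{n+1}
A = [[5/12, -1/12], [3/4, 1/4]]
b = [3/4, 1/4]
c = [1/3, 1.0]

def radau_iia_step(f, t, y, h, iters=50):
    """One implicit Runge-Kutta step, solving the stage slopes k_i
    by fixed-point iteration (adequate for this non-stiff example)."""
    k = [f(t, y), f(t, y)]
    for _ in range(iters):
        k = [f(t + c[i] * h, y + h * sum(A[i][j] * k[j] for j in range(2)))
             for i in range(2)]
    return y + h * sum(b[i] * k[i] for i in range(2))

f = lambda t, y: -y
y, h = 1.0, 0.1
for i in range(10):
    y = radau_iia_step(f, i * h, y, h)
assert abs(y - math.exp(-1.0)) < 1e-4   # third-order accurate
```

Per the report, the same update would be obtained from the CG or DG formulation with right-Radau quadrature, which is what makes the equivalence concrete.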
Schmitz, Gerd
2016-06-01
Two key features of sensorimotor adaptation are the directional selectivity of adaptive changes and the interference of adaptations to opposite directions. The present study investigated whether directional selectivity and interference of adaptation are related to executive functions and whether these phenomena differ between two methods for visuomotor adaptation. Subjects adapted at three target directions to clockwise or counterclockwise rotated feedback or to clockwise or counterclockwise target displacements (double steps). Both adaptation methods induce rotations of movement trajectories in the same direction but provide visual information differently. The results showed that adaptation progressed differently across the three targets. When movements adapted clockwise, adaptation was best at the most clockwise located target, and when movements adapted counterclockwise, it was best at the most counterclockwise located target, suggesting that spatial generalization between target directions is related to the direction of motor adaptation. The two adaptation methods produced different adaptation patterns, which indicates a further impact of visual information. A second adaptation to the other, opposite-directed discordance was worse than naive adaptation and washed out the aftereffects of the first adaptation, confirming that the two adaptation methods interfered. Executive functions were a significant covariate for overall interference and for interference of target-specific adaptation. The results suggest that directional selectivity of adaptation is shaped by the direction of motor adaptation and by the visual information provided. The interference between the two adaptation methods indicates that they share adaptive mechanisms for recalibration. Interference is lower the better subjects are able to cognitively switch between tasks and to inhibit prepotent responses. Therefore, cognitive functions seem to be involved in the inhibition of non-adequate sensorimotor
Time scaling relations for step bunches from models with step-step attractions (B1-type models)
NASA Astrophysics Data System (ADS)
Krasteva, A.; Popova, H.; Akutsu, N.; Tonchev, V.
2016-03-01
The step bunching instability is studied in three models of step motion defined in terms of ordinary differential equations (ODEs). The source of instability in these models is step-step attraction; it is opposed by step-step repulsion, and the developing surface patterns reflect the balance between the two. The first model, TE2, is a generalization of the seminal model of Tersoff et al. (1995). The second one, LW2, is obtained from the model of Liu and Weeks (1998) by using the repulsion term to construct the attraction term, with the possibility retained to change the parameters of the two independently. The third model, MM2, is a minimal one constructed ad hoc, and it plays a central role in this article. A new scheme for scaling the ODEs in vicinal studies is applied towards deciphering the pre-factors in the time-scaling relations. In all these models the patterned surface is self-similar: only one length scale is necessary to describe its evolution (hence B1-type). The bunches form finite angles with the terraces. Integrating the equations for step motion numerically and changing the parameters systematically, we obtain the overall dependence of the time-scaling exponent β on the power p of the step-step attractions as β = 1/(3+p) for MM2, and we hypothesize, based on a restricted set of data, that it is β = 1/(5+p) for LW2 and TE2.
Improving quality: one step at a time.
Blankson-seck, N; Butta, P
1999-01-01
The notion that health care workers have the power to improve the quality of their services is a key to AVSC's efforts worldwide. The COPE process, AVSC's low-cost intervention for improving quality at service sites, brings together supervisors and staff at all levels to identify barriers to quality services and helps them find solutions they can implement with their own resources. For example, a hospital in Tanzania had tried unsuccessfully to obtain the funds to repair or replace broken equipment. Using the COPE process, the hospital used available funds to send a technician for training in maintenance and repair. Now everything from blood pressure equipment to bedsprings is repaired promptly, and quality has improved. Another hospital in Tanzania coped with the problem of broken bedsprings (patients were putting mattresses on the floor) by using readily available wire mesh to make repairs. In Kenya, the lack of running water forced staff to collect water from a cistern, taking time from their other responsibilities. During a COPE meeting to resolve the problem the staff bemoaned the fact that they did not have the funds to replace the water system. Then the gardener told the group that all they needed to do was fix a broken pipe. The repair was made at minimal cost, and the water supply was restored. The COPE process reveals that health care staff not only can identify obstacles to quality, they often know the cause of the problem and can offer the best solutions. PMID:12295155
Space-time adaptive numerical methods for geophysical applications.
Castro, C E; Käser, M; Toro, E F
2009-11-28
In this paper we present high-order formulations of the finite volume and discontinuous Galerkin finite-element methods for wave propagation problems with a space-time adaptation technique using unstructured meshes in order to reduce computational cost without reducing accuracy. Both methods can be derived in a similar mathematical framework and are identical in their first-order version. In their extension to higher order accuracy in space and time, both methods use spatial polynomials of higher degree inside each element, a high-order solution of the generalized Riemann problem and a high-order time integration method based on the Taylor series expansion. The static adaptation strategy uses locally refined high-resolution meshes in areas with low wave speeds to improve the approximation quality. Furthermore, the time step length is chosen locally adaptive such that the solution is evolved explicitly in time by an optimal time step determined by a local stability criterion. After validating the numerical approach, both schemes are applied to geophysical wave propagation problems such as tsunami waves and seismic waves comparing the new approach with the classical global time-stepping technique. The problem of mesh partitioning for large-scale applications on multi-processor architectures is discussed and a new mesh partition approach is proposed and tested to further reduce computational cost. PMID:19840984
Multiple-time-stepping generalized hybrid Monte Carlo methods
NASA Astrophysics Data System (ADS)
Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved superior in sampling efficiency to its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
Time step and shadow Hamiltonian in molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Kim, Sangrak
2015-08-01
We examine the time step and the shadow Hamiltonian of symplectic algorithms for a bound system, taking a simple harmonic oscillator as a specific example. The phase-space trajectory moves on the hyperplane of a constant shadow Hamiltonian. We find a stationary condition for the time step τ_n with which the motion repeats itself in phase space with period n. Interestingly, the time steps satisfying the stationary condition turn out to be independent of the symplectic algorithm chosen. Furthermore, the phase volume enclosed by the phase trajectory is given by nτ_nẼ_n, where Ẽ_n is the initial shadow energy of the corresponding symplectic algorithm.
Obtaining Runge-Kutta Solutions Between Time Steps
NASA Technical Reports Server (NTRS)
Horn, M. K.
1984-01-01
New interpolation method used with existing Runge-Kutta algorithms. Algorithm evaluates solution at intermediate point within integration step. Only few additional computations required to produce intermediate solution data. Runge-Kutta method provides accurate solution with larger time steps than allowable in other methods.
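The flavor of such between-step solutions can be sketched with cubic Hermite interpolation over a single RK4 step, using only the endpoint values and slopes. This is a generic dense-output interpolant assumed for illustration, not the specific method of the report.

```python
import math

def rk4_step(f, t, y, h):
    """One classical RK4 step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def dense_output(y0, y1, f0, f1, h, theta):
    """Cubic Hermite interpolant at fraction theta in [0, 1] of the step,
    built from endpoint values and slopes -- no extra f evaluations."""
    return ((1 - theta) * y0 + theta * y1
            + theta * (theta - 1) * ((1 - 2 * theta) * (y1 - y0)
                                     + (theta - 1) * h * f0
                                     + theta * h * f1))

f = lambda t, y: -y
h, y0 = 0.2, 1.0
y1 = rk4_step(f, 0.0, y0, h)
# solution at mid-step, without shrinking the integration step
y_mid = dense_output(y0, y1, f(0.0, y0), f(h, y1), h, 0.5)
assert abs(y_mid - math.exp(-0.1)) < 1e-4
```

The payoff is the one stated in the abstract: intermediate solution data at a few arithmetic operations' cost, while the integrator itself keeps its large time step.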
Improving Leadership and Management Practices: One Step at a Time
ERIC Educational Resources Information Center
Bella, Jill
2008-01-01
Taking small steps toward change is a sensible way to improve the leadership and management practices in an early care and education program. A director must be able to make continuous improvements without alienating staff by asking them to make drastic changes that seem overwhelming and unachievable. Taking on change one step at a time is a way…
Automatic Time Stepping with Global Error Control for Groundwater Flow Models
Tang, Guoping
2008-09-01
An automatic time stepping with global error control is proposed for the time integration of the diffusion equation to simulate groundwater flow in confined aquifers. The scheme is based on an a posteriori error estimate for the discontinuous Galerkin (dG) finite element methods. A stability factor is involved in the error estimate and it is used to adapt the time step and control the global temporal error for the backward difference method. The stability factor can be estimated by solving a dual problem. The stability factor is not sensitive to the accuracy of the dual solution and the overhead computational cost can be minimized by solving the dual problem using large time steps. Numerical experiments are conducted to show the application and the performance of the automatic time stepping scheme. Implementation of the scheme can lead to improvement in accuracy and efficiency for groundwater flow models.
Margul, Daniel T; Tuckerman, Mark E
2016-05-10
Molecular dynamics remains one of the most widely used computational tools in the theoretical molecular sciences to sample an equilibrium ensemble distribution and/or to study the dynamical properties of a system. The efficiency of a molecular dynamics calculation is limited by the size of the time step that can be employed, which is dictated by the highest frequencies in the system. However, many properties of interest are connected to low-frequency, long time-scale phenomena, requiring many small time steps to capture. This ubiquitous problem can be ameliorated by employing multiple time-step algorithms, which assign different time steps to forces acting on different time scales. In such a scheme, fast forces are evaluated more frequently than slow forces, and as the former are often computationally much cheaper to evaluate, the savings can be significant. Standard multiple time-step approaches are limited, however, by resonance phenomena, wherein motion on the fastest time scales limits the step sizes that can be chosen for the slower time scales. In atomistic models of biomolecular systems, for example, the largest time step is typically limited to around 5 fs. Previously, we introduced an isokinetic extended phase-space algorithm (Minary et al. Phys. Rev. Lett. 2004, 93, 150201) and its stochastic analog (Leimkuhler et al. Mol. Phys. 2013, 111, 3579) that eliminate resonance phenomena through a set of kinetic energy constraints. In simulations of a fixed-charge flexible model of liquid water, for example, the time step that could be assigned to the slow forces approached 100 fs. In this paper, we develop a stochastic isokinetic algorithm for multiple time-step molecular dynamics calculations using a polarizable model based on fluctuating dipoles. The scheme developed here employs two sets of induced dipole moments, specifically, those associated with short-range interactions and those associated with a full set of interactions. The scheme is demonstrated on
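The force-splitting idea described above can be sketched, ignoring the paper's isokinetic constraints and polarizable dipoles, as a minimal r-RESPA-style integrator; the toy fast and slow forces and all parameter values below are assumptions for illustration.

```python
def respa_step(x, v, dt_outer, n_inner, f_fast, f_slow, m=1.0):
    """One r-RESPA-style step: half kick from the slow force, n_inner
    velocity-Verlet substeps under the fast force, closing half kick.
    The slow force is evaluated once per outer step."""
    v += 0.5 * dt_outer * f_slow(x) / m
    dt = dt_outer / n_inner
    for _ in range(n_inner):
        v += 0.5 * dt * f_fast(x) / m
        x += dt * v
        v += 0.5 * dt * f_fast(x) / m
    v += 0.5 * dt_outer * f_slow(x) / m
    return x, v

# assumed toy forces: stiff spring (fast) plus a weak harmonic term (slow)
f_fast = lambda x: -100.0 * x
f_slow = lambda x: -0.1 * x
energy = lambda x, v: 0.5 * v * v + 0.5 * 100.1 * x * x

x, v = 1.0, 0.0
e0 = energy(x, v)
for _ in range(1000):
    x, v = respa_step(x, v, 0.05, 10, f_fast, f_slow)
drift = abs(energy(x, v) - e0) / e0
assert drift < 0.05   # energy approximately conserved over 1000 steps
```

The resonance barrier the paper addresses appears in this plain scheme when the outer step approaches half the fast period; the isokinetic constraints are what remove that barrier and allow the ~100 fs outer steps quoted above.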
Accuracy-based time step criteria for solving parabolic equations
Mohtar, R.; Segerlind, L.
1995-12-31
Parabolic equations govern many transient engineering problems. Space integration using finite element or finite difference methods changes the parabolic partial differential equation into an ordinary differential equation. Time integration schemes are needed to solve the latter equation, and to perform that integration accurately a proper time step must be provided. Time step estimates based on a stability criterion have been prescribed in the literature. The following paper presents time step estimates that satisfy accuracy as well as stability criteria. These estimates were correlated to the Froude and Courant numbers. The latter criteria were found to be overly conservative for some integration schemes. Suggestions as to which time integration scheme is best to use are also presented.
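For the parabolic case, the interplay between step size and stability can be sketched on the 1-D heat equation, where the explicit scheme is stable only when the dimensionless step alpha·dt/dx² is at most 1/2; the grid, coefficients, and initial condition below are assumptions for illustration, not the paper's test problems.

```python
def max_stable_dt(alpha, dx):
    """Largest stable explicit-Euler step for u_t = alpha * u_xx:
    stability requires the ratio alpha*dt/dx**2 <= 1/2."""
    return 0.5 * dx * dx / alpha

def heat_step(u, alpha, dx, dt):
    """One explicit time step of the centrally differenced heat
    equation with fixed (Dirichlet) boundary values."""
    r = alpha * dt / dx ** 2
    inner = [u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
             for i in range(1, len(u) - 1)]
    return [u[0]] + inner + [u[-1]]

alpha, dx = 1.0, 0.1
dt = max_stable_dt(alpha, dx)            # 0.005 here
u = [0.0] * 5 + [1.0] + [0.0] * 5        # spike initial condition
for _ in range(100):
    u = heat_step(u, alpha, dx, dt)
assert 0.0 <= min(u) and max(u) <= 1.0   # bounded: no instability
```

The paper's point is that a step chosen purely from such a stability bound may still be too coarse for accuracy, which is why accuracy-based estimates are needed as well.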
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Kavetski, Dmitri
2010-10-01
A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
Adaptive licensing: taking the next step in the evolution of drug approval.
Eichler, H-G; Oye, K; Baird, L G; Abadie, E; Brown, J; Drum, C L; Ferguson, J; Garner, S; Honig, P; Hukkelhoven, M; Lim, J C W; Lim, R; Lumpkin, M M; Neil, G; O'Rourke, B; Pezalla, E; Shoda, D; Seyfert-Margolis, V; Sigal, E V; Sobotka, J; Tan, D; Unger, T F; Hirsch, G
2012-03-01
Traditional drug licensing approaches are based on binary decisions. At the moment of licensing, an experimental therapy is presumptively transformed into a fully vetted, safe, efficacious therapy. By contrast, adaptive licensing (AL) approaches are based on stepwise learning under conditions of acknowledged uncertainty, with iterative phases of data gathering and regulatory evaluation. This approach allows approval to align more closely with patient needs for timely access to new technologies and for data to inform medical decisions. The concept of AL embraces a range of perspectives. Some see AL as an evolutionary step, extending elements that are now in place. Others envision a transformative framework that may require legislative action before implementation. This article summarizes recent AL proposals; discusses how proposals might be translated into practice, with illustrations in different therapeutic areas; and identifies unresolved issues to inform decisions on the design and implementation of AL. PMID:22336591
Dependence of aqua-planet simulations on time step
NASA Astrophysics Data System (ADS)
Williamson, David L.; Olson, Jerry G.
2003-04-01
Aqua-planet simulations with Eulerian and semi-Lagrangian dynamical cores coupled to the NCAR CCM3 parametrization suite produce very different zonal-average precipitation patterns. The model with the Eulerian core forms a narrow single precipitation peak centred on the sea surface temperature (SST) maximum. The one with the semi-Lagrangian core forms a broad structure, often with a double peak straddling the SST maximum and a precipitation minimum centred on the SST maximum. The different structure is shown to be caused primarily by the different time step adopted by each core and its effect on the parametrizations, rather than by different truncation errors introduced by the dynamical cores themselves. With a longer discrete time step, the surface exchange parametrization deposits more moisture in the atmosphere in a single time step, resulting in convection being initiated farther from the equator, closer to the maximum source. Different diffusive smoothing associated with different spectral resolutions is a secondary effect influencing the strength of the double structure. When the semi-Lagrangian core is configured to match the Eulerian one, with the same time step, a three-time-level formulation and the same spectral truncation, it produces precipitation fields similar to those from the Eulerian core. It is argued that the broad and double structure forms in the model with the longer time step because more water is put into the atmosphere over a longer discrete time step, the evaporation rate being the same. The additional water vapour in the region of equatorial moisture convergence results in more convective available potential energy farther from the equator, which allows convection to initiate farther from the equator. The resulting heating drives upward vertical motion and low-level convergence away from the equator, resulting in much weaker upward motion at the equator. The feedback between the convective heating and the dynamics reduces the instability at the equator and
Short-term Time Step Convergence in a Climate Model
Wan, Hui; Rasch, Philip J.; Taylor, Mark; Jablonowski, Christiane
2015-02-11
A testing procedure is designed to assess the convergence property of a global climate model with respect to time step size, based on evaluation of the root-mean-square temperature difference at the end of very short (1 h) simulations with time step sizes ranging from 1 s to 1800 s. A set of validation tests conducted without sub-grid scale parameterizations confirmed that the method was able to correctly assess the convergence rate of the dynamical core under various configurations. The testing procedure was then applied to the full model, and revealed a slow convergence of order 0.4 in contrast to the expected first-order convergence. Sensitivity experiments showed without ambiguity that the time stepping errors in the model were dominated by those from the stratiform cloud parameterizations, in particular the cloud microphysics. This provides a clear guidance for future work on the design of more accurate numerical methods for time stepping and process coupling in the model.
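The convergence assessment described above reduces to measuring how an error norm shrinks as the time step is refined. A toy illustration of estimating an observed convergence order from runs at two step sizes, with forward Euler on a scalar ODE standing in for the climate model (the test problem and step sizes are invented for the demo, not from the paper):

```python
import math

def euler_final(dydt, y0, t_end, dt):
    """Integrate dy/dt = dydt(y) with forward Euler; return y(t_end)."""
    y = y0
    for _ in range(round(t_end / dt)):
        y += dt * dydt(y)
    return y

def observed_order(err_coarse, err_fine, refine=2.0):
    """Convergence order from two errors at step sizes differing by `refine`."""
    return math.log(err_coarse / err_fine) / math.log(refine)

# Toy test problem dy/dt = -y, exact solution exp(-t): forward Euler
# should exhibit first-order convergence in the time step.
exact = math.exp(-1.0)
e1 = abs(euler_final(lambda y: -y, 1.0, 1.0, 0.01) - exact)
e2 = abs(euler_final(lambda y: -y, 1.0, 1.0, 0.005) - exact)
order = observed_order(e1, e2)   # close to 1
```

A full model would replace the scalar error with a root-mean-square norm over the state, as in the testing procedure above, but the slope-fitting step is the same.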
Accuracy of Pedometer Steps and Time for Youth with Disabilities
ERIC Educational Resources Information Center
Beets, Michael W.; Combs, Cindy; Pitetti, Kenneth H.; Morgan, Melinda; Bryan, Rebecca R.; Foley, John T.
2007-01-01
The purpose of the study was to examine the accuracy of pedometer steps and activity time (Walk4Life, WL) for youth with developmental disabilities. Eighteen youth (11 girls, 7 boys) 4-14 years completed six 80-meter self-paced walking trials while wearing a pedometer at five waist locations (front right, front left, back right, back left, middle…
Solving delay differential equations in S-ADAPT by method of steps.
Bauer, Robert J; Mo, Gary; Krzyzanski, Wojciech
2013-09-01
S-ADAPT is a version of the ADAPT program that contains additional simulation and optimization abilities such as parametric population analysis. S-ADAPT utilizes LSODA to solve ordinary differential equations (ODEs), an algorithm designed for large-dimension non-stiff and stiff problems. However, S-ADAPT does not have a solver for delay differential equations (DDEs). Our objective was to implement in S-ADAPT a DDE solver using the method of steps. The method of steps allows one to solve virtually any DDE system by transforming it to an ODE system. The solver was validated for scalar linear DDEs with one delay and bolus and infusion inputs, for which explicit analytic solutions were derived. Solutions of nonlinear DDE problems coded in S-ADAPT were validated by comparing them with ones obtained by the MATLAB DDE solver dde23. The estimation of parameters was tested on MATLAB-simulated population pharmacodynamics data. The S-ADAPT solutions for DDE problems agreed with both the explicit solutions and the MATLAB-produced solutions to at least 7 significant digits. The population parameter estimates obtained using importance sampling expectation-maximization in S-ADAPT agreed with the ones used to generate the data. PMID:23810514
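The key idea of the method of steps — within each window of one delay length the delayed term is already known, so the DDE integrates like an ODE — can be sketched on a uniform grid. This is an illustrative explicit-Euler sketch (S-ADAPT itself uses LSODA, and the example DDE is a standard textbook case, not from the paper):

```python
def dde_method_of_steps(rhs, history, delay, t_end, dt):
    """
    Solve y'(t) = rhs(y(t), y(t - delay)) by the method of steps on a
    uniform grid (delay must be an integer multiple of dt): the delayed
    value y(t - delay) is always a value already computed earlier, so
    the DDE is advanced like an ODE (explicit Euler here, for brevity).
    `history(t)` supplies y(t) for t <= 0.
    """
    lag = round(delay / dt)
    ys = [history(-delay + i * dt) for i in range(lag + 1)]  # t in [-delay, 0]
    for _ in range(round(t_end / dt)):
        ys.append(ys[-1] + dt * rhs(ys[-1], ys[-1 - lag]))
    return ys[lag:]   # samples of y on [0, t_end]

# Example: y'(t) = -y(t - 1), y(t) = 1 for t <= 0.
# The exact piecewise solution gives y(1) = 0 and y(2) = -0.5.
sol = dde_method_of_steps(lambda y, yd: -yd, lambda t: 1.0, 1.0, 2.0, 0.001)
```

Replacing Euler with a higher-order ODE stepper inside each window recovers the accuracy of the underlying ODE method, which is what embedding the approach in an ODE solver like LSODA achieves.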
Cross-cultural adaptation of instruments assessing breastfeeding determinants: a multi-step approach
2014-01-01
Background Cross-cultural adaptation is a necessary process to effectively use existing instruments in other cultural and language settings. The process of cross-culturally adapting existing instruments, including translation, is considered a critical step in establishing a meaningful instrument for use in another setting. Using a multi-step approach is considered best practice in achieving cultural and semantic equivalence of the adapted version. We aimed to ensure the content validity of our instruments in the cultural context of KwaZulu-Natal, South Africa. Methods The Iowa Infant Feeding Attitudes Scale, Breastfeeding Self-Efficacy Scale-Short Form and additional items comprise our consolidated instrument, which was cross-culturally adapted utilizing a multi-step approach during August 2012. Cross-cultural adaptation was achieved through steps to maintain content validity and attain semantic equivalence in the target version. Specifically, Lynn’s recommendation to apply an item-level content validity index score was followed. The revised instrument was translated and back-translated. To ensure semantic equivalence, Brislin’s back-translation approach was utilized followed by the committee review to address any discrepancies that emerged from translation. Results Our consolidated instrument was adapted to be culturally relevant and translated to yield more reliable and valid results for use in our larger research study to measure infant feeding determinants effectively in our target cultural context. Conclusions Undertaking rigorous steps to effectively ensure cross-cultural adaptation increases our confidence that the conclusions we make based on our self-report instrument(s) will be stronger. In this way, our aim to achieve strong cross-cultural adaptation of our consolidated instruments was achieved while also providing a clear framework for other researchers choosing to utilize existing instruments for work in other cultural, geographic and population
Consistency of internal fluxes in a hydrological model running at multiple time steps
NASA Astrophysics Data System (ADS)
Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken
2016-04-01
Improving hydrological models remains a difficult task and many ways can be explored, among which one can find the improvement of spatial representation, the search for more robust parametrization, the better formulation of some processes or the modification of model structures by trial-and-error procedure. Several past works indicate that model parameters and structure can be dependent on the modelling time step, and there is thus some rationale in investigating how a model behaves across various modelling time steps, to find solutions for improvements. Here we analyse the impact of data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, by using a large data set of 240 catchments. To this end, fine time step hydro-climatic information at sub-hourly resolution is used as input of a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to be run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance to the identification of the model components that should be improved. Our analysis indicates that the baseline model structure is to be modified at sub-daily time steps to warrant the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception model component, whose output flux showed the strongest sensitivity to modelling time step. The dependency of the optimal model
Lin, Paul Tinphone; Jameson, Antony; Baker, Timothy J.; Martinelli, Luigi
2005-01-01
An implicit multigrid-driven algorithm for two-dimensional incompressible laminar viscous flows has been coupled with a solution adaptation method and a mesh movement method for boundary movement. Time-dependent calculations are performed implicitly by regarding each time step as a steady-state problem in pseudo-time. The method of artificial compressibility is used to solve the flow equations. The solution mesh adaptation method performs local mesh refinement using an incremental Delaunay algorithm and mesh coarsening by means of edge collapse. Mesh movement is achieved by modeling the computational domain as an elastic solid and solving the equilibrium equations for the stress field. The solution adaptation method has been validated by comparison with experimental results and other computational results for low Reynolds number flow over a shedding circular cylinder. Preliminary validation of the mesh movement method has been demonstrated by a comparison with experimental results of an oscillating airfoil and with computational results for an oscillating cylinder.
Durham adaptive optics real-time controller.
Basden, Alastair; Geng, Deli; Myers, Richard; Younger, Eddy
2010-11-10
The Durham adaptive optics (AO) real-time controller was initially a proof of concept design for a generic AO control system. It has since been developed into a modern and powerful central-processing-unit-based real-time control system, capable of using hardware acceleration (including field programmable gate arrays and graphical processing units), based primarily around commercial off-the-shelf hardware. It is powerful enough to be used as the real-time controller for all currently planned 8 m class telescope AO systems. Here we give details of this controller and the concepts behind it, and report on performance, including latency and jitter, which is less than 10 μs for small AO systems. PMID:21068868
NASA Astrophysics Data System (ADS)
Aronoff, H. I.; Leslie, J. J.; Mittleman, A. N.; Holt, S.
1983-11-01
This manual describes a Shared Time Engineering Program (STEP) conducted by the New England Apparel Manufacturers Association (NEAMA), headquartered in Fall River, Massachusetts, and funded by the Office of Trade Adjustment Assistance of the U.S. Department of Commerce. It is addressed to industry association executives, industrial engineers, and others interested in examining an innovative model of industrial engineering assistance to small plants which might be adapted to their particular needs.
ERIC Educational Resources Information Center
Pitetti, Kenneth H.; Beets, Michael W.; Flaming, Judy
2009-01-01
Pedometer accuracy for steps and activity time during dynamic movement for youth with intellectual disabilities (ID) was examined. Twenty-four youth with ID (13 girls, 13.1 ± 3.2 yrs; 11 boys, 14.7 ± 2.7 yrs) were videotaped during adapted physical education class while wearing a Walk4Life 2505 pedometer in five…
Adapting STePS, an Adult Team Problem Solving Model, for Use with Sixth Grade Students.
ERIC Educational Resources Information Center
Sheive, L. T.; And Others
Structured Team Problem Solving (STePS) is a problem solving model for shared decision making. This project uses the model to discover if children can learn using this method, and what adaptations would be necessary for child use. Sixth grade students in their social studies class worked together in teams (6-8) to identify what they already think…
Multiple time step integrators in ab initio molecular dynamics
Luehr, Nathan; Martínez, Todd J.; Markland, Thomas E.
2014-02-28
Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
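The fast/slow force splitting described above can be sketched with a toy two-level multiple-time-step integrator. This is an illustrative r-RESPA-style scheme, not the fragment-decomposition or range-separation machinery of the paper; the forces and parameters are invented for the demo:

```python
def respa_step(x, v, f_fast, f_slow, dt_outer, n_inner, m=1.0):
    """
    One outer step of a two-level multiple-time-step (r-RESPA style)
    integrator: the slowly varying force gives half-kicks at the outer
    level, while the fast force is integrated by velocity Verlet with
    an inner step dt_outer / n_inner.
    """
    dt = dt_outer / n_inner
    v += 0.5 * dt_outer * f_slow(x) / m       # opening outer half-kick
    for _ in range(n_inner):                  # inner velocity-Verlet loop
        v += 0.5 * dt * f_fast(x) / m
        x += dt * v
        v += 0.5 * dt * f_fast(x) / m
    v += 0.5 * dt_outer * f_slow(x) / m       # closing outer half-kick
    return x, v

# Toy system: stiff spring (fast) plus a weak constant pull (slow).
f_fast = lambda x: -100.0 * x
f_slow = lambda x: -0.1
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, f_fast, f_slow, dt_outer=0.05, n_inner=10)
energy = 0.5 * v * v + 50.0 * x * x + 0.1 * x   # initially 50.1; stays close
```

Because only the fast force is evaluated at the inner step, the expensive slow-force evaluation happens once per outer step, which is the source of the speedups quoted above.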
A method for improving time-stepping numerics
NASA Astrophysics Data System (ADS)
Williams, P. D.
2012-04-01
In contemporary numerical simulations of the atmosphere, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. The most common time-stepping method is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following atmospheric models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability in these models, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter. The modification has become known as the RAW filter (Williams 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various atmospheric models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other models.
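The RA and RAW filters act on three consecutive time levels, and the RAW modification amounts to splitting the filter displacement between the middle and newest values. A compact sketch on the standard oscillation test equation (the coefficient values and test problem are illustrative; only the alpha = 0.53 value is taken from the abstract above):

```python
def leapfrog_raw(f, y0, dt, n_steps, nu=0.2, alpha=0.53):
    """
    Leapfrog with the RAW filter (Williams 2011): the filter displacement
    d is applied as alpha*d to the middle value and (alpha - 1)*d to the
    newest value. alpha = 1 recovers the classical Robert-Asselin filter.
    """
    y_prev, y_curr = y0, y0 + dt * f(y0)   # start with one Euler step
    for _ in range(n_steps - 1):
        y_next = y_prev + 2.0 * dt * f(y_curr)
        d = 0.5 * nu * (y_prev - 2.0 * y_curr + y_next)
        y_prev = y_curr + alpha * d            # filtered middle value
        y_curr = y_next + (alpha - 1.0) * d    # RAW correction to newest value
    return y_curr

# Oscillation test dy/dt = i*y: the true solution keeps |y| = 1, so any
# amplitude loss is artificial damping introduced by the filter.
amp_raw = abs(leapfrog_raw(lambda y: 1j * y, 1.0 + 0.0j, 0.1, 500))
amp_ra = abs(leapfrog_raw(lambda y: 1j * y, 1.0 + 0.0j, 0.1, 500, alpha=1.0))
```

After 500 steps the RA-filtered amplitude has decayed noticeably, while the RAW-filtered amplitude stays near 1, illustrating the reduced non-physical damping claimed above.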
Improving Adaptive Learning Technology through the Use of Response Times
ERIC Educational Resources Information Center
Mettler, Everett; Massey, Christine M.; Kellman, Philip J.
2011-01-01
Adaptive learning techniques have typically scheduled practice using learners' accuracy and item presentation history. We describe an adaptive learning system (Adaptive Response Time Based Sequencing--ARTS) that uses both accuracy and response time (RT) as direct inputs into sequencing. Response times are used to assess learning strength and…
Accurate Monotonicity - Preserving Schemes With Runge-Kutta Time Stepping
NASA Technical Reports Server (NTRS)
Suresh, A.; Huynh, H. T.
1997-01-01
A new class of high-order monotonicity-preserving schemes for the numerical solution of conservation laws is presented. The interface value in these schemes is obtained by limiting a higher-order polynomial reconstruction. The limiting is designed to preserve accuracy near extrema and to work well with Runge-Kutta time stepping. Computational efficiency is enhanced by a simple test that determines whether the limiting procedure is needed. For linear advection in one dimension, these schemes are shown to be monotonicity preserving and uniformly high-order accurate. Numerical experiments with linear advection as well as the Euler equations confirm their high accuracy, good shock resolution, and computational efficiency.
Real-time adaptive video image enhancement
NASA Astrophysics Data System (ADS)
Garside, John R.; Harrison, Chris G.
1999-07-01
As part of a continuing collaboration between the University of Manchester and British Aerospace, a signal processing array has been constructed to demonstrate that it is feasible to compensate a video signal for the degradation caused by atmospheric haze in real-time. Previously reported work has shown good agreement between a simple physical model of light scattering by atmospheric haze and the observed loss of contrast. This model predicts a characteristic relationship between contrast loss in the image and the range from the camera to the scene. For an airborne camera, the slant-range to a point on the ground may be estimated from the airplane's pose, as reported by the inertial navigation system, and the contrast may be obtained from the camera's output. Fusing data from these two streams provides a means of estimating model parameters such as the visibility and the overall illumination of the scene. This knowledge allows the same model to be applied in reverse, thus restoring the contrast lost to atmospheric haze. An efficient approximation of range is vital for a real-time implementation of the method. Preliminary results show that an adaptive approach to fitting the model's parameters, exploiting the temporal correlation between video frames, leads to a robust implementation with a significantly accelerated throughput.
A cascade reaction network mimicking the basic functional steps of adaptive immune response
NASA Astrophysics Data System (ADS)
Han, Da; Wu, Cuichen; You, Mingxu; Zhang, Tao; Wan, Shuo; Chen, Tao; Qiu, Liping; Zheng, Zheng; Liang, Hao; Tan, Weihong
2015-10-01
Biological systems use complex ‘information-processing cores’ composed of molecular networks to coordinate their external environment and internal states. An example of this is the acquired, or adaptive, immune system (AIS), which is composed of both humoral and cell-mediated components. Here we report the step-by-step construction of a prototype mimic of the AIS that we call an adaptive immune response simulator (AIRS). DNA and enzymes are used as simple artificial analogues of the components of the AIS to create a system that responds to specific molecular stimuli in vitro. We show that this network of reactions can function in a manner that is superficially similar to the most basic responses of the vertebrate AIS, including reaction sequences that mimic both humoral and cellular responses. As such, AIRS provides guidelines for the design and engineering of artificial reaction networks and molecular devices.
Yu, Yuan-jin; Fang, Jian-cheng; Xiang, Biao; Wang, Chun-e
2014-11-01
Two-dimensional gyroscopic torque can be produced by tilting the rotor shaft of the active magnetically suspended momentum wheel. The nonlinear magnetic torque is analyzed and an adaptive back-stepping tracking method is proposed to deal with the nonlinearity and uncertainty. The nonlinearity of the magnetic torque is represented as a bounded unknown uncertainty in the stiffness, and an adaptive law is proposed to estimate the stiffness. Combined with the back-stepping method, the proposed approach can handle this uncertainty. The method is designed using Lyapunov stability theory to ensure stability, and its effectiveness is validated by simulations and experiments. These results indicate that it achieves higher tracking precision and faster tracking velocity than the conventional cross-feedback method, providing high-precision, wide-bandwidth output torque. PMID:25104645
Garcia-Molla, V M; Liberos, A; Vidal, A; Guillem, M S; Millet, J; Gonzalez, A; Martinez-Zaldivar, F J; Climent, A M
2014-01-01
In this paper we studied the implementation and performance of adaptive step methods for large systems of ordinary differential equations in graphics processing units, focusing on the simulation of three-dimensional electric cardiac activity. The Rush-Larsen method was applied in all the implemented solvers to improve efficiency. We compared the adaptive methods with the fixed step methods, and we found that the fixed step methods can be faster, while the adaptive step methods are better in terms of accuracy and robustness. PMID:24377685
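The Rush-Larsen method mentioned above has a one-line update for each gating variable, exact for frozen coefficients and hence stable on stiff gating dynamics. A minimal sketch (the gating parameters are invented for illustration, not taken from a cardiac cell model):

```python
import math

def rush_larsen_step(y, y_inf, tau, dt):
    """
    Rush-Larsen update for a gating variable dy/dt = (y_inf - y) / tau,
    treating y_inf and tau as frozen over the step. The update is exact
    for constant coefficients, so it remains stable even when tau << dt,
    where forward Euler (growth factor 1 - dt/tau) would blow up.
    """
    return y_inf + (y - y_inf) * math.exp(-dt / tau)

# Stiff example: tau = 0.01 with dt = 0.1 (the Euler factor would be -9).
y_rl, y_euler = 0.0, 0.0
for _ in range(10):
    y_rl = rush_larsen_step(y_rl, 1.0, 0.01, 0.1)
    y_euler = y_euler + 0.1 * (1.0 - y_euler) / 0.01   # diverges
```

Rush-Larsen relaxes smoothly to the steady state y_inf while the explicit update explodes, which is why the paper applies it in all solver variants.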
Analysis of steps adapted protocol in cardiac rehabilitation in the hospital phase
Winkelmann, Eliane Roseli; Dallazen, Fernanda; Bronzatti, Angela Beerbaum Steinke; Lorenzoni, Juliara Cristina Werner; Windmöller, Pollyana
2015-01-01
Objective To analyze an adapted step-based cardiac rehabilitation protocol applied in physical therapy during the postoperative hospital phase of cardiac surgery in a high-complexity service, with regard to the prevalence of complications and mortality and the length of hospitalization. Methods This is an observational cross-sectional, retrospective and analytical study performed by investigating 99 patients who underwent cardiac surgery for coronary artery bypass graft, heart valve replacement or a combination of both. The step program adapted for rehabilitation after cardiac surgery was analyzed under the direction of the physiotherapy team. Results On average, patients stayed two days in the Intensive Care Unit and three to four days in the hospital room, totaling six days of hospitalization. Mortality was higher during hospitalization (5.1%) and in the period up to two years (8.6%) than in the 30 days after hospital discharge (1.1%). Among the postoperative complications, hemodynamic (63.4%) and respiratory (42.6%) complications were the most prevalent. 36-42% of complications occurred between the immediate postoperative period and the second postoperative day. Hospital discharge began on the fifth postoperative day. Patients progressed through the Steps on each successive day, with Step 3 being the most used during phase I rehabilitation. Conclusion This step-based program can guide in-hospital physical rehabilitation of patients after cardiac surgery. PMID:25859866
Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy.
Zuo, Chao; Sun, Jiasong; Chen, Qian
2016-09-01
The incremental gradient approaches, such as PIE and ePIE, are widely used in the field of ptychographic imaging due to their great flexibility and computational efficiency. Nevertheless, their stability and reconstruction quality may be significantly degraded when non-negligible noise is present in the image. Though this problem is often attributed to the non-convex nature of phase retrieval, we found the reason for this is more closely related to the choice of the step-size, which needs to be gradually diminishing for convergence even in the convex case. To this end, we introduce an adaptive step-size strategy that decreases the step-size whenever sufficient progress is not made. The synthetic and real experiments on Fourier ptychographic microscopy show that the adaptive step-size strategy significantly improves the stability and robustness of the reconstruction towards noise yet retains the fast initial convergence speed of PIE and ePIE. More importantly, the proposed approach is simple, nonparametric, and does not require any preknowledge about the noise statistics. The great performance and limited computational complexity make it a very attractive and promising technique for robust Fourier ptychographic microscopy under noisy conditions. PMID:27607676
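The strategy described above — shrink the step size whenever a step fails to make sufficient progress — applies to incremental gradient schemes generally. A generic sketch on a scalar objective (this is not the exact PIE/ePIE update rule; the sufficient-decrease test and shrink factor are illustrative choices):

```python
def adaptive_step_descent(grad, loss, x0, step0=1.0, shrink=0.5, n_iter=100):
    """
    Gradient descent with an adaptive step size: a trial step is accepted
    only if it achieves a sufficient decrease of the loss; otherwise the
    step size is reduced and the step is retried from the same point.
    """
    x, step = x0, step0
    for _ in range(n_iter):
        g = grad(x)
        trial = x - step * g
        if loss(trial) <= loss(x) - 0.5 * step * g * g:
            x = trial             # sufficient decrease: accept the step
        else:
            step *= shrink        # insufficient progress: shrink the step
    return x

# Sanity check on f(x) = (x - 2)^2, whose minimum is at x = 2.
x_min = adaptive_step_descent(lambda x: 2 * (x - 2), lambda x: (x - 2) ** 2,
                              x0=10.0)
```

The appeal noted in the abstract carries over: the rule is nonparametric in the sense that no noise statistics are needed, only loss evaluations.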
Adaptive Controller Adaptation Time and Available Control Authority Effects on Piloting
NASA Technical Reports Server (NTRS)
Trujillo, Anna; Gregory, Irene
2013-01-01
Adaptive control is considered for highly uncertain, and potentially unpredictable, flight dynamics characteristic of adverse conditions. This experiment examined how the time an adaptive controller takes to recover nominal aircraft dynamics affects pilots, and how pilots want information about available control authority transmitted. Results indicate that an adaptive controller that takes three seconds to adapt helped pilots in terms of lateral and longitudinal errors. Controllability ratings improved with the adaptive controller, again most for the three-second adaptation time, while workload decreased. The effects of displays showing the percentage of the available safe flight envelope used in the maneuver were dominated by the adaptation time. With the displays, altitude error increased, controllability slightly decreased, and mental demand increased. Therefore, the displays did require some of the subjects' resources, but these negatives may be outweighed by pilots having more situation awareness of their aircraft.
NASA Astrophysics Data System (ADS)
Kuraz, Michal
2016-06-01
Modelling the transport processes in a vadose zone, e.g. modelling contaminant transport or the effect of the soil water regime on changes in soil structure and composition, plays an important role in predicting the reactions of soil biotopes to anthropogenic activity. Water flow is governed by the quasilinear Richards equation. The paper concerns the implementation of a multi-time-step approach for solving the nonlinear Richards equation. When modelling porous media flow with the Richards equation, a stable finite element approximation requires accurate temporal and spatial integration, owing to possible convection dominance and the convergence behaviour of the nonlinear solver. The method presented here combines an adaptive domain decomposition algorithm with a multi-time-step treatment of the actively changing subdomains.
Adaptability of stride-to-stride control of stepping movements in human walking.
Bohnsack-McLagan, Nicole K; Cusumano, Joseph P; Dingwell, Jonathan B
2016-01-25
Humans continually adapt their movements as they walk on different surfaces, avoid obstacles, etc. External (environmental) and internal (physiological) noise-like disturbances, and the responses that correct for them, each contribute to locomotor variability. This variability may sometimes be detrimental (perhaps increasing fall risk), or sometimes beneficial (perhaps reflecting exploration of multiple task solutions). Here, we determined how humans regulated stride-to-stride fluctuations in walking when presented different task goals that allowed them to exploit inherent redundancies in different ways. Fourteen healthy adults walked on a treadmill under each of four conditions: constant speed only (SPD), constant speed and stride length (LEN), constant speed and stride time (TIM), or constant speed, stride length, and stride time (ALL). Multiple analyses tested competing hypotheses that participants might attempt to either equally satisfy all goals simultaneously, or instead adopt systematic intermediate strategies that only partly satisfied each individual goal. Participants exhibited similar average stepping behavior, but significant differences in variability and stride-to-stride serial correlations across conditions. Analyses of the structure of stride-to-stride fluctuation dynamics demonstrated humans resolved the competing goals presented not by minimizing errors equally with respect to all goals, but instead by trying to only partly satisfy each goal. Thus, humans exploit task redundancies even when they are explicitly removed from the task specifications. These findings may help identify when variability is predictive of, or protective against, fall risk. They may also help inform rehabilitation interventions to better exploit the positive contributions of variability, while minimizing the negative. PMID:26725217
Ho, Ngoc-Huynh; Truong, Phuc Huu; Jeong, Gu-Min
2016-01-01
We propose a walking distance estimation method based on an adaptive step-length estimator at various walking speeds using a smartphone. First, we apply a fast Fourier transform (FFT)-based smoother on the acceleration data collected by the smartphone to remove the interference signals. Then, we analyze these data using a set of step-detection rules in order to detect walking steps. Using an adaptive estimator, which is based on a model of average step speed, we accurately obtain the walking step length. To evaluate the accuracy of the proposed method, we examine the distance estimation for four different distances and three speed levels. The experimental results show that the proposed method significantly outperforms conventional estimation methods in terms of accuracy. PMID:27598171
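The FFT-smoothing and step-detection stages described above can be sketched on a synthetic signal. The cutoff frequency, detection threshold, and test waveform below are illustrative choices, not the paper's rules or model of average step speed:

```python
import numpy as np

def fft_lowpass(signal, fs, cutoff_hz):
    """Zero out spectral components above cutoff_hz (a simple FFT smoother)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

def count_steps(accel, fs, cutoff_hz=3.0, threshold=0.5):
    """Count upward threshold crossings of the smoothed, demeaned signal."""
    smooth = fft_lowpass(accel - np.mean(accel), fs, cutoff_hz)
    above = smooth > threshold
    return int(np.sum(above[1:] & ~above[:-1]))

# Synthetic test: a 2 Hz "walking" oscillation plus high-frequency noise;
# 2 Hz over 10 s should yield about 20 detected steps.
fs = 100.0
t = np.arange(0, 10, 1.0 / fs)
accel = np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.sin(2 * np.pi * 30.0 * t)
n_steps = count_steps(accel, fs)
```

A real pipeline would add the paper's step-detection rules and the adaptive step-length model on top of the detected step times; the smoother simply removes the interference that would otherwise trigger spurious crossings.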
Accelerating spectral-element simulations of seismic wave propagation using local time stepping
NASA Astrophysics Data System (ADS)
Peter, D. B.; Rietmann, M.; Galvez, P.; Nissen-Meyer, T.; Grote, M.; Schenk, O.
2013-12-01
Seismic tomography using full-waveform inversion requires accurate simulations of seismic wave propagation in complex 3D media. However, finite element meshing in complex media often leads to areas of local refinement, generating small elements that accurately capture e.g. strong topography and/or low-velocity sediment basins. For explicit time schemes, this dramatically reduces the global time-step for wave-propagation problems due to numerical stability conditions, ultimately making seismic inversions prohibitively expensive. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. Numerical simulations are thus liberated of global time-step constraints potentially speeding up simulation runtimes significantly. We present here a new, efficient multi-level LTS-Newmark scheme for general use with spectral-element methods (SEM) with applications in seismic wave propagation. We fit the implementation of our scheme onto the package SPECFEM3D_Cartesian, which is a widely used community code, simulating seismic and acoustic wave propagation in earth-science applications. Our new LTS scheme extends the 2nd-order accurate Newmark time-stepping scheme, and leads to an efficient implementation, producing real-world speedup of multi-resolution seismic applications. Furthermore, we generalize the method to utilize many refinement levels with a design specifically for continuous finite elements. We demonstrate performance speedup using a state-of-the-art dynamic earthquake rupture model for the Tohoku-Oki event, which is currently limited by small elements along the rupture fault. Utilizing our new algorithmic LTS implementation together with advances in exploiting graphic processing units (GPUs), numerical seismic wave propagation simulations in complex media will dramatically reduce computation times, empowering high
Energy Science and Technology Software Center (ESTSC)
2014-06-01
ARKode is part of a software family called SUNDIALS: SUite of Nonlinear and DIfferential/ALgebraic equation Solvers [1]. The ARKode solver library provides an adaptive-step time integration package for stiff, nonstiff and multi-rate systems of ordinary differential equations (ODEs) using Runge-Kutta methods [2].
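The adaptive-step mechanism in such packages rests on embedded Runge-Kutta pairs: two solutions of neighbouring order share stages, and their difference estimates the local error. ARKode uses higher-order pairs; this sketch shows the same mechanism with the simplest (Heun/Euler) pair, with illustrative controller constants:

```python
import math

def adaptive_heun(f, t0, y0, t_end, rtol=1e-6, atol=1e-9, dt=0.1):
    """
    Embedded Heun/Euler pair with step-size control: the difference
    between the 2nd-order and 1st-order solutions estimates the local
    error, which drives both step acceptance and the next step size.
    """
    t, y = t0, y0
    while t < t_end:
        dt = min(dt, t_end - t)
        k1 = f(t, y)
        k2 = f(t + dt, y + dt * k1)
        y_high = y + 0.5 * dt * (k1 + k2)   # Heun, order 2
        y_low = y + dt * k1                 # Euler, order 1
        err = abs(y_high - y_low) / (atol + rtol * abs(y_high))
        if err <= 1.0:                      # error small enough: accept
            t, y = t + dt, y_high
            if t_end - t < 1e-12:           # avoid stalling on float residue
                break
        # Standard controller: err ~ dt^2, so rescale by err^(-1/2).
        dt *= min(2.0, max(0.2, 0.9 / math.sqrt(err + 1e-16)))
    return y

# dy/dt = -y from y(0) = 1: compare with exp(-1) at t = 1.
y1 = adaptive_heun(lambda t, y: -y, 0.0, 1.0, 1.0)
```

Rejected steps are retried with a smaller dt, so accuracy is maintained automatically as the solution stiffens or relaxes.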
Experiments on the role of deleterious mutations as stepping stones in adaptive evolution.
Covert, Arthur W; Lenski, Richard E; Wilke, Claus O; Ofria, Charles
2013-08-20
Many evolutionary studies assume that deleterious mutations necessarily impede adaptive evolution. However, a later mutation that is conditionally beneficial may interact with a deleterious predecessor before it is eliminated, thereby providing access to adaptations that might otherwise be inaccessible. It is unknown whether such sign-epistatic recoveries are inconsequential events or an important factor in evolution, owing to the difficulty of monitoring the effects and fates of all mutations during experiments with biological organisms. Here, we used digital organisms to compare the extent of adaptive evolution in populations when deleterious mutations were disallowed with control populations in which such mutations were allowed. Significantly higher fitness levels were achieved over the long term in the control populations because some of the deleterious mutations served as stepping stones across otherwise impassable fitness valleys. As a consequence, initially deleterious mutations facilitated the evolution of complex, beneficial functions. We also examined the effects of disallowing neutral mutations, of varying the mutation rate, and of sexual recombination. Populations evolving without neutral mutations were able to leverage deleterious and compensatory mutation pairs to overcome, at least partially, the absence of neutral mutations. Substantially raising or lowering the mutation rate reduced or eliminated the long-term benefit of deleterious mutations, but introducing recombination did not. Our work demonstrates that deleterious mutations can play an important role in adaptive evolution under at least some conditions. PMID:23918358
A new time-stepping method for regional climate models
NASA Astrophysics Data System (ADS)
Williams, P. D.
2010-12-01
The dynamical cores of many regional climate models use the Robert-Asselin filter to suppress the spurious computational mode of the leapfrog scheme. Unfortunately, whilst successfully eliminating the unwanted mode, the Robert-Asselin filter also weakly suppresses the physical solution and degrades the numerical accuracy. These two concomitant problems occur because the filter does not conserve the mean state, averaged over the three time slices on which it operates. This presentation proposes a simple modification to the Robert-Asselin filter, which does conserve the three-time-level mean state. When used in conjunction with the leapfrog scheme, the modification vastly reduces the artificial damping of the physical solution. Correspondingly, the modification increases the numerical accuracy for amplitude errors by two orders, yielding third-order accuracy. The modified filter may easily be incorporated into existing regional climate models, via the addition of only a few lines of code that are computationally very inexpensive. Results will be shown from recent implementations of the modified filter in various models. The modification will be shown to reduce model biases and to significantly improve the predictive skill.
Figure caption: Magnitude of the complex amplification factor as a function of the non-dimensional time step, for leapfrog integrations. This quantity would be identical to 1 for a perfect numerical scheme. Clearly, the filter proposed here (case α=0.53) has much smaller numerical errors than the original Robert-Asselin filter (case α=1). Moreover, the proposed filter is trivial to implement and is no more computationally expensive. Taken from Williams (2009; Monthly Weather Review).
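The modification (the Robert-Asselin-Williams, or RAW, filter) is simple enough to sketch: the filter displacement is split between the middle and newest time levels so that their three-level mean is conserved. A minimal leapfrog test on dx/dt = iωx, where α=1 recovers the classical filter and α=0.53 is the value quoted above; the test values of ν, ω, and dt are illustrative:

```python
import numpy as np

def leapfrog_raw(omega, dt, n_steps, nu=0.1, alpha=0.53):
    """Leapfrog integration of dx/dt = i*omega*x with the RAW filter.
    The displacement d is shared between the middle level (+alpha*d)
    and the newest level (-(1-alpha)*d); alpha=1 gives the classical
    Robert-Asselin filter."""
    x_old = 1.0 + 0j
    x_now = np.exp(1j * omega * dt)          # exact first step
    for _ in range(n_steps - 1):
        x_new = x_old + 2j * dt * omega * x_now
        d = 0.5 * nu * (x_old - 2 * x_now + x_new)
        x_old = x_now + alpha * d            # filtered middle level
        x_now = x_new - (1 - alpha) * d      # filtered newest level
    return x_now
```

The exact solution has |x| = 1 at all times, so any amplitude loss is filter damping; the α=0.53 run retains far more amplitude than α=1.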
Watching Proteins Direct Crystal Growth One Step at a Time
2009-01-01
Researchers at Berkeley Labs Molecular Foundry use an atomic force microscope to record this movie of a peptide being adsorbed to a crystal surface while two successive crystal steps interact, then progress beyond the peptide. The peptide temporarily slows the step before transferring up to the next atomic layer. The lattice pattern on the surface corresponds to the molecular structure of the underlying crystal.
The USMLE Step 2 CS: Time for a change.
Alvin, Matthew D
2016-08-01
The United States Medical Licensing Examination (USMLE(®)) Steps are a series of mandatory licensing assessments for all allopathic (MD degree) medical students in their transition from student to intern to resident physician. Steps 1, 2 Clinical Knowledge (CK), and 3 are daylong multiple-choice exams that quantify a medical student's basic science and clinical knowledge as well as their application of that knowledge using a three-digit score. In doing so, these Steps provide a standardized assessment that residency programs use to differentiate applicants and evaluate their competitiveness. Step 2 Clinical Skills (CS), the only other Step exam and the second component of Step 2, was created in 2004 to test clinical reasoning and patient-centered skills. As a Pass/Fail exam without a numerical scoring component, Step 2 CS provides minimal differentiation among applicants for residency programs. In this personal view article, it is argued that the current Step 2 CS exam should be eliminated for US medical students and propose an alternative consistent with the mission and purpose of the exam that imposes less of a burden on medical students. PMID:27007882
Constrained Density Functional Theory by Imaginary Time-Step Method
NASA Astrophysics Data System (ADS)
Kidd, Daniel
Constrained Density Functional Theory (CDFT) has been a popular choice within the last decade for sidestepping the self-interaction problem in long-range charge-transfer calculations. Typically, an inner constraint loop is added within the self-consistent field iterations of DFT in order to enforce the charge-transfer state by means of a Lagrange multiplier method. In this work, an alternate implementation of CDFT is introduced: the imaginary time-step method, which lends itself more readily to real-space calculations because it can solve numerically for 3D local external potentials that enforce arbitrary given densities. This method has been shown to reproduce the proper 1/R dependence of charge-transfer systems in real-space calculations, as well as to generate useful constraint potentials. As an example application, this method is shown to be capable of describing defects within periodic systems using finite calculations, by constraining the 3D density to that of the periodically calculated perfect system at the boundaries.
DNA walks one step at a time in electrophoresis
NASA Astrophysics Data System (ADS)
Guan, Juan; Wang, Bo; Granick, Steve
2011-03-01
Testing the classical view that in DNA gel electrophoresis, long polymer chains navigate through their gel environment via reptation, we reach a different conclusion: this driven motion proceeds by stick-slip. Our single-molecule experiments visualize fluorescent-labeled lambda-DNA, whose intramolecular conformations are resolved with 30 ms resolution using home-written software. Combining hundreds to thousands of trajectories under amplitudes of electric field ranging from zero to large, we quantify the full statistical distribution of motion with unprecedented statistics. Pauses are seen between steps of driven motion, probably reflecting that the chain is trapped inside the gel matrix. The pausing time is exponentially distributed and decreases with increasing electric field strength, suggesting that the jerky behavior is an activated process, facilitated by electric field. We propose a stretch-assisted mechanism: that the energy barrier to move through the gel environment is first overcome by a leading segment, the ensuing intramolecular stress from stretching causing lagging segments to recoil and follow along.
Predicting Hyper-Chaotic Time Series Using Adaptive Higher-Order Nonlinear Filter
NASA Astrophysics Data System (ADS)
Zhang, Jia-Shu; Xiao, Xian-Ci
2001-03-01
A newly proposed method, the adaptive higher-order nonlinear finite impulse response (HONFIR) filter based on higher-order sparse Volterra series expansions, is introduced to predict hyper-chaotic time series. The effectiveness of the adaptive HONFIR filter for one-step and multi-step prediction is tested, using very few data points, on computer-generated hyper-chaotic time series including the Mackey-Glass equation and a four-dimensional nonlinear dynamical system. A comparison is made with some neural networks for predicting the Mackey-Glass hyper-chaotic time series. Numerical simulation results show that the adaptive HONFIR filter proposed here is a very powerful tool for predicting hyper-chaotic time series.
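A minimal HONFIR-style sketch: a second-order Volterra filter adapted by normalized LMS. It is tested here on the logistic map rather than Mackey-Glass, because the map is exactly representable by quadratic features, so the adaptive predictor should converge; the memory length, step size, and feature layout are all illustrative choices, not the paper's:

```python
import numpy as np

def volterra_nlms_predict(series, memory=2, mu=1.0):
    """One-step prediction with a 2nd-order Volterra (HONFIR-style)
    filter adapted by normalized LMS.  The feature vector holds a bias,
    the linear taps, and all quadratic products of the last `memory`
    samples."""
    def features(window):
        lin = list(window)
        quad = [window[i] * window[j]
                for i in range(len(window)) for j in range(i, len(window))]
        return np.array([1.0] + lin + quad)

    n_feat = 1 + memory + memory * (memory + 1) // 2
    w = np.zeros(n_feat)
    errors = []
    for t in range(memory - 1, len(series) - 1):
        phi = features(series[t - memory + 1:t + 1])
        e = series[t + 1] - w @ phi        # one-step prediction error
        w += mu * e * phi / (phi @ phi)    # NLMS update
        errors.append(e)
    return w, np.array(errors)
```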
An averaging analysis of discrete-time indirect adaptive control
NASA Technical Reports Server (NTRS)
Phillips, Stephen M.; Kosut, Robert L.; Franklin, Gene F.
1988-01-01
An averaging analysis of indirect, discrete-time, adaptive control systems is presented. The analysis results in a signal-dependent stability condition and accounts for unmodeled plant dynamics as well as exogenous disturbances. This analysis is applied to two discrete-time adaptive algorithms: an unnormalized gradient algorithm and a recursive least-squares (RLS) algorithm with resetting. Since linearization and averaging are used for the gradient analysis, a local stability result valid for small adaptation gains is found. For RLS with resetting, the assumption is that there is a long time between resets. The results for the two algorithms are virtually identical, emphasizing their similarities in adaptive control.
Effect of Margin Design and Processing Steps on Marginal Adaptation of Captek Restorations
Shih, Amy; Flinton, Robert; Vaidyanathan, Jayalakshmi; Vaidyanathan, Tritala
2011-01-01
This study examined the effect of four margin designs on marginal adaptation of Captek crowns during selected processing steps. Twenty-four Captek crowns were fabricated, six each of four margin designs: shoulder (Group A), chamfer (Group B), chamfer with bevel (Group C), and shoulder with bevel (Group D). Marginal discrepancies between crowns and matching dies were measured at selected points for each sample at the coping stage (Stage 1), following porcelain application (Stage 2), and following cementation (Stage 3). Digital imaging methods were used to measure the marginal gap. The results indicate a decreasing trend of margin gap as a function of margin design in the order A > B > C > D. Between processing steps, the trend was in the order Stage 3 < Stage 1 < Stage 2. Porcelain firing had no significant effect on marginal adaptation, but cementation decreased the marginal gap. The margin gap in Captek restorations was in all cases less than the reported acceptable range of margin gaps for ceramometal restorations. These are clinically favorable outcomes and may be associated with the ductility and burnishability of the matrix phase in Captek metal coping margins. PMID:21991488
Daily Time Step Refinement of Optimized Flood Control Rule Curves for a Global Warming Scenario
NASA Astrophysics Data System (ADS)
Lee, S.; Fitzgerald, C.; Hamlet, A. F.; Burges, S. J.
2009-12-01
Pacific Northwest temperatures have warmed by 0.8 °C since 1920 and are predicted to further increase in the 21st century. Simulated streamflow timing shifts associated with climate change have been found in past research to degrade water resources system performance in the Columbia River Basin when using existing system operating policies. To adapt to these hydrologic changes, optimized flood control operating rule curves were developed in a previous study using a hybrid optimization-simulation approach which rebalanced flood control and reservoir refill at a monthly time step. For the climate change scenario, use of the optimized flood control curves restored reservoir refill capability without increasing flood risk. Here we extend the earlier studies using a detailed daily time step simulation model applied over a somewhat smaller portion of the domain (encompassing Libby, Duncan, and Corra Linn dams, and Kootenai Lake) to evaluate and refine the optimized flood control curves derived from monthly time step analysis. Moving from a monthly to daily analysis, we found that the timing of flood control evacuation needed adjustment to avoid unintended outcomes affecting Kootenai Lake. We refined the flood rule curves derived from monthly analysis by creating a more gradual evacuation schedule, but kept the timing and magnitude of maximum evacuation the same as in the monthly analysis. After these refinements, the performance at monthly time scales reported in our previous study proved robust at daily time scales. Due to a decrease in July storage deficits, additional benefits such as more revenue from hydropower generation and more July and August outflow for fish augmentation were observed when the optimized flood control curves were used for the climate change scenario.
NASA Astrophysics Data System (ADS)
Hegde, Veena; Deekshit, Ravishankar; Satyanarayana, P. S.
2011-12-01
The electrocardiogram (ECG) is widely used for diagnosis of heart diseases. Good-quality ECG is utilized by physicians for interpretation and identification of physiological and pathological phenomena. However, in real situations, ECG recordings are often corrupted by artifacts or noise. Noise severely limits the utility of the recorded ECG and thus needs to be removed for better clinical evaluation. In the present paper, a new noise cancellation technique is proposed for removal of random noise, such as muscle artifact, from the ECG signal. A transform-domain robust variable step size Griffiths' LMS algorithm (TVGLMS) is proposed for noise cancellation. For the TVGLMS, the robust variable step size is achieved by using the Griffiths' gradient, which uses the cross-correlation between the input and the desired signal contaminated with observation or random noise. The algorithm is based on the discrete cosine transform (DCT) and uses the symmetric property of the signal to represent it in the frequency domain with fewer coefficients than the discrete Fourier transform (DFT). The algorithm is implemented as an adaptive line enhancer (ALE) filter, which extracts the ECG signal in a noisy environment using LMS filter adaptation. The proposed algorithm is found to have better convergence error/misadjustment than the ordinary transform-domain LMS (TLMS) algorithm, in the presence of both white and colored observation noise. The reduction in convergence error achieved by the new algorithm with desired-signal decomposition is found to be lower than that obtained without decomposition. The experimental results indicate that the proposed method is better than a traditional LMS adaptive filter at retaining the geometrical characteristics of the ECG signal.
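The ALE configuration itself is easy to sketch. Below, a plain fixed-step LMS stands in for the paper's transform-domain variable-step Griffiths' algorithm: the filter predicts the current sample from a delayed tap vector, so periodic (ECG-like) components that stay correlated across the delay are enhanced while broadband noise is rejected. Tap count, delay, and step size are illustrative:

```python
import numpy as np

def lms_ale(x, n_taps=32, delay=1, mu=0.01):
    """Adaptive line enhancer: an LMS filter predicts the current
    sample from a delayed tap vector.  Narrowband components survive
    the delay and are predicted (enhanced); broadband noise is not."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps + delay, len(x)):
        u = x[n - delay - n_taps + 1:n - delay + 1][::-1]   # delayed taps
        y[n] = w @ u                     # enhanced output
        e = x[n] - y[n]                  # prediction error
        w += 2 * mu * e * u              # LMS weight update
    return y
```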
NASA Astrophysics Data System (ADS)
Balac, Stéphane; Fernandez, Arnaud
2014-10-01
In optics, the nonlinear Schrödinger equation (NLSE), which models light-wave propagation in an optical fibre, is most often solved by the Symmetric Split-Step method. The practical efficiency of the Symmetric Split-Step method depends strongly on the distribution of computational grid points along the fibre, and an efficient adaptive step-size control strategy is therefore mandatory. Many adaptive step-size methods designed to be used in conjunction with the Symmetric Split-Step method for solving the various forms taken by the NLSE can be found in the literature dedicated to optics. These methods can be gathered into two groups. Broadly speaking, a first group of methods is based on observing, along the propagation length, the behavior of a given optical quantity (e.g. the photon number); the step size at each computational step is set so as to guarantee that the known properties of that quantity are preserved. Most of the time these approaches are derived under specific assumptions, and the step-size selection criterion depends on the fibre parameters. The second group of methods makes use of mathematical concepts to estimate the local error at each computational grid point, and the step size is set so as to keep that error below a prescribed tolerance. This approach should be preferred for its generality of use, but the numerical-analysis concepts it involves are often less well understood. The aim of this paper is to present an analysis of local error estimation and adaptive step-size control techniques for solving the NLSE by the Symmetric Split-Step method, with all the mathematical rigor required for a comprehensive understanding of the topic.
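The second (local-error) strategy can be illustrated independently of the NLSE. The sketch below applies step doubling to a symmetric (Strang) splitting of the harmonic oscillator: one coarse step is compared with two half steps, the difference estimates the O(dt³) local error, and the step is rescaled with a cube-root law. The same controller structure carries over to the Symmetric Split-Step Fourier method; the tolerance and rescaling limits are illustrative:

```python
import math

def strang_step(q, p, dt):
    # symmetric kick-drift-kick splitting for q' = p, p' = -q
    p -= 0.5 * dt * q
    q += dt * p
    p -= 0.5 * dt * q
    return q, p

def adaptive_strang(q, p, t_end, tol=1e-8, dt=0.5):
    """Step-doubling error control for a symmetric split-step scheme:
    the splitting is 2nd order, so the coarse/fine difference scales as
    dt^3; accepted steps use local (Richardson) extrapolation."""
    t = 0.0
    while t < t_end - 1e-14:
        dt = min(dt, t_end - t)
        qc, pc = strang_step(q, p, dt)                            # one coarse step
        qf, pf = strang_step(*strang_step(q, p, dt / 2), dt / 2)  # two half steps
        err = math.hypot(qf - qc, pf - pc)                        # local error estimate
        if err <= tol:                                            # accept
            t += dt
            q, p = qf + (qf - qc) / 3, pf + (pf - pc) / 3
        dt *= min(4.0, max(0.25, 0.9 * (tol / (err + 1e-300)) ** (1 / 3)))
    return q, p
```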
A New Modified Artificial Bee Colony Algorithm with Exponential Function Adaptive Steps.
Mao, Wei; Lan, Heng-You; Li, Hao-Ru
2016-01-01
As one of the most popular recent swarm intelligence techniques, the artificial bee colony algorithm is poor at exploitation and has some defects such as slow search speed, poor population diversity, stagnation in the working process, and being trapped in local optima. The purpose of this paper is to develop a new modified artificial bee colony algorithm addressing the initial population structure, subpopulation groups, step updating, and population elimination. Further, based on opposition-based learning theory and the new modified algorithms, an improved S-type grouping method is proposed and the original roulette wheel selection is replaced by a sensitivity-pheromone scheme. Then, an adaptive step with exponential functions is designed to replace the original random step. Finally, based on the new CEC13 test function versions, six benchmark functions with dimensions D = 20 and D = 40 are chosen and applied in experiments to analyze and compare the iteration speed and accuracy of the new modified algorithms. The experimental results show that the new modified algorithm searches faster and more stably, and can quickly improve poor population diversity and find the global optimal solutions. PMID:27293426
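The idea of an exponentially decaying adaptive step inside an artificial bee colony loop can be sketched compactly. This is a simplified stand-in, not the paper's exact algorithm: the employed and onlooker phases are merged, and the decay constant, bounds, and all other parameters are illustrative:

```python
import math, random

def abc_minimize(f, dim=5, n_food=20, iters=300, limit=30, seed=0):
    """Minimal ABC with an exponentially decaying step factor in place
    of the usual fixed-range random step."""
    rng = random.Random(seed)
    foods = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_food)]
    fit = [f(x) for x in foods]
    trials = [0] * n_food
    best = min(fit)
    for t in range(iters):
        scale = math.exp(-2.0 * t / iters)     # exponential adaptive step
        for i in range(n_food):                # employed/onlooker phases merged
            k = rng.randrange(n_food)
            j = rng.randrange(dim)
            cand = foods[i][:]
            cand[j] += scale * rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
            fc = f(cand)
            if fc < fit[i]:                    # greedy selection
                foods[i], fit[i], trials[i] = cand, fc, 0
                best = min(best, fc)
            else:
                trials[i] += 1
            if trials[i] > limit:              # scout: abandon exhausted source
                foods[i] = [rng.uniform(-5, 5) for _ in range(dim)]
                fit[i], trials[i] = f(foods[i]), 0
    return best
```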
Averaging analysis for discrete time and sampled data adaptive systems
NASA Technical Reports Server (NTRS)
Fu, Li-Chen; Bai, Er-Wei; Sastry, Shankar S.
1986-01-01
Earlier continuous-time averaging theorems are extended to the nonlinear discrete-time case. These theorems are used in the convergence analysis of discrete-time adaptive identification and control systems. Instability theorems are also derived and used to study robust stability and instability of adaptive control schemes applied to sampled-data systems. As a by-product, the effects of sampling on unmodeled dynamics in continuous-time systems are also studied.
Impact of space-time mesh adaptation on solute transport modeling in porous media
NASA Astrophysics Data System (ADS)
Esfandiar, Bahman; Porta, Giovanni; Perotto, Simona; Guadagnini, Alberto
2015-02-01
We implement a space-time grid adaptation procedure to efficiently improve the accuracy of numerical simulations of solute transport in porous media in the context of model parameter estimation. We focus on the Advection Dispersion Equation (ADE) for the interpretation of nonreactive transport experiments in laboratory-scale heterogeneous porous media. When compared to a numerical approximation based on a fixed space-time discretization, our approach is grounded on a joint automatic selection of the spatial grid and the time step to capture the main (space-time) system dynamics. Spatial mesh adaptation is driven by an anisotropic recovery-based error estimator which enables us to properly select the size, shape, and orientation of the mesh elements. Adaptation of the time step is performed through an ad hoc local reconstruction of the temporal derivative of the solution via a recovery-based approach. The impact of the proposed adaptation strategy on the ability to provide reliable estimates of the key parameters of an ADE model is assessed on the basis of experimental solute breakthrough data measured following tracer injection in a nonuniform porous system. Model calibration is performed in a Maximum Likelihood (ML) framework upon relying on the representation of the ADE solution through a generalized Polynomial Chaos Expansion (gPCE). Our results show that the proposed anisotropic space-time grid adaptation leads to ML parameter estimates and to model results of markedly improved quality when compared to classical inversion approaches based on a uniform space-time discretization.
Nonlinear time-series-based adaptive control applications
NASA Technical Reports Server (NTRS)
Mohler, R. R.; Rajkumar, V.; Zakrzewski, R. R.
1991-01-01
A control design methodology based on a nonlinear time-series reference model is presented. It is indicated by highly nonlinear simulations that such designs successfully stabilize troublesome aircraft maneuvers undergoing large changes in angle of attack as well as large electric power transients due to line faults. In both applications, the nonlinear controller was significantly better than the corresponding linear adaptive controller. For the electric power network, a flexible AC transmission system with series capacitor power feedback control is studied. A bilinear autoregressive moving average reference model is identified from system data, and the feedback control is manipulated according to a desired reference state. The control is optimized according to a predictive one-step quadratic performance index. A similar algorithm is derived for control of rapid changes in aircraft angle of attack over a normally unstable flight regime. In the latter case, however, a generalization of a bilinear time-series model reference includes quadratic and cubic terms in angle of attack.
Adaptive median filtering for preprocessing of time series measurements
NASA Technical Reports Server (NTRS)
Paunonen, Matti
1993-01-01
A median (L1-norm) filtering program using polynomials was developed. This program was used in automatic recycling data screening. Additionally, a special adaptive program to work with asymmetric distributions was developed. Examples of adaptive median filtering of satellite laser range observations and TV satellite time measurements are given. The program proved to be versatile and time saving in data screening of time series measurements.
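The screening idea can be sketched with a windowed median and a MAD-based rejection rule. The paper fits local polynomials under an L1 norm; the plain window median below is a simplified stand-in, and the window length and rejection threshold are illustrative:

```python
import numpy as np

def median_screen(y, window=11, k=5.0):
    """Flag a sample as an outlier when it deviates from its window
    median by more than k robust sigmas (1.4826 * MAD).  Returns a
    boolean keep-mask over the series."""
    keep = np.ones(len(y), dtype=bool)
    h = window // 2
    for i in range(len(y)):
        lo, hi = max(0, i - h), min(len(y), i + h + 1)
        med = np.median(y[lo:hi])
        mad = np.median(np.abs(y[lo:hi] - med))
        sigma = 1.4826 * mad + 1e-12          # robust scale estimate
        if abs(y[i] - med) > k * sigma:
            keep[i] = False
    return keep
```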
Goldstein, Naomi E. S.; Kemp, Kathleen A.; Leff, Stephen S.; Lochman, John E.
2014-01-01
The use of manual-based interventions tends to improve client outcomes and promote replicability. With an increasingly strong link between funding and the use of empirically supported prevention and intervention programs, manual development and adaptation have become research priorities. As a result, researchers and scholars have generated guidelines for developing manuals from scratch, but there are no extant guidelines for adapting empirically supported, manualized prevention and intervention programs for use with new populations. Thus, this article proposes step-by-step guidelines for the manual adaptation process. It also describes two adaptations of an extensively researched anger management intervention to exemplify how an empirically supported program was systematically and efficiently adapted to achieve similar outcomes with vastly different populations in unique settings. PMID:25110403
Discrete-time adaptive control of robot manipulators
NASA Technical Reports Server (NTRS)
Tarokh, M.
1989-01-01
A discrete-time model reference adaptive control scheme is developed for trajectory tracking of robot manipulators. Hyperstability theory is utilized to derive the adaptation laws for the controller gain matrices. It is shown that asymptotic trajectory tracking is achieved despite gross robot parameter variation and uncertainties. The method offers considerable design flexibility and enables the designer to improve the performance of the control system by adjusting free design parameters. The discrete-time adaptation algorithm is extremely simple and is therefore suitable for real-time implementation.
McCrorie, P Rw; Duncan, E; Granat, M H; Stansfield, B W
2012-11-01
Evidence suggests that behaviours such as standing are beneficial for our health. Unfortunately, little is known of the prevalence of this state, its importance in relation to time spent stepping or variation across seasons. The aim of this study was to quantify, in young adolescents, the prevalence and seasonal changes in time spent upright and not stepping (UNSt(time)) as well as time spent upright and stepping (USt(time)), and their contribution to overall upright time (U(time)). Thirty-three adolescents (12.2 ± 0.3 y) wore the activPAL activity monitor during four school days on two occasions: November/December (winter) and May/June (summer). UNSt(time) contributed 60% of daily U(time) at winter (Mean = 196 min) and 53% at summer (Mean = 171 min); a significant seasonal effect, p < 0.001. USt(time) was significantly greater in summer compared to winter (153 min versus 131 min, p < 0.001). The effects in UNSt(time) could be explained through significant seasonal differences during the school hours (09:00-16:00), whereas the effects in USt(time) could be explained through significant seasonal differences in the evening period (16:00-22:00). Adolescents spent a greater amount of time upright and not stepping than they did stepping, in both winter and summer. The observed seasonal effects for both UNSt(time) and USt(time) provide important information for behaviour change intervention programs. PMID:23111187
Stochastic analysis of epidemics on adaptive time varying networks
NASA Astrophysics Data System (ADS)
Kotnis, Bhushan; Kuri, Joy
2013-06-01
Many studies investigating the effect of human social connectivity structures (networks) and human behavioral adaptations on the spread of infectious diseases have assumed either a static connectivity structure or a network which adapts itself in response to the epidemic (adaptive networks). However, human social connections are inherently dynamic or time varying. Furthermore, the spread of many infectious diseases occur on a time scale comparable to the time scale of the evolving network structure. Here we aim to quantify the effect of human behavioral adaptations on the spread of asymptomatic infectious diseases on time varying networks. We perform a full stochastic analysis using a continuous time Markov chain approach for calculating the outbreak probability, mean epidemic duration, epidemic reemergence probability, etc. Additionally, we use mean-field theory for calculating epidemic thresholds. Theoretical predictions are verified using extensive simulations. Our studies have uncovered the existence of an “adaptive threshold,” i.e., when the ratio of susceptibility (or infectivity) rate to recovery rate is below the threshold value, adaptive behavior can prevent the epidemic. However, if it is above the threshold, no amount of behavioral adaptations can prevent the epidemic. Our analyses suggest that the interaction patterns of the infected population play a major role in sustaining the epidemic. Our results have implications on epidemic containment policies, as awareness campaigns and human behavioral responses can be effective only if the interaction levels of the infected populace are kept in check.
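The continuous-time Markov chain dynamics can be sketched with a Gillespie loop over three event types: infection along S-I links, recovery, and rewiring of S-I links (the susceptible end drops the link and reconnects to another susceptible). All rates, network parameters, and the rewiring rule below are illustrative, not the paper's exact model:

```python
import random

def sis_adaptive(n=200, k=4, beta=0.05, mu=1.0, w=0.5, i0=20,
                 t_max=100.0, seed=1):
    """Gillespie simulation of SIS dynamics on an adaptive network.
    Returns the number of infected nodes at the end of the run."""
    rng = random.Random(seed)
    edges = set()
    while len(edges) < n * k // 2:              # random graph, mean degree ~k
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            edges.add((min(a, b), max(a, b)))
    infected = set(rng.sample(range(n), i0))
    t = 0.0
    while t < t_max and infected:
        si = [e for e in edges if (e[0] in infected) != (e[1] in infected)]
        rate = mu * len(infected) + (beta + w) * len(si)
        t += rng.expovariate(rate)              # time to next event
        r = rng.uniform(0.0, rate)
        if r < mu * len(infected):              # recovery
            infected.remove(rng.choice(sorted(infected)))
        elif r < mu * len(infected) + beta * len(si):   # infection
            a, b = rng.choice(si)
            infected.add(a if b in infected else b)
        elif si:                                # rewiring
            a, b = rng.choice(si)
            s = a if a not in infected else b   # susceptible end keeps moving
            free = [v for v in range(n) if v not in infected and v != s
                    and (min(s, v), max(s, v)) not in edges]
            if free:
                v = rng.choice(free)
                edges.remove((a, b))
                edges.add((min(s, v), max(s, v)))
    return len(infected)
```

With these subcritical rates (beta*k/mu = 0.2, well below threshold), the epidemic should die out long before t_max.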
One step at a time: endoplasmic reticulum-associated degradation
Vembar, Shruthi S.; Brodsky, Jeffrey L.
2009-01-01
Protein folding in the endoplasmic reticulum (ER) is monitored by ER quality control (ERQC) mechanisms. Proteins that pass ERQC criteria traffic to their final destinations through the secretory pathway, whereas non-native and unassembled subunits of multimeric proteins are degraded by the ER-associated degradation (ERAD) pathway. During ERAD, molecular chaperones and associated factors recognize and target substrates for retrotranslocation to the cytoplasm, where they are degraded by the ubiquitin–proteasome machinery. The discovery of diseases that are associated with ERAD substrates highlights the importance of this pathway. Here, we summarize our current understanding of each step during ERAD, with emphasis on the factors that catalyse distinct activities. PMID:19002207
Real-time adaptive aircraft scheduling
NASA Technical Reports Server (NTRS)
Kolitz, Stephan E.; Terrab, Mostafa
1990-01-01
One of the most important functions of any air traffic management system is the assignment of ground-holding times to flights, i.e., the determination of whether and by how much the take-off of a particular aircraft headed for a congested part of the air traffic control (ATC) system should be postponed in order to reduce the likelihood and extent of airborne delays. An analysis is presented for the fundamental case in which flights from many destinations must be scheduled for arrival at a single congested airport; the formulation is also useful in scheduling the landing of airborne flights within the extended terminal area. A set of approaches is described for addressing a deterministic and a probabilistic version of this problem. For the deterministic case, where airport capacities are known and fixed, several models were developed with associated low-order polynomial-time algorithms. For general delay cost functions, these algorithms find an optimal solution. Under a particular natural assumption regarding the delay cost function, an extremely fast (O(n ln n)) algorithm was developed. For the probabilistic case, using an estimated probability distribution of airport capacities, a model was developed with an associated low-order polynomial-time heuristic algorithm with useful properties.
Consensus time and conformity in the adaptive voter model
NASA Astrophysics Data System (ADS)
Rogers, Tim; Gross, Thilo
2013-09-01
The adaptive voter model is a paradigmatic model in the study of opinion formation. Here we propose an extension for this model, in which conflicts are resolved by obtaining another opinion, and analytically study the time required for consensus to emerge. Our results shed light on the rich phenomenology of both the original and extended adaptive voter models, including a dynamical phase transition in the scaling behavior of the mean time to consensus.
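The model is short to state in code. In the sketch below, an update picks a discordant edge; with probability phi the edge is rewired to a like-minded node, otherwise one endpoint adopts the other's opinion, and the run ends when no discordant edge remains (consensus, or fragmentation into like-minded components). This is the standard adaptive voter model, not the paper's extension, and the network size and phi are illustrative:

```python
import random

def adaptive_voter(n=100, m=200, phi=0.3, seed=2, max_steps=10**6):
    """Run the adaptive voter model until the absorbing state (no
    discordant edges) and return the number of updates taken."""
    rng = random.Random(seed)
    opinion = [rng.randrange(2) for _ in range(n)]
    edges = []
    while len(edges) < m:                      # random multigraph
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            edges.append((a, b))
    for step in range(max_steps):
        discordant = [i for i, (a, b) in enumerate(edges)
                      if opinion[a] != opinion[b]]
        if not discordant:
            return step                        # absorbing state reached
        i = rng.choice(discordant)
        a, b = edges[i]
        if rng.random() < phi:                 # rewire to a like-minded node
            same = [v for v in range(n) if opinion[v] == opinion[a] and v != a]
            if same:
                edges[i] = (a, rng.choice(same))
                continue
        opinion[b] = opinion[a]                # otherwise: adopt
    return max_steps
```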
PIC Algorithm with Multiple Poisson Equation Solves During One Time Step
NASA Astrophysics Data System (ADS)
Ren, Junxue; Godar, Trenton; Menart, James; Mahalingam, Sudhakar; Choi, Yongjun; Loverich, John; Stoltz, Peter H.
2015-09-01
In order to reduce the overall computational time of a PIC (particle-in-cell) computer simulation, an attempt was made to utilize larger time step sizes by implementing multiple solutions of Poisson's equation within one time step. The hope was that this would make the PIC simulation stable at larger time steps than an explicit technique can use, and that using larger time steps would reduce the overall computational time, even though the computational time per time step would increase. A three-dimensional PIC code that tracks electrons and ions throughout a three-dimensional Cartesian computational domain is used to perform this study. The results of altering the number of times Poisson's equation is solved during a single time step are presented. Also, the size of the time step that can be used while still maintaining a stable solution is surveyed. The results indicate that using multiple Poisson solves during one time step provides some ability to use larger time steps in PIC simulations, but the increase in time step size is not significant and the overall simulation run time is not reduced.
Bekele, Esubalew T; Lahiri, Uttama; Swanson, Amy R.; Crittendon, Julie A.; Warren, Zachary E.; Sarkar, Nilanjan
2013-01-01
Emerging technology, especially robotic technology, has been shown to be appealing to children with autism spectrum disorders (ASD). Such interest may be leveraged to provide repeatable, accurate and individualized intervention services to young children with ASD based on quantitative metrics. However, existing robot-mediated systems tend to have limited adaptive capability that may impact individualization. Our current work seeks to bridge this gap by developing an adaptive and individualized robot-mediated technology for children with ASD. The system is composed of a humanoid robot with its vision augmented by a network of cameras for real-time head tracking using a distributed architecture. Based on the cues from the child’s head movement, the robot intelligently adapts itself in an individualized manner to generate prompts and reinforcements with potential to promote skills in the ASD core deficit area of early social orienting. The system was validated for feasibility, accuracy, and performance. Results from a pilot usability study involving six children with ASD and a control group of six typically developing (TD) children are presented. PMID:23221831
A novel online adaptive time delay identification technique
NASA Astrophysics Data System (ADS)
Bayrak, Alper; Tatlicioglu, Enver
2016-05-01
Time delay is a phenomenon common in signal processing, communication, and control applications; its ubiquity across many systems is what makes it such an attractive subject of study. A literature search on time-delay identification highlights the fact that most studies have focused on numerical solutions. In this study, a novel online adaptive time-delay identification technique is proposed. The technique is based on an adaptive update law derived through a minimum-maximum strategy, applied here for the first time to time-delay identification. In the design of the adaptive identification law, Lyapunov-based stability analysis techniques are utilised. Several numerical simulations were conducted with Matlab/Simulink to evaluate the performance of the proposed technique. It is numerically demonstrated that the proposed technique works efficiently in identifying both constant and disturbed time delays, and is also robust to measurement noise.
Competencies for Part-Time Faculty--the First Step.
ERIC Educational Resources Information Center
Haddad, Margaret; Dickens, Mary Ellen
1978-01-01
Discusses hiring, evaluation, involvement, and competencies of the increasing number of part-time teachers in colleges throughout the country, and the unclear expectations placed on them. Includes a competencies questionnaire for part-time instructors developed at Caldwell Community College and Technical Institute.
Viral DNA Packaging: One Step at a Time
NASA Astrophysics Data System (ADS)
Bustamante, Carlos; Moffitt, Jeffrey R.
During its life-cycle the bacteriophage φ29 actively packages its dsDNA genome into a proteinaceous capsid, compressing its genome to near-crystalline densities against large electrostatic, elastic, and entropic forces. This remarkable process is accomplished by a nano-scale, molecular DNA pump - a complex assembly of three protein and nucleic acid rings which utilizes the free energy released in ATP hydrolysis to perform the mechanical work necessary to overcome these large energetic barriers. We have developed a single-molecule optical tweezers assay which has allowed us to probe the detailed mechanism of this packaging motor. By following the rate of packaging of a single bacteriophage as the capsid is filled with genome and as a function of optically applied load, we find that the compression of the genome results in the build-up of an internal force, on the order of ~55 pN, due to the compressed genome. The ability to work against such large forces makes the packaging motor one of the strongest known molecular motors. By titrating the concentration of ATP, ADP, and inorganic phosphate at different opposing loads, we are able to determine features of the mechanochemistry of this motor - the coupling between the mechanical and chemical cycles. We find that force is generated not upon binding of ATP, but rather upon release of hydrolysis products. Finally, by improving the resolution of the optical tweezers assay, we are able to observe the discrete increments of DNA encapsidated each cycle of the packaging motor. We find that DNA is packaged in 10-bp increments preceded by the binding of multiple ATPs. The application of large external forces slows the packaging rate of the motor, revealing that the 10-bp steps are actually composed of four 2.5-bp steps which occur in rapid succession. These data show that the individual subunits of the pentameric ring-ATPase at the core of the packaging motor are highly coordinated, with the binding of ATP and the
Inherent robustness of discrete-time adaptive control systems
NASA Technical Reports Server (NTRS)
Ma, C. C. H.
1986-01-01
Global stability robustness with respect to unmodeled dynamics, arbitrary bounded internal noise, as well as external disturbance is shown to exist for a class of discrete-time adaptive control systems when the regressor vectors of these systems are persistently exciting. Although fast adaptation is definitely undesirable for attaining the greatest amount of global stability robustness, slow adaptation is shown to be not necessarily beneficial either. The entire analysis in this paper holds for systems with slowly varying return difference matrices; the plants in these systems need not be slowly varying.
Active movement restores veridical event-timing after tactile adaptation.
Tomassini, Alice; Gori, Monica; Burr, David; Sandini, Giulio; Morrone, Maria Concetta
2012-10-01
Growing evidence suggests that time in the subsecond range is tightly linked to sensory processing. Event-time can be distorted by sensory adaptation, and many temporal illusions can accompany action execution. In this study, we show that adaptation to tactile motion causes a strong contraction of the apparent duration of tactile stimuli. However, when subjects make a voluntary motor act before judging the duration, it annuls the adaptation-induced temporal distortion, reestablishing veridical event-time. The movement needs to be performed actively by the subject: passive movement of similar magnitude and dynamics has no effect on adaptation, showing that it is the motor commands themselves, rather than reafferent signals from body movement, which reset the adaptation for tactile duration. No other concomitant perceptual changes were reported (such as apparent speed or enhanced temporal discrimination), ruling out a generalized effect of body movement on somatosensory processing. We suggest that active movement resets timing mechanisms in preparation for the new scenario that the movement will cause, eliminating inappropriate biases in perceived time. Our brain seems to utilize the intention-to-move signals to retune its perceptual machinery appropriately, to prepare to extract new temporal information. PMID:22832572
Time-step limits for a Monte Carlo Compton-scattering method
Densmore, Jeffery D; Warsa, James S; Lowrie, Robert B
2009-01-01
We perform a stability analysis of a Monte Carlo method for simulating the Compton scattering of photons by free electrons in high-energy-density applications and develop time-step limits that avoid unstable and oscillatory solutions. Implementing this Monte Carlo technique in multiphysics problems typically requires evaluating the material temperature at its beginning-of-time-step value, which can lead to this undesirable behavior. With a set of numerical examples, we demonstrate the efficacy of our time-step limits.
NASA Technical Reports Server (NTRS)
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
Using Response Times for Item Selection in Adaptive Testing
ERIC Educational Resources Information Center
van der Linden, Wim J.
2008-01-01
Response times on items can be used to improve item selection in adaptive testing provided that a probabilistic model for their distribution is available. In this research, the author used a hierarchical modeling framework with separate first-level models for the responses and response times and a second-level model for the distribution of the…
Importance of variable time-step algorithms in spatial kinetics calculations
Aviles, B.N.
1994-12-31
The use of spatial kinetics codes in conjunction with advanced thermal-hydraulics codes is becoming more widespread as better methods and faster computers appear. The integrated code packages are being used for routine nuclear power plant design and analysis, including simulations with instrumentation and control systems initiating system perturbations such as rod motion and scrams. As a result, it is important to include a robust variable time-step algorithm that can accurately and efficiently follow widely varying plant neutronic behavior. This paper describes the variable time-step algorithm in SPANDEX and compares the automatic time-step scheme with a more traditional fixed time-step scheme.
IMPROVEMENTS TO THE TIME STEPPING ALGORITHM OF RELAP5-3D
Cumberland, R.; Mesina, G.
2009-01-01
The RELAP5-3D time step method is used to perform thermo-hydraulic and neutronic simulations of nuclear reactors and other devices. It discretizes time and space by numerically solving several differential equations. Previously, time step size was controlled by halving or doubling the size of a previous time step. This process caused the code to run slower than it potentially could. In this research project, the RELAP5-3D time step method was modified to allow a new method of changing time steps to improve execution speed and to control error. The new RELAP5-3D time step method being studied involves making the time step proportional to the material courant limit (MCL), while ensuring that the time step does not increase by more than a factor of two between advancements. As before, if a step fails or mass error is excessive, the time step is cut in half. To examine performance of the new method, a measure of run time and a measure of error were plotted against a changing MCL proportionality constant (m) in seven test cases. The removal of the upper time step limit produced a small increase in error, but a large decrease in execution time. The best value of m was found to be 0.9. The new algorithm is capable of producing a significant increase in execution speed, with a relatively small increase in mass error. The improvements made are now under consideration for inclusion as a special option in the RELAP5-3D production code.
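The control logic described above (time step proportional to the MCL, growth capped at a factor of two, halving on a failed step or excessive mass error) can be sketched as follows; the function name, bounds, and default values are ours for illustration, not RELAP5-3D's actual implementation.

```python
def next_time_step(dt_prev, mcl, m=0.9, step_failed=False,
                   dt_min=1e-9, dt_max=1.0):
    """Sketch of an MCL-proportional time-step controller:
    target dt = m * MCL, never growing by more than 2x per
    advancement; halve on a failed step (hypothetical bounds)."""
    if step_failed:
        return max(dt_prev / 2.0, dt_min)
    dt = min(m * mcl, 2.0 * dt_prev, dt_max)
    return max(dt, dt_min)
```

With m = 0.9 (the best value found in the study), the step tracks 90% of the material courant limit whenever doing so does not more than double the previous step.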
Two-stepping through time: mammals and viruses.
Meyerson, Nicholas R; Sawyer, Sara L
2011-06-01
Recent studies have identified ancient virus genomes preserved as fossils within diverse animal genomes. These fossils have led to the revelation that a broad range of mammalian virus families are older and more ubiquitous than previously appreciated. Long-term interactions between viruses and their hosts often develop into genetic arms races where both parties continually jockey for evolutionary dominance. It is difficult to imagine how mammalian hosts have kept pace in the evolutionary race against rapidly evolving viruses over large expanses of time, given their much slower evolutionary rates. However, recent data has begun to reveal the evolutionary strategy of slowly-evolving hosts. We review these data and suggest a modified arms race model where the evolutionary possibilities of viruses are relatively constrained. Such a model could allow more accurate forecasting of virus evolution. PMID:21531564
A discrete-time adaptive control scheme for robot manipulators
NASA Technical Reports Server (NTRS)
Tarokh, M.
1990-01-01
A discrete-time model reference adaptive control scheme is developed for trajectory tracking of robot manipulators. The scheme utilizes feedback, feedforward, and auxiliary signals, obtained from joint angle measurement through simple expressions. Hyperstability theory is utilized to derive the adaptation laws for the controller gain matrices. It is shown that trajectory tracking is achieved despite gross robot parameter variation and uncertainties. The method offers considerable design flexibility and enables the designer to improve the performance of the control system by adjusting free design parameters. The discrete-time adaptation algorithm is extremely simple and is therefore suitable for real-time implementation. Simulations and experimental results are given to demonstrate the performance of the scheme.
NASA Astrophysics Data System (ADS)
Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar
2011-12-01
This paper extends the recently introduced variable step-size (VSS) approach to the family of adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics: the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In VSS-SPU adaptive algorithms the filter coefficients are partially updated, which reduces the computational complexity. In VSS-SR-APA, the optimal selection of input regressors is performed during the adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
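For context, a baseline NLMS system-identification loop is sketched below; the VSS variants in the paper replace the fixed scalar step size mu with an optimized step-size vector derived from channel statistics, which is not reproduced here. All names and parameters are illustrative.

```python
import numpy as np

def nlms_identify(x, d, order, mu=0.5, eps=1e-6):
    """Baseline NLMS adaptation of an FIR model w of length `order`
    from input x and desired signal d. The VSS algorithms in the
    paper generalize the fixed mu; this sketch keeps it constant."""
    w = np.zeros(order)
    for n in range(order, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor, most recent sample first
        e = d[n] - w @ u                   # a priori error
        w += (mu / (u @ u + eps)) * e * u  # normalized gradient update
    return w
```

In a noiseless system-identification scenario the weight vector converges to the true channel taps, which is the setting the paper's simulations use to compare convergence speed and steady-state MSE.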
Convergence Acceleration for Multistage Time-Stepping Schemes
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli L.; Rossow, C-C; Vasta, V. N.
2006-01-01
The convergence of a Runge-Kutta (RK) scheme with multigrid is accelerated by preconditioning with a fully implicit operator. With the extended stability of the Runge-Kutta scheme, CFL numbers as high as 1000 could be used. The implicit preconditioner addresses the stiffness in the discrete equations associated with stretched meshes. Numerical dissipation operators (based on the Roe scheme, a matrix formulation, and the CUSP scheme) as well as the number of RK stages are considered in evaluating the RK/implicit scheme. Both the numerical and computational efficiency of the scheme with the different dissipation operators are discussed. The RK/implicit scheme is used to solve the two-dimensional (2-D) and three-dimensional (3-D) compressible, Reynolds-averaged Navier-Stokes equations. In two dimensions, turbulent flows over an airfoil at subsonic and transonic conditions are computed. The effects of mesh cell aspect ratio on convergence are investigated for Reynolds numbers between 5.7 x 10^6 and 100.0 x 10^6. Results are also obtained for a transonic wing flow. For both 2-D and 3-D problems, the computational time of a well-tuned standard RK scheme is reduced by at least a factor of four.
A Dynamic Era-Based Time-Symmetric Block Time-Step Algorithm with Parallel Implementations
NASA Astrophysics Data System (ADS)
Kaplan, Murat; Saygin, Hasan
2012-06-01
The time-symmetric block time-step (TSBTS) algorithm is a newly developed efficient scheme for N-body integrations. It is constructed on an era-based iteration. In this work, we re-designed the TSBTS integration scheme with a dynamically changing era size. A number of numerical tests were performed to show the importance of choosing the size of the era, especially for long-time integrations. Our second aim was to show that the TSBTS scheme is as suitable as previously known schemes for developing parallel N-body codes. In this work, we relied on a parallel scheme using the copy algorithm for the time-symmetric scheme. We implemented a hybrid of data and task parallelization for force calculation to handle load-balancing problems that can appear in practice. Using the Plummer model initial conditions for different numbers of particles, we obtained the expected efficiency and speedup for a small number of particles. Although parallelization of direct N-body codes is negatively affected by the communication/calculation ratio, we obtained well load-balanced results. Moreover, we were able to preserve the advantages of the algorithm (e.g., energy conservation for long-term simulations).
Time Adaptation Shows Duration Selectivity in the Human Parietal Cortex
Hayashi, Masamichi J.; Ditye, Thomas; Harada, Tokiko; Hashiguchi, Maho; Sadato, Norihiro; Carlson, Synnöve; Walsh, Vincent; Kanai, Ryota
2015-01-01
Although psychological and computational models of time estimation have postulated the existence of neural representations tuned for specific durations, empirical evidence of this notion has been lacking. Here, using a functional magnetic resonance imaging (fMRI) adaptation paradigm, we show that the inferior parietal lobule (IPL) (corresponding to the supramarginal gyrus) exhibited reduction in neural activity due to adaptation when a visual stimulus of the same duration was repeatedly presented. Adaptation was strongest when stimuli of identical durations were repeated, and it gradually decreased as the difference between the reference and test durations increased. This tuning property generalized across a broad range of durations, indicating the presence of general time-representation mechanisms in the IPL. Furthermore, adaptation was observed irrespective of the subject’s attention to time. Repetition of a nontemporal aspect of the stimulus (i.e., shape) did not produce neural adaptation in the IPL. These results provide neural evidence for duration-tuned representations in the human brain. PMID:26378440
The Effects of Predator Arrival Timing on Adaptive Radiation (Invited)
NASA Astrophysics Data System (ADS)
Borden, J.; Knope, M. L.; Fukami, T.
2009-12-01
Much of Earth's biodiversity is thought to have arisen by adaptive radiation, the rapid diversification of a single ancestral species to fill a wide variety of ecological niches. Both theory and empirical evidence have long supported competition for limited resources as a primary driver of adaptive radiation. While predation has also been postulated to be an important selective force during radiation, empirical evidence is surprisingly scant and its role remains controversial. However, two recent empirical studies suggest that predation can promote divergence during adaptive radiation. Using an experimental laboratory microcosm system, we examined how predator arrival timing affects the rate and extent of diversification during adaptive radiation. We varied the introduction timing of a protozoan predator (Tetrahymena thermophila) into populations of the bacterium Pseudomonas fluorescens, which is known for its ability to undergo rapid adaptive radiation in aqueous microcosms. Our results suggest that predator arrival timing may have a significant impact on the rate, but not the extent, of diversification; however, these results are tenuous and should be interpreted with caution, as the protozoan predators died early in the majority of our treatments, hampering our ability to compare across treatments. Additionally, the abundance of newly derived bacterial genotypes was markedly lower in all treatments than observed in previous experiments utilizing this microbial experimental evolution system. To address these shortcomings, we will repeat the experiment in the near future to further explore the impact of predator arrival timing on adaptive radiation. (Figure caption: Smooth morph and small wrinkly spreader Pseudomonas fluorescens diversification in the 96-hour treatment; day 10, diluted to 1e-5.)
ADAPTIVE DATA ANALYSIS OF COMPLEX FLUCTUATIONS IN PHYSIOLOGIC TIME SERIES
PENG, C.-K.; COSTA, MADALENA; GOLDBERGER, ARY L.
2009-01-01
We introduce a generic framework of dynamical complexity to understand and quantify fluctuations of physiologic time series. In particular, we discuss the importance of applying adaptive data analysis techniques, such as the empirical mode decomposition algorithm, to address the challenges of nonlinearity and nonstationarity that are typically exhibited in biological fluctuations. PMID:20041035
Ying, Wenjun; Henriquez, Craig S.
2015-01-01
A space- and time-adaptive algorithm is presented for simulating electrical wave propagation in the Purkinje system of the heart. The equations governing the distribution of electric potential over the system are solved in time with the method of lines. At each timestep, by an operator splitting technique, the space-dependent but linear diffusion part and the nonlinear but space-independent reaction part of the partial differential equations are integrated separately with implicit schemes, which have better stability and allow larger timesteps than explicit ones. The linear diffusion equation on each edge of the system is spatially discretized with the continuous piecewise linear finite element method. The adaptive algorithm can automatically recognize when and where the electrical wave starts to leave or enter the computational domain due to external current/voltage stimulation, self-excitation, or local change of membrane properties. Numerical examples demonstrating the efficiency and accuracy of the adaptive algorithm are presented. PMID:26581455
Sensitivity of a thermodynamic sea ice model with leads to time step size
NASA Technical Reports Server (NTRS)
Ledley, T. S.
1985-01-01
The characteristics of sea ice models, developed to study the physics of the growth and melt of ice at the ocean surface and the variations in ice extent, depend on the size of the time step. Thus, to study longer-term variations within a reasonable computer budget, a model with a scheme allowing longer time steps has been constructed. However, the results produced by the model can definitely depend on the length of the time step. The sensitivity of a model to time-step size can be reduced by appropriate approaches. The present investigation is concerned with experiments which use a formulation of a lead parameterization that can be considered a first step toward the development of a lead parameterization suitable for use in long-term climate studies.
An Explicit Super-Time-Stepping Scheme for Non-Symmetric Parabolic Problems
NASA Astrophysics Data System (ADS)
Gurski, K. F.; O'Sullivan, S.
2010-09-01
Explicit numerical methods for the solution of a system of differential equations may suffer from a time step size that approaches zero in order to satisfy stability conditions. When the differential equations are dominated by a skew-symmetric component, the problem is that the real eigenvalues are dominated by imaginary eigenvalues. We compare results for stable time step limits for the super-time-stepping method of Alexiades, Amiez, and Gremaud (super-time-stepping methods belong to the Runge-Kutta-Chebyshev class) and a new method modeled on a predictor-corrector scheme with multiplicative operator splitting. This new explicit method increases the stability of the original super-time-stepping whenever the skew-symmetric term is nonzero, which occurs in particular convection-diffusion problems and more generally when the iteration matrix represents a nonlinear operator. The new method is stable for skew-symmetric-dominated systems where the regular super-time-stepping scheme fails. The method is second order in time (the order may be increased by Richardson extrapolation), and the spatial order is determined by the user's choice of discretization scheme. We present a comparison between the two super-time-stepping methods to show the speed-up available for any non-symmetric system, using the nearly symmetric Black-Scholes equation as an example.
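The super-time-stepping substeps of Alexiades, Amiez, and Gremaud can be sketched as below, applied to a purely symmetric (diffusion-only) 1D heat equation; this illustrates only the baseline method the abstract compares against, not the paper's new scheme for skew-symmetric-dominated systems, and all parameter choices are illustrative.

```python
import numpy as np

def sts_substeps(dt_expl, N, nu):
    """Substep sizes tau_j for one superstep of N inner explicit
    steps; their sum approaches N^2 * dt_expl as the damping
    parameter nu -> 0, while the composite update stays stable
    for parabolic (real, negative) eigenvalues."""
    j = np.arange(1, N + 1)
    return dt_expl / ((nu - 1.0) * np.cos((2 * j - 1) * np.pi / (2 * N)) + nu + 1.0)

def heat_superstep(u, dx, taus):
    """One superstep of explicit Euler on u_t = u_xx with fixed ends;
    individual substeps may exceed the explicit stability limit, but
    the Chebyshev-like composite product remains stable."""
    u = u.copy()
    for tau in taus:
        u[1:-1] += tau * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return u
```

One superstep thus covers far more physical time than N ordinary explicit steps of size dt_expl, which is the source of the speed-up the abstract discusses.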
Multiple Steps to Activate FAK’s Kinase Domain: Adaptation to Confined Environments?
Herzog, Florian A.; Vogel, Viola
2013-01-01
Protein kinases regulate cell signaling by phosphorylating their substrates in response to environment-specific stimuli. Using molecular dynamics, we studied the catalytically active and inactive conformations of the kinase domain of the focal adhesion kinase (FAK), which are distinguished by displaying a structured or unstructured activation loop, respectively. Upon removal of an ATP analog, we show that the nucleotide-binding pocket in the catalytically active conformation is structurally unstable and fluctuates between an open and closed configuration. In contrast, the pocket remains open in the catalytically inactive form upon removal of an inhibitor from the pocket. Because temporal pocket closures will slow the ATP on-rate, these simulations suggest a multistep process in which the kinase domain is more likely to bind ATP in the catalytically inactive than in the active form. Transient closures of the ATP-binding pocket might allow FAK to slow down its catalytic cycle. These short cat naps could be adaptations to crowded or confined environments, giving the substrate sufficient time to diffuse away. The simulations show further how either the phosphorylation of the activation loop or the activating mutations of the so-called SuperFAK influence the electrostatic switch that controls kinase activity. PMID:23746525
An adaptive robust controller for time delay maglev transportation systems
NASA Astrophysics Data System (ADS)
Milani, Reza Hamidi; Zarabadipour, Hassan; Shahnazi, Reza
2012-12-01
For engineering systems, uncertainties and time delays are two important issues that must be considered in control design. Uncertainties are often encountered in various dynamical systems due to modeling errors, measurement noise, linearization and approximations. Time delays have always been among the most difficult problems encountered in process control. In practical applications of feedback control, time delay arises frequently and can severely degrade closed-loop system performance and, in some cases, drive the system to instability. Therefore, stability analysis and controller synthesis for uncertain nonlinear time-delay systems are important both in theory and in practice, and many analytical techniques have been developed using delay-dependent Lyapunov functions. In the past decade the magnetic levitation (maglev) transportation system, as a new system with high functionality, has been the focus of numerous studies. However, maglev transportation systems are highly nonlinear and thus designing controllers for them is challenging. The main topic of this paper is to design an adaptive robust controller for maglev transportation systems with time delay, parametric uncertainties and external disturbances. In this paper, an adaptive robust control (ARC) is designed for this purpose. It should be noted that the adaptive gain is derived from the Lyapunov-Krasovskii synthesis method; therefore asymptotic stability is guaranteed.
NASA Astrophysics Data System (ADS)
Yin, Xiu-xing; Lin, Yong-gang; Li, Wei; Gu, Ya-jing; Lei, Peng-fei; Liu, Hong-wei
2015-11-01
A new electro-hydraulic pitch system is proposed to smooth the output power and drive-train torque fluctuations of wind turbines. This new pitch system employs a servo-valve-controlled hydraulic motor to enhance pitch control performance. The pitch system is represented by a state-space model with parametric uncertainties and nonlinearities. An adaptive back-stepping pitch angle controller is synthesised based on this state-space model to accurately achieve the desired pitch angle control regardless of such uncertainties and nonlinearities. This pitch angle controller includes a back-stepping procedure and an adaptation law to deal with such uncertainties and nonlinearities and hence to improve the final pitch control performance. The proposed pitch system and the designed pitch angle controller have been validated for achievable and efficient power and torque regulation performance by comparative experimental results under various operating conditions.
2013-01-01
Background The standard clinical protocol of image-guided IMRT for prostate carcinoma introduces isocenter relocation to restore the conformity of the multi-leaf collimator (MLC) segments to the target as seen in the cone-beam CT on the day of treatment. The large interfractional deformations of the clinical target volume (CTV) still require introduction of safety margins which leads to undesirably high rectum toxicity. Here we present further results from the 2-Step IMRT method which generates adaptable prostate IMRT plans using Beam Eye View (BEV) and 3D information. Methods Intermediate/high-risk prostate carcinoma cases are treated using Simultaneous Integrated Boost at the Universitätsklinikum Würzburg (UKW). Based on the planning CT a CTV is defined as the prostate and the base of seminal vesicles. The CTV is expanded by 10 mm resulting in the PTV; the posterior margin is limited to 7 mm. The Boost is obtained by expanding the CTV by 5 mm; overlap with the rectum is not allowed. Prescription doses to PTV and Boost are 60.1 and 74 Gy respectively, given in 33 fractions. We analyse the geometry of the structures of interest (SOIs): PTV, Boost, and rectum, and generate 2-Step IMRT plans to deliver three fluence steps: conformal to the target SOIs (S0), sparing the rectum (S1), and narrow segments compensating the underdosage in the target SOIs due to the rectum sparing (S2). The width of S2 segments is calculated for every MLC leaf pair based on the target and rectum geometry in the corresponding CT layer to achieve the best target coverage. The resulting segments are then fed into the DMPO optimizer of the Pinnacle treatment planning system for weight optimization and fine-tuning of the form, prior to final dose calculation using the collapsed cone algorithm. We adapt 2-Step IMRT plans to changed geometry whilst simultaneously preserving the number of initially planned Monitor Units (MU). The adaptation adds three further steps to the previous isocenter relocation: 1
Real time adaptive filtering for digital X-ray applications.
Bockenbach, Olivier; Mangin, Michel; Schuberth, Sebastian
2006-01-01
Over the last decade, many methods for adaptively filtering a data stream have been proposed. These methods have applications in two-dimensional imaging as well as in three-dimensional image reconstruction. Although the primary objective of this filtering technique is to reduce noise while avoiding blurring of the edges, there is growing interest in diagnostics, automated segmentation and surgery in enhancing the features contained in the image flow. Most of the methods proposed so far emerged from thorough studies of the physics of the considered modality and therefore show only a marginal capability to be extended across modalities. Moreover, adaptive filtering belongs to the family of processing-intensive algorithms. Existing technology has often forced simplifications and modality-specific optimizations to sustain the expected performance. In the specific case of real-time digital X-ray as used in surgery, the system has to sustain a throughput of 30 frames per second. In this study, we take a generalized approach to adaptive filtering based on multiple oriented filters. The filtering part is mapped to the embedded real-time image processing, while a user/application-defined adaptive recombination of the filter outputs allows the smoothing and edge-enhancement properties of the filter to be changed without changing the oriented filter parameters. We have implemented the filtering on a Cell Broadband Engine processor and the adaptive recombination on an off-the-shelf PC, connected via Gigabit Ethernet. This implementation is capable of filtering images of 512 × 512 pixels at a throughput in excess of 40 frames per second while allowing the parameters to be changed in real time. PMID:17354937
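The split described above, a fixed oriented-filter bank on the embedded processor plus a cheap adaptive recombination of its outputs, can be sketched as follows; the 3×3 kernels and weight vectors here are illustrative stand-ins, not the filters used in the paper:

```python
import numpy as np
from scipy.ndimage import convolve

def oriented_kernels():
    # Four simple 3x3 smoothing kernels oriented at 0, 45, 90 and 135
    # degrees (illustrative stand-ins for a real oriented filter bank).
    k0 = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], float) / 3.0  # horizontal
    k90 = k0.T                                                      # vertical
    k45 = np.eye(3) / 3.0                                           # diagonal
    k135 = np.fliplr(np.eye(3)) / 3.0                               # anti-diagonal
    return [k0, k45, k90, k135]

def filter_bank(image, kernels):
    # The expensive, fixed part: run every oriented filter once.
    return np.stack([convolve(image, k, mode="nearest") for k in kernels])

def recombine(responses, weights):
    # The cheap, adaptive part: a weighted sum of the precomputed
    # responses. Changing `weights` re-tunes smoothing vs. edge
    # enhancement without re-running the oriented filters.
    w = np.asarray(weights, float)
    w = w / w.sum()
    return np.tensordot(w, responses, axes=1)

image = np.random.default_rng(0).random((64, 64))
responses = filter_bank(image, oriented_kernels())
smooth = recombine(responses, [1, 1, 1, 1])   # isotropic smoothing
horiz = recombine(responses, [4, 1, 0, 1])    # favour horizontal structure
```

Because only `recombine` depends on the user-chosen weights, the parameter change is real-time cheap, mirroring the paper's division of labour between the embedded processor and the PC.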
NASA Astrophysics Data System (ADS)
Wang, Chenliang; Lin, Yan
2015-04-01
In this paper, an adaptive dynamic surface control scheme is proposed for a class of multi-input multi-output (MIMO) nonlinear time-varying systems. By fusing a bound estimation approach, a smooth function and a time-varying matrix factorisation, the obstacle caused by unknown time-varying parameters is circumvented. The proposed scheme is free of the problem of explosion of complexity and needs only one updated parameter at each design step. Moreover, all tracking errors can converge to predefined arbitrarily small residual sets with a prescribed convergence rate and maximum overshoot. Such features result in a simple adaptive controller which can be easily implemented in applications with less computational burden and satisfactory tracking performance. Simulation results are presented to illustrate the effectiveness of the proposed scheme.
From wavelets to adaptive approximations: time-frequency parametrization of EEG.
Durka, Piotr J
2003-01-01
This paper presents a summary of time-frequency analysis of the electrical activity of the brain (EEG). It covers in detail two major steps: the introduction of wavelets and adaptive approximations. The presented studies include time-frequency solutions to several standard research and clinical problems encountered in the analysis of evoked potentials, sleep EEG, epileptic activities, ERD/ERS and pharmaco-EEG. Based upon these results we conclude that the matching pursuit algorithm provides a unified parametrization of the EEG, applicable in a variety of experimental and clinical setups. This conclusion is followed by a brief discussion of the current state of the mathematical and algorithmic aspects of adaptive time-frequency approximations of signals. PMID:12605721
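The matching pursuit parametrization the paper builds on can be illustrated with a minimal greedy decomposition over a toy dictionary of unit-norm cosines (the actual EEG work uses Gabor atoms; this dictionary is only an illustration):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    # Greedy matching pursuit: at each step pick the dictionary atom
    # (unit-norm column) best correlated with the residual, record its
    # coefficient, and subtract its contribution from the residual.
    residual = signal.astype(float).copy()
    atoms, coeffs = [], []
    for _ in range(n_iter):
        corr = dictionary.T @ residual
        k = int(np.argmax(np.abs(corr)))
        atoms.append(k)
        coeffs.append(corr[k])
        residual -= corr[k] * dictionary[:, k]
    return atoms, coeffs, residual

# Toy dictionary: unit-norm cosines of distinct integer frequencies.
n = 128
t = np.arange(n)
freqs = np.arange(1, 20)
D = np.stack([np.cos(2 * np.pi * f * t / n) for f in freqs], axis=1)
D /= np.linalg.norm(D, axis=0)

signal = 3.0 * D[:, 4] + 1.0 * D[:, 11]   # a two-atom test signal
atoms, coeffs, residual = matching_pursuit(signal, D, n_iter=2)
```

On this orthogonal toy dictionary two iterations recover both atoms exactly; real EEG atoms overlap, which is why the algorithm is run for many more iterations in practice.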
Frequency Adaptability and Waveform Design for OFDM Radar Space-Time Adaptive Processing
Sen, Satyabrata; Glover, Charles Wayne
2012-01-01
We propose an adaptive waveform design technique for an orthogonal frequency division multiplexing (OFDM) radar signal employing a space-time adaptive processing (STAP) technique. We observe that there are inherent variabilities of the target and interference responses in the frequency domain. Therefore, the use of an OFDM signal can not only increase the frequency diversity of our system, but also improve the target detectability by adaptively modifying the OFDM coefficients in order to exploit the frequency-variabilities of the scenario. First, we formulate a realistic OFDM-STAP measurement model considering the sparse nature of the target and interference spectra in the spatio-temporal domain. Then, we show that the optimal STAP-filter weight-vector is equal to the generalized eigenvector corresponding to the minimum generalized eigenvalue of the interference and target covariance matrices. With numerical examples we demonstrate that the resultant OFDM-STAP filter-weights are adaptable to the frequency-variabilities of the target and interference responses, in addition to the spatio-temporal variabilities. Hence, by better utilizing the frequency variabilities, we propose an adaptive OFDM-waveform design technique, and consequently gain a significant amount of STAP-performance improvement.
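The stated optimality condition, that the STAP weight vector is the generalized eigenvector for the minimum generalized eigenvalue of the interference/target covariance pencil, can be sketched with SciPy's symmetric generalized eigensolver; the random covariance matrices below are placeholders for the actual OFDM-STAP model:

```python
import numpy as np
from scipy.linalg import eigh

def stap_weights(R_interference, R_target):
    # scipy.linalg.eigh(A, B) solves A w = lambda B w and returns
    # eigenvalues in ascending order, so column 0 of the eigenvector
    # matrix is the generalized eigenvector of the minimum eigenvalue.
    vals, vecs = eigh(R_interference, R_target)
    w = vecs[:, 0]
    return w / np.linalg.norm(w)

# Placeholder positive-definite "covariances" standing in for the
# interference and target spatio-temporal covariance matrices.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6)); R_i = A @ A.T + 6 * np.eye(6)
B = rng.standard_normal((6, 6)); R_t = B @ B.T + 6 * np.eye(6)
w = stap_weights(R_i, R_t)
```

Minimizing the generalized Rayleigh quotient w·R_i·w / w·R_t·w is exactly what this eigenvector achieves: least interference power per unit of target response.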
Closed loop adaptive control of spectrum-producing step using neural networks
Fu, C.Y.
1998-11-24
Characteristics of the plasma in a plasma-based manufacturing process step are monitored directly and in real time by observing the spectrum which it produces. An artificial neural network analyzes the plasma spectrum and generates control signals to control one or more of the process input parameters in response to any deviation of the spectrum beyond a narrow range. In an embodiment, a plasma reaction chamber forms a plasma in response to input parameters such as gas flow, pressure and power. The chamber includes a window through which the electromagnetic spectrum produced by a plasma in the chamber, just above the subject surface, may be viewed. The spectrum is conducted to an optical spectrometer which measures the intensity of the incoming optical spectrum at different wavelengths. The output of optical spectrometer is provided to an analyzer which produces a plurality of error signals, each indicating whether a respective one of the input parameters to the chamber is to be increased or decreased. The microcontroller provides signals to control respective controls, but these lines are intercepted and first added to the error signals, before being provided to the controls for the chamber. The analyzer can include a neural network and an optional spectrum preprocessor to reduce background noise, as well as a comparator which compares the parameter values predicted by the neural network with a set of desired values provided by the microcontroller. 7 figs.
Discrete-time minimal control synthesis adaptive algorithm
NASA Astrophysics Data System (ADS)
di Bernardo, M.; di Gennaro, F.; Olm, J. M.; Santini, S.
2010-12-01
This article proposes a discrete-time Minimal Control Synthesis (MCS) algorithm for a class of single-input single-output discrete-time systems written in controllable canonical form. As it happens with the continuous-time MCS strategy, the algorithm arises from the family of hyperstability-based discrete-time model reference adaptive controllers introduced in (Landau, Y. (1979), Adaptive Control: The Model Reference Approach, New York: Marcel Dekker, Inc.) and is able to ensure tracking of the states of a given reference model with minimal knowledge about the plant. The control design shows robustness to parameter uncertainties, slow parameter variation and matched disturbances. Furthermore, it is proved that the proposed discrete-time MCS algorithm can be used to control discretised continuous-time plants with the same performance features. Contrary to previous discrete-time implementations of the continuous-time MCS algorithm, here a formal proof of asymptotic stability is given for generic n-dimensional plants in controllable canonical form. The theoretical approach is validated by means of simulation results.
Speech perception at positive signal-to-noise ratios using adaptive adjustment of time compression.
Schlueter, Anne; Brand, Thomas; Lemke, Ulrike; Nitzschner, Stefan; Kollmeier, Birger; Holube, Inga
2015-11-01
Positive signal-to-noise ratios (SNRs) characterize listening situations most relevant for hearing-impaired listeners in daily life and should therefore be considered when evaluating hearing aid algorithms. For this, a speech-in-noise test was developed and evaluated, in which the background noise is presented at fixed positive SNRs and the speech rate (i.e., the time compression of the speech material) is adaptively adjusted. In total, 29 younger and 12 older normal-hearing, as well as 24 older hearing-impaired listeners took part in repeated measurements. Younger normal-hearing and older hearing-impaired listeners conducted one of two adaptive methods which differed in adaptive procedure and step size. Analysis of the measurements with regard to list length and estimation strategy for thresholds resulted in a practical method measuring the time compression for 50% recognition. This method uses time-compression adjustment and step sizes according to Versfeld and Dreschler [(2002). J. Acoust. Soc. Am. 111, 401-408], with sentence scoring, lists of 30 sentences, and a maximum likelihood method for threshold estimation. Evaluation of the procedure showed that older participants obtained higher test-retest reliability compared to younger participants. Depending on the group of listeners, one or two lists are required for training prior to data collection. PMID:26627804
Shiu, Cheng-Shi; Chen, Wei-Ti; Simoni, Jane; Fredriksen-Goldsen, Karen; Zhang, Fujie; Zhou, Hongxin
2013-01-01
China is considered to be the new frontier of the global AIDS pandemic. Although effective treatment for HIV is becoming widely available in China, adherence to treatment remains a challenge. This study aimed to adapt an intervention promoting HIV-medication adherence—favorably evaluated in the West—for Chinese HIV-positive patients. The adaptation process was theory-driven and covered several key issues of cultural adaptation. We considered the importance of interpersonal relationships and family in China and cultural notions of health. Using an evidence-based treatment protocol originally designed for Western HIV-positive patients, we developed an 11-step Chinese Life-Steps program with an additional culture-specific intervention option. We describe in detail how the cultural elements were incorporated into the intervention and put into practice at each stage. Clinical considerations are also outlined and followed by two case examples that are provided to illustrate our application of the intervention. Finally, we discuss practical and research issues and limitations emerging from our field experiments in a HIV clinic in Beijing. The intervention was tailored to address both universal and culturally specific barriers to adherence and is readily applicable to generalized clinical settings. This evidence-based intervention provides a case example of the process of adapting behavioral interventions to culturally diverse communities with limited resources. PMID:23667305
Adaptive Sensing of Time Series with Application to Remote Exploration
NASA Technical Reports Server (NTRS)
Thompson, David R.; Cabrol, Nathalie A.; Furlong, Michael; Hardgrove, Craig; Low, Bryan K. H.; Moersch, Jeffrey; Wettergreen, David
2013-01-01
We address the problem of adaptive information-optimal data collection in time series. Here a remote sensor or explorer agent throttles its sampling rate in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility -- all collected datapoints lie in the past, but its resource allocation decisions require predicting far into the future. Our solution is to continually fit a Gaussian process model to the latest data and optimize the sampling plan online to maximize information gain. We compare the performance characteristics of stationary and nonstationary Gaussian process models. We also describe an application based on geologic analysis during planetary rover exploration. Here adaptive sampling can improve coverage of localized anomalies and potentially benefit the mission science yield of long autonomous traverses.
Sparse time-frequency decomposition based on dictionary adaptation.
Hou, Thomas Y; Shi, Zuoqiang
2016-04-13
In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis that is used to decompose the signal is not known a priori. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, the optimization problem can be seen as a dictionary adaptation problem, in which the dictionary is adapted to one signal rather than to a training set as in dictionary learning. This dictionary adaptation problem is solved iteratively using the augmented Lagrangian multiplier (ALM) method. We further accelerate the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers or noise pollution, and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions. PMID:26953172
Stability analysis and time-step limits for a Monte Carlo Compton-scattering method
Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.
2010-05-20
A Monte Carlo method for simulating Compton scattering in high energy density applications has been presented that models the photon-electron collision kinematics exactly [E. Canfield, W.M. Howard, E.P. Liang, Inverse Comptonization by one-dimensional relativistic electrons, Astrophys. J. 323 (1987) 565]. However, implementing this technique typically requires an explicit evaluation of the material temperature, which can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and develop two time-step limits that avoid undesirable behavior. The first time-step limit prevents instabilities, while the second, more restrictive time-step limit avoids both instabilities and nonphysical oscillations. With a set of numerical examples, we demonstrate the efficacy of these time-step limits.
Robustness via Run-Time Adaptation of Contingent Plans
NASA Technical Reports Server (NTRS)
Bresina, John L.; Washington, Richard; Norvig, Peter (Technical Monitor)
2000-01-01
In this paper, we discuss our approach to making the behavior of planetary rovers more robust for the purpose of increased productivity. Due to the inherent uncertainty in rover exploration, the traditional approach to rover control is conservative, limiting the autonomous operation of the rover and sacrificing performance for safety. Our objective is to increase the science productivity possible within a single uplink by allowing the rover's behavior to be specified with flexible, contingent plans and by employing dynamic plan adaptation during execution. We have deployed a system exhibiting flexible, contingent execution; this paper concentrates on our ongoing efforts on plan adaptation. Plans can be revised in two ways: plan steps may be deleted, with execution continuing with the plan suffix; and the current plan may be merged with an "alternate plan" from an on-board library. The plan revision action is chosen to maximize the expected utility of the plan. Plan merging and action deletion constitute a more conservative alternative to a general-purpose planning system; in return, our approach is more efficient and more easily verified, two important criteria for deployed rovers.
Stance time and step width variability have unique contributing impairments in older persons.
Brach, Jennifer S; Studenski, Stephanie; Perera, Subashan; VanSwearingen, Jessie M; Newman, Anne B
2008-04-01
Gait variability may have multiple causes. We hypothesized that central nervous system (CNS) impairments would affect motor control and be manifested as increased stance time and step length variability, while sensory impairments would affect balance and be manifested as increased step width variability. Older adults (mean ± standard deviation (S.D.) age = 79.4 ± 4.1, n = 558) from the Pittsburgh site of the Cardiovascular Health Study participated. The S.D. across steps was the indicator of gait variability, determined for three gait measures (step length, stance time and step width) using a computerized walkway. Impairment measures included CNS function (modified mini-mental state examination, Trails A and B, Digit Symbol Substitution, finger tapping), sensory function (lower extremity (LE) vibration, vision), strength (grip strength, repeated chair stands), mood, and LE pain. Linear regression models were fit for the three gait variability characteristics using the impairment measures as independent variables, adjusted for age, race, gender, and height. Analyses were repeated stratified by gait speed. All measures of CNS impairment were directly related to stance time variability (p<0.01), with increased CNS impairment associated with increased stance time variability. CNS impairments were not related to step length or step width variability. Both sensory impairments were inversely related to step width variability (p<0.01) but not to step length or stance time variability. CNS impairments affected stance time variability especially in slow walkers, while sensory impairments affected step width variability in fast walkers. Specific patterns of gait variability may imply different underlying causes. Types of gait variability should be specified. Interventions may be targeted at specific types of gait variability. PMID:17632004
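The analysis strategy, covariate-adjusted linear regression of a gait-variability measure on impairment measures, can be sketched on simulated data; the variable names and effect sizes below are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Hypothetical covariates mirroring the study's design: one CNS measure,
# one sensory measure, and adjustment covariates (age, height).
cns = rng.standard_normal(n)
sensory = rng.standard_normal(n)
age = rng.normal(79, 4, n)
height = rng.normal(165, 8, n)

# Simulated outcome: stance time variability driven by the CNS measure
# (effect 0.5) plus a small age effect and noise; the sensory measure
# has no true effect, matching the paper's reported pattern.
stance_sd = 0.5 * cns + 0.02 * (age - 79) + rng.normal(0, 0.3, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), cns, sensory, age, height])
beta, *_ = np.linalg.lstsq(X, stance_sd, rcond=None)
```

The fitted coefficient on `cns` recovers the simulated effect while the coefficient on `sensory` stays near zero, which is the qualitative shape of the paper's stance-time finding.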
Finn, John M.
2015-03-01
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure-preserving flows, the integration of magnetic field lines in a plasma, and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.
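As a minimal illustration of the implicit midpoint (IM) scheme on a divergence-free field (a rigid rotation, far simpler than the fields studied in the paper), the implicit step can be solved by fixed-point iteration; note how the field-line radius, a quadratic invariant, is preserved over many steps:

```python
import numpy as np

def implicit_midpoint_step(f, x, h, tol=1e-13, max_iter=50):
    # One implicit-midpoint step x' = x + h * f((x + x') / 2),
    # solved by fixed-point iteration (adequate for small h).
    x_new = x + h * f(x)                      # explicit Euler predictor
    for _ in range(max_iter):
        x_next = x + h * f(0.5 * (x + x_new))
        if np.linalg.norm(x_next - x_new) < tol:
            return x_next
        x_new = x_next
    return x_new

def B(x):
    # A toy divergence-free field: rigid rotation about the z axis.
    return np.array([-x[1], x[0], 0.0])

x = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    x = implicit_midpoint_step(B, x, 0.05)
radius = np.hypot(x[0], x[1])   # quadratic invariant of the flow
```

For this linear field IM reduces to a Cayley transform, which is orthogonal, so the radius is preserved to the fixed-point tolerance; preserving such invariants is the discrete analogue of the KAM-tori preservation discussed in the abstract.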
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
The First Steps of Adaptation of Escherichia coli to the Gut Are Dominated by Soft Sweeps
Lourenço, Marta; Bergman, Marie-Louise; Sobral, Daniel; Demengeot, Jocelyne; Xavier, Karina B.; Gordo, Isabel
2014-01-01
The accumulation of adaptive mutations is essential for survival in novel environments. However, in clonal populations with a high mutational supply, the power of natural selection is expected to be limited. This is due to clonal interference - the competition of clones carrying different beneficial mutations - which leads to the loss of many small-effect mutations and fixation of large-effect ones. If interference is abundant, then mechanisms for horizontal transfer of genes, which allow the immediate combination of beneficial alleles in a single background, are expected to evolve. However, the relevance of interference in natural complex environments, such as the gut, is poorly known. To address this issue, we have developed an experimental system which allows us to uncover the nature of the adaptive process as Escherichia coli adapts to the mouse gut. This system shows the invasion of beneficial mutations in the bacterial populations and demonstrates the pervasiveness of clonal interference. The observed dynamics of change in frequency of beneficial mutations are consistent with soft sweeps, where different adaptive mutations with similar phenotypes arise repeatedly on different haplotypes without reaching fixation. Despite the complexity of this ecosystem, the genetic basis of the adaptive mutations revealed a striking parallelism in independently evolving populations. This was mainly characterized by the insertion of transposable elements in both coding and regulatory regions of a few genes. Interestingly, in most populations we observed a complete phenotypic sweep without loss of genetic variation. The intense clonal interference during adaptation to the gut environment, here demonstrated, may be important for our understanding of the levels of strain diversity of E. coli inhabiting the human gut microbiota and of its recombination rate. PMID:24603313
Real-time Adaptive Control Using Neural Generalized Predictive Control
NASA Technical Reports Server (NTRS)
Haley, Pam; Soloway, Don; Gold, Brian
1999-01-01
The objective of this paper is to demonstrate the feasibility of a Nonlinear Generalized Predictive Control algorithm by showing real-time adaptive control on a plant with relatively fast time-constants. Generalized Predictive Control has classically been used in process control, where linear control laws were formulated for plants with relatively slow time-constants. The plant of interest for this paper is a magnetic levitation device that is nonlinear and open-loop unstable. In this application, the reference model of the plant is a neural network that has an embedded nominal linear model in the network weights. The control based on the linear model provides initial stability at the beginning of network training. With a neural network, the control laws are nonlinear, and online adaptation of the model is possible to capture unmodeled or time-varying dynamics. Newton-Raphson is the minimization algorithm. Newton-Raphson requires the calculation of the Hessian, but even with this computational expense the low iteration rate makes this a viable algorithm for real-time control.
Adaptive Sampling of Time Series During Remote Exploration
NASA Technical Reports Server (NTRS)
Thompson, David R.
2012-01-01
This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (e.g., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches. Most common GP models
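The variance-driven sample selection underlying this information-gain approach can be sketched with a small stationary GP; the kernel, length scale and noise level here are arbitrary illustrative choices, not the system's tuned model:

```python
import numpy as np

def rbf(a, b, length=0.25):
    # Squared-exponential covariance between two sets of 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def next_sample_time(t_obs, t_candidates, noise=1e-4):
    # Pick the candidate time with the largest GP predictive variance,
    # i.e. the most informative next observation under a stationary
    # squared-exponential model (a simplified stand-in for the full
    # information-gain criterion described in the abstract).
    K = rbf(t_obs, t_obs) + noise * np.eye(len(t_obs))
    Ks = rbf(t_candidates, t_obs)
    var = 1.0 - np.einsum("ij,ij->i", Ks @ np.linalg.inv(K), Ks)
    return t_candidates[int(np.argmax(var))]

# Observations cluster at the start and end; the biggest knowledge gap
# is in the middle, so the agent should sample there next.
t_obs = np.array([0.0, 0.1, 0.2, 0.9, 1.0])
t_candidates = np.linspace(0.0, 1.0, 101)
t_next = next_sample_time(t_obs, t_candidates)
```

A self-throttling sensor would repeat this selection online, raising its cadence wherever the model's predictive uncertainty (and hence expected information gain) is largest.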
Modeling solute transport in distribution networks with variable demand and time step sizes.
Peyton, Chad E.; Bilisoly, Roger Lee; Buchberger, Steven G.; McKenna, Sean Andrew; Yarrington, Lane
2004-06-01
The effect of variable demands at short time scales on the transport of a solute through a water distribution network has not previously been studied. We simulate flow and transport in a small water distribution network using EPANET to explore the effect of variable demand on solute transport across a range of hydraulic time step scales from 1 minute to 2 hours. We show that variable demands at short time scales can have the following effects: smoothing of a pulse of tracer injected into a distribution network and increasing the variability of both the transport pathway and transport timing through the network. Variable demands are simulated for these different time step sizes using a previously developed Poisson rectangular pulse (PRP) demand generator that considers demand at a node to be a combination of exponentially distributed arrival times with log-normally distributed intensities and durations. Solute is introduced at a tank and at three different network nodes and concentrations are modeled through the system using the Lagrangian transport scheme within EPANET. The transport equations within EPANET assume perfect mixing of the solute within a parcel of water and therefore physical dispersion cannot occur. However, variation in demands along the solute transport path contribute to both removal and distortion of the injected pulse. The model performance measures examined are the distribution of the Reynolds number, the variation in the center of mass of the solute across time, and the transport path and timing of the solute through the network. Variation in all three performance measures is greatest at the shortest time step sizes. As the scale of the time step increases, the variability in these performance measures decreases. The largest time steps produce results that are inconsistent with the results produced by the smaller time steps.
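A minimal sketch of a Poisson rectangular pulse (PRP) demand generator of the kind described, with exponential inter-arrival times and log-normally distributed intensities and durations; the parameter values are illustrative assumptions, not the calibrated values from the study:

```python
import numpy as np

def prp_demands(horizon_s, rate_per_s, mu_intensity, sigma_intensity, seed=0):
    # Poisson rectangular pulse (PRP) generator: pulse start times form
    # a Poisson process (exponential inter-arrival times); each pulse
    # carries a log-normal intensity and a log-normal duration.
    rng = np.random.default_rng(seed)
    pulses = []
    t = rng.exponential(1.0 / rate_per_s)
    while t < horizon_s:
        intensity = rng.lognormal(mu_intensity, sigma_intensity)  # e.g. L/s
        duration = rng.lognormal(np.log(60.0), 0.5)               # e.g. s
        pulses.append((t, intensity, duration))
        t += rng.exponential(1.0 / rate_per_s)
    return pulses

def demand_at(pulses, t):
    # Total nodal demand at time t: sum of all currently active pulses.
    return sum(q for (start, q, dur) in pulses if start <= t < start + dur)

# One hour of demand with, on average, one pulse every five minutes.
pulses = prp_demands(3600.0, rate_per_s=1 / 300.0,
                     mu_intensity=np.log(0.1), sigma_intensity=0.3)
```

Aggregating `demand_at` over hydraulic time steps of different lengths reproduces the paper's central observation: short steps retain the pulse-level variability, while long steps average it away.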
Adaptive control of systems with unknown time delays
NASA Astrophysics Data System (ADS)
Nelson, James P.
Control systems, on Earth or in outer space, may exhibit time delays in their dynamic behavior. Aerospace control systems must be able to operate in the presence of time delays both internal to the system and in its inputs and outputs. These delays are often introduced by systems controlled through a network or by information, energy or mass transport phenomena, but can also be caused by computer processing time or by the accumulation of time lags in a number of simple dynamic systems connected in series. When a dynamic system is subject to a time delay, unlike other parameters, the delay affects the temporal characteristics of the system, and exact control over system operation cannot be strictly implemented. Systems with significant time delays are difficult to control using standard feedback controllers. The United States Air Force Research Laboratory (AFRL) is considering the use of router-based data networks on board next-generation satellites and in decentralized control architectures. This approach has the potential to introduce non-constant and non-deterministic communication delays into feedback control loops that make use of these data networks. The desire for rapid deployment of new spacecraft architectures will also introduce many other control issues, as the rigorous measurement, calibration and performance tests usually conducted on spacecraft systems to develop a highly precise dynamic model will need to be drastically shortened due to the desired abbreviated build and launch schedule. Due to limited testing and system identification, the spacecraft model will have uncertainties/perturbations relative to the actual plant. This requires a controller that can robustly control the nonlinear dynamic model with limited plant knowledge. The problems created by the control of time-delay systems and the limited plant knowledge of the systems of interest lead us to the concept of adaptive control. Adaptive control makes adjustment of the controllers
NASA Astrophysics Data System (ADS)
Huang, Yu
Solar energy has become one of the major renewable energy options owing to its abundance and accessibility. Because of its intermittent nature, there is strong demand for Maximum Power Point Tracking (MPPT) techniques when a photovoltaic (PV) system is used to extract energy from sunlight. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at practical operating conditions. First, a practical PV system model is studied, including the series and shunt resistances that are neglected in some research. In the proposed algorithm, the duty ratio of a boost DC-DC converter is the perturbed variable, exploiting input-impedance conversion to adjust the operating voltage. Based on this control strategy, an adaptive duty-ratio step-size P&O algorithm is proposed, with major modifications for sharp insolation changes as well as low-insolation scenarios. Matlab/Simulink simulations of the PV model, the boost-converter control strategy, and the various MPPT processes are conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and a detailed analysis of sharp insolation changes, low-insolation conditions, and continuous insolation variation.
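The core P&O decision rule with an adaptive duty-ratio step can be sketched as follows. This is an illustrative reconstruction, not the thesis code; it assumes a boost converter in which raising the duty ratio lowers the PV operating voltage, and the gain `k` and duty limits are hypothetical tuning constants:

```python
# Illustrative reconstruction of an adaptive-step P&O update (not the thesis
# code).  Assumes a boost converter where raising the duty ratio lowers the
# PV operating voltage; k, d_min, d_max are assumed tuning constants.
def adaptive_po_step(v, p, v_prev, p_prev, duty, k=0.05, d_min=0.0, d_max=1.0):
    dp = p - p_prev                            # change in PV power
    dv = v - v_prev                            # change in PV voltage
    step = k * abs(dp) / max(abs(dv), 1e-9)    # adaptive duty-ratio step size
    if dp * dv > 0:   # left of the MPP: raise voltage, so lower the duty
        duty -= step
    else:             # past the MPP: reverse the perturbation direction
        duty += step
    return min(max(duty, d_min), d_max)        # keep the duty ratio physical
```

When the operating point passes the maximum power point, the sign of ΔP·ΔV flips and the perturbation reverses; the step size shrinks automatically as the power-voltage curve flattens near the peak, which is the essence of an adaptive step-size P&O scheme.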
Halsey, Lewis G; Watkins, David A R; Duggan, Brendan M
2012-01-01
Stairway climbing provides a ubiquitous and inconspicuous method of burning calories. While typically two strategies are employed for climbing stairs, climbing one stair step per stride or two steps per stride, research to date has not clarified whether there are any differences in energy expenditure between them. Fourteen participants took part in two stair-climbing trials in which measures of heart rate were used to estimate energy expenditure during stairway ascent at speeds chosen by the participants. The relationship between rate of oxygen consumption (V̇O2) and heart rate was calibrated for each participant using an inclined treadmill. The trials involved climbing up and down a 14.05 m high stairway, either ascending one step per stride or ascending two stair steps per stride. Single-step climbing used 8.5±0.1 kcal min−1, whereas double-step climbing used 9.2±0.1 kcal min−1. These estimates are similar to equivalent measures in all previous studies, which all directly measured V̇O2. The present study's findings indicate that (1) treadmill-calibrated heart rate recordings can be used as a valid alternative to respirometry to ascertain rate of energy expenditure during stair climbing; (2) two-step climbing invokes a higher rate of energy expenditure; however, one-step climbing is energetically more expensive in total over the entirety of a stairway. Therefore, to expend the maximum number of calories when climbing a set of stairs, the single-step strategy is better. PMID:23251455
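The heart-rate calibration described above amounts to fitting, per participant, a least-squares line relating oxygen consumption to heart rate and then converting the predicted oxygen uptake to kcal per minute. A minimal sketch of that idea; the 5 kcal per litre of O2 caloric equivalent is an assumed textbook round figure, not a value taken from the paper:

```python
import numpy as np

# Per-participant calibration sketch: fit a least-squares line relating
# VO2 (L/min) to heart rate (beats/min) from the treadmill session, then
# convert predicted VO2 to kcal/min.  The 5 kcal per litre of O2 caloric
# equivalent is an assumed round figure, not the paper's value.
def calibrate(hr, vo2):
    a, b = np.polyfit(hr, vo2, 1)   # vo2 ~= a*hr + b
    return a, b

def energy_rate(hr, a, b, kcal_per_litre=5.0):
    return (a * hr + b) * kcal_per_litre
```

With the calibration in hand, heart rate logged on the stairway converts directly to an energy-expenditure rate, which is how the 8.5 and 9.2 kcal min−1 figures above can be obtained without respirometry during the climbs themselves.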
First-Step Mutations during Adaptation Restore the Expression of Hundreds of Genes
Rodríguez-Verdugo, Alejandra; Tenaillon, Olivier; Gaut, Brandon S.
2016-01-01
The temporal change of phenotypes during the adaptive process remains largely unexplored, as do the genetic changes that affect these phenotypic changes. Here we focused on three mutations that rose to high frequency in the early stages of adaptation within 12 Escherichia coli populations subjected to thermal stress (42 °C). All the mutations were in the rpoB gene, which encodes the RNA polymerase beta subunit. For each mutation, we measured the growth curves and gene expression (mRNAseq) of clones at 42 °C. We also compared growth and gene expression with their ancestor under unstressed (37 °C) and stressed conditions (42 °C). Each of the three mutations changed the expression of hundreds of genes and conferred large fitness advantages, apparently through the restoration of global gene expression from the stressed toward the prestressed state. These three mutations had a similar effect on gene expression as another single mutation in a distinct domain of the rpoB protein. Finally, we compared the phenotypic characteristics of one mutant, I572L, with two high-temperature adapted clones that have this mutation plus additional background mutations. The background mutations increased fitness, but they did not substantially change gene expression. We conclude that early mutations in a global transcriptional regulator cause extensive changes in gene expression, many of which are likely under positive selection for their effect in restoring the prestress physiology. PMID:26500250
Adaptive time-frequency parametrization of epileptic spikes
NASA Astrophysics Data System (ADS)
Durka, Piotr J.
2004-05-01
Adaptive time-frequency approximations of signals have proven to be a valuable tool in electroencephalogram (EEG) analysis and research, where it is believed that oscillatory phenomena play a crucial role in the brain’s information processing. This paper extends this paradigm to the nonoscillating structures such as the epileptic EEG spikes, and presents the advantages of their parametrization in general terms such as amplitude and half-width. A simple detector of epileptic spikes in the space of these parameters, tested on a limited data set, gives very promising results. It also provides a direct distinction between randomly occurring spikes or spike/wave complexes and rhythmic discharges.
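As an illustration of describing a nonoscillating structure "in general terms such as amplitude and half-width", the crude sample-based estimator below reads off a spike's peak amplitude and full width at half maximum. It is not the paper's adaptive time-frequency decomposition, and it assumes a single dominant spike in the analyzed window:

```python
import numpy as np

# Crude description of a single nonoscillating spike by amplitude and width
# (an illustration of the parametrization idea, not the paper's adaptive
# time-frequency decomposition).
def spike_params(t, x):
    i = int(np.argmax(np.abs(x)))
    amp = float(x[i])                        # signed peak amplitude
    above = np.abs(x) >= abs(amp) / 2.0      # samples above half maximum
    idx = np.where(above)[0]
    width = float(t[idx[-1]] - t[idx[0]])    # full width at half maximum
    return amp, width
```

Detection in such a parameter space then reduces to simple thresholds on amplitude and width, which is what makes the parametrization attractive for distinguishing spikes from background activity.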
Modified Chebyshev pseudospectral method with O(N^-1) time step restriction
NASA Technical Reports Server (NTRS)
Kosloff, Dan; Tal-Ezer, Hillel
1989-01-01
The extreme eigenvalues of the Chebyshev pseudospectral differentiation operator are O(N^2), where N is the number of grid points. As a result, the allowable time step in an explicit time-marching algorithm is O(N^-2), which in many cases is much below the time step dictated by the physics of the partial differential equation. A new set of interpolating points is introduced such that the eigenvalues of the differentiation operator are O(N) and the allowable time step is O(N^-1). The properties of the new algorithm are similar to those of the Fourier method. The new algorithm also provides a highly accurate solution for non-periodic boundary value problems.
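The O(N^2) eigenvalue growth that motivates this work is easy to reproduce with the standard Chebyshev differentiation matrix on Gauss-Lobatto points (the classical construction, not the paper's modified point set):

```python
import numpy as np

# Standard Chebyshev differentiation matrix on Gauss-Lobatto points
# (the classical construction, following Trefethen's "cheb"); its extreme
# eigenvalues grow like O(N^2), forcing the O(N^-2) explicit time step.
def cheb(N):
    x = np.cos(np.pi * np.arange(N + 1) / N)                       # nodes
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))                # off-diagonal
    D = D - np.diag(D.sum(axis=1))                                 # diagonal
    return D, x
```

Doubling N roughly quadruples the spectral radius of the operator with one boundary condition imposed (deleting the first row and column), which is why the explicit time step must shrink like O(N^-2) for the classical points.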
2010-01-01
Background Research questionnaires are not always translated appropriately before they are used in new temporal, cultural or linguistic settings. The results based on such instruments may therefore not accurately reflect what they are supposed to measure. This paper aims to illustrate the process and required steps involved in the cross-cultural adaptation of a research instrument, using the adaptation process of an attitudinal instrument as an example. Methods A questionnaire was needed for the implementation of a study in Norway in 2007. There were no appropriate instruments available in Norwegian, so an Australian-English instrument was cross-culturally adapted. Results The adaptation process included investigation of conceptual and item equivalence. Two forward and two back-translations were synthesized and compared by an expert committee. Thereafter the instrument was pretested and adjusted accordingly. The final questionnaire was administered to opioid maintenance treatment staff (n=140) and harm reduction staff (n=180). The overall response rate was 84%. The original instrument failed confirmatory analysis. Instead a new two-factor scale was identified and found valid in the new setting. Conclusions The failure of the original scale highlights the importance of adapting instruments to current research settings. It also emphasizes the importance of ensuring that concepts within an instrument are equal between the original and target language, time and context. If the described stages in the cross-cultural adaptation process had been omitted, the findings would have been misleading, even if presented with apparent precision. Thus, it is important to consider possible barriers when making a direct comparison between different nations, cultures and times. PMID:20144247
Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul
2013-01-01
Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format. PMID:23685876
2015-01-01
When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts. PMID:24555448
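One widely used member of the family of splitting schemes discussed here is the BAOAB splitting of Leimkuhler and Matthews, which interleaves velocity-Verlet-style kicks and drifts with an exact Ornstein-Uhlenbeck velocity update. A minimal one-dimensional sketch follows; it is illustrative and not necessarily the particular splitting the paper recommends, and the unit constants are assumptions:

```python
import math, random

# Sketch of the BAOAB splitting for Langevin dynamics (Leimkuhler-Matthews):
# B = half kick, A = half drift, O = exact Ornstein-Uhlenbeck velocity update.
# Illustrative only; not necessarily the splitting the paper recommends.
def baoab(x, v, force, dt, mass=1.0, gamma=1.0, kT=1.0):
    c1 = math.exp(-gamma * dt)
    c2 = math.sqrt(kT / mass * (1.0 - c1 * c1))
    v += 0.5 * dt * force(x) / mass            # B: half kick
    x += 0.5 * dt * v                          # A: half drift
    v = c1 * v + c2 * random.gauss(0.0, 1.0)   # O: exact OU update
    x += 0.5 * dt * v                          # A: half drift
    v += 0.5 * dt * force(x) / mass            # B: half kick
    return x, v
```

For a harmonic force f(x) = -x with kT = 1, the long-run configurational variance should approach kT/k = 1, which BAOAB reproduces accurately even at fairly large time steps; this configurational accuracy is one of the desiderata weighed in the study above.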
McClarren, Ryan G.; Urbatsch, Todd J.
2009-09-01
In this paper we develop a robust implicit Monte Carlo (IMC) algorithm based on more accurately updating the linearized equilibrium radiation energy density. The method does not introduce oscillations in the solution and has the same limit as Δt → ∞ as the standard Fleck and Cummings IMC method. Moreover, the approach we introduce can be trivially added to current implementations of IMC by changing the definition of the Fleck factor. Using this new method we develop an adaptive scheme that uses either standard IMC or the modified method, basing the adaptation on a zero-dimensional problem solved in each cell. Numerical results demonstrate that the new method can avoid the nonphysical overheating that occurs in standard IMC when the time step is large. The method also leads to decreased noise in the material temperature at the cost of a potential increase in the radiation temperature noise.
Ejupi, Andreas; Brodie, Matthew; Gschwind, Yves J; Schoene, Daniel; Lord, Stephen; Delbaere, Kim
2014-01-01
Accidental falls remain an important problem in older people. Stepping is a common task to avoid a fall and requires good interplay between sensory functions, central processing and motor execution. Increased choice stepping reaction time has been associated with recurrent falls in older people. The aim of this study was to examine if a sensor-based Exergame Choice Stepping Reaction Time test can successfully discriminate older fallers from non-fallers. The stepping test was conducted in a cohort of 104 community-dwelling older people (mean age: 80.7 ± 7.0 years). Participants were asked to step laterally as quickly as possible after a light stimulus appeared on a TV screen. Spatial and temporal measurements of the lower and upper body were derived from a low-cost and portable 3D-depth sensor (i.e. Microsoft Kinect) and 3D-accelerometer. Fallers had a slower stepping reaction time (970 ± 228 ms vs. 858 ± 123 ms, P = 0.001) and a slower reaction of their upper body (719 ± 289 ms vs. 631 ± 166 ms, P = 0.052) compared to non-fallers. It took fallers significantly longer than non-fallers to recover their balance after initiating the step (2147 ± 800 ms vs. 1841 ± 591 ms, P = 0.029). This study demonstrated that a sensor-based, low-cost and easy to administer stepping test, with the potential to be used in clinical practice or regular unsupervised home assessments, was able to identify significant differences between performances by fallers and non-fallers. PMID:25571596
Time-step limits for a Monte Carlo Compton-scattering method
Densmore, Jeffery D; Warsa, James S; Lowrie, Robert B
2008-01-01
Compton scattering is an important aspect of radiative transfer in high energy density applications. In this process, the frequency and direction of a photon are altered by colliding with a free electron. The change in frequency of a scattered photon results in an energy exchange between the photon and target electron and energy coupling between radiation and matter. Canfield, Howard, and Liang have presented a Monte Carlo method for simulating Compton scattering that models the photon-electron collision kinematics exactly. However, implementing their technique in multiphysics problems that include the effects of radiation-matter energy coupling typically requires evaluating the material temperature at its beginning-of-time-step value. This explicit evaluation can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and present time-step limits that avoid instabilities and nonphysical oscillations by considering a spatially independent, purely scattering radiative-transfer problem. Examining a simplified problem is justified because it isolates the effects of Compton scattering, and existing Monte Carlo techniques can robustly model other physics (such as absorption, emission, sources, and photon streaming). Our analysis begins by simplifying the equations that are solved via Monte Carlo within each time step using the Fokker-Planck approximation. Next, we linearize these approximate equations about an equilibrium solution such that the resulting linearized equations describe perturbations about this equilibrium. We then solve these linearized equations over a time step and determine the corresponding eigenvalues, quantities that can predict the behavior of solutions generated by a Monte Carlo simulation as a function of time-step size and other physical parameters. With these results, we develop our time-step limits. This approach is similar to our recent investigation of time discretizations for the
Robust time and frequency domain estimation methods in adaptive control
NASA Technical Reports Server (NTRS)
Lamaire, Richard Orville
1987-01-01
A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
Enabling fast, stable and accurate peridynamic computations using multi-time-step integration
Lindsay, P.; Parks, M. L.; Prakash, A.
2016-04-13
Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.
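A generic multi-time-step subcycling loop conveys the general idea (this is a schematic only, not the authors' peridynamic coupling algorithm): the coarse subdomain takes one large step, and the fine subdomain takes m substeps with the interface data interpolated in time:

```python
# Generic multi-time-step subcycling sketch (schematic only; not the paper's
# peridynamic coupling algorithm).  Subdomain A advances with one large step
# dt; subdomain B takes m substeps of dt/m, with A's interface state linearly
# interpolated in time.
def advance_coupled(uA, uB, stepA, stepB, interface, dt, m):
    iface_old = interface(uA)
    uA_new = stepA(uA, dt)                 # one coarse step for subdomain A
    iface_new = interface(uA_new)
    for j in range(m):                     # m fine substeps for subdomain B
        s = (j + 1) / m                    # interpolation fraction in (0, 1]
        bc = (1.0 - s) * iface_old + s * iface_new
        uB = stepB(uB, dt / m, bc)
    return uA_new, uB
```

The coupling details (overlap handling, compatibility constraints, momentum balance at the interface) are exactly where the stability and accuracy analysis cited above comes in; the sketch only shows the time-step hierarchy.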
Muscle contraction: the step-size distance and the impulse-time per ATP.
Worthington, C R; Elliott, G F
1996-02-01
We derive the step-size distance and the impulse time per ATP split from a consideration of Hill's energy rate equation coupled with the enthalpy available per ATP split. This definition of the step-size distance is model-independent; the distance is calculated to have a maximum of 17 Å at no load and to reduce to zero at isometric tension, since it depends on the velocity of shortening. We revisit a derivation of Hill's force-velocity equation based on impulsive forces working against frictional forces and show that this gives a physical meaning to Hill's constants a and b. This is particularly elegant for Hill's constant b, which is directly related to the impulse time; the value of this impulse time is 1/2 ms. The question of whether muscle contraction may involve overlapping interactions is considered. However, we find that the step-size distance does not depend on the possibility of overlapping interactions. PMID:8852761
NASA Astrophysics Data System (ADS)
Wang, Yue
A new variable grid-size and time-step finite-difference (FD) method is developed and applied to three different geophysical problems: simulation of tube waves in boreholes, three-dimensional (3-D) ground-motion simulation in sedimentary basin models, and reverse-time migration of multicomponent data. Unlike the conventional FD method, which uses a fixed grid-size and time-step for the entire model region, spatially variable grid-sizes and time-steps are used to achieve the optimal computational efficiency. For tube wave simulations, a fine grid-spacing is used for simulation inside the borehole region, while a coarse grid is used in the exterior region. While the stability condition requires a very fine time step for the fine grid, a variable time-step method provides coarse time steps for simulation in the coarse grid. Variable grid-size and time-step changes are used to achieve both accuracy and efficiency in the simulations. Numerical tests are performed for the Bayou Choctaw salt-flank model with different borehole models. The results show the important borehole effects on the seismic wavefield for a realistic source bandwidth. The combination of variable grid-size and time-step methods reduces computational costs by several orders of magnitude for the borehole models. Viscoelastic 3-D simulations are performed for a three-layer Salt Lake basin model. The near-surface unconsolidated layer is modeled with a fine grid, and the deep part of the model is modeled by a coarse grid. Simulation results show that the 3-D basin features and the shallow layer significantly affect the amplitude and duration time of the ground motion. In the elastic case, the approximation by 2-D modeling is insufficient to simulate the 3-D ground motion response. A basin model without a shallow low-velocity layer underestimates the ground motion duration and cumulative kinetic energy by 50% or more. The simulation of a Bingham Mine blast suggests that a lower S-velocity should be used to
NASA Technical Reports Server (NTRS)
Jameson, A.; Schmidt, Wolfgang; Turkel, Eli
1981-01-01
A new combination of a finite volume discretization in conjunction with carefully designed dissipative terms of third order, and a Runge-Kutta time stepping scheme, is shown to yield an effective method for solving the Euler equations in arbitrary geometric domains. The method has been used to determine the steady transonic flow past an airfoil using an O mesh. Convergence to a steady state is accelerated by the use of a variable time step determined by the local Courant number, and by the introduction of a forcing term proportional to the difference between the local total enthalpy and its free-stream value.
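Local time stepping of this kind assigns each cell the largest stable step allowed by its own Courant number, which is legitimate only when marching to a steady state. A minimal sketch; the CFL limit of the multistage Runge-Kutta scheme is taken here as an assumed parameter:

```python
# Local-time-step sketch in the spirit of the steady-state acceleration above.
# Each cell advances with the largest step allowed by its own Courant number;
# the CFL limit 'cfl' of the multistage scheme is an assumed parameter.
def local_time_steps(dx, u, c, cfl=2.5):
    """dx: cell sizes, u: local flow speeds, c: local sound speeds."""
    return [cfl * dxi / (abs(ui) + ci) for dxi, ui, ci in zip(dx, u, c)]
```

Because each cell marches at its own rate, the transient is not time-accurate; only the converged steady solution is meaningful, which is precisely the trade the convergence-acceleration strategy above makes.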
Real-Time Feedback Control of Flow-Induced Cavity Tones. Part 2; Adaptive Control
NASA Technical Reports Server (NTRS)
Kegerise, M. A.; Cabell, R. H.; Cattafesta, L. N., III
2006-01-01
An adaptive generalized predictive control (GPC) algorithm was formulated and applied to the cavity flow-tone problem. The algorithm employs gradient descent to update the GPC coefficients at each time step. Past input-output data and an estimate of the open-loop pulse response sequence are all that is needed to implement the algorithm for application at fixed Mach numbers. Transient measurements made during controller adaptation revealed that the controller coefficients converged to a steady state in the mean, and this implies that adaptation can be turned off at some point with no degradation in control performance. When converged, the control algorithm demonstrated multiple Rossiter mode suppression at fixed Mach numbers ranging from 0.275 to 0.38. However, as in the case of fixed-gain GPC, the adaptive GPC performance was limited by spillover in sidebands around the suppressed Rossiter modes. The algorithm was also able to maintain suppression of multiple cavity tones as the freestream Mach number was varied over a modest range (0.275 to 0.29). Beyond this range, stable operation of the control algorithm was not possible due to the fixed plant model in the algorithm.
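Gradient-descent adaptation of model coefficients from past input-output data, as used in the GPC update above, can be sketched with an LMS-style identification loop. This is a simplified stand-in, not the authors' GPC implementation; the tap count, step size, and sweep count are assumed values:

```python
import numpy as np

# Gradient-descent (LMS-style) identification of a plant's FIR pulse response
# from past input-output data -- a simplified stand-in for the per-time-step
# coefficient update in adaptive GPC.  mu, n_taps, sweeps are assumed values.
def adapt_fir(u, y, n_taps=4, mu=0.05, sweeps=50):
    w = np.zeros(n_taps)
    for _ in range(sweeps):
        for k in range(n_taps, len(u)):
            phi = u[k - n_taps:k][::-1]   # most recent inputs first
            e = y[k] - w @ phi            # one-step prediction error
            w = w + mu * e * phi          # gradient-descent update
    return w
```

As in the cavity-tone controller, once the coefficients have converged in the mean, the adaptation can in principle be frozen with no loss of performance, provided the plant (here, the Mach number) does not drift.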
NASA Astrophysics Data System (ADS)
Serbezov, Valery; Sotirov, Sotir
2013-03-01
A novel approach for one-step synthesis of hybrid inorganic-organic nanocomposite coatings by a new modification of Pulsed Laser Deposition technology called Laser Adaptive Ablation Deposition (LAAD) is presented. Hybrid nanocomposite coatings including Mg-rapamycin and Mg-desoximetasone were produced by a UV TEA N2 laser under low vacuum (0.1 Pa) at room temperature onto substrates of SS 316L, KCl, and NaCl. The laser fluence was 1.8 J/cm2 for the Mg alloy, 0.176 J/cm2 for desoximetasone, and 0.118 J/cm2 for rapamycin. A three-dimensional, two-segmented single target was used to adapt the interaction of the focused laser beam with the inorganic and organic materials. Magnesium alloy nanoparticles with sizes from 50 nm to 250 nm were obtained in the organic matrices. The morphology of the nanocomposite films was studied by bright-field/fluorescence optical microscopy and Scanning Electron Microscopy (SEM). Fourier Transform Infrared (FTIR) spectroscopy measurements were applied in order to study the functional properties of the organic component before and after the LAAD process. Energy Dispersive X-ray Spectroscopy (EDX) was used to confirm the presence of the Mg alloy in the hybrid nanocomposite coatings. The precise control of process parameters, particularly the adjustment of the laser fluence, enables transfer of materials with different physical-chemical properties and one-step synthesis of complex inorganic-organic nanocomposite coatings.
Real-Time Adaptive Least-Squares Drag Minimization for Performance Adaptive Aeroelastic Wing
NASA Technical Reports Server (NTRS)
Ferrier, Yvonne L.; Nguyen, Nhan T.; Ting, Eric
2016-01-01
This paper contains a simulation study of a real-time adaptive least-squares drag minimization algorithm for an aeroelastic model of a flexible wing aircraft. The aircraft model is based on the NASA Generic Transport Model (GTM). The wing structures incorporate a novel aerodynamic control surface known as the Variable Camber Continuous Trailing Edge Flap (VCCTEF). The drag minimization algorithm uses the Newton-Raphson method to find the optimal VCCTEF deflections for minimum drag in the context of an altitude-hold flight control mode at cruise conditions. The aerodynamic coefficient parameters used in this optimization method are identified in real-time using Recursive Least Squares (RLS). The results demonstrate the potential of the VCCTEF to improve aerodynamic efficiency for drag minimization for transport aircraft.
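Recursive Least Squares itself is compact; a minimal sketch of the standard exponentially weighted form (illustrative, not the flight-control code, with the forgetting factor and initial covariance scale as assumed values) is:

```python
import numpy as np

# Minimal exponentially weighted RLS sketch (illustrative; not the actual
# flight-control implementation).  lam is the forgetting factor and p0 the
# initial covariance scale, both assumed values.
class RLS:
    def __init__(self, n, lam=0.99, p0=1e6):
        self.w = np.zeros(n)          # parameter estimates
        self.P = p0 * np.eye(n)       # inverse-correlation matrix
        self.lam = lam

    def update(self, phi, y):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)          # gain vector
        self.w = self.w + k * (y - self.w @ phi)    # correct the estimate
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.w
```

Each update costs O(n^2) for n parameters, with no matrix inversion, which is what makes RLS attractive for identifying aerodynamic coefficient parameters in real time inside a Newton-Raphson optimization loop.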
A step in time: Changes in standard-frequency and time-signal broadcasts, 1 January 1972
NASA Technical Reports Server (NTRS)
Chi, A. R.; Fosque, H. S.
1973-01-01
An improved coordinated universal time (UTC) system has been adopted by the International Radio Consultative Committee. It was implemented internationally by the standard-frequency and time-broadcast stations on 1 Jan. 1972. The new UTC system eliminates the frequency offset of 300 parts in 10 to the 10th power between the old UTC and atomic time, thus making the broadcast time interval (the UTC second) constant and defined by the resonant frequency of cesium atoms. The new time scale is kept in synchronism with the rotation of the Earth within plus or minus 0.7 s by step-time adjustments of exactly 1 s, when needed. A time code has been added to the disseminated time signals to permit universal time to be obtained from the broadcasts to the nearest 0.1 s for users requiring such precision. The texts of the International Radio Consultative Committee recommendation and report to implement the new UTC system are given. The coding formats used by various standard time broadcast services to transmit the difference between the universal time (UT1) and the UTC are also given. For users' convenience, worldwide primary VLF and HF transmissions stations, frequencies, and schedules of time emissions are also included. Actual time-step adjustments made by various stations on 1 Jan. 1972, are provided for future reference.
NASA Astrophysics Data System (ADS)
Karimi, S.; Nakshatrala, K. B.
2014-12-01
Advection-Diffusion-Reaction (ADR) equations play a crucial role in simulating numerous geophysical phenomena. It is well known that the solutions to these equations exhibit disparate spatial and temporal scales. These mathematical scales occur due to the relative dominance of the advection, diffusion, or reaction processes. Hence, in a careful simulation, one has to choose appropriate time integrators, time steps, and numerical formulations for spatial discretization. Multi-time-step coupling methods allow a specific choice of integration methods (either temporal or spatial) in different regions of the spatial domain. In recent years, most attempts to design monolithic multi-time-step frameworks have favored second-order transient systems in structural dynamics. In this presentation, we will introduce monolithic multi-time-step computational frameworks for ADR equations. These methods are based on the theory of differential/algebraic equations. We shall also provide an overview of results from stability analysis, the study of drift from compatibility constraints, and an analysis of the influence of perturbations. Several benchmark problems will be utilized to demonstrate the theoretical findings and features of the proposed frameworks. Finally, application of the proposed methods to fast bimolecular reactive systems will be shown.
Time-step Considerations in Particle Simulation Algorithms for Coulomb Collisions in Plasmas
Cohen, B I; Dimits, A; Friedman, A; Caflisch, R
2009-10-29
The accuracy of first-order Euler and higher-order time-integration algorithms for grid-based Langevin-equation collision models is assessed in a specific relaxation test problem. We show that statistical noise errors can overshadow time-step errors and argue that statistical noise errors can be conflated with time-step effects. Using a higher-order integration scheme may not achieve any benefit in accuracy for examples of practical interest. We also investigate the collisional relaxation of an initial electron-ion relative drift and the collisional relaxation to a resistive steady state in which a quasi-steady current is driven by a constant applied electric field, as functions of the time step used to resolve the collision processes, using binary and grid-based test-particle Langevin-equation models. We compare results from two grid-based Langevin-equation collision algorithms to results from a binary collision algorithm for modeling electron-ion collisions. Some guidance is provided regarding how large a time step can be used, compared to the inverse of the characteristic collision frequency, for specific relaxation processes.
Hill-Briggs, Felicia; Schumann, Kristina P.; Dike, Ogechi
2012-01-01
Background In the setting of declining U.S. literacy, new policies include use of clear communication and low literacy accessibility practices with all patients. Reliable methods for adapting health information to meet such criteria remain a pressing need. Objectives To report method validation (Study 1) and method replication (Study 2) procedures and outcomes for a 5-step method for evaluating and adapting print health information to meet the current low literacy criterion of <5th grade readability. Materials Sets of 18 and 11 publicly-disseminated patient education documents developed by a university-affiliated medical center. Measures Three low-literacy criteria were strategically targeted for efficient, systematic evaluation and text modification to meet a <5th grade reading level: sentence length <15 words, writing in active voice, and use of common words with multisyllabic words (>2–3 syllables) minimized or avoided. Inter-rater reliability for the document evaluations was determined. Results Training in the methodology resulted in inter-rater reliability of 0.99–1.00 in Study 1 and 0.98–1.00 in Study 2. Original documents met none of the targeted low literacy criteria. In Study 1, following low-literacy adaptation, mean reading grade level decreased from 10.4±1.8 to 3.8±0.6 (p<0.0001), with consistent achievement of criteria for words per sentence, passive voice, and syllables per word. Study 2 demonstrated similar achievement of all target criteria, with a resulting decrease in mean reading grade level from 11.0±1.8 to 4.6±0.3 (p < 0.0001). Conclusions The 5-step methodology proved teachable and efficient. Targeting a limited set of modifiable criteria was effective and reliable in achieving <5th grade readability. PMID:22354210
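Screening a document against the three target criteria can be partially automated. The sketch below computes average words per sentence, average syllables per word, and a Flesch-Kincaid grade level; the grade formula is an assumption (the paper does not state which readability formula was used here), and the vowel-group syllable counter is a rough heuristic:

```python
import re

# Rough screening sketch for the target criteria above: average sentence
# length, average syllables per word, and Flesch-Kincaid grade level (an
# assumed formula).  The vowel-group syllable counter is a crude heuristic.
def count_syllables(word):
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1                      # drop a trailing silent 'e'
    return max(n, 1)

def readability(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    wps = len(words) / len(sentences)                             # words/sentence
    spw = sum(count_syllables(w) for w in words) / len(words)     # syllables/word
    grade = 0.39 * wps + 11.8 * spw - 15.59   # Flesch-Kincaid grade level
    return wps, spw, grade
```

A screen like this only flags candidate problems (long sentences, multisyllabic words); the manual steps of the 5-step method, such as identifying passive voice and substituting common words, still require a trained rater, which is why the inter-rater reliability figures above matter.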
Chen, Zhao; Wang, Hongye; Jiang, Xiuping
2015-02-01
The effectiveness of a two-step heat treatment for eliminating desiccation-adapted Salmonella spp. in aged chicken litter was evaluated. The aged chicken litter with 20, 30, 40, and 50% moisture contents was inoculated with a mixture of four Salmonella serotypes for a 24-h adaptation. Afterwards, the inoculated chicken litter was added into the chicken litter with the adjusted moisture content for a 1-h moist-heat treatment at 65 °C and 100% relative humidity inside a water bath, followed by a dry-heat treatment in a convection oven at 85 °C for 1 h to the desired moisture level (<10-12%). After moist-heat treatment, the populations of Salmonella in aged chicken litter at 20 and 30% moisture contents declined from ≈6.70 log colony-forming units (CFU)/g to 3.31 and 3.00 log CFU/g, respectively. After subsequent 1-h dry-heat treatment, the populations further decreased to 2.97 and 2.57 log CFU/g, respectively. Salmonella cells in chicken litter with 40% and 50% moisture contents were only detectable by enrichment after 40 and 20 min of moist-heat treatment, respectively. Moisture contents in all samples were reduced to <10% after a 1-h dry-heat process. Our results demonstrated that the two-step heat treatment was effective in reducing >5.5 logs of desiccation-adapted Salmonella in aged chicken litter with moisture content at or above 40%. Clearly, the findings from this study may provide the chicken litter processing industry with an effective heat treatment method for producing Salmonella-free chicken litter. PMID:25405539
The large discretization step method for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Haras, Zigo; Taasan, Shlomo
1995-01-01
A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.
Simulating diffusion processes in discontinuous media: A numerical scheme with constant time steps
Lejay, Antoine; Pichot, Geraldine
2012-08-30
In this article, we propose new Monte Carlo techniques for moving a diffusive particle in a discontinuous medium. In this framework, we characterize the stochastic process that governs the positions of the particle. The key tool is the reduction of the process to a Skew Brownian motion (SBM). In a zone where the coefficients are locally constant on each side of the discontinuity, the new position of the particle after a constant time step is sampled from the exact distribution of the SBM process at the considered time. To do so, we propose two different but equivalent algorithms: a two-step simulation with a stop at the discontinuity and a one-step direct simulation of the SBM dynamics. Some benchmark tests illustrate their effectiveness.
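The one-step idea can be sketched for a flat interface at x = 0 separating diffusivities D- and D+. Away from the interface an ordinary Gaussian step is taken; a step started on the interface keeps a Gaussian magnitude but chooses its sign with a skewness probability, here taken as θ = √D+/(√D+ + √D-), one common flux-continuity choice — see the paper for the exact SBM sampling. Everything below (the crossing rule, the half-time restart) is an illustrative simplification, not the authors' algorithm.

```python
import math, random

def sbm_step(x, dt, d_minus, d_plus, rng):
    """One constant-time-step move; illustrative, assumes a flat interface at 0."""
    d = d_plus if x > 0 else d_minus
    z = rng.gauss(0.0, 1.0)
    if x == 0.0:
        # on the interface: Gaussian magnitude, skew-chosen sign
        theta = math.sqrt(d_plus) / (math.sqrt(d_plus) + math.sqrt(d_minus))
        side = 1.0 if rng.random() < theta else -1.0
        d_side = d_plus if side > 0 else d_minus
        return side * math.sqrt(2.0 * d_side * dt) * abs(z)
    y = x + math.sqrt(2.0 * d * dt) * z
    if x * y >= 0.0:
        return y  # no crossing: ordinary Gaussian step
    # crossing: stop at the interface, restart from it for the leftover time
    # (crudely approximated here as half the step)
    return sbm_step(0.0, 0.5 * dt, d_minus, d_plus, rng)

rng = random.Random(42)
samples = [sbm_step(0.0, 0.01, 0.1, 0.9, rng) for _ in range(4000)]
frac_right = sum(s > 0 for s in samples) / len(samples)  # should approach theta = 0.75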
Adaptive spatial combining for passive time-reversed communications.
Gomes, João; Silva, António; Jesus, Sérgio
2008-08-01
Passive time reversal has aroused considerable interest in underwater communications as a computationally inexpensive means of mitigating the intersymbol interference introduced by the channel using a receiver array. In this paper the basic technique is extended by adaptively weighting sensor contributions to partially compensate for degraded focusing due to mismatch between the assumed and actual medium impulse responses. Two algorithms are proposed, one of which restores constructive interference between sensors, and the other one minimizes the output residual as in widely used equalization schemes. These are compared with plain time reversal and variants that employ postequalization and channel tracking. They are shown to improve the residual error and temporal stability of basic time reversal with very little added complexity. Results are presented for data collected in a passive time-reversal experiment that was conducted during the MREA'04 sea trial. In that experiment a single acoustic projector generated a 24-PSK (phase-shift keyed) stream at 200400 baud, modulated at 3.6 kHz, and received at a range of about 2 km on a sparse vertical array with eight hydrophones. The data were found to exhibit significant Doppler scaling, and a resampling-based preprocessing method is also proposed here to compensate for that scaling. PMID:18681595
Breniere, Y; Ribreau, C
1998-10-01
In order to analyze the influence of gravity and body characteristics on the control of center of mass (CM) oscillations in stepping in place, equations of motion in oscillating systems were developed using a double-inverted pendulum model which accounts for both the head-arms-trunk (HAT) segment and the two-legged system. The principal goal of this work is to propose an equivalent model which makes use of the usual anthropometric data for the human body, in order to study the ability of postural control to adapt to the step frequency in this particular paradigm of human gait. This model allows the computation of CM-to-CP amplitude ratios, when the center of foot pressure (CP) oscillates, as a parametric function of the stepping in place frequency, whose parameters are gravity and major body characteristics. Motion analysis from a force plate was used to test the model by comparing experimental and simulated values of variations of the CM-to-CP amplitude ratio in the frontal plane versus the frequency. With data from the literature, the model is used to calculate the intersegmental torque which stabilizes the HAT when the Leg segment is subjected to a harmonic torque with an imposed frequency. PMID:9830708
NASA Astrophysics Data System (ADS)
Ramadan, Omar
2014-12-01
Systematic split-step finite difference time domain (SS-FDTD) formulations, based on the general Lie-Trotter-Suzuki product formula, are presented for solving the time-dependent Maxwell equations in double-dispersive electromagnetic materials. The proposed formulations provide a unified tool for constructing a family of unconditionally stable algorithms such as the first order split-step FDTD (SS1-FDTD), the second order split-step FDTD (SS2-FDTD), and the second order alternating direction implicit FDTD (ADI-FDTD) schemes. The theoretical stability of the formulations is included and it has been demonstrated that the formulations are unconditionally stable by construction. Furthermore, the dispersion relation of the formulations is derived and it has been found that the proposed formulations are best suited for those applications where a high space resolution is needed. Two-dimensional (2-D) and 3-D numerical examples are included and it has been observed that the SS1-FDTD scheme is computationally more efficient than the ADI-FDTD counterpart, while maintaining approximately the same numerical accuracy. Moreover, the SS2-FDTD scheme allows using larger time step than the SS1-FDTD or ADI-FDTD and therefore necessitates less CPU time, while giving approximately the same numerical accuracy.
Suggestions for CAP-TSD mesh and time-step input parameters
NASA Technical Reports Server (NTRS)
Bland, Samuel R.
1991-01-01
Suggestions for some of the input parameters used in the CAP-TSD (Computational Aeroelasticity Program-Transonic Small Disturbance) computer code are presented. These parameters include those associated with the mesh design and time step. The guidelines are based principally on experience with a one-dimensional model problem used to study wave propagation in the vertical direction.
Dependence of Hurricane intensity and structures on vertical resolution and time-step size
NASA Astrophysics Data System (ADS)
Zhang, Da-Lin; Wang, Xiaoxue
2003-09-01
In view of the growing interests in the explicit modeling of clouds and precipitation, the effects of varying vertical resolution and time-step sizes on the 72-h explicit simulation of Hurricane Andrew (1992) are studied using the Pennsylvania State University/National Center for Atmospheric Research (PSU/NCAR) mesoscale model (i.e., MM5) with the finest grid size of 6 km. It is shown that changing vertical resolution and time-step size has significant effects on hurricane intensity and inner-core cloud/precipitation, but little impact on the hurricane track. In general, increasing vertical resolution tends to produce a deeper storm with lower central pressure and stronger three-dimensional winds, and more precipitation. Similar effects, but to a less extent, occur when the time-step size is reduced. It is found that increasing the low-level vertical resolution is more efficient in intensifying a hurricane, whereas changing the upper-level vertical resolution has little impact on the hurricane intensity. Moreover, the use of a thicker surface layer tends to produce higher maximum surface winds. It is concluded that the use of higher vertical resolution, a thin surface layer, and smaller time-step sizes, along with higher horizontal resolution, is desirable to model more realistically the intensity and inner-core structures and evolution of tropical storms as well as the other convectively driven weather systems.
Causal-Path Local Time-Stepping in the discontinuous Galerkin method for Maxwell's equations
NASA Astrophysics Data System (ADS)
Angulo, L. D.; Alvarez, J.; Teixeira, F. L.; Pantoja, M. F.; Garcia, S. G.
2014-01-01
We introduce a novel local time-stepping technique for marching-in-time algorithms. The technique is denoted as Causal-Path Local Time-Stepping (CPLTS) and it is applied for two time integration techniques: fourth-order low-storage explicit Runge-Kutta (LSERK4) and second-order Leap-Frog (LF2). The CPLTS method is applied to evolve Maxwell's curl equations using a Discontinuous Galerkin (DG) scheme for the spatial discretization. Numerical results for LF2 and LSERK4 are compared with analytical solutions and the Montseny's LF2 technique. The results show that the CPLTS technique improves the dispersive and dissipative properties of LF2-LTS scheme.
Development of a variable time-step transient NEW code: SPANDEX
Aviles, B.N. )
1993-01-01
This paper describes a three-dimensional, variable time-step transient multigroup diffusion theory code, SPANDEX (space-time nodal expansion method). SPANDEX is based on the static nodal expansion method (NEM) code, NODEX (Ref. 1), and employs a nonlinear algorithm and a fifth-order expansion of the transverse-integrated fluxes. The time integration scheme in SPANDEX is a fourth-order implicit generalized Runge-Kutta method (GRK) with on-line error control and variable time-step selection. This Runge-Kutta method has been applied previously to point kinetics and one-dimensional finite difference transient analysis. This paper describes the application of the Runge-Kutta method to three-dimensional reactor transient analysis in a multigroup NEM code.
Promoting rest using a quiet time innovation in an adult neuroscience step down unit.
Bergner, Tara
2014-01-01
Sleep and rest are fundamental for the restoration of energy needed to recuperate from illness, trauma and surgery. At present hospitals are too noisy to promote rest for patients. A literature search produced research that described how quiet time interventions addressing noise levels have met with positive patient and staff satisfaction, as well as creating a more peaceful and healing environment. In this paper, a description of the importance of quiet time and how a small butfeasible innovation was carried out in an adult neuroscience step down unit in a large tertiary health care facility in Canada is provided. Anecdotal evidence from patients, families, and staff suggests that quiet time may have positive effects for patients, their families, and the adult neuroscience step down unit staff Future research examining the effect of quiet time on patient, family and staff satisfaction and patient healing is necessary. PMID:25638912
Two-step adaptive extraction method for ground points and breaklines from lidar point clouds
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Huang, Ronggang; Dong, Zhen; Zang, Yufu; Li, Jianping
2016-09-01
The extraction of ground points and breaklines is a crucial step during generation of high quality digital elevation models (DEMs) from airborne LiDAR point clouds. In this study, we propose a novel automated method for this task. To overcome the disadvantages of applying a single filtering method in areas with various types of terrain, the proposed method first classifies the points into a set of segments and one set of individual points, which are filtered by segment-based filtering and multi-scale morphological filtering, respectively. In the process of multi-scale morphological filtering, the proposed method removes amorphous objects from the set of individual points to decrease the effect of the maximum scale on the filtering result. The proposed method then extracts the breaklines from the ground points, which provide a good foundation for generation of a high quality DEM. Finally, the experimental results demonstrate that the proposed method extracts ground points in a robust manner while preserving the breaklines.
NASA Astrophysics Data System (ADS)
Zhang, Peng; Zhang, Na; Deng, Yuefan; Bluestein, Danny
2015-03-01
We developed a multiple time-stepping (MTS) algorithm for multiscale modeling of the dynamics of platelets flowing in viscous blood plasma. This MTS algorithm improves considerably the computational efficiency without significant loss of accuracy. This study of the dynamic properties of flowing platelets employs a combination of the dissipative particle dynamics (DPD) and the coarse-grained molecular dynamics (CGMD) methods to describe the dynamic microstructures of deformable platelets in response to extracellular flow-induced stresses. The disparate spatial scales between the two methods are handled by a hybrid force field interface. However, the disparity in temporal scales between the DPD and CGMD that requires time stepping at microseconds and nanoseconds respectively, represents a computational challenge that may become prohibitive. Classical MTS algorithms manage to improve computing efficiency by multi-stepping within DPD or CGMD for up to one order of magnitude of scale differential. In order to handle 3-4 orders of magnitude disparity in the temporal scales between DPD and CGMD, we introduce a new MTS scheme hybridizing DPD and CGMD by utilizing four different time stepping sizes. We advance the fluid system at the largest time step, the fluid-platelet interface at a middle timestep size, and the nonbonded and bonded potentials of the platelet structural system at two smallest timestep sizes. Additionally, we introduce parameters to study the relationship of accuracy versus computational complexities. The numerical experiments demonstrated 3000x reduction in computing time over standard MTS methods for solving the multiscale model. This MTS algorithm establishes a computationally feasible approach for solving a particle-based system at multiple scales for performing efficient multiscale simulations.
Zhang, Peng; Zhang, Na; Deng, Yuefan; Bluestein, Danny
2015-01-01
We developed a multiple time-stepping (MTS) algorithm for multiscale modeling of the dynamics of platelets flowing in viscous blood plasma. This MTS algorithm improves considerably the computational efficiency without significant loss of accuracy. This study of the dynamic properties of flowing platelets employs a combination of the dissipative particle dynamics (DPD) and the coarse-grained molecular dynamics (CGMD) methods to describe the dynamic microstructures of deformable platelets in response to extracellular flow-induced stresses. The disparate spatial scales between the two methods are handled by a hybrid force field interface. However, the disparity in temporal scales between the DPD and CGMD that requires time stepping at microseconds and nanoseconds respectively, represents a computational challenge that may become prohibitive. Classical MTS algorithms manage to improve computing efficiency by multi-stepping within DPD or CGMD for up to one order of magnitude of scale differential. In order to handle 3–4 orders of magnitude disparity in the temporal scales between DPD and CGMD, we introduce a new MTS scheme hybridizing DPD and CGMD by utilizing four different time stepping sizes. We advance the fluid system at the largest time step, the fluid-platelet interface at a middle timestep size, and the nonbonded and bonded potentials of the platelet structural system at two smallest timestep sizes. Additionally, we introduce parameters to study the relationship of accuracy versus computational complexities. The numerical experiments demonstrated 3000x reduction in computing time over standard MTS methods for solving the multiscale model. This MTS algorithm establishes a computationally feasible approach for solving a particle-based system at multiple scales for performing efficient multiscale simulations. PMID:25641983
An implicit time-stepping scheme for rigid body dynamics with Coulomb friction
STEWART,DAVID; TRINKLE,JEFFREY C.
2000-02-15
In this paper a new time-stepping method for simulating systems of rigid bodies is given. Unlike methods which take an instantaneous point of view, the method is based on impulse-momentum equations, and so does not need to explicitly resolve impulsive forces. On the other hand, the method is distinct from previous impulsive methods in that it does not require explicit collision checking and it can handle simultaneous impacts. Numerical results are given for one planar and one three-dimensional example, which demonstrate the practicality of the method, and its convergence as the step size becomes small.
Finn, John M.
2015-03-15
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012
Finn, John M.
2015-03-01
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. Wemore » also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.« less
Evaluating mallard adaptive management models with time series
Conn, P.B.; Kendall, W.L.
2004-01-01
Wildlife practitioners concerned with midcontinent mallard (Anas platyrhynchos) management in the United States have instituted a system of adaptive harvest management (AHM) as an objective format for setting harvest regulations. Under the AHM paradigm, predictions from a set of models that reflect key uncertainties about processes underlying population dynamics are used in coordination with optimization software to determine an optimal set of harvest decisions. Managers use comparisons of the predictive abilities of these models to gauge the relative truth of different hypotheses about density-dependent recruitment and survival, with better-predicting models giving more weight to the determination of harvest regulations. We tested the effectiveness of this strategy by examining convergence rates of 'predictor' models when the true model for population dynamics was known a priori. We generated time series for cases when the a priori model was 1 of the predictor models as well as for several cases when the a priori model was not in the model set. We further examined the addition of different levels of uncertainty into the variance structure of predictor models, reflecting different levels of confidence about estimated parameters. We showed that in certain situations, the model-selection process favors a predictor model that incorporates the hypotheses of additive harvest mortality and weakly density-dependent recruitment, even when the model is not used to generate data. Higher levels of predictor model variance led to decreased rates of convergence to the model that generated the data, but model weight trajectories were in general more stable. We suggest that predictive models should incorporate all sources of uncertainty about estimated parameters, that the variance structure should be similar for all predictor models, and that models with different functional forms for population dynamics should be considered for inclusion in predictor model! sets. All of these
Real-Time Adaptive Color Segmentation by Neural Networks
NASA Technical Reports Server (NTRS)
Duong, Tuan A.
2004-01-01
Artificial neural networks that would utilize the cascade error projection (CEP) algorithm have been proposed as means of autonomous, real-time, adaptive color segmentation of images that change with time. In the original intended application, such a neural network would be used to analyze digitized color video images of terrain on a remote planet as viewed from an uninhabited spacecraft approaching the planet. During descent toward the surface of the planet, information on the segmentation of the images into differently colored areas would be updated adaptively in real time to capture changes in contrast, brightness, and resolution, all in an effort to identify a safe and scientifically productive landing site and provide control feedback to steer the spacecraft toward that site. Potential terrestrial applications include monitoring images of crops to detect insect invasions and monitoring of buildings and other facilities to detect intruders. The CEP algorithm is reliable and is well suited to implementation in very-large-scale integrated (VLSI) circuitry. It was chosen over other neural-network learning algorithms because it is better suited to realtime learning: It provides a self-evolving neural-network structure, requires fewer iterations to converge and is more tolerant to low resolution (that is, fewer bits) in the quantization of neural-network synaptic weights. Consequently, a CEP neural network learns relatively quickly, and the circuitry needed to implement it is relatively simple. Like other neural networks, a CEP neural network includes an input layer, hidden units, and output units (see figure). As in other neural networks, a CEP network is presented with a succession of input training patterns, giving rise to a set of outputs that are compared with the desired outputs. Also as in other neural networks, the synaptic weights are updated iteratively in an effort to bring the outputs closer to target values. A distinctive feature of the CEP neural
Inertial stochastic dynamics. I. Long-time-step methods for Langevin dynamics
NASA Astrophysics Data System (ADS)
Beard, Daniel A.; Schlick, Tamar
2000-05-01
Two algorithms are presented for integrating the Langevin dynamics equation with long numerical time steps while treating the mass terms as finite. The development of these methods is motivated by the need for accurate methods for simulating slow processes in polymer systems such as two-site intermolecular distances in supercoiled DNA, which evolve over the time scale of milliseconds. Our new approaches refine the common Brownian dynamics (BD) scheme, which approximates the Langevin equation in the highly damped diffusive limit. Our LTID ("long-time-step inertial dynamics") method is based on an eigenmode decomposition of the friction tensor. The less costly integrator IBD ("inertial Brownian dynamics") modifies the usual BD algorithm by the addition of a mass-dependent correction term. To validate the methods, we evaluate the accuracy of LTID and IBD and compare their behavior to that of BD for the simple example of a harmonic oscillator. We find that the LTID method produces the expected correlation structure for Langevin dynamics regardless of the level of damping. In fact, LTID is the only consistent method among the three, with error vanishing as the time step approaches zero. In contrast, BD is accurate only for highly overdamped systems. For cases of moderate overdamping, and for the appropriate choice of time step, IBD is significantly more accurate than BD. IBD is also less computationally expensive than LTID (though both are the same order of complexity as BD), and thus can be applied to simulate systems of size and time scale ranges previously accessible to only the usual BD approach. Such simulations are discussed in our companion paper, for long DNA molecules modeled as wormlike chains.
ROAMing terrain (Real-time Optimally Adapting Meshes)
Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.; Miller, M.C.; Aldrich, C.; Mineev, M.
1997-07-01
Terrain visualization is a difficult problem for applications requiring accurate images of large datasets at high frame rates, such as flight simulation and ground-based aircraft testing using synthetic sensor stimulation. On current graphics hardware, the problem is to maintain dynamic, view-dependent triangle meshes and texture maps that produce good images at the required frame rate. We present an algorithm for constructing triangle meshes that optimizes flexible view-dependent error metrics, produces guaranteed error bounds, achieves specified triangle counts directly, and uses frame-to-frame coherence to operate at high frame rates for thousands of triangles per frame. Our method, dubbed Real-time Optimally Adapting Meshes (ROAM), uses two priority queues to drive split and merge operations that maintain continuous triangulations built from pre-processed bintree triangles. We introduce two additional performance optimizations: incremental triangle stripping and priority-computation deferral lists. ROAM execution time is proportionate to the number of triangle changes per frame, which is typically a few percent of the output mesh size, hence ROAM performance is insensitive to the resolution and extent of the input terrain. Dynamic terrain and simple vertex morphing are supported.
Augmenting synthetic aperture radar with space time adaptive processing
NASA Astrophysics Data System (ADS)
Riedl, Michael; Potter, Lee C.; Ertin, Emre
2013-05-01
Wide-area persistent radar video offers the ability to track moving targets. A shortcoming of the current technology is an inability to maintain track when Doppler shift places moving target returns co-located with strong clutter. Further, the high down-link data rate required for wide-area imaging presents a stringent system bottleneck. We present a multi-channel approach to augment the synthetic aperture radar (SAR) modality with space time adaptive processing (STAP) while constraining the down-link data rate to that of a single antenna SAR system. To this end, we adopt a multiple transmit, single receive (MISO) architecture. A frequency division design for orthogonal transmit waveforms is presented; the approach maintains coherence on clutter, achieves the maximal unaliased band of radial velocities, retains full resolution SAR images, and requires no increase in receiver data rate vis-a-vis the wide-area SAR modality. For Nt transmit antennas and N samples per pulse, the enhanced sensing provides a STAP capability with Nt times larger range bins than the SAR mode, at the cost of O(log N) more computations per pulse. The proposed MISO system and the associated signal processing are detailed, and the approach is numerically demonstrated via simulation of an airborne X-band system.
Adaptive multimode signal reconstruction from time-frequency representations.
Meignen, Sylvain; Oberlin, Thomas; Depalle, Philippe; Flandrin, Patrick; McLaughlin, Stephen
2016-04-13
This paper discusses methods for the adaptive reconstruction of the modes of multicomponent AM-FM signals by their time-frequency (TF) representation derived from their short-time Fourier transform (STFT). The STFT of an AM-FM component or mode spreads the information relative to that mode in the TF plane around curves commonly called ridges. An alternative view is to consider a mode as a particular TF domain termed a basin of attraction. Here we discuss two new approaches to mode reconstruction. The first determines the ridge associated with a mode by considering the location where the direction of the reassignment vector sharply changes, the technique used to determine the basin of attraction being directly derived from that used for ridge extraction. A second uses the fact that the STFT of a signal is fully characterized by its zeros (and then the particular distribution of these zeros for Gaussian noise) to deduce an algorithm to compute the mode domains. For both techniques, mode reconstruction is then carried out by simply integrating the information inside these basins of attraction or domains. PMID:26953184
High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs
NASA Technical Reports Server (NTRS)
Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.
2014-01-01
This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) time-stepping Runge-Kutta (RK) schemes which increases numerical stability with respect to the time step size, resulting in decreased computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain a significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to both one-dimensional (1D) and two-dimensional (2D) problems, the nonlinear viscous Burgers' equation and the 2D advection equation, respectively. The method uses two different summation-by-parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. An increase in numerical stability 65 times greater than that of the fully explicit scheme is demonstrated to be achievable with the OP IMEX method applied to the 1D Burgers' equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. Also, the domain partitioning method in this work shows potential for breaking the computational domain into manageable sizes such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solvers rather than the standard iterative methods currently used.
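The additive splitting at the heart of IMEX schemes can be illustrated with a first-order toy version (not the paper's 6-stage additive RK method): treat the stiff linear term implicitly and the nonstiff term explicitly.

```python
def imex_euler(u0, lam, f, dt, n_steps):
    """First-order IMEX step for u' = -lam*u + f(u): the stiff linear
    term is implicit, the nonstiff term explicit,
        u_{n+1} = (u_n + dt*f(u_n)) / (1 + dt*lam).
    A toy stand-in for the paper's 6-stage additive RK schemes."""
    u = float(u0)
    for _ in range(n_steps):
        u = (u + dt * f(u)) / (1.0 + dt * lam)
    return u

def explicit_euler(u0, lam, f, dt, n_steps):
    u = float(u0)
    for _ in range(n_steps):
        u = u + dt * (-lam * u + f(u))
    return u

# Stiff decay with dt far above the explicit stability limit 2/lam:
stiff = imex_euler(1.0, 1e3, lambda u: 0.0, 0.1, 100)       # stays bounded
blowup = explicit_euler(1.0, 1e3, lambda u: 0.0, 0.1, 100)  # diverges
```

The implicit treatment of the stiff term is what allows the much larger stable time step reported for the OP IMEX scheme.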
The Semi-implicit Time-stepping Algorithm in MH4D
NASA Astrophysics Data System (ADS)
Vadlamani, Srinath; Shumlak, Uri; Marklin, George; Meier, Eric; Lionello, Roberto
2006-10-01
The Plasma Science and Innovation Center (PSI Center) at the University of Washington is developing MHD codes to accurately model Emerging Concept (EC) devices. An examination of the semi-implicit time-stepping algorithm implemented in the tetrahedral-mesh MHD simulation code MH4D is presented. The time steps for standard explicit methods, which are constrained by the Courant-Friedrichs-Lewy (CFL) condition, are typically small for simulations of EC experiments due to the large Alfven speed. The CFL constraint is more severe with a tetrahedral mesh because of the irregular cell geometry. The semi-implicit algorithm [1] removes the fast-wave constraint, thus allowing for larger time steps. We will present the implementation of this algorithm and numerical results for test problems in simple geometry. We will also present its effectiveness in simulations of complex geometry, similar to the ZaP [2] experiment at the University of Washington. References: [1] Douglas S. Harned and D. D. Schnack, Semi-implicit method for long time scale magnetohydrodynamic computations in three dimensions, JCP, Volume 65, Issue 1, July 1986, Pages 57-70. [2] U. Shumlak, B. A. Nelson, R. P. Golingo, S. L. Jackson, E. A. Crawford, and D. J. Den Hartog, Sheared flow stabilization experiments in the ZaP flow Z-pinch, Phys. Plasmas 10, 1683 (2003).
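The Alfven-speed CFL bound that motivates the semi-implicit scheme can be sketched as follows; the plasma parameters are illustrative, not taken from the paper.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [H/m]

def alfven_cfl_dt(dx, B, rho, cfl=0.5):
    """Explicit-step bound from the Alfven-speed CFL condition:
    dt <= cfl * dx / v_A, with Alfven speed v_A = B / sqrt(mu0 * rho)."""
    v_a = B / math.sqrt(MU0 * rho)
    return cfl * dx / v_a

# Illustrative numbers (not from the paper): 1 mm cells, 0.1 T field,
# hydrogen plasma with number density n = 1e22 m^-3.
rho = 1e22 * 1.67e-27
dt = alfven_cfl_dt(1e-3, 0.1, rho)
```

The bound scales linearly with cell size, which is why the small irregular tetrahedra make the constraint so severe.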
Error correction in short time steps during the application of quantum gates
NASA Astrophysics Data System (ADS)
de Castro, L. A.; Napolitano, R. d. J.
2016-04-01
We propose a modification of the standard quantum error-correction method to enable the correction of errors that occur due to the interaction with a noisy environment during quantum gates, without modifying the encoding used for memory qubits. Using a perturbation treatment of the noise that allows us to separate it from the ideal evolution of the quantum gate, we demonstrate that in certain cases it is necessary to divide the logical operation into short time steps intercalated by correction procedures. A prescription of how these gates can be constructed is provided, as well as a proof that, even in cases when the division of the quantum gate into short time steps is not necessary, this method may be advantageous for reducing the total duration of the computation.
NASA Astrophysics Data System (ADS)
Cavalcanti, José Rafael; Dumbser, Michael; Motta-Marques, David da; Fragoso Junior, Carlos Ruberto
2015-12-01
In this article we propose a new conservative high resolution TVD (total variation diminishing) finite volume scheme with time-accurate local time stepping (LTS) on unstructured grids for the solution of scalar transport problems, which are typical in the context of water quality simulations. To keep the presentation of the new method as simple as possible, the algorithm is only derived in two space dimensions and for purely convective transport problems, hence neglecting diffusion and reaction terms. The new numerical method for the solution of the scalar transport is directly coupled to the hydrodynamic model of Casulli and Walters (2000) that provides the dynamics of the free surface and the velocity vector field based on a semi-implicit discretization of the shallow water equations. Wetting and drying is handled rigorously by the nonlinear algorithm proposed by Casulli (2009). The new time-accurate LTS algorithm allows a different time step size for each element of the unstructured grid, based on an element-local Courant-Friedrichs-Lewy (CFL) stability condition. The proposed method does not need any synchronization between different time steps of different elements and is by construction locally and globally conservative. The LTS scheme is based on a piecewise linear polynomial reconstruction in space-time using the MUSCL-Hancock method, to obtain second order of accuracy in both space and time. The new algorithm is first validated on some classical test cases for pure advection problems, for which exact solutions are known. In all cases we obtain a very good level of accuracy, showing also numerical convergence results; we furthermore confirm mass conservation up to machine precision and observe an improved computational efficiency compared to a standard second order TVD scheme for scalar transport with global time stepping (GTS). Then, the new LTS method is applied to some more complex problems, where the new scalar transport scheme has also been coupled to
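One common way to organize element-local time steps is to quantize them to power-of-two multiples of the global minimum; the paper's LTS scheme needs no such synchronization between elements, so this is only an illustrative sketch of the element-local CFL idea.

```python
import numpy as np

def lts_levels(dt_elem):
    """Quantize element-local CFL limits to power-of-two multiples of the
    global minimum step: dt_i = dt_min * 2**level_i <= dt_elem[i].
    (Grouping by levels is a common way to organize LTS updates; the
    paper's time-accurate scheme avoids synchronization entirely.)"""
    dt_elem = np.asarray(dt_elem, dtype=float)
    dt_min = dt_elem.min()
    levels = np.floor(np.log2(dt_elem / dt_min)).astype(int)
    return dt_min, levels

# Element-local CFL limits (illustrative values):
dt_min, levels = lts_levels([0.1, 0.25, 0.4, 0.9])
```

Each element then advances with the largest quantized step that still respects its own stability limit, instead of everyone marching at the global minimum.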
Personality traits, future time perspective and adaptive behavior in adolescence.
Gomes Carvalho, Renato Gil; Novo, Rosa Ferreira
2015-01-01
Several studies provide evidence of the importance of future time perspective (FTP) for individual success. However, little research addresses the relationship between FTP and personality traits, particularly whether FTP can mediate their influence on behavior. In this study we analyze the mediating role of FTP in the influence of personality traits on the way adolescents live their lives at school. The sample consisted of 351 students, aged 14 to 18 years, at different schooling levels. Instruments were the Portuguese version of the MMPI-A, particularly the PSY-5 dimensions (Aggressiveness, Psychoticism, Disconstraint, Neuroticism, Introversion), an FTP questionnaire, and a survey on school life, involving several indicators of achievement, social integration, and overall satisfaction. With the exception of Neuroticism, the results show significant mediation effects (p < .001) of FTP on most relationships between PSY-5 dimensions and school life variables. Concerning Disconstraint, FTP mediated its influence on overall satisfaction (β = -.125) and school achievement (β = -.106). In the case of Introversion, significant mediation effects occurred for interpersonal difficulties (β = .099) and participation in extracurricular activities (β = -.085). FTP was also a mediator of the influence of Psychoticism on overall satisfaction (β = -.094), interpersonal difficulties (β = .057), and behavior problems (β = .037). Finally, FTP mediated the influence of Aggressiveness on overall satisfaction (β = -.061), interpersonal difficulties (β = .040), achievement (β = -.052), and behavior problems (β = .023). Results are discussed considering the importance of FTP in the impact of some structural personality characteristics on students' school adaptation. PMID:25907852
Adaptive real-time dual-comb spectroscopy.
Ideguchi, Takuro; Poisson, Antonin; Guelachvili, Guy; Picqué, Nathalie; Hänsch, Theodor W
2014-01-01
The spectrum of a laser frequency comb consists of several hundred thousand equally spaced lines over a broad spectral bandwidth. Such frequency combs have revolutionized optical frequency metrology and they now hold much promise for significant advances in a growing number of applications including molecular spectroscopy. Despite an intriguing potential for the measurement of molecular spectra spanning tens of nanometres within tens of microseconds at Doppler-limited resolution, the development of dual-comb spectroscopy is hindered by the demanding stability requirements of the laser combs. Here we overcome this difficulty and experimentally demonstrate a concept of real-time dual-comb spectroscopy, which compensates for laser instabilities by electronic signal processing. It only uses free-running mode-locked lasers without any phase-lock electronics. We record spectra spanning the full bandwidth of near-infrared fibre lasers with Doppler-limited line profiles highly suitable for measurements of concentrations or line intensities. Our new technique of adaptive dual-comb spectroscopy offers a powerful transdisciplinary instrument for analytical sciences. PMID:24572636
Adaptive real-time dual-comb spectroscopy
NASA Astrophysics Data System (ADS)
Ideguchi, Takuro; Poisson, Antonin; Guelachvili, Guy; Picqué, Nathalie; Hänsch, Theodor W.
2014-02-01
The spectrum of a laser frequency comb consists of several hundred thousand equally spaced lines over a broad spectral bandwidth. Such frequency combs have revolutionized optical frequency metrology and they now hold much promise for significant advances in a growing number of applications including molecular spectroscopy. Despite an intriguing potential for the measurement of molecular spectra spanning tens of nanometres within tens of microseconds at Doppler-limited resolution, the development of dual-comb spectroscopy is hindered by the demanding stability requirements of the laser combs. Here we overcome this difficulty and experimentally demonstrate a concept of real-time dual-comb spectroscopy, which compensates for laser instabilities by electronic signal processing. It only uses free-running mode-locked lasers without any phase-lock electronics. We record spectra spanning the full bandwidth of near-infrared fibre lasers with Doppler-limited line profiles highly suitable for measurements of concentrations or line intensities. Our new technique of adaptive dual-comb spectroscopy offers a powerful transdisciplinary instrument for analytical sciences.
Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît
2016-04-12
A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826
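The Metropolis criterion that enforces detailed balance can be illustrated with a one-dimensional toy sampler. Here a symmetric random walk stands in for propagation under the cheap Hamiltonian, so this sketches only the acceptance step that guarantees consistency with the Boltzmann distribution, not the DHMTS integrator itself.

```python
import numpy as np

def metropolis_sample(u, x0, n_steps, step=0.5, beta=1.0, seed=0):
    """Metropolis acceptance enforcing the Boltzmann distribution
    exp(-beta*U). In DHMTS the proposal would come from propagation
    under the inexpensive Hamiltonian; a symmetric random walk stands
    in for it here (sketch only)."""
    rng = np.random.default_rng(seed)
    x, samples = float(x0), []
    for _ in range(n_steps):
        xp = x + rng.normal(0.0, step)
        # accept with probability min(1, exp(-beta * dU))
        if rng.random() < np.exp(-beta * (u(xp) - u(x))):
            x = xp
        samples.append(x)
    return np.array(samples)

# Harmonic "expensive" Hamiltonian: canonical variance should be 1/beta.
s = metropolis_sample(lambda x: 0.5 * x * x, 0.0, 50000)
```

Regardless of how crude the proposal is, the acceptance step keeps the sampled distribution exact, which is precisely how the discretization error of the cheap Hamiltonian is absorbed as external work.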
Toward fast feature adaptation and localization for real-time face recognition systems
NASA Astrophysics Data System (ADS)
Zuo, Fei; de With, Peter H.
2003-06-01
In a home environment, video surveillance employing face detection and recognition is attractive for new applications. Facial feature (e.g. eyes and mouth) localization in the face is an essential task for face recognition because it constitutes an indispensable step for face geometry normalization. This paper presents a new and efficient feature localization approach for real-time personal surveillance applications with low-quality images. The proposed approach consists of three major steps: (1) self-adaptive iris tracing, which is preceded by a trace-point selection process with multiple initializations to overcome the local convergence problem, (2) eye structure verification using an eye template with limited deformation freedom, and (3) eye-pair selection based on a combination of metrics. We have tested our facial feature localization method on about 100 randomly selected face images from the AR database and 30 face images downloaded from the Internet. The results show that our approach achieves a correct detection rate of 96%. Since our eye-selection technique does not involve time-consuming deformation processes, it yields relatively fast processing. The proposed algorithm has been successfully applied to a real-time home video surveillance system and proven to be an effective and computationally efficient face normalization method preceding face recognition.
A novel adaptive, real-time algorithm to detect gait events from wearable sensors.
Chia Bejarano, Noelia; Ambrosini, Emilia; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Monticone, Marco; Ferrante, Simona
2015-05-01
A real-time, adaptive algorithm based on two inertial and magnetic sensors placed on the shanks was developed for gait-event detection. For each leg, the algorithm detected the Initial Contact (IC), as the minimum of the flexion/extension angle, and the End Contact (EC) and the Mid-Swing (MS), as minimum and maximum of the angular velocity, respectively. The algorithm consisted of calibration, real-time detection, and step-by-step update. Data collected from 22 healthy subjects (21 to 85 years) walking at three self-selected speeds were used to validate the algorithm against the GaitRite system. Comparable levels of accuracy and significantly lower detection delays were achieved with respect to other published methods. The algorithm robustness was tested on ten healthy subjects performing sudden speed changes and on ten stroke subjects (43 to 89 years). For healthy subjects, F1-scores of 1 and mean detection delays lower than 14 ms were obtained. For stroke subjects, F1-scores of 0.998 and 0.944 were obtained for IC and EC, respectively, with mean detection delays always below 31 ms. The algorithm accurately detected gait events in real time from a heterogeneous dataset of gait patterns and paves the way for the design of closed-loop controllers for customized gait trainings and/or assistive devices. PMID:25069118
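The event definitions above (IC at minima of the flexion/extension angle, EC and MS at minima and maxima of the angular velocity) can be mirrored offline in a few lines of NumPy. The published algorithm additionally runs in real time with calibration and step-by-step updates; this batch sketch only illustrates the rules on synthetic signals.

```python
import numpy as np

def local_extrema(x, mode="min"):
    """Indices of strict local minima or maxima of a 1-D signal."""
    cmp = np.less if mode == "min" else np.greater
    hit = cmp(x[1:-1], x[:-2]) & cmp(x[1:-1], x[2:])
    return np.where(hit)[0] + 1

def detect_gait_events(angle, gyro):
    """Batch mirror of the event rules: IC at minima of the
    flexion/extension angle, EC at minima and MS at maxima of the
    angular velocity."""
    return {"IC": local_extrema(angle, "min"),
            "EC": local_extrema(gyro, "min"),
            "MS": local_extrema(gyro, "max")}

fs = 100.0
t = np.arange(400) / fs                          # 4 s at 100 Hz
angle = np.cos(2 * np.pi * 1.0 * t)              # synthetic 1 Hz "gait cycle"
gyro = -2 * np.pi * np.sin(2 * np.pi * 1.0 * t)  # its time derivative
events = detect_gait_events(angle, gyro)
```

On this synthetic cycle the three event streams interleave exactly as in gait: mid-swing, then initial contact, then end contact, once per cycle.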
Real-Time, Single-Step Bioassay Using Nanoplasmonic Resonator With Ultra-High Sensitivity
NASA Technical Reports Server (NTRS)
Zhang, Xiang (Inventor); Ellman, Jonathan A. (Inventor); Chen, Fanqing Frank (Inventor); Su, Kai-Hang (Inventor); Wei, Qi-Huo (Inventor); Sun, Cheng (Inventor)
2014-01-01
A nanoplasmonic resonator (NPR) comprising a metallic nanodisk with alternating shielding layer(s), having a tagged biomolecule conjugated or tethered to the surface of the nanoplasmonic resonator for highly sensitive measurement of enzymatic activity. NPRs enhance Raman signals in a highly reproducible manner, enabling fast detection of protease and enzyme activity, such as Prostate Specific Antigen (paPSA), in real-time, at picomolar sensitivity levels. Experiments on extracellular fluid (ECF) from paPSA-positive cells demonstrate specific detection in a complex bio-fluid background in real-time single-step detection in very small sample volumes.
Moment tensor inversion of waveforms: a two-step time-frequency approach
NASA Astrophysics Data System (ADS)
Vavryčuk, Václav; Kühn, Daniela
2012-09-01
We present a moment tensor inversion of waveforms, which is more robust and yields more stable and more accurate results than standard approaches. The inversion is performed in two steps and combines inversions in time and frequency domains. First, the inversion for the source-time function is performed in the frequency domain using complex spectra. Second, the time-domain inversion for the moment tensor is performed using the source-time function calculated in the first step. In this way, we can consider a realistic, complex source-time function and still keep the final moment tensor inversion linear. Using numerical modelling, we compare the efficiency and accuracy of the proposed approach with standard waveform inversions. We study the sensitivity of the retrieved double-couple and non-double-couple components of the moment tensors to noise in the data, to inaccuracies of the location and of the velocity model, and to the type of the focal mechanism. Finally, the proposed moment tensor inversion is tested on real data observed in a complex 3-D inhomogeneous geological environment: a production blast and a rockburst in the Pyhäsalmi ore mine, Finland.
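The second, linear step can be sketched as an ordinary least-squares problem once the source-time function is fixed; the Green's functions below are random stand-ins, not real seismograms.

```python
import numpy as np

def invert_moment_tensor(greens, stf, data):
    """Linear second step of the two-step scheme: with the source-time
    function s(t) fixed by step one, the data are
    d(t) = sum_k m_k * (g_k * s)(t), so the moment-tensor components m
    follow from ordinary least squares."""
    G = np.stack([np.convolve(g, stf) for g in greens], axis=1)
    m, *_ = np.linalg.lstsq(G, data, rcond=None)
    return m

rng = np.random.default_rng(1)
greens = rng.normal(size=(6, 200))                        # toy Green's functions
stf = np.exp(-0.5 * ((np.arange(40) - 20) / 5.0) ** 2)    # smooth source pulse
m_true = np.array([1.0, -0.5, 0.2, 0.0, 0.3, -0.1])
data = np.stack([np.convolve(g, stf) for g in greens], axis=1) @ m_true
m_est = invert_moment_tensor(greens, stf, data)
```

Because the source-time function enters only through the precomputed convolutions, the final inversion stays linear even for a complex source pulse, which is the point of the two-step design.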
Olsen, Jeffrey R.; Noel, Camille E.; Baker, Kenneth; Santanam, Lakshmi; Michalski, Jeff M.; Parikh, Parag J.
2012-04-01
Purpose: We have created an automated process using real-time tracking data to evaluate the adequacy of planning target volume (PTV) margins in prostate cancer, allowing a process of adaptive radiotherapy with minimal physician workload. We present an analysis of PTV adequacy and a proposed adaptive process. Methods and Materials: Tracking data were analyzed for 15 patients who underwent step-and-shoot multi-leaf collimation (SMLC) intensity-modulated radiation therapy (IMRT) with uniform 5-mm PTV margins for prostate cancer using the Calypso® Localization System. Additional plans were generated with 0- and 3-mm margins. A custom software application using the planned dose distribution and structure location from computed tomography (CT) simulation was developed to evaluate the dosimetric impact to the target due to motion. The dose delivered to the prostate was calculated for the initial three, five, and 10 fractions, and for the entire treatment. Treatment was accepted as adequate if the minimum delivered prostate dose (D_min) was at least 98% of the planned D_min. Results: For 0-, 3-, and 5-mm PTV margins, adequate treatment was obtained in 3 of 15, 12 of 15, and 15 of 15 patients, and the delivered D_min ranged from 78% to 99%, 96% to 100%, and 99% to 100% of the planned D_min. Changes in D_min did not correlate with the magnitude of prostate motion. Treatment adequacy during the first 10 fractions predicted sufficient dose delivery for the entire treatment for all patients and margins. Conclusions: Our adaptive process successfully used real-time tracking data to predict the need for PTV modifications, without the added burden of physician contouring and image analysis. Our methods are applicable to other uses of real-time tracking, including hypofractionated treatment.
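The 98% acceptance rule and the early-fraction trigger can be sketched as follows. Using the per-fraction minimum as the early predictor is a simplifying assumption of this sketch; the paper evaluates the cumulative delivered dose over the initial fractions.

```python
import numpy as np

def adequate(planned_dmin, delivered_dmin, threshold=0.98):
    """The paper's rule: delivered prostate D_min must reach at least
    98% of the planned D_min."""
    return delivered_dmin >= threshold * planned_dmin

def predict_from_early_fractions(planned_dmin, per_fraction_dmin, n_early=10):
    """Early trigger sketch: judge the plan from the first n_early
    fractions. Taking the per-fraction minimum here is an assumption;
    the paper accumulates delivered dose over fractions."""
    early = np.asarray(per_fraction_dmin[:n_early], dtype=float)
    return bool(adequate(planned_dmin, early.min()))
```

In this toy form, a single early fraction that falls well below plan flags the treatment for margin adaptation before the course completes.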
Optimal Control Modification Adaptive Law for Time-Scale Separated Systems
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2010-01-01
Recently a new optimal control modification has been introduced that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations seen with standard model-reference adaptive control. This modification is based on an optimal control formulation to minimize the L2 norm of the tracking error. The optimal control modification adaptive law results in stable adaptation in the presence of a large adaptive gain. This study examines the optimal control modification adaptive law in the context of a system with a time-scale separation resulting from a fast plant with a slow actuator. A singular perturbation analysis is performed to derive a modification to the adaptive law by transforming the original system into a reduced-order system in slow time. A model-matching condition in the transformed time coordinate results in an increase in the actuator command that effectively compensates for the slow actuator dynamics. Simulations demonstrate the effectiveness of the method.
Real-Time Adaptive Control of Flow-Induced Cavity Tones
NASA Technical Reports Server (NTRS)
Kegerise, Michael A.; Cabell, Randolph H.; Cattafesta, Louis N.
2004-01-01
An adaptive generalized predictive control (GPC) algorithm was formulated and applied to the cavity flow-tone problem. The algorithm employs gradient descent to update the GPC coefficients at each time step. The adaptive control algorithm demonstrated multiple Rossiter mode suppression at fixed Mach numbers ranging from 0.275 to 0.38. The algorithm was also able to maintain suppression of multiple cavity tones as the freestream Mach number was varied over a modest range (0.275 to 0.29). Controller performance was evaluated with a measure of output disturbance rejection and an input sensitivity transfer function. The results suggest that disturbances entering the cavity flow are colocated with the control input at the cavity leading edge. In that case, only tonal components of the cavity wall-pressure fluctuations can be suppressed, and arbitrary broadband pressure reduction is not possible. In the control-algorithm development, the cavity dynamics are treated as linear and time invariant (LTI) for a fixed Mach number. The experimental results lend support to this treatment.
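The per-time-step gradient-descent coefficient update has the same flavor as the classic LMS rule. The sketch below applies that rule to system identification rather than to GPC itself, purely to illustrate the update; the plant taps and step size are invented.

```python
import numpy as np

def lms_identify(x, d, n_taps, mu):
    """Per-sample gradient-descent update w <- w + mu * e[n] * x_window,
    the same adaptation style the controller uses for its GPC
    coefficients (shown here as plain LMS identification, not GPC)."""
    w = np.zeros(n_taps)
    err = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        xw = x[n - n_taps + 1:n + 1][::-1]  # [x[n], x[n-1], ...]
        err[n] = d[n] - w @ xw              # instantaneous error
        w += mu * err[n] * xw               # gradient-descent step
    return w, err

rng = np.random.default_rng(2)
x = rng.normal(size=5000)
h = np.array([0.4, -0.3, 0.2])   # "unknown" plant taps (toy values)
d = np.convolve(x, h)[:len(x)]   # noise-free plant output
w, err = lms_identify(x, d, n_taps=3, mu=0.01)
```

With a noise-free target the coefficients converge to the plant taps and the error decays toward zero, mirroring how the adaptive GPC tracks slowly varying cavity dynamics.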
Hashemi, Mahnaz; Ghaisari, Jafar; Askari, Javad
2015-07-01
This paper investigates an adaptive controller for a class of Multi Input Multi Output (MIMO) nonlinear systems with unknown parameters, bounded time delays and in the presence of unknown time varying actuator failures. The type of considered actuator failure is one in which some inputs may be stuck at some time varying values where the values, times and patterns of the failures are unknown. The proposed approach is constructed based on a backstepping design method. The boundedness of all the closed-loop signals is guaranteed and the tracking errors are proved to converge to a small neighborhood of the origin. The proposed approach is employed for a double inverted pendulums benchmark and a chemical reactor system. The simulation results show the effectiveness of the proposed method. PMID:25792517
NASA Astrophysics Data System (ADS)
Yu, Chunxue; Yin, Xin'an; Yang, Zhifeng; Cai, Yanpeng; Sun, Tao
2016-09-01
The time step used in the operation of eco-friendly reservoirs has decreased from monthly to daily, and even sub-daily. The shorter time step is considered a better choice for satisfying downstream environmental requirements because it more closely resembles the natural flow regime. However, little consideration has been given to the influence of different time steps on the ability to simultaneously meet human and environmental flow requirements. To analyze this influence, we used an optimization model to explore the relationships among the time step, environmental flow (e-flow) requirements, and human water needs for a wide range of time steps and e-flow scenarios. We used the degree of hydrologic alteration to evaluate the regime's ability to satisfy the e-flow requirements of riverine ecosystems, and used water supply reliability to evaluate the ability to satisfy human needs. We then applied the model to a case study of China's Tanghe Reservoir. We found four efficient time steps (2, 3, 4, and 5 days), with a remarkably high water supply reliability (around 80%) and a low alteration of the flow regime (<35%). Our analysis of the hydrologic alteration revealed the smallest alteration at time steps ranging from 1 to 7 days. However, longer time steps led to higher water supply reliability to meet human needs under several e-flow scenarios. Our results show that adjusting the time step is a simple way to improve reservoir operation performance to balance human and e-flow needs.
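The trade-off explored above can be mimicked with two small helpers: re-blocking a daily series to a coarser decision time step and scoring supply reliability. This toy model ignores storage dynamics and the paper's optimization model; all values are illustrative.

```python
import numpy as np

def aggregate_release(daily_inflow, step_days):
    """Re-block a daily series into operation periods of step_days,
    releasing the period average on every day of the period (a toy
    model of deciding releases on a coarser time step)."""
    n = len(daily_inflow) // step_days * step_days
    blocks = np.asarray(daily_inflow[:n], dtype=float).reshape(-1, step_days)
    return np.repeat(blocks.mean(axis=1), step_days)

def supply_reliability(release, demand):
    """Fraction of days on which demand is fully met."""
    return float(np.mean(release >= demand))

release = aggregate_release([1.0, 3.0, 2.0, 4.0, 5.0, 1.0], step_days=2)
reliability = supply_reliability(release, np.full(6, 2.5))
```

Longer steps smooth the release pattern (raising reliability against a steady demand) while moving it further from the natural daily regime, which is the tension the study quantifies.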
NASA Astrophysics Data System (ADS)
Gonthier, Jérôme F.; Corminboeuf, Clémence
2014-04-01
Non-covalent interactions occur between and within all molecules and have a profound impact on structural and electronic phenomena in chemistry, biology, and material science. Understanding the nature of inter- and intramolecular interactions is essential not only for establishing the relation between structure and properties, but also for facilitating the rational design of molecules with targeted properties. These objectives have motivated the development of theoretical schemes decomposing intermolecular interactions into physically meaningful terms. Among the various existing energy decomposition schemes, Symmetry-Adapted Perturbation Theory (SAPT) is one of the most successful as it naturally decomposes the interaction energy into physical and intuitive terms. Unfortunately, analogous approaches for intramolecular energies are theoretically highly challenging and virtually nonexistent. Here, we introduce a zeroth-order wavefunction and energy, which represent the first step toward the development of an intramolecular variant of the SAPT formalism. The proposed energy expression is based on the Chemical Hamiltonian Approach (CHA), which relies upon an asymmetric interpretation of the electronic integrals. The orbitals are optimized with a non-hermitian Fock matrix based on two variants: one using orbitals strictly localized on individual fragments and the other using canonical (delocalized) orbitals. The zeroth-order wavefunction and energy expression are validated on a series of prototypical systems. The computed intramolecular interaction energies demonstrate that our approach combining the CHA with strictly localized orbitals achieves reasonable interaction energies and basis set dependence in addition to producing intuitive energy trends. Our zeroth-order wavefunction is the primary step fundamental to the derivation of any perturbation theory correction, which has the potential to truly transform our understanding and quantification of non
Resource Management for Real-Time Adaptive Agents
NASA Technical Reports Server (NTRS)
Welch, Lonnie; Chelberg, David; Pfarr, Barbara; Fleeman, David; Parrott, David; Tan, Zhen-Yu; Jain, Shikha; Drews, Frank; Bruggeman, Carl; Shuler, Chris
2003-01-01
Increased autonomy and automation in onboard flight systems offer numerous potential benefits, including cost reduction and greater flexibility. The existence of generic mechanisms for automation is critical for handling unanticipated science events and anomalies, where limitations in traditional control software with fixed, predetermined algorithms can mean loss of science data and missed opportunities for observing important terrestrial events. We have developed such a mechanism by adding a Hierarchical Agent-based ReaLTime technology (HART) extension to our Dynamic Resource Management (DRM) middleware. Traditional DRM provides mechanisms to monitor the real-time performance of distributed applications and to move applications among processors to improve real-time performance. In the HART project we have designed and implemented a performance adaptation mechanism to improve real-time performance. To use this mechanism, applications are developed that can run at various levels of quality. The DRM can choose a setting for the quality level of an application dynamically at run-time in order to manage satellite resource usage more effectively. A ground-based prototype of a satellite system that captures and processes images has also been developed as part of this project, to be used as a benchmark for evaluating the resource management framework. A significant enhancement of this generic mission-independent framework allows scientists to specify the utility, or "scientific benefit," of science observations under various conditions like cloud cover and compression method. The resource manager then uses these benefit tables to determine in real-time how to set the quality levels for applications to maximize overall system utility as defined by the scientists running the mission. We also show how maintenance functions like health and safety data can be integrated into the utility framework. Once this framework has been certified for missions and successfully flight tested it
Sensitivity of The High-resolution Wam Model With Respect To Time Step
NASA Astrophysics Data System (ADS)
Kasemets, K.; Soomere, T.
The northern part of the Baltic Proper and its subbasins (the Bothnian Sea, the Gulf of Finland, Moonsund) serve as a challenge for wave modellers. Unlike the southern and eastern parts of the Baltic Sea, their coasts are highly irregular and contain many peculiarities with a characteristic horizontal scale of the order of a few kilometres. For example, the northern coast of the Gulf of Finland is extremely ragged and contains a huge number of small islands. Its southern coast is more or less regular but has a cliff up to 50 m high that is frequently covered by high forests. The area also contains numerous banks with water depths of a couple of metres that may essentially modify wave properties near the banks owing to topographical effects. This feature suggests that a high-resolution wave model should be applied to the region in question, with a horizontal resolution of the order of 1 km or even less. According to the Courant-Friedrichs-Lewy criterion, the integration time step for such models must be of the order of a few tens of seconds. A high-resolution WAM model turns out to be fairly sensitive to the particular choice of the time step. In our experiments, a medium-resolution model for the whole Baltic Sea was used, with a horizontal resolution of 3 miles (3' along latitudes and 6' along longitudes) and an angular resolution of 12 directions. The model was run with a steady wind blowing at 20 m/s from different directions and with two time steps (1 and 3 minutes). For most of the wind directions, the rms difference of significant wave heights calculated with different time steps did not exceed 10 cm and typically was of the order of a few per cent. The difference arose within a few tens of minutes and generally did not increase in further computations. However, in the case of the north wind, the difference increased nearly monotonically and reached 25-35 cm (10-15%) within three hours of integration, whereas the mean of significant wave
Timing paradox of stepping and falls in ageing: not so quick and quick(er) on the trigger.
Rogers, Mark W; Mille, Marie-Laure
2016-08-15
Physiological and degenerative changes affecting human standing balance are major contributors to falls with ageing. During imbalance, stepping is a powerful protective action for preserving balance that may be voluntarily initiated in recognition of a balance threat, or be induced by an externally imposed mechanical or sensory perturbation. Paradoxically, with ageing and falls, initiation slowing of voluntary stepping is observed together with perturbation-induced steps that are triggered as fast as or faster than for younger adults. While age-associated changes in sensorimotor conduction, central neuronal processing and cognitive functions are linked to delayed voluntary stepping, alterations in the coupling of posture and locomotion may also prolong step triggering. It is less clear, however, how these factors may explain the accelerated triggering of induced stepping. We present a conceptual model that addresses this issue. For voluntary stepping, a disruption in the normal coupling between posture and locomotion may underlie step-triggering delays through suppression of the locomotion network based on an estimation of the evolving mechanical state conditions for stability. During induced stepping, accelerated step initiation may represent an event-triggering process whereby stepping is released according to the occurrence of a perturbation rather than to the specific sensorimotor information reflecting the evolving instability. In this case, errors in the parametric control of induced stepping and its effectiveness in stabilizing balance would be likely to occur. We further suggest that there is a residual adaptive capacity with ageing that could be exploited to improve paradoxical triggering and other changes in protective stepping to impact fall risk. PMID:26915664
Large time-step stability of explicit one-dimensional advection schemes
NASA Technical Reports Server (NTRS)
Leonard, B. P.
1993-01-01
There is a widespread belief that most explicit one-dimensional advection schemes need to satisfy the so-called 'CFL condition' - that the Courant number, c = u Δt/Δx, must be less than or equal to one, for stability in the von Neumann sense. This puts severe limitations on the time step in high-speed, fine-grid calculations and is an impetus for the development of implicit schemes, which often require less restrictive time-step conditions for stability, but are more expensive per time step. However, it turns out that, at least in one dimension, if explicit schemes are formulated in a consistent flux-based conservative finite-volume form, von Neumann stability analysis does not place any restriction on the allowable Courant number. Any explicit scheme that is stable for c ≤ 1, with a complex amplitude ratio G(c), can easily be extended to arbitrarily large c. The complex amplitude ratio is then given by exp(-iNθ) G(Δc), where N is the integer part of c, and Δc = c - N (less than 1); this is clearly stable. The CFL condition is, in fact, not a stability condition at all, but, rather, a 'range restriction' on the 'pieces' in a piecewise polynomial interpolation. When a global view is taken of the interpolation, the need for a CFL condition evaporates. A number of well-known explicit advection schemes are considered and thus extended to large Δt. The analysis also includes a simple interpretation of (large Δt) total-variation-diminishing (TVD) constraints.
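Leonard's construction is easy to demonstrate in code: for Courant number c = N + Δc, shift the field by N whole cells (exact for constant-speed advection) and then apply the ordinary c ≤ 1 update with the fractional part Δc. The sketch below uses first-order upwind on a periodic grid; the grid size and initial profile are arbitrary illustrative choices, not from the paper.

```python
import numpy as np

def upwind_large_dt(u, c):
    """Advance one step of 1-D advection u_t + a u_x = 0 (a > 0, periodic)
    at Courant number c >= 0: shift by the integer part N of c (exact),
    then apply first-order upwind with the fractional part dc = c - N."""
    N = int(np.floor(c))
    dc = c - N
    u = np.roll(u, N)                     # exact shift by N cells
    return u - dc * (u - np.roll(u, 1))   # monotone upwind update, dc < 1

# demo: advect a top-hat profile at c = 2.5 on a periodic grid
nx = 100
u = np.zeros(nx); u[10:20] = 1.0
total0 = u.sum()
for _ in range(20):
    u = upwind_large_dt(u, 2.5)
print(u.sum(), total0)   # conservative: the sums agree
```

Because the integer shift is exact and the fractional update is the usual monotone upwind step, the extended scheme stays conservative and bounded at any Courant number.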
Space-time variability of floods across Germany: Gradual trends, step changes and fluctuations
NASA Astrophysics Data System (ADS)
Merz, Bruno; Vorogushyn, Sergiy; Viet Dung, Nguyen; Schröter, Kai
2015-04-01
The space-time variability of flood magnitude and frequency across Germany at the interannual and decadal time scale is analyzed and interpreted. The analyses are based on flood time series of 68 catchments for a joint period of 74 years. The catchments are distributed across Germany and show different flood regimes. Different statistical tests are applied to investigate different types of flood changes: gradual trends, step changes and fluctuations. In addition, changes in the mean behavior and in the variability are studied. A focus is placed on the spatial stability of changes, i.e. the extent to which flood changes are coherent across Germany. The joint analysis of changes for a large number of catchments allows the causes of the observed changes to be interpreted. For instance, climate-related flood changes are expected to show a different behavior than changes caused by river training or land-use change.
NASA Astrophysics Data System (ADS)
Sheridan, J. A.; Bloom, D. M.; Solomon, P. M.
1995-03-01
We have built a system capable of measuring the step response of III-V electronic devices on the picosecond time scale, with no alteration in device design or epitaxy. To switch on the device under test (DUT), we have designed and fabricated a new type of photoconductor, the recessed-ohmic photoconductor, which swings 0.45 V with a 2-ps rise time and maintains constant output voltage for 100 ps. This switch is monolithically integrated with the DUT. To measure the output current of the DUT, we have built a Ti:sapphire-laser-based pump-probe direct electro-optic sampling system that has a minimum detectable voltage of 70 μV/√Hz and a measurement bandwidth of 750 GHz. The overall system, comprising the recessed-ohmic photoconductor and the electro-optic sampling system, can be used to measure the step response of III-V electronic devices on the picosecond time scale.
A class of large time step Godunov schemes for hyperbolic conservation laws and applications
NASA Astrophysics Data System (ADS)
Qian, ZhanSen; Lee, Chun-Hian
2011-08-01
A large time step (LTS) Godunov scheme first proposed by LeVeque is further developed in the present work and applied to the Euler equations. Based on an analysis of the computational performance of LeVeque's linear approximation of wave interactions, a multi-wave approximation of the rarefaction fan is proposed to avoid the occurrence of rarefaction shocks in computations. The developed LTS scheme is validated on 1-D test cases, showing high resolution for discontinuities and the capability of maintaining computational stability when large CFL numbers are imposed. The scheme is then extended to multidimensional problems using a dimensional splitting technique; the treatment of boundary conditions for this multidimensional LTS scheme is also proposed. As demonstration problems, inviscid flows over the NACA0012 airfoil and the ONERA M6 wing with a given sweep angle are simulated using the developed LTS scheme. The numerical results reveal the high-resolution nature of the scheme, with shocks captured within 1-2 grid points. The resolution of the scheme improves gradually as the CFL number increases, up to an upper bound beyond which the solution becomes severely oscillatory across the shock. Computational efficiency comparisons show that the developed scheme is capable of reducing the computational time effectively by increasing the time step (CFL number).
A multistage time-stepping scheme for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, E.
1985-01-01
A class of explicit multistage time-stepping schemes is used to construct an algorithm for solving the compressible Navier-Stokes equations. Flexibility in treating arbitrary geometries is obtained with a finite-volume formulation. Numerical efficiency is achieved by employing techniques for accelerating convergence to steady state. Computer processing is enhanced through vectorization of the algorithm. The scheme is evaluated by solving laminar and turbulent flows over a flat plate and an NACA 0012 airfoil. Numerical results are compared with theoretical solutions or other numerical solutions and/or experimental data.
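A generic explicit multistage scheme of this family updates u^(k) = u^(0) + α_k Δt R(u^(k-1)) for k = 1..m. The sketch below uses a common 5-stage coefficient set for convergence acceleration; the coefficients and the scalar model problem are illustrative, not necessarily the exact set used in the paper.

```python
import numpy as np

def multistage_step(u, dt, R, alphas=(1/4, 1/6, 3/8, 1/2, 1.0)):
    """One explicit multistage step: u^(k) = u^(0) + alpha_k*dt*R(u^(k-1)).
    The 5-stage coefficient set is a common convergence-acceleration
    choice (illustrative; not necessarily the paper's exact set)."""
    u0 = u
    for a in alphas:
        u = u0 + a * dt * R(u)
    return u

# model problem du/dt = -u, integrated to t = 1
u, dt = 1.0, 0.01
for _ in range(100):
    u = multistage_step(u, dt, lambda v: -v)
print(u)   # close to exp(-1)
```

The final-stage coefficient of 1 and the 1/2 in the fourth stage give second-order time accuracy; the earlier stages mainly enlarge the stability region, which is what matters for marching to steady state.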
Imaginary Time Step Method to Solve the Dirac Equation with Nonlocal Potential
Zhang Ying; Liang Haozhao; Meng Jie
2009-08-26
The imaginary time step (ITS) method is applied to solve the Dirac equation with nonlocal potentials in coordinate space. Taking the nucleus {sup 12}C as an example, even with nonlocal potentials, direct ITS evolution of the Dirac equation still suffers from collapse into the Dirac sea. However, following the recipe in our former investigation, the collapse can be avoided by ITS evolution of the corresponding Schroedinger-like equation without localization, which gives convergent results identical to those obtained iteratively by the shooting method with localized effective potentials.
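The ITS idea itself is compact: repeatedly applying ψ ← (1 - εH)ψ with renormalization filters out all but the lowest eigenstate. The sketch below applies it to a 1-D Schroedinger (not Dirac) Hamiltonian, the harmonic oscillator, since, as the abstract notes, the Dirac case must first be reduced to a Schroedinger-like equation to avoid collapse into the Dirac sea; grid and step sizes are illustrative.

```python
import numpy as np

# Imaginary-time-step relaxation psi <- (1 - eps*H) psi for a 1-D
# Schroedinger Hamiltonian H = -(1/2) d^2/dx^2 + x^2/2 in units
# hbar = m = omega = 1 (exact ground-state energy: 0.5).
n, L = 400, 16.0
dx = L / n
x = -L / 2 + dx * np.arange(n)

def H(psi):
    lap = (np.roll(psi, 1) - 2.0 * psi + np.roll(psi, -1)) / dx**2
    return -0.5 * lap + 0.5 * x**2 * psi

psi = np.exp(-((x - 1.0) ** 2))        # arbitrary starting wavefunction
eps = 0.5 * dx**2                      # imaginary time step (stability-limited)
for _ in range(10000):
    psi = psi - eps * H(psi)           # one ITS evolution step
    psi /= np.sqrt(dx * np.sum(psi**2))  # renormalize
E = dx * np.sum(psi * H(psi))
print(E)   # relaxes toward the exact value 0.5
```

Excited-state contamination decays as exp(-ε ΔE) per step, so convergence is fast once ε is as large as stability allows.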
A simple method for improving the time-stepping accuracy in atmosphere and ocean models
NASA Astrophysics Data System (ADS)
Williams, P. D.
2012-12-01
In contemporary numerical simulations of the atmosphere and ocean, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. A common time-stepping method in atmosphere and ocean models is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter, which has become known as the RAW filter (Williams 2009, 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other atmosphere and ocean models. References PD Williams (2009) A
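The filter really is a one-line change to a leapfrog loop. The sketch below follows the published form of the RAW filter: compute the displacement d = ν/2 (x̄_{n-1} - 2x_n + x_{n+1}), add αd to level n and (α-1)d to level n+1; α = 1 recovers the classical Robert-Asselin filter. Parameter values are illustrative.

```python
import numpy as np

def leapfrog_raw(F, x0, dt, nsteps, nu=0.2, alpha=0.53):
    """Leapfrog integration of dx/dt = F(x) with the RAW filter
    (alpha = 1 gives the plain Robert-Asselin filter)."""
    xm = x0                      # (filtered) value at level n-1
    x = x0 + dt * F(x0)          # forward-Euler start, level n
    for _ in range(nsteps - 1):
        xp = xm + 2.0 * dt * F(x)            # leapfrog step
        d = 0.5 * nu * (xm - 2.0 * x + xp)   # filter displacement
        xm = x + alpha * d                   # filtered level n -> new past
        x = xp + (alpha - 1.0) * d           # partially filtered level n+1
    return x

# demo: oscillator dz/dt = i z, whose exact solution conserves |z|
F = lambda z: 1j * z
z_raw = leapfrog_raw(F, 1.0 + 0.0j, 0.1, 500)             # RAW filter
z_ra = leapfrog_raw(F, 1.0 + 0.0j, 0.1, 500, alpha=1.0)   # plain RA
print(abs(z_raw), abs(z_ra))   # RAW stays much closer to 1
```

The RA run visibly damps the oscillation amplitude, while the RAW run nearly conserves it, which is exactly the non-physical damping the abstract describes.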
NASA Astrophysics Data System (ADS)
Ho, C. Y.; Leung, R. C. K.; Zhou, K.; Lam, G. C. Y.; Jiang, Z.
2011-09-01
One-step direct aeroacoustic simulation (DAS) has received attention from aerospace and mechanical high-pressure fluid-moving system manufacturers for quite some time. They aim to simulate the unsteady flow and acoustic field in the duct simultaneously in order to investigate the aeroacoustic generation mechanisms. Because of the large length and energy scale disparities between the acoustic far field and the aerodynamic near field, a highly accurate, high-resolution simulation scheme is required. This involves the use of high-order compact finite difference and time advancement schemes. However, in this situation, large buffer zones are always needed to suppress the spurious numerical waves emanating from computational boundaries, which further increases the computational resources needed to yield accurate results. On the other hand, for such problems as supersonic jet noise, the numerical scheme should be able to resolve both strong shock waves and weak acoustic waves simultaneously. Usually a numerical aeroacoustic scheme that works well for low-Mach-number flow cannot give satisfactory simulation results for shock waves. Therefore, the aeroacoustic research community has been looking for a more efficient one-step DAS scheme that has accuracy comparable to the finite-difference approach with smaller buffer regions, yet is able to give accurate solutions from subsonic to supersonic flows. The conservation element and solution element (CE/SE) scheme is one of the possible schemes satisfying the above requirements. This paper aims to report the development of a CE/SE scheme for one-step DAS and to illustrate its robustness and effectiveness with two selected benchmark problems.
Detection of Zika virus by SYBR green one-step real-time RT-PCR.
Xu, Ming-Yue; Liu, Si-Qing; Deng, Cheng-Lin; Zhang, Qiu-Yan; Zhang, Bo
2016-10-01
The ongoing Zika virus (ZIKV) outbreak has rapidly spread to new areas of the Americas, the first transmissions outside its traditional endemic areas in Africa and Asia. Due to the link with newborn defects and neurological disorders, the numerous infected cases throughout the world, and its various mosquito vectors, the virus has been considered an international public health emergency. In the present study, we developed a SYBR Green based one-step real-time RT-PCR assay for rapid detection of ZIKV. Our results revealed that the real-time assay is highly specific and sensitive in detecting ZIKV in cell samples. Importantly, the replication of ZIKV at different time points in infected cells could be rapidly monitored by the real-time RT-PCR assay. Specifically, the real-time RT-PCR showed acceptable performance in measuring infectious ZIKV RNA. This assay could detect ZIKV at a titer as low as 1 PFU/mL. The real-time RT-PCR assay could be a useful tool for further virology surveillance and diagnosis of ZIKV. PMID:27444120
Multiple "time step" Monte Carlo simulations: Application to charged systems with Ewald summation
NASA Astrophysics Data System (ADS)
Bernacki, Katarzyna; Hetényi, Balázs; Berne, B. J.
2004-07-01
Recently, we have proposed an efficient scheme for Monte Carlo simulations, the multiple "time step" Monte Carlo (MTS-MC) [J. Chem. Phys. 117, 8203 (2002)] based on the separation of the potential interactions into two additive parts. In this paper, the structural and thermodynamic properties of the simple point charge water model combined with the Ewald sum are compared for the MTS-MC real-/reciprocal-space split of the Ewald summation and the common Metropolis Monte Carlo method. We report a number of observables as a function of CPU time calculated using MC and MTS-MC. The correlation functions indicate that speedups on the order of 4.5-7.5 can be obtained for systems of 108-500 waters for n=10 splitting parameter.
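The structure of such a multiple "time step" Monte Carlo scheme can be sketched as a nested Metropolis chain: a block of inner moves driven only by the cheap part of the potential, then a single accept/reject on the expensive part. The toy one-dimensional potentials below are illustrative stand-ins for the real-/reciprocal-space Ewald split of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# MTS-MC sketch: U = U1 + U2; the inner chain samples exp(-U1), and the
# outer test with exp(-dU2) corrects the block to sample exp(-U), since
# the inner chain's proposal ratio cancels the U1 part exactly.
U1 = lambda x: 0.5 * x**2          # cheap part, evaluated every inner move
U2 = lambda x: 0.1 * x**4          # "expensive" part, evaluated once per block

def mts_mc(nblocks, n_inner=10, step=0.5):
    x, samples = 0.0, []
    for _ in range(nblocks):
        y = x
        for _ in range(n_inner):               # inner chain uses U1 only
            t = y + step * rng.uniform(-1.0, 1.0)
            if rng.uniform() < np.exp(U1(y) - U1(t)):
                y = t
        if rng.uniform() < np.exp(U2(x) - U2(y)):   # outer correction for U2
            x = y
        samples.append(x)
    return np.array(samples)

s = mts_mc(20000)
print(s.mean(), (s**2).mean())   # symmetric density; <x^2> below the Gaussian value 1
```

Because U2 is evaluated once per block of n_inner moves, the expensive part of the potential is computed an order of magnitude less often than in plain Metropolis, which is the source of the reported speedups.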
The multiple time step r-RESPA procedure and polarizable potentials based on induced dipole moments
NASA Astrophysics Data System (ADS)
Masella, Michel
In the present study, we describe an accelerating scheme, based on the reversible multiple time step r-RESPA method, for molecular dynamics simulations with polarizable potentials based on induced dipole moments. Even if the induced dipoles are estimated with an iterative self-consistent procedure, this scheme significantly reduces the CPU time needed to perform a molecular dynamics simulation, by up to a factor of 2 compared to the Car-Parrinello method, in which additional dynamical variables are introduced for the treatment of the induced dipoles. The tests show that stable and reliable molecular dynamics trajectories can be generated with this scheme, and that the physical properties derived from the trajectories are equivalent to those computed with the classical all-atom iterative approach and the Car-Parrinello one.
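For reference, the reversible multiple time step structure itself (without the induced-dipole machinery) looks like this: a half-kick with the slow force brackets several velocity-Verlet sub-steps with the fast force. The forces and parameters below are illustrative, not the polarizable potential of the paper.

```python
import numpy as np

def respa_step(q, p, F_fast, F_slow, dt, n_inner):
    """One reversible r-RESPA step (Tuckerman-Berne-Martyna splitting):
    slow half-kick, n_inner velocity-Verlet sub-steps with the fast
    force, closing slow half-kick.  Unit mass for simplicity."""
    p = p + 0.5 * dt * F_slow(q)
    h = dt / n_inner
    for _ in range(n_inner):
        p = p + 0.5 * h * F_fast(q)
        q = q + h * p
        p = p + 0.5 * h * F_fast(q)
    p = p + 0.5 * dt * F_slow(q)
    return q, p

# demo: stiff spring (fast) plus weak spring (slow); illustrative values
k_fast, k_slow = 100.0, 1.0
F_fast = lambda q: -k_fast * q
F_slow = lambda q: -k_slow * q
E = lambda q, p: 0.5 * p**2 + 0.5 * (k_fast + k_slow) * q**2

q, p = 1.0, 0.0
E0 = E(q, p)
for _ in range(2000):
    q, p = respa_step(q, p, F_fast, F_slow, 0.05, 10)
print(abs(E(q, p) - E0) / E0)   # small relative energy drift
```

The splitting is symmetric (hence time-reversible) and symplectic, so the energy error stays bounded while the slow, expensive force is evaluated only once per outer step.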
An Efficient Time-Stepping Scheme for Ab Initio Molecular Dynamics Simulations
NASA Astrophysics Data System (ADS)
Tsuchida, Eiji
2016-08-01
In ab initio molecular dynamics simulations of real-world problems, the simple Verlet method is still widely used for integrating the equations of motion, while more efficient algorithms are routinely used in classical molecular dynamics. We show that if the Verlet method is used in conjunction with pre- and postprocessing, the accuracy of the time integration is significantly improved with only a small computational overhead. We also propose several extensions of the algorithm required for use in ab initio molecular dynamics. The validity of the processed Verlet method is demonstrated in several examples including ab initio molecular dynamics simulations of liquid water. The structural properties obtained from the processed Verlet method are found to be sufficiently accurate even for large time steps close to the stability limit. This approach results in a 2× performance gain over the standard Verlet method for a given accuracy. We also show how to generate a canonical ensemble within this approach.
Adaptive, real-time hypoxia measurements using an autonomous boat
NASA Astrophysics Data System (ADS)
Kerkez, B.; Wong, B. P.; Balzano, L.; Lipor, J.; Scavia, D.
2015-12-01
We present an autonomous system to measure hypoxia at high spatial resolution. The approach combines a robotic boat, cloud-hosted data services, and a suite of adaptive sampling algorithms to minimize the number of samples required to delineate hypoxic extents. The boat lowers sensors into the water column to provide depth profiles of temperature and oxygen concentrations. An adaptive path-planning algorithm continuously analyzes the in-situ observations and directs the boat to its next measurement location. This significantly reduces the number of samples compared to a gridded sampling approach, while simultaneously improving the certainty with which the hypoxic regions are delineated. The method has been evaluated on small lakes throughout Michigan and shows significant promise to scale to the Great Lakes, where hypoxia is a common occurrence that adversely affects various stakeholders and ecosystems.
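As a minimal illustration of why adaptive sampling needs far fewer measurements than a uniform grid, the sketch below locates a hypoxic boundary (dissolved oxygen below the common 2 mg/L threshold) along a transect by bisection. The oxygen profile and all names are made up for illustration; this is not the boat's actual path-planning algorithm.

```python
import math

def oxygen(x):
    """Stand-in for a dissolved-oxygen measurement (mg/L) at transect
    position x (km); a real system would lower a sensor here."""
    return 8.0 - 7.0 / (1.0 + math.exp(-4.0 * (x - 3.0)))

def find_boundary(lo, hi, threshold=2.0, tol=0.01):
    """Bisection search; assumes oxygen(lo) >= threshold > oxygen(hi)."""
    n_samples = 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        n_samples += 1
        if oxygen(mid) >= threshold:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi), n_samples

x_b, n_samples = find_boundary(0.0, 6.0)
print(x_b, n_samples)   # boundary near x = 3.45 km with ~10 samples,
                        # versus 600 samples on a uniform 0.01 km grid
```

Bisection needs only log2(range/tol) measurements, which is the same logarithmic-versus-linear saving that motivates adaptive path planning over gridded surveys.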
PFC design via FRIT Approach for Adaptive Output Feedback Control of Discrete-time Systems
NASA Astrophysics Data System (ADS)
Mizumoto, Ikuro; Takagi, Taro; Fukui, Sota; Shah, Sirish L.
This paper deals with the design of an adaptive output feedback control for discrete-time systems with a parallel feedforward compensator (PFC), which is designed to make the augmented controlled system almost strictly positive real (ASPR). A PFC design scheme based on the fictitious reference iterative tuning (FRIT) approach, using only an input/output experimental data set, is proposed for discrete-time systems in order to design an adaptive output feedback control system. Furthermore, the effectiveness of the proposed PFC design method is confirmed through numerical simulations by designing an adaptive control system with an adaptive neural network (NN) for an uncertain discrete-time system.
Adaptation-Induced Compression of Event Time Occurs Only for Translational Motion
Fornaciai, Michele; Arrighi, Roberto; Burr, David C.
2016-01-01
Adaptation to fast motion reduces the perceived duration of stimuli displayed at the same location as the adapting stimuli. Here we show that the adaptation-induced compression of time is specific for translational motion. Adaptation to complex motion, either circular or radial, did not affect perceived duration of subsequently viewed stimuli. Adaptation with multiple patches of translating motion caused compression of duration only when the motion of all patches was in the same direction. These results show that adaptation-induced compression of event-time occurs only for uni-directional translational motion, ruling out the possibility that the neural mechanisms of the adaptation occur at early levels of visual processing. PMID:27003445
NASA Astrophysics Data System (ADS)
Gupta, Shubhangi; Wohlmuth, Barbara; Helmig, Rainer
2016-05-01
We present an extrapolation-based semi-implicit multi-rate time stepping (MRT) scheme and a compound-fast MRT scheme for a naturally partitioned, multi-time-scale hydro-geomechanical hydrate reservoir model. The performance of the two MRT methods is evaluated in terms of speed-up and accuracy by comparison to an iteratively coupled solution scheme, and their advantages and disadvantages are discussed. We observe that the extrapolation-based semi-implicit method gives a higher speed-up but is strongly dependent on the relative time scales of the latent (slow) and active (fast) components. On the other hand, the compound-fast method is more robust and less sensitive to the relative time scales, but gives a lower speed-up than the semi-implicit method, especially when the relative time scales of the active and latent components are comparable.
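The compound-fast idea can be illustrated on a toy two-scale system: the latent (slow) variable takes one macro step, and the active (fast) variable is re-integrated with micro steps using interpolated slow values. The ODEs, rates and step sizes below are illustrative, not the hydrate-reservoir model.

```python
import numpy as np

# Multi-rate sketch: slow y' = -y, fast z' = -50 (z - y).  The slow
# component advances with one macro step of size H; the fast component
# takes n_micro sub-steps with y interpolated linearly across the
# macro interval.
def macro_step(y, z, H, n_micro, lam_slow=1.0, lam_fast=50.0):
    y_new = y + H * (-lam_slow * y)          # slow: one explicit Euler step
    h = H / n_micro
    for k in range(n_micro):                 # fast: micro steps, interpolated y
        y_mid = y + (k + 0.5) / n_micro * (y_new - y)
        z = z + h * (-lam_fast * (z - y_mid))
    return y_new, z

y, z, H = 1.0, 0.0, 0.02
for _ in range(50):                          # integrate to t = 1
    y, z = macro_step(y, z, H, n_micro=20)
print(y, z)   # compare with exact y = e^-1, z = (50/49)(e^-1 - e^-50)
```

The slow right-hand side is evaluated once per macro step rather than once per micro step, which is where the speed-up of multi-rate schemes comes from.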
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multi-core parallel supercomputers, next-generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the development of high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) a high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. This relieves the stringent time-step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
The two-step shape and timing of the last deglaciation in Antarctica
Jouzel, J.; Petit, J.R. |; Duclos, Y.
1995-04-01
The two-step character of the last deglaciation is well recognized in Western Europe, in Greenland and in the North Atlantic. For example, in Greenland, a gradual temperature decrease started at the Boelling (B) around 14.5 ky BP, continued through the Alleroed (A) and was followed by the cold Younger Dryas (YD) event, which terminated abruptly around 11.5 ky BP. Recent results suggest that this BA/YD sequence may have extended throughout the Northern Hemisphere, but evidence of a late-transition cooling is still poor for the Southern Hemisphere. Here we present a detailed isotopic record analyzed in a new ice core drilled at Dome B in East Antarctica that fully demonstrates the existence of an Antarctic cold reversal (ACR). These results suggest that the two-step shape of the last deglaciation has a worldwide character, but they also point to noticeable interhemispheric differences. Thus, the coldest part of the ACR, which shows a temperature drop about three times weaker than that recorded during the YD in Greenland, may have preceded the YD. Antarctica did not experience abrupt changes, and the two warming periods started there before they started in Greenland. The links between Southern and Northern Hemisphere climates throughout this period are discussed in the light of additional information derived from the Antarctic dust record. 87 refs., 5 figs.
Photodissociation of Acetylene and Acetone using Step-Scan Time-Resolved FTIR Emission Spectroscopy
NASA Technical Reports Server (NTRS)
McLaren, Ian A.; Wrobel, Jacek D.
1997-01-01
The photodissociation of acetylene and acetone was investigated as a function of added quenching gas pressure using step-scan time-resolved FTIR emission spectroscopy. The main components of the apparatus are a Bruker IFS88 step-scan Fourier transform infrared (FTIR) spectrometer coupled to a flow cell equipped with Welsh collection optics. Vibrationally excited C2H radicals were produced from the photodissociation of acetylene in the unfocused experiments. The infrared (IR) emission from these excited C2H radicals was investigated as a function of added argon pressure. Argon quenching rate constants for all C2H emission bands are of the order of 10(exp -13) cc/molecule.sec. Quenching of these radicals by acetylene is efficient, with a rate constant in the range of 10(exp -11) cc/molecule.sec. The relative intensity of the different C2H emission bands did not change with increasing argon or acetylene pressure. However, the overall IR emission intensity decreased, for example, by more than 50% when the argon partial pressure was raised from 0.2 to 2 Torr at a fixed precursor pressure of 160 mTorr. These observations provide evidence for the formation of a metastable C2H2 species, which is collisionally quenched by argon or acetylene. Problems encountered in the course of the experimental work are also described.
Structural damage evolution assessment using the regularised time step integration method
NASA Astrophysics Data System (ADS)
Chen, Hua-Peng; Maung, Than Soe
2014-09-01
This paper presents an approach to identify both the location and severity evolution of damage in engineering structures directly from measured dynamic response data. A relationship between the change in structural parameters such as stiffness caused by structural damage development and the measured dynamic response data such as accelerations is proposed, on the basis of the governing equations of motion for the original and damaged structural systems. Structural damage parameters associated with time are properly chosen to reflect both the location and severity development over time of damage in a structure. Basic equations are provided to solve the chosen time-dependent damage parameters, which are constructed by using the Newmark time step integration method without requiring a modal analysis procedure. The Tikhonov regularisation method incorporating the L-curve criterion for determining the regularisation parameter is then employed to reduce the influence of measurement errors in dynamic response data and then to produce stable solutions for structural damage parameters. Results for two numerical examples with various simulated damage scenarios show that the proposed method can accurately identify the locations of structural damage and correctly assess the evolution of damage severity from information on vibration measurements with uncertainties.
NASA Astrophysics Data System (ADS)
Roland, Teboh; Mavroidis, Panayiotis; Shi, Chengyu; Papanikolaou, Nikos
2010-05-01
System latency introduces geometric errors in the course of real-time target tracking radiotherapy. This effect can be minimized, for example by the use of predictive filters, but cannot be completely avoided. In this work, we present a convolution technique that can incorporate the effect as part of the treatment planning process. The method can be applied independently or in conjunction with the predictive filters to compensate for residual latency effects. The implementation was performed on TrackBeam (Initia Ltd, Israel), a prototype real-time target tracking system assembled and evaluated at our Cancer Institute. For the experimental system settings examined, a Gaussian distribution attributable to the TrackBeam latency was derived with σ = 3.7 mm. The TrackBeam latency, expressed as an average response time, was deduced to be 172 ms. Phantom investigations were further performed to verify the convolution technique. In addition, patient studies involving 4DCT volumes of previously treated lung cancer patients were performed to incorporate the latency effect in the dose prediction step. This also enabled us to effectively quantify the dosimetric and radiobiological impact of the TrackBeam and other higher latency effects on the clinical outcome of a real-time target tracking delivery.
NASA Astrophysics Data System (ADS)
Murthi, A.; Menon, S.; Sednev, I.
2011-12-01
An inherent difficulty in the ability of global climate models to accurately simulate precipitation lies in the use of a large time step, Δt (usually 30 minutes), to solve the governing equations. Since microphysical processes are characterized by small time scales compared to Δt, finite difference approximations used to advance microphysics equations suffer from numerical instability and large time truncation errors. With this in mind, the sensitivity of precipitation simulated by the atmospheric component of CESM, namely the Community Atmosphere Model (CAM 5.1), to the microphysics time step (τ) is investigated. Model integrations are carried out for a period of five years with a spin-up time of about six months for a horizontal resolution of 2.5 × 1.9 degrees and 30 levels in the vertical, with Δt = 1800 s. The control simulation with τ = 900 s is compared with one using τ = 300 s for accumulated precipitation and radiation budgets at the surface and top of the atmosphere (TOA), while keeping Δt fixed. Our choice of τ = 300 s is motivated by previous work on warm rain processes wherein it was shown that a value of τ around 300 s was necessary, but not sufficient, to ensure positive definiteness and numerical stability of the explicit time integration scheme used to integrate the microphysical equations. However, since the entire suite of microphysical processes is represented in our case, we suspect that this might impose additional restrictions on τ. The τ = 300 s case differs from the τ = 900 s case in large-scale accumulated rainfall by as much as 200 mm over certain regions of the globe. The spatial patterns of total accumulated precipitation using τ = 300 s are in closer agreement with satellite-observed precipitation, when compared to the τ = 900 s case. Differences are also seen in the radiation budget with the τ = 300 (900) s cases producing surpluses that range between 1 and 3 W/m2 at both the TOA and surface in the global
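The stability issue behind the τ restriction can be illustrated with a toy sub-stepping experiment. For a tracer decaying at rate r, explicit Euler multiplies the value by (1 − τr) each sub-step, so τr > 1 produces negative, oscillating values; with the illustrative rate below (r = 1/400 s⁻¹, not CAM's actual microphysics), τ = 900 s gives τr = 2.25 and fails, while τ = 300 s (τr = 0.75) stays positive and far more accurate.

```python
import numpy as np

def substep_microphysics(q0, rate, dt_host=1800.0, tau=300.0):
    """Advance a cloud-water-like tracer dq/dt = -rate*q over one host
    time step dt_host using n = dt_host/tau explicit Euler sub-steps."""
    n = int(round(dt_host / tau))
    q = q0
    for _ in range(n):
        q = q - tau * rate * q      # amplification factor (1 - tau*rate)
    return q

q0, rate = 1.0e-3, 1.0 / 400.0      # kg/kg and 1/s, illustrative values
exact = q0 * np.exp(-1800.0 * rate)
err_900 = abs(substep_microphysics(q0, rate, tau=900.0) - exact)
err_300 = abs(substep_microphysics(q0, rate, tau=300.0) - exact)
```

This is only the positivity/stability mechanism in isolation; the coupled suite of processes in CAM can, as the abstract suspects, impose stricter limits on τ.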
Improved tomographic reconstructions using adaptive time-dependent intensity normalization
Titarenko, Valeriy; Titarenko, Sofya; Withers, Philip J.; De Carlo, Francesco; Xiao, Xianghui
2010-01-01
The first processing step in synchrotron-based micro-tomography is the normalization of the projection images against the background, also referred to as a white field. Owing to time-dependent variations in illumination and defects in detection sensitivity, the white field is different from the projection background. In this case standard normalization methods introduce ring and wave artefacts into the resulting three-dimensional reconstruction. In this paper the authors propose a new adaptive technique accounting for these variations and allowing one to obtain cleaner normalized data and to suppress ring and wave artefacts. The background is modelled by the product of two time-dependent terms representing the illumination and detection stages. These terms are written as unknown functions, one scaled and shifted along a fixed direction (describing the illumination term) and one translated by an unknown two-dimensional vector (describing the detection term). The proposed method is applied to two sets (a stem of Salix variegata and a zebrafish, Danio rerio) acquired at the parallel-beam micro-tomography station 2-BM at the Advanced Photon Source, showing significant reductions in both ring and wave artefacts. In principle the method could be used to correct for time-dependent phenomena that affect other tomographic imaging geometries such as cone beam laboratory X-ray computed tomography. PMID:20724791
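A heavily simplified sketch of the adaptive idea: instead of dividing by the raw white field, fit a per-frame scale factor from columns known to contain only air, a crude stand-in for the paper's time-dependent illumination term (the translated detection term is omitted, and all array shapes and names are illustrative).

```python
import numpy as np

def adaptive_normalize(projection, white, air_cols):
    """Normalize one projection by a per-frame rescaled white field:
    the scale a is a least-squares fit p ~ a*w over air-only columns,
    absorbing frame-to-frame illumination drift."""
    p = projection[:, air_cols].ravel()
    w = white[:, air_cols].ravel()
    a = (w @ p) / (w @ w)
    return projection / (a * white)

# synthetic frame: white field, an absorbing object, and a 10% dimming
rng = np.random.default_rng(1)
white = 1000.0 + 50.0 * rng.random((64, 64))
sample = np.ones((64, 64))
sample[:, 20:44] = 0.6                    # object transmits 60%
drift = 0.9                               # illumination drop for this frame
proj = drift * white * sample
air = list(range(10)) + list(range(54, 64))
norm = adaptive_normalize(proj, white, air)
```

With plain division by `white`, the whole normalized frame would sit at 0.9 of its true level (a wave/ring artefact source); the fitted scale recovers transmittance 1.0 in air and 0.6 in the object.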
Detection and Correction of Step Discontinuities in Kepler Flux Time Series
NASA Technical Reports Server (NTRS)
Kolodziejczak, J. J.; Morris, R. L.
2011-01-01
PDC 8.0 includes an implementation of a new algorithm to detect and correct step discontinuities appearing in roughly one of every 20 stellar light curves during a given quarter. The majority of such discontinuities are believed to result from high-energy particles (either cosmic or solar in origin) striking the photometer and causing permanent local changes (typically -0.5%) in quantum efficiency, though a partial exponential recovery is often observed [1]. Since these features, dubbed sudden pixel sensitivity dropouts (SPSDs), are uncorrelated across targets they cannot be properly accounted for by the current detrending algorithm. PDC detrending is based on the assumption that features in flux time series are due either to intrinsic stellar phenomena or to systematic errors and that systematics will exhibit measurable correlations across targets. SPSD events violate these assumptions and their successful removal not only rectifies the flux values of affected targets, but demonstrably improves the overall performance of PDC detrending [1].
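A toy analogue of SPSD handling — not the PDC 8.0 algorithm, which also models the partial exponential recovery — is to flag the largest first difference against a robust noise scale and re-level the post-step segment:

```python
import numpy as np

def detect_and_correct_step(flux, threshold=5.0):
    """Detect a single step discontinuity as the largest first difference
    exceeding `threshold` robust sigmas (MAD-based), and correct it by
    re-levelling the post-step segment."""
    diffs = np.diff(flux)
    sigma = 1.4826 * np.median(np.abs(diffs - np.median(diffs)))
    i = int(np.argmax(np.abs(diffs)))
    if np.abs(diffs[i]) < threshold * sigma:
        return flux.copy(), None          # no significant step found
    corrected = flux.copy()
    corrected[i + 1:] -= diffs[i]         # undo the jump
    return corrected, i + 1

# synthetic light curve with a -0.5% sensitivity dropout at sample 200
rng = np.random.default_rng(2)
flux = 1.0 + 1e-4 * rng.standard_normal(500)
flux[200:] -= 0.005
fixed, where = detect_and_correct_step(flux)
```

Because SPSDs are uncorrelated across targets, a per-target detector of this kind is needed before cotrending; the correlation-based detrending the abstract describes cannot absorb them.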
Electric and hybrid electric vehicle study utilizing a time-stepping simulation
NASA Technical Reports Server (NTRS)
Schreiber, Jeffrey G.; Shaltens, Richard K.; Beremand, Donald G.
1992-01-01
The applicability of NASA's advanced power technologies to electric and hybrid vehicles was assessed using a time-stepping computer simulation to model electric and hybrid vehicles operating over the Federal Urban Driving Schedule (FUDS). Both the energy and power demands of the FUDS were taken into account and vehicle economy, range, and performance were addressed simultaneously. Results indicate that a hybrid electric vehicle (HEV) configured with a flywheel buffer energy storage device and a free-piston Stirling convertor fulfills the emissions, fuel economy, range, and performance requirements that would make it acceptable to the consumer. It is noted that an assessment to determine which of the candidate technologies are suited for the HEV application has yet to be made. A proper assessment should take into account the fuel economy and range, along with the driveability and total emissions produced.
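The time-stepping approach the study describes — marching through a drive cycle while tracking both power demand and buffer energy — can be sketched minimally as below. Every parameter and the short trapezoidal speed trace are illustrative; this is neither the FUDS cycle nor NASA's vehicle model.

```python
def simulate_hybrid(speeds_mps, dt=1.0, mass=1500.0, cda=0.7, rho=1.2,
                    crr=0.009, g=9.81, buffer_j=3.0e6, engine_w=15e3):
    """Time-stepping energy balance: at each step compute road-load power,
    draw it from a flywheel-like buffer, and recharge the buffer from a
    constant-output prime mover (Stirling-like). Regen is ignored."""
    soc, fuel_j = buffer_j, 0.0
    v_prev = speeds_mps[0]
    for v in speeds_mps[1:]:
        accel = (v - v_prev) / dt
        p_road = (0.5 * rho * cda * v**3        # aerodynamic drag
                  + crr * mass * g * v          # rolling resistance
                  + mass * accel * v)           # inertial power
        p_road = max(p_road, 0.0)
        soc = min(soc + (engine_w - p_road) * dt, buffer_j)
        fuel_j += engine_w * dt                 # engine runs continuously
        v_prev = v
    return soc, fuel_j

# toy trace: accelerate to 15 m/s, cruise 30 s, decelerate, stop
trace = ([0.0] * 2 + [i * 1.5 for i in range(1, 11)] + [15.0] * 30
         + [15.0 - i * 1.5 for i in range(1, 11)] + [0.0] * 2)
soc, fuel = simulate_hybrid(trace)
```

The point of the buffer is visible even in this sketch: peak road load during acceleration (~37 kW) far exceeds the 15 kW prime mover, so the buffer covers transients while the engine runs near a steady, efficient set point.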
Comparison of Fixed and Variable Time Step Trajectory Integration Methods for Cislunar Trajectories
NASA Technical Reports Server (NTRS)
Weeks, Michael W.; Thrasher, Stephen W.
2007-01-01
Due to the nonlinear nature of the Earth-Moon-Sun three-body problem and non-spherical gravity, CEV cislunar targeting algorithms will require many propagations in their search for a desired trajectory. For on-board targeting especially, the algorithm must have a simple, fast, and accurate propagator to calculate a trajectory with reasonable computation time, and still be robust enough to remain stable in the various flight regimes that the CEV will experience. This paper compares Cowell's method with a fourth-order Runge-Kutta integrator (RK4), Encke's method with a fourth-order Runge-Kutta-Nyström integrator (RKN4), and a method known as Multi-Conic. Additionally, the study includes the Bond-Gottlieb 14-element method (BG14) and extends the investigation of Encke-Nyström methods to integrators of higher order and with variable step size.
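The baseline of the comparison, Cowell's method with fixed-step RK4, can be sketched for the unperturbed two-body problem as below (no lunar, solar, or non-spherical gravity terms; the circular-orbit test case is illustrative). Halving the step size shrinks the error by roughly 2⁴, the fourth-order behaviour that motivates variable-step alternatives.

```python
import numpy as np

MU = 398600.4418  # Earth's GM, km^3/s^2

def two_body(_t, y):
    """Cowell formulation: direct integration of r'' = -mu * r / |r|^3."""
    r = y[:3]
    return np.concatenate([y[3:], -MU * r / np.linalg.norm(r) ** 3])

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def propagate(y0, t_end, h):
    t, y = 0.0, y0.copy()
    while t < t_end - 1e-12:
        step = min(h, t_end - t)      # clip the final step to hit t_end
        y = rk4_step(two_body, t, y, step)
        t += step
    return y

# circular LEO at r = 7000 km, propagated for one orbital period
r0 = 7000.0
y0 = np.array([r0, 0.0, 0.0, 0.0, np.sqrt(MU / r0), 0.0])
period = 2 * np.pi * np.sqrt(r0**3 / MU)
y_coarse = propagate(y0, period, h=60.0)
y_fine = propagate(y0, period, h=10.0)
```

After one full period the state should return to `y0`, so the closure error is a convenient accuracy metric for comparing step sizes.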
Nikzad, Nasim; Sahari, Mohammad A; Vanak, Zahra Piravi; Safafar, Hamed; Boland-nazar, Seyed A
2013-08-01
Weight, oil, fatty acid, tocopherol, polyphenol, and sterol properties of five olive cultivars (Zard, Fishomi, Ascolana, Amigdalolia, and Conservalia) were studied during the crude, lye-treatment, washing, fermentation, and pasteurization steps. Results showed: oil percentage was highest in Ascolana (crude step) and lowest in Fishomi (pasteurization step); oleic, palmitic, linoleic, and stearic acids predominated in all cultivars throughout processing; the largest changes in saturated and unsaturated fatty acids occurred in the fermentation step; the ω3/ω6 ratio was highest in Ascolana (washing step) and lowest in Zard (pasteurization step); tocopherol was highest in Amigdalolia and lowest in Fishomi, with the major losses occurring in the lye step; polyphenols were highest in Ascolana (crude step) and lowest in Zard and Ascolana (pasteurization step); the major polyphenol damage across cultivars occurred during the lye step, in which the polyphenol content fell to one-tenth of its initial value; sterols did not change during processing. A review of olive patents shows that many fruit characteristics, such as oil quality, quantity, and fatty acid composition, can be altered by changing the cultivar and the process. PMID:23688142
Adaptive time-delayed stabilization of steady states and periodic orbits.
Selivanov, Anton; Lehnert, Judith; Fradkov, Alexander; Schöll, Eckehard
2015-01-01
We derive adaptive time-delayed feedback controllers that stabilize fixed points and periodic orbits. First, we develop an adaptive controller for stabilization of a steady state by applying the speed-gradient method to an appropriate goal function and prove global asymptotic stability of the resulting system. Through an example, we show that the advantage of the adaptive controller over the nonadaptive one lies in a smaller controller gain. Second, we propose adaptive time-delayed algorithms for stabilization of periodic orbits. Their efficiency is confirmed by local stability analysis. Numerical examples demonstrate the applicability of the proposed controllers. PMID:25679681
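The speed-gradient adaptation idea for the steady-state case can be illustrated with a simplified, delay-free analogue (not the paper's controller): for the unstable scalar plant dx/dt = a·x with feedback u = -k(t)·x, growing the gain as dk/dt = γ·x² drives x to the fixed point x* = 0 without knowing a. The values of a and γ below are illustrative.

```python
def adaptive_stabilize(a=1.0, gamma=5.0, dt=1e-3, t_end=20.0):
    """Speed-gradient-style adaptive stabilization of the unstable steady
    state x* = 0 of dx/dt = a*x + u, with u = -k(t)*x and the gain
    adapted as dk/dt = gamma*x^2 (forward-Euler integration)."""
    x, k = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        u = -k * x
        x += dt * (a * x + u)       # plant update
        k += dt * gamma * x * x     # gain adaptation: monotone, stops as x -> 0
    return x, k

x_final, k_final = adaptive_stabilize()
```

The gain stops growing once the state settles, so it converges to a modest value just above the destabilizing rate a — the "smaller controller gain" advantage the abstract highlights over a fixed, conservatively large gain.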