Retroactive Adjustment of Perceived Time
ERIC Educational Resources Information Center
Patel, Minal; Chait, Maria
2011-01-01
Accurately timing acoustic events in dynamic scenes is fundamental to scene analysis. To detect events in busy scenes, listeners must often identify a change in the "pattern" of ongoing fluctuation, resulting in many ubiquitous events being detected later than when they occurred. This raises the question of how delayed detection time affects the…
Postural adjustment errors during lateral step initiation in older and younger adults
Sparto, Patrick J.; Fuhrman, Susan I.; Redfern, Mark S.; Perera, Subashan; Jennings, J. Richard; Furman, Joseph M.
2016-01-01
The purpose was to examine age differences and varying levels of step response inhibition on the performance of a voluntary lateral step initiation task. Seventy older adults (70 – 94 y) and twenty younger adults (21 – 58 y) performed visually-cued step initiation conditions based on direction and spatial location of arrows, ranging from a simple choice reaction time task to a perceptual inhibition task that included incongruous cues about which direction to step (e.g. a left pointing arrow appearing on the right side of a monitor). Evidence of postural adjustment errors and step latencies were recorded from vertical ground reaction forces exerted by the stepping leg. Compared with younger adults, older adults demonstrated greater variability in step behavior, generated more postural adjustment errors during conditions requiring inhibition, and had greater step initiation latencies that increased more than younger adults as the inhibition requirements of the condition became greater. Step task performance was related to clinical balance test performance more than executive function task performance. PMID:25595953
Grief: Difficult Times, Simple Steps.
ERIC Educational Resources Information Center
Waszak, Emily Lane
This guide presents techniques to assist others in coping with the loss of a loved one. Using the language of a layperson, the book contains more than 100 tips for caregivers or loved ones. A simple step is presented on each page, followed by reasons and instructions for each step. Chapters include: "What to Say"; "Helpful Things to Do"; "Dealing…
Risk-adjusted monitoring of survival times
Sego, Landon H.; Reynolds, Marion R.; Woodall, William H.
2009-02-26
We consider the monitoring of clinical outcomes, where each patient has a different risk of death prior to undergoing a health care procedure. We propose a risk-adjusted survival time CUSUM chart (RAST CUSUM) for monitoring clinical outcomes where the primary endpoint is a continuous, time-to-event variable that may be right censored. Risk adjustment is accomplished using accelerated failure time regression models. We compare the average run length performance of the RAST CUSUM chart to the risk-adjusted Bernoulli CUSUM chart, using data from cardiac surgeries to motivate the details of the comparison. The comparisons show that the RAST CUSUM chart is more efficient at detecting a sudden decrease in the odds of death than the risk-adjusted Bernoulli CUSUM chart, especially when the fraction of censored observations is not too high. We also discuss the implementation of a prospective monitoring scheme using the RAST CUSUM chart.
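The reset-at-zero recursion underlying such control charts can be sketched in a few lines. This is an illustrative one-sided CUSUM only: the per-patient scores and the signal threshold below are invented, and the risk adjustment via accelerated failure time models described in the abstract is not reproduced.

```python
# Minimal sketch of a one-sided (upper) CUSUM monitoring scheme.  Scores would
# normally be per-patient log-likelihood-ratio contributions; here they are
# made-up numbers with a simulated shift starting at index 5.

def cusum(scores, threshold):
    """Accumulate positive evidence; signal when the statistic crosses threshold.

    Returns (signal_index or None, list of CUSUM statistics)."""
    stats, c = [], 0.0
    for i, w in enumerate(scores):
        c = max(0.0, c + w)          # reset-at-zero upper CUSUM update
        stats.append(c)
        if c >= threshold:
            return i, stats
    return None, stats

# Hypothetical scores: in-control before index 5, shifted upward afterwards.
scores = [-0.2, -0.1, -0.3, 0.1, -0.2, 0.8, 0.9, 0.7, 1.0, 0.6]
alarm, stats = cusum(scores, threshold=2.5)
```

The chart stays near zero while the process is in control and signals shortly after the shift.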
Preparatory state and postural adjustment strategies for choice reaction step initiation.
Watanabe, Tatsunori; Ishida, Kazuto; Tanabe, Shigeo; Nojima, Ippei
2016-09-22
A loud auditory stimulus (LAS) presented simultaneously with a visual imperative stimulus can reduce reaction time (RT) by automatically triggering a movement prepared in the brain and has been used to investigate movement preparation. It is still under debate whether or not a response is prepared in advance in RT tasks involving choice responses. The purpose of the present study was to investigate the preparatory state of anticipatory postural adjustments (APAs) during a choice reaction step initiation. Thirteen young adults were asked to step forward in response to a visual imperative stimulus in two choice stepping conditions: (i) the responding side is not known and must be selected and (ii) the responding side is known but whether to initiate or inhibit a step response must be selected. LAS was presented randomly and simultaneously with the visual imperative stimulus. LAS significantly increased the occurrence rates of inappropriately initiated APAs while reducing the RTs of correct and incorrect trials in both task conditions, demonstrating that LAS triggered the prepared APA automatically. This observation suggests that APAs are prepared in advance and withheld from release until the appropriate timing during a choice reaction step initiation. The preparatory activity of APAs might be modulated by the inhibitory activity required by the choice tasks. The preparation strategy may be chosen for fast responses and is judged most suitable to comply with the tasks because inappropriately initiated APAs can be corrected without making complete stepping errors. PMID:27393247
Simulating system dynamics with arbitrary time step
NASA Astrophysics Data System (ADS)
Kantorovich, L.
2007-02-01
We suggest a dynamic simulation method that allows efficient and realistic modeling of kinetic processes, such as atomic diffusion, in which time has its actual meaning. Our method is similar in spirit to widely used kinetic Monte Carlo (KMC) techniques; however, in our approach, the time step can be chosen arbitrarily. This has an advantage in some cases, e.g., when the transition rates change with time sufficiently fast over the period of the KMC time step (e.g., due to time dependence of some external factors influencing kinetics such as a moving scanning probe microscopy tip or an external time-dependent field) or when the clock time is set by some external conditions, and it is convenient to use equal time steps instead of the random choice of the KMC algorithm in order to build up probability distribution functions. We show that an arbitrary choice of the time step can be afforded by building up the complete list of events including the “residence site” and multihop transitions. The idea of the method is illustrated in a simple “toy” model of a finite one-dimensional lattice of potential wells with unequal jump rates to either side, which can be studied analytically. We show that, for any choice of the time step, our general kinetics method reproduces exactly the solution of the corresponding master equations. The final kinetics also matches the standard KMC, and this allows better understanding of this algorithm, in which the time step is chosen in a certain way and the system always advances by a single hop.
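The fixed-time-step idea can be sketched on a single random walker. This toy keeps only single hops plus a "residence" (stay) event to first order in the time step; the full method in the abstract additionally enumerates multihop transitions so that the time step can be made arbitrarily large. The rates, step count, and time step below are invented for illustration.

```python
import random

# Toy fixed-time-step kinetic Monte Carlo for one walker on a 1-D lattice.
# At each tick the walker hops left with probability r_left*dt, right with
# probability r_right*dt, and otherwise takes the "residence" (stay) event.
# Valid only while (r_left + r_right)*dt < 1 (first-order approximation).

def kmc_fixed_dt(steps, dt, r_left, r_right, seed=0):
    rng = random.Random(seed)
    x = 0
    for _ in range(steps):
        u = rng.random()
        if u < r_left * dt:                  # hop left
            x -= 1
        elif u < (r_left + r_right) * dt:    # hop right
            x += 1
        # otherwise: residence event, walker stays put
    return x

final = kmc_fixed_dt(steps=10000, dt=0.01, r_left=1.0, r_right=3.0)
```

With these made-up rates the expected drift after 10000 ticks is (r_right - r_left)·dt·steps = 200 lattice sites.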
Delval, A; Dujardin, K; Tard, C; Devanne, H; Willart, S; Bourriez, J-L; Derambure, P; Defebvre, L
2012-09-01
Step initiation is associated with anticipatory postural adjustments (APAs) that vary according to the speed of the first step. When step initiation is elicited by a "go" signal (i.e. in a reaction time task), the presentation of an unpredictable, intense, acoustic startling stimulus (engaging a subcortical mechanism) simultaneously with or just before the imperative "go" signal is able to trigger early-phase APAs. The aim of the present study was to better understand the mechanisms underlying APAs during step initiation. We hypothesized that the early release of APAs by low-intensity, non-startling stimuli delivered long before an imperative "go" signal indicates the involvement of several different mechanisms in triggering APAs (and not just acoustic reflexes triggering brainstem structures). Fifteen healthy subjects were asked to respond to an imperative visual "go" signal by initiating a step with their right leg. A brief, binaural 40, 80 or 115 dB auditory stimulus was given 1.4 s before the "go" signal. Participants were instructed not to respond to the auditory stimulus. The centre of pressure trajectory and the electromyographic activity of the orbicularis oculi, sternocleidomastoid and tibialis anterior muscles were recorded. All three intensities of the auditory stimulus were able to evoke low-amplitude, short APAs without subsequent step execution. The louder the stimulus, the more frequent the elicitation. Depending on the intensity of the stimulus, APAs prior to step initiation can be triggered without the evocation of a startle response or an acoustic blink. Greater reaction times for these APAs were observed for non-startling stimuli. This observation suggested the involvement of pathways that did not involve the brainstem as a "prime mover". PMID:22626643
Technology Transfer Automated Retrieval System (TEKTRAN)
The objective of this report is to describe (a) the basis for and implementation of a data processing step called salt adjustment that was performed on designated foods in USDA dietary intake surveys from 1985 through 2008, (b) the rationale for discontinuing the step, and (c) the impact and implica...
An automatic step adjustment method for average power analysis technique used in fiber amplifiers
NASA Astrophysics Data System (ADS)
Liu, Xue-Ming
2006-04-01
An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two distinct merits: higher-order accuracy and an ASA mechanism. It can therefore significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared with the APA technique, the proposed method increases the computing speed by more than a hundredfold at the same error level. By computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude for the same number of amplifying sections. The proposed method can also rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.
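The generic idea of automatic step adjustment can be illustrated with classic step doubling on a scalar ODE: compare one full step against two half steps, reject and shrink the step when the discrepancy exceeds a tolerance, and grow it when the discrepancy is comfortably small. This is only the step-control idea; the paper's ASA method is specific to the average power analysis equations of fiber amplifiers, and the test problem, tolerance, and step sizes below are invented.

```python
import math

# Step-doubling error control on an explicit Euler base step for y' = f(y).
# One full step vs. two half steps gives a local error estimate; the step is
# halved on rejection and doubled when the estimate is well under tolerance.

def adaptive_integrate(f, y0, t0, t1, tol, h0=0.1):
    t, y, h = t0, y0, h0
    while t < t1:
        h = min(h, t1 - t)                    # do not overshoot the end time
        full = y + h * f(y)                   # one Euler step of size h
        half = y + 0.5 * h * f(y)
        two = half + 0.5 * h * f(half)        # two Euler steps of size h/2
        err = abs(two - full)
        if err > tol:
            h *= 0.5                          # reject: shrink the step
            continue
        y, t = two, t + h                     # accept the more accurate value
        if err < tol / 4:
            h *= 2.0                          # comfortable: grow the step
    return y

y_end = adaptive_integrate(lambda y: -y, 1.0, 0.0, 2.0, tol=1e-4)
exact = math.exp(-2.0)
```

The step size settles automatically: it shrinks while the solution changes quickly and grows as the decay flattens out.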
Extrapolated implicit-explicit time stepping.
Constantinescu, E. M.; Sandu, A.; Mathematics and Computer Science; Virginia Polytechnic Inst. and State Univ.
2010-01-01
This paper constructs extrapolated implicit-explicit time stepping methods that allow one to efficiently solve problems with both stiff and nonstiff components. The proposed methods are based on Euler steps and can provide very high order discretizations of ODEs, index-1 DAEs, and PDEs in the method-of-lines framework. Implicit-explicit schemes based on extrapolation are simple to construct, easy to implement, and straightforward to parallelize. This work establishes the existence of perturbed asymptotic expansions of global errors, explains the convergence orders of these methods, and studies their linear stability properties. Numerical results with stiff ODE, DAE, and PDE test problems confirm the theoretical findings and illustrate the potential of these methods to solve multiphysics multiscale problems.
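The base scheme that such extrapolation methods build on is a first-order implicit-explicit Euler step, which can be shown on a split test problem. Here the stiff linear term is treated implicitly and the nonstiff forcing explicitly; the extrapolation that raises the order is not reproduced, and the coefficients below are invented.

```python
# First-order IMEX (implicit-explicit) Euler for y' = -lam*y + g(t): the stiff
# linear term -lam*y is implicit, the nonstiff forcing g(t) is explicit.  For
# this linear problem the implicit solve is a closed-form division.

def imex_euler(y0, lam, g, dt, n):
    """Integrate y' = -lam*y + g(t) over n steps of size dt."""
    y, t = y0, 0.0
    for _ in range(n):
        # solve y_new = y + dt*(g(t) - lam*y_new) for y_new
        y = (y + dt * g(t)) / (1.0 + lam * dt)
        t += dt
    return y

# Stiff test problem: lam*dt = 100, far beyond any explicit stability limit.
# The steady state is g/lam = 2.0, which the implicit treatment reaches stably.
y_end = imex_euler(y0=10.0, lam=1000.0, g=lambda t: 2000.0, dt=0.1, n=50)
```

An explicit Euler step would blow up at this step size; the IMEX step damps toward the steady state.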
Simulating stochastic dynamics using large time steps.
Corradini, O; Faccioli, P; Orland, H
2009-12-01
We present an approach to investigate the long-time stochastic dynamics of multidimensional classical systems in contact with a heat bath. When the potential energy landscape is rugged, the kinetics displays a decoupling of short- and long-time scales, and both molecular dynamics and Monte Carlo (MC) simulations are generally inefficient. Using a field theoretic approach, we perform analytically the average over the short-time stochastic fluctuations. This way, we obtain an effective theory which generates the same long-time dynamics as the original theory but has a lower time-resolution power. Such an approach is used to develop an improved version of the MC algorithm, which is particularly suitable to investigate the dynamics of rare conformational transitions. In the specific case of molecular systems at room temperature, we show that elementary integration time steps used to simulate the effective theory can be chosen a factor of approximately 100 larger than those used in the original theory. Our results are illustrated and tested on a simple system characterized by a rugged energy landscape. PMID:20365123
Time to pause before the next step
Siemon, R.E.
1998-12-31
Many scientists, who have staunchly supported ITER for years, are coming to realize it is time to further rethink fusion energy's development strategy. Specifically, as was suggested by Grant Logan and Dale Meade, and in keeping with the restructuring of 1996, a theme of better, cheaper, faster fusion would serve the program more effectively than "demonstrating controlled ignition...and integrated testing of the high-heat-flux and nuclear components required to utilize fusion energy...", which are the important ingredients of ITER's objectives. The author has personally shifted his view for a mixture of technical and political reasons. On the technical side, he senses that through advanced tokamak research, spherical tokamak research, and advanced stellarator work, scientists are coming to a new understanding that might make a burning-plasma device significantly smaller and less expensive. Thus waiting for a few years, even ten years, seems prudent. Scientifically, there is fascinating physics to be learned through studies of burning plasma on a tokamak. And clearly if one wishes to study burning plasma physics in a sustained plasma, there is no other configuration with an adequate database on which to proceed. But what is the urgency of moving towards an ITER-like step focused on burning plasma? Some of the arguments put forward and the counter arguments are discussed here.
Seven Steps to On-Time Delivery.
ERIC Educational Resources Information Center
Konchar, Mark; Sanvido, Victor
1999-01-01
Describes seven steps to consider when making project-delivery decisions that include defining the school district's goals and profile, selecting the project-delivery system and procurement method, selecting the project team and contract type, and developing and confirming the facility program. Concluding comments address the district review of…
Insect-computer hybrid legged robot with user-adjustable speed, step length and walking gait.
Cao, Feng; Zhang, Chao; Choo, Hao Yu; Sato, Hirotaka
2016-03-01
We have constructed an insect-computer hybrid legged robot using a living beetle (Mecynorrhina torquata; Coleoptera). The protraction/retraction and levation/depression motions in both forelegs of the beetle were elicited by electrically stimulating eight corresponding leg muscles via eight pairs of implanted electrodes. To perform a defined walking gait (e.g., gallop), different muscles were individually stimulated in a predefined sequence using a microcontroller. Different walking gaits were performed by reordering the applied stimulation signals (i.e., applying different sequences). By varying the duration of the stimulation sequences, we successfully controlled the step frequency and hence the beetle's walking speed. To the best of our knowledge, this paper presents the first demonstration of living insect locomotion control with a user-adjustable walking gait, step length and walking speed. PMID:27030043
Age effects on the control of dynamic balance during step adjustments under temporal constraints.
Nakano, Wataru; Fukaya, Takashi; Kobayashi, Satomi; Ohashi, Yukari
2016-06-01
This study investigated the age effects on the control of dynamic balance during step adjustments under temporal constraints. Fifteen young adults and 14 older adults avoided a virtual white planar obstacle by lengthening or shortening their steps under free or constrained conditions. In the anterior-posterior direction, older adults demonstrated significantly decreased center of mass velocity at the swing foot contact under temporal constraints. Additionally, the distances between the 'extrapolated center of mass' position and base of support at the swing foot contact were greater in older adults than young adults. In the mediolateral direction, center of mass displacement was significantly increased in older adults compared with young adults. Consequently, older adults showed a significantly increased step width at the swing foot contact in the constraint condition. Overall, these data suggest that older adults demonstrate a conservative strategy to maintain anterior-posterior stability. By contrast, although older adults are able to modulate their step width to maintain mediolateral dynamic balance, age-related changes in mediolateral balance control under temporal constraints may increase the risk of falls in the lateral direction during obstacle negotiation. PMID:26852293
Collocation and Galerkin Time-Stepping Methods
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2011-01-01
We study the numerical solutions of ordinary differential equations by one-step methods where the solution at t_n is known and that at t_(n+1) is to be calculated. The approaches employed are collocation, continuous Galerkin (CG) and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t_(n+1)), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
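The simplest member of the Radau IIA collocation family mentioned above is the one-stage case, which collocates at the single right-Radau point t_(n+1) and is just backward Euler. A sketch on the linear test equation y' = λy (with λ, step size, and horizon chosen ad hoc):

```python
import math

# One-stage Radau IIA collocation = backward Euler.  For y' = lam*y the
# implicit stage equation y_new = y + dt*lam*y_new solves in closed form.

def radau1(y0, lam, dt, n):
    y = y0
    for _ in range(n):
        y = y / (1.0 - lam * dt)    # implicit solve for the linear test problem
    return y

approx = radau1(y0=1.0, lam=-2.0, dt=0.01, n=100)   # integrate to t = 1
exact = math.exp(-2.0)                               # true solution e^(lam*t)
```

As a first-order method its error is O(dt); higher-stage Radau IIA schemes collocate at s right-Radau points and reach order 2s-1.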
Time scaling relations for step bunches from models with step-step attractions (B1-type models)
NASA Astrophysics Data System (ADS)
Krasteva, A.; Popova, H.; Akutsu, N.; Tonchev, V.
2016-03-01
The step bunching instability is studied in three models of step motion defined in terms of ordinary differential equations (ODE). The source of instability in these models is step-step attraction; it is opposed by step-step repulsion, and the developing surface patterns reflect the balance between the two. The first model, TE2, is a generalization of the seminal model of Tersoff et al. (1995). The second one, LW2, is obtained from the model of Liu and Weeks (1998) by using the repulsion term to construct the attraction term, while retaining the possibility to change the parameters of the two independently. The third model, MM2, is a minimal one constructed ad hoc, and in this article it plays a central role. A new scheme for scaling the ODE in vicinal studies is applied to decipher the pre-factors in the time-scaling relations. In all these models the patterned surface is self-similar - only one length scale is necessary to describe its evolution (hence B1-type). The bunches form finite angles with the terraces. Integrating numerically the equations for step motion and changing the parameters systematically, we obtain the overall dependence of the time-scaling exponent β on the power of the step-step attractions p as β = 1/(3+p) for MM2 and hypothesize, based on a restricted set of data, that it is β = 1/(5+p) for LW2 and TE2.
Improving quality: one step at a time.
Blankson-seck, N; Butta, P
1999-01-01
The notion that health care workers have the power to improve the quality of their services is a key to AVSC's efforts worldwide. The COPE process, AVSC's low-cost intervention for improving quality at service sites, brings together supervisors and staff at all levels to identify barriers to quality services and helps them find solutions they can implement with their own resources. For example, a hospital in Tanzania had tried unsuccessfully to obtain the funds to repair or replace broken equipment. Using the COPE process, the hospital used available funds to send a technician for training in maintenance and repair. Now everything from blood pressure equipment to bedsprings is repaired promptly, and quality has improved. Another hospital in Tanzania coped with the problem of broken bedsprings (patients were putting mattresses on the floor) by using readily available wire mesh to make repairs. In Kenya, the lack of running water forced staff to collect water from a cistern, taking time from their other responsibilities. During a COPE meeting to resolve the problem the staff bemoaned the fact that they did not have the funds to replace the water system. Then the gardener told the group that all they needed to do was fix a broken pipe. The repair was made at minimal cost, and the water supply was restored. The COPE process reveals that health care staff not only can identify obstacles to quality, they often know the cause of the problem and can offer the best solutions. PMID:12295155
5 CFR 531.244 - Adjusting a GM employee's rate at the time of an annual pay adjustment.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Adjusting a GM employee's rate at the... Rules for Gm Employees § 531.244 Adjusting a GM employee's rate at the time of an annual pay adjustment... agency must set the new GS rate for a GM employee as follows: (1) For a GM employee whose GS rate...
5 CFR 531.244 - Adjusting a GM employee's rate at the time of an annual pay adjustment.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Adjusting a GM employee's rate at the time of an annual pay adjustment. 531.244 Section 531.244 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER THE GENERAL SCHEDULE Determining Rate of Basic Pay Special Rules for Gm Employees § 531.244 Adjusting...
NASA Astrophysics Data System (ADS)
Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken
2015-04-01
We investigate the operational utility of fine time step hydro-climatic information using a large catchment data set. The originality of this data set lies in the availability of precipitation data from the 6-minute rain gauges of Météo-France, and in the size of the catchment set (217 French catchments in total). The rainfall-runoff model used (GR4) has been adapted to hourly and sub-hourly time steps (up to 6-minute) from the daily time step version (Perrin et al., 2003). The model is applied at different time steps ranging from 6-minute to 1 day (6-, 12-, 30-minute, 1-, 3-, 6-, 12-hour and 1 day) and the evolution of model performance for each catchment is evaluated at the daily time step by aggregation of model outputs. Three classes of behavior are found according to the trend of model performance as the time step becomes finer: (i) catchments presenting an improvement of model performance; (ii) catchments with a model performance insensitive to the time step; (iii) catchments for which the performance even deteriorates as the time step becomes finer. The reasons behind these different trends are investigated from a hydrological point of view, by relating the model sensitivity to data at finer time step to catchment descriptors. References: Perrin, C., C. Michel and V. Andréassian (2003), "Improvement of a parsimonious model for streamflow simulation", Journal of Hydrology, 279(1-4): 275-289.
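The evaluation idea described here, namely running a model at a fine time step, aggregating its output to daily, and scoring it against daily observations, can be sketched with a Nash-Sutcliffe efficiency (NSE) computation. The series below are invented; the GR4 model itself is not reproduced.

```python
# Aggregate a fine-time-step simulated series to daily totals and score it
# against daily observations with the Nash-Sutcliffe efficiency (NSE), the
# usual skill score in rainfall-runoff modelling (1 = perfect, <= 0 = no
# better than the observed mean).

def aggregate_daily(series, per_day):
    return [sum(series[i:i + per_day]) for i in range(0, len(series), per_day)]

def nse(sim, obs):
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

hourly_sim = [1.0] * 24 + [2.0] * 24 + [3.0] * 24   # 3 invented days of hourly flows
daily_obs = [25.0, 50.0, 69.0]                       # invented daily observations
daily_sim = aggregate_daily(hourly_sim, per_day=24)
score = nse(daily_sim, daily_obs)
```

Comparing such daily-aggregated scores across model runs at 6-minute, hourly, and daily steps is what separates the three catchment classes in the abstract.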
Time step and shadow Hamiltonian in molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Kim, Sangrak
2015-08-01
We examine the time step and the shadow Hamiltonian of symplectic algorithms for a bound system, using a simple harmonic oscillator as a specific example. The phase space trajectory moves on the hyperplane of a constant shadow Hamiltonian. We find a stationary condition for the time step τ_n with which the motion repeats itself on the phase space with a period n. Interestingly, the time steps satisfying the stationary condition turn out to be independent of the symplectic algorithm chosen. Furthermore, the phase volume enclosed by the phase trajectory is given by nτ_nẼ_n, where Ẽ_n is the initial shadow energy of the corresponding symplectic algorithm.
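The hallmark of a shadow Hamiltonian is that a symplectic integrator's energy error stays bounded instead of drifting, because the numerical trajectory exactly conserves a nearby Hamiltonian. A velocity-Verlet sketch on the harmonic oscillator (unit mass and unit frequency assumed, step size and run length invented):

```python
# Velocity-Verlet (symplectic) on the unit harmonic oscillator: force = -x.
# The integrator conserves a "shadow" Hamiltonian close to the true energy,
# so |E(t) - E(0)| oscillates within an O(dt^2) band rather than drifting.

def verlet_energy_drift(x0, v0, dt, n):
    x, v = x0, v0
    e0 = 0.5 * (v * v + x * x)          # initial true energy
    worst = 0.0
    for _ in range(n):
        v += 0.5 * dt * (-x)            # half kick
        x += dt * v                     # drift
        v += 0.5 * dt * (-x)            # half kick
        e = 0.5 * (v * v + x * x)
        worst = max(worst, abs(e - e0))
    return worst

drift = verlet_energy_drift(x0=1.0, v0=0.0, dt=0.1, n=10000)
```

Even after 10000 steps (about 160 oscillation periods) the worst-case energy deviation remains a small fraction of the initial energy of 0.5.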
Obtaining Runge-Kutta Solutions Between Time Steps
NASA Technical Reports Server (NTRS)
Horn, M. K.
1984-01-01
New interpolation method used with existing Runge-Kutta algorithms. Algorithm evaluates solution at intermediate point within integration step. Only a few additional computations required to produce intermediate solution data. Runge-Kutta method provides accurate solution with larger time steps than allowable in other methods.
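One common way to obtain solutions between Runge-Kutta time steps is a cubic Hermite interpolant built from the step's endpoint values and slopes, which costs almost nothing beyond the step itself. The report describes its own interpolation scheme; Hermite interpolation is used here only as a generic stand-in, on an invented test problem.

```python
import math

# One classical RK4 step, then dense output inside the step via cubic Hermite
# interpolation using the endpoint values (y0, y1) and slopes (f0, f1).

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def hermite(y0, f0, y1, f1, h, s):
    """Cubic Hermite value at fraction s in [0, 1] of the step."""
    a = 1 - 3 * s * s + 2 * s ** 3      # basis for y0
    b = s - 2 * s * s + s ** 3          # basis for h*f0
    c = 3 * s * s - 2 * s ** 3          # basis for y1
    d = -s * s + s ** 3                 # basis for h*f1
    return a * y0 + h * b * f0 + c * y1 + h * d * f1

f = lambda t, y: -y                     # test problem y' = -y, y(0) = 1
h = 0.2
y1 = rk4_step(f, 0.0, 1.0, h)
mid = hermite(1.0, f(0.0, 1.0), y1, f(h, y1), h, 0.5)
exact_mid = math.exp(-0.1)              # true solution at mid-step
```

The interpolated mid-step value agrees with the exact solution to roughly the interpolant's O(h^4) accuracy, with no extra integration.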
Improving Leadership and Management Practices: One Step at a Time
ERIC Educational Resources Information Center
Bella, Jill
2008-01-01
Taking small steps toward change is a sensible way to improve the leadership and management practices in an early care and education program. A director must be able to make continuous improvements without alienating staff by asking them to make drastic changes that seem overwhelming and unachievable. Taking on change one step at a time is a way…
Margul, Daniel T; Tuckerman, Mark E
2016-05-10
Molecular dynamics remains one of the most widely used computational tools in the theoretical molecular sciences to sample an equilibrium ensemble distribution and/or to study the dynamical properties of a system. The efficiency of a molecular dynamics calculation is limited by the size of the time step that can be employed, which is dictated by the highest frequencies in the system. However, many properties of interest are connected to low-frequency, long time-scale phenomena, requiring many small time steps to capture. This ubiquitous problem can be ameliorated by employing multiple time-step algorithms, which assign different time steps to forces acting on different time scales. In such a scheme, fast forces are evaluated more frequently than slow forces, and as the former are often computationally much cheaper to evaluate, the savings can be significant. Standard multiple time-step approaches are limited, however, by resonance phenomena, wherein motion on the fastest time scales limits the step sizes that can be chosen for the slower time scales. In atomistic models of biomolecular systems, for example, the largest time step is typically limited to around 5 fs. Previously, we introduced an isokinetic extended phase-space algorithm (Minary et al. Phys. Rev. Lett. 2004, 93, 150201) and its stochastic analog (Leimkuhler et al. Mol. Phys. 2013, 111, 3579) that eliminate resonance phenomena through a set of kinetic energy constraints. In simulations of a fixed-charge flexible model of liquid water, for example, the time step that could be assigned to the slow forces approached 100 fs. In this paper, we develop a stochastic isokinetic algorithm for multiple time-step molecular dynamics calculations using a polarizable model based on fluctuating dipoles. The scheme developed here employs two sets of induced dipole moments, specifically, those associated with short-range interactions and those associated with a full set of interactions. The scheme is demonstrated on
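The multiple time-step idea can be sketched with a RESPA-style integrator: the slow force is applied as half-kicks on the outer step, while the fast force is integrated with several smaller velocity-Verlet steps in between. The forces and parameters below are invented for illustration, and the isokinetic constraints from the abstract that suppress resonance are not included.

```python
# Minimal RESPA-style multiple time-step integrator for one degree of freedom.
# The expensive-to-evaluate slow force is computed once per outer step; the
# cheap stiff force is integrated with n_inner velocity-Verlet substeps.

def respa_step(x, v, dt_outer, n_inner, f_fast, f_slow):
    dt = dt_outer / n_inner
    v += 0.5 * dt_outer * f_slow(x)        # outer half-kick (slow force)
    for _ in range(n_inner):               # inner velocity-Verlet (fast force)
        v += 0.5 * dt * f_fast(x)
        x += dt * v
        v += 0.5 * dt * f_fast(x)
    v += 0.5 * dt_outer * f_slow(x)        # outer half-kick (slow force)
    return x, v

f_fast = lambda x: -100.0 * x              # stiff spring: high-frequency motion
f_slow = lambda x: -0.1 * x                # soft force, evaluated rarely
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, dt_outer=0.05, n_inner=10,
                      f_fast=f_fast, f_slow=f_slow)
energy = 0.5 * v * v + 0.5 * 100.1 * x * x   # total energy (initially 50.05)
```

The slow force is evaluated ten times less often than the fast one, yet the total energy stays bounded near its initial value; pushing the outer step toward half the fast period is where the resonance problems described in the abstract appear.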
Accuracy-based time step criteria for solving parabolic equations
Mohtar, R.; Segerlind, L.
1995-12-31
Parabolic equations govern many transient engineering problems. Space integration using finite element or finite difference methods changes the parabolic partial differential equation into an ordinary differential equation. Time integration schemes are needed to solve the latter equation, and a proper time step must be provided in order to perform this integration accurately. Time step estimates based on a stability criterion have been prescribed in the literature. The following paper presents time step estimates that satisfy stability as well as accuracy criteria. These estimates were correlated to the Froude and Courant numbers. The latter criteria were found to be overly conservative for some integration schemes. Suggestions as to which time integration scheme is best to use are also presented.
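The best-known stability-based time step bound for such problems is the one for the 1-D heat equation u_t = α·u_xx with explicit (forward Euler) finite differences: dt ≤ dx²/(2α). The sketch below runs exactly at that limit on an invented spike initial condition; the paper's point is that accuracy may demand an even smaller step than stability alone.

```python
# Explicit finite differences for the 1-D heat equation u_t = alpha*u_xx with
# fixed (Dirichlet) boundary values.  The scheme is stable only for
# r = alpha*dt/dx**2 <= 1/2; we run exactly at the limit r = 1/2.

def heat_explicit(u, alpha, dx, dt, n_steps):
    r = alpha * dt / dx ** 2
    for _ in range(n_steps):
        u = ([u[0]]
             + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                for i in range(1, len(u) - 1)]
             + [u[-1]])
    return u

alpha, dx = 1.0, 0.1
dt_stable = dx ** 2 / (2 * alpha)          # = 0.005, the stability limit
u0 = [0.0] * 5 + [1.0] + [0.0] * 5         # spike in the middle, ends held at 0
u = heat_explicit(u0, alpha, dx, dt_stable, n_steps=200)
```

At the limit the spike diffuses and decays smoothly; a dt even slightly larger makes the solution oscillate and blow up.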
A properly adjusted forage harvester can save time and money
Technology Transfer Automated Retrieval System (TEKTRAN)
A properly adjusted forage harvester can save fuel and increase the realizable milk per ton of your silage. This article details the adjustments necessary to minimize energy while maximizing productivity and forage quality....
Lourenco, D A L; Tsuruta, S; Fragomeni, B O; Chen, C Y; Herring, W O; Misztal, I
2016-03-01
Combining purebreed and crossbreed information is beneficial for genetic evaluation of some livestock species. Genetic evaluations can use relationships based on genomic information, relying on allele frequencies that are breed specific. Single-step genomic BLUP (ssGBLUP) does not account for different allele frequencies, which could limit the genetic gain in crossbreed evaluations. In this study, we tested the performance of different breed-specific genomic relationship matrices () in ssGBLUP for crossbreed evaluations; we also tested the importance of genotyping crossbred animals. Genotypes were available for purebreeds (AA and BB) and crossbreeds (F) in simulated and real pig populations. The number of genotyped animals was, on average, 4,315 for the simulated population and 15,798 for the real population. Cross-validation was performed on 1,200 and 3,117 F animals in the simulated and real populations, respectively. Simulated scenarios were under no artificial selection, mass selection, or BLUP selection. Two genomic relationship matrices were constructed based on breed-specific allele frequencies: 1) , a genomic relationship matrix centered by breed-specific allele frequencies, and 2) , a genomic relationship matrix centered and scaled by breed-specific allele frequencies. All (the across-breed genomic relationship matrix), , and were also tuned to account for selective genotyping. Using breed-specific allele frequencies reduced the number of negative relationships between 2 purebreeds, pulling the average closer to 0, as in the pedigree-based relationship matrix. For simulated populations that included mass selection, genomic EBV (GEBV) in F, when using and , were, on average, 10% more accurate than ; however, after tuning to account for selective genotyping, provided the same accuracy as for breed-specific genomic relationship matrices. For the real population, accuracies for litter size in F were 0.62 for , , and , and tuning had no impact on accuracy
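The centering step at issue can be sketched with a VanRaden-style genomic relationship matrix: genotype allele counts are centered by twice the allele frequencies before the cross-products are formed, so using breed-specific frequencies changes the centering per breed. The genotypes and frequencies below are invented, and the full ssGBLUP machinery is not reproduced.

```python
# VanRaden-style genomic relationship matrix G = Z Z' / (2 * sum p(1-p)),
# where Z is the 0/1/2 genotype matrix centered by 2p per marker.  Passing
# breed-specific allele frequencies (as in the abstract) changes the centering.

def grm(genotypes, freqs):
    scale = 2.0 * sum(p * (1 - p) for p in freqs)
    z = [[g - 2 * p for g, p in zip(row, freqs)] for row in genotypes]
    n = len(z)
    return [[sum(z[i][k] * z[j][k] for k in range(len(freqs))) / scale
             for j in range(n)] for i in range(n)]

geno = [[0, 1, 2], [2, 1, 0], [1, 1, 1]]   # three animals, three markers (made up)
freqs = [0.5, 0.5, 0.5]                     # hypothetical allele frequencies
G = grm(geno, freqs)
```

With frequencies that match the population, the centered rows average to zero, which is why frequencies borrowed from the wrong breed pull relationships away from their pedigree expectations.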
A step in time: Changes in standard-frequency and time-signal broadcasts, 1 January 1972
NASA Technical Reports Server (NTRS)
Chi, A. R.; Fosque, H. S.
1973-01-01
An improved coordinated universal time (UTC) system has been adopted by the International Radio Consultative Committee. It was implemented internationally by the standard-frequency and time-broadcast stations on 1 Jan. 1972. The new UTC system eliminates the frequency offset of 300 parts in 10 to the 10th power between the old UTC and atomic time, thus making the broadcast time interval (the UTC second) constant and defined by the resonant frequency of cesium atoms. The new time scale is kept in synchronism with the rotation of the Earth within plus or minus 0.7 s by step-time adjustments of exactly 1 s, when needed. A time code has been added to the disseminated time signals to permit universal time to be obtained from the broadcasts to the nearest 0.1 s for users requiring such precision. The texts of the International Radio Consultative Committee recommendation and report to implement the new UTC system are given. The coding formats used by various standard time broadcast services to transmit the difference between the universal time (UT1) and the UTC are also given. For users' convenience, worldwide primary VLF and HF transmissions stations, frequencies, and schedules of time emissions are also included. Actual time-step adjustments made by various stations on 1 Jan. 1972, are provided for future reference.
NASA Astrophysics Data System (ADS)
Aronoff, H. I.; Leslie, J. J.; Mittleman, A. N.; Holt, S.
1983-11-01
This manual describes a Shared Time Engineering Program (STEP) conducted by the New England Apparel Manufacturers Association (NEAMA), headquartered in Fall River, Massachusetts, and funded by the Office of Trade Adjustment Assistance of the U.S. Department of Commerce. It is addressed to industry association executives, industrial engineers, and others interested in examining an innovative model of industrial engineering assistance to small plants that might be adapted to their particular needs.
Stance time and step width variability have unique contributing impairments in older persons.
Brach, Jennifer S; Studenski, Stephanie; Perera, Subashan; VanSwearingen, Jessie M; Newman, Anne B
2008-04-01
Gait variability may have multiple causes. We hypothesized that central nervous system (CNS) impairments would affect motor control and be manifested as increased stance time and step length variability, while sensory impairments would affect balance and be manifested as increased step width variability. Older adults (mean+/-standard deviation (S.D.) age=79.4+/-4.1, n=558) from the Pittsburgh site of the Cardiovascular Health Study participated. The S.D. across steps was the indicator of gait variability, determined for three gait measures, step length, stance time and step width, using a computerized walkway. Impairment measures included CNS function (modified mini-mental state examination, Trails A and B, Digit Symbol Substitution, finger tapping), sensory function (lower extremity (LE) vibration, vision), strength (grip strength, repeated chair stands), mood, and LE pain. Linear regression models were fit for the three gait variability characteristics using impairment measures as independent variables, adjusted for age, race, gender, and height. Analyses were repeated stratified by gait speed. All measures of CNS impairment were directly related to stance time variability (p<0.01), with increased CNS impairment associated with increased stance time variability. CNS impairments were not related to step length or width variability. Both sensory impairments were inversely related to step width (p<0.01) but not step length or stance time variability. CNS impairments affected stance time variability especially in slow walkers while sensory impairments affected step width variability in fast walkers. Specific patterns of gait variability may imply different underlying causes. Types of gait variability should be specified. Interventions may be targeted at specific types of gait variability. PMID:17632004
Dependence of aqua-planet simulations on time step
NASA Astrophysics Data System (ADS)
Williamson, David L.; Olson, Jerry G.
2003-04-01
Aqua-planet simulations with Eulerian and semi-Lagrangian dynamical cores coupled to the NCAR CCM3 parametrization suite produce very different zonal average precipitation patterns. The model with the Eulerian core forms a narrow single precipitation peak centred on the sea surface temperature (SST) maximum. The one with the semi-Lagrangian core forms a broad structure often with a double peak straddling the SST maximum with a precipitation minimum centred on the SST maximum. The different structure is shown to be caused primarily by the different time step adopted by each core and its effect on the parametrizations, rather than by different truncation errors introduced by the dynamical cores themselves. With a longer discrete time step, the surface exchange parametrization deposits more moisture in the atmosphere in a single time step, resulting in convection being initiated farther from the equator, closer to the maximum source. Different diffusive smoothing associated with different spectral resolutions is a secondary effect influencing the strength of the double structure. When the semi-Lagrangian core is configured to match the Eulerian with the same time step, a three-time-level formulation and the same spectral truncation, it produces precipitation fields similar to those from the Eulerian. It is argued that the broad and double structure forms in this model with the longer time step because more water is put into the atmosphere over a longer discrete time step, the evaporation rate being the same. The additional water vapour in the region of equatorial moisture convergence results in more convective available potential energy farther from the equator, which allows convection to initiate farther from the equator. The resulting heating drives upward vertical motion and low-level convergence away from the equator, resulting in much weaker upward motion at the equator. The feedback between the convective heating and dynamics reduces the instability at the equator and
Short-term Time Step Convergence in a Climate Model
Wan, Hui; Rasch, Philip J.; Taylor, Mark; Jablonowski, Christiane
2015-02-11
A testing procedure is designed to assess the convergence property of a global climate model with respect to time step size, based on evaluation of the root-mean-square temperature difference at the end of very short (1 h) simulations with time step sizes ranging from 1 s to 1800 s. A set of validation tests conducted without sub-grid scale parameterizations confirmed that the method was able to correctly assess the convergence rate of the dynamical core under various configurations. The testing procedure was then applied to the full model, and revealed a slow convergence of order 0.4 in contrast to the expected first-order convergence. Sensitivity experiments showed without ambiguity that the time stepping errors in the model were dominated by those from the stratiform cloud parameterizations, in particular the cloud microphysics. This provides clear guidance for future work on the design of more accurate numerical methods for time stepping and process coupling in the model.
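The convergence order reported above is, in practice, the slope of log(error) against log(time step). A minimal sketch of that estimate, using synthetic error values (not the model's actual data) constructed to follow order 0.4:

```python
import numpy as np

# Hypothetical RMS temperature differences at the end of 1 h runs, for an
# illustrative method with true convergence order 0.4 (synthetic values only)
dts = np.array([1.0, 10.0, 100.0, 1800.0])  # time step sizes (s)
rmse = 0.01 * dts**0.4                      # synthetic error data

# Observed order = slope of the log-log fit of error versus time step
order = np.polyfit(np.log(dts), np.log(rmse), 1)[0]
print(round(order, 2))  # → 0.4
```

With real model output, `rmse` would be the measured root-mean-square differences against a reference run with a very small time step.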
Adaptive time steps in trajectory surface hopping simulations.
Spörkel, Lasse; Thiel, Walter
2016-05-21
Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling. PMID:27208937
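The protocol itself is generic: integrate with the default step, and shrink the step whenever a diagnostic flags a critical region. A toy-level sketch of that control flow, in which the diagnostic, the step function, and the reduction factor are all illustrative stand-ins rather than the actual TSH/MRCI implementation:

```python
def propagate(state, t_end, dt_default, is_critical, step, dt_min=1e-3):
    """Advance `state` to t_end, shrinking the step in critical regions.

    is_critical(state) -> bool : hypothetical diagnostic (e.g. strong
                                 active/inactive orbital mixing).
    step(state, dt) -> state   : one integration step.
    """
    t = 0.0
    while t < t_end:
        # Reduce the step only where the diagnostic fires (factor 10 is arbitrary)
        dt = max(dt_min, dt_default / 10.0) if is_critical(state) else dt_default
        dt = min(dt, t_end - t)  # do not overshoot the end time
        state = step(state, dt)
        t += dt
    return state

# Toy usage: exponential decay, with a pretend "critical region" near x = 0.5
final = propagate(
    1.0, 5.0, 0.1,
    is_critical=lambda x: abs(x - 0.5) < 0.05,
    step=lambda x, dt: x - dt * x,  # forward-Euler step of dx/dt = -x
)
```

The appeal of the approach, as the abstract notes, is that the extra cost is confined to the critical regions while the rest of the trajectory runs at the default step.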
Accuracy of Pedometer Steps and Time for Youth with Disabilities
ERIC Educational Resources Information Center
Beets, Michael W.; Combs, Cindy; Pitetti, Kenneth H.; Morgan, Melinda; Bryan, Rebecca R.; Foley, John T.
2007-01-01
The purpose of the study was to examine the accuracy of pedometer steps and activity time (Walk4Life, WL) for youth with developmental disabilities. Eighteen youth (11 girls, 7 boys) 4-14 years completed six 80-meter self-paced walking trials while wearing a pedometer at five waist locations (front right, front left, back right, back left, middle…
Gélat, Thierry; Le Pellec, Armande
2007-12-11
The study examined why anticipatory postural adjustments (APA) associated with gait initiation in a stepping up to a new level situation (SU) are reduced as compared to a level walking situation (LW), as previously reported. Five young adults performed gait initiation in both situations at normal and fast speed. Data from a force platform provided gait parameters related to the motion of the body's centre of mass (CM) on the anteroposterior (progression) and vertical axes. The electromyographic activity of the soleus of the stance limb (SOst) and the vastus lateralis of the swing limb (VLsw) was analyzed prior to and after the onset of the double stance phase. The results showed that APA and progression CM velocity at the time of foot contact were smaller in SU, whereas the peak of this velocity was similar in both situations. Thus, the change in progression velocity during the double stance phase had to be greater in SU than in LW. In both velocity conditions, the activity of SOst stopped after the time of foot contact in both situations, but clearly later in SU. So, this ankle plantar flexor muscle would be involved not only in the change of body lift but also in forward CM progression. The latter role of this muscle brought supporting evidence for the reduction of APA in SU, enabling the peak of progression velocity to be similar in both situations. Only in SU, the timing of activation of VLsw and deactivation of SOst strongly co-varied, showing the implementation of a motor synergy to fulfil the new requirements of the task, i.e. body lift. PMID:17964073
Speech perception at positive signal-to-noise ratios using adaptive adjustment of time compression.
Schlueter, Anne; Brand, Thomas; Lemke, Ulrike; Nitzschner, Stefan; Kollmeier, Birger; Holube, Inga
2015-11-01
Positive signal-to-noise ratios (SNRs) characterize listening situations most relevant for hearing-impaired listeners in daily life and should therefore be considered when evaluating hearing aid algorithms. For this, a speech-in-noise test was developed and evaluated, in which the background noise is presented at fixed positive SNRs and the speech rate (i.e., the time compression of the speech material) is adaptively adjusted. In total, 29 younger and 12 older normal-hearing, as well as 24 older hearing-impaired listeners took part in repeated measurements. Younger normal-hearing and older hearing-impaired listeners conducted one of two adaptive methods which differed in adaptive procedure and step size. Analysis of the measurements with regard to list length and estimation strategy for thresholds resulted in a practical method measuring the time compression for 50% recognition. This method uses time-compression adjustment and step sizes according to Versfeld and Dreschler [(2002). J. Acoust. Soc. Am. 111, 401-408], with sentence scoring, lists of 30 sentences, and a maximum likelihood method for threshold estimation. Evaluation of the procedure showed that older participants obtained higher test-retest reliability compared to younger participants. Depending on the group of listeners, one or two lists are required for training prior to data collection. PMID:26627804
Do we need time adjusted mean platelet volume measurements?
Lancé, Marcus D; van Oerle, Rene; Henskens, Yvonne M C; Marcus, Marco A E
2010-09-01
Mean platelet volume (MPV) is associated with various diseases. Several authors have reported anticoagulant and time dependency. Therefore, standardized laboratory methods are essential. The aim of this study was to standardize the MPV measurement. Blood was collected in potassium ethylenediaminetetraacetic acid (EDTA) and sodium-citrate tubes. First, MPV and platelet count were determined every half hour for 4 hours in 20 healthy volunteers. The same parameters were acquired from a second group of 100 healthy donors. We measured at the point of highest stability determined in the first step and aimed to determine a reference range. Citrate samples revealed significantly smaller MPV (7.0 fL ± 0.69 standard deviation [SD]) than EDTA (8.0 fL ± 0.8 SD). Platelets swell until 120 minutes in EDTA and until 60 minutes in citrate. Mean platelet count changed significantly in citrate. In the second group, no inverse correlation between MPV and platelet count was seen. A reference range was calculated (EDTA, 7.2-10.8 fL; citrate, 6.1-9.5 fL). Platelets stored in citrate are significantly smaller compared to those stored in EDTA. Timing is important when measuring platelet volume. Optimal measuring time should be 120 minutes after venipuncture. For this time point we report a reference range. Platelet count is most stable in EDTA. There was no inverse relation between MPV and platelet count. PMID:20858586
Multiple time step integrators in ab initio molecular dynamics
Luehr, Nathan; Martínez, Todd J.; Markland, Thomas E.
2014-02-28
Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
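The idea behind such multiple-time-step (r-RESPA-type) integrators can be sketched on a toy one-particle system: slow forces are applied as half-kicks at the outer step, while fast forces are sub-cycled with velocity Verlet at a smaller inner step. This is a generic sketch of the splitting, not the fragment-decomposition or range-separation schemes of the paper; the stiff/weak spring forces are illustrative.

```python
def respa_step(x, v, f_fast, f_slow, dt_outer, n_inner, m=1.0):
    """One reversible RESPA step: slow forces at dt_outer,
    fast forces sub-cycled at dt_outer / n_inner."""
    v = v + 0.5 * dt_outer * f_slow(x) / m       # slow half-kick
    dt = dt_outer / n_inner
    for _ in range(n_inner):                     # fast sub-steps (velocity Verlet)
        v = v + 0.5 * dt * f_fast(x) / m
        x = x + dt * v
        v = v + 0.5 * dt * f_fast(x) / m
    v = v + 0.5 * dt_outer * f_slow(x) / m       # slow half-kick
    return x, v

# Toy system: stiff spring (fast force) plus weak spring (slow force)
f_fast = lambda x: -100.0 * x
f_slow = lambda x: -1.0 * x
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, f_fast, f_slow, dt_outer=0.05, n_inner=10)
```

The outer step here is 10 times the inner step, so the (expensive, in the ab initio setting) slow force is evaluated 10 times less often, while the total energy of the toy system stays approximately conserved.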
NASA Astrophysics Data System (ADS)
Yu, Chunxue; Yin, Xin'an; Yang, Zhifeng; Cai, Yanpeng; Sun, Tao
2016-09-01
The time step used in the operation of eco-friendly reservoirs has decreased from monthly to daily, and even sub-daily. The shorter time step is considered a better choice for satisfying downstream environmental requirements because it more closely resembles the natural flow regime. However, little consideration has been given to the influence of different time steps on the ability to simultaneously meet human and environmental flow requirements. To analyze this influence, we used an optimization model to explore the relationships among the time step, environmental flow (e-flow) requirements, and human water needs for a wide range of time steps and e-flow scenarios. We used the degree of hydrologic alteration to evaluate the regime's ability to satisfy the e-flow requirements of riverine ecosystems, and used water supply reliability to evaluate the ability to satisfy human needs. We then applied the model to a case study of China's Tanghe Reservoir. We found four efficient time steps (2, 3, 4, and 5 days), with a remarkably high water supply reliability (around 80%) and a low alteration of the flow regime (<35%). Our analysis of the hydrologic alteration revealed the smallest alteration at time steps ranging from 1 to 7 days. However, longer time steps led to higher water supply reliability to meet human needs under several e-flow scenarios. Our results show that adjusting the time step is a simple way to improve reservoir operation performance to balance human and e-flow needs.
A method for improving time-stepping numerics
NASA Astrophysics Data System (ADS)
Williams, P. D.
2012-04-01
In contemporary numerical simulations of the atmosphere, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. The most common time-stepping method is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following atmospheric models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability in these models, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter. The modification has become known as the RAW filter (Williams 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various atmospheric models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other models.
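The RA and RAW filters differ only in how the filter displacement is distributed between the current and the new time level. A minimal sketch on a single oscillation equation, assuming the standard formulation in which alpha = 1 recovers the classical RA filter and alpha near 0.53 is the value suggested by Williams; the parameter values are illustrative:

```python
import numpy as np

def leapfrog_raw(omega=1.0, dt=0.2, n=500, nu=0.2, alpha=0.53):
    """Leapfrog integration of dx/dt = i*omega*x with the RAW filter.

    alpha = 1.0 gives the classical Robert-Asselin (RA) filter;
    alpha < 1.0 also nudges the newest level (the RAW modification).
    """
    x_old, x_now = 1.0 + 0j, np.exp(1j * omega * dt)  # exact start values
    for _ in range(n):
        x_new = x_old + 2.0 * dt * 1j * omega * x_now  # leapfrog step
        d = 0.5 * nu * (x_old - 2.0 * x_now + x_new)   # filter displacement
        x_now_filtered = x_now + alpha * d             # filter current level
        x_new = x_new + (alpha - 1.0) * d              # RAW: adjust new level too
        x_old, x_now = x_now_filtered, x_new
    return x_now

amp_ra = abs(leapfrog_raw(alpha=1.0))    # RA damps the oscillation amplitude
amp_raw = abs(leapfrog_raw(alpha=0.53))  # RAW should keep it closer to 1
```

In a model, the change is correspondingly small: the existing RA filter line is replaced by the displacement split shown above, which is consistent with the abstract's claim that the RAW filter typically needs only a single line of code.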
Temporal Adjustment in Academic Labor Markets: Time to Ph.D. AIR Forum Paper 1978.
ERIC Educational Resources Information Center
Kuh, Charlotte V.
A research project is described that concerns "temporal adjustment" as one form of a non-wage adjustment in the academic labor market. Receipt of the doctorate, the number and length of post-doctoral fellowships, and the achievement of tenure are temporal factors in academic careers. The change in timing of these factors is a form of adjustment to…
Accurate Monotonicity - Preserving Schemes With Runge-Kutta Time Stepping
NASA Technical Reports Server (NTRS)
Suresh, A.; Huynh, H. T.
1997-01-01
A new class of high-order monotonicity-preserving schemes for the numerical solution of conservation laws is presented. The interface value in these schemes is obtained by limiting a higher-order polynomial reconstruction. The limiting is designed to preserve accuracy near extrema and to work well with Runge-Kutta time stepping. Computational efficiency is enhanced by a simple test that determines whether the limiting procedure is needed. Numerical experiments for linear advection in one dimension as well as for the Euler equations confirm the schemes' high accuracy, good shock resolution, and computational efficiency.
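The interplay of limited reconstruction and Runge-Kutta time stepping can be illustrated with a much simpler relative of these schemes: a second-order MUSCL reconstruction with a minmod limiter, advanced by the strong-stability-preserving RK3 scheme. This is a hedged sketch, not the paper's higher-order monotonicity-preserving reconstruction; grid size, CFL number, and initial data are arbitrary choices.

```python
import numpy as np

def minmod(a, b):
    # Limited slope: zero at extrema, smallest-magnitude one-sided slope otherwise
    return np.where(a * b > 0, np.sign(a) * np.minimum(abs(a), abs(b)), 0.0)

def rhs(u, dx, c=1.0):
    # MUSCL reconstruction with minmod limiting; upwind flux for c > 0
    du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    flux = c * (u + 0.5 * du)            # flux at interface i+1/2 (left state)
    return -(flux - np.roll(flux, 1)) / dx

def ssprk3_step(u, dt, dx):
    # Strong-stability-preserving RK3 (Shu-Osher form) keeps the limiter effective
    u1 = u + dt * rhs(u, dx)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1, dx))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2, dx))

n = 100
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = 1.0 / n
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)  # square wave, periodic domain
mass0 = u.sum()                                 # for checking conservation
dt = 0.4 * dx                                   # CFL 0.4 for unit advection speed
for _ in range(int(round(1.0 / dt))):           # advect for one full period
    u = ssprk3_step(u, dt, dx)
```

After one period the square wave returns (smeared by the limiter's dissipation) without creating new extrema, which is the monotonicity-preservation property the abstract refers to.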
Adjustment to time of use pricing: Persistence of habits or change
NASA Astrophysics Data System (ADS)
Rebello, Derrick Michael
1999-11-01
Generally the dynamics related to residential electricity consumption under TOU rates have not been analyzed completely. A habit persistence model is proposed to account for the dynamics that may be present as a result of recurring habits or lack of information about the effects of shifting load across TOU periods. In addition, the presence of attrition bias necessitated a two-step estimation approach: the decision to remain in the program was modeled in the first step, while demand for electricity was estimated in the second step. Results show that own-price effects and habit persistence have the most significant effects in the model. The habit effects, while small in absolute terms, are significant. Elasticity estimates show that electricity consumption is inelastic during all periods of the day. Estimates of the long-run elasticities were nearly identical to short-run estimates, showing little or no adjustment across time. Cross-price elasticities indicate a willingness to substitute consumption across periods, implying that TOU goods are weak substitutes. The most significant substitution occurs during the period of 5:00 PM to 9:00 PM, when most individuals are likely to be home and active.
31 CFR 501.737 - Adjustments of time, postponements and adjournments.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 31 Money and Finance:Treasury 3 2012-07-01 2012-07-01 false Adjustments of time, postponements and adjournments. 501.737 Section 501.737 Money and Finance: Treasury Regulations Relating to Money and Finance... REGULATIONS Trading With the Enemy Act (TWEA) Penalties § 501.737 Adjustments of time, postponements...
31 CFR 501.737 - Adjustments of time, postponements and adjournments.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 31 Money and Finance:Treasury 3 2011-07-01 2011-07-01 false Adjustments of time, postponements and adjournments. 501.737 Section 501.737 Money and Finance: Treasury Regulations Relating to Money and Finance... REGULATIONS Trading With the Enemy Act (TWEA) Penalties § 501.737 Adjustments of time, postponements...
31 CFR 501.737 - Adjustments of time, postponements and adjournments.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 31 Money and Finance:Treasury 3 2014-07-01 2014-07-01 false Adjustments of time, postponements and adjournments. 501.737 Section 501.737 Money and Finance: Treasury Regulations Relating to Money and Finance... REGULATIONS Trading With the Enemy Act (TWEA) Penalties § 501.737 Adjustments of time, postponements...
31 CFR 501.737 - Adjustments of time, postponements and adjournments.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 31 Money and Finance:Treasury 3 2013-07-01 2013-07-01 false Adjustments of time, postponements and adjournments. 501.737 Section 501.737 Money and Finance: Treasury Regulations Relating to Money and Finance... REGULATIONS Trading With the Enemy Act (TWEA) Penalties § 501.737 Adjustments of time, postponements...
Multiple-time-stepping generalized hybrid Monte Carlo methods
NASA Astrophysics Data System (ADS)
Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water and a protein system. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by the generalized hybrid Monte Carlo, improves the stability of MTS and allows for larger step sizes in the simulation of complex systems.
Bourret, S.C.; Swansen, J.E.
1982-07-02
A stepping motor is microprocessor controlled by digital circuitry which monitors the output of a shaft encoder adjustably secured to the stepping motor and generates a subsequent stepping pulse only after the preceding step has occurred and a fixed delay has expired. The fixed delay is variable on a real-time basis to provide for smooth and controlled deceleration.
Bourret, Steven C.; Swansen, James E.
1984-01-01
A stepping motor is microprocessor controlled by digital circuitry which monitors the output of a shaft encoder adjustably secured to the stepping motor and generates a subsequent stepping pulse only after the preceding step has occurred and a fixed delay has expired. The fixed delay is variable on a real-time basis to provide for smooth and controlled deceleration.
Daily Time Step Refinement of Optimized Flood Control Rule Curves for a Global Warming Scenario
NASA Astrophysics Data System (ADS)
Lee, S.; Fitzgerald, C.; Hamlet, A. F.; Burges, S. J.
2009-12-01
Pacific Northwest temperatures have warmed by 0.8 °C since 1920 and are predicted to further increase in the 21st century. Simulated streamflow timing shifts associated with climate change have been found in past research to degrade water resources system performance in the Columbia River Basin when using existing system operating policies. To adapt to these hydrologic changes, optimized flood control operating rule curves were developed in a previous study using a hybrid optimization-simulation approach which rebalanced flood control and reservoir refill at a monthly time step. For the climate change scenario, use of the optimized flood control curves restored reservoir refill capability without increasing flood risk. Here we extend the earlier studies using a detailed daily time step simulation model applied over a somewhat smaller portion of the domain (encompassing Libby, Duncan, and Corra Linn dams, and Kootenai Lake) to evaluate and refine the optimized flood control curves derived from monthly time step analysis. Moving from a monthly to daily analysis, we found that the timing of flood control evacuation needed adjustment to avoid unintended outcomes affecting Kootenai Lake. We refined the flood rule curves derived from monthly analysis by creating a more gradual evacuation schedule, but kept the timing and magnitude of maximum evacuation the same as in the monthly analysis. After these refinements, the performance at monthly time scales reported in our previous study proved robust at daily time scales. Due to a decrease in July storage deficits, additional benefits such as more revenue from hydropower generation and more July and August outflow for fish augmentation were observed when the optimized flood control curves were used for the climate change scenario.
Effects of Timing of Adversity on Adolescent and Young Adult Adjustment
Kiff, Cara J.; Cortes, Rebecca; Lengua, Liliana; Kosterman, Rick; Hawkins, J. David; Mason, W. Alex
2012-01-01
Exposure to adversity during childhood and adolescence predicts adjustment across development. Further, adolescent adjustment problems persist into young adulthood. This study examined relations of contextual adversity with concurrent adolescent adjustment and prospective mental health and health outcomes in young adulthood. A longitudinal sample (N = 808) was followed from age 10 through 27. Perceptions of neighborhood in childhood predicted depression, alcohol use disorders, and HIV risk in young adulthood. Further, the timing of adversity was important in determining the type of problem experienced in adulthood. Youth adjustment predicted adult outcomes, and in some cases, mediated the relation between adversity and outcomes. These findings support the importance of adversity in predicting adjustment and elucidate factors that affect outcomes into young adulthood. PMID:22754271
A new time-stepping method for regional climate models
NASA Astrophysics Data System (ADS)
Williams, P. D.
2010-12-01
The dynamical cores of many regional climate models use the Robert-Asselin filter to suppress the spurious computational mode of the leapfrog scheme. Unfortunately, whilst successfully eliminating the unwanted mode, the Robert-Asselin filter also weakly suppresses the physical solution and degrades the numerical accuracy. These two concomitant problems occur because the filter does not conserve the mean state, averaged over the three time slices on which it operates. This presentation proposes a simple modification to the Robert-Asselin filter, which does conserve the three-time-level mean state. When used in conjunction with the leapfrog scheme, the modification vastly reduces the artificial damping of the physical solution. Correspondingly, the modification increases the numerical accuracy for amplitude errors by two orders, yielding third-order accuracy. The modified filter may easily be incorporated into existing regional climate models, via the addition of only a few lines of code that are computationally very inexpensive. Results will be shown from recent implementations of the modified filter in various models. The modification will be shown to reduce model biases and to significantly improve the predictive skill. [Figure caption: Magnitude of the complex amplification factor as a function of the non-dimensional time step, for leapfrog integrations. This quantity would be identical to 1 for a perfect numerical scheme. The filter proposed here (case α = 0.53) has much smaller numerical errors than the original Robert-Asselin filter (case α = 1); moreover, the proposed filter is trivial to implement and is no more computationally expensive. Taken from Williams (2009; Monthly Weather Review).]
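The modification described is the Robert-Asselin-Williams (RAW) filter. A minimal sketch (parameter values illustrative, not from the presentation) integrates the oscillation equation dx/dt = iωx, whose exact solution keeps |x| = 1, with leapfrog plus the RAW filter; setting α = 1 recovers the classic Robert-Asselin filter:

```python
import numpy as np

def leapfrog_raw(omega=1.0, dt=0.2, nsteps=500, nu=0.2, alpha=0.53):
    """Leapfrog integration of dx/dt = i*omega*x with the RAW filter.
    alpha = 1 gives the classic Robert-Asselin filter; alpha = 0.5 exactly
    conserves the three-time-level mean. Returns the final amplitude |x|."""
    x_old = 1.0 + 0.0j                 # x at t = 0
    x_mid = np.exp(1j * omega * dt)    # exact value at t = dt to start leapfrog
    for _ in range(nsteps):
        x_new = x_old + 2j * omega * dt * x_mid       # leapfrog step
        d = 0.5 * nu * (x_old - 2.0 * x_mid + x_new)  # filter displacement
        x_mid = x_mid + alpha * d          # damp the computational mode...
        x_new = x_new + (alpha - 1.0) * d  # ...while (nearly) conserving the
        x_old, x_mid = x_mid, x_new        #    three-level mean state
    return abs(x_mid)
```

With these settings the α = 1 run visibly damps the physical amplitude over 500 steps, while α = 0.53 keeps it close to 1, consistent with the accuracy gain the abstract reports.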
Watching Proteins Direct Crystal Growth One Step at a Time
2009-01-01
Researchers at Berkeley Lab's Molecular Foundry use an atomic force microscope to record this movie of a peptide being adsorbed to a crystal surface while two successive crystal steps interact, then progress beyond the peptide. The peptide temporarily slows the step before transferring up to the next atomic layer. The lattice pattern on the surface corresponds to the molecular structure of the underlying crystal.
The USMLE Step 2 CS: Time for a change.
Alvin, Matthew D
2016-08-01
The United States Medical Licensing Examination (USMLE®) Steps are a series of mandatory licensing assessments for all allopathic (MD degree) medical students in their transition from student to intern to resident physician. Steps 1, 2 Clinical Knowledge (CK), and 3 are daylong multiple-choice exams that quantify a medical student's basic science and clinical knowledge as well as their application of that knowledge using a three-digit score. In doing so, these Steps provide a standardized assessment that residency programs use to differentiate applicants and evaluate their competitiveness. Step 2 Clinical Skills (CS), the only other Step exam and the second component of Step 2, was created in 2004 to test clinical reasoning and patient-centered skills. As a Pass/Fail exam without a numerical scoring component, Step 2 CS provides minimal differentiation among applicants for residency programs. In this personal view article, it is argued that the current Step 2 CS exam should be eliminated for US medical students, and an alternative is proposed that is consistent with the mission and purpose of the exam while imposing less of a burden on medical students. PMID:27007882
Constrained Density Functional Theory by Imaginary Time-Step Method
NASA Astrophysics Data System (ADS)
Kidd, Daniel
Constrained Density Functional Theory (CDFT) has been a popular choice within the last decade for sidestepping the self-interaction problem in long-range charge-transfer calculations. Typically, an inner constraint loop is added within the self-consistent field iterations of DFT to enforce the charge-transfer state by means of a Lagrange multiplier method. In this work, an alternate implementation of CDFT is introduced: the imaginary time-step method, which lends itself more readily to real-space calculations because it can solve numerically for 3D local external potentials that enforce arbitrary given densities. This method has been shown to reproduce the proper 1/R dependence of charge-transfer systems in real-space calculations and to generate useful constraint potentials. As an example application, the method is shown to be capable of describing defects within periodic systems using finite calculations, by constraining the 3D density at the boundaries to that of the periodically calculated perfect system.
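As a toy illustration of the imaginary time-step idea itself (not the CDFT implementation described above), the sketch below relaxes a trial wavefunction to the ground state of a 1D harmonic oscillator by repeatedly applying (1 − τH) and renormalizing; grid size, step, and iteration count are illustrative choices:

```python
import numpy as np

def imaginary_time_ground_state(n=400, box=20.0, tau=1e-3, iters=6000):
    """Imaginary time-step relaxation to the ground state of the 1D
    harmonic oscillator (hbar = m = omega = 1, exact E0 = 0.5)."""
    x = np.linspace(-box / 2, box / 2, n)
    dx = x[1] - x[0]
    v = 0.5 * x ** 2                       # harmonic potential
    psi = np.exp(-((x - 1.0) ** 2))        # arbitrary trial state

    def h_psi(p):                          # H p with a 3-point Laplacian
        lap = (np.roll(p, 1) - 2 * p + np.roll(p, -1)) / dx ** 2
        return -0.5 * lap + v * p

    for _ in range(iters):
        psi = psi - tau * h_psi(psi)                 # one imaginary time step
        psi = psi / np.sqrt(np.sum(psi ** 2) * dx)   # renormalize
    return float(np.sum(psi * h_psi(psi)) * dx)      # <psi|H|psi>
```

Excited components decay faster than the ground state under (1 − τH), so after enough steps the energy expectation settles near the exact value of 0.5.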
DNA walks one step at a time in electrophoresis
NASA Astrophysics Data System (ADS)
Guan, Juan; Wang, Bo; Granick, Steve
2011-03-01
Testing the classical view that in DNA gel electrophoresis, long polymer chains navigate through their gel environment via reptation, we reach a different conclusion: this driven motion proceeds by stick-slip. Our single-molecule experiments visualize fluorescent-labeled lambda-DNA, whose intramolecular conformations are resolved with 30 ms resolution using home-written software. Combining hundreds to thousands of trajectories under amplitudes of electric field ranging from zero to large, we quantify the full statistical distribution of motion with unprecedented statistics. Pauses are seen between steps of driven motion, probably reflecting that the chain is trapped inside the gel matrix. The pausing time is exponentially distributed and decreases with increasing electric field strength, suggesting that the jerky behavior is an activated process, facilitated by electric field. We propose a stretch-assisted mechanism: that the energy barrier to move through the gel environment is first overcome by a leading segment, the ensuing intramolecular stress from stretching causing lagging segments to recoil and follow along.
Automatic multirate methods for ordinary differential equations. [Adaptive time steps
Gear, C.W.
1980-01-01
A study is made of the application of integration methods in which different step sizes are used for different members of a system of equations. Such methods can result in savings if the cost of derivative evaluation is high or if a system is sparse; however, the estimation and control of errors is very difficult and can lead to high overheads. Three approaches are discussed, and it is shown that the least intuitive is the most promising. 2 figures.
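The basic multirate idea (different step sizes for different members of the system) can be sketched with forward Euler and the simplest possible coupling, freezing the slow variable during the fast substeps. This is a simplification for illustration; the paper's methods and error control are more sophisticated:

```python
def multirate_euler(f_slow, f_fast, y_slow, y_fast, H, m, nsteps):
    """Forward-Euler multirate integration: the slow variable takes one
    step of size H while the fast variable takes m substeps of size H/m,
    with the slow value held frozen over the macro step."""
    h = H / m
    for _ in range(nsteps):
        y_slow_next = y_slow + H * f_slow(y_slow, y_fast)
        for _ in range(m):                 # fast substeps, slow input frozen
            y_fast = y_fast + h * f_fast(y_slow, y_fast)
        y_slow = y_slow_next
    return y_slow, y_fast
```

On a stiff pair like y_slow' = −0.1·y_slow, y_fast' = −50·y_fast, the fast component stays stable with h = H/10 even though a single step of size H would be unstable for it, which is the cost saving multirate methods aim for.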
McCrorie, P Rw; Duncan, E; Granat, M H; Stansfield, B W
2012-11-01
Evidence suggests that behaviours such as standing are beneficial for our health. Unfortunately, little is known of the prevalence of this state, its importance in relation to time spent stepping, or its variation across seasons. The aim of this study was to quantify, in young adolescents, the prevalence and seasonal changes in time spent upright and not stepping (UNSt(time)) as well as time spent upright and stepping (USt(time)), and their contribution to overall upright time (U(time)). Thirty-three adolescents (12.2 ± 0.3 y) wore the activPAL activity monitor during four school days on two occasions: November/December (winter) and May/June (summer). UNSt(time) contributed 60% of daily U(time) in winter (Mean = 196 min) and 53% in summer (Mean = 171 min); a significant seasonal effect, p < 0.001. USt(time) was significantly greater in summer than in winter (153 min versus 131 min, p < 0.001). The effects in UNSt(time) could be explained through significant seasonal differences during the school hours (09:00-16:00), whereas the effects in USt(time) could be explained through significant seasonal differences in the evening period (16:00-22:00). Adolescents spent a greater amount of time upright and not stepping than they did stepping, in both winter and summer. The observed seasonal effects for both UNSt(time) and USt(time) provide important information for behaviour change intervention programs. PMID:23111187
One step at a time: endoplasmic reticulum-associated degradation
Vembar, Shruthi S.; Brodsky, Jeffrey L.
2009-01-01
Protein folding in the endoplasmic reticulum (ER) is monitored by ER quality control (ERQC) mechanisms. Proteins that pass ERQC criteria traffic to their final destinations through the secretory pathway, whereas non-native and unassembled subunits of multimeric proteins are degraded by the ER-associated degradation (ERAD) pathway. During ERAD, molecular chaperones and associated factors recognize and target substrates for retrotranslocation to the cytoplasm, where they are degraded by the ubiquitin–proteasome machinery. The discovery of diseases that are associated with ERAD substrates highlights the importance of this pathway. Here, we summarize our current understanding of each step during ERAD, with emphasis on the factors that catalyse distinct activities. PMID:19002207
PIC Algorithm with Multiple Poisson Equation Solves During One Time Step
NASA Astrophysics Data System (ADS)
Ren, Junxue; Godar, Trenton; Menart, James; Mahalingam, Sudhakar; Choi, Yongjun; Loverich, John; Stoltz, Peter H.
2015-09-01
In order to reduce the overall computational time of a PIC (particle-in-cell) computer simulation, an attempt was made to utilize larger time step sizes by implementing multiple solutions of Poisson's equation within one time step. The hope was that this would make the PIC simulation stable at larger time steps than an explicit technique can use, and that using larger time steps would reduce the overall computational time, even though the computational time per time step would increase. A three-dimensional PIC code that tracks electrons and ions throughout a three-dimensional Cartesian computational domain is used to perform this study. The results of altering the number of times Poisson's equation is solved during a single time step are presented. Also, the size of the time step that can be used while still maintaining a stable solution is surveyed. The results indicate that using multiple Poisson solves during one time step provides some ability to use larger time steps in PIC simulations, but the increase in time step size is not significant and the overall simulation run time is not reduced.
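As a schematic of re-solving Poisson's equation several times within one PIC step, the 1D periodic electrostatic sketch below repeats the deposit-solve-push sequence as a simple predictor-corrector on the field, each pass re-pushing from the start-of-step state. It is a toy stand-in (nearest-grid-point deposit, FFT field solve), not the 3D Cartesian code described in the abstract:

```python
import numpy as np

def pic_step(x, v, dt, ngrid, L, q=-1.0, m=1.0, n_field_solves=2):
    """One PIC time step on a 1D periodic domain [0, L), solving Poisson's
    equation n_field_solves times within the step."""
    dx = L / ngrid
    x_new, v_new = x, v
    for _ in range(n_field_solves):
        idx = (x_new / dx).astype(int) % ngrid
        rho = np.bincount(idx, minlength=ngrid) * q / dx  # NGP charge deposit
        rho = rho - rho.mean()                   # neutralizing background
        k = 2 * np.pi * np.fft.fftfreq(ngrid, d=dx)
        rho_k = np.fft.fft(rho)
        phi_k = np.zeros_like(rho_k)
        phi_k[1:] = rho_k[1:] / k[1:] ** 2       # solve -phi'' = rho
        E = np.real(np.fft.ifft(-1j * k * phi_k))  # E = -d(phi)/dx
        Ep = E[(x_new / dx).astype(int) % ngrid]   # gather field at particles
        v_new = v + (q / m) * Ep * dt    # re-push from the start-of-step state
        x_new = (x + v_new * dt) % L
    return x_new, v_new
```

For a uniform, charge-neutral particle load the field vanishes and the particles simply drift, which gives a quick sanity check of the step.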
ERIC Educational Resources Information Center
Lam, Chun Bun; McHale, Susan M.; Crouter, Ann C.
2012-01-01
The development and adjustment correlates of parent-child social (parent, child, and others present) and dyadic time (only parent and child present) from age 8 to 18 were examined. Mothers, fathers, and firstborns and secondborns from 188 White families participated in both home and nightly phone interviews. Social time declined across…
Empirical versus time stepping with embedded error control for density-driven flow in porous media
NASA Astrophysics Data System (ADS)
Younes, Anis; Ackerer, Philippe
2010-08-01
Modeling density-driven flow in porous media may require very long computational times due to the nonlinear coupling between flow and transport equations. Time stepping schemes are often used to adapt the time step size in order to reduce the computational cost of the simulation. In this work, the empirical time stepping scheme, which adapts the time step size according to the performance of the iterative nonlinear solver, is compared to an adaptive time stepping scheme in which the time step length is controlled by the temporal truncation error. Results of the simulations of the Elder problem show that (1) the empirical time stepping scheme can lead to inaccurate results even with a small convergence criterion, (2) accurate results are obtained when the time step size selection is based on truncation error control, (3) a non-iterative scheme with proper time step management can be faster and lead to a more accurate solution than the standard iterative procedure with empirical time stepping, and (4) the temporal truncation error can have a significant effect on the results and can be considered one of the reasons for the differences observed in the Elder numerical results.
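Truncation-error-controlled time stepping can be illustrated with classic step doubling: compare one step of size h against two steps of h/2, accept the step when the error estimate is within tolerance, and rescale h from the estimate. The sketch uses explicit Euler for brevity; the schemes in the paper differ:

```python
import math

def integrate_adaptive(f, y, t, t_end, h, tol):
    """Explicit Euler with step-doubling error control: the difference
    between one step of size h and two of h/2 estimates the local
    truncation error, which drives acceptance and step-size rescaling."""
    while t < t_end:
        h = min(h, t_end - t)
        y_big = y + h * f(t, y)                             # one full step
        y_half = y + (h / 2) * f(t, y)
        y_small = y_half + (h / 2) * f(t + h / 2, y_half)   # two half steps
        err = abs(y_small - y_big)                          # error estimate
        if err <= tol:                                      # accept the step
            t += h
            y = y_small
        # rescale h (first-order method: local error ~ h^2), with safety caps
        h *= min(2.0, max(0.1, 0.9 * math.sqrt(tol / max(err, 1e-16))))
    return y
```

On y' = −y from t = 0 to 1, the controller rejects the initial oversized step, shrinks h until the per-step error meets the tolerance, and then grows h again as the solution decays.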
Competencies for Part-Time Faculty--the First Step.
ERIC Educational Resources Information Center
Haddad, Margaret; Dickens, Mary Ellen
1978-01-01
Discusses hiring, evaluation, involvement, and competencies of the increasing number of part-time teachers in colleges throughout the country, and the unclear expectations placed on them. Includes a competencies questionnaire for part-time instructors developed at Caldwell Community College and Technical Institute.
Viral DNA Packaging: One Step at a Time
NASA Astrophysics Data System (ADS)
Bustamante, Carlos; Moffitt, Jeffrey R.
During its life-cycle the bacteriophage φ29 actively packages its dsDNA genome into a proteinaceous capsid, compressing its genome to near-crystalline densities against large electrostatic, elastic, and entropic forces. This remarkable process is accomplished by a nano-scale, molecular DNA pump - a complex assembly of three protein and nucleic acid rings which utilizes the free energy released in ATP hydrolysis to perform the mechanical work necessary to overcome these large energetic barriers. We have developed a single molecule optical tweezers assay which has allowed us to probe the detailed mechanism of this packaging motor. By following the rate of packaging of a single bacteriophage as the capsid is filled with genome and as a function of optically applied load, we find that the compression of the genome results in the build-up of an internal force, on the order of ~55 pN, due to the compressed genome. The ability to work against such large forces makes the packaging motor one of the strongest known molecular motors. By titrating the concentration of ATP, ADP, and inorganic phosphate at different opposing loads, we are able to determine features of the mechanochemistry of this motor - the coupling between the mechanical and chemical cycles. We find that force is generated not upon binding of ATP, but rather upon release of hydrolysis products. Finally, by improving the resolution of the optical tweezers assay, we are able to observe the discrete increments of DNA encapsidated each cycle of the packaging motor. We find that DNA is packaged in 10-bp increments preceded by the binding of multiple ATPs. The application of large external forces slows the packaging rate of the motor, revealing that the 10-bp steps are actually composed of four 2.5-bp steps which occur in rapid succession. These data show that the individual subunits of the pentameric ring-ATPase at the core of the packaging motor are highly coordinated, with the binding of ATP and the
Time-step limits for a Monte Carlo Compton-scattering method
Densmore, Jeffery D; Warsa, James S; Lowrie, Robert B
2009-01-01
We perform a stability analysis of a Monte Carlo method for simulating the Compton scattering of photons by free electrons in high-energy-density applications and develop time-step limits that avoid unstable and oscillatory solutions. Implementing this Monte Carlo technique in multiphysics problems typically requires evaluating the material temperature at its beginning-of-time-step value, which can lead to this undesirable behavior. With a set of numerical examples, we demonstrate the efficacy of our time-step limits.
NASA Technical Reports Server (NTRS)
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
Importance of variable time-step algorithms in spatial kinetics calculations
Aviles, B.N.
1994-12-31
The use of spatial kinetics codes in conjunction with advanced thermal-hydraulics codes is becoming more widespread as better methods and faster computers appear. The integrated code packages are being used for routine nuclear power plant design and analysis, including simulations with instrumentation and control systems initiating system perturbations such as rod motion and scrams. As a result, it is important to include a robust variable time-step algorithm that can accurately and efficiently follow widely varying plant neutronic behavior. This paper describes the variable time-step algorithm in SPANDEX and compares the automatic time-step scheme with a more traditional fixed time-step scheme.
IMPROVEMENTS TO THE TIME STEPPING ALGORITHM OF RELAP5-3D
Cumberland, R.; Mesina, G.
2009-01-01
The RELAP5-3D time step method is used to perform thermal-hydraulic and neutronic simulations of nuclear reactors and other devices. It discretizes time and space by numerically solving several differential equations. Previously, time step size was controlled by halving or doubling the size of a previous time step. This process caused the code to run slower than it potentially could. In this research project, the RELAP5-3D time step method was modified to allow a new method of changing time steps, to improve execution speed and to control error. The new RELAP5-3D time step method being studied involves making the time step proportional to the material courant limit (MCL), while ensuring that the time step does not increase by more than a factor of two between advancements. As before, if a step fails or mass error is excessive, the time step is cut in half. To examine performance of the new method, a measure of run time and a measure of error were plotted against a changing MCL proportionality constant (m) in seven test cases. The removal of the upper time step limit produced a small increase in error, but a large decrease in execution time. The best value of m was found to be 0.9. The new algorithm is capable of producing a significant increase in execution speed, with a relatively small increase in mass error. The improvements made are now under consideration for inclusion as a special option in the RELAP5-3D production code.
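The step-selection rule described above (proportional to the MCL, at most doubling between advancements, halved on failure) reduces to a few lines. This is a paraphrase of the rule as stated in the abstract, not RELAP5-3D source code:

```python
def next_time_step(dt_prev, mcl, m=0.9, step_failed=False):
    """Next time step: m * MCL, capped at twice the previous step;
    halved when the previous advancement failed or mass error was
    excessive (m = 0.9 was the best value found in the study)."""
    if step_failed:
        return dt_prev / 2.0
    return min(m * mcl, 2.0 * dt_prev)
```

For example, with dt_prev = 0.1 and an MCL of 1.0, the doubling cap (0.2) binds before the MCL target (0.9) does.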
5 CFR 551.424 - Time spent adjusting grievances or performing representational functions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Time spent adjusting grievances or performing representational functions. 551.424 Section 551.424 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Hours of...
ERIC Educational Resources Information Center
Liu, Junsheng; Chen, Xinyin; Li, Dan; French, Doran
2012-01-01
The market-oriented economic reform in China over the past two decades has resulted in considerable changes in social attitudes regarding youth's behaviors. This study examined the relations of shyness and aggression to adjustment in Chinese adolescents at different historical times. Participants came from two cohorts (1994 and 2008) of…
31 CFR 501.737 - Adjustments of time, postponements and adjournments.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Adjustments of time, postponements and adjournments. 501.737 Section 501.737 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) OFFICE OF FOREIGN ASSETS CONTROL, DEPARTMENT OF THE TREASURY...
Reciprocal Relations between Children's Sleep and Their Adjustment over Time
ERIC Educational Resources Information Center
Kelly, Ryan J.; El-Sheikh, Mona
2014-01-01
Child sleep and adjustment research with community samples is on the rise with a recognized need of explicating this association. We examined reciprocal relations between children's sleep and their internalizing and externalizing symptoms using 3 waves of data spanning 5 years. Participants included 176 children at Time 1 (M = 8.68 years; 69%…
Adjustment of sleep and the circadian temperature rhythm after flights across nine time zones
NASA Technical Reports Server (NTRS)
Gander, Philippa H.; Myhre, Grete; Graeber, R. Curtis; Lauber, John K.; Andersen, Harald T.
1989-01-01
The adjustment of sleep-wake patterns and the circadian temperature rhythm was monitored in nine Royal Norwegian Airforce volunteers operating P-3 aircraft during a westward training deployment across nine time zones. Subjects recorded all sleep and nap times, rated nightly sleep quality, and completed personality inventories. Rectal temperature, heart rate, and wrist activity were continuously monitored. Adjustment was slower after the return eastward flight than after the outbound westward flight. The eastward flight produced slower readjustment of sleep timing to local time and greater interindividual variability in the patterns of adjustment of sleep and temperature. One subject apparently exhibited resynchronization by partition, with the temperature rhythm undergoing the reciprocal 15-h delay. In contrast, average heart rates during sleep were significantly elevated only after westward flight. Interindividual differences in adjustment of the temperature rhythm were correlated with some of the personality measures. Larger phase delays in the overall temperature waveform (as measured on the 5th day after westward flight) were exhibited by extraverts, and less consistently by evening types.
Two-stepping through time: mammals and viruses.
Meyerson, Nicholas R; Sawyer, Sara L
2011-06-01
Recent studies have identified ancient virus genomes preserved as fossils within diverse animal genomes. These fossils have led to the revelation that a broad range of mammalian virus families are older and more ubiquitous than previously appreciated. Long-term interactions between viruses and their hosts often develop into genetic arms races where both parties continually jockey for evolutionary dominance. It is difficult to imagine how mammalian hosts have kept pace in the evolutionary race against rapidly evolving viruses over large expanses of time, given their much slower evolutionary rates. However, recent data has begun to reveal the evolutionary strategy of slowly-evolving hosts. We review these data and suggest a modified arms race model where the evolutionary possibilities of viruses are relatively constrained. Such a model could allow more accurate forecasting of virus evolution. PMID:21531564
Adjustment of wind-drift effect for real-time systematic error correction in radar rainfall data
NASA Astrophysics Data System (ADS)
Dai, Qiang; Han, Dawei; Zhuo, Lu; Huang, Jing; Islam, Tanvir; Zhang, Shuliang
An effective bias correction procedure using gauge measurement is a significant step for radar data processing to reduce the systematic error in hydrological applications. In these bias correction methods, the spatial matching of precipitation patterns between radar and gauge networks is an important premise. However, the wind-drift effect on radar measurement induces an inconsistent spatial relationship between radar and gauge measurements, as the raindrops observed by radar do not fall vertically to the ground. Consequently, a rain gauge does not correspond to the radar pixel based on the projected location of the radar beam. In this study, we introduce an adjustment method to incorporate the wind-drift effect into a bias correction scheme. We first simulate the trajectory of raindrops in the air using downscaled three-dimensional wind data from the weather research and forecasting model (WRF) and calculate the final location of raindrops on the ground. The displacement of rainfall is then estimated and a radar-gauge spatial relationship is reconstructed. Based on this, the local real-time biases of the bin-average radar data were estimated for 12 selected events. Then, the reference mean local gauge rainfall, mean local bias, and adjusted radar rainfall calculated with and without consideration of the wind-drift effect are compared for different events and locations. There are considerable differences among the three estimators, indicating that wind drift has a considerable impact on the real-time radar bias correction. Based on these facts, we suggest that bias correction schemes based on the spatial correlation between radar and gauge measurements should consider the adjustment of the wind-drift effect, and the proposed adjustment method is a promising solution to achieve this.
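A much-simplified version of the drift estimate can be sketched for a raindrop falling at a constant terminal speed through horizontally uniform wind layers. The layer data and fall speed below are hypothetical; the study uses downscaled WRF 3D winds and full trajectory simulation:

```python
def drift_displacement(layers, fall_speed):
    """Estimate the ground displacement of a raindrop falling at a constant
    terminal fall_speed (m/s) through a stack of wind layers, each given as
    (thickness_m, u_wind, v_wind). Returns (dx, dy) in metres."""
    dx = dy = 0.0
    for thickness, u, v in layers:
        t = thickness / fall_speed   # residence time in this layer
        dx += u * t                  # advection by the horizontal wind
        dy += v * t
    return dx, dy
```

The resulting (dx, dy) offsets the radar-observed raindrop position to its ground landing point, which is how the gauge-to-radar-pixel pairing gets reconstructed before bias estimation.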
2013-01-01
Background: Abattoir condemnation data show promise as a rich source of data for syndromic surveillance of both animal and zoonotic diseases. However, inherent characteristics of abattoir condemnation data can bias results from space-time cluster detection methods for disease surveillance, and may need to be accounted for using various adjustment methods. The objective of this study was to compare space-time scan statistics with different abilities to control for covariates and to assess their suitability for food animal syndromic surveillance. Four space-time scan statistic models were used, including: animal class adjusted Poisson, space-time permutation, multi-level model adjusted Poisson, and a weighted normal scan statistic using model residuals. The scan statistics were applied to monthly bovine pneumonic lung and “parasitic liver” condemnation data from Ontario provincial abattoirs from 2001–2007. Results: The number and space-time characteristics of identified clusters often varied between space-time scan tests for both “parasitic liver” and pneumonic lung condemnation data. While there were some similarities between isolated clusters in space, time and/or space-time, overall the results from space-time scan statistics differed substantially depending on the covariate adjustment approach used. Conclusions: Variability in results among methods suggests that caution should be used in selecting space-time scan methods for abattoir surveillance. Furthermore, validation of different approaches with simulated or real outbreaks is required before conclusive decisions can be made concerning the best approach for conducting surveillance with these data. PMID:24246040
Operational flood control of a low-lying delta system using large time step Model Predictive Control
NASA Astrophysics Data System (ADS)
Tian, Xin; van Overloop, Peter-Jules; Negenborn, Rudy R.; van de Giesen, Nick
2015-01-01
The safety of low-lying deltas is threatened not only by riverine flooding but by storm-induced coastal flooding as well. For the purpose of flood control, these deltas are mostly protected in a man-made environment, where dikes, dams and other adjustable infrastructures, such as gates, barriers and pumps, are widely constructed. Instead of always reinforcing and heightening these structures, it is worth considering making the most of the existing infrastructure to reduce the damage and manage the delta in an operational, integrated way. In this study, an advanced real-time control approach, Model Predictive Control (MPC), is proposed to operate these structures in the Dutch delta system (the Rhine-Meuse delta). The application covers non-linearity in the dynamic behavior of the water system and the structures. To deal with the non-linearity, a linearization scheme is applied which directly uses the gate height instead of the structure flow as the control variable. Given the fact that MPC needs to compute control actions in real time, we address issues regarding computational time. A new large time step scheme is proposed in order to save computation time, in which different control variables can have different control time steps. Simulation experiments demonstrate that Model Predictive Control with the large time step setting is able to control a delta system better and much more efficiently than the conventional operational schemes.
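The receding-horizon idea at the core of MPC can be illustrated for a scalar linear system (unconstrained, dense least-squares solve). This is a generic textbook sketch under stated assumptions, not the nonlinear delta model or large time step scheme of the study; all parameter names are illustrative:

```python
import numpy as np

def mpc_control(x0, a, b, horizon, x_ref=0.0, r=0.01):
    """First move of an unconstrained linear MPC for x[k+1] = a*x[k] + b*u[k]:
    minimize sum_k (x[k] - x_ref)^2 + r*u[k]^2 over the horizon and return
    only the first control input (receding-horizon principle)."""
    # Prediction model: x = F*x0 + G*u, with F[k] = a^(k+1), G[k, j] = a^(k-j)*b
    F = np.array([a ** (k + 1) for k in range(horizon)])
    G = np.zeros((horizon, horizon))
    for k in range(horizon):
        for j in range(k + 1):
            G[k, j] = a ** (k - j) * b
    # Normal equations of min ||F*x0 + G*u - x_ref||^2 + r*||u||^2
    H = G.T @ G + r * np.eye(horizon)
    g = G.T @ (x_ref - F * x0)
    u = np.linalg.solve(H, g)
    return u[0]   # apply the first move, then re-solve at the next step
```

For an integrator (a = 1, b = 1) starting at x0 = 1 with x_ref = 0 and a small control penalty, the first move is close to −1, i.e. the controller drives the state to the reference almost immediately.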
Convergence Acceleration for Multistage Time-Stepping Schemes
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli L.; Rossow, C-C; Vasta, V. N.
2006-01-01
The convergence of a Runge-Kutta (RK) scheme with multigrid is accelerated by preconditioning with a fully implicit operator. With the extended stability of the Runge-Kutta scheme, CFL numbers as high as 1000 could be used. The implicit preconditioner addresses the stiffness in the discrete equations associated with stretched meshes. Numerical dissipation operators (based on the Roe scheme, a matrix formulation, and the CUSP scheme) as well as the number of RK stages are considered in evaluating the RK/implicit scheme. Both the numerical and computational efficiency of the scheme with the different dissipation operators are discussed. The RK/implicit scheme is used to solve the two-dimensional (2-D) and three-dimensional (3-D) compressible, Reynolds-averaged Navier-Stokes equations. In two dimensions, turbulent flows over an airfoil at subsonic and transonic conditions are computed. The effects of mesh cell aspect ratio on convergence are investigated for Reynolds numbers between 5.7 x 10(exp 6) and 100.0 x 10(exp 6). Results are also obtained for a transonic wing flow. For both 2-D and 3-D problems, the computational time of a well-tuned standard RK scheme is reduced by at least a factor of four.
A Dynamic Era-Based Time-Symmetric Block Time-Step Algorithm with Parallel Implementations
NASA Astrophysics Data System (ADS)
Kaplan, Murat; Saygin, Hasan
2012-06-01
The time-symmetric block time-step (TSBTS) algorithm is a newly developed efficient scheme for N-body integrations. It is constructed on an era-based iteration. In this work, we re-designed the TSBTS integration scheme with a dynamically changing era size. A number of numerical tests were performed to show the importance of choosing the size of the era, especially for long-time integrations. Our second aim was to show that the TSBTS scheme is as suitable as previously known schemes for developing parallel N-body codes. In this work, we relied on a parallel scheme using the copy algorithm for the time-symmetric scheme. We implemented a hybrid of data and task parallelization for force calculation to handle load balancing problems that can appear in practice. Using the Plummer model initial conditions for different numbers of particles, we obtained the expected efficiency and speedup for a small number of particles. Although parallelization of the direct N-body codes is negatively affected by the communication/calculation ratios, we obtained good load-balanced results. Moreover, we were able to conserve the advantages of the algorithm (e.g., energy conservation for long-term simulations).
NASA Astrophysics Data System (ADS)
Cohen, W. B.; Yang, Z.; Stehman, S.; Huang, C.; Healey, S. P.
2013-12-01
Forest ecosystem process models require spatially and temporally detailed disturbance data to accurately predict fluxes of carbon or changes in biodiversity over time. A variety of new mapping algorithms using dense Landsat time series show great promise for providing disturbance characterizations at an annual time step. These algorithms provide unprecedented detail with respect to timing, magnitude, and duration of individual disturbance events, and causal agent. But all maps have error, and disturbance maps in particular can have significant omission error because many disturbances are relatively subtle. Because disturbance, although ubiquitous, can be a relatively rare event spatially in any given year, omission errors can have a great impact on mapped rates. Using a high-quality reference disturbance dataset, it is possible not only to characterize map errors but also to adjust mapped disturbance rates to provide unbiased rate estimates with confidence intervals. We present results from a national-level disturbance mapping project (the North American Forest Dynamics project) based on the Vegetation Change Tracker (VCT) with annual Landsat time series and uncertainty analyses that consist of three basic components: response design, statistical design, and analyses. The response design describes the reference data collection, in terms of the tool used (TimeSync), a formal description of interpretations, and the approach for data collection. The statistical design defines the selection of plot samples to be interpreted, whether stratification is used, and the sample size. Analyses involve derivation of standard agreement matrices between the map and the reference data, and use of inclusion probabilities and post-stratification to adjust mapped disturbance rates. Because for NAFD we use annual time series, both mapped and adjusted rates are provided at an annual time step from ~1985 to present. Preliminary evaluations indicate that VCT captures most of the higher
Effect of time-activity adjustment on exposure assessment for traffic-related ultrafine particles
Lane, Kevin J; Levy, Jonathan I; Scammell, Madeleine Kangsen; Patton, Allison P; Durant, John L; Mwamburi, Mkaya; Zamore, Wig; Brugge, Doug
2015-01-01
Exposures to ultrafine particles (<100 nm, estimated as particle number concentration, PNC) differ from ambient concentrations because of the spatial and temporal variability of both PNC and people. Our goal was to evaluate the influence of time-activity adjustment on exposure assignment and associations with blood biomarkers for a near-highway population. A regression model based on mobile monitoring and spatial and temporal variables was used to generate hourly ambient residential PNC for a full year for a subset of participants (n=140) in the Community Assessment of Freeway Exposure and Health study. We modified the ambient estimates for each hour using personal estimates of hourly time spent in five micro-environments (inside home, outside home, at work, commuting, other) as well as particle infiltration. Time-activity adjusted (TAA)-PNC values differed from residential ambient annual average (RAA)-PNC, with lower exposures predicted for participants who spent more time away from home. Employment status and distance to highway had a differential effect on TAA-PNC. We found associations of RAA-PNC with high sensitivity C-reactive protein and Interleukin-6, although exposure-response functions were non-monotonic. TAA-PNC associations had larger effect estimates and linear exposure-response functions. Our findings suggest that time-activity adjustment improves exposure assessment for air pollutants that vary greatly in space and time. PMID:25827314
NASA Astrophysics Data System (ADS)
Hirthe, E. M.; Graf, T.
2012-04-01
Fluid density variations occur due to changes in the solute concentration, temperature and pressure of groundwater. Examples are interaction between freshwater and seawater, radioactive waste disposal, groundwater contamination, and geothermal energy production. The physical coupling between flow and transport introduces non-linearity in the governing mathematical equations, such that solving variable-density flow problems typically requires very long computational times. Computational efficiency can be attained through the use of adaptive time-stepping schemes. The aim of this work is therefore to apply a non-iterative adaptive time-stepping scheme based on the local truncation error to variable-density flow problems. The new scheme is implemented in the code of the HydroGeoSphere model (Therrien et al., 2011). The new time-stepping scheme is applied to the Elder (1967) and Shikaze et al. (1998) problems of free convection in porous and fractured-porous media, respectively. Numerical simulations demonstrate that non-iterative time-stepping based on local truncation error control fully automates the time step size and efficiently limits the temporal discretization error to the user-defined tolerance. Results for the Elder problem show that the new time-stepping scheme presented here is significantly more efficient than uniform time-stepping when high accuracy is required. Results for the Shikaze problem reveal that the new scheme is considerably faster than conventional time-stepping in which time step sizes are either constant or controlled by absolute head/concentration changes. Future research will focus on the application of the new time-stepping scheme to variable-density flow in complex real-world fractured-porous rock.
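The control logic of a non-iterative local-truncation-error scheme of this kind can be sketched for a generic ODE system. The embedded first/second-order pair, the step-scaling rule, and the bounds below are assumptions of the sketch, not the HydroGeoSphere implementation:

```python
import numpy as np

def adaptive_euler(f, y0, t_end, dt0, tol, dt_min=1e-8, dt_max=1.0):
    """Non-iterative adaptive time stepping for y' = f(t, y).

    The local truncation error (LTE) of a first-order Euler step is
    estimated from its difference against a second-order Heun step; the
    NEXT step size is then scaled so the LTE stays near `tol`. The step
    is never repeated ("non-iterative"), unlike reject-and-retry schemes.
    """
    t, y, dt = 0.0, np.asarray(y0, dtype=float), dt0
    ts, ys = [t], [y.copy()]
    while t < t_end:
        dt = min(dt, t_end - t)
        k1 = f(t, y)
        y_euler = y + dt * k1                  # first-order prediction
        k2 = f(t + dt, y_euler)
        y_heun = y + 0.5 * dt * (k1 + k2)      # second-order prediction
        lte = np.max(np.abs(y_heun - y_euler)) # LTE estimate, O(dt^2)
        y, t = y_heun, t + dt                  # accept unconditionally
        ts.append(t); ys.append(y.copy())
        # grow/shrink the next step toward the error tolerance
        scale = np.sqrt(tol / max(lte, 1e-15))
        dt = float(np.clip(dt * min(scale, 2.0), dt_min, dt_max))
    return np.array(ts), np.array(ys)
```

On a simple decay problem the step size shrinks while the solution changes rapidly and grows as it flattens, which is the behaviour the abstract exploits for free-convection runs.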
An adaptive time-stepping strategy for solving the phase field crystal model
Zhang, Zhengru; Ma, Yuan; Qiao, Zhonghua
2013-09-15
In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. Numerical simulation of the PFC model needs a long time to reach steady state, so large time steps are desirable. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can resolve not only the steady-state solution but also the dynamical development of the solution efficiently and accurately. The numerical experiments demonstrate that significant CPU time is saved for long time simulations.
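An energy-based step selector of this kind can be sketched as below; the particular functional form and the constant `alpha` are assumptions of the sketch rather than the exact rule of the paper:

```python
import numpy as np

def adaptive_dt(E_prev, E_curr, dt_prev, dt_min=1e-3, dt_max=1.0, alpha=1e5):
    """Energy-based step-size selector of the kind used for phase field
    crystal simulations: small steps while the free energy changes fast,
    steps approaching dt_max as the solution nears steady state.
    `alpha` is a user-chosen sensitivity constant (assumed value here).
    """
    dE_dt = (E_curr - E_prev) / dt_prev  # discrete energy derivative
    return max(dt_min, dt_max / np.sqrt(1.0 + alpha * dE_dt**2))
```

Because the underlying scheme is unconditionally energy stable, no stability limit caps the step; only accuracy of the energy decay dictates `dt`, which is why the selector can be driven purely by |dE/dt|.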
Sensitivity of a thermodynamic sea ice model with leads to time step size
NASA Technical Reports Server (NTRS)
Ledley, T. S.
1985-01-01
The characteristics of sea ice models, developed to study the physics of the growth and melt of ice at the ocean surface and the variations in ice extent, depend on the size of the time step. Thus, to study longer-term variations within a reasonable computer budget, a model with a scheme allowing longer time steps has been constructed. However, the results produced by the model can depend significantly on the length of the time step. The sensitivity of a model to time-step size can be reduced by appropriate approaches. The present investigation is concerned with experiments which use a formulation of a lead parameterization that can be considered a first step toward the development of a lead parameterization suitable for use in long-term climate studies.
An Explicit Super-Time-Stepping Scheme for Non-Symmetric Parabolic Problems
NASA Astrophysics Data System (ADS)
Gurski, K. F.; O'Sullivan, S.
2010-09-01
Explicit numerical methods for the solution of a system of differential equations may suffer from a time step size that approaches zero in order to satisfy stability conditions. When the differential equations are dominated by a skew-symmetric component, the difficulty is that the real parts of the eigenvalues are dominated by their imaginary parts. We compare results for stable time step limits for the super-time-stepping method of Alexiades, Amiez, and Gremaud (super-time-stepping methods belong to the Runge-Kutta-Chebyshev class) and a new method modeled on a predictor-corrector scheme with multiplicative operator splitting. This new explicit method improves on the stability of the original super-time-stepping whenever the skew-symmetric term is nonzero, which occurs in particular convection-diffusion problems and more generally when the iteration matrix represents a nonlinear operator. The new method is stable for skew-symmetric-dominated systems where the regular super-time-stepping scheme fails. This method is second order in time (the order may be increased by Richardson extrapolation), and the spatial order is determined by the user's choice of discretization scheme. We present a comparison between the two super-time-stepping methods to show the speedup available for any non-symmetric system, using the nearly symmetric Black-Scholes equation as an example.
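The inner step sizes of the original super-time-stepping scheme come from the roots of Chebyshev polynomials; a minimal sketch of the substep formula attributed to Alexiades, Amiez and Gremaud (with an illustrative choice of the damping parameter `nu`):

```python
import numpy as np

def sts_substeps(dt_expl, N, nu):
    """Substep sizes tau_j for one super-time-stepping (STS) superstep.

    N inner explicit steps are taken with sizes derived from Chebyshev
    polynomial roots; their sum (the 'superstep') approaches N^2 * dt_expl
    as the damping parameter nu -> 0, while the composite step remains
    stable for a parabolic (diffusion-dominated) operator.
    """
    j = np.arange(1, N + 1)
    tau = dt_expl / ((nu - 1.0) * np.cos((2 * j - 1) * np.pi / (2 * N))
                     + 1.0 + nu)
    return tau

# one superstep covers far more time than N plain explicit steps:
tau = sts_substeps(dt_expl=1e-3, N=10, nu=0.01)
superstep = tau.sum()  # roughly half of N^2 * dt_expl at nu = 0.01
```

This is the speedup being traded away when the skew-symmetric part grows: the stability region of the Chebyshev polynomial hugs the negative real axis, so dominant imaginary eigenvalues fall outside it, motivating the predictor-corrector variant in the abstract.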
Lam, Chun Bun; McHale, Susan M; Crouter, Ann C
2012-11-01
The development and adjustment correlates of parent-child social (parent, child, and others present) and dyadic time (only parent and child present) from age 8 to 18 were examined. Mothers, fathers, and firstborns and secondborns from 188 White families participated in both home and nightly phone interviews. Social time declined across adolescence, but dyadic time with mothers and fathers peaked in early and middle adolescence, respectively. In addition, secondborns' social time declined more slowly than firstborns', and gendered time use patterns were more pronounced in boys and in opposite-sex sibling dyads. Finally, youths who spent more dyadic time with their fathers, on average, had higher general self-worth, and changes in social time with fathers were positively linked to changes in social competence. PMID:22925042
NASA Astrophysics Data System (ADS)
Milenin, Alexey P.; Boullart, Werner; Quli, Farhat; Wen, Youxian
2014-01-01
The effect of chuck temperature adjustment on critical dimension uniformity was studied for the shallow trench isolation etch process by introducing a temperature gradient in a multi-temperature-zone electrostatic chuck. It is shown that the initial radial critical dimension non-uniformity can be improved by a gradual temperature adjustment of the electrostatic chuck and results in the target specification values of uniformity, 3σ ≤ 1.5 nm, for a critical dimension of about 35 nm. Both temperature and RF sensor wafers were used to analyze the impact of an electrostatic chuck temperature gradient on process uniformity by utilizing their unique in situ spatial and temporal mapping capabilities. Thus, the across-wafer thermal sensitivity of the critical dimension was estimated for dense structures: a temperature change of 1 °C leads to a critical dimension change of ˜0.7 nm. The RF sensor wafer was also shown to have a clear response of RF current uniformity to the electrostatic chuck temperature gradient that suggests there could be other phenomena affecting critical dimension uniformity besides temperature itself. The pure temperature contribution to critical dimension change was found to be less than 0.3 nm/°C for the temperature range studied. Finally, a possible mechanism of critical dimension tuning is discussed and an assessment of each separate etch step’s sensitivity to the electrostatic chuck temperature gradient is performed.
Consistency of internal fluxes in a hydrological model running at multiple time steps
NASA Astrophysics Data System (ADS)
Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken
2016-04-01
Improving hydrological models remains a difficult task and many ways can be explored, among which one can find the improvement of spatial representation, the search for more robust parametrization, the better formulation of some processes or the modification of model structures by a trial-and-error procedure. Several past works indicate that model parameters and structure can be dependent on the modelling time step, and there is thus some rationale in investigating how a model behaves across various modelling time steps, to find solutions for improvements. Here we analyse the impact of data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, by using a large data set of 240 catchments. To this end, fine time step hydro-climatic information at sub-hourly resolution is used as input of a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to be run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance to the identification of the model components that should be improved. Our analysis indicates that the baseline model structure is to be modified at sub-daily time steps to ensure the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception model component, whose output flux showed the strongest sensitivity to modelling time step. The dependency of the optimal model
Lam, Chun Bun; McHale, Susan M.; Crouter, Ann C.
2014-01-01
This study examined the developmental course and adjustment correlates of time with peers from age 8 to 18. On 7 occasions over 8 years, the two eldest siblings from 201 European American, working- and middle-class families provided questionnaire and/or phone diary data. Multilevel models revealed that girls’ time with mixed/opposite-sex peers increased beginning in middle childhood, but boys’ time increased beginning in early adolescence. For both girls and boys, time with same-sex peers peaked in mid-adolescence. At the within-person level, unsupervised time with mixed/opposite-sex peers longitudinally predicted problem behaviors and depressive symptoms, and supervised time with mixed/opposite-sex peers longitudinally predicted better school performance. Findings highlight the importance of social context in understanding peer involvement and its implications for youth development. PMID:24673293
Automatic Time Stepping with Global Error Control for Groundwater Flow Models
Tang, Guoping
2008-09-01
An automatic time-stepping scheme with global error control is proposed for the time integration of the diffusion equation to simulate groundwater flow in confined aquifers. The scheme is based on an a posteriori error estimate for the discontinuous Galerkin (dG) finite element method. A stability factor is involved in the error estimate, and it is used to adapt the time step and control the global temporal error for the backward difference method. The stability factor can be estimated by solving a dual problem. The stability factor is not sensitive to the accuracy of the dual solution, and the overhead computational cost can be minimized by solving the dual problem using large time steps. Numerical experiments are conducted to show the application and the performance of the automatic time-stepping scheme. Implementation of the scheme can lead to improvements in accuracy and efficiency for groundwater flow models.
NASA Astrophysics Data System (ADS)
Li, Yancheng; Li, Jianchun; Tian, Tongfei; Li, Weihua
2013-09-01
Owing to its controllable, field-dependent stiffness/damping properties, magnetorheological elastomer (MRE) has attracted increasing research and development effort for mitigating unwanted structural or machinery vibrations using MRE isolators or absorbers. Recently, a pilot study on the development of a highly innovative prototype adaptive MRE base isolator, able to adaptively control base-isolated structures in real time against various types of earthquakes, including near- and far-fault earthquakes, was reported by the authors. As a further effort to improve the proposed MRE adaptive base isolator and to address some of the shortcomings and challenges, this paper presents systematic investigations on the development of a new highly adjustable MRE base isolator, including experimental testing and characterization of the new isolator. A soft MR elastomer has been designed, fabricated and incorporated in the laminated structure of the new MRE base isolator, which aims to obtain a highly adjustable shear modulus under a medium level of magnetic field. Comprehensive static and dynamic testing was conducted on this new adaptive MRE base isolator to examine its characteristics and evaluate its performance. The experimental results show that this new MRE base isolator can increase the lateral stiffness of the isolator by up to 1630% under a medium level of magnetic field. Such a highly adjustable MRE base isolator makes the design and implementation of truly real-time adaptive (e.g. semi-active or smart passive) seismic isolation systems feasible.
Stability analysis and time-step limits for a Monte Carlo Compton-scattering method
Densmore, Jeffery D. Warsa, James S. Lowrie, Robert B.
2010-05-20
A Monte Carlo method for simulating Compton scattering in high energy density applications has been presented that models the photon-electron collision kinematics exactly [E. Canfield, W.M. Howard, E.P. Liang, Inverse Comptonization by one-dimensional relativistic electrons, Astrophys. J. 323 (1987) 565]. However, implementing this technique typically requires an explicit evaluation of the material temperature, which can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and develop two time-step limits that avoid undesirable behavior. The first time-step limit prevents instabilities, while the second, more restrictive time-step limit avoids both instabilities and nonphysical oscillations. With a set of numerical examples, we demonstrate the efficacy of these time-step limits.
An Adaptive Fourier Filter for Relaxing Time Stepping Constraints for Explicit Solvers
Gelb, Anne; Archibald, Richard K
2015-01-01
Filtering is necessary to stabilize piecewise smooth solutions. The resulting diffusion stabilizes the method, but may fail to resolve the solution near discontinuities. Moreover, high order filtering still requires cost prohibitive time stepping. This paper introduces an adaptive filter that controls spurious modes of the solution, but is not unnecessarily diffusive. Consequently we are able to stabilize the solution with larger time steps, but also take advantage of the accuracy of a high order filter.
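A plain (non-adaptive) exponential spectral filter illustrates the mechanism the paper builds on; the adaptive variant additionally tunes the filter to the local smoothness of the solution, which this sketch does not attempt:

```python
import numpy as np

def exponential_filter(u, p=8, alpha=36.0):
    """Apply a standard exponential spectral filter sigma(eta) = exp(-alpha*eta^p)
    to the Fourier modes of a periodic grid function u.

    A high filter order p leaves the low (physical) modes essentially
    untouched and damps only the highest (spurious) modes, exactly the
    trade-off the abstract describes: enough diffusion to stabilize the
    solver without smearing the solution away from discontinuities.
    """
    n = u.size
    uhat = np.fft.fft(u)
    k = np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers 0..n/2-1, -n/2..-1
    eta = np.abs(k) / (n / 2)          # normalized wavenumber in [0, 1]
    return np.real(np.fft.ifft(uhat * np.exp(-alpha * eta**p)))
```

With `alpha = 36` the highest mode (eta = 1) is damped to machine precision while a low mode such as sin(x) passes through essentially unchanged.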
Hove, Michael J; Balasubramaniam, Ramesh; Keller, Peter E
2014-12-01
Synchronizing movements with a beat requires rapid compensation for timing errors. The phase-correction response (PCR) has been studied extensively in finger tapping by shifting a metronome onset and measuring the adjustment of the following tap time. How the response unfolds during the subsequent tap cycle remains unknown. Using motion capture, we examined finger kinematics during the PCR. Participants tapped with a metronome containing phase perturbations. They tapped in "legato" and "staccato" style at various tempi, which altered the timing of the constituent movement stages (dwell at the surface, extension, and flexion). After a phase perturbation, tapping kinematics changed compared with baseline, and the PCR was distributed differently across movement stages. In staccato tapping, the PCR trajectory changed primarily during finger extension across tempi. In legato tapping, at fast tempi the PCR occurred primarily during extension, whereas at slow tempi most phase correction was already completed during dwell. Across conditions, timing adjustments occurred primarily 100-250 ms into the following tap cycle. The change in movement around 100 ms represents the time to integrate information into an already planned movement and the rapidity suggests a subcortical route. PMID:25151103
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Kavetski, Dmitri
2010-10-01
A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
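The stability advantage of the fixed-step implicit Euler method recommended above can be illustrated on a linear-reservoir bucket, a common building block of conceptual rainfall-runoff models (this toy bucket stands in for, and is not, the equations of the models tested in the paper):

```python
def bucket_implicit_euler(P, S0, k, dt):
    """Fixed-step implicit (backward) Euler for the linear reservoir
    dS/dt = P(t) - k*S found inside many conceptual rainfall-runoff models.

    Treating the storage term at the new time level gives the update
        S_new = (S_old + dt*P) / (1 + dt*k),
    which is stable for any dt, whereas fixed-step explicit Euler
    blows up once dt > 2/k. (Illustrative bucket, not the GR4J equations.)
    """
    S, out = S0, []
    for p in P:                       # P: precipitation forcing per step
        S = (S + dt * p) / (1.0 + dt * k)
        out.append(S)
    return out
```

Run with `dt` well beyond the explicit stability limit, the implicit update still relaxes smoothly to the correct steady state S = P/k, which is why the implicit Euler scheme tolerates the daily steps where fixed-step explicit schemes accumulate the large numerical errors reported above.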
Száz, Dénes; Farkas, Alexandra; Blahó, Miklós; Barta, András; Egri, Ádám; Kretzer, Balázs; Hegedüs, Tibor; Jäger, Zoltán; Horváth, Gábor
2016-01-01
According to an old but still unproven theory, Viking navigators analysed the skylight polarization with dichroic cordierite or tourmaline, or birefringent calcite sunstones in cloudy/foggy weather. Combining these sunstones with their sun-dial, they could determine the position of the occluded sun, from which the geographical northern direction could be guessed. In psychophysical laboratory experiments, we studied the accuracy of the first step of this sky-polarimetric Viking navigation. We measured the adjustment error e of rotatable cordierite, tourmaline and calcite crystals when the task was to determine the direction of polarization of white light as a function of the degree of linear polarization p. From the obtained error functions e(p), the thresholds p* above which the first step can still function (i.e. when the intensity change seen through the rotating analyser can be sensed) were derived. Cordierite is about twice as reliable as tourmaline. Calcite sunstones have smaller adjustment errors if the navigator looks for that orientation of the crystal where the intensity difference between the two spots seen in the crystal is maximal, rather than minimal. For higher p (greater than p crit) of incident light, the adjustment errors of calcite are larger than those of the dichroic cordierite (p crit=20%) and tourmaline (p crit=45%), while for lower p (less than p crit) calcite usually has lower adjustment errors than dichroic sunstones. We showed that real calcite crystals are not as ideal sunstones as it was believed earlier, because they usually contain scratches, impurities and crystal defects which increase considerably their adjustment errors. Thus, cordierite and tourmaline can also be at least as good sunstones as calcite. Using the psychophysical e(p) functions and the patterns of the degree of skylight polarization measured by full-sky imaging polarimetry, we computed how accurately the northern direction can be determined with the use of the Viking
Modeling solute transport in distribution networks with variable demand and time step sizes.
Peyton, Chad E.; Bilisoly, Roger Lee; Buchberger, Steven G.; McKenna, Sean Andrew; Yarrington, Lane
2004-06-01
The effect of variable demands at short time scales on the transport of a solute through a water distribution network has not previously been studied. We simulate flow and transport in a small water distribution network using EPANET to explore the effect of variable demand on solute transport across a range of hydraulic time step scales from 1 minute to 2 hours. We show that variable demands at short time scales can have the following effects: smoothing of a pulse of tracer injected into a distribution network and increasing the variability of both the transport pathway and transport timing through the network. Variable demands are simulated for these different time step sizes using a previously developed Poisson rectangular pulse (PRP) demand generator that considers demand at a node to be a combination of exponentially distributed arrival times with log-normally distributed intensities and durations. Solute is introduced at a tank and at three different network nodes and concentrations are modeled through the system using the Lagrangian transport scheme within EPANET. The transport equations within EPANET assume perfect mixing of the solute within a parcel of water and therefore physical dispersion cannot occur. However, variation in demands along the solute transport path contribute to both removal and distortion of the injected pulse. The model performance measures examined are the distribution of the Reynolds number, the variation in the center of mass of the solute across time, and the transport path and timing of the solute through the network. Variation in all three performance measures is greatest at the shortest time step sizes. As the scale of the time step increases, the variability in these performance measures decreases. The largest time steps produce results that are inconsistent with the results produced by the smaller time steps.
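A Poisson rectangular pulse generator of the kind described (exponential interarrival times with log-normal intensities and durations) can be sketched as follows; parameter names and units are illustrative assumptions, not those of the generator used in the study:

```python
import random

def prp_demands(rate, mu_i, sigma_i, mu_d, sigma_d, horizon, seed=0):
    """Poisson rectangular pulse (PRP) style demand generator for one node.

    Pulse arrivals follow a Poisson process (exponentially distributed
    interarrival times at `rate` pulses per second); each pulse carries a
    log-normally distributed intensity (e.g. L/s) and duration (e.g. s).
    Returns (start_time, duration, intensity) tuples within `horizon`.
    """
    rng = random.Random(seed)          # seeded for reproducible demand series
    t, pulses = 0.0, []
    while True:
        t += rng.expovariate(rate)     # Poisson arrival process
        if t >= horizon:
            break
        intensity = rng.lognormvariate(mu_i, sigma_i)
        duration = rng.lognormvariate(mu_d, sigma_d)
        pulses.append((t, duration, intensity))
    return pulses
```

Summing the overlapping rectangles onto a grid of the chosen hydraulic time step then yields the node demand series whose time-step sensitivity the abstract examines.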
Ford, Marc C.; Alexandrova, Olga; Cossell, Lee; Stange-Marten, Annette; Sinclair, James; Kopp-Scheinpflug, Conny; Pecka, Michael; Attwell, David; Grothe, Benedikt
2015-01-01
Action potential timing is fundamental to information processing; however, its determinants are not fully understood. Here we report unexpected structural specializations in the Ranvier nodes and internodes of auditory brainstem axons involved in sound localization. Myelination properties deviated significantly from the traditionally assumed structure. Axons responding best to low-frequency sounds had a larger diameter than high-frequency axons but, surprisingly, shorter internodes. Simulations predicted that this geometry helps to adjust the conduction velocity and timing of action potentials within the circuit. Electrophysiological recordings in vitro and in vivo confirmed higher conduction velocities in low-frequency axons. Moreover, internode length decreased and Ranvier node diameter increased progressively along the distal axon segments, which simulations show was essential to ensure precisely timed depolarization of the giant calyx of Held presynaptic terminal. Thus, individual anatomical parameters of myelinated axons can be tuned to optimize pathways involved in temporal processing. PMID:26305015
De Roia, Gabriela; Pogliaghi, Silvia; Adami, Alessandra; Papadopoulou, Christina; Capelli, Carlo
2012-05-15
Aging is associated with a functional decline of the oxidative metabolism due to progressive limitations of both O2 delivery and utilization. Priming exercise (PE) increases the speed of adjustment of oxidative metabolism during successive moderate-intensity transitions. We tested the hypothesis that such improvement is due to a better matching of O2 delivery to utilization within the working muscles. In 21 healthy older adults (65.7 ± 5 yr), we measured contemporaneously noninvasive indexes of the overall speed of adjustment of the oxidative metabolism (i.e., pulmonary VO2 kinetics), of the bulk O2 delivery (i.e., cardiac output), and of the rate of muscle deoxygenation (i.e., deoxygenated hemoglobin, HHb) during moderate-intensity step transitions, either with (ModB) or without (ModA) prior PE. The local matching of O2 delivery to utilization was evaluated by the ΔHHb/ΔVO2 ratio index. The overall speed of adjustment of the VO2 kinetics was significantly increased in ModB compared with ModA (P < 0.05). On the contrary, the kinetics of cardiac output was unaffected by PE. At the muscle level, ModB was associated with a significant reduction of the "overshoot" in the ΔHHb/ΔVO2 ratio compared with ModA (P < 0.05), suggesting an improved O2 delivery. Our data are compatible with the hypothesis that, in older adults, PE, prior to moderate-intensity exercise, beneficially affects the speed of adjustment of oxidative metabolism due to an acute improvement of the local matching of O2 delivery to utilization. PMID:22422668
Halsey, Lewis G; Watkins, David A R; Duggan, Brendan M
2012-01-01
Stairway climbing provides a ubiquitous and inconspicuous method of burning calories. While typically two strategies are employed for climbing stairs, climbing one stair step per stride or two steps per stride, research to date has not clarified whether there are any differences in energy expenditure between them. Fourteen participants took part in two stair climbing trials whereby measures of heart rate were used to estimate energy expenditure during stairway ascent at speeds chosen by the participants. The relationship between rate of oxygen consumption (VO2) and heart rate was calibrated for each participant using an inclined treadmill. The trials involved climbing up and down a 14.05 m high stairway, either ascending one step per stride or ascending two stair steps per stride. Single-step climbing used 8.5 ± 0.1 kcal min^-1, whereas double-step climbing used 9.2 ± 0.1 kcal min^-1. These estimations are similar to equivalent measures in all previous studies, which have all directly measured VO2. The present study findings indicate that (1) treadmill-calibrated heart rate recordings can be used as a valid alternative to respirometry to ascertain rate of energy expenditure during stair climbing; (2) two-step climbing invokes a higher rate of energy expenditure; however, one-step climbing is energetically more expensive in total over the entirety of a stairway. Therefore, to expend the maximum number of calories when climbing a set of stairs, the single-step strategy is better. PMID:23251455
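The rate-versus-total distinction in this abstract can be made concrete with a small calculation. The two energy-expenditure rates below are from the abstract, but the ascent durations are purely hypothetical assumptions for illustration (the abstract does not report them); the point is only that a lower rate sustained for a longer climb can cost more in total.

```python
# Rates from the abstract; durations are hypothetical placeholders.
RATE_SINGLE = 8.5   # kcal/min, one stair step per stride (from the abstract)
RATE_DOUBLE = 9.2   # kcal/min, two stair steps per stride (from the abstract)

TIME_SINGLE = 1.5   # minutes for the ascent, single-step (assumed)
TIME_DOUBLE = 1.2   # minutes for the ascent, double-step (assumed)

# Total energy = rate x duration: single-step wins on total expenditure
# even though double-step burns energy at the higher rate.
total_single = RATE_SINGLE * TIME_SINGLE   # 12.75 kcal
total_double = RATE_DOUBLE * TIME_DOUBLE   # 11.04 kcal

print(f"single-step total: {total_single:.2f} kcal")
print(f"double-step total: {total_double:.2f} kcal")
```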
Modified Chebyshev pseudospectral method with O(N^-1) time step restriction
NASA Technical Reports Server (NTRS)
Kosloff, Dan; Tal-Ezer, Hillel
1989-01-01
The extreme eigenvalues of the Chebyshev pseudospectral differentiation operator are O(N^2), where N is the number of grid points. As a result, the allowable time step in an explicit time marching algorithm is O(N^-2), which, in many cases, is much below the time step dictated by the physics of the partial differential equation. A new set of interpolating points is introduced such that the eigenvalues of the differentiation operator are O(N) and the allowable time step is O(N^-1). The properties of the new algorithm are similar to those of the Fourier method. The new algorithm also provides a highly accurate solution for non-periodic boundary value problems.
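The O(N^-2) explicit step limit is tied to the O(N^-2) clustering of standard Chebyshev points near the interval ends. A minimal sketch of that clustering (this checks the spacing of the classical points only; the paper's modified interpolation points are not specified in the abstract):

```python
import math

def min_chebyshev_spacing(n):
    """Smallest gap between the Chebyshev points x_j = cos(pi*j/n), j = 0..n."""
    pts = [math.cos(math.pi * j / n) for j in range(n + 1)]
    return min(pts[j] - pts[j + 1] for j in range(n))  # pts is decreasing

# The boundary spacing shrinks like O(N^-2): doubling N roughly quarters it.
# This is what forces the O(N^-2) explicit time step mentioned above.
for n in (16, 32, 64):
    ratio = min_chebyshev_spacing(2 * n) / min_chebyshev_spacing(n)
    print(n, ratio)  # ratio tends to 1/4
```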
26 CFR 1.754-1 - Time and manner of making election to adjust basis of partnership property.
Code of Federal Regulations, 2010 CFR
2010-04-01
... basis of partnership property. 1.754-1 Section 1.754-1 Internal Revenue INTERNAL REVENUE SERVICE..., Subchapter K, Chapter 1 of the Code § 1.754-1 Time and manner of making election to adjust basis of... and method of making election. (1) An election under section 754 and this section to adjust the...
ERIC Educational Resources Information Center
Martinez, Charles R., Jr.; McClure, Heather H.; Eddy, J. Mark; Wilson, D. Molloy
2011-01-01
Little is known about contributors to positive social, behavioral, and emotional adjustment among foreign-born youth at different stages of adapting to life in the United States. Using baseline data from the Adolescent Latino Acculturation Study (N = 217), this article examines the effects of time in residency on parent adjustment, family stress,…
Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul
2013-01-01
Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format. PMID:23685876
2015-01-01
When simulating molecular systems using deterministic equations of motion (e.g., Newtonian dynamics), such equations are generally numerically integrated according to a well-developed set of algorithms that share commonly agreed-upon desirable properties. However, for stochastic equations of motion (e.g., Langevin dynamics), there is still broad disagreement over which integration algorithms are most appropriate. While multiple desiderata have been proposed throughout the literature, consensus on which criteria are important is absent, and no published integration scheme satisfies all desiderata simultaneously. Additional nontrivial complications stem from simulating systems driven out of equilibrium using existing stochastic integration schemes in conjunction with recently developed nonequilibrium fluctuation theorems. Here, we examine a family of discrete time integration schemes for Langevin dynamics, assessing how each member satisfies a variety of desiderata that have been enumerated in prior efforts to construct suitable Langevin integrators. We show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting (related to the velocity Verlet discretization) that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts. PMID:24555448
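The family of splittings discussed here can be illustrated with one well-known member related to the velocity Verlet discretization, often written "BAOAB" (kick, drift, exact Ornstein-Uhlenbeck update, drift, kick). This is a generic sketch for a 1D harmonic oscillator, not the specific integrator or time-step rescaling the paper analyzes; all parameter values are illustrative.

```python
import math, random

def baoab_step(x, v, dt, gamma=1.0, kT=1.0, m=1.0, force=lambda x: -x):
    """One BAOAB step of Langevin dynamics: half kick (B), half drift (A),
    exact Ornstein-Uhlenbeck velocity update (O), half drift (A), half kick (B)."""
    v += 0.5 * dt * force(x) / m            # B
    x += 0.5 * dt * v                       # A
    c1 = math.exp(-gamma * dt)              # O: exact OU propagation
    c2 = math.sqrt(kT / m * (1.0 - c1 * c1))
    v = c1 * v + c2 * random.gauss(0.0, 1.0)
    x += 0.5 * dt * v                       # A
    v += 0.5 * dt * force(x) / m            # B
    return x, v

# Sanity check: in a harmonic well U = x^2/2 at kT = 1, the sampled
# configurational variance should stay close to 1 even at a sizeable step.
random.seed(1)
x = v = 0.0
acc = n = 0
for i in range(200_000):
    x, v = baoab_step(x, v, dt=0.25)
    if i > 1000:                            # discard equilibration
        acc += x * x
        n += 1
print(acc / n)  # close to 1.0
```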
Toggweiler, Matthias; Adelmann, Andreas; Arbenz, Peter; Yang, Jianjun
2014-09-15
We show that adaptive time stepping in particle accelerator simulation is an enhancement for certain problems. The new algorithm has been implemented in the OPAL (Object Oriented Parallel Accelerator Library) framework. The idea is to adjust the frequency of costly self-field calculations, which are needed to model Coulomb interaction (space charge) effects. In analogy to a Kepler orbit simulation that requires a higher time step resolution at the close encounter, we propose to choose the time step based on the magnitude of the space charge forces. Inspired by geometric integration techniques, our algorithm chooses the time step proportional to a function of the current phase space state instead of calculating a local error estimate like a conventional adaptive procedure. Building on recent work, a more profound argument is given on how exactly the time step should be chosen. An intermediate algorithm, initially built to allow a clearer analysis by introducing separate time steps for external field and self-field integration, turned out to be useful on its own for a large class of problems.
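The Kepler analogy can be sketched directly. Below, the step is simply taken inversely proportional to the instantaneous acceleration magnitude, which is one state-dependent choice in the spirit of the abstract, not OPAL's actual criterion; the orbit parameters are illustrative.

```python
import math

def accel(x, y):
    """Gravitational acceleration for GM = 1."""
    r = math.hypot(x, y)
    return -x / r**3, -y / r**3

def adaptive_orbit(steps, eps=1e-3):
    """Kick-drift-kick integration of an eccentric (e = 0.5, a = 1) orbit,
    with dt chosen from the current state: small steps near perihelion
    where the force is large, large steps near aphelion."""
    x, y = 1.5, 0.0                          # start at aphelion r_a = a(1+e)
    vx, vy = 0.0, math.sqrt((1 - 0.5) / 1.5) # aphelion speed for GM = 1
    history = []
    for _ in range(steps):
        ax, ay = accel(x, y)
        dt = eps / math.hypot(ax, ay)        # state-dependent time step
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
        x += dt * vx;        y += dt * vy
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
        history.append((math.hypot(x, y), dt))
    return history

data = adaptive_orbit(8000)  # covers more than one full orbit
```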
Multi time-step wavefront reconstruction for tomographic adaptive-optics systems.
Ono, Yoshito H; Akiyama, Masayuki; Oya, Shin; Lardiére, Olivier; Andersen, David R; Correia, Carlos; Jackson, Kate; Bradley, Colin
2016-04-01
In tomographic adaptive-optics (AO) systems, errors due to tomographic wavefront reconstruction limit the performance and angular size of the scientific field of view (FoV), where AO correction is effective. We propose a multi time-step tomographic wavefront reconstruction method to reduce the tomographic error by using measurements from both the current and previous time steps simultaneously. We further outline the method to feed the reconstructor with both wind speed and direction of each turbulence layer. An end-to-end numerical simulation, assuming a multi-object AO (MOAO) system on a 30 m aperture telescope, shows that the multi time-step reconstruction increases the Strehl ratio (SR) over a scientific FoV of 10 arc min in diameter by a factor of 1.5-1.8 when compared to the classical tomographic reconstructor, depending on the guide star asterism and with perfect knowledge of wind speeds and directions. We also evaluate the multi time-step reconstruction method and the wind estimation method on the RAVEN demonstrator under laboratory setting conditions. The wind speeds and directions at multiple atmospheric layers are measured successfully in the laboratory experiment by our wind estimation method with errors below 2 m s^{-1}. With these wind estimates, the multi time-step reconstructor increases the SR value by a factor of 1.2-1.5, which is consistent with a prediction from the end-to-end numerical simulation. PMID:27140785
NASA Astrophysics Data System (ADS)
Chen, Zhang; Liang, Bin; Zhang, Tao
2016-05-01
When teleoperations are implemented in a constrained environment, the lack of environment information can lead to contacts and undesired excessive contact forces, which are more evident in the presence of time delays. In this paper, a hybrid compliant bilateral controller is proposed to deal with this problem. The controller adopts a self-adjusting selecting scheme to divide the subspaces online. The master and slave manipulators are synchronized in the position subspace through an adaptive bilateral control scheme. At the same time, the slave manipulator is controlled by a local sliding mode impedance controller in order to achieve the desired compliant motion when contacting the environment. Theoretical analysis proves the stability of the hybrid bilateral controller and characterizes the transient performance of the teleoperators. Simulations are carried out to verify the effectiveness of the proposed approach. The results show that the control goals are all achieved.
Ejupi, Andreas; Brodie, Matthew; Gschwind, Yves J; Schoene, Daniel; Lord, Stephen; Delbaere, Kim
2014-01-01
Accidental falls remain an important problem in older people. Stepping is a common task to avoid a fall and requires good interplay between sensory functions, central processing and motor execution. Increased choice stepping reaction time has been associated with recurrent falls in older people. The aim of this study was to examine if a sensor-based Exergame Choice Stepping Reaction Time test can successfully discriminate older fallers from non-fallers. The stepping test was conducted in a cohort of 104 community-dwelling older people (mean age: 80.7 ± 7.0 years). Participants were asked to step laterally as quickly as possible after a light stimulus appeared on a TV screen. Spatial and temporal measurements of the lower and upper body were derived from a low-cost and portable 3D-depth sensor (i.e. Microsoft Kinect) and 3D-accelerometer. Fallers had a slower stepping reaction time (970 ± 228 ms vs. 858 ± 123 ms, P = 0.001) and a slower reaction of their upper body (719 ± 289 ms vs. 631 ± 166 ms, P = 0.052) compared to non-fallers. It took fallers significantly longer than non-fallers to recover their balance after initiating the step (2147 ± 800 ms vs. 1841 ± 591 ms, P = 0.029). This study demonstrated that a sensor-based, low-cost and easy to administer stepping test, with the potential to be used in clinical practice or regular unsupervised home assessments, was able to identify significant differences between performances by fallers and non-fallers. PMID:25571596
Time-step limits for a Monte Carlo Compton-scattering method
Densmore, Jeffery D; Warsa, James S; Lowrie, Robert B
2008-01-01
Compton scattering is an important aspect of radiative transfer in high energy density applications. In this process, the frequency and direction of a photon are altered by colliding with a free electron. The change in frequency of a scattered photon results in an energy exchange between the photon and target electron and energy coupling between radiation and matter. Canfield, Howard, and Liang have presented a Monte Carlo method for simulating Compton scattering that models the photon-electron collision kinematics exactly. However, implementing their technique in multiphysics problems that include the effects of radiation-matter energy coupling typically requires evaluating the material temperature at its beginning-of-time-step value. This explicit evaluation can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and present time-step limits that avoid instabilities and nonphysical oscillations by considering a spatially independent, purely scattering radiative-transfer problem. Examining a simplified problem is justified because it isolates the effects of Compton scattering, and existing Monte Carlo techniques can robustly model other physics (such as absorption, emission, sources, and photon streaming). Our analysis begins by simplifying the equations that are solved via Monte Carlo within each time step using the Fokker-Planck approximation. Next, we linearize these approximate equations about an equilibrium solution such that the resulting linearized equations describe perturbations about this equilibrium. We then solve these linearized equations over a time step and determine the corresponding eigenvalues, quantities that can predict the behavior of solutions generated by a Monte Carlo simulation as a function of time-step size and other physical parameters. With these results, we develop our time-step limits. This approach is similar to our recent investigation of time discretizations for the
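The flavor of this stability analysis can be shown on a toy linear coupling model (not the paper's Fokker-Planck equations): let radiation and matter energies relax toward each other, du/dt = -c(u - v) and dv/dt = +c(u - v). Evaluating the coupling at its beginning-of-time-step value (explicit Euler) gives the difference w = u - v the update w_{n+1} = (1 - 2c*dt) w_n, whose amplification factor predicts monotone decay, nonphysical oscillation, or instability depending on dt, exactly the kind of eigenvalue-based time-step limit the abstract describes.

```python
def amplification(c, dt):
    """Amplification factor of w = u - v under explicit Euler:
    w_{n+1} = (1 - 2*c*dt) * w_n.
    0 <= g < 1  -> monotone decay        (dt <= 1/(2c))
    -1 < g < 0  -> damped oscillation    (dt <= 1/c)
    g <= -1     -> nonphysical growth    (dt >  1/c)"""
    return 1.0 - 2.0 * c * dt

c = 4.0  # illustrative coupling rate
for dt, label in [(0.1, "monotone decay"),
                  (0.2, "damped oscillation"),
                  (0.3, "unstable growth")]:
    g = amplification(c, dt)
    print(f"dt={dt}: g={g:+.2f} -> {label}")
```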
Exponential time-differencing with embedded Runge–Kutta adaptive step control
Whalen, P.; Brio, M.; Moloney, J.V.
2015-01-01
We have presented the first embedded Runge–Kutta exponential time-differencing (RKETD) methods of fourth order with third order embedding and fifth order with third order embedding for non-Rosenbrock type nonlinear systems. A procedure for constructing RKETD methods that accounts for both order conditions and stability is outlined. In our stability analysis, the fast time scale is represented by a full linear operator in contrast to particular scalar cases considered before. An effective time-stepping strategy based on reducing both ETD function evaluations and rejected steps is described. Comparisons of performance with adaptive-stepping integrating factor (IF) are carried out on a set of canonical partial differential equations: the shock-fronts of Burgers equation, interacting KdV solitons, KS controlled chaos, and critical collapse of two-dimensional NLS.
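The core ETD idea can be sketched with a first-order scheme, a much simpler relative of the fourth- and fifth-order RKETD methods presented here: the stiff linear part is integrated exactly via the exponential, so the step size is limited by the slow nonlinearity rather than by the fast linear scale. The test problem below is an assumed illustration, not one of the paper's benchmark PDEs.

```python
import math

def etd1_step(u, t, dt, lam, nonlin):
    """First-order exponential time differencing for u' = lam*u + N(u, t):
    u_{n+1} = e^{lam*dt} u_n + (e^{lam*dt} - 1)/lam * N(u_n, t_n)."""
    e = math.exp(lam * dt)
    return e * u + (e - 1.0) / lam * nonlin(u, t)

def euler_step(u, t, dt, lam, nonlin):
    """Explicit Euler for the same equation, for comparison."""
    return u + dt * (lam * u + nonlin(u, t))

lam = -50.0                       # stiff linear decay
nonlin = lambda u, t: math.sin(t) # slow forcing
dt = 0.05                         # |1 + lam*dt| = 1.5 > 1: Euler is unstable
u_etd = u_eul = 1.0
for k in range(40):               # integrate to T = 2.0
    t = k * dt
    u_etd = etd1_step(u_etd, t, dt, lam, nonlin)
    u_eul = euler_step(u_eul, t, dt, lam, nonlin)
print(abs(u_etd), abs(u_eul))     # ETD stays bounded; Euler blows up
```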
Enabling fast, stable and accurate peridynamic computations using multi-time-step integration
Lindsay, P.; Parks, M. L.; Prakash, A.
2016-04-13
Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.
Muscle contraction: the step-size distance and the impulse-time per ATP.
Worthington, C R; Elliott, G F
1996-02-01
We derive the step-size distance, and the impulse time per ATP split, from a consideration of Hill's energy rate equation coupled with the enthalpy available per ATP split. This definition of step-size distance is model-independent, and is calculated to have a maximum of 17 Å at no load and to reduce to zero at isometric tension, since it will depend on the velocity of shortening. We revisit a derivation of Hill's force-velocity equation depending on impulsive forces working against frictional forces and show that this gives a physical meaning to Hill's constants a and b. This is particularly elegant for Hill's constant b, which is directly related to the impulse time; the value of this impulse time is 1/2 ms. The question that muscle contraction may involve overlapping interactions is considered. However, we find that the step-size distance is not dependent on the possibility of overlapping interactions. PMID:8852761
Effects of Timing of Adversity on Adolescent and Young Adult Adjustment
ERIC Educational Resources Information Center
Kiff, Cara J.; Cortes, Rebecca C.; Lengua, Liliana J.; Kosterman, Rick; Hawkins, J. David; Mason, W. Alex
2012-01-01
Exposure to adversity during childhood and adolescence predicts adjustment across development. Furthermore, adolescent adjustment problems persist into young adulthood. This study examined relations of contextual adversity with concurrent adolescent adjustment and prospective mental health and health outcomes in young adulthood. A longitudinal…
NASA Astrophysics Data System (ADS)
Wang, Yue
A new variable grid-size and time-step finite-difference (FD) method is developed and applied to three different geophysical problems: simulation of tube waves in boreholes, three-dimensional (3-D) ground-motion simulation in sedimentary basin models, and reverse-time migration of multicomponent data. Unlike the conventional FD method, which uses a fixed grid-size and time-step for the entire model region, spatially variable grid-sizes and time-steps are used to achieve the optimal computational efficiency. For tube wave simulations, a fine grid-spacing is used for simulation inside the borehole region, while a coarse grid is used in the exterior region. While the stability condition requires a very fine time step for the fine grid, a variable time-step method provides coarse time steps for simulation in the coarse grid. Variable grid-size and time-step changes are used to achieve both accuracy and efficiency in the simulations. Numerical tests are performed for the Bayou Choctaw salt-flank model with different borehole models. The results show the important borehole effects on the seismic wavefield for a realistic source bandwidth. The combination of variable grid-size and time-step methods reduces computational costs by several orders of magnitude for the borehole models. Viscoelastic 3-D simulations are performed for a three-layer Salt Lake basin model. The near-surface unconsolidated layer is modeled with a fine grid, and the deep part of the model is modeled by a coarse grid. Simulation results show that the 3-D basin features and the shallow layer significantly affect the amplitude and duration time of the ground motion. In the elastic case, the approximation by 2-D modeling is insufficient to simulate the 3-D ground motion response. A basin model without a shallow low-velocity layer underestimates the ground motion duration and cumulative kinetic energy by 50% or more. The simulation of a Bingham Mine blast suggests that a lower S-velocity should be used to
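The bookkeeping behind a variable grid-size and time-step scheme follows from the CFL condition: each region's stable step is proportional to its grid spacing over its maximum wave speed, and the fine region is subcycled an integer number of times per coarse step. A minimal sketch with illustrative numbers (not the Bayou Choctaw or Salt Lake model values):

```python
def region_time_steps(regions, courant=0.5):
    """Given {name: (dx, vmax)}, return each region's CFL-stable time step
    and the integer number of fine steps per coarse step (subcycling ratio)."""
    dts = {name: courant * dx / vmax for name, (dx, vmax) in regions.items()}
    dt_coarse = max(dts.values())
    ratios = {name: round(dt_coarse / dt) for name, dt in dts.items()}
    return dts, ratios

# Borehole-like setup (illustrative numbers only): a fine grid inside the
# borehole, a coarse grid in the exterior formation.
regions = {
    "borehole (fine grid)": (0.1, 1500.0),     # dx in m, vmax in m/s
    "formation (coarse grid)": (1.0, 3000.0),
}
dts, ratios = region_time_steps(regions)
print(dts)     # the fine region needs a 5x smaller time step here
print(ratios)  # so it is subcycled 5 times per coarse step
```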
NASA Technical Reports Server (NTRS)
Jameson, A.; Schmidt, Wolfgang; Turkel, Eli
1981-01-01
A new combination of a finite volume discretization in conjunction with carefully designed dissipative terms of third order, and a Runge-Kutta time stepping scheme, is shown to yield an effective method for solving the Euler equations in arbitrary geometric domains. The method has been used to determine the steady transonic flow past an airfoil using an O mesh. Convergence to a steady state is accelerated by the use of a variable time step determined by the local Courant number, and the introduction of a forcing term proportional to the difference between the local total enthalpy and its free stream value.
NASA Astrophysics Data System (ADS)
Karimi, S.; Nakshatrala, K. B.
2014-12-01
Advection-Diffusion-Reaction (ADR) equations play a crucial role in simulating numerous geophysical phenomena. It is well-known that the solutions to these equations exhibit disparate spatial and temporal scales. These mathematical scales occur due to relative dominance of either advection, diffusion, or reaction processes. Hence, in a careful simulation, one has to choose appropriate time-integrators, time-steps, and numerical formulations for spatial discretization. Multi-time-step coupling methods allow specific choice of integration methods (either temporal or spatial) in different regions of the spatial domain. In recent years, most of the attempts to design monolithic multi-time-step frameworks favored second-order transient systems in structural dynamics. In this presentation, we will introduce monolithic multi-time-step computational frameworks for ADR equations. These methods are based on the theory of differential/algebraic equations. We shall also provide an overview of results from stability analysis, study of drift from compatibility constraints, and analysis of influence of perturbations. Several benchmark problems will be utilized to demonstrate the theoretical findings and features of the proposed frameworks. Finally, application of the proposed methods to fast bimolecular reactive systems will be shown.
Time-step Considerations in Particle Simulation Algorithms for Coulomb Collisions in Plasmas
Cohen, B I; Dimits, A; Friedman, A; Caflisch, R
2009-10-29
The accuracy of first-order Euler and higher-order time-integration algorithms for grid-based Langevin-equation collision models in a specific relaxation test problem is assessed. We show that statistical noise errors can overshadow time-step errors and argue that statistical noise errors can be conflated with time-step effects. Using a higher-order integration scheme may not achieve any benefit in accuracy for examples of practical interest. We also investigate the collisional relaxation of an initial electron-ion relative drift and the collisional relaxation to a resistive steady state in which a quasi-steady current is driven by a constant applied electric field, as functions of the time step used to resolve the collision processes using binary and grid-based, test-particle Langevin-equation models. We compare results from two grid-based Langevin-equation collision algorithms to results from a binary collision algorithm for modeling electron-ion collisions. Some guidance is provided regarding how large a time step can be used compared to the inverse of the characteristic collision frequency for specific relaxation processes.
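The noise-versus-time-step point can be made analytically for the simplest Langevin-equation model, an Ornstein-Uhlenbeck velocity relaxation discretized with Euler-Maruyama (a stand-in illustration, not the paper's collision operators). The discrete stationary variance has a bias of order gamma*dt/2, which is easily dwarfed by the Monte Carlo standard error of a variance estimate for realistic sample counts.

```python
import math

def em_stationary_variance(gamma, kT, dt):
    """Stationary variance of the Euler-Maruyama update
    v_{n+1} = (1 - gamma*dt) v_n + sqrt(2*gamma*kT*dt) * xi,
    obtained from S = (1 - gamma*dt)^2 S + 2*gamma*kT*dt,
    i.e. S = kT / (1 - gamma*dt/2): biased high by ~kT*gamma*dt/2."""
    return 2.0 * gamma * kT * dt / (1.0 - (1.0 - gamma * dt) ** 2)

gamma, kT = 1.0, 1.0
n_samples = 10_000
stat_error = kT * math.sqrt(2.0 / n_samples)   # std. error of a variance estimate

for dt in (0.1, 0.01, 0.001):
    bias = em_stationary_variance(gamma, kT, dt) - kT
    print(f"dt={dt}: time-step bias={bias:.2e}, statistical error={stat_error:.2e}")
```

For dt well below the collision time, the statistical error dominates the time-step bias, which is why a higher-order integrator may buy nothing in practice.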
The large discretization step method for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Haras, Zigo; Taasan, Shlomo
1995-01-01
A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.
Accelerating spectral-element simulations of seismic wave propagation using local time stepping
NASA Astrophysics Data System (ADS)
Peter, D. B.; Rietmann, M.; Galvez, P.; Nissen-Meyer, T.; Grote, M.; Schenk, O.
2013-12-01
Seismic tomography using full-waveform inversion requires accurate simulations of seismic wave propagation in complex 3D media. However, finite element meshing in complex media often leads to areas of local refinement, generating small elements that accurately capture e.g. strong topography and/or low-velocity sediment basins. For explicit time schemes, this dramatically reduces the global time-step for wave-propagation problems due to numerical stability conditions, ultimately making seismic inversions prohibitively expensive. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. Numerical simulations are thus liberated of global time-step constraints potentially speeding up simulation runtimes significantly. We present here a new, efficient multi-level LTS-Newmark scheme for general use with spectral-element methods (SEM) with applications in seismic wave propagation. We fit the implementation of our scheme onto the package SPECFEM3D_Cartesian, which is a widely used community code, simulating seismic and acoustic wave propagation in earth-science applications. Our new LTS scheme extends the 2nd-order accurate Newmark time-stepping scheme, and leads to an efficient implementation, producing real-world speedup of multi-resolution seismic applications. Furthermore, we generalize the method to utilize many refinement levels with a design specifically for continuous finite elements. We demonstrate performance speedup using a state-of-the-art dynamic earthquake rupture model for the Tohoku-Oki event, which is currently limited by small elements along the rupture fault. Utilizing our new algorithmic LTS implementation together with advances in exploiting graphic processing units (GPUs), numerical seismic wave propagation simulations in complex media will dramatically reduce computation times, empowering high
Simulating diffusion processes in discontinuous media: A numerical scheme with constant time steps
Lejay, Antoine; Pichot, Geraldine
2012-08-30
In this article, we propose new Monte Carlo techniques for moving a diffusive particle in discontinuous media. In this framework, we characterize the stochastic process that governs the positions of the particle. The key tool is the reduction of the process to a Skew Brownian motion (SBM). In a zone where the coefficients are locally constant on each side of the discontinuity, the new position of the particle after a constant time step is sampled from the exact distribution of the SBM process at the considered time. To do so, we propose two different but equivalent algorithms: a two-step simulation with a stop at the discontinuity and a one-step direct simulation of the SBM dynamics. Some benchmark tests illustrate their effectiveness.
NASA Astrophysics Data System (ADS)
Ramadan, Omar
2014-12-01
Systematic split-step finite difference time domain (SS-FDTD) formulations, based on the general Lie-Trotter-Suzuki product formula, are presented for solving the time-dependent Maxwell equations in double-dispersive electromagnetic materials. The proposed formulations provide a unified tool for constructing a family of unconditionally stable algorithms such as the first order split-step FDTD (SS1-FDTD), the second order split-step FDTD (SS2-FDTD), and the second order alternating direction implicit FDTD (ADI-FDTD) schemes. The theoretical stability of the formulations is included, and it has been demonstrated that the formulations are unconditionally stable by construction. Furthermore, the dispersion relation of the formulations is derived, and it has been found that the proposed formulations are best suited for those applications where a high space resolution is needed. Two-dimensional (2-D) and 3-D numerical examples are included, and it has been observed that the SS1-FDTD scheme is computationally more efficient than the ADI-FDTD counterpart, while maintaining approximately the same numerical accuracy. Moreover, the SS2-FDTD scheme allows using a larger time step than the SS1-FDTD or ADI-FDTD and therefore requires less CPU time, while giving approximately the same numerical accuracy.
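The Lie-Trotter-Suzuki product formula underlying the SS1/SS2 distinction can be demonstrated away from FDTD on a scalar ODE whose sub-flows are exactly solvable (an assumed toy problem, not Maxwell's equations): the first-order Lie-Trotter product loses one order, while the symmetric (Strang) product is second order, mirroring SS1-FDTD versus SS2-FDTD.

```python
import math

def flow_linear(u, t):   # exact flow of u' = -u
    return u * math.exp(-t)

def flow_cubic(u, t):    # exact flow of u' = -u**3
    return u / math.sqrt(1.0 + 2.0 * u * u * t)

def lie(u, dt):          # first-order Lie-Trotter product
    return flow_cubic(flow_linear(u, dt), dt)

def strang(u, dt):       # second-order symmetric (Strang) product
    return flow_linear(flow_cubic(flow_linear(u, dt / 2), dt), dt / 2)

def exact(u0, t):        # u' = -u - u**3 via the Bernoulli substitution w = u^-2
    return 1.0 / math.sqrt((1.0 / u0**2 + 1.0) * math.exp(2.0 * t) - 1.0)

def error(step, dt, u0=1.0, T=1.0):
    u = u0
    for _ in range(round(T / dt)):
        u = step(u, dt)
    return abs(u - exact(u0, T))

# Halving dt roughly halves the Lie error (order 1) but roughly quarters
# the Strang error (order 2).
for step in (lie, strang):
    print(step.__name__, error(step, 0.05) / error(step, 0.025))
```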
Suggestions for CAP-TSD mesh and time-step input parameters
NASA Technical Reports Server (NTRS)
Bland, Samuel R.
1991-01-01
Suggestions for some of the input parameters used in the CAP-TSD (Computational Aeroelasticity Program-Transonic Small Disturbance) computer code are presented. These parameters include those associated with the mesh design and time step. The guidelines are based principally on experience with a one-dimensional model problem used to study wave propagation in the vertical direction.
Adaptive time stepping algorithm for Lagrangian transport models: Theory and idealised test cases
NASA Astrophysics Data System (ADS)
Shah, Syed Hyder Ali Muttaqi; Heemink, Arnold Willem; Gräwe, Ulf; Deleersnijder, Eric
2013-08-01
Random walk simulations have excellent potential in marine and oceanic modelling. This is essentially due to their relative simplicity and their ability to represent advective transport without being plagued by the deficiencies of Eulerian methods. The physical and mathematical foundations of random walk modelling of turbulent diffusion have become solid over the years. Random walk models rest on the theory of stochastic differential equations. Unfortunately, the latter and the related numerical aspects have not attracted much attention in the oceanic modelling community. The main goal of this paper is to help bridge the gap by developing an efficient adaptive time stepping algorithm for random walk models. Its performance is examined on two idealised test cases of turbulent dispersion: (i) pycnocline crossing and (ii) non-flat isopycnal diffusion, which are inspired by shallow-sea dynamics and large-scale ocean transport processes, respectively. The numerical results of the adaptive time stepping algorithm are compared with the fixed-time-increment Milstein scheme, showing that the adaptive time stepping algorithm for Lagrangian random walk models is more efficient than its fixed step-size counterpart without any loss in accuracy.
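The ingredients of such an algorithm can be sketched as follows, with an illustrative pycnocline-like diffusivity profile and a simple heuristic step-size rule (both my own choices, not the paper's):

```python
import math
import random

def diffusivity(z):
    # pycnocline-like profile: mixing is suppressed in a thin layer around z = 0
    return 1e-4 + 1e-2 * (1.0 - math.exp(-(z / 5.0) ** 2))

def d_diffusivity(z):
    # analytic derivative dK/dz of the profile above
    return 1e-2 * math.exp(-(z / 5.0) ** 2) * (2.0 * z / 25.0)

def rw_step(z, dt, rng=random):
    # Euler random-walk step with the standard drift correction dK/dz * dt,
    # which prevents artificial particle accumulation where K is small
    drift = d_diffusivity(z) * dt
    noise = rng.gauss(0.0, 1.0) * math.sqrt(2.0 * diffusivity(z) * dt)
    return z + drift + noise

def adaptive_dt(z, dt_max=50.0, frac=0.1, length_scale=1.0):
    # shrink the step until the rms displacement stays below a fraction of the
    # length scale over which K varies (a simple heuristic, not the paper's rule)
    dt = dt_max
    while math.sqrt(2.0 * diffusivity(z) * dt) > frac * length_scale:
        dt *= 0.5
    return dt
```

Inside the low-diffusivity layer the admissible step is large; in the well-mixed interior it shrinks, which is the behaviour an adaptive scheme exploits.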
Dependence of Hurricane intensity and structures on vertical resolution and time-step size
NASA Astrophysics Data System (ADS)
Zhang, Da-Lin; Wang, Xiaoxue
2003-09-01
In view of the growing interest in the explicit modeling of clouds and precipitation, the effects of varying vertical resolution and time-step sizes on the 72-h explicit simulation of Hurricane Andrew (1992) are studied using the Pennsylvania State University/National Center for Atmospheric Research (PSU/NCAR) mesoscale model (i.e., MM5) with the finest grid size of 6 km. It is shown that changing vertical resolution and time-step size has significant effects on hurricane intensity and inner-core cloud/precipitation, but little impact on the hurricane track. In general, increasing vertical resolution tends to produce a deeper storm with lower central pressure, stronger three-dimensional winds, and more precipitation. Similar effects, but to a lesser extent, occur when the time-step size is reduced. It is found that increasing the low-level vertical resolution is more efficient in intensifying a hurricane, whereas changing the upper-level vertical resolution has little impact on the hurricane intensity. Moreover, the use of a thicker surface layer tends to produce higher maximum surface winds. It is concluded that the use of higher vertical resolution, a thin surface layer, and smaller time-step sizes, along with higher horizontal resolution, is desirable to model more realistically the intensity, inner-core structures, and evolution of tropical storms as well as other convectively driven weather systems.
Cavanagh, James F
2015-04-15
Recent work has suggested that reward prediction errors elicit a positive voltage deflection in the scalp-recorded electroencephalogram (EEG), an event sometimes termed a reward positivity. However, a strong test of this proposed relationship remains to be defined. Other important questions remain unaddressed, such as the role of the reward positivity in predicting future behavioral adjustments that maximize reward. To answer these questions, a three-armed bandit task was used to investigate the role of positive prediction errors during trial-by-trial exploration and task-set based exploitation. The feedback-locked reward positivity was characterized by delta band activities, and these related EEG features scaled with the degree of a computationally derived positive prediction error. However, these phenomena were also dissociated: the computational model predicted exploitative action selection and related response time speeding whereas the feedback-locked EEG features did not. Compellingly, delta band dynamics time-locked to the subsequent bandit (the P3) successfully predicted these behaviors. These bandit-locked findings included an enhanced parietal to motor cortex delta phase lag that correlated with the degree of response time speeding, suggesting a mechanistic role for delta band activities in motivating action selection. This dissociation in feedback vs. bandit locked EEG signals is interpreted as a differentiation in hierarchically distinct types of prediction error, yielding novel predictions about these dissociable delta band phenomena during reinforcement learning and decision making. PMID:25676913
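For readers unfamiliar with how a positive prediction error is "computationally derived", a minimal delta-rule learner on a three-armed bandit looks like this (an illustrative Rescorla-Wagner-style model, not the one fitted in the paper):

```python
import random

def run_bandit(probs, trials, alpha=0.2, epsilon=0.1, rng=random):
    """Delta-rule value learning on a multi-armed bandit.

    delta = r - Q[a] is the reward prediction error; positive delta marks
    better-than-expected feedback (illustrative model and parameters).
    """
    q = [0.0] * len(probs)
    deltas = []
    for _ in range(trials):
        if rng.random() < epsilon:                      # explore
            a = rng.randrange(len(q))
        else:                                           # exploit the current best arm
            a = max(range(len(q)), key=lambda i: q[i])
        r = 1.0 if rng.random() < probs[a] else 0.0     # Bernoulli reward
        delta = r - q[a]                                # reward prediction error
        q[a] += alpha * delta                           # value update
        deltas.append(delta)
    return q, deltas
```

After enough trials the learned value of the richest arm dominates, and the trial-by-trial deltas contain both positive and negative prediction errors.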
Causal-Path Local Time-Stepping in the discontinuous Galerkin method for Maxwell's equations
NASA Astrophysics Data System (ADS)
Angulo, L. D.; Alvarez, J.; Teixeira, F. L.; Pantoja, M. F.; Garcia, S. G.
2014-01-01
We introduce a novel local time-stepping technique for marching-in-time algorithms. The technique is denoted Causal-Path Local Time-Stepping (CPLTS) and it is applied to two time integration techniques: fourth-order low-storage explicit Runge-Kutta (LSERK4) and second-order Leap-Frog (LF2). The CPLTS method is applied to evolve Maxwell's curl equations using a Discontinuous Galerkin (DG) scheme for the spatial discretization. Numerical results for LF2 and LSERK4 are compared with analytical solutions and Montseny's LF2 technique. The results show that the CPLTS technique improves the dispersive and dissipative properties of the LF2-LTS scheme.
Development of a variable time-step transient NEM code: SPANDEX
Aviles, B.N.
1993-01-01
This paper describes a three-dimensional, variable time-step transient multigroup diffusion theory code, SPANDEX (space-time nodal expansion method). SPANDEX is based on the static nodal expansion method (NEM) code, NODEX (Ref. 1), and employs a nonlinear algorithm and a fifth-order expansion of the transverse-integrated fluxes. The time integration scheme in SPANDEX is a fourth-order implicit generalized Runge-Kutta method (GRK) with on-line error control and variable time-step selection. This Runge-Kutta method has been applied previously to point kinetics and one-dimensional finite difference transient analysis. This paper describes the application of the Runge-Kutta method to three-dimensional reactor transient analysis in a multigroup NEM code.
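The flavor of on-line error control with variable time-step selection can be sketched with an explicit RK4 integrator and step doubling (the paper's integrator is an implicit generalized Runge-Kutta method; this shows only the error-control pattern):

```python
import math

def rk4_step(f, t, y, h):
    # one classical fourth-order Runge-Kutta step
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t_end, h0=0.1, tol=1e-8):
    """Variable time-step driver with step-doubling error control: one full
    step is compared with two half steps; the step is rejected and halved
    when the estimate exceeds tol, and grown when comfortably below it."""
    t, y, h = t0, y0, h0
    while t_end - t > 1e-12:
        h = min(h, t_end - t)
        full = rk4_step(f, t, y, h)
        half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = abs(half - full)
        if err <= tol:
            t, y = t + h, half       # accept the more accurate half-step result
            if err < tol / 32:
                h *= 2               # error well under budget: enlarge the step
        else:
            h /= 2                   # reject: retry with a smaller step
    return y
```

On the decay problem y' = -y the driver takes large steps where the solution is smooth while keeping the result close to the exact exp(-t).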
Promoting rest using a quiet time innovation in an adult neuroscience step down unit.
Bergner, Tara
2014-01-01
Sleep and rest are fundamental for the restoration of energy needed to recuperate from illness, trauma and surgery. At present, hospitals are too noisy to promote rest for patients. A literature search produced research describing how quiet time interventions addressing noise levels have met with positive patient and staff satisfaction, as well as creating a more peaceful and healing environment. In this paper, a description is provided of the importance of quiet time and how a small but feasible innovation was carried out in an adult neuroscience step down unit in a large tertiary health care facility in Canada. Anecdotal evidence from patients, families, and staff suggests that quiet time may have positive effects for patients, their families, and the adult neuroscience step down unit staff. Future research examining the effect of quiet time on patient, family and staff satisfaction and on patient healing is necessary. PMID:25638912
NASA Astrophysics Data System (ADS)
Zhang, Peng; Zhang, Na; Deng, Yuefan; Bluestein, Danny
2015-03-01
We developed a multiple time-stepping (MTS) algorithm for multiscale modeling of the dynamics of platelets flowing in viscous blood plasma. This MTS algorithm considerably improves computational efficiency without significant loss of accuracy. This study of the dynamic properties of flowing platelets employs a combination of the dissipative particle dynamics (DPD) and coarse-grained molecular dynamics (CGMD) methods to describe the dynamic microstructures of deformable platelets in response to extracellular flow-induced stresses. The disparate spatial scales between the two methods are handled by a hybrid force field interface. However, the disparity in temporal scales between DPD and CGMD, which require time stepping at microseconds and nanoseconds respectively, represents a computational challenge that may become prohibitive. Classical MTS algorithms manage to improve computing efficiency by multi-stepping within DPD or CGMD for up to one order of magnitude of scale differential. In order to handle 3-4 orders of magnitude disparity in the temporal scales between DPD and CGMD, we introduce a new MTS scheme hybridizing DPD and CGMD by utilizing four different time-step sizes. We advance the fluid system at the largest time step, the fluid-platelet interface at an intermediate time-step size, and the nonbonded and bonded potentials of the platelet structural system at the two smallest time-step sizes. Additionally, we introduce parameters to study the trade-off between accuracy and computational complexity. The numerical experiments demonstrated a 3000x reduction in computing time over standard MTS methods for solving the multiscale model. This MTS algorithm establishes a computationally feasible approach for solving a particle-based system at multiple scales and performing efficient multiscale simulations.
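The four-level nesting described above amounts to the following schedule (the physics calls are stand-ins; the substep counts n1, n2, n3 are illustrative, chosen only to mimic the orders-of-magnitude gap):

```python
def mts_schedule(n_outer, n1=4, n2=8, n3=10):
    """Nested multiple-time-stepping schedule with four levels:

    fluid (DPD)              : largest step dt
    fluid-platelet interface : dt / n1
    platelet nonbonded       : dt / (n1 * n2)
    platelet bonded          : dt / (n1 * n2 * n3)

    Returns how often each force class would be evaluated.
    """
    calls = {"fluid": 0, "interface": 0, "nonbonded": 0, "bonded": 0}
    for _ in range(n_outer):
        calls["fluid"] += 1                  # advance fluid at the largest step
        for _ in range(n1):
            calls["interface"] += 1          # interface at the intermediate step
            for _ in range(n2):
                calls["nonbonded"] += 1      # nonbonded platelet forces
                for _ in range(n3):
                    calls["bonded"] += 1     # cheapest, fastest-varying forces
    return calls
```

The savings come from evaluating the expensive outer levels far less often than the innermost bonded forces.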
An implicit time-stepping scheme for rigid body dynamics with Coulomb friction
STEWART,DAVID; TRINKLE,JEFFREY C.
2000-02-15
In this paper a new time-stepping method for simulating systems of rigid bodies is given. Unlike methods which take an instantaneous point of view, the method is based on impulse-momentum equations, and so does not need to explicitly resolve impulsive forces. On the other hand, the method is distinct from previous impulsive methods in that it does not require explicit collision checking and it can handle simultaneous impacts. Numerical results are given for one planar and one three-dimensional example, which demonstrate the practicality of the method, and its convergence as the step size becomes small.
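In one dimension the impulse-momentum update reduces to a scalar complementarity condition, which makes the no-collision-detection claim concrete (unit mass, fully inelastic contact with a floor at q = 0; a toy setup of mine, not the paper's general multi-body formulation):

```python
def step_particle(q, v, dt, g=-9.8):
    """One impulse-momentum time step for a 1D particle above a floor at q = 0.

    No event detection is needed: the complementarity condition
        0 <= impulse  perp  q + dt * v_new >= 0
    reduces in 1D to a single max, so the contact impulse appears exactly
    when the unconstrained step would penetrate the floor.
    """
    v_free = v + g * dt                 # unconstrained momentum update
    v_new = max(v_free, -q / dt)        # impulse = v_new - v_free >= 0
    return q + dt * v_new, v_new
```

A ball dropped from rest falls freely, lands without the integrator ever locating the impact time, and then rests on the floor.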
Real-time video fusion based on multistage hashing and hybrid transformation with depth adjustment
NASA Astrophysics Data System (ADS)
Zhao, Hongjian; Xia, Shixiong; Yao, Rui; Niu, Qiang; Zhou, Yong
2015-11-01
Concatenating multicamera videos with differing centers of projection into a single panoramic video is a critical technology for many important applications. We propose a real-time video fusion approach to create wide field-of-view video. To provide a fast and accurate video registration method, we propose multistage hashing to find matched feature-point pairs from coarse to fine. In the first stage of multistage hashing, a short compact binary code is learned from all feature points, and we then calculate the Hamming distance between each pair of points to find the candidate matched points. In the second stage, a long binary code is obtained by remapping the candidate points for fine matching. To tackle the distortion and scene depth variation of multiview frames in videos, we build a hybrid transformation with depth adjustment. The depth compensation between two adjacent frames extends to multiple frames in an iterative model for successive video frames. We conduct several experiments with different dynamic scenes and camera numbers to verify the performance of the proposed real-time video fusion approach.
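The coarse-to-fine matching in multistage hashing can be sketched with integer bit codes (the thresholds and code lengths below are illustrative):

```python
def hamming(a, b):
    # Hamming distance between two binary codes stored as ints
    return bin(a ^ b).count("1")

def two_stage_match(q_short, q_long, db, t_short=1, t_long=2):
    """Coarse-to-fine matching in the spirit of multistage hashing.

    The short code cheaply prunes the database, the long code confirms the
    survivors. db is a list of (short_code, long_code) pairs; returns the
    indices of confirmed matches.
    """
    candidates = [i for i, (s, _) in enumerate(db) if hamming(q_short, s) <= t_short]
    return [i for i in candidates if hamming(q_long, db[i][1]) <= t_long]
```

Most database entries are rejected after a few cheap short-code comparisons, so the expensive long-code test runs only on a handful of candidates.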
Rapid Adjustment of Circadian Clocks to Simulated Travel to Time Zones across the Globe.
Harrison, Elizabeth M; Gorman, Michael R
2015-12-01
Daily rhythms in mammalian physiology and behavior are generated by a central pacemaker located in the hypothalamic suprachiasmatic nuclei (SCN), the timing of which is set by light from the environment. When the ambient light-dark cycle is shifted, as occurs with travel across time zones, the SCN and its output rhythms must reset or re-entrain their phases to match the new schedule, a sluggish process requiring about 1 day per hour of shift. Using a global assay of circadian resetting to 6 equidistant time-zone meridians, we document this characteristically slow and distance-dependent resetting of Syrian hamsters under typical laboratory lighting conditions, which mimic summer day lengths. The circadian pacemaker, however, is additionally entrainable with respect to its waveform (i.e., the shape of the 24-h oscillation), allowing for tracking of seasonally varying day lengths. We here demonstrate an unprecedented, light exposure-based acceleration in phase resetting following 2 manipulations of circadian waveform. Adaptation of circadian waveforms to long winter nights (8 h light, 16 h dark) doubled the shift response in the first 3 days after the shift. Moreover, a bifurcated waveform induced by exposure to a novel 24-h light-dark-light-dark cycle permitted nearly instant resetting to phase shifts from 4 to 12 h in magnitude, representing a 71% reduction in the mismatch between the activity rhythm and the new photocycle. Thus, a marked enhancement of phase shifting can be induced via nonpharmacological, noninvasive manipulation of the circadian pacemaker waveform in a model species for mammalian circadian rhythmicity. Given the evidence of conserved flexibility in the human pacemaker waveform, these findings raise the promise of flexible resetting applicable to circadian disruption in shift workers, frequent time-zone travelers, and any individual forced to adjust to challenging schedules. PMID:26275871
Inertial stochastic dynamics. I. Long-time-step methods for Langevin dynamics
NASA Astrophysics Data System (ADS)
Beard, Daniel A.; Schlick, Tamar
2000-05-01
Two algorithms are presented for integrating the Langevin dynamics equation with long numerical time steps while treating the mass terms as finite. The development of these methods is motivated by the need for accurate methods for simulating slow processes in polymer systems such as two-site intermolecular distances in supercoiled DNA, which evolve over the time scale of milliseconds. Our new approaches refine the common Brownian dynamics (BD) scheme, which approximates the Langevin equation in the highly damped diffusive limit. Our LTID ("long-time-step inertial dynamics") method is based on an eigenmode decomposition of the friction tensor. The less costly integrator IBD ("inertial Brownian dynamics") modifies the usual BD algorithm by the addition of a mass-dependent correction term. To validate the methods, we evaluate the accuracy of LTID and IBD and compare their behavior to that of BD for the simple example of a harmonic oscillator. We find that the LTID method produces the expected correlation structure for Langevin dynamics regardless of the level of damping. In fact, LTID is the only consistent method among the three, with error vanishing as the time step approaches zero. In contrast, BD is accurate only for highly overdamped systems. For cases of moderate overdamping, and for the appropriate choice of time step, IBD is significantly more accurate than BD. IBD is also less computationally expensive than LTID (though both are the same order of complexity as BD), and thus can be applied to simulate systems of size and time scale ranges previously accessible to only the usual BD approach. Such simulations are discussed in our companion paper, for long DNA molecules modeled as wormlike chains.
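For reference, the plain BD update that LTID and IBD refine looks as follows for the harmonic-oscillator test case (the inertial correction terms themselves are not reproduced here):

```python
import math
import random

def bd_harmonic(n_steps, dt, k=1.0, gamma=1.0, kT=1.0, rng=random):
    """Plain Brownian dynamics (the overdamped baseline) for a harmonic
    oscillator with spring constant k, friction gamma and temperature kT.
    Returns the sampled trajectory."""
    x, xs = 0.0, []
    sigma = math.sqrt(2.0 * kT * dt / gamma)   # thermal noise amplitude
    for _ in range(n_steps):
        x += -(k / gamma) * x * dt + sigma * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs
```

For small k*dt/gamma the stationary variance approaches the equipartition value kT/k; the Euler discretization inflates it by roughly 1/(1 - k*dt/(2*gamma)), one illustration of why long time steps call for the corrected integrators studied in the paper.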
High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs
NASA Technical Reports Server (NTRS)
Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.
2014-01-01
This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) time-stepping Runge-Kutta (RK) schemes which increases numerical stability with respect to the time step size, resulting in decreased computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain a significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to both one-dimensional (1D) and two-dimensional (2D) problems, the nonlinear viscous Burgers' equation and the 2D advection equation, respectively. The method uses two different summation by parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. An increase in numerical stability 65 times greater than the fully explicit scheme is demonstrated to be achievable with the OP IMEX method applied to the 1D Burgers' equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. Also, the domain partitioning method in this work shows potential for breaking the computational domain into manageable sizes such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solving methods rather than the standard iterative methods currently used.
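The implicit-explicit split can be illustrated with its simplest member, first-order IMEX Euler (the paper uses 6-stage additive RK schemes; this only shows why implicit treatment of the stiff term relaxes the step-size limit):

```python
def imex_euler(lam, g, y0, dt, n):
    """First-order IMEX Euler sketch: the stiff linear term lam*y is advanced
    implicitly and the nonstiff term g(y) explicitly,

        (y_new - y) / dt = lam * y_new + g(y)

    which solves to y_new = (y + dt * g(y)) / (1 - dt * lam)."""
    y = y0
    for _ in range(n):
        y = (y + dt * g(y)) / (1.0 - dt * lam)
    return y
```

With lam = -1000 and dt = 0.1, explicit Euler would amplify the solution by |1 - 100| = 99 per step; the IMEX update instead damps it and converges to the correct steady state.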
NASA Astrophysics Data System (ADS)
Wang, Gaili; Liu, Liping; Ding, Yuanyuan
2012-05-01
The errors in radar quantitative precipitation estimation consist not only of systematic biases caused by random noise but also of spatially nonuniform biases in radar rainfall at individual rain-gauge stations. In this study, a real-time adjustment to the radar reflectivity-rainfall rate (Z-R) relationship scheme and a gauge-corrected, radar-based estimation scheme with inverse-distance-weighting interpolation were developed. Based on the characteristics of the two schemes, a two-step correction technique for radar quantitative precipitation estimation is proposed. To minimize the errors between radar quantitative precipitation estimates and rain gauge observations, the real-time adjustment to the Z-R relationship scheme is used to remove systematic bias in the time domain. The gauge-corrected, radar-based estimation scheme is then used to eliminate nonuniform errors in space. Based on radar data and rain gauge observations near the Huaihe River, the two-step correction technique was evaluated on two heavy-precipitation events. The results show that the proposed scheme not only reduced the underestimation of rainfall but also reduced the root-mean-square error and the mean relative error of radar-rain gauge pairs.
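A sketch of the two-step idea, with a single multiplicative bias factor standing in for the Z-R adjustment and inverse distance weighting of the gauge residuals (illustrative, not the paper's exact scheme):

```python
def two_step_correction(radar, gauges, grid_pts, gauge_pts, power=2.0):
    """Two-step correction sketch.

    Step 1: one multiplicative bias factor removes the domain-mean
    (systematic) error between radar estimates and gauge observations.
    Step 2: inverse-distance weighting spreads the remaining gauge residuals
    to grid points, removing spatially nonuniform error.

    radar / gauges: values at the gauge locations; grid_pts / gauge_pts:
    (x, y) coordinates. Returns the bias factor and the residual field.
    """
    bias = sum(gauges) / sum(radar)                    # step 1: time-domain bias
    adjusted = [r * bias for r in radar]
    residuals = [g - a for g, a in zip(gauges, adjusted)]
    field = []
    for (x, y) in grid_pts:                            # step 2: IDW of residuals
        num = den = 0.0
        for (gx, gy), res in zip(gauge_pts, residuals):
            d2 = (x - gx) ** 2 + (y - gy) ** 2
            w = 1.0 / (d2 ** (power / 2.0) + 1e-9)     # small guard avoids 1/0
            num += w * res
            den += w
        field.append(num / den)
    return bias, field
```

At a grid point that coincides with a gauge, the interpolated residual reproduces that gauge's own residual, as expected from IDW.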
The Semi-implicit Time-stepping Algorithm in MH4D
NASA Astrophysics Data System (ADS)
Vadlamani, Srinath; Shumlak, Uri; Marklin, George; Meier, Eric; Lionello, Roberto
2006-10-01
The Plasma Science and Innovation Center (PSI Center) at the University of Washington is developing MHD codes to accurately model Emerging Concept (EC) devices. Examination of the semi-implicit time stepping algorithm implemented in the tetrahedral mesh MHD simulation code, MH4D, is presented. The time steps for standard explicit methods, which are constrained by the Courant-Friedrichs-Lewy (CFL) condition, are typically small for simulations of EC experiments due to the large Alfven speed. The CFL constraint is more severe with a tetrahedral mesh because of the irregular cell geometry. The semi-implicit algorithm [1] removes the fast-wave constraint, thus allowing for larger time steps. We will present the implementation method of this algorithm, and numerical results for test problems in simple geometry. Also, we will present its effectiveness in simulations of complex geometry, similar to the ZaP [2] experiment at the University of Washington. References: [1] Douglas S. Harned and D. D. Schnack, Semi-implicit method for long time scale magnetohydrodynamic computations in three dimensions, JCP, Volume 65, Issue 1, July 1986, Pages 57-70. [2] U. Shumlak, B. A. Nelson, R. P. Golingo, S. L. Jackson, E. A. Crawford, and D. J. Den Hartog, Sheared flow stabilization experiments in the ZaP flow Z-pinch, Phys. Plasmas 10, 1683 (2003).
NASA Astrophysics Data System (ADS)
Martinec, Zdenek; Sasgen, Ingo; Velimsky, Jakub
2014-05-01
In this study, two new methods for computing the sensitivity of the glacial isostatic adjustment (GIA) forward solution with respect to the Earth's mantle viscosity are presented: the forward sensitivity method (FSM) and the adjoint sensitivity method (ASM). These advanced formal methods are based on the time-domain, spectral-finite element method for modelling the GIA response of laterally heterogeneous earth models developed by Martinec (2000). There are many similarities between the forward method and the FSM and ASM for a general physical system. However, in the case of GIA, there are also important differences between the forward and sensitivity methods. The analysis carried out in this study results in the following findings. First, the forward method of GIA is unconditionally solvable, regardless of whether or not a combined ice and ocean-water load contains the first-degree spherical harmonics. This is also the case for the FSM, however, the ASM must in addition be supplemented by nine conditions on the misfit between the given GIA-related data and the forward model predictions to guarantee the existence of a solution. This constrains the definition of data least-squares misfit. Second, the forward method of GIA implements an ocean load as a free boundary-value function over an ocean area with a free geometry. That is, an ocean load and the shape of ocean, the so-called ocean function, are being sought, in addition to deformation and gravity-increment fields, by solving the forward method. The FSM and ASM also apply the adjoint ocean load as a free boundary-value function, but instead over an ocean area with the fixed geometry given by the ocean function determined by the forward method. In other words, a boundary-value problem for the forward method of GIA is free with respect to determining (i) the boundary-value data over an ocean area and (ii) the ocean function itself, while the boundary-value problems for the FSM and ASM are free only with respect to
A one step real-time RT-PCR assay for the quantitation of Wheat yellow mosaic virus (WYMV)
2013-01-01
Background Wheat yellow mosaic virus (WYMV) is an important pathogen in China and other countries. It is a member of the genus Bymovirus and is transmitted primarily by Polymyxa graminis. The incidence of wheat infections in endemic areas has risen in recent years. Prompt and dependable identification of WYMV is a critical component of the response to suspected cases. Methods In this study, a one-step real-time RT-PCR assay, followed by standard curve analysis for the detection and identification of WYMV, was developed. Two reference genes, 18S rRNA and β-actin, were selected to verify the accuracy of the real-time RT-PCR assay. Results We developed a one-step Taqman-based real-time quantitative RT-PCR (RT-qPCR) assay targeting the conserved region of the 879 bp full-length WYMV coat protein gene. The accuracy of normalized data was analyzed along with the appropriate internal control genes, β-actin and 18S rRNA, when testing WYMV-infected wheat leaf tissues. The end-point sensitivity of the RT-qPCR assay reached the minimum limit of quantitation, with about 30 measurable copies at a 10^6-fold dilution of total RNA. This is close to 10^4-fold more sensitive than indirect enzyme-linked immunosorbent assay. More positive samples were detected by the RT-qPCR assay than by gel-based RT-PCR when testing suspected samples collected from 8 regions of China. Based on the presented results, RT-qPCR will provide a valuable method for the quantitative detection of WYMV. Conclusions The Taqman-based RT-qPCR assay is a faster, simpler, more sensitive and less expensive procedure for detection and quantification of WYMV than other currently used methods. PMID:23725024
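The standard-curve analysis used for quantitation can be sketched as an ordinary least-squares fit of Ct against log10 copy number (generic qPCR arithmetic, not code from the study):

```python
import math

def fit_standard_curve(copies, cts):
    """Least-squares fit of Ct = slope * log10(copies) + intercept, the usual
    standard-curve analysis for absolute quantitation by real-time PCR."""
    xs = [math.log10(c) for c in copies]
    n = len(xs)
    mx, my = sum(xs) / n, sum(cts) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, cts)) / \
        sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def copies_from_ct(ct, slope, intercept):
    # invert the standard curve to estimate copy number in an unknown sample
    return 10.0 ** ((ct - intercept) / slope)

def efficiency(slope):
    # amplification efficiency implied by the slope; slope = -3.32 gives ~100%
    return 10.0 ** (-1.0 / slope) - 1.0
```

A serial 10-fold dilution series with a slope near -3.32 Ct per decade corresponds to near-perfect doubling each cycle.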
Error correction in short time steps during the application of quantum gates
NASA Astrophysics Data System (ADS)
de Castro, L. A.; Napolitano, R. d. J.
2016-04-01
We propose a modification of the standard quantum error-correction method to enable the correction of errors that occur due to the interaction with a noisy environment during quantum gates, without modifying the codification used for memory qubits. Using a perturbation treatment of the noise that allows us to separate it from the ideal evolution of the quantum gate, we demonstrate that in certain cases it is necessary to divide the logical operation into short time steps interleaved with correction procedures. A prescription of how these gates can be constructed is provided, as well as a proof that, even in cases when the division of the quantum gate into short time steps is not necessary, this method may be advantageous for reducing the total duration of the computation.
NASA Astrophysics Data System (ADS)
Kuraz, Michal
2016-06-01
Modelling the transport processes in a vadose zone, e.g. modelling contaminant transport or the effect of the soil water regime on changes in soil structure and composition, plays an important role in predicting the reactions of soil biotopes to anthropogenic activity. Water flow is governed by the quasilinear Richards equation. The paper concerns the implementation of a multi-time-step approach for solving the nonlinear Richards equation. When modelling porous media flow with the Richards equation, a stable finite element approximation requires accurate temporal and spatial integration, because of possible convection dominance and the convergence behaviour of the nonlinear solver. The method presented here combines an adaptive domain decomposition algorithm with a multi-time-step treatment of actively changing subdomains.
ERIC Educational Resources Information Center
Pitetti, Kenneth H.; Beets, Michael W.; Flaming, Judy
2009-01-01
Pedometer accuracy for steps and activity time during dynamic movement for youth with intellectual disabilities (ID) was examined. Twenty-four youth with ID (13 girls, 13.1 ± 3.2 yrs; 11 boys, 14.7 ± 2.7 yrs) were videotaped during adapted physical education class while wearing a Walk4Life 2505 pedometer in five…
NASA Astrophysics Data System (ADS)
Cavalcanti, José Rafael; Dumbser, Michael; Motta-Marques, David da; Fragoso Junior, Carlos Ruberto
2015-12-01
In this article we propose a new conservative high resolution TVD (total variation diminishing) finite volume scheme with time-accurate local time stepping (LTS) on unstructured grids for the solution of scalar transport problems, which are typical in the context of water quality simulations. To keep the presentation of the new method as simple as possible, the algorithm is only derived in two space dimensions and for purely convective transport problems, hence neglecting diffusion and reaction terms. The new numerical method for the solution of the scalar transport is directly coupled to the hydrodynamic model of Casulli and Walters (2000) that provides the dynamics of the free surface and the velocity vector field based on a semi-implicit discretization of the shallow water equations. Wetting and drying is handled rigorously by the nonlinear algorithm proposed by Casulli (2009). The new time-accurate LTS algorithm allows a different time step size for each element of the unstructured grid, based on an element-local Courant-Friedrichs-Lewy (CFL) stability condition. The proposed method does not need any synchronization between different time steps of different elements and is by construction locally and globally conservative. The LTS scheme is based on a piecewise linear polynomial reconstruction in space-time using the MUSCL-Hancock method, to obtain second order of accuracy in both space and time. The new algorithm is first validated on some classical test cases for pure advection problems, for which exact solutions are known. In all cases we obtain a very good level of accuracy, showing also numerical convergence results; we furthermore confirm mass conservation up to machine precision and observe an improved computational efficiency compared to a standard second order TVD scheme for scalar transport with global time stepping (GTS). Then, the new LTS method is applied to some more complex problems, where the new scalar transport scheme has also been coupled to
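The element-local CFL condition at the core of any LTS scheme can be stated in a few lines (the CFL number and the zero-speed guard are illustrative choices; the paper's scheme additionally keeps the asynchronous update conservative):

```python
def local_time_steps(cell_size, speed, cfl=0.9):
    """Element-local CFL condition: each cell gets its own admissible time
    step dt_i = cfl * h_i / |u_i|, so a few small or fast cells no longer
    throttle the whole mesh, which is the essence of local time stepping."""
    return [cfl * h / max(abs(u), 1e-12) for h, u in zip(cell_size, speed)]
```

With global time stepping every cell would be forced down to min(dts); with LTS only the restrictive cells pay that price.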
26 CFR 1.754-1 - Time and manner of making election to adjust basis of partnership property.
Code of Federal Regulations, 2013 CFR
2013-04-01
... basis of partnership property. 1.754-1 Section 1.754-1 Internal Revenue INTERNAL REVENUE SERVICE... Part II, Subchapter K, Chapter 1 of the Code § 1.754-1 Time and manner of making election to adjust.... (b) Time and method of making election. (1) An election under section 754 and this section to...
26 CFR 1.754-1 - Time and manner of making election to adjust basis of partnership property.
Code of Federal Regulations, 2014 CFR
2014-04-01
... basis of partnership property. 1.754-1 Section 1.754-1 Internal Revenue INTERNAL REVENUE SERVICE... Part II, Subchapter K, Chapter 1 of the Code § 1.754-1 Time and manner of making election to adjust.... (b) Time and method of making election. (1) An election under section 754 and this section to...
26 CFR 1.754-1 - Time and manner of making election to adjust basis of partnership property.
Code of Federal Regulations, 2012 CFR
2012-04-01
... basis of partnership property. 1.754-1 Section 1.754-1 Internal Revenue INTERNAL REVENUE SERVICE... Part II, Subchapter K, Chapter 1 of the Code § 1.754-1 Time and manner of making election to adjust.... (b) Time and method of making election. (1) An election under section 754 and this section to...
Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît
2016-04-12
A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826
Real-Time, Single-Step Bioassay Using Nanoplasmonic Resonator With Ultra-High Sensitivity
NASA Technical Reports Server (NTRS)
Zhang, Xiang (Inventor); Ellman, Jonathan A. (Inventor); Chen, Fanqing Frank (Inventor); Su, Kai-Hang (Inventor); Wei, Qi-Huo (Inventor); Sun, Cheng (Inventor)
2014-01-01
A nanoplasmonic resonator (NPR) comprising a metallic nanodisk with alternating shielding layer(s), having a tagged biomolecule conjugated or tethered to the surface of the nanoplasmonic resonator for highly sensitive measurement of enzymatic activity. NPRs enhance Raman signals in a highly reproducible manner, enabling fast detection of protease and enzyme activity, such as Prostate Specific Antigen (paPSA), in real-time, at picomolar sensitivity levels. Experiments on extracellular fluid (ECF) from paPSA-positive cells demonstrate specific detection in a complex bio-fluid background in real-time single-step detection in very small sample volumes.
Moment tensor inversion of waveforms: a two-step time-frequency approach
NASA Astrophysics Data System (ADS)
Vavryčuk, Václav; Kühn, Daniela
2012-09-01
We present a moment tensor inversion of waveforms, which is more robust and yields more stable and more accurate results than standard approaches. The inversion is performed in two steps and combines inversions in time and frequency domains. First, the inversion for the source-time function is performed in the frequency domain using complex spectra. Second, the time-domain inversion for the moment tensor is performed using the source-time function calculated in the first step. In this way, we can consider a realistic, complex source-time function and still keep the final moment tensor inversion linear. Using numerical modelling, we compare the efficiency and accuracy of the proposed approach with standard waveform inversions. We study the sensitivity of the retrieved double-couple and non-double-couple components of the moment tensors to noise in the data, to inaccuracies of the location and of the velocity model, and to the type of the focal mechanism. Finally, the proposed moment tensor inversion is tested on real data observed in a complex 3-D inhomogeneous geological environment: a production blast and a rockburst in the Pyhäsalmi ore mine, Finland.
Positive change following adversity and psychological adjustment over time in abused foster youth.
Valdez, Christine E; Lim, Ban Hong Phylice; Parker, Christopher P
2015-10-01
Many foster youth experience maltreatment in their family of origin and additional maltreatment while in foster care. Not surprisingly, rates of depression are higher in foster youth than in the general population and peak at ages 17-19, during the stressful transition into adulthood. However, no known studies have reported on whether foster youth perceive positive changes following such adversity, and whether positive change facilitates psychological adjustment over time. The current study examined components of positive change (i.e., compassion for others and self-efficacy) together with depression severity from age 17 to 18 as youth prepared to exit foster care. Participants were youth from the Mental Health Service Use of Youth Leaving Foster Care study who endorsed child maltreatment. Components of positive change and severity of abuse were measured initially. Depression was measured initially and every three months over the following year. Latent growth curve modeling was used to examine the course of depression as a function of initial levels of positive change and severity of abuse. Results revealed that decreases in depression followed an inverse quadratic function in which the steepest declines occurred in the first three months and leveled off thereafter. Severity of abuse was positively correlated with higher initial levels of depression and negatively correlated with decreases in depression. Greater self-efficacy was negatively associated with initial levels of depression and predicted decreases in depression over the year, whereas compassion for others was associated with neither initial depression nor changes in depression. Implications for intervention, theory, and research are discussed. PMID:26210859
NASA Technical Reports Server (NTRS)
Wolf, Stephen W. D.; Goodyer, Michael J.
1988-01-01
Following the realization that a simple iterative strategy for bringing the flexible walls of two-dimensional test sections to streamline contours was too slow for practical use, Judd proposed, developed, and placed into service what was the first Predictive Strategy. The Predictive Strategy reduced by 75 percent or more the number of iterations of wall shapes, and therefore the tunnel run-time overhead attributable to the streamlining process, required to reach satisfactory streamlines. The procedures of the Strategy are embodied in the FORTRAN subroutine WAS (standing for Wall Adjustment Strategy) which is written in general form. The essentials of the test section hardware, followed by the underlying aerodynamic theory which forms the basis of the Strategy, are briefly described. The subroutine is then presented as the Appendix, broken down into segments with descriptions of the numerical operations underway in each, with definitions of variables.
Rosario, Margaret; Schrimshaw, Eric W.; Hunter, Joyce
2010-01-01
Despite research documenting variability in the sexual identity development of lesbian, gay, and bisexual (LGB) youths, it remains unclear whether different developmental patterns have implications for the psychological adjustment of LGB youths. The current report longitudinally examines whether different patterns of LGB identity formation and integration are associated with indicators of psychological adjustment among an ethnically diverse sample of 156 LGB youths (ages 14-21) in New York City. Although differences in the timing of identity formation were not associated with psychological adjustment, greater identity integration was related to fewer depressive and anxious symptoms, fewer conduct problems, and higher self-esteem both cross-sectionally and longitudinally. Individual changes in identity integration over time were associated with all four aspects of psychological adjustment, even after controlling for rival hypotheses concerning family and friend support, gay-related stress, negative social relationships, and other covariates. These findings suggest that difficulties in developing an integrated LGB identity may have negative implications for the psychological adjustment of LGB youths and that efforts to reduce distress among LGB youths should address the youths' identity integration. PMID:19916104
Sensitivity of The High-resolution Wam Model With Respect To Time Step
NASA Astrophysics Data System (ADS)
Kasemets, K.; Soomere, T.
The northern part of the Baltic Proper and its subbasins (the Bothnian Sea, the Gulf of Finland, Moonsund) are a challenge for wave modellers. Unlike the southern and eastern parts of the Baltic Sea, their coasts are highly irregular and contain many peculiarities with characteristic horizontal scales of the order of a few kilometres. For example, the northern coast of the Gulf of Finland is extremely ragged and contains a huge number of small islands. Its southern coast is more or less regular but has a cliff up to 50 m high that is frequently covered by tall forests. The area also contains numerous banks with water depths of a couple of metres that may essentially modify wave properties nearby owing to topographical effects. This feature suggests that a high-resolution wave model should be applied to the region in question, with a horizontal resolution of the order of 1 km or even less. According to the Courant-Friedrichs-Lewy criterion, the integration time step for such models must be of the order of a few tens of seconds. A high-resolution WAM model turns out to be fairly sensitive to the particular choice of time step. In our experiments, a medium-resolution model for the whole Baltic Sea was used, with a horizontal resolution of 3 miles (3' along latitudes and 6' along longitudes) and an angular resolution of 12 directions. The model was run with a steady wind blowing at 20 m/s from different directions and with two time steps (1 and 3 minutes). For most wind directions, the rms difference of significant wave heights calculated with different time steps did not exceed 10 cm and was typically of the order of a few per cent. The difference arose within a few tens of minutes and generally did not increase in further computations. However, in the case of a north wind, the difference increased nearly monotonically and reached 25-35 cm (10-15%) within three hours of integration, whereas the mean of significant wave
Large time-step stability of explicit one-dimensional advection schemes
NASA Technical Reports Server (NTRS)
Leonard, B. P.
1993-01-01
There is a widespread belief that most explicit one-dimensional advection schemes need to satisfy the so-called 'CFL condition' - that the Courant number, c = uΔt/Δx, must be less than or equal to one for stability in the von Neumann sense. This puts severe limitations on the time-step in high-speed, fine-grid calculations and is an impetus for the development of implicit schemes, which often require less restrictive time-step conditions for stability but are more expensive per time-step. However, it turns out that, at least in one dimension, if explicit schemes are formulated in a consistent flux-based conservative finite-volume form, von Neumann stability analysis does not place any restriction on the allowable Courant number. Any explicit scheme that is stable for c ≤ 1, with a complex amplitude ratio G(c), can be easily extended to arbitrarily large c. The complex amplitude ratio is then given by exp(-iNθ)G(Δc), where N is the integer part of c and Δc = c - N (< 1); this is clearly stable. The CFL condition is, in fact, not a stability condition at all but rather a 'range restriction' on the 'pieces' in a piecewise polynomial interpolation. When a global view is taken of the interpolation, the need for a CFL condition evaporates. A number of well-known explicit advection schemes are considered and thus extended to large Δt. The analysis also includes a simple interpretation of large-Δt total-variation-diminishing (TVD) constraints.
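The claimed extension is easy to check numerically. The sketch below uses first-order upwind as the base scheme (an assumed example; the abstract does not single out a particular scheme) and verifies that the extended amplitude ratio exp(-iNθ)G(Δc) remains von Neumann stable at a Courant number well above one.

```python
import cmath

def G_upwind(c, theta):
    """Amplification factor of first-order upwind, stable for c <= 1."""
    return 1.0 - c * (1.0 - cmath.exp(-1j * theta))

def G_large(c, theta):
    """Leonard's extension to arbitrary c: exp(-i*N*theta) * G(dc),
    with N the integer part of c and dc = c - N."""
    n = int(c)
    return cmath.exp(-1j * n * theta) * G_upwind(c - n, theta)

thetas = [k * 2.0 * cmath.pi / 400 for k in range(401)]
gmax = max(abs(G_large(3.7, th)) for th in thetas)
# |G| <= 1 for every wavenumber even though c = 3.7 violates the naive CFL limit
```

The phase factor exp(-iNθ) has unit modulus, so stability is inherited entirely from the sub-unit-Courant factor G(Δc), exactly as the abstract argues.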
Space-time variability of floods across Germany: Gradual trends, step changes and fluctuations
NASA Astrophysics Data System (ADS)
Merz, Bruno; Vorogushyn, Sergiy; Viet Dung, Nguyen; Schröter, Kai
2015-04-01
The space-time variability of flood magnitude and frequency across Germany at the interannual and decadal time scale is analyzed and interpreted. The analyses are based on flood time series of 68 catchments for a joint period of 74 years. The catchments are distributed across Germany and show different flood regimes. Different statistical tests are applied to investigate different types of flood changes: gradual trends, step changes and fluctuations. In addition, changes in the mean behavior and in the variability are studied. A focus is placed on the spatial stability of changes, i.e. answering the question to which extent flood changes are coherent across Germany. The joint analysis of changes for a large number of catchments allows interpreting the causes of the observed changes. For instance, climate-related flood changes are expected to show a different behavior than changes caused by river training or land-use change.
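The abstract does not name its statistical tests; as one common example, gradual monotonic trends in a flood series are often screened with the Mann-Kendall test, sketched here for a single series (the variance formula below assumes no tied values).

```python
import math

def mann_kendall(x):
    """Mann-Kendall S statistic and its normal-approximation z (no-ties case)."""
    n = len(x)
    # S counts concordant minus discordant pairs
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)   # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# hypothetical annual flood maxima with a clear upward trend
s_up, z_up = mann_kendall([1.0, 2.1, 2.9, 4.2, 5.0, 6.3, 7.1, 8.4, 9.0, 10.2])
# strictly increasing series: S = n(n-1)/2 = 45, z well above 1.96
```

Step changes and fluctuations require different tools (e.g. change-point tests), which is why the study applies several tests side by side.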
NASA Astrophysics Data System (ADS)
Sheridan, J. A.; Bloom, D. M.; Solomon, P. M.
1995-03-01
We have built a system capable of measuring the step response of III-V electronic devices on the picosecond time scale, with no alteration in device design or epitaxy. To switch on the device under test (DUT), we have designed and fabricated a new type of photoconductor, the recessed-ohmic photoconductor, which swings 0.45 V with a 2-ps rise time and maintains constant output voltage for 100 ps. This switch is monolithically integrated with the DUT. To measure the output current of the DUT, we have built a Ti:sapphire-laser-based pump-probe direct electro-optic sampling system that has a minimum detectable voltage of 70 μV/√Hz and a measurement bandwidth of 750 GHz. The overall system, comprising the recessed-ohmic photoconductor and the electro-optic sampling system, can be used to measure the step response of III-V electronic devices on the picosecond time scale.
A class of large time step Godunov schemes for hyperbolic conservation laws and applications
NASA Astrophysics Data System (ADS)
Qian, ZhanSen; Lee, Chun-Hian
2011-08-01
A large time step (LTS) Godunov scheme, first proposed by LeVeque, is further developed in the present work and applied to the Euler equations. Based on an analysis of the computational performance of LeVeque's linear approximation of wave interactions, a multi-wave approximation of the rarefaction fan is proposed to avoid the occurrence of rarefaction shocks in computations. The developed LTS scheme is validated using 1-D test cases, manifesting high resolution for discontinuities and the capability of maintaining computational stability when large CFL numbers are imposed. The scheme is then extended to multidimensional problems using a dimensional splitting technique; the treatment of boundary conditions for this multidimensional LTS scheme is also proposed. As demonstration problems, inviscid flows over the NACA0012 airfoil and the ONERA M6 wing with given sweep angle are simulated using the developed LTS scheme. The numerical results reveal the high-resolution nature of the scheme, with shocks captured within 1-2 grid points. The resolution of the scheme improves gradually with increasing CFL number, up to an upper bound beyond which the solution becomes severely oscillatory across the shock. Computational efficiency comparisons show that the developed scheme is capable of reducing the computational time effectively by increasing the time step (CFL number).
ERIC Educational Resources Information Center
Galster, George; Temkin, Kenneth; Walker, Chris; Sawyer, Noah
2004-01-01
The authors contribute to the development of empirical methods for measuring the impacts of place-based local development strategies by introducing the adjusted interrupted time-series (AITS) approach. It estimates a more precise counterfactual scenario, thus offering a stronger basis for drawing causal inferences about impacts. The authors…
A multistage time-stepping scheme for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, E.
1985-01-01
A class of explicit multistage time-stepping schemes is used to construct an algorithm for solving the compressible Navier-Stokes equations. Flexibility in treating arbitrary geometries is obtained with a finite-volume formulation. Numerical efficiency is achieved by employing techniques for accelerating convergence to steady state. Computer processing is enhanced through vectorization of the algorithm. The scheme is evaluated by solving laminar and turbulent flows over a flat plate and an NACA 0012 airfoil. Numerical results are compared with theoretical solutions or other numerical solutions and/or experimental data.
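On a scalar model equation du/dt = λu, one member of this family of explicit multistage schemes can be sketched as below. The coefficients 1/4, 1/3, 1/2, 1 are the classic four-stage choice for such schemes and are used here for illustration; the paper's actual coefficients may differ.

```python
import math

def multistage_step(u, z, alphas=(0.25, 1.0 / 3.0, 0.5, 1.0)):
    """One explicit multistage step for du/dt = lambda*u, with z = lambda*dt.
    Every stage restarts from u^n: u_k = u^n + alpha_k * z * u_{k-1}."""
    uk = u
    for a in alphas:
        uk = u + a * z * uk
    return uk

z = -0.1                              # lambda*dt for a decaying mode
amp = multistage_step(1.0, z)
err = abs(amp - math.exp(z))          # leading error is O(z^5) for linear R(u)
```

For a linear residual this choice reproduces the Taylor expansion of exp(z) through fourth order, which is what makes the scheme attractive for marching to steady state with convergence-acceleration techniques layered on top.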
Imaginary Time Step Method to Solve the Dirac Equation with Nonlocal Potential
Zhang Ying; Liang Haozhao; Meng Jie
2009-08-26
The imaginary time step (ITS) method is applied to solve the Dirac equation with nonlocal potentials in coordinate space. Taking the nucleus {sup 12}C as an example, even with nonlocal potentials, the direct ITS evolution for the Dirac equation still meets the disaster of the Dirac sea. However, following the recipe in our former investigation, the disaster can be avoided by the ITS evolution for the corresponding Schroedinger-like equation without localization, which gives convergent results exactly the same as those obtained iteratively by the shooting method with localized effective potentials.
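The basic ITS iteration is simple to demonstrate. The sketch below applies it to a 1-D Schrödinger problem with a harmonic potential (ħ = m = ω = 1), not the Dirac/nonlocal case of the abstract: repeatedly applying (1 − Δτ·H) and renormalizing damps every excited component, so the state relaxes to the ground state.

```python
import math

# grid for a 1-D harmonic oscillator, H = -(1/2) d^2/dx^2 + x^2/2
n, L = 161, 16.0
dx = L / (n - 1)
xs = [-L / 2 + i * dx for i in range(n)]
psi = [math.exp(-((x - 1.0) ** 2)) for x in xs]   # arbitrary start state

def apply_H(psi):
    """3-point-stencil Hamiltonian, with psi pinned to 0 at the boundaries."""
    out = [0.0] * n
    for i in range(1, n - 1):
        lap = (psi[i + 1] - 2.0 * psi[i] + psi[i - 1]) / dx ** 2
        out[i] = -0.5 * lap + 0.5 * xs[i] ** 2 * psi[i]
    return out

dtau = 0.002                  # must stay below 2 / (largest eigenvalue of H)
for _ in range(4000):
    hpsi = apply_H(psi)
    psi = [p - dtau * h for p, h in zip(psi, hpsi)]   # psi <- (1 - dtau*H) psi
    norm = math.sqrt(sum(p * p for p in psi) * dx)
    psi = [p / norm for p in psi]

energy = sum(p * h for p, h in zip(psi, apply_H(psi))) * dx
# relaxes toward the exact ground-state energy 0.5
```

The abstract's point is that this straightforward damping picture breaks down for the Dirac equation, where negative-energy (Dirac sea) states dominate the evolution unless one works with the equivalent Schrödinger-like equation instead.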
A matter of timing: developmental theories of romantic involvement and psychosocial adjustment.
Furman, Wyndol; Collibee, Charlene
2014-11-01
The present study compared two theories of the association between romantic involvement and adjustment: a social timetable theory and a developmental task theory. We examined seven waves of longitudinal data on a community based sample of 200 participants (Wave 1 mean age = 15 years, 10 months). In each wave, multiple measures of substance use, externalizing symptoms, and internalizing symptoms were gathered, typically from multiple reporters. Multilevel modeling revealed that greater levels of romantic involvement in adolescence were associated with higher levels of substance use and externalizing symptoms but became associated with lower levels in adulthood. Having a romantic partner was associated with greater levels of substance use, externalizing symptoms, and internalizing symptoms in adolescence but was associated with lower levels in young adulthood. The findings were not consistent with a social timetable theory, which predicts that nonnormative involvement is associated with poor adjustment. Instead, the findings are consistent with a developmental task theory, which predicts that precocious romantic involvement undermines development and adaptation, but when romantic involvement becomes a salient developmental task in adulthood, it is associated with positive adjustment. Discussion focuses on the processes that may underlie the changing nature of the association between romantic involvement and adjustment. PMID:24703413
ERIC Educational Resources Information Center
Neiderhiser, Jenae M.; Reiss, David; Plomin, Robert; Hetherington, E. Mavis
1999-01-01
Examined the genetic and environmental contributions to the predictive association between parenting and adolescent adjustment in identical and fraternal twins, and full, half, and genetically unrelated siblings in nondivorced and stepfamilies. Found that cross-lagged associations between parental conflict-negativity and adolescent antisocial…
ERIC Educational Resources Information Center
Safarkhani, Maryam; Moerbeek, Mirjam
2013-01-01
In a randomized controlled trial, a decision needs to be made about the total number of subjects for adequate statistical power. One way to increase the power of a trial is by including a predictive covariate in the model. In this article, the effects of various covariate adjustment strategies on increasing the power is studied for discrete-time…
A simple method for improving the time-stepping accuracy in atmosphere and ocean models
NASA Astrophysics Data System (ADS)
Williams, P. D.
2012-12-01
In contemporary numerical simulations of the atmosphere and ocean, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. A common time-stepping method in atmosphere and ocean models is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter, which has become known as the RAW filter (Williams 2009, 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other atmosphere and ocean models. References PD Williams (2009) A
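The RA-to-RAW modification really is close to a one-line change. The sketch below (assumed form of the filter, following Williams 2009) integrates the linear oscillator dx/dt = iωx with leapfrog and compares the amplitude error of the two filters; the parameter values are illustrative.

```python
import cmath

def leapfrog(alpha, omega=1.0, dt=0.2, nsteps=500, nu=0.2):
    """Leapfrog for dx/dt = i*omega*x with the RA (alpha=1) or RAW filter."""
    xm = 1.0 + 0.0j                    # filtered x_{n-1}
    x = cmath.exp(1j * omega * dt)     # x_n, seeded from the exact solution
    for _ in range(nsteps):
        xp = xm + 2j * omega * dt * x  # leapfrog step
        d = 0.5 * nu * (xm - 2.0 * x + xp)
        # RAW touches BOTH time levels; alpha = 1 recovers the classic RA filter
        xm, x = x + alpha * d, xp + (alpha - 1.0) * d
    return abs(x)

amp_ra = leapfrog(alpha=1.0)    # Robert-Asselin: spurious damping accumulates
amp_raw = leapfrog(alpha=0.53)  # RAW: leading-order damping of the mode removed
```

The exact solution has unit amplitude for all time, so the distance of each result from 1 measures the spurious damping; the RAW run stays far closer to 1 than the RA run, consistent with the third-order amplitude accuracy claimed in the abstract.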
NASA Astrophysics Data System (ADS)
Ho, C. Y.; Leung, R. C. K.; Zhou, K.; Lam, G. C. Y.; Jiang, Z.
2011-09-01
One-step direct aeroacoustic simulation (DAS) has received attention from aerospace and mechanical high-pressure fluid-moving system manufacturers for quite some time. They aim to simulate the unsteady flow and acoustic field in the duct simultaneously in order to investigate the aeroacoustic generation mechanisms. Because of the large length- and energy-scale disparities between the acoustic far field and the aerodynamic near field, a highly accurate, high-resolution simulation scheme is required. This involves the use of high-order compact finite-difference and time-advancement schemes in simulation. However, in this situation, large buffer zones are always needed to suppress the spurious numerical waves emanating from computational boundaries, which further increases the computational resources needed to yield accurate results. On the other hand, for a problem such as supersonic jet noise, the numerical scheme should be able to resolve both strong shock waves and weak acoustic waves simultaneously. Usually, a numerical aeroacoustic scheme that is good for low-Mach-number flow is not able to give satisfactory results for shock waves. Therefore, the aeroacoustic research community has been looking for a more efficient one-step DAS scheme that has accuracy comparable to the finite-difference approach with smaller buffer regions, yet is able to give accurate solutions from subsonic to supersonic flows. The conservation element and solution element (CE/SE) scheme is one of the possible schemes satisfying the above requirements. This paper aims to report the development of a CE/SE scheme for one-step DAS and illustrate its robustness and effectiveness with two selected benchmark problems.
Hasegawa, Yuka
2010-06-01
The purpose of this study is twofold: (a) to examine how women adjust their time allocation when they become working mothers; and (b) to assess the effect of their adjustment on their daily emotional experience. Using a methodology based on the Day Reconstruction Method which is designed to reduce systematic bias, seven women responded to a questionnaire during parental leave (T1), within 1 month after returning to work (T2), and 3 months after returning to work (T3). The results revealed that most of the participants tended to utilize the time available to them for sleep and child care by decreasing housework and leisure. They experienced increased pressure in terms of time and felt more or equally energetic or intimate toward their families in both T2 and T3. The other participants, who had less time available for sleep or meals, experienced increased depression or tiredness. PMID:20597356
Mazzeschi, Claudia; Pazzagli, Chiara; Radi, Giulia; Raspa, Veronica; Buratta, Livia
2015-01-01
The transition to parenthood is widely considered a period of increased vulnerability often accompanied by stress. Abidin conceived parenting stress as referring to specific difficulties in adjusting to the parenting role. Most studies of psychological distress arising from the demands of parenting have investigated the impact of stress on the development of dysfunctional parent-child relationships and on adult and child psychopathology. Studies have largely focused on mothers' postnatal experience; less attention has been devoted to maternal prenatal characteristics associated with subsequent parental stress, and studies of maternal prenatal predictors are few. Furthermore, no studies have examined that association exclusively with samples of first-time mothers. With an observational prospective study design with two time periods, the aim of this study was to investigate the role of mothers' attachment style, maternal prenatal attachment to the fetus and dyadic adjustment during pregnancy (7th month of gestation) and their potential unique contribution to parenting stress 3 months after childbirth in a sample of nulliparous women. Results showed significant correlations between antenatal measures. Maternal attachment style (especially relationship anxiety) was negatively correlated with prenatal attachment and with dyadic adjustment; positive correlations resulted between prenatal attachment and dyadic adjustment. Each of the investigated variables was also a good predictor of parenting stress 3 months after childbirth. Findings suggested how these dimensions could be considered as risk factors in the transition to motherhood and in the very beginning of the emergence of the caregiving system, especially with first-time mothers. PMID:26441808
Detection of Zika virus by SYBR green one-step real-time RT-PCR.
Xu, Ming-Yue; Liu, Si-Qing; Deng, Cheng-Lin; Zhang, Qiu-Yan; Zhang, Bo
2016-10-01
The ongoing Zika virus (ZIKV) outbreak has rapidly spread to new areas of the Americas, marking the first transmissions outside its traditional endemic areas in Africa and Asia. Due to the link with newborn defects and neurological disorders, numerous infected cases throughout the world, and various mosquito vectors, the virus has been considered an international public health emergency. In the present study, we developed a SYBR Green based one-step real-time RT-PCR assay for rapid detection of ZIKV. Our results revealed that the real-time assay is highly specific and sensitive in detecting ZIKV in cell samples. Importantly, the replication of ZIKV at different time points in infected cells could be rapidly monitored by the real-time RT-PCR assay. Specifically, the real-time RT-PCR showed acceptable performance in measurement of infectious ZIKV RNA. This assay could detect ZIKV at a titer as low as 1 PFU/mL. The real-time RT-PCR assay could be a useful tool for further virology surveillance and diagnosis of ZIKV. PMID:27444120
Multiple "time step" Monte Carlo simulations: Application to charged systems with Ewald summation
NASA Astrophysics Data System (ADS)
Bernacki, Katarzyna; Hetényi, Balázs; Berne, B. J.
2004-07-01
Recently, we have proposed an efficient scheme for Monte Carlo simulations, the multiple "time step" Monte Carlo (MTS-MC) [J. Chem. Phys. 117, 8203 (2002)], based on the separation of the potential interactions into two additive parts. In this paper, the structural and thermodynamic properties of the simple point charge water model combined with the Ewald sum are compared for the MTS-MC real-/reciprocal-space split of the Ewald summation and the common Metropolis Monte Carlo method. We report a number of observables as a function of CPU time calculated using MC and MTS-MC. The correlation functions indicate that speedups on the order of 4.5-7.5 can be obtained for systems of 108-500 waters for a splitting parameter of n=10.
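The split-potential acceptance structure described above can be sketched in a few lines. The 1-D harmonic potential, step size, and short/long split below are illustrative stand-ins, not the paper's SPC-water/Ewald setup; only the compound-move logic follows the MTS-MC idea.

```python
import math
import random

def mts_mc_move(x, v_short, v_long, n_inner=10, delta=0.5, beta=1.0):
    """One compound MTS-MC move: n_inner Metropolis moves driven by the
    cheap part of the potential, then a single accept/reject on the
    expensive part (cf. the real-/reciprocal-space Ewald split)."""
    x0 = x
    for _ in range(n_inner):
        trial = x + random.uniform(-delta, delta)
        dv = v_short(trial) - v_short(x)
        if dv <= 0 or random.random() < math.exp(-beta * dv):
            x = trial
    # outer acceptance corrects the inner chain for the neglected part,
    # so the compound move samples exp(-beta * (v_short + v_long))
    dv = v_long(x) - v_long(x0)
    if dv <= 0 or random.random() < math.exp(-beta * dv):
        return x
    return x0
```

Sampling a harmonic well split as 0.4x^2 (short) + 0.1x^2 (long) should reproduce the Boltzmann variance 1/(beta*k) = 1 of the full 0.5x^2 potential, which is a quick consistency check that the outer acceptance restores detailed balance.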
The multiple time step r-RESPA procedure and polarizable potentials based on induced dipole moments
NASA Astrophysics Data System (ADS)
Masella, Michel
In the present study, we present an accelerating scheme based on the reversible multiple time step r-RESPA method to be used in molecular dynamics simulations with polarizable potentials based on induced dipole moments. Even if the induced dipoles are estimated with an iterative self-consistent procedure, this scheme significantly reduces the CPU time needed to perform a molecular dynamics simulation, by up to a factor of 2, compared to the Car-Parrinello method, where additional dynamical variables are introduced for the treatment of the induced dipoles. The tests show that stable and reliable molecular dynamics trajectories can be generated with this scheme, and that the physical properties derived from the trajectories are equivalent to those computed with the classical all-atom iterative approach and the Car-Parrinello one.
An Efficient Time-Stepping Scheme for Ab Initio Molecular Dynamics Simulations
NASA Astrophysics Data System (ADS)
Tsuchida, Eiji
2016-08-01
In ab initio molecular dynamics simulations of real-world problems, the simple Verlet method is still widely used for integrating the equations of motion, while more efficient algorithms are routinely used in classical molecular dynamics. We show that if the Verlet method is used in conjunction with pre- and postprocessing, the accuracy of the time integration is significantly improved with only a small computational overhead. We also propose several extensions of the algorithm required for use in ab initio molecular dynamics. The validity of the processed Verlet method is demonstrated in several examples including ab initio molecular dynamics simulations of liquid water. The structural properties obtained from the processed Verlet method are found to be sufficiently accurate even for large time steps close to the stability limit. This approach results in a 2× performance gain over the standard Verlet method for a given accuracy. We also show how to generate a canonical ensemble within this approach.
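The baseline the processed method improves on is the standard velocity-Verlet update. A minimal single-degree-of-freedom sketch of that baseline follows; the pre- and post-processing maps of Tsuchida's method are deliberately omitted, so this is only the unprocessed integrator being compared against.

```python
def velocity_verlet(x, v, a, dt, accel):
    """One velocity-Verlet step for a single degree of freedom; the
    processed method wraps sequences of such steps between pre- and
    post-processing transformations of the state (not shown here)."""
    x_new = x + v * dt + 0.5 * a * dt * dt
    a_new = accel(x_new)
    v_new = v + 0.5 * (a + a_new) * dt
    return x_new, v_new, a_new
```

For a harmonic oscillator the total energy should stay bounded over many steps, the hallmark of the symplectic Verlet family that the processed variant retains.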
Goeke-Morey, Marcie C; Taylor, Laura K; Merrilees, Christine E; Shirlow, Peter; Cummings, E Mark
2014-12-01
A growing literature supports the importance of understanding the link between religiosity and youths' adjustment and development, but in the absence of rigorous, longitudinal designs, questions remain about the direction of effect and the role of family factors. This paper investigates the bidirectional association between adolescents' relationship with God and their internalizing adjustment. Results from 2-wave, SEM cross-lag analyses of data from 667 mother/adolescent dyads in Belfast, Northern Ireland (50% male, M age = 15.75 years old) support a risk model suggesting that greater internalizing problems predict a weaker relationship with God 1 year later. Significant moderation analyses suggest that a stronger relationship with God predicted fewer depression and anxiety symptoms for youth whose mothers used more religious coping. PMID:24955590
Social functioning and adjustment in Chinese children: the imprint of historical time.
Chen, Xinyin; Cen, Guozhen; Li, Dan; He, Yunfeng
2005-01-01
This study examined, in 3 cohorts (1990, 1998, and 2002) of elementary school children (M age=10 years), relations between social functioning and adjustment in different phases of the societal transition in China. Data were obtained from multiple sources. The results indicate that sociability-cooperation was associated with peer acceptance and teacher-rated competence, whereas aggression was associated with social and school difficulties in all 3 cohorts. The effect of different social contexts was reflected mainly in the relations between shyness-sensitivity and adjustment. Whereas shyness was associated with social and academic achievement in the 1990 cohort, the associations became weaker or nonsignificant in the 1998 cohort. Furthermore, shyness was associated with peer rejection, school problems, and depression in the 2002 cohort. PMID:15693766
NASA Astrophysics Data System (ADS)
Gupta, Shubhangi; Wohlmuth, Barbara; Helmig, Rainer
2016-05-01
We present an extrapolation-based semi-implicit multi-rate time stepping (MRT) scheme and a compound-fast MRT scheme for a naturally partitioned, multi-time-scale hydro-geomechanical hydrate reservoir model. The performance of the two MRT methods is evaluated in terms of speed-up and accuracy by comparison to an iteratively coupled solution scheme, and their advantages and disadvantages are discussed. We observe that the extrapolation-based semi-implicit method gives a higher speed-up but is strongly dependent on the relative time scales of the latent (slow) and active (fast) components. On the other hand, the compound-fast method is more robust and less sensitive to the relative time scales, but gives a lower speed-up than the semi-implicit method, especially when the relative time scales of the active and latent components are comparable.
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next-generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most currently available algorithms and computational fluid dynamics codes can provide. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) the high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
The two-step shape and timing of the last deglaciation in Antarctica
Jouzel, J.; Petit, J.R.; Duclos, Y.
1995-04-01
The two-step character of the last deglaciation is well recognized in Western Europe, in Greenland and in the North Atlantic. For example, in Greenland, a gradual temperature increase started at the Boelling (B) around 14.5 ky BP, spanned the Alleroed (A) and was followed by the cold Younger Dryas (YD) event, which terminated abruptly around 11.5 ky BP. Recent results suggest that this BA/YD sequence may have extended throughout the Northern Hemisphere, but the evidence of a late-transition cooling is still poor for the Southern Hemisphere. Here we present a detailed isotopic record analyzed in a new ice core drilled at Dome B in East Antarctica that fully demonstrates the existence of an Antarctic cold reversal (ACR). These results suggest that the two-step shape of the last deglaciation has a worldwide character, but they also point to noticeable interhemispheric differences. Thus, the coldest part of the ACR, which shows a temperature drop about three times weaker than that recorded during the YD in Greenland, may have preceded the YD. Antarctica did not experience abrupt changes, and the two warming periods started there before they started in Greenland. The links between Southern and Northern Hemisphere climates throughout this period are discussed in the light of additional information derived from the Antarctic dust record. 87 refs., 5 figs.
Photodissociation of Acetylene and Acetone using Step-Scan Time-Resolved FTIR Emission Spectroscopy
NASA Technical Reports Server (NTRS)
McLaren, Ian A.; Wrobel, Jacek D.
1997-01-01
The photodissociation of acetylene and acetone was investigated as a function of added quenching gas pressure using step-scan time-resolved FTIR emission spectroscopy. The main components of the apparatus are a Bruker IFS88 step-scan Fourier transform infrared (FTIR) spectrometer coupled to a flow cell equipped with Welsh collection optics. Vibrationally excited C2H radicals were produced from the photodissociation of acetylene in the unfocused experiments. The infrared (IR) emission from these excited C2H radicals was investigated as a function of added argon pressure. Argon quenching rate constants for all C2H emission bands are of the order of 10^-13 cc/(molecule·s). Quenching of these radicals by acetylene is efficient, with a rate constant in the range of 10^-11 cc/(molecule·s). The relative intensity of the different C2H emission bands did not change with increasing argon or acetylene pressure. However, the overall IR emission intensity decreased, for example, by more than 50% when the argon partial pressure was raised from 0.2 to 2 Torr at a fixed precursor pressure of 160 mTorr. These observations provide evidence for the formation of a metastable C2H2 species, which is collisionally quenched by argon or acetylene. Problems encountered in the course of the experimental work are also described.
Structural damage evolution assessment using the regularised time step integration method
NASA Astrophysics Data System (ADS)
Chen, Hua-Peng; Maung, Than Soe
2014-09-01
This paper presents an approach to identify both the location and severity evolution of damage in engineering structures directly from measured dynamic response data. A relationship between the change in structural parameters such as stiffness caused by structural damage development and the measured dynamic response data such as accelerations is proposed, on the basis of the governing equations of motion for the original and damaged structural systems. Structural damage parameters associated with time are properly chosen to reflect both the location and severity development over time of damage in a structure. Basic equations are provided to solve the chosen time-dependent damage parameters, which are constructed by using the Newmark time step integration method without requiring a modal analysis procedure. The Tikhonov regularisation method incorporating the L-curve criterion for determining the regularisation parameter is then employed to reduce the influence of measurement errors in dynamic response data and to produce stable solutions for structural damage parameters. Results for two numerical examples with various simulated damage scenarios show that the proposed method can accurately identify the locations of structural damage and correctly assess the evolution of damage severity from information on vibration measurements with uncertainties.
NASA Astrophysics Data System (ADS)
Roland, Teboh; Mavroidis, Panayiotis; Shi, Chengyu; Papanikolaou, Nikos
2010-05-01
System latency introduces geometric errors in the course of real-time target tracking radiotherapy. This effect can be minimized, for example by the use of predictive filters, but cannot be completely avoided. In this work, we present a convolution technique that can incorporate the effect as part of the treatment planning process. The method can be applied independently or in conjunction with the predictive filters to compensate for residual latency effects. The implementation was performed on TrackBeam (Initia Ltd, Israel), a prototype real-time target tracking system assembled and evaluated at our Cancer Institute. For the experimental system settings examined, a Gaussian distribution attributable to the TrackBeam latency was derived with σ = 3.7 mm. The TrackBeam latency, expressed as an average response time, was deduced to be 172 ms. Phantom investigations were further performed to verify the convolution technique. In addition, patient studies involving 4DCT volumes of previously treated lung cancer patients were performed to incorporate the latency effect in the dose prediction step. This also enabled us to effectively quantify the dosimetric and radiobiological impact of the TrackBeam and other higher latency effects on the clinical outcome of a real-time target tracking delivery.
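The latency convolution amounts to smearing a planned 1-D dose (or fluence) profile with the derived Gaussian. The sketch below uses the sigma = 3.7 mm reported above; the 1 mm grid spacing and the step-edge profile are illustrative, not TrackBeam data.

```python
import math

def gaussian_kernel(sigma_mm, dx_mm, half_width=4.0):
    """Discrete, normalized Gaussian sampled on the dose grid."""
    n = int(half_width * sigma_mm / dx_mm)
    k = [math.exp(-0.5 * (i * dx_mm / sigma_mm) ** 2) for i in range(-n, n + 1)]
    s = sum(k)
    return [w / s for w in k]

def smear(profile, kernel):
    """Convolve with edge padding so a flat profile stays flat."""
    n = len(kernel) // 2
    padded = [profile[0]] * n + list(profile) + [profile[-1]] * n
    return [sum(padded[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(len(profile))]
```

Applied to a sharp field edge, the smear broadens the penumbra, which is exactly the geometric blurring the latency kernel is meant to fold into the planning dose.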
NASA Astrophysics Data System (ADS)
Murthi, A.; Menon, S.; Sednev, I.
2011-12-01
An inherent difficulty in the ability of global climate models to accurately simulate precipitation lies in the use of a large time step, Δt (usually 30 minutes), to solve the governing equations. Since microphysical processes are characterized by small time scales compared to Δt, finite difference approximations used to advance microphysics equations suffer from numerical instability and large time truncation errors. With this in mind, the sensitivity of precipitation simulated by the atmospheric component of CESM, namely the Community Atmosphere Model (CAM 5.1), to the microphysics time step (τ) is investigated. Model integrations are carried out for a period of five years with a spin-up time of about six months for a horizontal resolution of 2.5 × 1.9 degrees and 30 levels in the vertical, with Δt = 1800 s. The control simulation with τ = 900 s is compared with one using τ = 300 s for accumulated precipitation and radiation budgets at the surface and top of the atmosphere (TOA), while keeping Δt fixed. Our choice of τ = 300 s is motivated by previous work on warm rain processes wherein it was shown that a value of τ around 300 s was necessary, but not sufficient, to ensure positive definiteness and numerical stability of the explicit time integration scheme used to integrate the microphysical equations. However, since the entire suite of microphysical processes is represented in our case, we suspect that this might impose additional restrictions on τ. The τ = 300 s case produces differences in large-scale accumulated rainfall from the τ = 900 s case by as large as 200 mm over certain regions of the globe. The spatial patterns of total accumulated precipitation using τ = 300 s are in closer agreement with satellite-observed precipitation, when compared to the τ = 900 s case. Differences are also seen in the radiation budget, with the τ = 300 (900) s cases producing surpluses that range between 1-3 W/m2 at both the TOA and surface in the global
Detection and Correction of Step Discontinuities in Kepler Flux Time Series
NASA Technical Reports Server (NTRS)
Kolodziejczak, J. J.; Morris, R. L.
2011-01-01
PDC 8.0 includes an implementation of a new algorithm to detect and correct step discontinuities appearing in roughly one of every 20 stellar light curves during a given quarter. The majority of such discontinuities are believed to result from high-energy particles (either cosmic or solar in origin) striking the photometer and causing permanent local changes (typically -0.5%) in quantum efficiency, though a partial exponential recovery is often observed [1]. Since these features, dubbed sudden pixel sensitivity dropouts (SPSDs), are uncorrelated across targets, they cannot be properly accounted for by the current detrending algorithm. PDC detrending is based on the assumption that features in flux time series are due either to intrinsic stellar phenomena or to systematic errors and that systematics will exhibit measurable correlations across targets. SPSD events violate these assumptions and their successful removal not only rectifies the flux values of affected targets, but demonstrably improves the overall performance of PDC detrending [1].
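A minimal version of step-discontinuity detection, comparing robust level estimates on either side of each sample and flagging jumps, can be sketched as follows. The window length and threshold are illustrative choices, not PDC's tuned values, and the exponential-recovery modeling of real SPSDs is omitted.

```python
import statistics

def detect_steps(flux, window=10, thresh=0.003):
    """Return indices where the median flux level jumps by more than
    thresh between the windows before and after the index; a toy
    stand-in for the PDC 8.0 SPSD detector."""
    hits = []
    for i in range(window, len(flux) - window + 1):
        before = statistics.median(flux[i - window:i])
        after = statistics.median(flux[i:i + window])
        if abs(after - before) > thresh:
            hits.append(i)
    return hits
```

On a synthetic light curve with the typical -0.5% dropout, the detector flags a cluster of indices around the true break and stays silent on a flat series.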
Electric and hybrid electric vehicle study utilizing a time-stepping simulation
NASA Technical Reports Server (NTRS)
Schreiber, Jeffrey G.; Shaltens, Richard K.; Beremand, Donald G.
1992-01-01
The applicability of NASA's advanced power technologies to electric and hybrid vehicles was assessed using a time-stepping computer simulation to model electric and hybrid vehicles operating over the Federal Urban Driving Schedule (FUDS). Both the energy and power demands of the FUDS were taken into account and vehicle economy, range, and performance were addressed simultaneously. Results indicate that a hybrid electric vehicle (HEV) configured with a flywheel buffer energy storage device and a free-piston Stirling convertor fulfills the emissions, fuel economy, range, and performance requirements that would make it acceptable to the consumer. It is noted that an assessment to determine which of the candidate technologies are suited for the HEV application has yet to be made. A proper assessment should take into account the fuel economy and range, along with the driveability and total emissions produced.
Comparison of Fixed and Variable Time Step Trajectory Integration Methods for Cislunar Trajectories
NASA Technical Reports Server (NTRS)
Weeks, Michael W.; Thrasher, Stephen W.
2007-01-01
Due to the nonlinear nature of the Earth-Moon-Sun three-body problem and non-spherical gravity, CEV cislunar targeting algorithms will require many propagations in their search for a desired trajectory. For on-board targeting especially, the algorithm must have a simple, fast, and accurate propagator to calculate a trajectory with reasonable computation time, and still be robust enough to remain stable in the various flight regimes that the CEV will experience. This paper compares Cowell's method with a fourth-order Runge-Kutta integrator (RK4), Encke's method with a fourth-order Runge-Kutta-Nyström integrator (RKN4), and a method known as Multi-Conic. Additionally, the study includes the Bond-Gottlieb 14-element method (BG14) and extends the investigation of Encke-Nyström methods to integrators of higher order and with variable step size.
Nikzad, Nasim; Sahari, Mohammad A; Vanak, Zahra Piravi; Safafar, Hamed; Boland-nazar, Seyed A
2013-08-01
Weight, oil, fatty acid, tocopherol, polyphenol, and sterol properties of 5 olive cultivars (Zard, Fishomi, Ascolana, Amigdalolia, and Conservalia) during the crude, lye treatment, washing, fermentation, and pasteurization steps were studied. Results showed: oil percent was highest in Ascolana (crude step) and lowest in Fishomi (pasteurization step); during processing steps, in all cultivars, oleic, palmitic, linoleic, and stearic acids predominated; the greatest changes in saturated and unsaturated fatty acids occurred in the fermentation step; the highest and the lowest ratios of ω3/ω6 were in Ascolana (washing step) and in Zard (pasteurization step), respectively; the highest and the lowest tocopherol contents were in Amigdalolia and Fishomi, respectively, and the major damage occurred in the lye step; the highest and the lowest polyphenol contents were in Ascolana (crude step) and in Zard and Ascolana (pasteurization step), respectively; the major damage among cultivars occurred during the lye step, in which the polyphenol content was reduced to one-tenth of its initial value; sterols did not change during the steps. A review of olive patents shows that many fruit composition attributes, such as oil quality and quantity and the fatty acid fraction, can be changed by altering cultivar and process. PMID:23688142
Family Time and the Psycho-Social Adjustment of Adolescent Siblings and Their Parents
ERIC Educational Resources Information Center
Crouter, Ann C.; Head, Melissa R.; McHale, Susan M.; Tucker, Corinna Jenkins
2004-01-01
This study examined the implications of family time for first-born and second-born adolescent offspring, mothers, and fathers in 192 dual-earner families, defining family time as time shared by the foursome in activities across 7 days. Data were gathered in daily telephone interviews. For first-borns, higher levels of family time at Time 1…
Białek, Michał; Markiewicz, Łukasz; Sawicki, Przemysław
2015-01-01
Delayed lotteries are much more common in everyday life than pure lotteries. Usually, we need to wait to find out the outcome of a risky decision (e.g., investing in a stock market, engaging in a relationship). However, most research has studied time discounting and probability discounting in isolation, using methodologies designed specifically to track changes in one parameter. The most commonly used method is adjusting, but its reported validity and time stability in research on discounting are suboptimal. The goal of this study was to introduce a novel method for analyzing delayed lotteries, conjoint analysis, which hypothetically is more suitable for analyzing individual preferences in this area. A set of two studies compared conjoint analysis with adjusting. The results suggest that individual parameters of discounting strength estimated with conjoint analysis have higher predictive value (Studies 1 and 2) and are more stable over time (Study 2) compared to adjusting. We discuss these findings, despite the exploratory character of the reported studies, by suggesting that future research on delayed lotteries should be cross-validated using both methods. PMID:25674069
NASA Astrophysics Data System (ADS)
Tassotto, Michael
2001-08-01
Liquid surfaces are very abundant in nature. Despite the importance of the liquid interface in general, experimental molecular-level data were almost completely lacking prior to the last decade and a half. The intent of this work is to provide a means by which experimental data on the composition of liquid surfaces and the average orientation of their constituent molecules can be obtained in order to supplement data from molecular dynamics and related computational techniques. To this end, a unique time-of-flight (TOF) spectrometer, which constitutes the backbone of a new method to study liquid surfaces, was constructed and commissioned. The performance of the spectrometer is demonstrated in a number of exemplary TOF spectra obtained from liquid glycerol. Moving from mere qualitative to quantitative surface analysis necessitates the ability to relate physical quantities such as detection efficiencies, accurate signal intensities, and interaction cross-sections for all elements to one another. As a first step, the absolute detection efficiency of a channel electron multiplier, used as particle detector in the spectrometer, was measured for the noble gas ions He+, Ar+, and Xe+. The data obtained led to an empirically derived, general expression of the detection efficiency that is applicable to particles of any atomic number. The results also show that the threshold velocity, below which a multiplier does not respond to impinging ions, cannot be regarded as independent of the ion's atomic number, as previously reported in the literature. The second step involved a comprehensive investigation of ion-atom interactions and spectral features that are crucial for the processing of experimental signal intensities for quantitative analysis. For this purpose, the binary collision code MARLOWE was used in extensive trajectory calculations simulating TOF spectra. The simulation results confirm the high surface sensitivity of the technique and reveal the strong dependence of the
Database Integration: An Initial Step Towards the Deep-Time Data Infrastructure
NASA Astrophysics Data System (ADS)
Kolankowski, S. M.; Fox, P. A.; Ma, X.
2015-12-01
As our knowledge of Earth's geologic history grows, we require more robust methods of sharing immense amounts of data. Various databases across numerous disciplines have been widely utilized to offer extensive information on very specific pieces of both Earth's history and its current state, e.g., the fossil record, rock composition, proteins, etc. In order to gain a deeper understanding of our planet's past, we must combine the resources present in our online communities. These databases could be a powerful force in identifying previously unseen correlations if used in tandem rather than as separate entities. Creating a unifying site that provides links to these databases will aid in our ability as a collaborative scientific community to utilize our findings on a larger scale. The Deep-Time Data Infrastructure is currently underway as part of a larger effort to accomplish this goal. DTDI works not to build a new database, but to integrate existing resources. This research is the first step in the DTDI program. To create this infrastructure, all current geologic and related databases had to be identified and their schemas recorded. Using variables from their combined records, we are able to determine the best way to integrate them using common factors. The Deep-Time Data Infrastructure will allow geoscientists to bridge gaps in data and further our understanding of our planet's history.
ERIC Educational Resources Information Center
Svetcov, Eric
2005-01-01
This article provides a list of the essential steps to keeping a school's or district's network safe and sound. It describes how to establish a security architecture and approach that will continually evolve as the threat environment changes over time. The article discusses the methodology for implementing this approach and then discusses the…
Write a Research Paper One Step at a Time: Research Writing Guide.
ERIC Educational Resources Information Center
Evans, Helen, Ed.
Intended to supplement the textbook series "Houghton Mifflin English Grammar and Composition" and to offer students and classroom teachers in the secondary schools a review of research writing, this guide outlines a step-by-step process allowing for thorough student comprehension and comfort with the application of basic research and writing…
Outward Bound to the Galaxies--One Step at a Time
ERIC Educational Resources Information Center
Ward, R. Bruce; Miller-Friedmann, Jaimie; Sienkiewicz, Frank; Antonucci, Paul
2012-01-01
Less than a century ago, astronomers began to unlock the cosmic distances within and beyond the Milky Way. Understanding the size and scale of the universe is a continuing, step-by-step process that began with the remarkably accurate measurement of the distance to the Moon made by early Greeks. In part, the authors have ITEAMS (Innovative…
Herzog, W; Zatsiorsky, V; Prilutsky, B I; Leonard, T R
1994-06-01
Force-sharing among muscles during locomotion has been studied experimentally using 'representative' or 'average' step cycles. Mathematical approaches aimed at predicting individual muscle forces during locomotion are based on the assumption that force-sharing among muscles occurs in a consistent and unique way. In this study, we quantify normal variations in muscular force-time histories for step cycles executed at a given nominal speed, so that we can appreciate what it means to analyze 'representative' or 'average' step cycles and can evaluate whether these normal variations in muscular force-time histories are random or may be associated with variations in the kinematics of consecutive step cycles. Forces in gastrocnemius, soleus and plantaris muscles were measured for step cycles performed at a constant nominal speed in freely moving cats. Gastrocnemius forces were always larger than peak plantaris or soleus forces. Also, peak gastrocnemius forces typically occurred first after paw contact, followed by peak soleus and then peak plantaris forces. Furthermore, it was found that variations in muscular force-time histories were substantial and were systematically related to step-cycle durations. The results of this study suggest that findings based on 'representative' or 'average' step cycles for a given nominal speed of locomotion should be viewed cautiously and that variations in force-sharing among muscles are systematically related to variations in locomotor kinematics. PMID:7931035
NASA Astrophysics Data System (ADS)
Jordan, D. M.; Hill, J. D.; Uman, M. A.; Dwyer, J. R.; Rassoul, H. K.
2012-12-01
Time-of-arrival (TOA) techniques were used to determine the three-dimensional locations and emission times of x-ray and dE/dt sources measured at ground level in association with dart-stepped leader steps in natural and rocket-and-wire triggered lightning discharges recorded during summer 2011 at Camp Blanding, FL. The measurement network consisted of ten flat plate dE/dt antennas approximately co-located with eight plastic and two Lanthanum Bromide scintillation detectors arrayed around the launching facility over an area of about 0.25 square kilometers. For two triggered lightning dart-stepped leaders, x-ray sources were emitted from locations separated by average distances of 22.7 m and 29 m, respectively, from the locations of the associated dE/dt pulse peaks. The x-ray sources occurred beneath the dE/dt sources in 88% of the cases. X-rays were emitted from 20 ns to 2.16 μs following the dE/dt pulse peaks, with average temporal separations of 150 ns and 290 ns, respectively, for the two triggered lightning events. For one natural lightning dart-stepped leader, x-ray sources were emitted an average total distance of 39.2 m from the associated dE/dt pulse peak, and occurred beneath the location of the dE/dt source in 86% of the cases. The x-rays were emitted from 10 ns to 1.76 μs following the dE/dt pulse peak with an average temporal separation of 280 ns. In each of the three events, the altitude displacement between the dE/dt and x-ray sources dominated the total separation, accounting for 90%, 63%, and 72%, respectively, of the total separation. X-ray sources were distributed randomly in the lateral directions about the lightning channel in each event. For the triggered lightning events, x-rays were located from 2.5-83.5 μs prior to the return stroke at altitudes ranging from 24-336 m. For the natural lightning event, x-rays were located from 40.4-222.3 μs prior to the return stroke at altitudes ranging from 99-394 m. Cumulatively, 67% of the located x
A study on quality-adjusted impact of time lapse on iris recognition
NASA Astrophysics Data System (ADS)
Sazonova, Nadezhda; Hua, Fang; Liu, Xuan; Remus, Jeremiah; Ross, Arun; Hornak, Lawrence; Schuckers, Stephanie
2012-06-01
Although the human iris pattern is widely accepted as a stable biometric feature, recent research has found some evidence of aging effects in iris systems. To investigate changes in iris recognition performance due to the elapsed time between probe and gallery iris images, we examine the effect of elapsed time on iris recognition using 7,628 iris images from 46 subjects, with an average of ten visits, acquired over two years from a legacy database at Clarkson University. Taking into consideration the impact of quality factors such as local contrast, illumination, blur and noise on iris recognition performance, regression models are built with and without quality metrics to evaluate the degradation of iris recognition performance based on time-lapse factors. Our experimental results demonstrate a decrease in iris recognition performance with increased elapsed time for two iris recognition systems (the modified Masek algorithm and the commercial VeriEye SDK). These results also reveal the significance of quality factors in the iris recognition regression, accounting for variability in match scores. Based on the regression analysis, this study quantifies the decrease in match scores with increased elapsed time, which suggests the possibility of implementing a prediction scheme for iris recognition performance based on learned time-lapse factors.
NASA Astrophysics Data System (ADS)
Hirthe, Eugenia M.; Graf, Thomas
2012-12-01
The automatic non-iterative second-order time-stepping scheme based on the temporal truncation error proposed by Kavetski et al. [Kavetski D, Binning P, Sloan SW. Non-iterative time-stepping schemes with adaptive truncation error control for the solution of Richards equation. Water Resour Res 2002;38(10):1211, http://dx.doi.org/10.1029/2001WR000720.] is implemented into the code of the HydroGeoSphere model. This time-stepping scheme is applied for the first time to the low-Rayleigh-number thermal Elder problem of free convection in porous media [van Reeuwijk M, Mathias SA, Simmons CT, Ward JD. Insights from a pseudospectral approach to the Elder problem. Water Resour Res 2009;45:W04416, http://dx.doi.org/10.1029/2008WR007421.], and to the solutal [Shikaze SG, Sudicky EA, Schwartz FW. Density-dependent solute transport in discretely-fractured geological media: is prediction possible? J Contam Hydrol 1998;34:273-91] problem of free convection in fractured-porous media. Numerical simulations demonstrate that the proposed scheme efficiently limits the temporal truncation error to a user-defined tolerance by controlling the time-step size. The non-iterative second-order time-stepping scheme can be applied to (i) thermal and solutal variable-density flow problems, (ii) linear and non-linear density functions, and (iii) problems including porous and fractured-porous media.
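The general principle behind truncation-error-based adaptive time stepping — advance with two estimates of different order, accept the step when their difference is within a user-defined tolerance, and resize the step accordingly — can be sketched with a toy Euler/Heun pair (our own simplification; this is neither the Kavetski scheme nor the HydroGeoSphere code):

```python
import math

def adaptive_step(f, t, y, dt, tol):
    """Take one step of dy/dt = f(t, y): compare explicit Euler (1st order)
    with Heun (2nd order); the difference is a local truncation-error proxy.
    Accept when it is within tol, otherwise halve dt and retry."""
    while True:
        k1 = f(t, y)
        euler = y + dt * k1
        heun = y + 0.5 * dt * (k1 + f(t + dt, euler))
        err = abs(heun - euler)
        if err <= tol:
            # grow the next step, with a safety factor and a 2x cap
            grow = min(2.0, 0.9 * math.sqrt(tol / err)) if err > 0 else 2.0
            return t + dt, heun, dt * grow
        dt *= 0.5

def integrate(f, t0, y0, t_end, dt0=0.1, tol=1e-4):
    t, y, dt = t0, y0, dt0
    while t < t_end:
        t, y, dt = adaptive_step(f, t, y, min(dt, t_end - t), tol)
    return y

y = integrate(lambda t, y: -y, 0.0, 1.0, 1.0)  # dy/dt = -y, y(0) = 1
print(abs(y - math.exp(-1)) < 1e-2)  # → True
```

The controller automatically takes small steps where the error estimate is large and long steps elsewhere, which is the efficiency gain the abstract describes.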
5 CFR 551.424 - Time spent adjusting grievances or performing representational functions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Hours of Work... shall be considered hours of work. (b) “Official time” granted an employee by an agency to perform... hours of work. This includes time spent by an employee performing such functions during regular...
ERIC Educational Resources Information Center
Grossman, Arnold H.; Foss, Alexander H.; D'Augelli, Anthony R.
2014-01-01
This study examined pubertal maturation, pubertal timing and outcomes, and the relationship of puberty and sexual identity developmental milestones among 507 lesbian, gay, and bisexual youth. The onset of menarche and spermarche occurred at the mean ages of 12.05 and 12.46, respectively. There was no statistically significant difference in…
NASA Astrophysics Data System (ADS)
Öktem, Hakan; Pearson, Ronald; Egiazarian, Karen
2003-12-01
Following the complete sequencing of several genomes, interest has grown in the construction of genetic regulatory networks, which attempt to describe how different genes work together in both normal and abnormal cells. This interest has led to significant research in the behavior of abstract network models, with Boolean networks emerging as one particularly popular type. An important limitation of these networks is that their time evolution is necessarily periodic, motivating our interest in alternatives that are capable of a wider range of dynamic behavior. In this paper we examine one such class, that of continuous-time Boolean networks, a special case of the class of Boolean delay equations (BDEs) proposed for climatic and seismological modeling. In particular, we incorporate a biologically motivated refractory period into the dynamic behavior of these networks, which exhibit binary values like traditional Boolean networks, but which, unlike Boolean networks, evolve in continuous time. In this way, we are able to overcome both computational and theoretical limitations of the general class of BDEs while still achieving dynamics that are either aperiodic or effectively so, with periods many orders of magnitude longer than those of even large discrete time Boolean networks.
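A minimal Boolean delay equation can convey the flavor of continuous-time Boolean dynamics (this two-node toy with invented delays is our own illustration, not a network from the paper): x1 copies x2 after delay T2 while x2 negates x1 after delay T1, a delayed negative feedback loop that oscillates with period 2 * (T1 + T2).

```python
# Two-node Boolean delay equation sampled on a fine time grid:
#   x1(t) = x2(t - T2),   x2(t) = NOT x1(t - T1)
dt = 0.01
T1, T2 = 1.0, 0.618            # invented, incommensurate delays
n1, n2 = round(T1 / dt), round(T2 / dt)
hist1, hist2 = [0] * max(n1, n2), [1] * max(n1, n2)  # constant initial history
for _ in range(2000):
    x1 = hist2[-n2]            # x1 copies x2, delayed by T2
    x2 = 1 - hist1[-n1]        # x2 negates x1, delayed by T1
    hist1.append(x1)
    hist2.append(x2)
# The composed loop obeys x1(t) = NOT x1(t - T1 - T2): delayed negative
# feedback, hence sustained oscillation even though each state is binary.
print(sorted(set(hist1[-500:])))  # → [0, 1]
```

With more nodes and incommensurate delays such systems can produce the very long effective periods the abstract mentions.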
The Influence of Time Spent in Outdoor Play on Daily and Aerobic Step Count in Costa Rican Children
ERIC Educational Resources Information Center
Morera Castro, Maria del Rocio
2011-01-01
The purpose of this study is to examine the influence of time spent in outdoor play (i.e., on weekday and weekend days) on daily (i.e., average step count) and aerobic step count (i.e., average moderate to vigorous physical activity [MVPA] during the weekdays and weekend days) in fifth grade Costa Rican children. It was hypothesized that: (a)…
Tremblay, Jean Christophe; Carrington, Tucker Jr.
2004-12-15
If the Hamiltonian is time dependent it is common to solve the time-dependent Schroedinger equation by dividing the propagation interval into slices and using an (e.g., split operator, Chebyshev, Lanczos) approximate matrix exponential within each slice. We show that a preconditioned adaptive step size Runge-Kutta method can be much more efficient. For a chirped laser pulse designed to favor the dissociation of HF the preconditioned adaptive step size Runge-Kutta method is about an order of magnitude more efficient than the time sliced method.
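The time-sliced strategy being compared against can be sketched for a two-level toy system, where each slice applies an exact matrix exponential of the instantaneous Hamiltonian (the pulse shape, units, and the choice H(t) = Ω(t) σx are invented; because all slices commute in this toy, the sliced product is exact and easy to check):

```python
import math

def omega(t):
    """Invented Gaussian pulse envelope Ω(t)."""
    return math.exp(-(t - 2.0) ** 2)

def slice_step(psi, om, dt):
    """Apply the exact slice propagator exp(-i Ω dt σx) to ψ = (a, b)."""
    c, s = math.cos(om * dt), math.sin(om * dt)
    a, b = psi
    return (c * a - 1j * s * b, -1j * s * a + c * b)

dt, n = 0.001, 4000
psi = (1.0 + 0j, 0.0 + 0j)     # start in the ground state
for k in range(n):
    psi = slice_step(psi, omega((k + 0.5) * dt), dt)  # midpoint sampling

area = sum(omega((k + 0.5) * dt) * dt for k in range(n))  # pulse area ∫Ω dt
pop1 = abs(psi[1]) ** 2
# Rotations about a fixed axis compose by adding angles, so the sliced
# product equals exp(-i * area * σx) and the excited population is sin²(area).
print(abs(pop1 - math.sin(area) ** 2) < 1e-6)  # → True
```

For a genuinely non-commuting H(t) — the case of a chirped pulse — the slices no longer compose exactly, which is where adaptive Runge-Kutta integration can win.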
Cai, Shanqing; Beal, Deryk S; Ghosh, Satrajit S; Guenther, Frank H; Perkell, Joseph S
2014-02-01
Auditory feedback (AF), the speech signal received by a speaker's own auditory system, contributes to the online control of speech movements. Recent studies based on AF perturbation provided evidence for abnormalities in the integration of auditory error with ongoing articulation and phonation in persons who stutter (PWS), but stopped short of examining connected speech. This is a crucial limitation considering the importance of sequencing and timing in stuttering. In the current study, we imposed time-varying perturbations on AF while PWS and fluent participants uttered a multisyllabic sentence. Two distinct types of perturbations were used to separately probe the control of the spatial and temporal parameters of articulation. While PWS exhibited only subtle anomalies in the AF-based spatial control, their AF-based fine-tuning of articulatory timing was substantially weaker than normal, especially in early parts of the responses, indicating slowness in the auditory-motor integration for temporal control. PMID:24486601
ERIC Educational Resources Information Center
Mucientes, A. E.; de la Pena, M. A.
2009-01-01
The concentration-time integrals method has been used to solve kinetic equations of parallel-consecutive first-order reactions with a reversible step. This method involves the determination of the area under the curve for the concentration of a given species against time. Computer techniques are used to integrate experimental curves and the method…
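The central quantity — the area under a concentration-time curve — can be obtained from sampled data with the trapezoidal rule (the data points below are invented for illustration):

```python
# Trapezoidal area under a sampled concentration-time curve.
t = [0.0, 1.0, 2.0, 4.0, 8.0]   # time
c = [0.0, 0.8, 1.2, 1.0, 0.4]   # concentration of a given species

auc = sum((c[i] + c[i + 1]) / 2 * (t[i + 1] - t[i]) for i in range(len(t) - 1))
print(round(auc, 2))  # → 6.4
```

Experimental curves would be integrated the same way, point by point, before the areas are substituted into the kinetic equations.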
Lavner, Justin A; Waterman, Jill; Peplau, Letitia Anne
2014-01-01
Although increasing numbers of gay and lesbian individuals and couples are adopting children, gay men and lesbian women continue to face increased scrutiny and legal obstacles from the child welfare system. To date, little research has compared the experiences of gay or lesbian and heterosexual adoptive parents over time, limiting conceptual understandings of the similarities they share and the unique challenges that gay and lesbian adoptive parents may face. This study compared the adoption satisfaction, depressive symptoms, parenting stress, and social support at 2, 12, and 24 months postplacement of 82 parents (60 heterosexual, 15 gay, 7 lesbian) adopting children from foster care in Los Angeles County. Few differences were found between heterosexual and gay or lesbian parents at any of the assessments or in their patterns of change over time. On average, parents in both household types reported significant increases in adoption satisfaction and maintained low, nonclinical levels of depressive symptoms and parenting stress over time. Across all family types, greater parenting stress was associated with more depressive symptoms and lower adoption satisfaction. Results indicated many similarities between gay or lesbian and heterosexual adoptive parents, and highlight a need for services to support adoptive parents throughout the transition to parenthood to promote their well-being. (PsycINFO Database Record (c) 2014 APA, all rights reserved). PMID:24826826
NASA Astrophysics Data System (ADS)
Willems, Patrick
2014-03-01
Common problems faced by rainfall-runoff modellers are data limitation, model overparameterization and the related problem of parameter identifiability. Depending on the application, possible solutions include using parsimonious conceptual models and, rather than a fixed pre-defined model conceptualization, applying a "top-down" or "downward" method that allows the model structure to be adjusted or inferred from available data and field evidence. This paper presents a top-down procedure that starts from a generalized model structure framework, which is then adjusted in a case-specific, parsimonious way. The model-structure building is done in a transparent, step-wise way, where separate parts of the model structure are identified and calibrated based on multiple and non-commensurable pieces of information derived from river flow series by means of a number of sequential time series processing tasks. These include separation of the high-frequency (e.g., hourly, daily) river flow series into subflows, splitting of the series into approximately independent quick- and slow-flow hydrograph periods, and the extraction of independent peak and low flows. The model building and calibration account for the statistical assumptions and requirements on independence and homoscedasticity of the model residuals. Next to identification of the subflow recessions and related routing submodels, equations describing quick and slow runoff sub-responses and soil water storage are derived from the time series data. The method includes testing of the model performance for peak and low flow extremes.
Hall, D.L.; Gardenier, T.K.; Slavich, A.L.
1984-10-01
This report deals specifically with changes made to the survey forms in January 1981 and the resulting changes to the data-time series. Naturally, when a series has changed at some time point, the data after the change are no longer comparable to those before. In many cases, though, comparisons are desired that use pre- and post-intervention data as a series. It is thus necessary to have a methodology for updating the older data so that such comparisons can be made validly. To produce this methodology, the particular intervention must be modeled. However, when attempting to analyze one particular intervention, other types of interventions must be considered also. If effects of the other interventions can be modeled, the overall variability of the series can be reduced and the intervention of interest can be better isolated. Thus, in the following, we discuss (in addition to the format modifications of the forms) the trends and changes noted in the JPRS since January 1976 to December 1982. The year 1976 was chosen since it corresponds to the first year for which microdata are computerized in a universal format in the JPRS master files. We discuss, in particular, changes to the data series for inventories of: (a) motor gasoline, (b) distillate oil, (c) residual fuel oil, and (d) crude oil. These are the series studied in detail in subsequent sections of this report.
A Step Response Based Mixed-Signal BIST Approach for Continuous-time Linear Circuits
NASA Technical Reports Server (NTRS)
Walker, Alvernon; Lala, P. K.
2001-01-01
A new mixed-signal built-in self-test (BIST) approach based on the step response of a reconfigurable (or multifunction) analog block is presented in this paper. The technique requires the overlapping step responses of the Circuit Under Test (CUT) for two circuit configurations. Each configuration can be realized by changing the topology of the CUT or by sampling two CUT nodes with differing step responses. The technique can effectively detect both soft and hard faults and does not require an analog-to-digital converter (ADC) and/or digital-to-analog converter (DAC). It also does not require any precision voltage sources or comparators, nor any additional analog circuits to realize the test signal generator and sampling circuits. The paper concludes with the application of the proposed approach to a circuit found in the work of Epstein et al. and two ITC 97 analog benchmark circuits.
NASA Astrophysics Data System (ADS)
Alerskans, Emy; Kaas, Eigil
2016-04-01
In semi-Lagrangian models used for climate and NWP the trajectories are typically determined kinematically. Here we propose a new method for calculating trajectories in a more dynamically consistent way by pre-integrating the governing equations in a pseudo-Lagrangian manner using a short time step. Only non-advective adiabatic terms are included in this calculation, i.e., the Coriolis and pressure gradient force plus gravity in the momentum equations, and the divergence term in the continuity equation. This integration is performed with a forward-backward time step. Optionally, the tendencies are filtered with a local space filter, which reduces the phase speed of short wave gravity and sound waves. The filter relaxes the time step limitation related to high frequency oscillations without compromising locality of the solution. The filter can be considered as an alternative to less local or global semi-implicit solvers. Once trajectories are estimated over a complete long advective time step, the full set of governing equations is stepped forward using these trajectories in combination with a flux form semi-Lagrangian formulation of the equations. The methodology is designed to improve consistency and scalability on massively parallel systems, although here it has only been verified that the technique produces realistic results in a shallow water model and a 2D model based on the full Euler equations.
NASA Astrophysics Data System (ADS)
Peltier, W.
2006-05-01
It has recently been suggested in Mitrovica, Wahr et al. (2005, Geophys. J. Int. 161, 491-506) that the theory previously developed to predict the Earth's rotational response to the Late Quaternary glaciation-deglaciation cycle may require modification. This theory was initially described in Peltier (1982, Advances in Geophysics 24, 1-146) and in Wu and Peltier (1984, Geophys. J. R. astr. Soc. 76, 202-242). Its importance for understanding the GIA contribution to the modern rate of geoid height time dependence that is currently being measured by the GRACE satellite system lies in the fact that the polar wander induced by the ice-age cycle contributes to this field in an important way. It has proven possible to test the quality of the original form of the theory in a definitive way by employing Holocene inferences of relative sea level history based upon radiocarbon-dated sea level index points. This test relies upon data from a wide range of sites on the Earth's surface, sites located in regions that are expected to be most strongly influenced by the feedback of the polar wander component of the Earth's rotational response to the glaciation cycle onto sea level history itself. Application of the test demonstrates that the claims made in the Mitrovica, Wahr et al. paper concerning the existence of a flaw in the theory are incorrect. The previously published ICE-5G (VM2) prediction of the expected geoid height time dependence due to the GIA process is therefore secure (see Peltier, 2005, QSR 24, 1655-1671).
Pimazoni-Netto, Augusto; Zanella, Maria Teresa
2014-11-01
Clinical inertia and poor knowledge by many physicians play an important role in delaying diabetes control. Among other guidelines, the Position Statement of the American Diabetes Association/European Association for the Study of Diabetes on Management of Hyperglycemia in Type 2 Diabetes is a respected guideline with high impact on this subject in terms of influencing physicians in the definition of strategic approach to overcome poor glycemic control. But, on the other hand, it carries a recommendation that might contribute to clinical inertia because it can delay the needed implementation of more vigorous, intensive, and effective strategies to overcome poor glycemic control within a reasonable time frame during the evolution of the disease. The same is true with other respected algorithms from different diabetes associations. Together with pharmacological interventions, diabetes education and more intensive blood glucose monitoring in the initial phases after the diagnosis are key strategies for the effective control of diabetes. The main reason why a faster glycemic control should be implemented in an effective and safe way is to boost the confidence and the compliance of the patient to the recommendations of the diabetes care team. Better and faster results in glycemic control can only be safely achieved with educational strategies, structured self-monitoring of blood glucose, and adequate pharmacological therapy in the majority of cases. PMID:24892463
Adjustment to Subtle Time Constraints and Power Law Learning in Rapid Serial Visual Presentation.
Shin, Jacqueline C; Chang, Seah; Cho, Yang Seok
2015-01-01
We investigated whether attention could be modulated through the implicit learning of temporal information in a rapid serial visual presentation (RSVP) task. Participants identified two target letters among numeral distractors. The stimulus-onset asynchrony immediately following the first target (SOA1) varied at three levels (70, 98, and 126 ms) randomly between trials or fixed within blocks of trials. Practice over 3 consecutive days resulted in a continuous improvement in the identification rate for both targets and attenuation of the attentional blink (AB), a decrement in target (T2) identification when presented 200-400 ms after another target (T1). Blocked SOA1s led to a faster rate of improvement in RSVP performance and more target order reversals relative to random SOA1s, suggesting that the implicit learning of SOA1 positively affected performance. The results also reveal "power law" learning curves for individual target identification as well as the reduction in the AB decrement. These learning curves reflect the spontaneous emergence of skill through subtle attentional modulations rather than general attentional distribution. Together, the results indicate that implicit temporal learning could improve high level and rapid cognitive processing and highlights the sensitivity and adaptability of the attentional system to subtle constraints in stimulus timing. PMID:26635662
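Power-law learning curves of the form RT = a * N^(-b) are conventionally fit by linear regression in log-log coordinates, since log RT = log a - b * log N. A minimal sketch with made-up response times:

```python
import math

trials = [1, 2, 4, 8, 16, 32]
rt = [1000, 760, 580, 440, 335, 255]   # hypothetical response times (ms)

# Ordinary least squares on (log N, log RT); the slope is -b and the
# intercept is log a.
xs = [math.log(n) for n in trials]
ys = [math.log(v) for v in rt]
m = len(xs)
mx, my = sum(xs) / m, sum(ys) / m
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
a, b = math.exp(my - slope * mx), -slope
print(round(b, 2))  # → 0.39
```

A straight line in log-log space is the usual diagnostic that practice data follow a power law rather than, say, exponential learning.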
Inducible, Dose-Adjustable and Time-Restricted Reconstitution of Stat1 Deficiency In Vivo
Leitner, Nicole R.; Lassnig, Caroline; Rom, Rita; Heider, Susanne; Bago-Horvath, Zsuzsanna; Eferl, Robert; Müller, Simone; Kolbe, Thomas; Kenner, Lukas; Rülicke, Thomas; Strobl, Birgit; Müller, Mathias
2014-01-01
Signal transducer and activator of transcription (STAT) 1 is a key player in interferon (IFN) signaling, essential in mediating host defense against viruses and other pathogens. STAT1 levels are tightly regulated and loss- or gain-of-function mutations in mice and men lead to severe diseases. We have generated a doxycycline (dox) -inducible, FLAG-tagged Stat1 expression system in mice lacking endogenous STAT1 (i.e. Stat1ind mice). We show that STAT1 expression depends on the time and dose of dox treatment in primary cells and a variety of organs isolated from Stat1ind mice. In bone marrow-derived macrophages, a fraction of the amount of STAT1 present in WT cells is sufficient for full expression of IFN-induced genes. Dox-induced STAT1 established protection against virus infections in primary cells and mice. The availability of the Stat1ind mouse model will enable an examination of the consequences of variable amounts of STAT1. The model will also permit the study of STAT1 dose-dependent and reversible functions as well as of STAT1's contributions to the development, progression and resolution of disease. PMID:24489749
Corrales, Louis R.; Devanathan, Ram
2006-09-01
Non-equilibrium molecular dynamics simulation trajectories must in principle conserve energy along the entire path. Processes exist in high-energy primary knock-on atom cascades that can affect the energy conservation, specifically during the ballistic phase where collisions bring atoms into very close proximities. The solution, in general, is to reduce the time step size of the simulation. This work explores the effects of variable time step algorithms and the effects of specifying a maximum displacement. The period of the ballistic phase can be well characterized by methods developed in this work to monitor the kinetic energy dissipation during a high-energy cascade.
ERIC Educational Resources Information Center
Thompson, Ron; Robinson, Denise
2008-01-01
The unprecedented degree of attention given to the learning and skills sector in England by successive New Labour governments has led to a significant increase in what is expected of the teaching workforce. To help meet these expectations, a "step change" in the quality of initial teacher training for the sector is promised, alongside provisions…
BIOMAP A Daily Time Step, Mechanistic Model for the Study of Ecosystem Dynamics
NASA Astrophysics Data System (ADS)
Wells, J. R.; Neilson, R. P.; Drapek, R. J.; Pitts, B. S.
2010-12-01
of both climate and ecosystems must be done at coarse grid resolutions; smaller domains require higher resolution for the simulation of natural resource processes at the landscape scale and that of on-the-ground management practices. Via a combined multi-agency and private conservation effort we have implemented a Nested Scale Experiment (NeScE) that ranges from 1/2 degree resolution (global, ca. 50 km) to ca. 8km (North America) and 800 m (conterminous U.S.). Our first DGVM, MC1, has been implemented at all 3 scales. We are just beginning to implement BIOMAP into NeScE, with its unique features, and daily time step, as a counterpoint to MC1. We believe it will be more accurate at all resolutions providing better simulations of vegetation distribution, carbon balance, runoff, fire regimes and drought impacts.
Multi-Step Ahead Predictions for Critical Levels in Physiological Time Series.
ElMoaqet, Hisham; Tilbury, Dawn M; Ramachandran, Satya Krishna
2016-07-01
Standard modeling and evaluation methods have been classically used in analyzing engineering dynamical systems where the fundamental problem is to minimize the (mean) error between the real and predicted systems. Although these methods have been applied to multi-step ahead predictions of physiological signals, it is often more important to predict clinically relevant events than just to match these signals. Adverse clinical events, which occur after a physiological signal breaches a clinically defined critical threshold, are a popular class of such events. This paper presents a framework for multi-step ahead predictions of critical levels of abnormality in physiological signals. First, a performance metric is presented for evaluating multi-step ahead predictions. Then, this metric is used to identify personalized models optimized with respect to predictions of critical levels of abnormality. To address the paucity of adverse events, weighted support vector machines and cost-sensitive learning are used to optimize the proposed framework with respect to statistical metrics that can take into account the relative rarity of such events. PMID:27244754
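The spirit of an event-oriented metric — scoring predictions by whether they flag the same threshold breaches as the actual signal, rather than by mean error alone — can be sketched as follows (the threshold, signal values, and the particular sensitivity/precision summary are illustrative simplifications, not the paper's metric):

```python
THRESHOLD = 90.0  # illustrative critical level (e.g. a percent-scale signal)

def event_metrics(actual, predicted, threshold=THRESHOLD):
    """Sensitivity and precision of predicted threshold breaches,
    scored sample-by-sample against the actual signal."""
    a = [x < threshold for x in actual]
    p = [x < threshold for x in predicted]
    tp = sum(1 for x, y in zip(a, p) if x and y)
    fn = sum(1 for x, y in zip(a, p) if x and not y)
    fp = sum(1 for x, y in zip(a, p) if not x and y)
    sens = tp / (tp + fn) if tp + fn else 1.0
    prec = tp / (tp + fp) if tp + fp else 1.0
    return round(sens, 3), round(prec, 3)

actual    = [95, 93, 89, 87, 91, 94, 88, 96]
predicted = [94, 92, 90, 86, 90, 93, 87, 95]  # e.g. multi-step-ahead forecasts
print(event_metrics(actual, predicted))  # → (0.667, 1.0)
```

A model with a small mean error can still miss breaches near the threshold, which is why an event-based score diverges from the standard error-minimizing view.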
Energy Science and Technology Software Center (ESTSC)
2014-06-01
ARKode is part of a software family called SUNDIALS: SUite of Nonlinear and DIfferential/ALgebraic equation Solvers [1]. The ARKode solver library provides an adaptive-step time integration package for stiff, nonstiff and multi-rate systems of ordinary differential equations (ODEs) using Runge-Kutta methods [2].
Li, Chunlin; Zhou, Lizhi; Xu, Li; Zhao, Niannian; Beauchamp, Guy
2015-01-01
Due to loss and degradation of natural wetlands, waterbirds increasingly rely on surrounding human-dominated habitats to obtain food. Quantifying vigilance patterns, investigating the trade-off among various activities, and examining the underlying mechanisms will help us understand how waterbirds adapt to human-caused disturbances. During two successive winters (November-February of 2012-13 and 2013-14), we studied the hooded crane, Grus monacha, in the Shengjin Lake National Nature Reserve (NNR), China, to investigate how the species responds to human disturbances through vigilance and activity time-budget adjustments. Our results showed striking differences in the behavior of the cranes when foraging in the highly disturbed rice paddy fields found in the buffer zone compared with the degraded natural wetlands in the core area of the NNR. Time spent vigilant decreased with flock size and cranes spent more time vigilant in the human-dominated buffer zone. In the rice paddy fields, the birds were more vigilant but also fed more at the expense of locomotion and maintenance activities. Adult cranes spent more time vigilant and foraged less than juveniles. We recommend habitat recovery in natural wetlands and community co-management in the surrounding human-dominated landscape for conservation of the hooded crane and, generally, for the vast numbers of migratory waterbirds wintering in the middle and lower reaches of the Yangtze River floodplain. PMID:25768111
Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.
2014-01-15
Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very
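A first-order superstep of the RKL1 type can be sketched for 1D diffusion. The Legendre-recursion coefficients used below (mu_j = (2j-1)/j, nu_j = (1-j)/j, mu~_j = 2*mu_j/(s²+s)) are reproduced from memory of the published RKL1 construction and should be verified against the paper; the grid size, s, and safety factor are our own choices:

```python
import math

D, nx = 1.0, 64
dx = 1.0 / nx
dt_expl = dx * dx / (2 * D)            # explicit Euler stability limit
s = 5
dt = 0.9 * (s * s + s) / 2 * dt_expl   # one superstep ~ s(s+1)/2 explicit steps

def lap(u):
    """Periodic discrete Laplacian scaled by D/dx^2."""
    n = len(u)
    return [D * (u[i - 1] - 2 * u[i] + u[(i + 1) % n]) / dx ** 2
            for i in range(n)]

def rkl1_step(u, dt, s):
    """One RKL1 superstep built on the Legendre three-term recursion."""
    mu_tilde_1 = 2.0 / (s * s + s)
    y_prev = u[:]
    y = [ui + mu_tilde_1 * dt * li for ui, li in zip(u, lap(u))]
    for j in range(2, s + 1):
        mu, nu = (2 * j - 1) / j, (1 - j) / j
        mu_tilde = mu * 2.0 / (s * s + s)
        l = lap(y)
        y_prev, y = y, [mu * yi + nu * ypi + mu_tilde * dt * li
                        for yi, ypi, li in zip(y, y_prev, l)]
    return y

u = [math.sin(2 * math.pi * i * dx) for i in range(nx)]
for _ in range(20):
    u = rkl1_step(u, dt, s)
print(max(abs(x) for x in u) < 1.0)  # amplitude decays under diffusion → True
```

Each superstep here covers 13.5 explicit time steps yet stays stable, which is the efficiency argument for super-time-stepping.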
NASA Astrophysics Data System (ADS)
Chen, Chun-Jung; Chang, Allen Y.; Tsai, Chang-Lung; Lee, Chih-Jen; Chou, Li-Ping; Shin, Tien-Hao
2012-04-01
A modified Waveform Relaxation algorithm with transmission line calculation ability is proposed to perform large-scale circuit simulation for MOSFET circuits with lossy coupled transmission lines. The adopted full time-domain transmission line calculation algorithm, based on the Method of Characteristics, is equipped with a time step control scheme to improve calculation efficiency. All proposed methods have been implemented in a simulation program and used to simulate several circuits. The simulation results demonstrate the effectiveness of the proposed methods.
NASA Technical Reports Server (NTRS)
Rosenfeld, Moshe
1990-01-01
The development, validation and application of a fractional step solution method of the time-dependent incompressible Navier-Stokes equations in generalized coordinate systems are discussed. A solution method that combines a finite-volume discretization with a novel choice of the dependent variables and a fractional step splitting to obtain accurate solutions in arbitrary geometries was previously developed for fixed-grids. In the present research effort, this solution method is extended to include more general situations, including cases with moving grids. The numerical techniques are enhanced to gain efficiency and generality.
NASA Astrophysics Data System (ADS)
Tsekouras, Konstantinos; Presse, Steve
Counting photobleaching steps is important for investigating many open problems in biophysics. Current methods of counting photobleaching steps cannot directly account for fluorophore photophysical behaviors such as self-quenching, blinking and flickering. Our Bayesian approach to the counting problem allows for fluorophore blinking and reactivation as well as for multiple simultaneous photobleaching events, and requires neither heavy computation nor long run times. We detail the method's applicability and limitations and present examples of application in photobleach event counting.
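To make the counting problem concrete, here is a deliberately naive non-Bayesian baseline (the kind of method the abstract improves upon): smooth the intensity trace and count downward jumps exceeding a threshold. It cannot handle blinking or simultaneous bleach events; the thresholds and the synthetic trace are illustrative assumptions.

```python
import numpy as np

def count_steps(trace, window=5, min_drop=0.5):
    """Count downward intensity steps in a photobleaching trace by
    thresholding the slope of a moving-average-smoothed signal."""
    kernel = np.ones(window) / window
    smooth = np.convolve(trace, kernel, mode="valid")
    diffs = np.diff(smooth)
    steps, i = 0, 0
    while i < len(diffs):
        if diffs[i] < -min_drop / window:  # a drop of >= min_drop, spread over the window
            steps += 1
            i += window                    # skip the rest of this transition
        else:
            i += 1
    return steps

# synthetic trace: three fluorophores bleaching one by one, plus mild noise
rng = np.random.default_rng(1)
levels = np.concatenate([np.full(50, v) for v in (3.0, 2.0, 1.0, 0.0)])
trace = levels + 0.03 * rng.standard_normal(levels.size)
n_steps = count_steps(trace)
```

A Bayesian treatment instead scores whole trajectories of bleach/blink events against the data, which is what lets it separate reactivation from a genuine extra fluorophore.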
A dissociative fluorescence enhancement technique for one-step time-resolved immunoassays
Mukkala, Veli-Matti; Hakala, Harri H. O.; Mäkinen, Pauliina H.; Suonpää, Mikko U.; Hemmilä, Ilkka A.
2010-01-01
The limitation of current dissociative fluorescence enhancement techniques is that the lanthanide chelate structures used as molecular probes are not stable enough in one-step assays with high concentrations of complexones or metal ions in the reaction mixture, since these substances interfere with the lanthanide chelate conjugated to the detector molecule. Lanthanide chelates of diethylenetriaminepentaacetic acid (DTPA) are extremely stable, and we used EuDTPA derivatives conjugated to antibodies as tracers in one-step immunoassays containing high concentrations of complexones or metal ions. Enhancement solutions based on different β-diketones were developed and tested for their fluorescence-enhancing capability in immunoassays with EuDTPA-labelled antibodies. Characteristics tested were fluorescence intensity, analytical sensitivity, kinetics of complex formation and signal stability. Formation of fluorescent complexes in the presented enhancement solution is fast (5 min), with EuDTPA probes withstanding strong complexones (ethylenediaminetetraacetate (EDTA) up to 100 mM) or metal ions (up to 200 μM) in the reaction mixture; the signal is intense and stable for 4 h, and the analytical sensitivity is 40 fmol/L for Eu, 130 fmol/L for Tb, 2.1 pmol/L for Sm and 8.5 pmol/L for Dy. With the improved fluorescence enhancement technique, EDTA and citrate plasma samples as well as samples containing relatively high concentrations of metal ions can be analysed using a one-step immunoassay format, also at elevated temperatures. It facilitates four-plexing, is based on one chelate structure for detector molecule labelling and is suitable for immunoassays due to the wide dynamic range and the analytical sensitivity. PMID:21161513
Lockwood, Ashley M; Perez, Katherine K; Musick, William L; Ikwuagwu, Judy O; Attia, Engie; Fasoranti, Oyejoke O; Cernoch, Patricia L; Olsen, Randall J; Musser, James M
2016-04-01
OBJECTIVE To assess the impact of Matrix-Assisted Laser Desorption/Ionization Time-of-Flight (MALDI-TOF) mass spectrometry for rapid pathogen identification directly from early-positive blood cultures, coupled with an antimicrobial stewardship program (ASP), in two community hospitals. Process measures and outcomes before and after implementation of MALDI-TOF/ASP were evaluated. DESIGN Multicenter retrospective study. SETTING Two community hospitals in a system setting, Houston Methodist (HM) Sugar Land Hospital (235 beds) and HM Willowbrook Hospital (241 beds). PATIENTS Patients ≥ 18 years of age with culture-proven Gram-negative bacteremia. INTERVENTION Blood cultures from both hospitals were sent to and processed at our central microbiology laboratory. Clinical pharmacists at the respective hospitals were notified of pathogen ID and susceptibility results. RESULTS We evaluated 572 patients for possible inclusion. After pre-defined exclusion criteria, 151 patients were included in the pre-intervention group and 242 were included in the intervention group. After MALDI-TOF/ASP implementation, the mean identification time after culture positivity was significantly reduced from 32 hours (±16 hours) to 6.5 hours (±5.4 hours) (P<.001); mean time to susceptibility results was significantly reduced from 48 (±22) hours to 23 (±14) hours (P<.001); and time to therapy adjustment was significantly reduced from 75 (±59) hours to 30 (±30) hours (P<.001). Mean hospital costs per patient were $3,411 less in the intervention group compared with the pre-intervention group ($18,645 vs $15,234; P=.04). CONCLUSION This study is the first to analyze the impact of MALDI-TOF coupled with an ASP in a community hospital setting. Time to results significantly differed with the use of MALDI-TOF, and time to appropriate therapy was significantly improved with the addition of ASP. PMID:26738993
Kasa, Sawinee; Faksri, Kiatichai; Kaewkes, Wanlop; Lulitanond, Viraphong; Namwat, Wises
2015-01-01
Mycobacterium tuberculosis (M. tb) is a causative agent of tuberculosis, a worldwide public health problem. In recent years, the incidence of human mycobacterial infection due to species other than M. tb has increased. However, the lack of specific, rapid, and inexpensive methods for identification of mycobacterial species remains a pressing problem. A diagnostic test was developed for mycobacterial strain differentiation utilizing a double-step multiplex real-time PCR together with melting curve analysis for identifying and distinguishing among M. tb, M. bovis BCG, other members of the M. tb complex, M. avium, and non-tuberculosis mycobacteria. The assay was tested using 167 clinical sputum samples in comparison with acid-fast staining and culturing. Using only the first step (step A), the assay achieved sensitivity and specificity of 81% and 95%, respectively. The detection limit was equivalent to 50 genome copies. PMID:26513906
Kuroda, Yoshihiro; Nisky, Ilana; Uranishi, Yuki; Imura, Masataka; Okamura, Allison M; Oshiro, Osamu
2013-01-01
We present a novel algorithm for real-time detection of the onset of surface electromyography signal in step-tracking wrist movements. The method identifies abrupt increase of the quasi-tension signal calculated from sEMG resulting from the step-by-step recruitment of activated motor units. We assessed the performance of our proposed algorithm using both simulated and real sEMG signals, and compared with two existing detection methods. Evaluation with simulated sEMG showed that the detection accuracy of our method is robust to different signal-to-noise ratios, and that it outperforms the existing methods in terms of bias when the noise is large (low SNR). Evaluation with real sEMG analysis also indicated better detection performance compared to existing methods. PMID:24110123
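The general shape of such an onset detector can be sketched as follows. This is a generic amplitude-threshold baseline (rectify, smooth into an envelope, flag the first crossing of baseline mean plus k standard deviations), not the quasi-tension algorithm of the abstract; the sampling rate, window lengths, a deliberately conservative k, and the synthetic trial are all illustrative assumptions.

```python
import numpy as np

def detect_onset(semg, fs, baseline_s=0.3, window_s=0.025, k=8.0):
    """Return the first sample where the rectified, smoothed sEMG envelope
    exceeds baseline mean + k * baseline std (None if it never does)."""
    win = max(1, int(fs * window_s))
    env = np.convolve(np.abs(semg), np.ones(win) / win, mode="same")
    nbase = int(fs * baseline_s)                  # assumed quiet baseline period
    thr = env[:nbase].mean() + k * env[:nbase].std()
    above = np.flatnonzero(env > thr)
    return int(above[0]) if above.size else None

# synthetic trial: 0.5 s of rest noise, then a burst starting at sample 500
fs = 1000
rng = np.random.default_rng(0)
rest = 0.05 * rng.standard_normal(500)
burst = 2.0 * rng.standard_normal(500)
onset = detect_onset(np.concatenate([rest, burst]), fs)
```

The bias the abstract refers to is visible in this baseline: the smoothing window delays or advances the detected index by up to half its width, which is exactly where more refined quasi-tension methods gain accuracy at low SNR.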
Ultra-fast consensus of discrete-time multi-agent systems with multi-step predictive output feedback
NASA Astrophysics Data System (ADS)
Zhang, Wenle; Liu, Jianchang
2016-04-01
This article addresses the ultra-fast consensus problem of high-order discrete-time multi-agent systems based on a unified consensus framework. A novel multi-step predictive output mechanism is proposed under a directed communication topology containing a spanning tree. By predicting the outputs of the network several steps ahead and adding this information into the consensus protocol, it is shown that the asymptotic convergence factor is improved by a power of q + 1 compared to the routine consensus protocol. The difficult problem of selecting the optimal control gain is solved by introducing a variable called the convergence step. In addition, ultra-fast formation achievement is studied on the basis of this new consensus protocol. Finally, ultra-fast consensus with respect to a reference model and robust consensus are discussed. Some simulations are performed to illustrate the effectiveness of the theoretical results.
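The convergence-factor intuition can be seen in a toy first-order example. The sketch below runs a routine consensus iteration on an undirected 4-cycle, then an idealized "predictive" variant that advances the protocol map q + 1 times per communication round; in the actual paper the prediction is done locally from output feedback, not by globally powering the iteration matrix, so this is only a schematic of why the convergence factor improves by a power of q + 1.

```python
import numpy as np

# Routine first-order consensus on an undirected 4-cycle:
# x(k+1) = x(k) - eps * L x(k), with eps below 1 / (max degree)
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
Lap = np.diag(A.sum(axis=1)) - A        # graph Laplacian
eps = 0.25
P = np.eye(4) - eps * Lap               # consensus iteration matrix

x = np.array([1.0, 3.0, 5.0, 7.0])
for _ in range(100):
    x = P @ x                           # routine protocol -> average consensus

# idealized multi-step predictive flavor: each round applies the protocol
# map q+1 times, so the convergence factor is raised to the power q+1
q = 2
x_pred = np.array([1.0, 3.0, 5.0, 7.0])
for _ in range(10):
    x_pred = np.linalg.matrix_power(P, q + 1) @ x_pred
```

On an undirected graph both runs settle on the initial average (here 4.0); the predictive variant gets there in far fewer communication rounds.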
Finn, John M.
2015-03-01
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.
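A minimal version of the implicit midpoint (IM) scheme discussed above, solved by fixed-point iteration, looks like the sketch below. The rotation test field and step size are illustrative; the point is that IM preserves quadratic invariants (here the field-line radius) essentially exactly, which is the behavior behind its preservation of KAM tori.

```python
import numpy as np

def implicit_midpoint_step(f, x, h, tol=1e-12, max_iter=50):
    """One implicit midpoint step x' = x + h * f((x + x') / 2),
    solved by fixed-point iteration (converges for small enough h)."""
    x_new = x + h * f(x)                      # explicit Euler predictor
    for _ in range(max_iter):
        x_next = x + h * f(0.5 * (x + x_new))
        if np.max(np.abs(x_next - x_new)) < tol:
            return x_next
        x_new = x_next
    return x_new

# divergence-free test field: rigid rotation, whose field lines are circles
f = lambda p: np.array([-p[1], p[0]])
p = np.array([1.0, 0.0])
for _ in range(1000):
    p = implicit_midpoint_step(f, p, 0.1)
r = np.hypot(p[0], p[1])   # should stay on the unit circle
```

A non-symplectic scheme such as explicit Euler would spiral outward on this field; the paper's concern is preserving exactly this kind of invariant structure while also allowing adaptive (non-uniform) time steps.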
Sawicki, Gregory S; Robertson, Benjamin D; Azizi, Emanuel; Roberts, Thomas J
2015-10-01
A growing body of research on the mechanics and energetics of terrestrial locomotion has demonstrated that elastic elements acting in series with contracting muscle are critical components of sustained, stable and efficient gait. Far fewer studies have examined how the nervous system modulates muscle-tendon interaction dynamics to optimize 'tuning' or meet varying locomotor demands. To explore the fundamental neuromechanical rules that govern the interactions between series elastic elements (SEEs) and contractile elements (CEs) within a compliant muscle-tendon unit (MTU), we used a novel work loop approach that included implanted sonomicrometry crystals along muscle fascicles. This enabled us to decouple CE and SEE length trajectories when cyclic strain patterns were applied to an isolated plantaris MTU from the bullfrog (Lithobates catesbeianus). Using this approach, we demonstrate that the onset timing of muscle stimulation (i.e. stimulation phase) that involves a symmetrical MTU stretch-shorten cycle during active force production results in net zero mechanical power output, and maximal decoupling of CE and MTU length trajectories. We found it difficult to 'tune' the muscle-tendon system for strut-like isometric force production by adjusting stimulation phase only, as the zero power output condition involved significant positive and negative mechanical work by the CE. A simple neural mechanism - adjusting muscle stimulation phase - could shift an MTU from performing net zero to net positive (energy producing) or net negative (energy absorbing) mechanical work under conditions of changing locomotor demand. Finally, we show that modifications to the classical work loop paradigm better represent in vivo muscle-tendon function during locomotion. PMID:26232413
NASA Technical Reports Server (NTRS)
Janus, J. Mark; Whitfield, David L.
1990-01-01
Improvements to a computer algorithm developed for the time-accurate flow analysis of rotating machines are presented. The flow model is a finite volume method utilizing a high-resolution approximate Riemann solver for interface flux definitions. The numerical scheme is a block LU implicit iterative-refinement method which possesses apparent unconditional stability. Multiblock composite gridding is used to orderly partition the field into a specified arrangement of blocks exhibiting varying degrees of similarity. Block-block relative motion is achieved using local grid distortion to reduce grid skewness and accommodate arbitrary time step selection. A general high-order numerical scheme is applied to satisfy the geometric conservation law. An even-blade-count counterrotating unducted fan configuration is chosen for a computational study comparing solutions resulting from altering parameters such as time step size and iteration count. The solutions are compared with measured data.
NASA Astrophysics Data System (ADS)
Li, H.; Zhang, Z.; Chen, X.
2012-12-01
It is widely accepted that, when uniform grids are used for numerical simulation, the spatial grid spacing and the time step are oversampled in high-velocity media. This oversampling lowers computational efficiency, especially when strong velocity contrasts exist. Based on the collocated-grid finite-difference method (FDM), we present a spatial discontinuous-grid algorithm, with localized grid blocks and locally varying time steps, which increases the efficiency of simulating seismic wave propagation and earthquake strong ground motion. According to the velocity structure, we discretize the model into discontinuous grid blocks, and the time step of each block is determined by the local stability condition. The key problem of the discontinuous-grid method is the connection between grid blocks with different grid spacings. We handle this with a transitional area overlapped by both the finer and the coarser grids. In the transitional area, the values of finer ghost points are obtained by interpolation from the coarser grid in the space and time domains, while the values of coarser ghost points are obtained by downsampling from the finer grid. How the coarser ghost points are treated can influence the stability of long-time simulation. After testing different downsampling methods, we chose Gaussian filtering. A 4th-order Runge-Kutta scheme is used for the time integration in our numerical method. For our discontinuous-grid FDM, different time steps for the coarser and the finer grids are used to increase simulation efficiency. Numerical tests indicate that our method provides a stable solution even for long-time simulation, without any additional filtering, for a grid-spacing ratio of n=2. For larger grid-spacing ratios, Gaussian filtering can be used to preserve stability. With the collocated-grid FDM, which is flexible and accurate in implementation of free
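The ghost-point bookkeeping at the grid transition can be illustrated in 1D. The sketch below is schematic only (the abstract's method is a full 3D collocated-grid FDM with space-time interpolation): fine ghost values come from interpolating the coarse grid, and coarse ghost values come from filtered downsampling of the fine grid, here with a simple 3-tap kernel standing in for the Gaussian filter, at grid-spacing ratio n = 2.

```python
import numpy as np

# transition zone between a coarse grid (spacing 2h) and a fine grid
# (spacing h), grid-spacing ratio n = 2, along one dimension
x_coarse = np.arange(0.0, 9.0, 2.0)          # x = 0, 2, 4, 6, 8
coarse = np.array([0.0, 2.0, 4.0, 6.0, 8.0]) # sample field values there

# fine ghost points: interpolate the coarse-grid field onto the fine grid
x_fine = np.arange(0.0, 8.0 + 1e-9, 1.0)
fine = np.interp(x_fine, x_coarse, coarse)

# coarse ghost points: filtered downsampling from the fine grid
# (3-tap kernel as a stand-in for the Gaussian filter of the abstract)
w = np.array([0.25, 0.5, 0.25])
filtered = np.convolve(fine, w, mode="same")
coarse_ghost = filtered[::2]                 # keep every 2nd fine point
```

For this linear field the interior coarse ghost points reproduce the coarse values exactly (2, 4, 6); in a wave simulation the filtering suppresses the short-wavelength fine-grid content that the coarse grid cannot represent, which is what stabilizes long runs at larger spacing ratios.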
NASA Astrophysics Data System (ADS)
Zasche, P.
2016-03-01
An easy step-by-step manual for PHOEBE is presented. It should serve as a starting point for first-time users of PHOEBE analyzing eclipsing binary light curves. The procedure is demonstrated on one particular detached system, with downloadable data, and is described step by step until a trustworthy final fit is reached.
Finite-difference modeling with variable grid-size and adaptive time-step in porous media
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yin, Xingyao; Wu, Guochen
2014-04-01
Forward modeling of elastic wave propagation in porous media is of great importance for understanding and interpreting the influence of rock properties on the characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid-size and time-step; it incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist underground. To overcome this handicap, this paper develops a staggered-grid finite-difference scheme for elastic wave modeling in porous media that combines variable grid-size with variable time-step. Variable finite-difference coefficients and wavefield interpolation are used to realize the transition of wave propagation between regions of different grid-size. The accuracy and efficiency of the algorithm are shown by numerical examples. The proposed method achieves low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.