Sample records for variable time steps

  1. Effects of the lateral amplitude and regularity of upper body fluctuation on step time variability evaluated using return map analysis

    PubMed Central

    2017-01-01

    The aim of this study was to evaluate the effects of the lateral amplitude and regularity of upper body fluctuation on step time variability. Return map analysis was used to clarify the relationship between step time variability and a history of falling. Eleven healthy, community-dwelling older adults and twelve younger adults participated in the study. All of the subjects walked 25 m at a comfortable speed. Trunk acceleration was measured using triaxial accelerometers attached to the third lumbar vertebra (L3) and the seventh cervical vertebra (C7). The normalized average magnitude of acceleration, the coefficient of determination ($R^2$) of the return map, and the step time variability were calculated. Cluster analysis using the average fluctuation and the regularity of C7 fluctuation identified four walking patterns in the mediolateral (ML) direction. The participants with higher fluctuation and lower regularity showed significantly greater step time variability compared with the others. Additionally, elderly participants who had fallen in the past year showed a higher amplitude and a lower regularity of fluctuation during walking. In conclusion, by focusing on the time evolution of each step, it is possible to understand the cause of stride and/or step time variability that is associated with a risk of falls. PMID:28700633
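
    The return-map $R^2$ and step time variability described above can be illustrated with a short sketch: plot step time $n+1$ against step time $n$, fit a line, and take the coefficient of determination alongside the coefficient of variation of the step times. This is a generic illustration with synthetic step times, not the authors' data or code.

    ```python
    import numpy as np

    def return_map_r2(step_times):
        """R^2 of a linear fit to the return map (step time n+1 vs. step time n)."""
        x, y = step_times[:-1], step_times[1:]
        slope, intercept = np.polyfit(x, y, 1)
        ss_res = np.sum((y - (slope * x + intercept)) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    def step_time_cv(step_times):
        """Step time variability as a coefficient of variation (%)."""
        return 100.0 * np.std(step_times, ddof=1) / np.mean(step_times)

    # Synthetic step times (s) for illustration; real values would come from the
    # accelerometer-derived step events.
    rng = np.random.default_rng(0)
    steps = 0.55 + 0.02 * np.sin(np.linspace(0, 8 * np.pi, 60)) + rng.normal(0, 0.01, 60)
    print(f"return-map R^2 = {return_map_r2(steps):.3f}, step time CV = {step_time_cv(steps):.2f}%")
    ```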

  2. Effects of the lateral amplitude and regularity of upper body fluctuation on step time variability evaluated using return map analysis.

    PubMed

    Chidori, Kazuhiro; Yamamoto, Yuji

    2017-01-01

    The aim of this study was to evaluate the effects of the lateral amplitude and regularity of upper body fluctuation on step time variability. Return map analysis was used to clarify the relationship between step time variability and a history of falling. Eleven healthy, community-dwelling older adults and twelve younger adults participated in the study. All of the subjects walked 25 m at a comfortable speed. Trunk acceleration was measured using triaxial accelerometers attached to the third lumbar vertebra (L3) and the seventh cervical vertebra (C7). The normalized average magnitude of acceleration, the coefficient of determination ($R^2$) of the return map, and the step time variability were calculated. Cluster analysis using the average fluctuation and the regularity of C7 fluctuation identified four walking patterns in the mediolateral (ML) direction. The participants with higher fluctuation and lower regularity showed significantly greater step time variability compared with the others. Additionally, elderly participants who had fallen in the past year showed a higher amplitude and a lower regularity of fluctuation during walking. In conclusion, by focusing on the time evolution of each step, it is possible to understand the cause of stride and/or step time variability that is associated with a risk of falls.

  3. Influence of an irregular surface and low light on the step variability of patients with peripheral neuropathy during level gait.

    PubMed

    Thies, Sibylle B; Richardson, James K; Demott, Trina; Ashton-Miller, James A

    2005-08-01

    Patients with peripheral neuropathy (PN) report greater difficulty walking on irregular surfaces with low light (IL) than on flat surfaces with regular lighting (FR). We tested the primary hypothesis that older PN patients would demonstrate greater step width and step width variability under IL conditions than under FR conditions. Forty-two subjects (22 male, 20 female; mean ± SD: 64.7 ± 9.8 years) with PN underwent history, physical examination, and electrodiagnostic testing. Subjects were asked to walk 10 m at a comfortable speed while kinematic and force data were measured at 100 Hz using optoelectronic markers and foot switches. Ten trials were conducted under both IL and FR conditions. Step width, time, length, and speed were calculated with a MATLAB algorithm, with the standard deviation serving as the measure of variability. The results showed that under IL, as compared to FR, conditions subjects demonstrated greater step width (197.1 ± 40.8 mm versus 180.5 ± 32.4 mm; P < 0.001) and step width variability (40.4 ± 9.0 mm versus 34.5 ± 8.4 mm; P < 0.001), step time and its variability (P < 0.001 and P = 0.003, respectively), and step length variability (P < 0.001). Average step length and gait speed decreased under IL conditions (P < 0.001 for both). Step width variability and step time variability correlated best under IL conditions with a clinical measure of PN severity and fall history, respectively. We conclude that IL conditions cause PN patients to increase the variability of their step width and other gait parameters.

  4. Patients with Chronic Obstructive Pulmonary Disease Walk with Altered Step Time and Step Width Variability as Compared with Healthy Control Subjects.

    PubMed

    Yentes, Jennifer M; Rennard, Stephen I; Schmid, Kendra K; Blanke, Daniel; Stergiou, Nicholas

    2017-06-01

    Compared with control subjects, patients with chronic obstructive pulmonary disease (COPD) have an increased incidence of falls and demonstrate balance deficits and alterations in mediolateral trunk acceleration while walking. Measures of gait variability have been implicated as indicators of fall risk, fear of falling, and future falls. To investigate whether alterations in gait variability are found in patients with COPD as compared with healthy control subjects. Twenty patients with COPD (16 males; mean age, 63.6 ± 9.7 yr; FEV1/FVC, 0.52 ± 0.12) and 20 control subjects (9 males; mean age, 62.5 ± 8.2 yr) walked for 3 minutes on a treadmill while their gait was recorded. The amount (SD and coefficient of variation) and structure of variability (sample entropy, a measure of regularity) were quantified for step length, time, and width at three walking speeds (self-selected and ±20% of self-selected speed). Generalized linear mixed models were used to compare dependent variables. Patients with COPD demonstrated increased mean and SD step time across all speed conditions as compared with control subjects. They also walked with a narrower step width that increased with increasing speed, whereas the healthy control subjects walked with a wider step width that decreased as speed increased. Further, patients with COPD demonstrated less variability in step width, with decreased SD, compared with control subjects at all three speed conditions. No differences in regularity of gait patterns were found between groups. Patients with COPD walk with increased duration of time between steps, and this timing is more variable than that of control subjects. They also walk with a narrower step width in which the variability of the step widths from step to step is decreased. Changes in these parameters have been related to increased risk of falling in aging research. This provides a mechanism that could explain the increased prevalence of falls in patients with COPD.
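
    The structure-of-variability measure used above, sample entropy, can be computed with a short generic implementation (template length m and tolerance r are the usual parameters; this is not the authors' code, and the example series are synthetic).

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        """Sample entropy of a 1-D series; tolerance is r times the series SD."""
        x = np.asarray(x, dtype=float)
        tol = r * np.std(x)
        n = len(x)
        # Templates of length m and m+1, both taken from the same n-m starting points.
        tm = np.array([x[i:i + m] for i in range(n - m)])
        tm1 = np.array([x[i:i + m + 1] for i in range(n - m)])

        def pair_count(t):
            # Count template pairs whose Chebyshev distance is within tolerance.
            count = 0
            for i in range(len(t) - 1):
                dist = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
                count += np.sum(dist <= tol)
            return count

        b, a = pair_count(tm), pair_count(tm1)
        return -np.log(a / b) if a > 0 and b > 0 else float("inf")

    # Example: a regular (sinusoidal) series yields lower sample entropy than noise.
    rng = np.random.default_rng(1)
    noisy = rng.normal(size=300)
    regular = np.sin(np.linspace(0, 30 * np.pi, 300))
    print(sample_entropy(noisy), sample_entropy(regular))
    ```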

  5. Variability of Anticipatory Postural Adjustments During Gait Initiation in Individuals With Parkinson Disease.

    PubMed

    Lin, Cheng-Chieh; Creath, Robert A; Rogers, Mark W

    2016-01-01

    In people with Parkinson disease (PD), difficulties with initiating stepping may be related to impairments of anticipatory postural adjustments (APAs). Increased variability in step length and step time has been observed in gait initiation in individuals with PD. In this study, we investigated whether the ability to generate consistent APAs during gait initiation is compromised in these individuals. Fifteen subjects with PD and 8 healthy control subjects were instructed to take rapid forward steps after a verbal cue. The changes in vertical force and ankle marker position were recorded via force platforms and a 3-dimensional motion capture system, respectively. Means, standard deviations, and coefficients of variation of both timing and magnitude of vertical force, as well as stepping variables, were calculated. During the postural phase of gait initiation, the interval was longer and the force modulation was smaller in subjects with PD. Both the variability of timing and force modulation were larger in subjects with PD. Individuals with PD also had a longer time to complete the first step, but no significant differences were found for the variability of step time, length, and speed between groups. The increased variability of APAs during gait initiation in subjects with PD could affect posture-locomotion coupling, and lead to start hesitation, and even falls. Future studies are needed to investigate the effect of rehabilitation interventions on the variability of APAs during gait initiation in individuals with PD. Video abstract available for more insights from the authors (see Supplemental Digital Content 1, http://links.lww.com/JNPT/A119).

  6. Immediate Effects of Clock-Turn Strategy on the Pattern and Performance of Narrow Turning in Persons With Parkinson Disease.

    PubMed

    Yang, Wen-Chieh; Hsu, Wei-Li; Wu, Ruey-Meei; Lin, Kwan-Hwa

    2016-10-01

    Turning difficulty is common in people with Parkinson disease (PD). The clock-turn strategy is a cognitive movement strategy to improve turning performance in people with PD, although its effects are unverified. Therefore, this study aimed to investigate the effects of the clock-turn strategy on the pattern of turning steps, turning performance, and freezing of gait during narrow turning, and how these effects were influenced by concurrent performance of a cognitive task (dual task). Twenty-five people with PD were randomly assigned to the clock-turn or usual-turn group. Participants performed the Timed Up and Go test with and without a concurrent cognitive task during the medication OFF period. The clock-turn group performed the Timed Up and Go test using the clock-turn strategy, whereas participants in the usual-turn group performed in their usual manner. Measurements were taken during the 180° turn of the Timed Up and Go test. The pattern of turning steps was evaluated by step time variability and step time asymmetry. Turning performance was evaluated by turning time and number of turning steps. The number and duration of freezing of gait were calculated by video review. The clock-turn group had lower step time variability and step time asymmetry than the usual-turn group. Furthermore, the clock-turn group turned faster with fewer freezing of gait episodes than the usual-turn group. Dual task increased the step time variability and step time asymmetry in both groups but did not affect turning performance and freezing severity. The clock-turn strategy reduces turning time and freezing of gait during turning, probably by lowering step time variability and asymmetry. Dual task compromises the effects of the clock-turn strategy, suggesting a competition for attentional resources. Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, http://links.lww.com/JNPT/A141).

  7. Step-to-step spatiotemporal variables and ground reaction forces of intra-individual fastest sprinting in a single session.

    PubMed

    Nagahara, Ryu; Mizutani, Mirai; Matsuo, Akifumi; Kanehisa, Hiroaki; Fukunaga, Tetsuo

    2018-06-01

    We aimed to investigate the step-to-step spatiotemporal variables and ground reaction forces during the acceleration phase for characterising intra-individual fastest sprinting within a single session. Step-to-step spatiotemporal variables and ground reaction forces produced by 15 male athletes were measured over a 50-m distance during repeated (three to five) 60-m sprints using a long force platform system. Differences in measured variables between the fastest and slowest trials were examined at each step until the 22nd step using a magnitude-based inferences approach. There were possibly-most likely higher running speed and step frequency (2nd to 22nd steps) and shorter support time (all steps) in the fastest trial than in the slowest trial. Moreover, for the fastest trial there were likely-very likely greater mean propulsive force during the initial four steps and possibly-very likely larger mean net anterior-posterior force until the 17th step. The current results demonstrate that better sprinting performance within a single session is probably achieved by 1) a high step frequency (except the initial step) with short support time at all steps, 2) exerting a greater mean propulsive force during initial acceleration, and 3) producing a greater mean net anterior-posterior force during initial and middle acceleration.

  8. Two Independent Contributions to Step Variability during Over-Ground Human Walking

    PubMed Central

    Collins, Steven H.; Kuo, Arthur D.

    2013-01-01

    Human walking exhibits small variations in both step length and step width, some of which may be related to active balance control. Lateral balance is thought to require integrative sensorimotor control through adjustment of step width rather than length, contributing to greater variability in step width. Here we propose that step length variations are largely explained by the typical human preference for step length to increase with walking speed, which itself normally exhibits some slow and spontaneous fluctuation. In contrast, step width variations should have little relation to speed if they are produced more for lateral balance. As a test, we examined hundreds of overground walking steps by healthy young adults (N = 14, age < 40 yrs.). We found that slow fluctuations in self-selected walking speed (2.3% coefficient of variation) could explain most of the variance in step length (59%, P < 0.01). The residual variability not explained by speed was small (1.5% coefficient of variation), suggesting that step length is actually quite precise if not for the slow speed fluctuations. Step width varied over faster time scales and was independent of speed fluctuations, with variance 4.3 times greater than that for step length (P < 0.01) after accounting for the speed effect. That difference was further magnified by walking with eyes closed, which appears detrimental to control of lateral balance. Humans appear to modulate fore-aft foot placement in precise accordance with slow fluctuations in walking speed, whereas the variability of lateral foot placement appears more closely related to balance. Step variability is separable in both direction and time scale into balance- and speed-related components. The separation of factors not related to balance may reveal which aspects of walking are most critical for the nervous system to control. PMID:24015308
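
    The decomposition described above, regress step length on walking speed and compare the residual variability with step width variability, can be sketched as follows; the walking data here are synthetic stand-ins, not the study's measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Illustrative synthetic series: speed fluctuates slowly, step length follows speed,
    # step width varies independently of speed.
    n = 400
    speed = 1.25 + 0.03 * np.sin(np.linspace(0, 6 * np.pi, n)) + rng.normal(0, 0.005, n)
    step_length = 0.45 * speed + 0.10 + rng.normal(0, 0.008, n)
    step_width = 0.12 + rng.normal(0, 0.015, n)

    # Regress step length on speed and compute the variance it explains.
    slope, intercept = np.polyfit(speed, step_length, 1)
    residual = step_length - (slope * speed + intercept)
    r2 = 1.0 - residual.var() / step_length.var()
    print(f"step-length variance explained by speed: {100 * r2:.0f}%")
    print(f"residual step-length CV: {100 * residual.std() / step_length.mean():.2f}%")
    print(f"step-width variance / residual step-length variance: {step_width.var() / residual.var():.1f}")
    ```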

  9. Molecular dynamics based enhanced sampling of collective variables with very large time steps.

    PubMed

    Chen, Pei-Yang; Tuckerman, Mark E

    2018-01-14

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
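
    For orientation, the multiple time-step idea underlying this work can be sketched with the textbook r-RESPA splitting, where slow forces are applied with the large outer step and fast forces are sub-stepped. This is the standard scheme only, not the resonance-free isokinetic integrators described in the abstract, and the force split below is an assumed toy example.

    ```python
    import numpy as np

    def respa_step(x, v, m, f_fast, f_slow, dt_outer, n_inner):
        """One r-RESPA step: slow forces kick at dt_outer, fast forces at dt_outer/n_inner.
        Textbook scheme, not the resonance-free isokinetic variant discussed above."""
        dt_inner = dt_outer / n_inner
        v += 0.5 * dt_outer * f_slow(x) / m          # half-kick from slow forces
        for _ in range(n_inner):                     # velocity-Verlet with fast forces
            v += 0.5 * dt_inner * f_fast(x) / m
            x += dt_inner * v
            v += 0.5 * dt_inner * f_fast(x) / m
        v += 0.5 * dt_outer * f_slow(x) / m          # closing half-kick
        return x, v

    # Toy split: a stiff bond-like force (fast) plus a weak background force (slow).
    f_fast = lambda x: -100.0 * x
    f_slow = lambda x: -0.5 * x
    x, v, m = 1.0, 0.0, 1.0
    for _ in range(1000):
        x, v = respa_step(x, v, m, f_fast, f_slow, dt_outer=0.05, n_inner=10)
    print(x, v)
    ```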

  10. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    NASA Astrophysics Data System (ADS)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  11. Neural correlates of gait variability in people with multiple sclerosis with fall history.

    PubMed

    Kalron, Alon; Allali, Gilles; Achiron, Anat

    2018-05-28

    To investigate the association between step time variability and related brain structures in accordance with fall status in people with multiple sclerosis (PwMS). The study included 225 PwMS. A whole-brain MRI was performed with a high-resolution 3.0-Tesla MR scanner, in addition to volumetric analysis based on 3D T1-weighted images using the FreeSurfer image analysis suite. Step time variability was measured by an electronic walkway. Participants were defined as "fallers" (at least two falls during the previous year) and "non-fallers". One hundred and five PwMS were defined as fallers and had greater step time variability compared to non-fallers (5.6% (S.D.=3.4) vs. 3.4% (S.D.=1.5); p=0.001). MS fallers exhibited a reduced volume in the left caudate and both cerebellum hemispheres compared to non-fallers. Using linear regression analysis, no association was found between gait variability and related brain structures in the total cohort or the non-fallers group. However, the analysis found an association of the left hippocampus and left putamen volumes with step time variability in the faller group (p=0.031 and 0.048, respectively), controlling for total cranial volume, walking speed, disability, age and gender. Nevertheless, according to the hierarchical regression model, the contribution of these brain measures to predicting gait variability was relatively small compared to walking speed. An association between low left hippocampal and putamen volumes and step time variability was found in PwMS with a history of falls, suggesting brain structural characteristics may be related to falls and increased gait variability in PwMS.

  12. Choice of Variables and Preconditioning for Time Dependent Problems

    NASA Technical Reports Server (NTRS)

    Turkel, Eli; Vatsa, Verr N.

    2003-01-01

    We consider the use of low speed preconditioning for time dependent problems. These are solved using a dual time step approach. We consider the effect of this dual time step on the parameter of the low speed preconditioning. In addition, we compare the use of two sets of variables, conservation and primitive variables, to solve the system. We show the effect of these choices on both the convergence to a steady state and the accuracy of the numerical solutions for low Mach number steady state and time dependent flows.
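
    The dual time step approach referred to above can be illustrated on a scalar model ODE: each physical (here BDF2) step is converged by marching a residual to zero in pseudo-time. The preconditioning and choice of variables discussed in the abstract are omitted, and all names and parameters are illustrative.

    ```python
    import numpy as np

    def dual_time_step(f, u_n, u_nm1, dt, dtau=0.05, tol=1e-12, max_iter=1000):
        """One physical BDF2 step of du/dt = f(u), converged by pseudo-time iteration:
        march R(u) = f(u) - (3u - 4u_n + u_nm1)/(2 dt) to zero with pseudo-step dtau."""
        u = u_n
        for _ in range(max_iter):
            residual = f(u) - (3.0 * u - 4.0 * u_n + u_nm1) / (2.0 * dt)
            if abs(residual) < tol:
                break
            u += dtau * residual          # inner (pseudo-time) update
        return u

    # Model problem du/dt = -u with exact solution exp(-t).
    f = lambda u: -u
    dt = 0.1
    u_prev, u_curr = np.exp(dt), 1.0      # exact history values u(-dt), u(0) to start BDF2
    t = 0.0
    for _ in range(20):
        u_prev, u_curr = u_curr, dual_time_step(f, u_curr, u_prev, dt)
        t += dt
    print(u_curr, np.exp(-t))             # BDF2 result vs. exact solution at t = 2
    ```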

  13. Representation of solution for fully nonlocal diffusion equations with deviation time variable

    NASA Astrophysics Data System (ADS)

    Drin, I. I.; Drin, S. S.; Drin, Ya. M.

    2018-01-01

    We prove the solvability of the Cauchy problem for a nonlocal heat equation which is of fractional order both in space and time. The representation formula for classical solutions of the time- and space-fractional partial differential operator $D^{\alpha}_{t} + a^{2}(-\Delta)^{\gamma/2}$ ($0 \le \alpha \le 1$, $\gamma \in (0, 2]$) with deviation time variable is given in terms of the Fox H-function, using the step-by-step method.

  14. The precision of locomotor odometry in humans.

    PubMed

    Durgin, Frank H; Akagi, Mikio; Gallistel, Charles R; Haiken, Woody

    2009-03-01

    Two experiments measured the human ability to reproduce locomotor distances of 4.6-100 m without visual feedback and compared distance production with time production. Subjects were not permitted to count steps. It was found that the precision of human odometry follows Weber's law that variability is proportional to distance. The coefficients of variation for distance production were much lower than those measured for time production for similar durations. Gait parameters recorded during the task (average step length and step frequency) were found to be even less variable suggesting that step integration could be the basis for non-visual human odometry.

  15. Analysis of real-time numerical integration methods applied to dynamic clamp experiments.

    PubMed

    Butera, Robert J; McCarthy, Maeve L

    2004-12-01

    Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
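
    For a single gating variable dx/dt = (x_inf − x)/τ at a fixed, noise-free voltage, the two update rules compared above are one-liners, and exponential Euler is exact in that special case; the degradation discussed in the abstract arises with time-varying, noisy voltage measurements. Parameter values below are illustrative.

    ```python
    import numpy as np

    def gate_step_euler(x, x_inf, tau, dt):
        """Forward Euler update of dx/dt = (x_inf - x) / tau."""
        return x + dt * (x_inf - x) / tau

    def gate_step_exp_euler(x, x_inf, tau, dt):
        """Exponential Euler update: exact for piecewise-constant x_inf and tau."""
        a = np.exp(-dt / tau)
        return x * a + x_inf * (1.0 - a)

    x_inf, tau, x0, t_end = 0.8, 5.0, 0.1, 20.0   # illustrative values, ms time scale
    for dt in (0.1, 1.0, 4.0):                    # small to large time steps
        n = int(round(t_end / dt))
        xe = xee = x0
        for _ in range(n):
            xe = gate_step_euler(xe, x_inf, tau, dt)
            xee = gate_step_exp_euler(xee, x_inf, tau, dt)
        exact = x_inf + (x0 - x_inf) * np.exp(-t_end / tau)
        print(f"dt={dt:4.1f}  Euler err={abs(xe - exact):.2e}  expEuler err={abs(xee - exact):.2e}")
    ```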

  16. Single step optimization of manipulator maneuvers with variable structure control

    NASA Technical Reports Server (NTRS)

    Chen, N.; Dwyer, T. A. W., III

    1987-01-01

    One step ahead optimization has been recently proposed for spacecraft attitude maneuvers as well as for robot manipulator maneuvers. Such a technique yields a discrete time control algorithm implementable as a sequence of state-dependent, quadratic programming problems for acceleration optimization. Its sensitivity to model accuracy, for the required inversion of the system dynamics, is shown in this paper to be alleviated by a fast variable structure control correction, acting between the sampling intervals of the slow one step ahead discrete time acceleration command generation algorithm. The slow and fast looping concept chosen follows that recently proposed for optimal aiming strategies with variable structure control. Accelerations required by the VSC correction are reserved during the slow one step ahead command generation so that the ability to overshoot the sliding surface is guaranteed.

  17. Impact of temporal resolution of inputs on hydrological model performance: An analysis based on 2400 flood events

    NASA Astrophysics Data System (ADS)

    Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-07-01

    Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.
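
    Aggregating a fine-time-step series to the coarser reference scales used in the comparison above is a simple block sum; the sketch below aggregates a synthetic 6-min rainfall series to hourly and daily totals (illustrative values only, not the study's data).

    ```python
    import numpy as np

    def aggregate(series, steps_per_block):
        """Sum a regular time series into non-overlapping blocks (e.g. 6-min -> hourly)."""
        n = (len(series) // steps_per_block) * steps_per_block
        return series[:n].reshape(-1, steps_per_block).sum(axis=1)

    rng = np.random.default_rng(3)
    rain_6min = rng.gamma(shape=0.1, scale=2.0, size=10 * 240)   # 10 days of 6-min data
    rain_hourly = aggregate(rain_6min, 10)                       # 10 x 6 min = 1 h
    rain_daily = aggregate(rain_6min, 240)                       # 240 x 6 min = 1 day
    print(rain_hourly.shape, rain_daily.shape, rain_daily.sum(), rain_6min.sum())
    ```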

  18. Effects of aging on the relationship between cognitive demand and step variability during dual-task walking.

    PubMed

    Decker, Leslie M; Cignetti, Fabien; Hunt, Nathaniel; Potter, Jane F; Stergiou, Nicholas; Studenski, Stephanie A

    2016-08-01

    A U-shaped relationship between cognitive demand and gait control may exist in dual-task situations, reflecting opposing effects of external focus of attention and attentional resource competition. The purpose of the study was twofold: to examine whether gait control, as evaluated from step-to-step variability, is related to cognitive task difficulty in a U-shaped manner and to determine whether age modifies this relationship. Young and older adults walked on a treadmill without attentional requirement and while performing a dichotic listening task under three attention conditions: non-forced (NF), forced-right (FR), and forced-left (FL). The conditions increased in their attentional demand and requirement for inhibitory control. Gait control was evaluated by the variability of step parameters related to balance control (step width) and rhythmic stepping pattern (step length and step time). A U-shaped relationship was found for step width variability in both young and older adults and for step time variability in older adults only. Cognitive performance during dual tasking was maintained in both young and older adults. The U-shaped relationship, which presumably results from a trade-off between an external focus of attention and competition for attentional resources, implies that higher-level cognitive processes are involved in walking in young and older adults. Specifically, while these processes are initially involved only in the control of (lateral) balance during gait, they become necessary for the control of (fore-aft) rhythmic stepping pattern in older adults, suggesting that attentional resources turn out to be needed in all facets of walking with aging. Finally, despite the cognitive resources required by walking, both young and older adults spontaneously adopted a "posture second" strategy, prioritizing the cognitive task over the gait task.

  19. Performance of an attention-demanding task during treadmill walking shifts the noise qualities of step-to-step variation in step width.

    PubMed

    Grabiner, Mark D; Marone, Jane R; Wyatt, Marilynn; Sessoms, Pinata; Kaufman, Kenton R

    2018-06-01

    The fractal scaling evident in the step-to-step fluctuations of stepping-related time series reflects, to some degree, neuromotor noise. The primary purpose of this study was to determine the extent to which the fractal scaling of step width, step width, and step width variability are affected by performance of an attention-demanding task. We hypothesized that the attention-demanding task would shift the structure of the step width time series toward white, uncorrelated noise. Subjects performed two 10-min treadmill walking trials, a control trial of undisturbed walking and a trial during which they performed a mental arithmetic/texting task. Motion capture data were converted to step width time series, the fractal scaling of which was determined from their power spectra. Fractal scaling decreased by 22% during the texting condition (p < 0.001), supporting the hypothesized shift toward white uncorrelated noise. Step width and step width variability increased by 19% and 5%, respectively (p < 0.001). However, a stepwise discriminant analysis to which all three variables were input revealed that the control and dual task conditions were discriminated only by step width fractal scaling. The change of the fractal scaling of step width is consistent with increased cognitive demand and suggests a transition in the characteristics of the signal noise. This may reflect an important advance toward the understanding of the manner in which neuromotor noise contributes to some types of falls. However, further investigation of the repeatability of the results, the sensitivity of the results to progressive increases in cognitive load imposed by attention-demanding tasks, and the extent to which the results can be generalized to the gait of older adults seems warranted.
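
    The fractal scaling referred to above is commonly estimated as the (negative) slope of the power spectrum on log-log axes; the sketch below is a minimal generic estimate, not the authors' exact processing pipeline.

    ```python
    import numpy as np

    def spectral_slope(x):
        """Estimate the power-law exponent beta in S(f) ~ 1/f^beta from a log-log fit."""
        x = np.asarray(x, float) - np.mean(x)
        psd = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x))
        keep = freqs > 0                      # drop the zero-frequency bin
        slope, _ = np.polyfit(np.log(freqs[keep]), np.log(psd[keep]), 1)
        return -slope                         # beta ~ 0: white noise; larger: more correlated

    rng = np.random.default_rng(4)
    white = rng.normal(size=4096)             # uncorrelated step-to-step fluctuations
    integrated = np.cumsum(white)             # strongly correlated (integrated) fluctuations
    print(f"white-like beta ~ {spectral_slope(white):.2f}, integrated beta ~ {spectral_slope(integrated):.2f}")
    ```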

  20. Mind wandering at the fingertips: automatic parsing of subjective states based on response time variability

    PubMed Central

    Bastian, Mikaël; Sackur, Jérôme

    2013-01-01

    Research from the last decade has successfully used two kinds of thought reports in order to assess whether the mind is wandering: random thought-probes and spontaneous reports. However, none of these two methods allows any assessment of the subjective state of the participant between two reports. In this paper, we present a step by step elaboration and testing of a continuous index, based on response time variability within Sustained Attention to Response Tasks (N = 106, for a total of 10 conditions). We first show that increased response time variability predicts mind wandering. We then compute a continuous index of response time variability throughout full experiments and show that the temporal position of a probe relative to the nearest local peak of the continuous index is predictive of mind wandering. This suggests that our index carries information about the subjective state of the subject even when he or she is not probed, and opens the way for on-line tracking of mind wandering. Finally we proceed a step further and infer the internal attentional states on the basis of the variability of response times. To this end we use the Hidden Markov Model framework, which allows us to estimate the durations of on-task and off-task episodes. PMID:24046753
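
    A continuous response-time variability index of the kind described above can be approximated with a rolling standard deviation over recent trials; the window length, the synthetic response times, and the omission of the Hidden Markov Model stage are all simplifying assumptions of this sketch.

    ```python
    import numpy as np

    def rolling_rt_variability(rts, window=9):
        """Rolling SD of response times, aligned to the last trial in each window."""
        rts = np.asarray(rts, float)
        out = np.full(len(rts), np.nan)
        for i in range(window - 1, len(rts)):
            out[i] = np.std(rts[i - window + 1:i + 1], ddof=1)
        return out

    # Illustrative RT series with a more variable ("off-task") stretch in the middle.
    rng = np.random.default_rng(5)
    rts = np.concatenate([rng.normal(0.45, 0.03, 60),
                          rng.normal(0.50, 0.10, 40),
                          rng.normal(0.45, 0.03, 60)])
    index = rolling_rt_variability(rts)
    print(np.nanmean(index[:60]), np.nanmean(index[60:100]), np.nanmean(index[100:]))
    ```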

  1. Generating variable and random schedules of reinforcement using Microsoft Excel macros.

    PubMed

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values.
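
    The schedule values described above can also be generated outside Excel; the Python sketch below mirrors the logic (variable schedules vary around a mean, random schedules use a constant probability per response or per unit time) but is not a translation of the authors' macros, and the spread parameter is an illustrative choice.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def variable_ratio(mean, n, spread=0.5):
        """VR schedule: response requirements varying around the mean ratio."""
        low = max(1, int(round(mean * (1 - spread))))
        high = int(round(mean * (1 + spread)))
        return rng.integers(low, high + 1, size=n)

    def random_ratio(mean, n):
        """RR schedule: each response reinforced with constant probability 1/mean."""
        return rng.geometric(1.0 / mean, size=n)

    def variable_interval(mean, n, spread=0.5):
        """VI/VT schedule: intervals (s) varying around the mean interval."""
        return rng.uniform(mean * (1 - spread), mean * (1 + spread), size=n)

    def random_interval(mean, n):
        """RI/RT schedule: exponentially distributed intervals (constant hazard)."""
        return rng.exponential(mean, size=n)

    print(variable_ratio(10, 5), random_ratio(10, 5))
    print(np.round(variable_interval(30, 5), 1), np.round(random_interval(30, 5), 1))
    ```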

  2. Assessment of power step performances of variable speed pump-turbine unit by means of hydro-electrical system simulation

    NASA Astrophysics Data System (ADS)

    Béguin, A.; Nicolet, C.; Hell, J.; Moreira, C.

    2017-04-01

    The paper explores the improvement in ancillary services that variable speed technologies can provide for the case of an existing 2×210 MVA pumped storage power plant whose conversion from fixed speed to variable speed is investigated, with a focus on the power step performances of the units. First, two motor-generator variable speed technologies are introduced, namely the Doubly Fed Induction Machine (DFIM) and the Full Scale Frequency Converter (FSFC). Then, a detailed numerical simulation model of the investigated power plant, comprising the waterways, the pump-turbine unit, the motor-generator, the grid connection and the control systems, is presented and used to simulate the power step response. Hydroelectric system time domain simulations are performed in order to determine the shortest response time achievable, taking into account the constraints from the maximum penstock pressure and from the rotational speed limits. It is shown that the maximum instantaneous power step response up and down depends on the hydro-mechanical characteristics of the pump-turbine unit and on the motor-generator speed limits. As a result, for the investigated test case, the FSFC solution offers the best power step response performances.

  3. Finite-difference modeling with variable grid-size and adaptive time-step in porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Wu, Guochen

    2014-04-01

    Forward modeling of elastic wave propagation in porous media is of great importance for understanding and interpreting the influences of rock properties on characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid-size and time-step; it incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist underground. To overcome this handicap, this paper develops a staggered-grid finite-difference scheme with variable grid-size and time-step for elastic wave modeling in porous media. Variable finite-difference coefficients and wavefield interpolation were used to realize the transition of wave propagation between regions of different grid-size. The accuracy and efficiency of the algorithm were shown by numerical examples. The proposed method achieves low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.

  4. Optimal variable-grid finite-difference modeling for porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Li, Haishan

    2014-12-01

    Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derived optimal staggered-grid finite difference schemes with variable grid-spacing and time-step for seismic modeling in porous media. FD operators with small grid-spacing and time-step are adopted for low-velocity or small-scale geological bodies, while FD operators with big grid-spacing and time-step are adopted for high-velocity or large-scale regions. The dispersion relations of FD schemes were derived based on the plane wave theory, then the FD coefficients were obtained using the Taylor expansion. Dispersion analysis and modeling results demonstrated that the proposed method has higher accuracy with lower computational cost for poroelastic wave simulation in heterogeneous reservoirs.
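
    The Taylor-expansion step mentioned above reduces, for the conventional (non-optimized) coefficients, to solving a small linear system; the sketch below reproduces that baseline calculation only, and the paper's dispersion-optimized coefficients would require an additional optimization stage that is not shown here.

    ```python
    import numpy as np

    def staggered_fd_coefficients(order_half):
        """Conventional 2M-th order staggered-grid coefficients c_1..c_M for
        f'(x) ~ (1/h) * sum_m c_m [f(x + (2m-1)h/2) - f(x - (2m-1)h/2)],
        obtained by matching Taylor-series terms."""
        M = order_half
        a = (2 * np.arange(1, M + 1) - 1) / 2.0          # offsets (2m-1)/2
        A = np.vstack([a ** (2 * k - 1) for k in range(1, M + 1)])
        b = np.zeros(M)
        b[0] = 0.5                                       # first-derivative matching condition
        return np.linalg.solve(A, b)

    print(staggered_fd_coefficients(1))   # [1.0]              (2nd order)
    print(staggered_fd_coefficients(2))   # [1.125, -0.041667] (4th order: 9/8, -1/24)
    ```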

  5. Different methods to analyze stepped wedge trial designs revealed different aspects of intervention effects.

    PubMed

    Twisk, J W R; Hoogendijk, E O; Zwijsen, S A; de Boer, M R

    2016-04-01

    Within epidemiology, a stepped wedge trial design (i.e., a one-way crossover trial in which several arms start the intervention at different time points) is increasingly popular as an alternative to a classical cluster randomized controlled trial. Despite this increasing popularity, there is a huge variation in the methods used to analyze data from a stepped wedge trial design. Four linear mixed models were used to analyze data from a stepped wedge trial design on two example data sets. The four methods were chosen because they have been (frequently) used in practice. Method 1 compares all the intervention measurements with the control measurements. Method 2 treats the intervention variable as a time-independent categorical variable comparing the different arms with each other. In method 3, the intervention variable is a time-dependent categorical variable comparing groups with different number of intervention measurements, whereas in method 4, the changes in the outcome variable between subsequent measurements are analyzed. Regarding the results in the first example data set, methods 1 and 3 showed a strong positive intervention effect, which disappeared after adjusting for time. Method 2 showed an inverse intervention effect, whereas method 4 did not show a significant effect at all. In the second example data set, the results were the opposite. Both methods 2 and 4 showed significant intervention effects, whereas the other two methods did not. For method 4, the intervention effect attenuated after adjustment for time. Different methods to analyze data from a stepped wedge trial design reveal different aspects of a possible intervention effect. The choice of a method partly depends on the type of the intervention and the possible time-dependent effect of the intervention. Furthermore, it is advised to combine the results of the different methods to obtain an interpretable overall result.
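
    Two of the four codings described above can be written down as linear mixed models; the sketch uses statsmodels with a small synthetic stepped wedge data set (the columns y, arm, period, cluster, and intervention are hypothetical names), so it illustrates the model specifications rather than reproducing the paper's analyses.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic long-format stepped wedge data: 12 clusters in 3 arms, 4 periods.
    rng = np.random.default_rng(7)
    rows = []
    for cluster in range(12):
        arm = cluster % 3
        u = rng.normal(0, 0.5)                 # cluster random intercept
        for period in range(4):
            on = int(period > arm)             # arm k crosses over after period k
            rows.append(dict(y=10 + 0.5 * period + 1.0 * on + u + rng.normal(0, 1),
                             arm=str(arm), period=period, cluster=cluster, intervention=on))
    df = pd.DataFrame(rows)

    # Method 1: all intervention vs. all control measurements, without and with time adjustment.
    m1 = smf.mixedlm("y ~ intervention", df, groups=df["cluster"]).fit()
    m1_time = smf.mixedlm("y ~ intervention + period", df, groups=df["cluster"]).fit()

    # Method 2: intervention arm as a time-independent categorical variable.
    m2 = smf.mixedlm("y ~ C(arm)", df, groups=df["cluster"]).fit()

    print(m1.params["intervention"], m1_time.params["intervention"])
    print(m2.params)
    ```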

  6. Stepping motor controller

    DOEpatents

    Bourret, S.C.; Swansen, J.E.

    1982-07-02

    A stepping motor is microprocessor controlled by digital circuitry which monitors the output of a shaft encoder adjustably secured to the stepping motor and generates a subsequent stepping pulse only after the preceding step has occurred and a fixed delay has expired. The fixed delay is variable on a real-time basis to provide for smooth and controlled deceleration.
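
    The control logic in this patent abstract, issue the next stepping pulse only after the encoder confirms the previous step and a deceleration-dependent delay has expired, can be sketched as a polling loop; pulse() and encoder_count() are hypothetical placeholders standing in for the hardware interface, not a real driver API.

    ```python
    import time

    def run_stepper(pulse, encoder_count, n_steps, base_delay=0.002, decel_steps=20):
        """Issue n_steps pulses; each new pulse waits for encoder confirmation of the
        previous step plus a delay that grows over the final steps (controlled deceleration).
        `pulse` and `encoder_count` are hypothetical hardware-interface callables."""
        for i in range(n_steps):
            target = encoder_count() + 1
            pulse()
            while encoder_count() < target:      # wait until the shaft actually moved
                time.sleep(1e-4)
            remaining = n_steps - 1 - i
            delay = base_delay * (1 + max(0, decel_steps - remaining) / decel_steps)
            time.sleep(delay)                    # fixed delay, stretched near the end

    # Minimal software stand-ins so the sketch runs without hardware.
    _count = 0
    def pulse():
        global _count
        _count += 1                              # pretend the motor steps instantly
    def encoder_count():
        return _count

    run_stepper(pulse, encoder_count, n_steps=50)
    print("steps completed:", _count)
    ```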

  7. Microphysical Timescales in Clouds and their Application in Cloud-Resolving Modeling

    NASA Technical Reports Server (NTRS)

    Zeng, Xiping; Tao, Wei-Kuo; Simpson, Joanne

    2007-01-01

    Independent prognostic variables in cloud-resolving modeling are chosen on the basis of the analysis of microphysical timescales in clouds versus a time step for numerical integration. Two of them are the moist entropy and the total mixing ratio of airborne water with no contributions from precipitating particles. As a result, temperature can be diagnosed easily from those prognostic variables, and cloud microphysics be separated (or modularized) from moist thermodynamics. Numerical comparison experiments show that those prognostic variables can work well while a large time step (e.g., 10 s) is used for numerical integration.

  8. Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros

    PubMed Central

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values. PMID:18595286

  9. Fast Determination of Distribution-Connected PV Impacts Using a Variable Time-Step Quasi-Static Time-Series Approach: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry

    The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase. Tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors to metrics such as the highest and lowest voltage occurring on the feeder, the number of voltage regulator tap operations, and the total amount of losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91% reduction with less than 5% error when predicting voltage regulator operations.
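
    The core idea of a variable-time-step QSTS solver, stride quickly through quiet periods and fall back to fine steps when the inputs move, can be sketched generically; the toy solve() function, thresholds, and input profiles below are illustrative assumptions, not the solver described in this preprint.

    ```python
    import numpy as np

    def variable_step_qsts(solve, pv, load, dt_fine=1, dt_max=60, tol=0.02):
        """Quasi-static time series over per-second inputs with an adaptive stride:
        re-solve (placeholder power flow `solve`) when inputs have moved more than
        `tol` since the last solution, otherwise hold the result and stride ahead."""
        v = np.empty(len(pv))
        t, last, n_solves, v_now = 0, None, 0, None
        while t < len(pv):
            moved = (last is None or abs(pv[t] - pv[last]) > tol
                     or abs(load[t] - load[last]) > tol)
            if moved:
                v_now, last, n_solves = solve(pv[t], load[t]), t, n_solves + 1
                stride = dt_fine
            else:
                stride = dt_max                        # quiet period: reuse and stride ahead
            v[t:t + stride] = v_now
            t += stride
        return v, n_solves

    rng = np.random.default_rng(8)
    seconds = 6 * 3600
    pv = np.clip(np.sin(np.linspace(0, np.pi, seconds)) + 0.01 * rng.normal(size=seconds), 0, None)
    load = 0.8 + 0.1 * np.sin(np.linspace(0, 4 * np.pi, seconds))
    solve = lambda p, q: 1.0 + 0.03 * p - 0.02 * q     # toy voltage response, not a power flow
    v, n_solves = variable_step_qsts(solve, pv, load)
    print(f"{n_solves} solves for {seconds} one-second time steps")
    ```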

  10. On improving the iterative convergence properties of an implicit approximate-factorization finite difference algorithm. [considering transonic flow]

    NASA Technical Reports Server (NTRS)

    Desideri, J. A.; Steger, J. L.; Tannehill, J. C.

    1978-01-01

    The iterative convergence properties of an approximate-factorization implicit finite-difference algorithm are analyzed both theoretically and numerically. Modifications to the base algorithm were made to remove the inconsistency in the original implementation of artificial dissipation. In this way, the steady-state solution became independent of the time-step, and much larger time-steps can be used stably. To accelerate the iterative convergence, large time-steps and a cyclic sequence of time-steps were used. For a model transonic flow problem governed by the Euler equations, convergence was achieved with 10 times fewer time-steps using the modified differencing scheme. A particular form of instability due to variable coefficients is also analyzed.

  11. Effects of dual task on turning ability in stroke survivors and older adults.

    PubMed

    Hollands, K L; Agnihotri, D; Tyson, S F

    2014-09-01

    Turning is an integral component of independent mobility during which stroke survivors frequently fall. This study sought to measure the effects of competing cognitive demands on the stepping patterns of stroke survivors, compared to healthy age-matched adults, during turning as a putative mechanism for falls. Walking and turning (90°) was assessed under single (walking and turning alone) and dual task (subtracting serial 3s while walking and turning) conditions using an electronic, pressure-sensitive walkway. Dependent measures were time to turn, variability in time to turn, step length, step width and single support time during three steps of the turn. Turning ability in single and dual task conditions was compared between stroke survivors (n=17, mean ± SD: 59 ± 113 months post-stroke, 64 ± 10 years of age) and age-matched healthy counterparts (n=15). Both groups took longer, were more variable, tended to widen the second step and, crucially, increased single support time on the inside leg of the turn while turning under distraction. Increased single support time during turning may represent a biomechanical mechanism, within the stepping patterns of turning under distraction, for an increased risk of falls for both stroke survivors and older adults.

  12. Connecting spatial and temporal scales of tropical precipitation in observations and the MetUM-GA6

    NASA Astrophysics Data System (ADS)

    Martin, Gill M.; Klingaman, Nicholas P.; Moise, Aurel F.

    2017-01-01

    This study analyses tropical rainfall variability (on a range of temporal and spatial scales) in a set of parallel Met Office Unified Model (MetUM) simulations at a range of horizontal resolutions, which are compared with two satellite-derived rainfall datasets. We focus on the shorter scales, i.e. from the native grid and time step of the model through sub-daily to seasonal, since previous studies have paid relatively little attention to sub-daily rainfall variability and how this feeds through to longer scales. We find that the behaviour of the deep convection parametrization in this model on the native grid and time step is largely independent of the grid-box size and time step length over which it operates. There is also little difference in the rainfall variability on larger/longer spatial/temporal scales. Tropical convection in the model on the native grid/time step is spatially and temporally intermittent, producing very large rainfall amounts interspersed with grid boxes/time steps of little or no rain. In contrast, switching off the deep convection parametrization, albeit at an unrealistic resolution for resolving tropical convection, results in very persistent (for limited periods), but very sporadic, rainfall. In both cases, spatial and temporal averaging smoothes out this intermittency. On the ~100 km scale, for oceanic regions, the spectra of 3-hourly and daily mean rainfall in the configurations with parametrized convection agree fairly well with those from satellite-derived rainfall estimates, while at ~10-day timescales the averages are overestimated, indicating a lack of intra-seasonal variability. Over tropical land the results are more varied, but the model often underestimates the daily mean rainfall (partly as a result of a poor diurnal cycle) but still lacks variability on intra-seasonal timescales. Ultimately, such work will shed light on how uncertainties in modelling small-/short-scale processes relate to uncertainty in climate change projections of rainfall distribution and variability, with a view to reducing such uncertainty through improved modelling of small-/short-scale processes.

  13. A successful backward step correlates with hip flexion moment of supporting limb in elderly people.

    PubMed

    Takeuchi, Yahiko

    2018-01-01

    The objective of this study was to determine the positional relationship between the center of mass (COM) and the center of pressure (COP) at the time of step landing, and to examine their relationship with the joint moments exerted by the supporting limb, with regard to factors of a successful backward step response. The study population comprised 8 community-dwelling elderly people who were observed to take successive multiple steps after landing from a backward step. Using a motion capture system and force plate, we measured the COM, COP and COM-COP deviation distance on landing during backward stepping. In addition, we measured the moments of the supporting limb joints during backward stepping. The multi-step data were compared with data from instances when only one step was taken (single-step). Variables that differed significantly between the single- and multi-step data were used as objective variables and the joint moments of the supporting limb were used as explanatory variables in single regression analyses. The COM-COP deviation in the anteroposterior direction was significantly larger in the single-step condition. A regression analysis with COM-COP deviation as the objective variable obtained a significant regression equation for the hip flexion moment ($R^2$ = 0.74). The hip flexion moment of the supporting limb was shown to be a significant explanatory variable in both the PS and SS phases for the relationship with COM-COP distance. This study found that to create an appropriate backward step response after an external disturbance (i.e. the ability to stop after 1 step), posterior braking of the COM by a hip flexion moment is important during the single-limbed standing phase.

  14. Predictive Variables of Half-Marathon Performance for Male Runners

    PubMed Central

    Gómez-Molina, Josué; Ogueta-Alday, Ana; Camara, Jesus; Stickley, Christoper; Rodríguez-Marroyo, José A.; García-López, Juan

    2017-01-01

    The aims of this study were to establish and validate various predictive equations of half-marathon performance. Seventy-eight male half-marathon runners participated in two different phases. Phase 1 (n = 48) was used to establish the equations for estimating half-marathon performance, and Phase 2 (n = 30) to validate these equations. Apart from half-marathon performance, training-related and anthropometric variables were recorded, and an incremental test on a treadmill was performed, in which physiological (VO2max, speed at the anaerobic threshold, peak speed) and biomechanical variables (contact and flight times, step length and step rate) were registered. In Phase 1, half-marathon performance could be predicted to 90.3% by variables related to training and anthropometry (Equation 1), 94.9% by physiological variables (Equation 2), 93.7% by biomechanical parameters (Equation 3) and 96.2% by a general equation (Equation 4). Using these equations, in Phase 2 the predicted time was significantly correlated with performance (r = 0.78, 0.92, 0.90 and 0.95, respectively). The proposed equations and their validation showed a high level of prediction of half-marathon performance in long-distance male runners, considered from different approaches. Furthermore, they improved the prediction performance of previous studies, which makes them highly practical for application in the field of training and performance. Key points: The present study obtained four equations involving anthropometric, training, physiological and biomechanical variables to estimate half-marathon performance. These equations were validated in a different population, demonstrating narrower prediction ranges than previous studies and also their consistency. As a novelty, some biomechanical variables (i.e. step length and step rate at RCT, and maximal step length) have been related to half-marathon performance. PMID:28630571

  15. Efficient variable time-stepping scheme for intense field-atom interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerjan, C.; Kosloff, R.

    1993-03-01

    The recently developed Residuum method [Tal-Ezer, Kosloff, and Cerjan, J. Comput. Phys. 100, 179 (1992)], a Krylov subspace technique with variable time-step integration for the solution of the time-dependent Schroedinger equation, is applied to the frequently used soft Coulomb potential in an intense laser field. This one-dimensional potential has asymptotic Coulomb dependence with a "softened" singularity at the origin; thus it models more realistic phenomena. Two of the more important quantities usually calculated in this idealized system are the photoelectron and harmonic photon generation spectra. These quantities are shown to be sensitive to the choice of a numerical integration scheme: some spectral features are incorrectly calculated or missing altogether. Furthermore, the Residuum method allows much larger grid spacings for equivalent or higher accuracy in addition to the advantages of variable time stepping. Finally, it is demonstrated that enhanced high-order harmonic generation accompanies intense field stabilization and that preparation of the atom in an intermediate Rydberg state leads to stabilization at much lower laser intensity.
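
    To make the model concrete, the 1-D soft-Coulomb atom in a laser field can be propagated with a standard fixed-step FFT split-operator scheme; this stands in for the conventional integrators the Residuum approach is compared against, and the grid, field parameters, and crude initial state below are illustrative assumptions (no absorbing boundary, no imaginary-time ground-state preparation).

    ```python
    import numpy as np

    # 1-D soft-Coulomb atom in a laser field, fixed-step split-operator propagation
    # (a conventional baseline, not the variable-time-step Residuum/Krylov method).
    n, L = 2048, 400.0                                   # grid points, box length (a.u.)
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    a, E0, omega = 1.0, 0.05, 0.057                      # softening, field amplitude, frequency (a.u.)
    dt, n_steps = 0.05, 2000

    v_atom = -1.0 / np.sqrt(x ** 2 + a ** 2)             # "softened" Coulomb potential
    psi = np.exp(-np.abs(x)).astype(complex)             # crude initial state, not the true ground state
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

    kin_half = np.exp(-0.5j * dt * k ** 2 / 2.0)         # half-step kinetic propagator
    dipole = np.empty(n_steps)
    for step in range(n_steps):
        v_total = v_atom + x * E0 * np.sin(omega * step * dt)   # length-gauge laser coupling
        psi = np.fft.ifft(kin_half * np.fft.fft(psi))
        psi *= np.exp(-1j * dt * v_total)
        psi = np.fft.ifft(kin_half * np.fft.fft(psi))
        dipole[step] = np.sum(np.abs(psi) ** 2 * x) * dx  # <x>(t), used for harmonic spectra

    print("norm:", np.sum(np.abs(psi) ** 2) * dx)         # should stay ~1 (unitary propagation)
    ```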

  16. On the Existence of Step-To-Step Breakpoint Transitions in Accelerated Sprinting

    PubMed Central

    McGhie, David; Danielsen, Jørgen; Sandbakk, Øyvind; Haugen, Thomas

    2016-01-01

    Accelerated running is characterised by a continuous change of kinematics from one step to the next. It has been argued that breakpoints in the step-to-step transitions may occur, and that these breakpoints are an essential characteristic of dynamics during accelerated running. We examined this notion by comparing a continuous exponential curve fit (indicating continuity, i.e., smooth transitions) with linear piecewise fitting (indicating a breakpoint). We recorded the kinematics of 24 well trained sprinters during a 25 m sprint run with start from competition starting blocks. Kinematic data were collected for 24 anatomical landmarks in 3D, and the location of the centre of mass (CoM) was calculated from this data set. The step-to-step development of seven variables (four related to CoM position, and ground contact time, aerial time and step length) was analysed by curve fitting. In most individual sprints (in total, 41 sprints were successfully recorded), no breakpoints were identified for the variables investigated. However, for the mean results (i.e., the mean curve for all athletes), breakpoints were identified for the development of vertical CoM position, angle of acceleration and distance between support surface and CoM. It must be noted that for these variables the exponential fit showed high correlations ($r^2 > 0.99$). No relationship was found between the occurrences of breakpoints for different variables as investigated using odds ratios (Mantel-Haenszel Chi-square statistic). It is concluded that although breakpoints regularly appear during accelerated running, they are not the rule and are thereby unlikely to be a fundamental characteristic, but rather an expression of imperfection of performance. PMID:27467387
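
    The two candidate descriptions compared above, a continuous exponential development versus a two-segment (breakpoint) fit, can be compared on a step-indexed variable with a few lines; the synthetic contact-time data, the curve_fit starting values, and the brute-force breakpoint search below are illustrative choices, not the authors' procedure.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def exponential(step, a, b, c):
        """Smooth step-to-step development: a + b * exp(-c * step)."""
        return a + b * np.exp(-c * step)

    def r_squared(y, y_hat):
        return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

    def best_piecewise_linear(steps, y):
        """Two linear segments; try every admissible breakpoint and keep the best r^2."""
        best = (-np.inf, None)
        for bp in range(3, len(steps) - 2):
            left = np.polyval(np.polyfit(steps[:bp], y[:bp], 1), steps[:bp])
            right = np.polyval(np.polyfit(steps[bp:], y[bp:], 1), steps[bp:])
            r2 = r_squared(y, np.concatenate([left, right]))
            if r2 > best[0]:
                best = (r2, bp)
        return best

    # Synthetic contact times (s) over 22 steps, for illustration only.
    rng = np.random.default_rng(9)
    steps = np.arange(1, 23, dtype=float)
    contact = 0.09 + 0.10 * np.exp(-0.25 * steps) + rng.normal(0, 0.002, steps.size)

    p_exp, _ = curve_fit(exponential, steps, contact, p0=(0.09, 0.1, 0.2))
    r2_exp = r_squared(contact, exponential(steps, *p_exp))
    r2_pw, bp = best_piecewise_linear(steps, contact)
    print(f"exponential r^2 = {r2_exp:.4f}; best piecewise r^2 = {r2_pw:.4f} (breakpoint at step {bp})")
    ```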

  17. Geometric multigrid for an implicit-time immersed boundary method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guy, Robert D.; Philip, Bobby; Griffith, Boyce E.

    2014-10-12

    The immersed boundary (IB) method is an approach to fluid-structure interaction that uses Lagrangian variables to describe the deformations and resulting forces of the structure and Eulerian variables to describe the motion and forces of the fluid. Explicit time stepping schemes for the IB method require solvers only for Eulerian equations, for which fast Cartesian grid solution methods are available. Such methods are relatively straightforward to develop and are widely used in practice but often require very small time steps to maintain stability. Implicit-time IB methods permit the stable use of large time steps, but efficient implementations of such methods require significantly more complex solvers that effectively treat both Lagrangian and Eulerian variables simultaneously. Moreover, several different approaches to solving the coupled Lagrangian-Eulerian equations have been proposed, but a complete understanding of this problem is still emerging. This paper presents a geometric multigrid method for an implicit-time discretization of the IB equations. This multigrid scheme uses a generalization of box relaxation that is shown to handle problems in which the physical stiffness of the structure is very large. Numerical examples are provided to illustrate the effectiveness and efficiency of the algorithms described herein. Finally, these tests show that using multigrid as a preconditioner for a Krylov method yields improvements in both robustness and efficiency as compared to using multigrid as a solver. They also demonstrate that with a time step 100–1000 times larger than that permitted by an explicit IB method, the multigrid-preconditioned implicit IB method is approximately 50–200 times more efficient than the explicit method.

  18. Orbit and uncertainty propagation: a comparison of Gauss-Legendre-, Dormand-Prince-, and Chebyshev-Picard-based approaches

    NASA Astrophysics Data System (ADS)

    Aristoff, Jeffrey M.; Horwood, Joshua T.; Poore, Aubrey B.

    2014-01-01

    We present a new variable-step Gauss-Legendre implicit-Runge-Kutta-based approach for orbit and uncertainty propagation, VGL-IRK, which includes adaptive step-size error control and which collectively, rather than individually, propagates nearby sigma points or states. The performance of VGL-IRK is compared to a professional (variable-step) implementation of Dormand-Prince 8(7) (DP8) and to a fixed-step, optimally-tuned, implementation of modified Chebyshev-Picard iteration (MCPI). Both nearly-circular and highly-elliptic orbits are considered using high-fidelity gravity models and realistic integration tolerances. VGL-IRK is shown to be up to eleven times faster than DP8 and up to 45 times faster than MCPI (for the same accuracy), in a serial computing environment. Parallelization of VGL-IRK and MCPI is also discussed.
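    The sketch below is not VGL-IRK; it merely illustrates variable-step propagation with adaptive error control, using SciPy's DOP853 (a Dormand-Prince 8(7)-class method) on a simple two-body problem. The gravitational parameter, initial state, and tolerances are illustrative assumptions.

        # Variable-step orbit propagation with adaptive error control (illustrative).
        import numpy as np
        from scipy.integrate import solve_ivp

        MU = 398600.4418  # km^3/s^2, Earth's gravitational parameter

        def two_body(t, y):
            """Point-mass two-body dynamics; y = [rx, ry, rz, vx, vy, vz]."""
            r = y[:3]
            return np.concatenate((y[3:], -MU * r / np.linalg.norm(r) ** 3))

        y0 = np.array([7000.0, 0.0, 0.0, 0.0, 7.546, 0.0])   # near-circular LEO state
        sol = solve_ivp(two_body, (0.0, 6 * 3600.0), y0,
                        method='DOP853', rtol=1e-10, atol=1e-12)

        print(f"accepted steps: {len(sol.t) - 1}")
        print(f"final position (km): {sol.y[:3, -1]}")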

  19. Metronome Cueing of Walking Reduces Gait Variability after a Cerebellar Stroke.

    PubMed

    Wright, Rachel L; Bevins, Joseph W; Pratt, David; Sackley, Catherine M; Wing, Alan M

    2016-01-01

    Cerebellar stroke typically results in increased variability during walking. Previous research has suggested that auditory cueing reduces excessive variability in conditions such as Parkinson's disease and post-stroke hemiparesis. The aim of this case report was to investigate whether the use of a metronome cue during walking could reduce excessive variability in gait parameters after a cerebellar stroke. An elderly female with a history of cerebellar stroke and recurrent falling undertook three standard gait trials and three gait trials with an auditory metronome. A Vicon system was used to collect 3-D marker trajectory data. The coefficient of variation was calculated for temporal and spatial gait parameters. SDs of the joint angles were calculated and used to give a measure of joint kinematic variability. Step time, stance time, and double support time variability were reduced with metronome cueing. Variability in the sagittal hip, knee, and ankle angles were reduced to normal values when walking to the metronome. In summary, metronome cueing resulted in a decrease in variability for step, stance, and double support times and joint kinematics. Further research is needed to establish whether a metronome may be useful in gait rehabilitation after cerebellar stroke and whether this leads to a decreased risk of falling.

  20. Metronome Cueing of Walking Reduces Gait Variability after a Cerebellar Stroke

    PubMed Central

    Wright, Rachel L.; Bevins, Joseph W.; Pratt, David; Sackley, Catherine M.; Wing, Alan M.

    2016-01-01

    Cerebellar stroke typically results in increased variability during walking. Previous research has suggested that auditory cueing reduces excessive variability in conditions such as Parkinson’s disease and post-stroke hemiparesis. The aim of this case report was to investigate whether the use of a metronome cue during walking could reduce excessive variability in gait parameters after a cerebellar stroke. An elderly female with a history of cerebellar stroke and recurrent falling undertook three standard gait trials and three gait trials with an auditory metronome. A Vicon system was used to collect 3-D marker trajectory data. The coefficient of variation was calculated for temporal and spatial gait parameters. SDs of the joint angles were calculated and used to give a measure of joint kinematic variability. Step time, stance time, and double support time variability were reduced with metronome cueing. Variability in the sagittal hip, knee, and ankle angles were reduced to normal values when walking to the metronome. In summary, metronome cueing resulted in a decrease in variability for step, stance, and double support times and joint kinematics. Further research is needed to establish whether a metronome may be useful in gait rehabilitation after cerebellar stroke and whether this leads to a decreased risk of falling. PMID:27313563

  1. Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok K.; Ravindran, S. S.

    2017-01-01

    Fluid and thermal transients found in rocket propulsion systems, such as the propellant feedline system, are complex processes involving fast phases followed by slow phases. Their time-accurate computation therefore requires the use of a short time step initially, followed by a much larger time step. There are also cases that involve fast-slow-fast phases. In this paper, we present a feedback-control-based adaptive time stepping algorithm and discuss its use in network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and applying feedback. To demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chilldown in pipelines. Our comparisons and validation demonstrate the accuracy and efficiency of this adaptive strategy.
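    A generic sketch of the underlying idea (not the algorithm of the paper): the step size is adjusted by feedback so that the relative change of a monitored key variable per step stays near a target value, here for a scalar linear test problem advanced with backward Euler. All parameter values are assumptions.

        # Feedback-controlled adaptive time stepping for y' = a*y + b (backward Euler).
        import numpy as np

        def adaptive_march(a, b, y0, t_end, dt0, target=0.02, dt_min=1e-6, dt_max=5.0):
            t, y, dt = 0.0, float(y0), dt0
            steps = []
            while t < t_end:
                dt = min(dt, t_end - t)
                y_new = (y + dt * b) / (1.0 - dt * a)          # backward Euler, closed form
                change = abs(y_new - y) / max(abs(y), 1e-12)   # monitored relative change
                t, y = t + dt, y_new
                steps.append((t, y, dt))
                # Feedback: grow dt while the change is small, shrink it when it is large.
                dt = float(np.clip(dt * np.clip(target / max(change, 1e-12), 0.5, 2.0),
                                   dt_min, dt_max))
            return steps

        # Fast transient (time constant 0.02 s) relaxing toward a slow steady value.
        steps = adaptive_march(a=-50.0, b=1.0, y0=1.0, t_end=5.0, dt0=1e-4)
        dts = [s[2] for s in steps]
        print(f"{len(steps)} steps, dt ranged from {min(dts):.2e} to {max(dts):.2e}")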

  2. 30 min of treadmill walking at self-selected speed does not increase gait variability in independent elderly.

    PubMed

    Da Rocha, Emmanuel S; Kunzler, Marcos R; Bobbert, Maarten F; Duysens, Jacques; Carpes, Felipe P

    2018-06-01

    Walking is one of the preferred forms of exercise among the elderly, but could prolonged walking increase gait variability, a risk factor for falls in the elderly? Here we determine whether 30 min of treadmill walking increases the coefficient of variation of gait in the elderly. Because gait responses to exercise depend on fitness level, we included 15 sedentary and 15 active elderly. Sedentary participants preferred a lower gait speed and made smaller steps than the active participants. Step length coefficient of variation decreased ~16.9% by the end of the exercise in both groups. Stride length coefficient of variation decreased ~9% after 10 minutes of walking, and sedentary elderly showed a slightly larger step width coefficient of variation (~2%) at 10 min than active elderly. Active elderly showed a higher walk ratio (step length/cadence) than sedentary elderly at all time points, but walk ratio did not change over time in either group. In conclusion, treadmill gait kinematics differ between sedentary and active elderly, but changes over time are similar in the two groups. As a practical implication, 30 min of walking might be a good exercise strategy for the elderly, independently of fitness level, because it did not increase variability in step and stride kinematics, which is considered a fall risk in this population.

  3. Intra-individual variability in day-to-day and month-to-month measurements of physical activity and sedentary behaviour at work and in leisure-time among Danish adults.

    PubMed

    Pedersen, E S L; Danquah, I H; Petersen, C B; Tolstrup, J S

    2016-12-03

    Accelerometers can obtain precise measurements of movements during the day. However, individual activity patterns vary from day to day, and there is limited evidence on the number of measurement days needed to obtain sufficient reliability. The aim of this study was to examine variability in accelerometer-derived data on sedentary behaviour and physical activity at work and in leisure time on weekdays among Danish office employees. We included control participants (n = 135) from the Take a Stand! Intervention, a cluster randomized controlled trial conducted in 19 offices. Sitting time and physical activity were measured using an ActiGraph GT3X+ fixed on the thigh, and data were processed using the Acti4 software. Variability was examined for sitting time, standing time, steps and time spent in moderate-to-vigorous physical activity (MVPA) per day by multilevel mixed linear regression modelling. The results showed that the number of days needed to obtain a reliability of 80% when measuring sitting time was 4.7 days for work and 5.5 days for leisure time. For physical activity at work, 4.0 days and 4.2 days were required to measure steps and MVPA, respectively. During leisure time, more monitoring time was needed to reliably estimate physical activity (6.8 days for steps and 5.8 days for MVPA). The number of measurement days needed to reliably estimate activity patterns was greater for leisure time than for work time. The domain-specific variability is of great importance to researchers and health promotion workers planning to use objective measures of sedentary behaviour and physical activity. Trial registration: ClinicalTrials.gov NCT01996176.

  4. Stepping reaction time and gait adaptability are significantly impaired in people with Parkinson's disease: Implications for fall risk.

    PubMed

    Caetano, Maria Joana D; Lord, Stephen R; Allen, Natalie E; Brodie, Matthew A; Song, Jooeun; Paul, Serene S; Canning, Colleen G; Menant, Jasmine C

    2018-02-01

    Decline in the ability to take effective steps and to adapt gait, particularly under challenging conditions, may be important reasons why people with Parkinson's disease (PD) have an increased risk of falling. This study aimed to determine the extent of stepping and gait adaptability impairments in PD individuals as well as their associations with PD symptoms, cognitive function and previous falls. Thirty-three older people with PD and 33 controls were assessed in choice stepping reaction time, Stroop stepping and gait adaptability tests; measurements identified as fall risk factors in older adults. People with PD had similar mean choice stepping reaction times to healthy controls, but had significantly greater intra-individual variability. In the Stroop stepping test, the PD participants were more likely to make an error (48 vs 18%), took 715 ms longer to react (2312 vs 1517 ms) and had significantly greater response variability (536 vs 329 ms) than the healthy controls. People with PD also had more difficulties adapting their gait in response to targets (poorer stepping accuracy) and obstacles (increased number of steps) appearing at short notice on a walkway. Within the PD group, higher disease severity, reduced cognition and previous falls were associated with poorer stepping and gait adaptability performances. People with PD have reduced ability to adapt gait to unexpected targets and obstacles and exhibit poorer stepping responses, particularly in a test condition involving conflict resolution. Such impaired stepping responses in Parkinson's disease are associated with disease severity, cognitive impairment and falls. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. The Reliability and Validity of Measures of Gait Variability in Community-Dwelling Older Adults

    PubMed Central

    Brach, Jennifer S.; Perera, Subashan; Studenski, Stephanie; Newman, Anne B.

    2009-01-01

    Objective To examine the test-retest reliability and concurrent validity of variability of gait characteristics. Design Cross-sectional study. Setting Research laboratory. Participants Older adults (N=558) from the Cardiovascular Health Study. Interventions Not applicable. Main Outcome Measures Gait characteristics were measured using a 4-m computerized walkway. SDs determined from the recorded steps were used as the measures of variability. Intraclass correlation coefficients (ICC) were calculated to examine test-retest reliability of a single 4-m walk and of two 4-m walks. To establish concurrent validity, the measures of gait variability were compared across levels of health, functional status, and physical activity using independent t tests and analyses of variance. Results Gait variability measures from the two 4-m walks demonstrated greater test-retest reliability than those from the single 4-m walk (ICC=.40–.63 vs ICC=.22–.48). Greater step length and stance time variability were associated with poorer health, functional status and physical activity (P<.05). Conclusions Gait variability calculated from a limited number of steps has fair to good test-retest reliability and concurrent validity. Reliability of gait variability calculated from a greater number of steps should be assessed to determine whether consistency can be improved. PMID:19061741

  6. A Kalman filter for a two-dimensional shallow-water model

    NASA Technical Reports Server (NTRS)

    Parrish, D. F.; Cohn, S. E.

    1985-01-01

    A two-dimensional Kalman filter is described for data assimilation for making weather forecasts. The filter is regarded as superior to the optimal interpolation method because the filter determines the forecast error covariance matrix exactly instead of using an approximation. A generalized time step is defined which includes expressions for one time step of the forecast model, the error covariance matrix, the gain matrix, and the evolution of the covariance matrix. Subsequent time steps are achieved by quantifying the forecast variables or employing a linear extrapolation from a current variable set, assuming the forecast dynamics are linear. Calculations for the evolution of the error covariance matrix are banded, i.e., are performed only with the elements significantly different from zero. Experimental results are provided from an application of the filter to a shallow-water simulation covering a 6000 x 6000 km grid.
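    For readers unfamiliar with the recursion, a generic linear Kalman filter forecast/analysis cycle is sketched below; this is the textbook formulation, not the banded shallow-water implementation described in the record, and the constant-velocity example model is an assumption.

        # One linear Kalman filter cycle: forecast (time update) + analysis (measurement update).
        import numpy as np

        def kalman_step(x, P, F, Q, H, R, z):
            """x, P: state estimate and its error covariance; z: new observation."""
            x_f = F @ x                              # forecast state
            P_f = F @ P @ F.T + Q                    # forecast error covariance
            S = H @ P_f @ H.T + R                    # innovation covariance
            K = P_f @ H.T @ np.linalg.inv(S)         # Kalman gain
            x_a = x_f + K @ (z - H @ x_f)            # analysis state
            P_a = (np.eye(len(x)) - K @ H) @ P_f     # analysis error covariance
            return x_a, P_a

        # Tiny example: constant-velocity model observed through position only.
        dt = 1.0
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = 0.01 * np.eye(2)
        H = np.array([[1.0, 0.0]])
        R = np.array([[0.25]])
        x, P = np.zeros(2), np.eye(2)
        for z in [1.1, 2.0, 2.9, 4.2]:
            x, P = kalman_step(x, P, F, Q, H, R, np.array([z]))
        print(f"estimated position and velocity: {x}")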

  7. Time series analysis as input for clinical predictive modeling: Modeling cardiac arrest in a pediatric ICU

    PubMed Central

    2011-01-01

    Background Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. Conclusions We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting. PMID:22023778

  8. Time series analysis as input for clinical predictive modeling: modeling cardiac arrest in a pediatric ICU.

    PubMed

    Kennedy, Curtis E; Turley, James P

    2011-10-24

    Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting.
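    As a small illustration of a few of the listed steps (defining a time window and computing time series features as latent variables), the sketch below uses hypothetical heart-rate data; the column names, window length, and features are assumptions, not taken from the paper.

        # Rolling-window time series features from a hypothetical vital-sign stream.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        vitals = pd.DataFrame({
            "time_min": np.arange(0, 120, 5),                     # 5-minute resolution
            "heart_rate": 120 + np.cumsum(rng.normal(0, 2, 24)),  # drifting vital sign
        })

        window = 6  # 30-minute window (6 samples x 5 min)
        features = pd.DataFrame({
            "hr_mean": vitals["heart_rate"].rolling(window).mean(),
            "hr_std": vitals["heart_rate"].rolling(window).std(),
            # Linear trend over the window: a simple "deterioration" feature.
            "hr_slope": vitals["heart_rate"].rolling(window).apply(
                lambda w: np.polyfit(np.arange(len(w)), w, 1)[0], raw=True),
        }).dropna()

        print(features.tail())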

  9. Increased gait variability may not imply impaired stride-to-stride control of walking in healthy older adults: Winner: 2013 Gait and Clinical Movement Analysis Society Best Paper Award.

    PubMed

    Dingwell, Jonathan B; Salinas, Mandy M; Cusumano, Joseph P

    2017-06-01

    Older adults exhibit increased gait variability that is associated with fall history and predicts future falls. It is not known to what extent this increased variability results from increased physiological noise versus a decreased ability to regulate walking movements. To "walk", a person must move a finite distance in finite time, making stride length ($L_n$) and time ($T_n$) the fundamental stride variables to define forward walking. Multiple age-related physiological changes increase neuromotor noise, increasing gait variability. If older adults also alter how they regulate their stride variables, this could further exacerbate that variability. We previously developed a Goal Equivalent Manifold (GEM) computational framework specifically to separate these causes of variability. Here, we apply this framework to identify how both young and high-functioning healthy older adults regulate stepping from each stride to the next. Healthy older adults exhibited increased gait variability, independent of walking speed. However, despite this, these healthy older adults also concurrently exhibited no differences (all p>0.50) from young adults either in how their stride variability was distributed relative to the GEM or in how they regulated, from stride to stride, either their basic stepping variables or deviations relative to the GEM. Using a validated computational model, we found these experimental findings were consistent with increased gait variability arising solely from increased neuromotor noise, and not from changes in stride-to-stride control. Thus, age-related increased gait variability likely precedes impaired stepping control. This suggests these changes may in turn precede increased fall risk. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Trends in physical activity, health-related fitness, and gross motor skills in children during a two-year comprehensive school physical activity program.

    PubMed

    Brusseau, Timothy A; Hannon, James C; Fu, You; Fang, Yi; Nam, Kahyun; Goodrum, Sara; Burns, Ryan D

    2018-01-06

    The purpose of this study was to examine the trends in school-day step counts, health-related fitness, and gross motor skills during a two-year Comprehensive School Physical Activity Program (CSPAP) in children. Longitudinal trend analysis. Participants were a sample of children (N=240; mean age=7.9±1.2 years; 125 girls, 115 boys) enrolled in five low-income schools. Outcome variables consisted of school-day step counts, Body Mass Index (BMI), estimated VO2peak, and gross motor skill scores assessed using the Test of Gross Motor Development-3rd Edition (TGMD-3). Measures were collected over a two-year CSPAP including a baseline and several follow-up time points. Multilevel mixed effects models were employed to examine time trends for each continuous outcome variable. Markov-chain transition models were employed to examine time trends for derived binary variables for school-day steps, BMI, and estimated VO2peak. There were statistically significant time coefficients for estimated VO2peak (b=1.10 mL/kg/min, 95% C.I. [0.35 mL/kg/min–2.53 mL/kg/min], p=0.009) and TGMD-3 scores (b=7.8, 95% C.I. [6.2–9.3], p<0.001). There were no significant changes over time for school-day step counts or BMI. Boys had a greater change in the odds of achieving a step count associated with 30 min of school-day MVPA (OR=1.25, 95% C.I. [1.02–1.48], p=0.044). A two-year CSPAP related to increases in cardio-respiratory endurance and TGMD-3 scores. School-day steps and BMI were primarily stable across the two-year intervention. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  11. A General Approach to Defining Latent Growth Components

    ERIC Educational Resources Information Center

    Mayer, Axel; Steyer, Rolf; Mueller, Horst

    2012-01-01

    We present a 3-step approach to defining latent growth components. In the first step, a measurement model with at least 2 indicators for each time point is formulated to identify measurement error variances and obtain latent variables that are purged from measurement error. In the second step, we use contrast matrices to define the latent growth…

  12. Development of a time-dependent incompressible Navier-Stokes solver based on a fractional-step method

    NASA Technical Reports Server (NTRS)

    Rosenfeld, Moshe

    1990-01-01

    The main goals are the development, validation, and application of a fractional step solution method of the time-dependent incompressible Navier-Stokes equations in generalized coordinate systems. A solution method that combines a finite volume discretization with a novel choice of the dependent variables and a fractional step splitting to obtain accurate solutions in arbitrary geometries is extended to include more general situations, including cases with moving grids. The numerical techniques are enhanced to gain efficiency and generality.

  13. Operational flood control of a low-lying delta system using large time step Model Predictive Control

    NASA Astrophysics Data System (ADS)

    Tian, Xin; van Overloop, Peter-Jules; Negenborn, Rudy R.; van de Giesen, Nick

    2015-01-01

    The safety of low-lying deltas is threatened not only by riverine flooding but by storm-induced coastal flooding as well. For the purpose of flood control, these deltas are mostly protected in a man-made environment, where dikes, dams and other adjustable infrastructures, such as gates, barriers and pumps are widely constructed. Instead of always reinforcing and heightening these structures, it is worth considering making the most of the existing infrastructure to reduce the damage and manage the delta in an operational and overall way. In this study, an advanced real-time control approach, Model Predictive Control, is proposed to operate these structures in the Dutch delta system (the Rhine-Meuse delta). The application covers non-linearity in the dynamic behavior of the water system and the structures. To deal with the non-linearity, a linearization scheme is applied which directly uses the gate height instead of the structure flow as the control variable. Given the fact that MPC needs to compute control actions in real-time, we address issues regarding computational time. A new large time step scheme is proposed in order to save computation time, in which different control variables can have different control time steps. Simulation experiments demonstrate that Model Predictive Control with the large time step setting is able to control a delta system better and much more efficiently than the conventional operational schemes.

  14. Balance confidence is related to features of balance and gait in individuals with chronic stroke

    PubMed Central

    Schinkel-Ivy, Alison; Wong, Jennifer S.; Mansfield, Avril

    2016-01-01

    Reduced balance confidence is associated with impairments in features of balance and gait in individuals with sub-acute stroke. However, an understanding of these relationships in individuals at the chronic stage of stroke recovery is lacking. This study aimed to quantify relationships between balance confidence and specific features of balance and gait in individuals with chronic stroke. Participants completed a balance confidence questionnaire and clinical balance assessment (quiet standing, walking, and reactive stepping) at 6 months post-discharge from inpatient stroke rehabilitation. Regression analyses were performed using balance confidence as a predictor variable and quiet standing, walking, and reactive stepping outcome measures as the dependent variables. Walking velocity was positively correlated with balance confidence, while medio-lateral centre of pressure excursion (quiet standing) and double support time, step width variability, and step time variability (walking) were negatively correlated with balance confidence. This study provides insight into the relationships between balance confidence and balance and gait measures in individuals with chronic stroke, suggesting that individuals with low balance confidence exhibited impaired control of quiet standing as well as walking characteristics associated with cautious gait strategies. Future work should identify the direction of these relationships to inform community-based stroke rehabilitation programs for individuals with chronic stroke, and determine the potential utility of incorporating interventions to improve balance confidence into these programs. PMID:27955809

  15. Kinetic measures of restabilisation during volitional stepping reveal age-related alterations in the control of mediolateral dynamic stability.

    PubMed

    Singer, Jonathan C; McIlroy, William E; Prentice, Stephen D

    2014-11-07

    Research examining age-related changes in dynamic stability during stepping has recognised the importance of the restabilisation phase, subsequent to foot-contact. While regulation of the net ground reaction force (GRFnet) line of action is believed to influence dynamic stability during steady-state locomotion, such control during restabilisation remains unknown. This work explored the origins of age-related decline in mediolateral dynamic stability by examining the line of action of GRFnet relative to the centre of mass (COM) during restabilisation following voluntary stepping. Healthy younger and older adults (n=20 per group) performed three single-step tasks (varying speed and step placement), altering the challenge to stability control. Age-related differences in magnitude and intertrial variability of the angle of divergence of GRFnet line of action relative to the COM were quantified, along with the peak mediolateral and vertical GRFnet components. The angle of divergence was further examined at discrete points during restabilisation, to uncover events of potential importance to stability control. Older adults exhibited a reduced angle of divergence throughout restabilisation. Temporal and spatial constraints on stepping increased the magnitude and intertrial variability of the angle of divergence, although not differentially among the older adults. Analysis of the time-varying angle of divergence revealed age-related reductions in magnitude, with increases in timing and intertrial timing variability during the later phase of restabilisation. This work further supports the idea that age-related challenges in lateral stability control emerge during restabilisation. Age-related alterations during the later phase of restabilisation may signify challenges with reactive control. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Improvement of the variable storage coefficient method with water surface gradient as a variable

    USDA-ARS?s Scientific Manuscript database

    The variable storage coefficient (VSC) method has been used for streamflow routing in continuous hydrological simulation models such as the Agricultural Policy/Environmental eXtender (APEX) and the Soil and Water Assessment Tool (SWAT) for more than 30 years. APEX operates on a daily time step and ...

  17. CAN STABILITY REALLY PREDICT AN IMPENDING SLIP-RELATED FALL AMONG OLDER ADULTS?

    PubMed Central

    Yang, Feng; Pai, Yi-Chung

    2015-01-01

    The primary purpose of this study was to systematically evaluate and compare the predictive power for falls of a battery of stability indices obtained during normal walking among community-dwelling older adults. One hundred and eighty-seven community-dwelling older adults participated in the study. After walking regularly for 20 strides on a walkway, participants were subjected to an unannounced slip during gait under the protection of a safety harness. Full-body kinematics and kinetics were monitored during walking using a motion capture system synchronized with force plates. Stability variables, including the feasible-stability-region measurement, the margin of stability, the maximum Floquet multiplier, the Lyapunov exponents (short- and long-term), and the variability of gait parameters (including step length, step width, and step time), were calculated for each subject. Accuracy of predicting the slip outcome (fall vs. recovery) was examined for each stability variable using logistic regression. Results showed that the feasible-stability-region measurement predicted fall incidence among these subjects with the highest accuracy (68.4%). Except for step width (with an accuracy of 60.2%), no other stability variable could differentiate fallers from non-fallers in the sample studied. The findings from the present study could provide guidance for identifying individuals at increased risk of falling using the feasible-stability-region measurement or the variability of step width. PMID:25458148

  18. [Predicting the outcome in severe injuries: an analysis of 2069 patients from the trauma register of the German Society of Traumatology (DGU)].

    PubMed

    Rixen, D; Raum, M; Bouillon, B; Schlosser, L E; Neugebauer, E

    2001-03-01

    On hospital admission, numerous variables are documented for multiple trauma patients. The value of these variables for predicting outcome is discussed controversially. The aim was to determine, already on admission, the probability of death of multiple trauma patients. Thus, a multivariate probability model was developed based on data obtained from the trauma registry of the Deutsche Gesellschaft für Unfallchirurgie (DGU). On hospital admission, the DGU trauma registry prospectively collects more than 30 variables. In the first step of the analysis, those variables were selected that, according to the literature, were assumed to be clinical predictors of outcome. In the second step, a univariate analysis of these variables was performed. In the third step, a multivariate logistic regression was performed for all primary variables with univariate significance in outcome prediction, and a multivariate prognostic model was developed. 2069 patients from 20 hospitals were prospectively included in the trauma registry from 01.01.1993 to 31.12.1997 (age 39 +/- 19 years; 70.0% males; ISS 22 +/- 13; 18.6% lethality). Of the more than 30 initially documented variables, age, GCS, ISS, base excess (BE) and prothrombin time were the most important prognostic factors for predicting the probability of death, P(death). The following prognostic model was developed: $P(\text{death}) = \frac{1}{1 + e^{-(k + \beta_1\,\text{age} + \beta_2\,\text{GCS} + \beta_3\,\text{ISS} + \beta_4\,\text{BE} + \beta_5\,\text{prothrombin time})}}$, where k = -0.1551, $\beta_1$ = 0.0438 (p < 0.0001), $\beta_2$ = -0.2067 (p < 0.0001), $\beta_3$ = 0.0252 (p = 0.0071), $\beta_4$ = -0.0840 (p < 0.0001) and $\beta_5$ = -0.0359 (p < 0.0001). Each of the five variables contributed significantly to the multifactorial model. These data show that age, GCS, ISS, base excess and prothrombin time are potentially important predictors for the early identification of multiple trauma patients with a high risk of death. Since base excess and prothrombin time are the only variables of this multifactorial model that can be therapeutically influenced, they might make it possible to better guide early and aggressive therapy.
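    Purely as an illustration, the reported model can be evaluated as follows; the patient values below are hypothetical, and the prothrombin time is assumed to be expressed as a percentage (Quick value), which may differ from the registry's convention.

        # Evaluate the published prognostic model for one hypothetical patient.
        import math

        def p_death(age, gcs, iss, be, prothrombin):
            """Probability of death from the DGU logistic model (coefficients from the abstract)."""
            k = -0.1551
            z = (k + 0.0438 * age - 0.2067 * gcs + 0.0252 * iss
                 - 0.0840 * be - 0.0359 * prothrombin)
            return 1.0 / (1.0 + math.exp(-z))

        # Hypothetical patient: 45 years, GCS 10, ISS 29, base excess -6 mmol/L,
        # prothrombin time 60 (assumed to be a Quick percentage).
        print(f"predicted probability of death: {p_death(45, 10, 29, -6, 60):.2f}")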

  19. Nine time steps: ultra-fast statistical consistency testing of the Community Earth System Model (pyCECT v3.0)

    NASA Astrophysics Data System (ADS)

    Milroy, Daniel J.; Baker, Allison H.; Hammerling, Dorit M.; Jessup, Elizabeth R.

    2018-02-01

    The Community Earth System Model Ensemble Consistency Test (CESM-ECT) suite was developed as an alternative to requiring bitwise identical output for quality assurance. This objective test provides a statistical measurement of consistency between an accepted ensemble created by small initial temperature perturbations and a test set of CESM simulations. In this work, we extend the CESM-ECT suite with an inexpensive and robust test for ensemble consistency that is applied to Community Atmospheric Model (CAM) output after only nine model time steps. We demonstrate that adequate ensemble variability is achieved with instantaneous variable values at the ninth step, despite rapid perturbation growth and heterogeneous variable spread. We refer to this new test as the Ultra-Fast CAM Ensemble Consistency Test (UF-CAM-ECT) and demonstrate its effectiveness in practice, including its ability to detect small-scale events and its applicability to the Community Land Model (CLM). The new ultra-fast test facilitates CESM development, porting, and optimization efforts, particularly when used to complement information from the original CESM-ECT suite of tools.

  20. Multiple dual mode counter-current chromatography with variable duration of alternating phase elution steps.

    PubMed

    Kostanyan, Artak E; Erastov, Andrey A; Shishilov, Oleg N

    2014-06-20

    The multiple dual mode (MDM) counter-current chromatography separation process consists of a succession of two isocratic counter-current steps and is characterized by the shuttle (forward and back) transport of the sample in chromatographic columns. In this paper, an improved MDM method based on a variable duration of the alternating phase elution steps has been developed and validated. MDM separation processes with variable duration of the phase elution steps are analysed. Based on the cell model, analytical solutions are developed for impulse and non-impulse sample loading at the beginning of the column. Using the analytical solutions, a calculation program is presented to facilitate the simulation of MDM with variable duration of the phase elution steps, which can be used to select optimal process conditions for the separation of a given feed mixture. Two options of MDM separation are analysed: (1) one-step solute elution, in which the separation is conducted so that the sample is transferred forward and back with the upper and lower phases inside the column until the desired separation of the components is reached, and each individual component then elutes entirely within one step; and (2) multi-step solute elution, in which the fractions of individual components are collected over several steps. It is demonstrated that proper selection of the duration of the individual cycles (phase flow times) can greatly increase the separation efficiency of CCC columns. Experiments were carried out using model mixtures of compounds from the GUESSmix with hexane/ethyl acetate/methanol/water solvent systems. The experimental results are compared with the predictions of the theory, and a good agreement between theory and experiment has been demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Changes in step-width during dual-task walking predicts falls.

    PubMed

    Nordin, E; Moe-Nilssen, R; Ramnemark, A; Lundin-Olsson, L

    2010-05-01

    The aim was to evaluate whether gait pattern changes between single- and dual-task conditions were associated with the risk of falling in older people. The dual-task cost (DTC) of 230 community-living, physically independent people, 75 years or older, was determined with an electronic walkway. Participants were followed up each month for 1 year to record falls. Mean and variability measures of gait characteristics for 5 dual-task conditions were compared to single-task walking for each participant. Almost half (48%) of the participants fell at least once during follow-up. The risk of falling increased in individuals whose DTC for performing a subtraction task showed a change in mean step-width compared to single-task walking. The risk of falling decreased in individuals whose DTC for carrying a cup and saucer showed a change, compared to single-task walking, in mean step-width, mean step-time, or step-length variability. The degree of change in gait characteristics related to a change in the risk of falling differed between measures. Prognostic guidance for fall risk was found for these DTCs in mean step-width, with a negative likelihood ratio of 0.5 and a positive likelihood ratio of 2.3, respectively. The findings suggest that changes in step-width, step-time, and step-length with dual tasking may be related to future risk of falling. Depending on the nature of the second task, DTC may indicate either an increased risk of falling or a protective strategy to avoid falling. Copyright 2010. Published by Elsevier B.V.

  2. An improved VSS NLMS algorithm for active noise cancellation

    NASA Astrophysics Data System (ADS)

    Sun, Yunzhuo; Wang, Mingjiang; Han, Yufei; Zhang, Congyan

    2017-08-01

    In this paper, an improved variable step size NLMS algorithm is proposed. NLMS has a fast convergence rate and low steady-state error compared with other traditional adaptive filtering algorithms. However, there is a trade-off between convergence speed and steady-state error that affects the performance of the NLMS algorithm. Here, we propose a new variable step size NLMS algorithm that dynamically changes the step size according to the current error and the iteration count. The proposed algorithm has a simple formulation and easily set parameters, and effectively resolves this trade-off in NLMS. The simulation results show that the proposed algorithm simultaneously achieves good tracking ability, a fast convergence rate and low steady-state error.
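    A minimal sketch of the general idea, assuming a simple error-dependent step-size rule (the exact update rule of the proposed algorithm is not reproduced here); the system-identification setup and all constants are illustrative.

        # NLMS system identification with a variable step size that shrinks as the error decreases.
        import numpy as np

        def vss_nlms(x, d, order=16, mu_max=1.0, mu_min=0.05, eps=1e-6):
            w = np.zeros(order)
            for n in range(order, len(x)):
                u = x[n - order + 1:n + 1][::-1]         # most recent samples first
                e = d[n] - w @ u                         # a-priori error
                # Variable step size: large while the error is large, small near convergence.
                mu = np.clip(mu_max * e * e / (e * e + 0.01), mu_min, mu_max)
                w += mu * e * u / (u @ u + eps)          # normalized LMS update
            return w

        rng = np.random.default_rng(0)
        x = rng.standard_normal(5000)
        h_true = rng.standard_normal(16) * np.exp(-np.arange(16) / 4)   # unknown system
        d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
        w = vss_nlms(x, d)
        print(f"relative misalignment: {np.linalg.norm(w - h_true) / np.linalg.norm(h_true):.3e}")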

  3. Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Martini, Michael C.

    2011-01-01

    A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th-order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge-Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include the J2 spherical harmonic for central-body gravitation. The algorithm includes a step size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (an order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time. The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables, sets up initial conditions and integrates; (2) a routine that calculates the system's reduced derivatives using recurrence relations for quotients and products; and (3) a routine that determines the step size and sums the series. The order of accuracy used in a trajectory calculation is arbitrary and can be set by the user. The algorithm directly calculates the motion of other planetary bodies and does not require ephemeris files (except to start the calculation). The code also runs with Taylor series and Runge-Kutta used interchangeably for different phases of a mission.
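    The sketch below is not the SNAP implementation; it only shows the flavour of variable-step Taylor series integration for the scalar test equation y' = -y^2, where the coefficients come from a Cauchy-product recurrence and the step size is chosen from the magnitude of the last retained term. The order, tolerance, and test equation are assumptions.

        # Variable-step Taylor series integration of y' = -y^2 (exact solution 1/(1+t)).
        import numpy as np

        def taylor_coeffs(y, order):
            """Taylor coefficients of the local solution of y' = -y^2."""
            a = np.zeros(order + 1)
            a[0] = y
            for k in range(order):
                conv = np.dot(a[:k + 1], a[k::-1])     # Cauchy product (y*y)_k
                a[k + 1] = -conv / (k + 1)
            return a

        def integrate(y0, t_end, order=12, tol=1e-14):
            t, y, n_steps = 0.0, y0, 0
            while t < t_end:
                a = taylor_coeffs(y, order)
                # Step size from the last retained term: |a_N| * h^N ~ tol.
                h = (tol / max(abs(a[order]), 1e-300)) ** (1.0 / order)
                h = min(h, t_end - t)
                y = sum(a[k] * h ** k for k in range(order + 1))   # evaluate the series
                t += h
                n_steps += 1
            return y, n_steps

        y, n = integrate(1.0, 10.0)
        print(f"{n} steps, y(10) = {y:.12f}, exact = {1 / 11:.12f}")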

  4. Faster and exact implementation of the continuous cellular automaton for anisotropic etching simulations

    NASA Astrophysics Data System (ADS)

    Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Gadea, R.; Sato, K.

    2011-02-01

    The current success of the continuous cellular automata for the simulation of anisotropic wet chemical etching of silicon in microengineering applications is based on a relatively fast, approximate, constant time stepping implementation (CTS), whose accuracy against the exact algorithm—a computationally slow, variable time stepping implementation (VTS)—has not been previously analyzed in detail. In this study we show that the CTS implementation can generate moderately wrong etch rates and overall etching fronts, thus justifying the presentation of a novel, exact reformulation of the VTS implementation based on a new state variable, referred to as the predicted removal time (PRT), and the use of a self-balanced binary search tree that enables storage and efficient access to the PRT values in each time step in order to quickly remove the corresponding surface atom/s. The proposed PRT method reduces the simulation cost of the exact implementation from $O(N^{5/3})$ to $O(N^{3/2} \log N)$ without introducing any model simplifications. This enables more precise simulations (only limited by numerical precision errors) with affordable computational times that are similar to the less precise CTS implementation and even faster for low-reactivity systems.
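    Conceptual sketch only: a heap with lazy invalidation stands in for the paper's self-balanced binary search tree, and the rate updates of neighbouring sites are random placeholders rather than a real etching model.

        # Event-driven removal of surface sites ordered by predicted removal time (PRT).
        import heapq
        import random

        random.seed(0)
        n_sites = 10
        rate = {i: random.uniform(0.1, 1.0) for i in range(n_sites)}    # etch rate per site
        t_now = 0.0
        valid_prt = {i: t_now + 1.0 / rate[i] for i in range(n_sites)}  # current PRT per site
        heap = [(t, i) for i, t in valid_prt.items()]
        heapq.heapify(heap)
        removed = set()

        while heap:
            t_event, site = heapq.heappop(heap)
            if site in removed or t_event != valid_prt[site]:
                continue                              # lazily discard outdated heap entries
            t_now = t_event                           # variable time step: jump to the event
            removed.add(site)
            # Removing a site changes the local geometry, hence the rates (and PRTs) of its
            # neighbours; here the new rates are simply re-drawn at random.
            for s in (site - 1, site + 1):
                if s in rate and s not in removed:
                    rate[s] = random.uniform(0.1, 1.0)
                    valid_prt[s] = t_now + 1.0 / rate[s]
                    heapq.heappush(heap, (valid_prt[s], s))

        print(f"all {len(removed)} sites removed by t = {t_now:.2f}")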

  5. Using gait parameters to detect fatigue and responses to ice slurry during prolonged load carriage.

    PubMed

    Tay, Cheryl S; Lee, Jason K W; Teo, Ya S; Foo, Phildia Q Z; Tan, Pearl M S; Kong, Pui W

    2016-01-01

    This study examined (1) whether changes in gait characteristics could indicate the exertional heat stress experienced during prolonged load carriage, and (2) whether gait characteristics were responsive to a heat mitigation strategy. In an environmental chamber replicating tropical climatic conditions (ambient temperature 32°C, 70% relative humidity), 16 males aged 21.8 (1.2) years performed two trials of a work-rest cycle protocol consisting of two bouts of 4-km treadmill walks with a 30-kg load at 5.3 km/h separated by a 15-min rest period. Ice slurry (ICE) or room-temperature water (29°C) as a control (CON) was provided in 200-ml aliquots. The fluids were given 10 min before the start, at the 15th and 30th min of each work cycle, and during each rest period. Spatio-temporal gait characteristics were obtained at the start and end of each work-rest cycle using a floor-based photocell system (OptoGait) and a high-speed video camera at 120 Hz. Repeated-measures analysis of variance (trial×time) showed that with time, step width decreased (p=.024) while the percentage of crossover steps increased (p=.008) from the 40th min onwards. Reduced stance time variability (-11.1%, p=.029), step width variability (-8.2%, p=.001), and percentage of crossover steps (-18.5%, p=.010) were observed in ICE compared with CON. No differences in step length and most temporal variables were found. In conclusion, changes in frontal-plane gait characteristics may indicate exertional heat stress during prolonged load carriage, and some of these changes may be mitigated with ice slurry ingestion. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Development of a time-dependent incompressible Navier-Stokes solver based on a fractional-step method

    NASA Technical Reports Server (NTRS)

    Rosenfeld, Moshe

    1990-01-01

    The development, validation and application of a fractional step solution method of the time-dependent incompressible Navier-Stokes equations in generalized coordinate systems are discussed. A solution method that combines a finite-volume discretization with a novel choice of the dependent variables and a fractional step splitting to obtain accurate solutions in arbitrary geometries was previously developed for fixed-grids. In the present research effort, this solution method is extended to include more general situations, including cases with moving grids. The numerical techniques are enhanced to gain efficiency and generality.

  7. Sub-daily Statistical Downscaling of Meteorological Variables Using Neural Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Jitendra; Brooks, Bjørn-Gustaf J.; Thornton, Peter E

    2012-01-01

    A new open source neural network temporal downscaling model is described and tested using CRU-NCEP reanalysis and CCSM3 climate model output. We downscaled multiple meteorological variables in tandem from monthly to sub-daily time steps while also retaining consistent correlations between variables. We found that our feed-forward, error-backpropagation approach produced synthetic 6-hourly meteorology with biases no greater than 0.6% across all variables and variance that was accurate within 1% for all variables except atmospheric pressure, wind speed, and precipitation. Correlations between downscaled output and the expected (original) monthly means exceeded 0.99 for all variables, which indicates that this approach would work well for generating atmospheric forcing data consistent with mass- and energy-conserved GCM output. Our neural network approach performed well for variables that had correlations to other variables of about 0.3 and better, and its skill was increased by downscaling multiple correlated variables together. Poor replication of precipitation intensity, however, required further post-processing in order to obtain the expected probability distribution. The concurrence of precipitation events with expected changes in subordinate variables (e.g., less incident shortwave radiation during precipitation events) was nearly as consistent in the downscaled data as in the training data, with probabilities that differed by no more than 6%. Our downscaling approach requires training data at the target time step and relies on a weak assumption that climate variability in the extrapolated data is similar to variability in the training data.
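    A toy feed-forward, error-backpropagation network in NumPy is sketched below to make the approach concrete; the synthetic predictors and target stand in for the coarse- and fine-scale meteorology, and the network size and learning rate are assumptions.

        # One-hidden-layer network trained by backpropagation on synthetic data.
        import numpy as np

        rng = np.random.default_rng(0)
        X = np.column_stack([rng.uniform(-1, 1, 2000),      # e.g. monthly mean anomaly
                             rng.uniform(0, 1, 2000)])      # e.g. time of day (0..1)
        y = (X[:, 0] + 0.5 * np.sin(2 * np.pi * X[:, 1])
             + 0.05 * rng.normal(size=2000))[:, None]       # sub-daily target

        n_in, n_hidden, n_out, lr = 2, 16, 1, 0.05
        W1, b1 = rng.normal(0, 0.5, (n_in, n_hidden)), np.zeros(n_hidden)
        W2, b2 = rng.normal(0, 0.5, (n_hidden, n_out)), np.zeros(n_out)

        for epoch in range(2000):
            h = np.tanh(X @ W1 + b1)                 # forward pass
            pred = h @ W2 + b2
            err = pred - y                           # error to backpropagate
            dW2 = h.T @ err / len(X)                 # gradients of mean squared error
            db2 = err.mean(axis=0)
            dh = (err @ W2.T) * (1 - h ** 2)
            dW1 = X.T @ dh / len(X)
            db1 = dh.mean(axis=0)
            W1 -= lr * dW1
            b1 -= lr * db1
            W2 -= lr * dW2
            b2 -= lr * db2

        print(f"final training RMSE: {np.sqrt(np.mean(err ** 2)):.3f}")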

  8. Dokan Hydropower Reservoir Operation under Stochastic Conditions as Regards the Inflows and the Energy Demands

    NASA Astrophysics Data System (ADS)

    Izat Rashed, Ghamgeen

    2018-03-01

    This paper presents a way of obtaining operating rules, defined per time step, for the management of a large reservoir with an associated peak hydropower plant. The rules take the form of non-linear regression equations that link a decision variable (here the water volume in the reservoir at the end of the time step) to several parameters influencing it. The paper considers the Dokan hydroelectric development in the Kurdistan Region of Iraq, for which operation data are available. It is shown that both the monthly average inflows and the monthly power demands are random variables. A deterministic dynamic programming model that minimizes the total sum of squared differences between the demanded energy and the generated energy is run with a multitude of annual scenarios of inflows and monthly required energies. The resulting operating rules allow efficient and safe management of the operation, provided the forecast of the inflow and of the energy demand for the next time step is known quite accurately.

  9. Operation of Dokan Reservoir under Stochastic Conditions as Regards the Inflows and the Energy Demands

    NASA Astrophysics Data System (ADS)

    Rashed, G. I.

    2018-02-01

    This paper presents a way of obtaining operating rules, defined per time step, for the management of a large reservoir with an associated peak hydropower plant. The rules take the form of non-linear regression equations that link a decision variable (here the water volume in the reservoir at the end of the time step) to several parameters influencing it. The paper considers the Dokan hydroelectric development in the Kurdistan Region of Iraq, for which operation data are available. It is shown that both the monthly average inflows and the monthly power demands are random variables. A deterministic dynamic programming model that minimizes the total sum of squared differences between the demanded energy and the generated energy is run with a multitude of annual scenarios of inflows and monthly required energies. The resulting operating rules allow efficient and safe management of the operation, provided the forecast of the inflow and of the energy demand for the next time step is known quite accurately.
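    The sketch below illustrates the dynamic programming idea described in the two records above, with storage discretized and energy taken as simply proportional to release (head effects ignored); all numbers are synthetic.

        # Backward dynamic programming over discretized end-of-month storage volumes,
        # minimizing the sum of squared deviations between generated and demanded energy.
        import numpy as np

        inflow = np.array([60, 80, 120, 150, 90, 40])    # hm^3 per month (synthetic)
        demand = np.array([70, 70, 90, 110, 100, 80])    # energy demand, arbitrary units
        k_energy = 1.0                                   # energy per unit of release
        s_levels = np.arange(0, 301, 10)                 # discretized storage (hm^3)
        s_start = 150

        T, INF = len(inflow), 1e18
        cost = np.full((T + 1, len(s_levels)), INF)
        best_next = np.zeros((T, len(s_levels)), dtype=int)
        cost[T, :] = 0.0                                 # no terminal penalty

        for t in range(T - 1, -1, -1):
            for i, s in enumerate(s_levels):
                for j, s_next in enumerate(s_levels):
                    release = s + inflow[t] - s_next
                    if release < 0:                      # cannot release more than available
                        continue
                    total = (demand[t] - k_energy * release) ** 2 + cost[t + 1, j]
                    if total < cost[t, i]:
                        cost[t, i], best_next[t, i] = total, j

        # Recover the operating rule (end-of-month storage targets) from the start volume.
        i0 = int(np.argmin(np.abs(s_levels - s_start)))
        i, plan = i0, []
        for t in range(T):
            i = best_next[t, i]
            plan.append(int(s_levels[i]))
        print("end-of-month storage targets (hm^3):", plan)
        print(f"total squared energy deviation: {cost[0, i0]:.1f}")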

  10. Increased walking variability in elderly persons with congestive heart failure

    NASA Technical Reports Server (NTRS)

    Hausdorff, J. M.; Forman, D. E.; Ladin, Z.; Goldberger, A. L.; Rigney, D. R.; Wei, J. Y.

    1994-01-01

    OBJECTIVES: To determine the effects of congestive heart failure on a person's ability to walk at a steady pace while ambulating at a self-determined rate. SETTING: Beth Israel Hospital, Boston, a primary and tertiary teaching hospital, and a social activity center for elderly adults living in the community. PARTICIPANTS: Eleven elderly subjects (aged 70-93 years) with well compensated congestive heart failure (NY Heart Association class I or II), seven elderly subjects (aged 70-79 years) without congestive heart failure, and 10 healthy young adult subjects (aged 20-30 years). MEASUREMENTS: Subjects walked for 8 minutes on level ground at their own selected walking rate. Footswitches were used to measure the time between steps. Step rate (steps/minute) and step rate variability were calculated for the entire walking period, for 30 seconds during the first minute of the walk, for 30 seconds during the last minute of the walk, and for the 30-second period when each subject's step rate variability was minimal. Group means and 5% and 95% confidence intervals were computed. MAIN RESULTS: All measures of walking variability were significantly increased in the elderly subjects with congestive heart failure, intermediate in the elderly controls, and lowest in the young subjects. There was no overlap between the three groups using the minimal 30-second variability (elderly CHF vs elderly controls: P < 0.001, elderly controls vs young: P < 0.001), and no overlap between elderly subjects with and without congestive heart failure when using the overall variability. For all four measures, there was no overlap in any of the confidence intervals, and all group means were significantly different (P < 0.05).

  11. A point implicit time integration technique for slow transient flow problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.

    2015-05-01

    We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly, while the rest of the information related to the same or other variables is handled explicitly. The method does not require implicit iteration; instead it advances the solution in time in a similar spirit to explicit methods, except that it involves a few additional function evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration in which one would like to perform time integration with very large time steps. Because the method can be time-inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems governed by scalar flow equations or systems of flow equations. Our findings indicate that the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
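    A minimal sketch of the point implicit idea under simplifying assumptions (a linear stiff relaxation term treated implicitly cell by cell, with explicit diffusion for the spatial coupling); this is not the authors' scheme or test problem.

        # Point implicit update: the stiff local term is implicit (closed-form, per cell),
        # the spatial coupling is explicit, so no global implicit solve or iteration is needed.
        import numpy as np

        nx, dx, D = 50, 0.1, 0.1          # grid size, spacing, diffusion coefficient
        k, u_eq = 1000.0, 1.0             # stiff local relaxation rate and its equilibrium
        dt, n_steps = 0.01, 500           # dt exceeds the explicit stability limit 2/k

        u = np.zeros(nx)
        u[0] = u[-1] = 1.0                # fixed boundary values
        for _ in range(n_steps):
            lap = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2               # explicit coupling
            u[1:-1] = (u[1:-1] + dt * (D * lap + k * u_eq)) / (1.0 + dt * k)

        print(f"max deviation from equilibrium: {np.abs(u - u_eq).max():.2e}")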

  12. Variable Step Integration Coupled with the Method of Characteristics Solution for Water-Hammer Analysis, A Case Study

    NASA Technical Reports Server (NTRS)

    Turpin, Jason B.

    2004-01-01

    One-dimensional water-hammer modeling involves the solution of two coupled non-linear hyperbolic partial differential equations (PDEs). These equations result from applying the principles of conservation of mass and momentum to flow through a pipe, usually together with the assumption that the speed at which pressure waves propagate through the pipe is constant. In order to solve these equations for the quantities of interest (i.e., pressures and flow rates), they must first be converted to a system of ordinary differential equations (ODEs), either by approximating the spatial derivative terms with numerical techniques or by using the Method of Characteristics (MOC). The MOC approach is ideal in that no numerical approximation errors are introduced in converting the original system of PDEs into an equivalent system of ODEs. Unfortunately, the resulting system of ODEs is bound by a time step constraint, so that when integrating the equations the solution can only be obtained at fixed time intervals. If the fluid system to be modeled also contains dynamic components (i.e., components that are best modeled by a system of ODEs), it may be necessary to take extremely small time steps during certain parts of the simulation in order to achieve stability and/or accuracy in the solution. Taken together, the fixed time step constraint imposed by the MOC and the occasional need for extremely small time steps to obtain stability and/or accuracy can greatly increase simulation run times. As one solution to this problem, a method for combining variable step integration (VSI) algorithms with the MOC was developed for modeling water-hammer in systems with highly dynamic components. A case study is presented in which reverse flow through a dual-flapper check valve introduces a water-hammer event. The predicted pressure responses upstream of the check valve are compared with test data.
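    To make the MOC constraint concrete, the sketch below implements the standard fixed-grid C+/C- characteristic update for a single pipe with an upstream reservoir and an instantaneously closed downstream valve; the pipe parameters are illustrative and unrelated to the case study.

        # Fixed-grid Method of Characteristics for water hammer in a single pipe.
        import numpy as np

        L, a, D, f = 600.0, 1200.0, 0.5, 0.018   # length (m), wave speed (m/s), diameter (m), friction factor
        g = 9.81
        A = np.pi * D ** 2 / 4.0
        N = 30                                    # number of reaches (N + 1 nodes)
        dx = L / N
        dt = dx / a                               # the MOC fixed time step constraint
        B = a / (g * A)
        R = f * dx / (2.0 * g * D * A ** 2)

        H_res, Q0 = 100.0, 0.3                    # reservoir head (m), initial flow (m^3/s)
        H = H_res - np.arange(N + 1) * R * Q0 * abs(Q0)   # steady head line with friction
        Q = np.full(N + 1, Q0)
        H_valve_max = H[-1]

        for _ in range(int(2.0 / dt)):            # ~2 s after instantaneous valve closure
            CP = H[:-1] + B * Q[:-1] - R * Q[:-1] * np.abs(Q[:-1])   # C+ from node i-1
            CM = H[1:] - B * Q[1:] + R * Q[1:] * np.abs(Q[1:])       # C- from node i+1
            Hn, Qn = np.empty_like(H), np.empty_like(Q)
            Hn[1:-1] = 0.5 * (CP[:-1] + CM[1:])                      # interior nodes
            Qn[1:-1] = (CP[:-1] - CM[1:]) / (2.0 * B)
            Hn[0], Qn[0] = H_res, (H_res - CM[0]) / B                # constant-head reservoir
            Qn[-1], Hn[-1] = 0.0, CP[-1]                             # closed valve (Q = 0)
            H, Q = Hn, Qn
            H_valve_max = max(H_valve_max, H[-1])

        print(f"Joukowsky head rise a*V0/g = {a * (Q0 / A) / g:.1f} m")
        print(f"simulated peak head rise at the valve = {H_valve_max - H_res:.1f} m")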

  13. Validity of the Instrumented Push and Release Test to Quantify Postural Responses in Persons With Multiple Sclerosis.

    PubMed

    El-Gohary, Mahmoud; Peterson, Daniel; Gera, Geetanjali; Horak, Fay B; Huisinga, Jessie M

    2017-07-01

    To test the validity of wearable inertial sensors to provide objective measures of postural stepping responses to the push and release clinical test in people with multiple sclerosis. Cross-sectional study. University medical center balance disorder laboratory. Total sample N=73; persons with multiple sclerosis (PwMS) n=52; healthy controls n=21. Stepping latency, time and number of steps required to reach stability, and initial step length were calculated using 3 inertial measurement units placed on participants' lumbar spine and feet. Correlations between inertial sensor measures and measures obtained from the laboratory-based systems were moderate to strong and statistically significant for all variables: time to release (r=.992), latency (r=.655), time to stability (r=.847), time of first heel strike (r=.665), number of steps (r=.825), and first step length (r=.592). Compared with healthy controls, PwMS demonstrated a longer time to stability and required a larger number of steps to reach stability. The instrumented push and release test is a valid measure of postural responses in PwMS and could be used as a clinical outcome measure for patient care decisions or for clinical trials aimed at improving postural control in PwMS. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  14. Training to walk amid uncertainty with Re-Step: measurements and changes with perturbation training for hemiparesis and cerebral palsy.

    PubMed

    Bar-Haim, Simona; Harries, Netta; Hutzler, Yeshayahu; Belokopytov, Mark; Dobrov, Igor

    2013-09-01

    To describe Re-Step™, a novel mechatronic shoe system that measures center of pressure (COP) gait parameters and the complexity of COP dispersion while walking, and to demonstrate these measurements in healthy controls and individuals with hemiparesis and cerebral palsy (CP) before and after perturbation training. The Re-Step™ system was used to induce programmed chaotic perturbations to the feet while walking for 30 min per session, for 36 sessions over 12 weeks of training, in two subjects with hemiparesis and two with CP. Baseline measurements of complexity indices (fractal dimension and approximate entropy) tended to be higher in controls than in those with disabilities, while COP variability, mean and variability of step time, and COP dispersion were lower. After training, the measurement values of the subjects with disabilities tended toward those of the controls, along with a decrease in step time, 10 m walk time, average step time and percentage of double support, and an increased Berg balance score. This pilot trial reveals the feasibility and applicability of this unique measurement and perturbation system for evaluating functional disabilities and changes with interventions to improve walking. Implication for Rehabilitation Walking of individuals with cerebral palsy and hemiparesis following stroke can be viewed in terms of a rigid motor behavior that prevents adaptation to changing environmental conditions. The Re-Step system (a) measures and records linear and non-linear gait parameters during free walking to provide a detailed evaluation of walking disabilities, and (b) is an intervention training modality that applies unexpected perturbations during walking. This perturbation intervention may improve gait and motor functions of individuals with hemiparesis and cerebral palsy.

  15. A new theory for multistep discretizations of stiff ordinary differential equations: Stability with large step sizes

    NASA Technical Reports Server (NTRS)

    Majda, G.

    1985-01-01

    A large set of variable coefficient linear systems of ordinary differential equations which possess two different time scales, a slow one and a fast one, is considered. A small parameter epsilon characterizes the stiffness of these systems. A system of o.d.e.s. in this set is approximated by a general class of multistep discretizations which includes both one-leg and linear multistep methods. Sufficient conditions are determined under which each solution of a multistep method is uniformly bounded, with a bound which is independent of the stiffness of the system of o.d.e.s., when the step size resolves the slow time scale, but not the fast one. This property is called stability with large step sizes. The theory presented lets one compare properties of one-leg methods and linear multistep methods when they approximate variable coefficient systems of stiff o.d.e.s. In particular, it is shown that one-leg methods have better stability properties with large step sizes than their linear multistep counterparts. The theory also allows one to relate the concept of D-stability to the usual notions of stability and stability domains and to the propagation of errors for multistep methods which use large step sizes.
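
    To make the one-leg versus linear-multistep distinction concrete, the sketch below compares the trapezoidal rule with its one-leg twin (the implicit midpoint rule) on a variable-coefficient scalar test equation, using a step size that resolves only the slow scale. This is merely an illustration of the two formulations, not a reproduction of the paper's stability analysis.

    ```python
    import numpy as np

    # Variable-coefficient stiff test equation u'(t) = lam(t) * u(t), eps << 1.
    eps = 1e-3
    lam = lambda t: -(1.0 + 0.9 * np.sin(t)) / eps    # fast scale ~ eps, slow scale ~ 1

    def trapezoidal(u, t, h):
        """Linear multistep form: averages f at t_n and t_{n+1}."""
        return u * (1.0 + 0.5 * h * lam(t)) / (1.0 - 0.5 * h * lam(t + h))

    def one_leg_midpoint(u, t, h):
        """One-leg twin: evaluates f once, at the averaged argument t_{n+1/2}."""
        lm = lam(t + 0.5 * h)
        return u * (1.0 + 0.5 * h * lm) / (1.0 - 0.5 * h * lm)

    h, nsteps = 0.1, 200            # the step resolves the slow O(1) scale only (h >> eps)
    u_tr = u_ol = 1.0
    for k in range(nsteps):
        t = k * h
        u_tr = trapezoidal(u_tr, t, h)
        u_ol = one_leg_midpoint(u_ol, t, h)
    print(abs(u_tr), abs(u_ol))     # both remain bounded for this scalar example
    ```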

  16. A computer software system for the generation of global ocean tides including self-gravitation and crustal loading effects

    NASA Technical Reports Server (NTRS)

    Estes, R. H.

    1977-01-01

    A computer software system is described which computes global numerical solutions of the integro-differential Laplace tidal equations, including dissipation terms and ocean loading and self-gravitation effects, for arbitrary diurnal and semidiurnal tidal constituents. The integration algorithm features a successive approximation scheme for the integro-differential system, with time stepping by forward differences in the time variable and central differences in the spatial variables.

  17. Rivastigmine for gait stability in patients with Parkinson's disease (ReSPonD): a randomised, double-blind, placebo-controlled, phase 2 trial.

    PubMed

    Henderson, Emily J; Lord, Stephen R; Brodie, Matthew A; Gaunt, Daisy M; Lawrence, Andrew D; Close, Jacqueline C T; Whone, A L; Ben-Shlomo, Y

    2016-03-01

    Falls are a frequent and serious complication of Parkinson's disease and are related partly to an underlying cholinergic deficit that contributes to gait and cognitive dysfunction in these patients. Gait dysfunction can lead to an increased variability of gait from one step to another, raising the likelihood of falls. In the ReSPonD trial we aimed to assess whether ameliorating this cholinergic deficit with the acetylcholinesterase inhibitor rivastigmine would reduce gait variability. We did this randomised, double-blind, placebo-controlled, phase 2 trial at the North Bristol NHS Trust Hospital, Bristol, UK, in patients with Parkinson's disease recruited from community and hospital settings in the UK. We included patients who had fallen at least once in the year before enrolment, were able to walk 18 m without an aid, had no previous exposure to an acetylcholinesterase inhibitor, and did not have dementia. Our clinical trials unit randomly assigned (1:1) patients to oral rivastigmine or placebo capsules (both taken twice a day) using a computer-generated randomisation sequence and web-based allocation. Rivastigmine was uptitrated from 3 mg per day to the target dose of 12 mg per day over 12 weeks. Both the trial team and patients were masked to treatment allocation. Masking was achieved with matched placebo capsules and a dummy uptitration schedule. The primary endpoint was difference in step time variability between the two groups at 32 weeks, adjusted for baseline age, cognition, step time variability, and number of falls in the previous year. We measured step time variability with a triaxial accelerometer during an 18 m walking task in three conditions: normal walking, simple dual task with phonemic verbal fluency (walking while naming words beginning with a single letter), and complex dual task switching with phonemic verbal fluency (walking while naming words, alternating between two letters of the alphabet). Analysis was by modified intention to treat; we excluded from the primary analysis patients who withdrew, died, or did not attend the 32 week assessment. This trial is registered with ISRCTN, number 19880883. Between Oct 4, 2012 and March 28, 2013, we enrolled 130 patients and randomly assigned 65 to the rivastigmine group and 65 to the placebo group. At week 32, compared with patients assigned to placebo (59 assessed), those assigned to rivastigmine (55 assessed) had improved step time variability for normal walking (ratio of geometric means 0.72, 95% CI 0.58-0.88; p=0.002) and the simple dual task (0.79; 0.62-0.99; p=0.045). Improvements in step time variability for the complex dual task did not differ between groups (0.81, 0.60-1.09; p=0.17). Gastrointestinal side-effects were more common in the rivastigmine group than in the placebo group (p<0.0001); 20 (31%) patients in the rivastigmine group versus three (5%) in the placebo group had nausea and 15 (17%) versus three (5%) had vomiting. Rivastigmine can improve gait stability and might reduce the frequency of falls. A phase 3 study is needed to confirm these findings and show cost-effectiveness of rivastigmine treatment. Parkinson's UK. Copyright © 2016 Henderson et al. Open Access article distributed under the terms of CC BY. Published by Elsevier Ltd.. All rights reserved.

  18. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    NASA Astrophysics Data System (ADS)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
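
    A toy illustration of the distribution-function idea is sketched below: an assumed exponential within-day intensity distribution is pushed through a simple infiltration-excess rule (constant capacity f_c) and compared with the same rule applied to the average daily intensity. The distribution, parameter values and runoff rule are placeholders, not the components of the model described above.

    ```python
    import numpy as np

    def runoff_cdf(P_daily, wet_hours, f_c):
        """Infiltration-excess runoff using an exponential within-day intensity distribution.

        Intensity i ~ Exp(mean mu); runoff depth = wet_hours * E[max(i - f_c, 0)]
        = P_daily * exp(-f_c / mu), so the intensity cdf (not just the daily mean) drives the flux.
        """
        mu = P_daily / wet_hours                 # mean wet-period intensity (mm/h)
        return P_daily * np.exp(-f_c / mu)

    def runoff_daily_mean(P_daily, wet_hours, f_c):
        """Same rule applied to the average daily intensity (the coarse-time-step analogue)."""
        mu = P_daily / wet_hours
        return wet_hours * max(mu - f_c, 0.0)

    P, hours, f_c = 30.0, 6.0, 8.0               # 30 mm over 6 wet hours, 8 mm/h capacity
    print(runoff_cdf(P, hours, f_c))             # > 0: high-intensity bursts exceed capacity
    print(runoff_daily_mean(P, hours, f_c))      # 0: the mean intensity (5 mm/h) never does
    ```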

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.; Harrison, D. E. Jr.

    A variable time step integration algorithm for carrying out molecular dynamics simulations of atomic collision cascades is proposed which evaluates the interaction forces only once per time step. The algorithm is tested on some model problems which have exact solutions and is compared against other common methods. These comparisons show that the method has good stability and accuracy. Applications to Ar$^+$ bombardment of Cu and Si show good accuracy and improved speed compared with the original method (D. E. Harrison, W. L. Gay, and H. M. Effron, J. Math. Phys. 10, 1179 (1969)).
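
    The sketch below illustrates the general idea of a variable time step collision integrator: choose the largest step for which the particle moves no more than a prescribed distance, with one force evaluation per step. The displacement criterion and the semi-implicit Euler update used here are illustrative and are not Smith and Harrison's algorithm.

    ```python
    import numpy as np

    # Born-Mayer-style repulsive pair potential (illustrative constants, reduced units)
    A, b, m = 500.0, 0.3, 1.0
    force = lambda r: (A / b) * np.exp(-r / b)    # repulsive force along the line of centres

    def choose_dt(v, f, dr_max=0.02, dt_max=0.05):
        """Pick the largest step for which the particle moves at most dr_max,
        whether the motion is velocity- or acceleration-dominated."""
        dt_v = dr_max / max(abs(v), 1e-12)
        dt_f = np.sqrt(2.0 * m * dr_max / max(abs(f), 1e-12))
        return min(dt_max, dt_v, dt_f)

    # Head-on approach of a single atom toward a target fixed at the origin
    x, v, t = 3.0, -4.0, 0.0
    while t < 2.0:
        f  = force(x)                 # one force evaluation per step
        dt = choose_dt(v, f)          # small steps near closest approach, large ones far away
        v += dt * f / m               # semi-implicit (symplectic) Euler update
        x += dt * v
        t += dt
    print(x, v)                       # the atom is reflected back out of the repulsive core
    ```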

  20. Consistency of internal fluxes in a hydrological model running at multiple time steps

    NASA Astrophysics Data System (ADS)

    Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-04-01

    Improving hydrological models remains a difficult task and many ways can be explored, among which one can find the improvement of spatial representation, the search for more robust parametrization, the better formulation of some processes or the modification of model structures by trial-and-error procedure. Several past works indicate that model parameters and structure can be dependent on the modelling time step, and there is thus some rationale in investigating how a model behaves across various modelling time steps, to find solutions for improvements. Here we analyse the impact of data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, by using a large data set of 240 catchments. To this end, fine time step hydro-climatic information at sub-hourly resolution is used as input of a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to be run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance to the identification of the model components that should be improved. Our analysis indicates that the baseline model structure is to be modified at sub-daily time steps to warrant the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception model component, whose output flux showed the strongest sensitivity to modelling time step. The dependency of the optimal model complexity on time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7

  1. Stages in learning motor synergies: a view based on the equilibrium-point hypothesis.

    PubMed

    Latash, Mark L

    2010-10-01

    This review describes a novel view on stages in motor learning based on recent developments of the notion of synergies, the uncontrolled manifold hypothesis, and the equilibrium-point hypothesis (referent configuration), which allow these notions to be merged into a single scheme of motor control. The principle of abundance and the principle of minimal final action form the foundation for analyses of natural motor actions performed by redundant sets of elements. Two main stages of motor learning are introduced, corresponding to (1) discovery and strengthening of motor synergies stabilizing salient performance variable(s) and (2) their weakening when other aspects of motor performance are optimized. The first stage may be viewed as consisting of two steps, the elaboration of an adequate referent configuration trajectory and the elaboration of multi-joint (multi-muscle) synergies stabilizing the referent configuration trajectory. Both steps are expected to lead to more variance in the space of elemental variables that is compatible with a desired time profile of the salient performance variable ("good variability"). Adjusting control to other aspects of performance during the second stage (for example, esthetics, energy expenditure, time, fatigue, etc.) may lead to a drop in the "good variability". Experimental support for the suggested scheme is reviewed. Copyright © 2009 Elsevier B.V. All rights reserved.

  2. A comparison of artificial compressibility and fractional step methods for incompressible flow computations

    NASA Technical Reports Server (NTRS)

    Chan, Daniel C.; Darian, Armen; Sindir, Munir

    1992-01-01

    We have applied and compared the efficiency and accuracy of two commonly used numerical methods for the solution of the Navier-Stokes equations. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one has the luxury of taking a large temporal integration step at the expense of a higher memory requirement and a larger operation count per step. Meanwhile, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. The memory requirement and operation count per time step are low; however, the restriction on the size of the time-marching step is more severe. To explore the strengths and weaknesses of these two methods, we used them for the computation of a two-dimensional driven cavity flow at Reynolds numbers of 100 and 1000. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161, were used. The computations were considered converged after the L2-norm of the change in the dependent variables between two consecutive time steps had fallen below $10^{-5}$.
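
    The steady-state convergence test described above (march in time until the norm of the change between two consecutive steps falls below $10^{-5}$) can be sketched as follows; a Jacobi relaxation of a 41 x 41 Laplace problem is used purely as a stand-in for the flow operators.

    ```python
    import numpy as np

    def march_to_steady_state(u, apply_step, tol=1e-5, max_steps=50_000):
        """Advance in pseudo-time until the root-mean-square (normalized L2) change
        between two consecutive steps drops below tol."""
        for n in range(max_steps):
            u_new = apply_step(u)
            if np.sqrt(np.mean((u_new - u) ** 2)) < tol:
                return u_new, n
            u = u_new
        return u, max_steps

    # Stand-in operator: Jacobi relaxation of a 41x41 Laplace problem ("driven lid" boundary)
    u = np.zeros((41, 41)); u[-1, :] = 1.0
    def jacobi(u):
        v = u.copy()
        v[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2])
        return v

    u_steady, n_iter = march_to_steady_state(u, jacobi)
    print(n_iter)
    ```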

  3. Kinematic constraints associated with the acquisition of overarm throwing part I: step and trunk actions.

    PubMed

    Stodden, David F; Langendorfer, Stephen J; Fleisig, Glenn S; Andrews, James R

    2006-12-01

    The purposes of this study were to: (a) examine differences within specific kinematic variables and ball velocity associated with developmental component levels of step and trunk action (Roberton & Halverson, 1984), and (b) if the differences in kinematic variables were significantly associated with the differences in component levels, determine potential kinematic constraints associated with skilled throwing acquisition. Results indicated stride length (69.3%) and time from stride foot contact to ball release (39.7%) provided substantial contributions to ball velocity (p < .001). All trunk kinematic measures increased significantly with increasing component levels (p < .001). Results suggest that trunk linear and rotational velocities, degree of trunk tilt, time from stride foot contact to ball release, and ball velocity represented potential control parameters and, therefore, constraints on overarm throwing acquisition.

  4. Describing temporal variability of the mean Estonian precipitation series in climate time scale

    NASA Astrophysics Data System (ADS)

    Post, P.; Kärner, O.

    2009-04-01

    The applicability of random walk type models to represent the temporal variability of various atmospheric temperature series has been successfully demonstrated recently (e.g. Kärner, 2002). The main problem in temperature modeling is connected to the scale break in the generally self-similar air temperature anomaly series (Kärner, 2005). The break separates a short-range, strongly non-stationary region from a nearly stationary longer-range variability region. This indicates that several geophysical time series show short-range non-stationary behaviour and stationary behaviour in the longer range (Davis et al., 1996). In order to model such series, the choice of time step appears to be crucial. To characterize the long-range variability we can neglect the short-range non-stationary fluctuations, provided that we are able to model the long-range tendencies properly. The structure function (Monin and Yaglom, 1975) was used to determine an approximate segregation line between the short and the long scales in terms of modeling. The longer scale can be called the climate scale, because such models are applicable on scales over some decades. In order to get rid of the short-range fluctuations in daily series, the variability can be examined using a sufficiently long time step. In the present paper, we show that the same philosophy is useful for finding a model to represent the climate-scale temporal variability of the Estonian daily mean precipitation amount series over 45 years (1961-2005). Temporal variability of the obtained daily time series is examined by means of an autoregressive integrated moving average (ARIMA) family model of type (0,1,1). This model is applicable for simulating daily precipitation if an appropriate time step is selected that enables us to neglect the short-range non-stationary fluctuations. A considerably longer time step than one day (30 days) is used in the current paper to model the precipitation time series variability. Each ARIMA (0,1,1) model can be interpreted as consisting of a random walk in a noisy environment (Box and Jenkins, 1976). The fitted model appears to be weakly non-stationary, which gives us the possibility to use a stationary approximation if only the noise component of the sum of white noise and random walk is exploited. We get a convenient routine to generate a stationary precipitation climatology with reasonable accuracy, since the noise component variance is much larger than the dispersion of the random walk generator. This interpretation emphasizes the dominating role of a random component in the precipitation series. The result is understandable due to the small territory of Estonia, which is situated in the mid-latitude cyclone track. References: Box, G.E.P. and G. Jenkins 1976: Time Series Analysis, Forecasting and Control (revised edn.), Holden-Day, San Francisco, CA, 575 pp. Davis, A., Marshak, A., Wiscombe, W. and R. Cahalan 1996: Multifractal characterizations of intermittency in nonstationary geophysical signals and fields. In G. Trevino et al. (eds) Current Topics in Nonstationarity Analysis. World Scientific, Singapore, 97-158. Kärner, O. 2002: On nonstationarity and antipersistency in global temperature series. J. Geophys. Res. D107; doi:10.1029/2001JD002024. Kärner, O. 2005: Some examples on negative feedback in the Earth climate system. Centr. European J. Phys. 3; 190-208. Monin, A.S. and A.M. Yaglom 1975: Statistical Fluid Mechanics, Vol. 2: Mechanics of Turbulence, MIT Press, Boston, Mass., 886 pp.
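
    The modelling step described above, fitting an ARIMA(0,1,1) model to a precipitation series aggregated to a roughly 30-day time step, might look like the following with statsmodels; the series here is synthetic, not the Estonian record.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)

    # Stand-in for the precipitation record: daily values over 45 years (synthetic here)
    days = pd.date_range("1961-01-01", "2005-12-31", freq="D")
    daily = pd.Series(rng.gamma(shape=0.6, scale=3.0, size=len(days)), index=days)

    # Aggregate to a ~30-day time step to suppress short-range non-stationary fluctuations
    monthly = daily.resample("30D").mean()

    # ARIMA(0,1,1): the IMA(1,1) form, i.e. a random walk observed in white noise
    fit = ARIMA(monthly, order=(0, 1, 1)).fit()
    print(fit.params)   # an MA coefficient near -1 means the noise term dominates the walk
    ```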

  5. A time to search: finding the meaning of variable activation energy.

    PubMed

    Vyazovkin, Sergey

    2016-07-28

    This review deals with the phenomenon of variable activation energy frequently observed when studying the kinetics in the liquid or solid phase. This phenomenon commonly manifests itself through nonlinear Arrhenius plots or dependencies of the activation energy on conversion computed by isoconversional methods. Variable activation energy signifies a multi-step process and has a meaning of a collective parameter linked to the activation energies of individual steps. It is demonstrated that by using appropriate models of the processes, the link can be established in algebraic form. This allows one to analyze experimentally observed dependencies of the activation energy in a quantitative fashion and, as a result, to obtain activation energies of individual steps, to evaluate and predict other important parameters of the process, and generally to gain deeper kinetic and mechanistic insights. This review provides multiple examples of such analysis as applied to the processes of crosslinking polymerization, crystallization and melting of polymers, gelation, and solid-solid morphological and glass transitions. The use of appropriate computational techniques is discussed as well.

  6. Time-Variable Transit Time Distributions in the Hyporheic Zone of a Headwater Mountain Stream

    NASA Astrophysics Data System (ADS)

    Ward, Adam S.; Schmadel, Noah M.; Wondzell, Steven M.

    2018-03-01

    Exchange of water between streams and their hyporheic zones is known to be dynamic in response to hydrologic forcing, variable in space, and to exist in a framework with nested flow cells. The expected result of heterogeneous geomorphic setting, hydrologic forcing, and between-feature interaction is hyporheic transit times that are highly variable in both space and time. Transit time distributions (TTDs) are important as they reflect the potential for hyporheic processes to impact biogeochemical transformations and ecosystems. In this study we simulate time-variable transit time distributions based on dynamic vertical exchange in a headwater mountain stream with observed, heterogeneous step-pool morphology. Our simulations include hyporheic exchange over a 600 m river corridor reach driven by continuously observed, time-variable hydrologic conditions for more than 1 year. We found that spatial variability at an instance in time is typically larger than temporal variation for the reach. Furthermore, we found reach-scale TTDs were marginally variable under all but the most extreme hydrologic conditions, indicating that TTDs are highly transferable in time. Finally, we found that aggregation of annual variation in space and time into a "master TTD" reasonably represents most of the hydrologic dynamics simulated, suggesting that this aggregation approach may provide a relevant basis for scaling from features or short reaches to entire networks.

  7. Similarities and differences among half-marathon runners according to their performance level

    PubMed Central

    Morante, Juan Carlos; Gómez-Molina, Josué; García-López, Juan

    2018-01-01

    This study aimed to identify the similarities and differences among half-marathon runners in relation to their performance level. Forty-eight male runners were classified into 4 groups according to their performance level in a half-marathon (min): Group 1 (n = 11, < 70 min), Group 2 (n = 13, < 80 min), Group 3 (n = 13, < 90 min), Group 4 (n = 11, < 105 min). In two separate sessions, training-related, anthropometric, physiological, foot strike pattern and spatio-temporal variables were recorded. Significant differences (p<0.05) between groups (ES = 0.55–3.16) and correlations with performance (r = 0.34–0.92) were obtained in training-related (experience and running distance per week), anthropometric (mass, body mass index and sum of 6 skinfolds), physiological (VO2max, RCT and running economy), foot strike pattern and spatio-temporal variables (contact time, step rate and length). At standardized submaximal speeds (11, 13 and 15 km·h$^{-1}$), no significant differences between groups were observed in step rate and length, nor in contact time when foot strike pattern was taken into account. In conclusion, apart from training-related, anthropometric and physiological variables, foot strike pattern and step length were the only biomechanical variables sensitive to half-marathon performance, which are essential to achieve high running speeds. However, when foot strike pattern and running speeds were controlled (submaximal test), the spatio-temporal variables were similar. This indicates that foot strike pattern and running speed are responsible for spatio-temporal differences among runners of different performance levels. PMID:29364940

  8. Polestriding Intervention Improves Gait and Axial Symptoms in Mild to Moderate Parkinson Disease.

    PubMed

    Krishnamurthi, Narayanan; Shill, Holly; O'Donnell, Darolyn; Mahant, Padma; Samanta, Johan; Lieberman, Abraham; Abbas, James

    2017-04-01

    To evaluate the effects of a 12-week polestriding intervention on gait and disease severity in people with mild to moderate Parkinson disease (PD). A-B-A withdrawal study design. Outpatient movement disorder center and community facility. Individuals (N=17; 9 women [53%] and 8 men [47%]; mean age, 63.7±4.9y; range, 53-72y) with mild to moderate PD according to United Kingdom brain bank criteria, with a Hoehn & Yahr score ranging from 2.5 to 3.0, a stable medication regimen, and the ability to tolerate the "off" medication state. Twelve-week polestriding intervention with 12-week follow-up. Gait was evaluated using several quantitative temporal, spatial, and variability measures. In addition, disease severity was assessed using clinical scales such as the Unified Parkinson's Disease Rating Scale (UPDRS), Hoehn & Yahr scale, and Parkinson's Disease Questionnaire-39. Step and stride lengths, gait speed, and step-time variability improved significantly (P<.05) as a result of the 12-week polestriding intervention. Also, the UPDRS motor score, the UPDRS axial score, and the scores of UPDRS subscales on walking and balance improved significantly after the intervention. Because increased step-time variability and decreased step and stride lengths are associated with PD severity and an increased risk of falls in PD, the observed improvements suggest that regular practice of polestriding may reduce the risk of falls and improve mobility in people with PD. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  9. Cost-evaluation model for clinical trials in a hospital pharmacy service.

    PubMed

    Idoate, A; Ortega, A; Carrera, F J; Aldaz, A; Giráldez, J

    1995-09-22

    A cost-evaluation model was applied to clinical trial protocols to estimate their cost for the hospital pharmacy service. The steps taken in the drug management of clinical research were identified. Fixed costs (common to all clinical trials) and variable costs (specific to each clinical trial) were determined for each step. The number of patients, the number of operations, the planned services (receptions, storage, drug dispensing), and the timing and difficulty of the study (randomization) were included in the variable costs. The economic assessment of these items was based on the costs of the materials and means used, the cost of staff time and, finally, the cost of drug storage during the clinical trial. This model was applied to 24 clinical trials carried out in the University Clinic of Navarra. 83% of all pharmacy costs of a clinical trial were variable. Drug dispensing, stock management and drug returns accounted for 94% of the time expended. The approximate cost of the pharmacy providing investigational services was $1,766 per trial or $174 per patient. Drug storage costs were not an important source of expenditure among the variable costs (7.4%). The best way to determine the cost of a trial is to calculate the number of operations.
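
    The fixed-plus-variable structure of the model can be sketched as below; the unit costs, operation counts and parameter names are placeholders for illustration, not the values from the Navarra study.

    ```python
    def trial_pharmacy_cost(n_patients, n_dispensings, storage_months,
                            fixed=300.0, cost_per_dispensing=9.0,
                            cost_per_patient_setup=25.0, storage_per_month=12.0):
        """Fixed costs are common to every trial; variable costs scale with the
        number of patients, dispensing operations and storage time."""
        variable = (n_patients * cost_per_patient_setup
                    + n_dispensings * cost_per_dispensing
                    + storage_months * storage_per_month)
        return fixed + variable

    total = trial_pharmacy_cost(n_patients=10, n_dispensings=120, storage_months=18)
    print(total, total / 10)          # cost per trial and per patient
    ```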

  10. An algorithm for fast elastic wave simulation using a vectorized finite difference operator

    NASA Astrophysics Data System (ADS)

    Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna

    2018-07-01

    Modern geophysical imaging techniques exploit the full wavefield information, which can be simulated numerically. These numerical simulations are computationally expensive due to several factors, such as a large number of time steps and nodes, the big size of the derivative stencil and a huge model size. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we have introduced a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. This operator allows updating all nodes of a variable defined on the staggered grid in a manner similar to the collocated grid scheme, thereby reducing the computational run-time considerably. Here we demonstrate an application of this operator to simulate seismic wave propagation in elastic media (Marmousi model) by discretizing the equations on a staggered grid. We have compared the performance of this operator in three programming languages, which reveals that it can increase the execution speed by a factor of at least 2-3 times for FORTRAN and MATLAB, and nearly 100 times for Python. We have further carried out various tests in MATLAB to analyze the effect of model size and the number of time steps on total simulation run-time. We find that there is an additional, though small, computational overhead for each step, and it depends on the total number of time steps used in the simulation. A MATLAB code package, 'FDwave', for the proposed simulation scheme is available upon request.
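
    The flavour of such an operator can be shown with a one-dimensional staggered first derivative: the loop version updates one node at a time, while the vectorized version applies the same stencil to the whole array in a single slicing expression. This is only a schematic analogue of the paper's operator, written in Python rather than the three languages benchmarked.

    ```python
    import numpy as np

    def ddx_loop(u, dx):
        """First derivative on a staggered grid, node by node (scalar loop)."""
        d = np.empty(u.size - 1)
        for i in range(u.size - 1):
            d[i] = (u[i + 1] - u[i]) / dx
        return d

    def ddx_vectorized(u, dx):
        """Same operator applied to the whole array at once: the derivative lands on the
        half-shifted (staggered) points, so every node is updated in one call."""
        return (u[1:] - u[:-1]) / dx

    x  = np.linspace(0.0, 2.0 * np.pi, 2001)
    u  = np.sin(x)
    dx = x[1] - x[0]
    assert np.allclose(ddx_loop(u, dx), ddx_vectorized(u, dx))   # identical results, far faster
    ```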

  11. Combined non-parametric and parametric approach for identification of time-variant systems

    NASA Astrophysics Data System (ADS)

    Dziedziech, Kajetan; Czop, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz

    2018-03-01

    Identification of systems, structures and machines with variable physical parameters is a challenging task, especially when time-varying vibration modes are involved. The paper proposes a new combined, two-step - i.e. non-parametric and parametric - modelling approach in order to determine time-varying vibration modes based on input-output measurements. Single-degree-of-freedom (SDOF) vibration modes are extracted from the multi-degree-of-freedom (MDOF) non-parametric system representation in the first step with the use of time-frequency wavelet-based filters. The second step involves a time-varying parametric representation of the extracted modes with the use of recursive linear autoregressive-moving-average with exogenous inputs (ARMAX) models. The combined approach is demonstrated using system identification analysis based on an experimental mass-varying MDOF frame-like structure subjected to random excitation. The results show that the proposed combined method correctly captures the dynamics of the analysed structure, using minimal a priori information about the model.
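
    A compressed sketch of the two-step idea is given below: a band-pass filter isolates one SDOF mode from the measured response (the non-parametric step), and an AR(2) model fitted to the filtered signal yields its natural frequency and damping ratio (the parametric step). The wavelet-based filters and recursive, time-varying ARMAX estimation of the paper are replaced here by a Butterworth filter and a static least-squares fit, and the signal is synthetic.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs, T = 500.0, 10.0
    t = np.arange(0.0, T, 1.0 / fs)

    # Synthetic two-mode free response (stand-in for the measured frame structure)
    rng = np.random.default_rng(1)
    y = (np.exp(-0.4 * t) * np.sin(2 * np.pi * 3.0 * t)
         + 0.5 * np.exp(-0.8 * t) * np.sin(2 * np.pi * 11.0 * t)
         + 0.02 * rng.standard_normal(t.size))

    # Step 1 (non-parametric): band-pass filter to extract the ~3 Hz SDOF mode
    b, a = butter(4, [2.0, 4.0], btype="band", fs=fs)
    y1 = filtfilt(b, a, y)

    # Step 2 (parametric): least-squares AR(2) fit of the extracted mode
    Y = np.column_stack([y1[1:-1], y1[:-2]])
    a1, a2 = np.linalg.lstsq(Y, y1[2:], rcond=None)[0]
    pole = np.roots([1.0, -a1, -a2])[0]            # discrete-time pole of the mode
    s = np.log(pole) * fs                          # map to continuous time
    print(abs(s) / (2 * np.pi), -s.real / abs(s))  # ~3 Hz natural frequency, damping ratio
    ```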

  12. Global Sensitivity Analysis as Good Modelling Practices tool for the identification of the most influential process parameters of the primary drying step during freeze-drying.

    PubMed

    Van Bockstal, Pieter-Jan; Mortier, Séverine Thérèse F C; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas

    2018-02-01

    Pharmaceutical batch freeze-drying is commonly used to improve the stability of biological therapeutics. The primary drying step is regulated by the dynamic settings of the adaptable process variables, shelf temperature $T_s$ and chamber pressure $P_c$. Mechanistic modelling of the primary drying step leads to the optimal dynamic combination of these adaptable process variables as a function of time. According to Good Modelling Practices, a Global Sensitivity Analysis (GSA) is essential for appropriate model building. In this study, both a regression-based and a variance-based GSA were conducted on a validated mechanistic primary drying model to estimate the impact of several model input parameters on two output variables, the product temperature at the sublimation front $T_i$ and the sublimation rate $\dot{m}_{sub}$. $T_s$ was identified as the most influential parameter on both $T_i$ and $\dot{m}_{sub}$, followed by $P_c$ and the dried product mass transfer resistance $\alpha_{Rp}$ for $T_i$ and $\dot{m}_{sub}$, respectively. The GSA findings were experimentally validated for $\dot{m}_{sub}$ via a Design of Experiments (DoE) approach. The results indicated that GSA is a very useful tool for evaluating the impact of different process variables on the model outcome, leading to essential process knowledge without the need for time-consuming experiments (e.g., DoE). Copyright © 2017 Elsevier B.V. All rights reserved.
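
    As a minimal illustration of a regression-based GSA, the sketch below computes standardized regression coefficients (SRCs) for a toy surrogate of the sublimation rate; the surrogate function, parameter ranges and factor list are invented for illustration, and the variance-based (Sobol) analysis of the study is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 5000

    # Toy surrogate for the sublimation rate as a function of three inputs
    # (shelf temperature Ts, chamber pressure Pc, product resistance Rp) -- illustrative only.
    Ts = rng.uniform(-30.0, 10.0, N)      # degC
    Pc = rng.uniform(5.0, 30.0, N)        # Pa
    Rp = rng.uniform(1.0, 8.0, N)         # arbitrary resistance units
    m_sub = 0.08 * (Ts + 40.0) / (Rp + 0.5) - 0.01 * Pc + 0.05 * rng.standard_normal(N)

    # Standardized regression coefficients: regress the standardized output on the
    # standardized inputs; SRC^2 approximates each factor's share of the output variance.
    X = np.column_stack([Ts, Pc, Rp])
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (m_sub - m_sub.mean()) / m_sub.std()
    src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    for name, s in zip(["Ts", "Pc", "Rp"], src):
        print(f"{name}: SRC = {s:+.2f}, SRC^2 = {s*s:.2f}")
    ```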

  13. Towards real-time verification of CO2 emissions

    NASA Astrophysics Data System (ADS)

    Peters, Glen P.; Le Quéré, Corinne; Andrew, Robbie M.; Canadell, Josep G.; Friedlingstein, Pierre; Ilyina, Tatiana; Jackson, Robert B.; Joos, Fortunat; Korsbakken, Jan Ivar; McKinley, Galen A.; Sitch, Stephen; Tans, Pieter

    2017-12-01

    The Paris Agreement has increased the incentive to verify reported anthropogenic carbon dioxide emissions with independent Earth system observations. Reliable verification requires a step change in our understanding of carbon cycle variability.

  14. An adaptive moving finite volume scheme for modeling flood inundation over dry and complex topography

    NASA Astrophysics Data System (ADS)

    Zhou, Feng; Chen, Guoxian; Huang, Yuefei; Yang, Jerry Zhijian; Feng, Hui

    2013-04-01

    A new geometrical conservative interpolation on unstructured meshes is developed for preserving still water equilibrium and the positivity of water depth at each iteration of mesh movement, leading to an adaptive moving finite volume (AMFV) scheme for modeling flood inundation over dry and complex topography. Unlike traditional schemes involving position-fixed meshes, the iteration process of the AMFV scheme adaptively moves a smaller number of meshes in response to flow variables calculated in prior solutions and then simulates their posterior values on the new meshes. At each time step of the simulation, the AMFV scheme consists of three parts: an adaptive mesh movement to shift the vertex positions, a geometrical conservative interpolation to remap the flow variables by summing the total mass over the old meshes to avoid the generation of spurious waves, and a partial differential equation (PDE) discretization to update the flow variables for the new time step. Five different test cases are presented to verify the computational advantages of the proposed scheme over nonadaptive methods. The results reveal three attractive features: (i) the AMFV scheme could preserve still water equilibrium and positivity of water depth within both the mesh movement and PDE discretization steps; (ii) it improved the shock-capturing capability for handling topographic source terms and wet-dry interfaces by moving triangular meshes to approximate the spatial distribution of time-variant flood processes; (iii) it was able to solve the shallow water equations with relatively higher accuracy and spatial resolution at a lower computational cost.
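
    A one-dimensional analogue of the geometrical conservative interpolation is sketched below: after the mesh moves, the new cell-average depths are computed from the exact geometric overlaps with the old cells, so the total water volume is preserved and non-negative depths stay non-negative. This only illustrates the remapping idea, not the unstructured-mesh scheme of the paper.

    ```python
    import numpy as np

    def conservative_remap(x_old, h_old, x_new):
        """Remap cell-average water depths from old to new 1-D cells by exact geometric
        overlap, so that sum(h * dx) (total volume) is unchanged."""
        h_new = np.zeros(len(x_new) - 1)
        for j in range(len(x_new) - 1):
            lo, hi = x_new[j], x_new[j + 1]
            left = np.maximum(x_old[:-1], lo)        # overlap of new cell j with each old cell
            right = np.minimum(x_old[1:], hi)
            overlap = np.clip(right - left, 0.0, None)
            h_new[j] = np.dot(overlap, h_old) / (hi - lo)
        return h_new

    x_old = np.linspace(0.0, 10.0, 11)               # uniform old mesh
    h_old = np.where(x_old[:-1] < 5.0, 2.0, 0.0)     # wet/dry front: water on the left half
    x_new = np.sort(np.concatenate([[0.0, 10.0],
                                    np.random.default_rng(3).uniform(0.0, 10.0, 9)]))
    h_new = conservative_remap(x_old, h_old, x_new)

    vol_old = np.sum(h_old * np.diff(x_old))
    vol_new = np.sum(h_new * np.diff(x_new))
    print(vol_old, vol_new)                          # identical: volume is conserved by the remap
    ```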

  15. Trainer variability during step training after spinal cord injury: Implications for robotic gait-training device design.

    PubMed

    Galvez, Jose A; Budovitch, Amy; Harkema, Susan J; Reinkensmeyer, David J

    2011-01-01

    Robotic devices are being developed to automate repetitive aspects of walking retraining after neurological injuries, in part because they might improve the consistency and quality of training. However, it is unclear how inconsistent manual training actually is or whether stepping quality depends strongly on the trainers' manual skill. The objective of this study was to quantify trainer variability of manual skill during step training using body-weight support on a treadmill and assess factors of trainer skill. We attached a sensorized orthosis to one leg of each patient with spinal cord injury and measured the shank kinematics and forces exerted by different trainers during six training sessions. An expert trainer rated the trainers' skill level based on videotape recordings. Between-trainer force variability was substantial, about two times greater than within-trainer variability. Trainer skill rating correlated strongly with two gait features: better knee extension during stance and fewer episodes of toe dragging. Better knee extension correlated directly with larger knee horizontal assistance force, but better toe clearance did not correlate with larger ankle push-up force; rather, it correlated with better knee and hip extension. These results are useful to inform robotic gait-training design.

  16. Willingness-to-pay for steelhead trout fishing: Implications of two-step consumer decisions with short-run endowments

    NASA Astrophysics Data System (ADS)

    McKean, John R.; Johnson, Donn; Taylor, R. Garth

    2010-09-01

    Choice of the appropriate model of economic behavior is important for the measurement of nonmarket demand and benefits. Several travel cost demand model specifications are currently in use. Uncertainty exists over the efficacy of these approaches, and more theoretical and empirical study is warranted. Thus travel cost models with differing assumptions about labor markets and consumer behavior were applied to estimate the demand for steelhead trout sportfishing on an unimpounded reach of the Snake River near Lewiston, Idaho. We introduce a modified two-step decision model that incorporates endogenous time value using a latent index variable approach. The focus is on the importance of distinguishing between short-run and long-run consumer decision variables in a consistent manner. A modified Barnett two-step decision model was found superior to other models tested.

  17. Variability of gait, bilateral coordination, and asymmetry in women with fibromyalgia.

    PubMed

    Heredia-Jimenez, J; Orantes-Gonzalez, E; Soto-Hermoso, V M

    2016-03-01

    To analyze how fibromyalgia affected the variability, asymmetry, and bilateral coordination of gait walking at comfortable and fast speeds. 65 fibromyalgia (FM) patients and 50 healthy women were analyzed. Gait analysis was performed using an instrumented walkway (GAITRite system). Average walking speed, coefficient of variation (CV) of stride length, swing time, and step width data were obtained and bilateral coordination and gait asymmetry were analyzed. FM patients presented significantly lower speeds than the healthy group. FM patients obtained significantly higher values of CV_StrideLength (p=0.04; p<0.001), CV_SwingTime (p<0.001; p<0.001), CV_StepWidth (p=0.004; p<0.001), phase coordination index (p=0.01; p=0.03), and p_CV (p<0.001; p=0.001) than the control group, walking at comfortable or fast speeds. Gait asymmetry only showed significant differences in the fast condition. FM patients walked more slowly and presented a greater variability of gait and worse bilateral coordination than healthy subjects. Gait asymmetry only showed differences in the fast condition. The variability and the bilateral coordination were particularly affected by FM in women. Therefore, variability and bilateral coordination of gait could be analyzed to complement the gait evaluation of FM patients. Copyright © 2016 Elsevier B.V. All rights reserved.
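
    The variability measures reported here (CV of stride length, swing time and step width) are simply per-subject standard-deviation-to-mean ratios; a small sketch with made-up numbers follows.

    ```python
    import numpy as np

    def coefficient_of_variation(x):
        """CV (%) = 100 * standard deviation / mean, computed per subject and per variable."""
        x = np.asarray(x, dtype=float)
        return 100.0 * x.std(ddof=1) / x.mean()

    # Made-up stride lengths (cm) and swing times (s) for one walk along the walkway
    stride_length = [128.4, 131.0, 127.2, 130.1, 129.5, 126.8, 132.2]
    swing_time    = [0.41, 0.43, 0.40, 0.42, 0.44, 0.41, 0.42]
    print(coefficient_of_variation(stride_length))   # CV_StrideLength
    print(coefficient_of_variation(swing_time))      # CV_SwingTime
    ```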

  18. Predictive Variables of Half-Marathon Performance for Male Runners.

    PubMed

    Gómez-Molina, Josué; Ogueta-Alday, Ana; Camara, Jesus; Stickley, Christoper; Rodríguez-Marroyo, José A; García-López, Juan

    2017-06-01

    The aims of this study were to establish and validate various predictive equations of half-marathon performance. Seventy-eight half-marathon male runners participated in two different phases. Phase 1 (n = 48) was used to establish the equations for estimating half-marathon performance, and Phase 2 (n = 30) to validate these equations. Apart from half-marathon performance, training-related and anthropometric variables were recorded, and an incremental test on a treadmill was performed, in which physiological (VO2max, speed at the anaerobic threshold, peak speed) and biomechanical variables (contact and flight times, step length and step rate) were registered. In Phase 1, half-marathon performance could be predicted to 90.3% by variables related to training and anthropometry (Equation 1), 94.9% by physiological variables (Equation 2), 93.7% by biomechanical parameters (Equation 3) and 96.2% by a general equation (Equation 4). Using these equations, in Phase 2 the predicted time was significantly correlated with performance (r = 0.78, 0.92, 0.90 and 0.95, respectively). The proposed equations and their validation showed a high prediction of half-marathon performance in long distance male runners, considered from different approaches. Furthermore, they improved the prediction performance of previous studies, which makes them a highly practical application in the field of training and performance.

  19. A comparison of the stochastic and machine learning approaches in hydrologic time series forecasting

    NASA Astrophysics Data System (ADS)

    Kim, T.; Joo, K.; Seo, J.; Heo, J. H.

    2016-12-01

    Hydrologic time series forecasting is an essential task in water resources management, and it becomes more difficult due to the complexity of the runoff process. Traditional stochastic models such as the ARIMA family have been used as a standard approach in time series modeling and forecasting of hydrological variables. Due to the nonlinearity in hydrologic time series data, machine learning approaches have been studied for their ability to discover relevant features in nonlinear relations among variables. This study aims to compare the predictability of the traditional stochastic model and the machine learning approach. A seasonal ARIMA model was used as the traditional time series model, and a Random Forest model, which combines decision trees through an ensemble method with a multiple-predictor approach, was applied as the machine learning approach. In the application, monthly inflow data from 1986 to 2015 for Chungju dam in South Korea were used for modeling and forecasting. To evaluate the performance of the models, one-step-ahead and multi-step-ahead forecasting were applied. The root mean squared error and mean absolute error of the two models were compared.
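
    A simplified version of such a comparison is sketched below: a seasonal ARIMA model and a Random Forest with lagged inflows as predictors both produce a 24-month multi-step-ahead forecast of a synthetic monthly inflow series, and their RMSE and MAE are compared. The data, model orders and lag choices are placeholders, not those of the Chungju dam study.

    ```python
    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error, mean_squared_error

    rng = np.random.default_rng(7)

    # Synthetic monthly inflow with an annual cycle (stand-in for the dam record)
    n = 30 * 12
    t = np.arange(n)
    y = 100 + 60 * np.sin(2 * np.pi * t / 12) + 10 * rng.standard_normal(n)
    train, test = y[:-24], y[-24:]

    # Stochastic approach: seasonal ARIMA, 24-month multi-step-ahead forecast
    sarima = SARIMAX(train, order=(1, 0, 0), seasonal_order=(1, 0, 0, 12),
                     trend="c").fit(disp=False)
    fc_sarima = sarima.forecast(steps=24)

    # Machine-learning approach: Random Forest on 12 lagged inflows, recursive forecast
    lags = 12
    X = np.column_stack([train[i:len(train) - lags + i] for i in range(lags)])
    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, train[lags:])
    history = list(train[-lags:])
    fc_rf = []
    for _ in range(24):
        yhat = rf.predict(np.array(history[-lags:]).reshape(1, -1))[0]
        fc_rf.append(yhat)
        history.append(yhat)

    for name, fc in [("SARIMA", fc_sarima), ("RandomForest", np.array(fc_rf))]:
        print(name, np.sqrt(mean_squared_error(test, fc)), mean_absolute_error(test, fc))
    ```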

  20. Variability in the Length and Frequency of Steps of Sighted and Visually Impaired Walkers

    ERIC Educational Resources Information Center

    Mason, Sarah J.; Legge, Gordon E.; Kallie, Christopher S.

    2005-01-01

    The variability of the length and frequency of steps was measured in sighted and visually impaired walkers at three different paces. The variability was low, especially at the preferred pace, and similar for both groups. A model incorporating step counts and step frequency provides good estimates of the distance traveled. Applications to…

  1. Student failures on first-year medical basic science courses and the USMLE step 1: a retrospective study over a 20-year period.

    PubMed

    Burns, E Robert; Garrett, Judy

    2015-01-01

    Correlates of achievement in the basic science years in medical school and on the Step 1 of the United States Medical Licensing Examination® (USMLE®), (Step 1) in relation to preadmission variables have been the subject of considerable study. Preadmissions variables such as the undergraduate grade point average (uGPA) and Medical College Admission Test® (MCAT®) scores, solely or in combination, have previously been found to be predictors of achievement in the basic science years and/or on the Step 1. The purposes of this retrospective study were to: (1) determine if our statistical analysis confirmed previously published relationships between preadmission variables (MCAT, uGPA, and applicant pool size), and (2) study correlates of the number of failures in five M1 courses with those preadmission variables and failures on Step 1. Statistical analysis confirmed previously published relationships between all preadmission variables. Only one course, Microscopic Anatomy, demonstrated significant correlations with all variables studied including the Step 1 failures. Physiology correlated with three of the four variables studied, but not with the Step 1 failures. Analyses such as these provide a tool by which administrators will be able to identify what courses are or are not responding in appropriate ways to changes in the preadmissions variables that signal student performance on the Step 1. © 2014 American Association of Anatomists.

  2. Reduced high-frequency motor neuron firing, EMG fractionation, and gait variability in awake walking ALS mice

    PubMed Central

    Hadzipasic, Muhamed; Ni, Weiming; Nagy, Maria; Steenrod, Natalie; McGinley, Matthew J.; Kaushal, Adi; Thomas, Eleanor; McCormick, David A.

    2016-01-01

    Amyotrophic lateral sclerosis (ALS) is a lethal neurodegenerative disease prominently featuring motor neuron (MN) loss and paralysis. A recent study using whole-cell patch clamp recording of MNs in acute spinal cord slices from symptomatic adult ALS mice showed that the fastest firing MNs are preferentially lost. To measure the in vivo effects of such loss, awake symptomatic-stage ALS mice performing self-initiated walking on a wheel were studied. Both single-unit extracellular recordings within spinal cord MN pools for lower leg flexor and extensor muscles and the electromyograms (EMGs) of the corresponding muscles were recorded. In the ALS mice, we observed absent or truncated high-frequency firing of MNs at the appropriate time in the step cycle and step-to-step variability of the EMG, as well as flexor-extensor coactivation. In turn, kinematic analysis of walking showed step-to-step variability of gait. At the MN level, the higher frequencies absent from recordings from mutant mice corresponded with the upper range of frequencies observed for fast-firing MNs in earlier slice measurements. These results suggest that, in SOD1-linked ALS mice, symptoms are a product of abnormal MN firing due at least in part to loss of neurons that fire at high frequency, associated with altered EMG patterns and hindlimb kinematics during gait. PMID:27821773

  3. A stabilized Runge–Kutta–Legendre method for explicit super-time-stepping of parabolic and mixed equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-15

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is $s^2$ times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful in parabolic problems with variable diffusion coefficients. This includes variable coefficient parabolic equations that might give rise to skew symmetric terms. The RKC1 and RKC2 schemes do not share this convex monotonicity preserving property. One-dimensional and two-dimensional von Neumann stability analyses of RKC1, RKC2, RKL1 and RKL2 are also presented, showing that the latter two have some advantages. The paper includes several details to facilitate implementation. A detailed accuracy analysis is presented to show that the methods reach their design accuracies. A stringent set of test problems is also presented. To demonstrate the robustness and versatility of our methods, we show their successful operation on problems involving linear and non-linear heat conduction and viscosity, resistive magnetohydrodynamics, ambipolar diffusion dominated magnetohydrodynamics, level set methods and flux limited radiation diffusion. In a prior paper (Meyer, Balsara and Aslam 2012 [36]) we have also presented an extensive test-suite showing that the RKL2 method works robustly in the presence of shocks in an anisotropically conducting, magnetized plasma.

  4. A stabilized Runge-Kutta-Legendre method for explicit super-time-stepping of parabolic and mixed equations

    NASA Astrophysics Data System (ADS)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-01

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge-Kutta-like time-steps to advance the parabolic terms by a time-step that is s2 times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge-Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems - a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful in parabolic problems with variable diffusion coefficients. This includes variable coefficient parabolic equations that might give rise to skew symmetric terms. The RKC1 and RKC2 schemes do not share this convex monotonicity preserving property. One-dimensional and two-dimensional von Neumann stability analyses of RKC1, RKC2, RKL1 and RKL2 are also presented, showing that the latter two have some advantages. The paper includes several details to facilitate implementation. A detailed accuracy analysis is presented to show that the methods reach their design accuracies. A stringent set of test problems is also presented. To demonstrate the robustness and versatility of our methods, we show their successful operation on problems involving linear and non-linear heat conduction and viscosity, resistive magnetohydrodynamics, ambipolar diffusion dominated magnetohydrodynamics, level set methods and flux limited radiation diffusion. In a prior paper (Meyer, Balsara and Aslam 2012 [36]) we have also presented an extensive test-suite showing that the RKL2 method works robustly in the presence of shocks in an anisotropically conducting, magnetized plasma.
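
    A sketch of a first-order Legendre-based superstep for 1-D diffusion is given below. It follows the three-term recursion structure described above, with the standard RKL1-type coefficients $\mu_j = (2j-1)/j$ and $\nu_j = -(j-1)/j$ scaled by $2/(s^2+s)$; these coefficient choices are a reading of the published construction and should be verified against the paper before any serious use.

    ```python
    import numpy as np

    def rkl1_superstep(u, dt, s, L):
        """One first-order RKL-type super-time-step of size dt using s internal stages.

        Three-term recursion: Y_j = mu_j*Y_{j-1} + nu_j*Y_{j-2} + mut_j*dt*L(Y_{j-1}),
        with mu_j = (2j-1)/j, nu_j = -(j-1)/j and mut_j = mu_j * 2/(s^2 + s);
        dt may then be up to (s^2 + s)/2 times the explicit (forward-Euler) limit.
        """
        w = 2.0 / (s * s + s)
        Yjm2 = u
        Yjm1 = u + w * dt * L(u)                   # j = 1 stage
        for j in range(2, s + 1):
            mu, nu = (2 * j - 1) / j, -(j - 1) / j
            Yj = mu * Yjm1 + nu * Yjm2 + mu * w * dt * L(Yjm1)
            Yjm2, Yjm1 = Yjm1, Yj
        return Yjm1

    # 1-D heat equation u_t = kappa * u_xx on a periodic grid
    nx, kappa = 200, 1.0
    dx = 1.0 / nx
    L = lambda u: kappa * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

    dt_expl  = 0.5 * dx**2 / kappa                 # forward-Euler stability limit
    s = 10
    dt_super = 0.9 * dt_expl * (s * s + s) / 2.0   # ~50x the explicit step (small safety factor)

    u = np.sin(2.0 * np.pi * np.linspace(0.0, 1.0, nx, endpoint=False))
    for _ in range(20):
        u = rkl1_superstep(u, dt_super, s, L)
    print(np.max(np.abs(u)))                       # smooth decay, no instability
    ```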

  5. Mobility assessment: Sensitivity and specificity of measurement sets in older adults

    PubMed Central

    Panzer, Victoria P.; Wakefield, Dorothy B.; Hall, Charles B.; Wolfson, Leslie I.

    2011-01-01

    Objective To identify quantitative measurement variables that characterize mobility in older adults, meet reliability and validity criteria, distinguish fall-risk and predict future falls. Design Observational study with 1-year weekly falls follow-up Setting Mobility laboratory Participants Community-dwelling volunteers (n=74; 65–94 years old) categorized at entry as 27 ‘Non-fallers’ or 47 ‘Fallers’ by Medicare criteria (1 injury fall or >1 non-injury falls in the previous year). Interventions None Outcome Measures Test-retest and within-subject reliability, criterion and concurrent validity; predictive ability indicated by observed sensitivity and specificity to entry fall-risk group (Falls-status), Tinetti Performance Oriented Mobility Assessment (POMA), Computerized Dynamic Posturography Sensory Organization Test (SOT) and subsequent falls reported weekly. Results Measurement variables were selected that met reliability (ICC > 0.6) and/or discrimination (p<.01) criteria (Clinical variables- Turn- steps, time, Gait- velocity, Step-in-tub-time, and Downstairs- time; Force plate variables- Quiet standing Romberg ratio sway-area, Maximal lean- anterior-posterior excursion, Sit-to-stand medial-lateral excursion and sway-area). Sets were created (3 clinical, 2 force plate) utilizing combinations of variables appropriate for older adults with different functional activity levels and composite scores were calculated. Scores identified entry Falls-status and concurred with POMA and SOT. The Full clinical set (5 measurement variables) produced sensitivity/specificity (.80/.74) to Falls-status. Composite scores were sensitive and specific in predicting subsequent injury falls and multiple falls compared to Falls-status, POMA or SOT. Conclusions Sets of quantitative measurement variables obtained with this mobility battery provided sensitive prediction of future injury falls and screening for multiple subsequent falls using tasks that should be appropriate to diverse participants. PMID:21621667

  6. Influence of kinematic parameters on pole vault results in top juniors.

    PubMed

    Gudelj, Ines; Zagorac, Nebojsa; Babić, Vesna

    2013-05-01

    The aim of this research was to analyse the kinematic parameters and to ascertain the influence of those parameters on the pole vault result. The entity sample of the research consisted of successful vaults of 30 athletes, whose attempts were recorded at the European Junior Athletics Championships. The examinees performed the vaults as part of the qualification competition for the final and the final of the competition itself. The examinees were 17-19 years old, and the range of their top results was from 4.90 to 5.30 m. The results of the regression analysis showed a significant influence of the predictor variables on the effective pole vault height. The centre of body mass height was mostly influenced by the following variables: TS - takeoff velocity, LSS - last step velocity, PSS - penultimate step velocity, TAPR - trunk angle at the moment of the pole release. The following variables had a lesser, but still significant, influence: CBMDM - centre of body mass distance at the pole release moment, and MCMVV - time of pole straightening. Generally, the information gained by this research indicates the significant influence of the kinematic parameters on the pole vault result. Therefore, the conclusion is that the result efficacy in the pole vault is primarily determined by the variables defined by the motor capabilities, but also by the indicators determining the vault activity realization technique. The variables that define the body position during the pole release (trunk angle and centre of mass distance) have the most significant influence on the vault performance technique, while the motor capabilities influence the velocity of the last two run-up steps, the takeoff speed and the time of pole straightening.

  7. Study of Einstein-Podolsky-Rosen state for space-time variables in a two photon interference experiment

    NASA Technical Reports Server (NTRS)

    Shih, Y. H.; Sergienko, A. V.; Rubin, M. H.

    1993-01-01

    A pair of correlated photons generated from parametric down conversion was sent to two independent Michelson interferometers. Second order interference was studied by means of a coincidence measurement between the outputs of two interferometers. The reported experiment and analysis studied this second order interference phenomena from the point of view of Einstein-Podolsky-Rosen paradox. The experiment was done in two steps. The first step of the experiment used 50 psec and 3 nsec coincidence time windows simultaneously. The 50 psec window was able to distinguish a 1.5 cm optical path difference in the interferometers. The interference visibility was measured to be 38 percent and 21 percent for the 50 psec time window and 22 percent and 7 percent for the 3 nsec time window, when the optical path difference of the interferometers were 2 cm and 4 cm, respectively. By comparing the visibilities between these two windows, the experiment showed the non-classical effect which resulted from an E.P.R. state. The second step of the experiment used a 20 psec coincidence time window, which was able to distinguish a 6 mm optical path difference in the interferometers. The interference visibilities were measured to be 59 percent for an optical path difference of 7 mm. This is the first observation of visibility greater than 50 percent for a two interferometer E.P.R. experiment which demonstrates nonclassical correlation of space-time variables.

  8. Spike-frequency adaptation in the inferior colliculus.

    PubMed

    Ingham, Neil J; McAlpine, David

    2004-02-01

    We investigated spike-frequency adaptation of neurons sensitive to interaural phase disparities (IPDs) in the inferior colliculus (IC) of urethane-anesthetized guinea pigs using a stimulus paradigm designed to exclude the influence of adaptation below the level of binaural integration. The IPD-step stimulus consists of a binaural 3,000-ms tone, in which the first 1,000 ms is held at a neuron's least favorable ("worst") IPD, adapting out monaural components, before being stepped rapidly to a neuron's most favorable ("best") IPD for 300 ms. After some variable interval (1-1,000 ms), IPD is again stepped to the best IPD for 300 ms, before being returned to a neuron's worst IPD for the remainder of the stimulus. Exponential decay functions fitted to the response to best-IPD steps revealed an average adaptation time constant of 52.9 +/- 26.4 ms. Recovery from adaptation to best IPD steps showed an average time constant of 225.5 +/- 210.2 ms. Recovery time constants were not correlated with adaptation time constants. During the recovery period, adaptation to a 2nd best-IPD step followed similar kinetics to adaptation during the 1st best-IPD step. The mean adaptation time constant at stimulus onset (at worst IPD) was 34.8 +/- 19.7 ms, similar to the 38.4 +/- 22.1 ms recorded to contralateral stimulation alone. Individual time constants after stimulus onset were correlated with each other but not with time constants during the best-IPD step. We conclude that such binaurally derived measures of adaptation reflect processes that occur above the level of exclusively monaural pathways, and subsequent to the site of primary binaural interaction.
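
    The adaptation and recovery time constants reported above come from fitting exponential decay functions to spike-rate data. A generic version of such a fit, using synthetic rates rather than the study's recordings, can be done with SciPy's curve_fit:

        import numpy as np
        from scipy.optimize import curve_fit

        # Generic single-exponential decay fit of the kind used to extract adaptation
        # time constants from peristimulus spike-rate data (synthetic data here).
        def exp_decay(t, r0, r_ss, tau):
            """Rate decaying from r0 toward steady state r_ss with time constant tau (ms)."""
            return r_ss + (r0 - r_ss) * np.exp(-t / tau)

        t = np.arange(0, 300, 5.0)                                   # ms after the best-IPD step
        true = exp_decay(t, r0=180.0, r_ss=60.0, tau=50.0)
        rate = true + np.random.default_rng(1).normal(0, 8, t.size)  # noisy "measured" rates

        popt, pcov = curve_fit(exp_decay, t, rate, p0=(150.0, 50.0, 30.0))
        r0_fit, rss_fit, tau_fit = popt
        print(f"fitted adaptation time constant: {tau_fit:.1f} ms "
              f"(+/- {np.sqrt(pcov[2, 2]):.1f})")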

  9. The tropopause cold trap in the Australian Monsoon during STEP/AMEX 1987

    NASA Technical Reports Server (NTRS)

    Selkirk, Henry B.

    1993-01-01

    The relationship between deep convection and tropopause cold trap conditions is examined for the tropical northern Australia region during the 1986-87 summer monsoon season, emphasizing the Australia Monsoon Experiment (AMEX) period when the NASA Stratosphere-Troposphere Exchange Project (STEP) was being conducted. The factors related to the spatial and temporal variability of the cold point potential temperature (CPPT) are investigated. A framework is developed for describing the relationships among surface average equivalent potential temperature in the surface layer (AEPTSL) the height of deep convection, and stratosphere-troposphere exchange. The time-mean pattern of convection, large-scale circulation, and surface AEPTSL in the Australian monsoon and the evolution of the convective environment during the monsoon period and the extended transition season which preceded it are described. The time-mean fields of cold point level variables are examined and the statistical relationships between mean CPPT, surface AEPTSL, and deep convection are described. Day-to-day variations of CPPT are examined in terms of these time mean relationships.

  10. The Impact of ARM on Climate Modeling. Chapter 26

    NASA Technical Reports Server (NTRS)

    Randall, David A.; Del Genio, Anthony D.; Donner, Leo J.; Collins, William D.; Klein, Stephen A.

    2016-01-01

    Climate models are among humanity's most ambitious and elaborate creations. They are designed to simulate the interactions of the atmosphere, ocean, land surface, and cryosphere on time scales far beyond the limits of deterministic predictability, and including the effects of time-dependent external forcings. The processes involved include radiative transfer, fluid dynamics, microphysics, and some aspects of geochemistry, biology, and ecology. The models explicitly simulate processes on spatial scales ranging from the circumference of the Earth down to one hundred kilometers or smaller, and implicitly include the effects of processes on even smaller scales down to a micron or so. The atmospheric component of a climate model can be called an atmospheric global circulation model (AGCM). In an AGCM, calculations are done on a three-dimensional grid, which in some of today's climate models consists of several million grid cells. For each grid cell, about a dozen variables are time-stepped as the model integrates forward from its initial conditions. These so-called prognostic variables have special importance because they are the only things that a model remembers from one time step to the next; everything else is recreated on each time step by starting from the prognostic variables and the boundary conditions. The prognostic variables typically include information about the mass of dry air, the temperature, the wind components, water vapor, various condensed-water species, and at least a few chemical species such as ozone. A good way to understand how climate models work is to consider the lengthy and complex process used to develop one. Lets imagine that a new AGCM is to be created, starting from a blank piece of paper. The model may be intended for a particular class of applications, e.g., high-resolution simulations on time scales of a few decades. Before a single line of code is written, the conceptual foundation of the model must be designed through a creative envisioning that starts from the intended application and is based on current understanding of how the atmosphere works and the inventory of mathematical methods available.

  11. Complex and Simple Clinical Reaction Times Are Associated with Gait, Balance, and Major Fall Injury in Older Subjects with Diabetic Peripheral Neuropathy

    PubMed Central

    Richardson, James K.; Eckner, James T.; Allet, Lara; Kim, Hogene; Ashton-Miller, James

    2016-01-01

    Objective To identify relationships between complex and simple clinical measures of reaction time (RTclin), and indicators of balance in older subjects with and without diabetic peripheral neuropathy (DPN). Design Prospective cohort design. Complex RTclin Accuracy, Simple RTclin Latency, and their ratio were determined using a novel device in 42 subjects (age = 69.1 ± 8.3 yrs), 26 with DPN and 16 without. Dependent variables included unipedal stance time (UST), step width variability and range on an uneven surface, and major fall-related injury over 12 months. Results In the DPN subjects the ratio of Complex RTclin Accuracy:Simple RTclin Latency was strongly associated with longer UST (r/p = .653/.004), and decreased step width variability and range (r/p = −.696/.001 and −.782/<.001, respectively) on an uneven surface. Additionally, the two DPN subjects sustaining major injuries had lower Complex RTclin Accuracy:Simple RTclin Latency than those without. Conclusions The ratio of Complex RTclin Accuracy:Simple RTclin Latency is a potent predictor of UST and frontal plane gait variability in response to perturbations, and may predict major fall injury in older subjects with DPN. These short latency neurocognitive measures may compensate for lower limb neuromuscular impairments, and provide a more comprehensive understanding of balance and fall risk. PMID:27552354

  12. Accurate and efficient integration for molecular dynamics simulations at constant temperature and pressure

    NASA Astrophysics Data System (ADS)

    Lippert, Ross A.; Predescu, Cristian; Ierardi, Douglas J.; Mackenzie, Kenneth M.; Eastwood, Michael P.; Dror, Ron O.; Shaw, David E.

    2013-10-01

    In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity.
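
    A schematic of the separated-update idea (not the authors' actual integrator) is sketched below: Newtonian velocity-Verlet updates run every step, while a crude velocity-rescaling thermostat is applied only every thermo_interval steps. The toy system of independent harmonic wells, the rescaling thermostat and all parameter values are illustrative stand-ins.

        import numpy as np

        # Schematic only: Newtonian particle updates every step, thermostat applied
        # infrequently (every `thermo_interval` steps) rather than every step.
        rng = np.random.default_rng(0)
        n, dt, k_spring, mass, target_T = 256, 0.005, 1.0, 1.0, 1.5
        thermo_interval = 50                       # thermostat touched only here

        x = rng.standard_normal(n)
        v = rng.standard_normal(n) * np.sqrt(target_T / mass)

        def force(x):
            return -k_spring * x                   # independent harmonic wells (toy system)

        f = force(x)
        for step in range(1, 5001):
            # --- Newtonian update (velocity Verlet), every step ---
            v += 0.5 * dt * f / mass
            x += dt * v
            f = force(x)
            v += 0.5 * dt * f / mass
            # --- infrequent global thermostat update (simple velocity rescaling) ---
            if step % thermo_interval == 0:
                inst_T = mass * np.mean(v**2)      # kB = 1 units, one degree of freedom
                v *= np.sqrt(target_T / inst_T)

        print("instantaneous temperature:", mass * np.mean(v**2))

    In a parallel production code the payoff comes from the fact that the rescaling (or barostat) step, which needs global communication, now runs only once every thermo_interval steps.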

  13. A network of spiking neurons for computing sparse representations in an energy efficient way

    PubMed Central

    Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B.

    2013-01-01

    Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We compare the numerical performance of HDA with existing algorithms and show that in the asymptotic regime the representation error of HDA decays with time, t, as 1/t. We show that HDA is stable against time-varying noise, specifically, the representation error decays as 1/t for Gaussian white noise. PMID:22920853

  14. A network of spiking neurons for computing sparse representations in an energy-efficient way.

    PubMed

    Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B

    2012-11-01

    Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating by low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, the operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for gaussian white noise.
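
    The toy network below is not the authors' HDA, but it illustrates the architecture both records describe: analog internal variables integrate a gradient-like drive, while all communication between units happens through quantized spike events that increment the external coefficients. The dictionary, the restriction to nonnegative coefficients, and the threshold and step sizes are arbitrary choices for the sketch.

        import numpy as np

        # Toy spiking sparse-coding network (schematic, not the authors' HDA):
        # analog internal variables u integrate a residual-correlation drive; all
        # communication between units is through quantized spike events that
        # increment the external coefficients a.
        rng = np.random.default_rng(0)
        n, m = 32, 64                                   # signal dim, number of units
        Phi = rng.standard_normal((n, m))
        Phi /= np.linalg.norm(Phi, axis=0)              # unit-norm dictionary atoms

        idx = rng.choice(m, size=4, replace=False)      # sparse nonnegative ground truth
        x = Phi[:, idx] @ rng.uniform(0.5, 1.5, size=4)

        b = Phi.T @ x                                   # feed-forward drive
        G = Phi.T @ Phi                                 # lateral couplings (Gram matrix)
        u = np.zeros(m)                                 # internal (membrane) variables
        a = np.zeros(m)                                 # quantized external variables
        dt, quantum, threshold = 0.05, 0.05, 0.3

        for _ in range(4000):
            u += dt * (b - G @ a - u)                   # u relaxes toward Phi^T (x - Phi a)
            fired = u > threshold                       # integrate-and-fire events
            a[fired] += quantum                         # coordinate-wise quantized update
            u[fired] -= threshold                       # reset by subtraction

        print("||x|| and residual:", np.linalg.norm(x), np.linalg.norm(x - Phi @ a))
        print("nonzero coefficients:", np.count_nonzero(a), "of", m)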

  15. Predicting the admission into medical school of African American college students who have participated in summer academic enrichment programs.

    PubMed

    Hesser, A; Cregler, L L; Lewis, L

    1998-02-01

    To identify cognitive and noncognitive variables as predictors of the admission into medical school of African American college students who have participated in summer academic enrichment programs (SAEPs). The study sample comprised 309 African American college students who participated in SAEPs at the Medical College of Georgia School of Medicine from 1980 to 1989 and whose educational and occupational statuses were determined by follow-up tracking. A three-step logistic regression was used to analyze the data (with alpha = .05); the criterion variable was admission to medical school. The 17 predictor variables studied were one of two types, cognitive and noncognitive. The cognitive variables were (1) Scholastic Aptitude Test mathematics (SAT-M) score, (2) SAT verbal score, (3) college grade-point average (GPA), (4) college science GPA, (5) SAEP GPA, and (6) SAEP basic science GPA (BSGPA). The noncognitive variables were (1) gender, (2) highest college level at the time of the last SAEP application, (3) type of college attended (historically African American or predominately white), (4) number of SAEPs attended, (5) career aspiration (physician or another health science option) (6) parents who were professionals, (7) parents who were health care role models, (8) evidence of leadership, (9) evidence of community service, (10) evidence of special motivation, and (11) strength of letter of recommendation in the SAEP application. For each student the rating scores for the last four noncognitive variables were determined by averaging the ratings of two judges who reviewed relevant information in each student's file. In step 1, which explained 20% of the admission decision variance, SAT-M score, SAEP BSGPA, and college GPA were the three significant cognitive predictors identified. In step 2, which explained 31% of the variance, the three cognitive predictors identified in step 1 were joined by three noncognitive predictors: career aspiration, type of college, and number of SAEPs attended. In step 3, which explained 29% of the variance, two cognitive variables (SAT-M score and SAEP BSGPA) and two noncognitive variables (career aspiration and strength of recommendation letter) were identified. The results support the concept of using both cognitive and noncognitive variables when selecting African American students for pre-medical school SAEPs.

  16. Predictors of cognitive impairment assessed by Mini Mental State Examination in community-dwelling older adults: relevance of the step test.

    PubMed

    Muscari, Antonio; Spiller, Ilaria; Bianchi, Giampaolo; Fabbri, Elisa; Forti, Paola; Magalotti, Donatella; Pandolfi, Paolo; Zoli, Marco

    2018-07-15

    Several predictors of cognitive impairment assessed by Mini Mental State Examination (MMSE) have previously been identified. However, which predictors are the most relevant and what is their effect on MMSE categories remains unclear. Cross-sectional and longitudinal study using data from 1116 older adults (72.6 ± 5.6 years, 579 female), 350 of whom were followed for 7 years. At baseline, the following variables were collected: personal data, marital status, occupation, anthropometric measures, risk factors, previous cardiovascular events, self-rated health and physical activity during the last week. Furthermore, routine laboratory tests, abdominal echography and a step test (with measurement of the time needed to ascend and descend two steps 20 times) were performed. The associations of these variables with cross-sectional cognitive deficit (MMSE < 24) and longitudinal cognitive decline (decrease of MMSE score over 7 years of follow-up) were investigated using logistic regression models. Cross-sectional cognitive deficit was independently associated with school education ≤ 5 years, prolonged step test duration, having been blue collar or housewife (P ≤ 0.0001 for all) and, with lower significance, with advanced age, previous stroke and poor recent physical activity (P < 0.05). Longitudinal cognitive decline was mainly associated with step test duration (P = 0.0001) and diastolic blood pressure (P = 0.0002). The MMSE categories mostly associated with step test duration were orientation, attention, calculation and language, while memory appeared to be poorly or not affected. In our cohort of older adults, step test duration was the most relevant predictor of cognitive impairment. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Stability Criteria for Differential Equations with Variable Time Delays

    ERIC Educational Resources Information Center

    Schley, D.; Shail, R.; Gourley, S. A.

    2002-01-01

    Time delays are an important aspect of mathematical modelling, but often result in highly complicated equations which are difficult to treat analytically. In this paper it is shown how careful application of certain undergraduate tools such as the Method of Steps and the Principle of the Argument can yield significant results. Certain delay…

  18. Generalized Lagrange Jacobi Gauss-Lobatto (GLJGL) Collocation Method for Solving Linear and Nonlinear Fokker-Planck Equations

    NASA Astrophysics Data System (ADS)

    Parand, K.; Latifi, S.; Moayeri, M. M.; Delkhosh, M.

    2018-05-01

    In this study, we construct a new numerical approach for solving the time-dependent linear and nonlinear Fokker-Planck equations. The time variable is discretized with the Crank-Nicolson method, and for the space variable a numerical method based on Generalized Lagrange Jacobi Gauss-Lobatto (GLJGL) collocation is applied. This leads to solving the equation in a series of time steps; at each time step, the problem reduces to a system of algebraic equations, which greatly simplifies the computation. The proposed method is simple and accurate. One of its merits is that it is derivative-free: by providing a formula for the derivative matrices, the difficulty in their calculation is overcome, and there is no need to compute the Generalized Lagrange basis and matrices explicitly, since they have the Kronecker property. Linear and nonlinear Fokker-Planck equations are given as examples, and the results demonstrate that the presented method is valid, effective and reliable, and does not require any restrictive assumptions for the nonlinear terms.
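
    The time discretization described above is the standard Crank-Nicolson step. The sketch below applies it with a plain finite-difference second-derivative matrix standing in for the GLJGL collocation operator, and a pure-diffusion problem standing in for the Fokker-Planck drift-diffusion operator; both substitutions are simplifications for illustration only.

        import numpy as np

        # Crank-Nicolson time stepping for u_t = L u, with a simple finite-difference
        # second-derivative matrix standing in for the GLJGL collocation operator.
        nx, dt, nsteps = 101, 1e-3, 200
        x = np.linspace(0.0, 1.0, nx)
        dx = x[1] - x[0]

        # Interior-point Laplacian with homogeneous Dirichlet boundaries.
        A = (np.diag(-2.0 * np.ones(nx - 2)) +
             np.diag(np.ones(nx - 3), 1) +
             np.diag(np.ones(nx - 3), -1)) / dx**2

        I = np.eye(nx - 2)
        lhs = I - 0.5 * dt * A                    # (I - dt/2 L) u^{n+1}
        rhs_mat = I + 0.5 * dt * A                # (I + dt/2 L) u^n

        u = np.sin(np.pi * x)[1:-1]               # initial condition (interior values)
        for _ in range(nsteps):
            u = np.linalg.solve(lhs, rhs_mat @ u)

        # For u_t = u_xx with u(x,0) = sin(pi x), the exact peak decays as exp(-pi^2 t).
        t_final = nsteps * dt
        print("numerical vs exact peak:", u.max(), np.exp(-np.pi**2 * t_final))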

  19. Modelling temporal and large-scale spatial variability of soil respiration from soil water availability, temperature and vegetation productivity indices

    NASA Astrophysics Data System (ADS)

    Reichstein, M.; Rey, A.; Freibauer, A.; Tenhunen, J.; Valentini, R.; Soil Respiration Synthesis Team

    2003-04-01

    Field-chamber measurements of soil respiration from 17 different forest and shrubland sites in Europe and North America were summarized and analyzed with the goal to develop a model describing seasonal, inter-annual and spatial variability of soil respiration as affected by water availability, temperature and site properties. The analysis was performed at a daily and at a monthly time step. With the daily time step, the relative soil water content in the upper soil layer expressed as a fraction of field capacity was a good predictor of soil respiration at all sites. Among the site variables tested, those related to site productivity (e.g. leaf area index) correlated significantly with soil respiration, while carbon pool variables like standing biomass or the litter and soil carbon stocks did not show a clear relationship with soil respiration. Furthermore, it was evidenced that the effect of precipitation on soil respiration stretched beyond its direct effect via soil moisture. A general statistical non-linear regression model was developed to describe soil respiration as dependent on soil temperature, soil water content and site-specific maximum leaf area index. The model explained nearly two thirds of the temporal and inter-site variability of soil respiration with a mean absolute error of 0.82 µmol m-2 s-1. The parameterised model exhibits the following principal properties: 1) At a relative amount of upper-layer soil water of 16% of field capacity half-maximal soil respiration rates are reached. 2) The apparent temperature sensitivity of soil respiration measured as Q10 varies between 1 and 5 depending on soil temperature and water content. 3) Soil respiration under reference moisture and temperature conditions is linearly related to maximum site leaf area index. At a monthly time-scale we employed the approach by Raich et al. (2002, Global Change Biol. 8, 800-812) that used monthly precipitation and air temperature to globally predict soil respiration (T&P-model). While this model was able to explain some of the month-to-month variability of soil respiration, it failed to capture the inter-site variability, regardless whether the original or a new optimized model parameterization was used. In both cases, the residuals were strongly related to maximum site leaf area index. Thus, for a monthly time scale we developed a simple T&P&LAI-model that includes leaf area index as an additional predictor of soil respiration. This extended but still simple model performed nearly as well as the more detailed time-step model and explained 50 % of the overall and 65% of the site-to-site variability. Consequently, better estimates of globally distributed soil respiration should be obtained with the new model driven by satellite estimates of leaf area index.
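
    As a purely hypothetical illustration of the stated model properties (reference respiration linear in maximum site LAI, half-maximal respiration at a relative water content of 16% of field capacity, and a temperature response whose apparent Q10 varies with conditions), one could write a function of the following form. It is not the fitted model from the paper, and unlike the paper's model the moisture term here does not modulate the temperature sensitivity.

        import numpy as np

        # Hypothetical illustration only (not the fitted model from the study):
        # R = R_ref(LAImax) * f(T) * g(W), with g half-maximal at W = 0.16 of field
        # capacity and a Lloyd-Taylor-type f(T) whose apparent Q10 varies with T.
        def soil_respiration(T_soil, W_rel, LAI_max,
                             a=0.5, b=0.6, E0=300.0, T0=-46.0, T_ref=15.0, W_half=0.16):
            R_ref = a + b * LAI_max                               # linear in max site LAI
            f_T = np.exp(E0 * (1.0 / (T_ref - T0) - 1.0 / (T_soil - T0)))
            g_W = W_rel / (W_rel + W_half)                        # = 0.5 at W_rel = 0.16
            return R_ref * f_T * g_W

        print(soil_respiration(T_soil=15.0, W_rel=0.16, LAI_max=5.0))  # half of R_ref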

  20. Stability with large step sizes for multistep discretizations of stiff ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Majda, George

    1986-01-01

    One-leg and multistep discretizations of variable-coefficient linear systems of ODEs having both slow and fast time scales are investigated analytically. The stability properties of these discretizations are obtained independent of ODE stiffness and compared. The results of numerical computations are presented in tables, and it is shown that for large step sizes the stability of one-leg methods is better than that of the corresponding linear multistep methods.

  1. Comparison of machine-learning algorithms to build a predictive model for detecting undiagnosed diabetes - ELSA-Brasil: accuracy study.

    PubMed

    Olivera, André Rodrigues; Roesler, Valter; Iochpe, Cirano; Schmidt, Maria Inês; Vigo, Álvaro; Barreto, Sandhi Maria; Duncan, Bruce Bartholow

    2017-01-01

    Type 2 diabetes is a chronic disease associated with a wide range of serious health complications that have a major impact on overall health. The aims here were to develop and validate predictive models for detecting undiagnosed diabetes using data from the Longitudinal Study of Adult Health (ELSA-Brasil) and to compare the performance of different machine-learning algorithms in this task. Comparison of machine-learning algorithms to develop predictive models using data from ELSA-Brasil. After selecting a subset of 27 candidate variables from the literature, models were built and validated in four sequential steps: (i) parameter tuning with tenfold cross-validation, repeated three times; (ii) automatic variable selection using forward selection, a wrapper strategy with four different machine-learning algorithms and tenfold cross-validation (repeated three times), to evaluate each subset of variables; (iii) error estimation of model parameters with tenfold cross-validation, repeated ten times; and (iv) generalization testing on an independent dataset. The models were created with the following machine-learning algorithms: logistic regression, artificial neural network, naïve Bayes, K-nearest neighbor and random forest. The best models were created using artificial neural networks and logistic regression. These achieved mean areas under the curve of, respectively, 75.24% and 74.98% in the error estimation step and 74.17% and 74.41% in the generalization testing step. Most of the predictive models produced similar results, and demonstrated the feasibility of identifying individuals with the highest probability of having undiagnosed diabetes, through easily-obtained clinical data.
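
    A condensed sketch of steps (ii)-(iii) with scikit-learn is given below, using a synthetic dataset in place of ELSA-Brasil, logistic regression as the wrapped learner, and an arbitrary number of selected variables; it mirrors the described wrapper-style forward selection and repeated cross-validated error estimation without reproducing the study's actual pipeline.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Synthetic stand-in for the 27 candidate variables.
        X, y = make_classification(n_samples=2000, n_features=27, n_informative=8,
                                   random_state=0)

        base = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

        # (ii) wrapper-style forward selection with cross-validated scoring
        selector = SequentialFeatureSelector(base, n_features_to_select=10,
                                             direction="forward", scoring="roc_auc",
                                             cv=10)
        selector.fit(X, y)
        X_sel = selector.transform(X)

        # (iii) error estimation: tenfold CV repeated ten times on the selected subset
        cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
        auc = cross_val_score(base, X_sel, y, scoring="roc_auc", cv=cv)
        print(f"mean AUC: {auc.mean():.3f} +/- {auc.std():.3f}")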

  2. Mind your step: metabolic energy cost while walking an enforced gait pattern.

    PubMed

    Wezenberg, D; de Haan, A; van Bennekom, C A M; Houdijk, H

    2011-04-01

    The energy cost of walking could be attributed to energy related to the walking movement and energy related to balance control. In order to differentiate between these two components, we investigated the energy cost of walking an enforced step pattern, thereby perturbing balance while the walking movement is preserved. Nine healthy subjects walked three times at comfortable walking speed on an instrumented treadmill. The first trial consisted of unconstrained walking. In the next two trials, subjects walked while following a step pattern projected on the treadmill. The steps projected were either composed of the averaged step characteristics (periodic trial), or were an exact copy including the variability of the steps taken while walking unconstrained (variable trial). Metabolic energy cost was assessed and center of pressure profiles were analyzed to determine task performance, and to gain insight into the balance control strategies applied. Results showed that the metabolic energy cost was significantly higher in both the periodic and variable trial (8% and 13%, respectively) compared to unconstrained walking. The variation in center of pressure trajectories during single limb support was higher when a gait pattern was enforced, indicating a more active ankle strategy. The increased metabolic energy cost could originate from increased preparatory muscle activation to ensure proper foot placement and a more active ankle strategy to control for lateral balance. These results show that the metabolic energy cost of walking can be influenced significantly by control strategies that do not necessarily alter global gait characteristics. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Ultra-fast consensus of discrete-time multi-agent systems with multi-step predictive output feedback

    NASA Astrophysics Data System (ADS)

    Zhang, Wenle; Liu, Jianchang

    2016-04-01

    This article addresses the ultra-fast consensus problem of high-order discrete-time multi-agent systems based on a unified consensus framework. A novel multi-step predictive output mechanism is proposed under a directed communication topology containing a spanning tree. By predicting the outputs of a network several steps ahead and adding this information into the consensus protocol, it is shown that the asymptotic convergence factor is improved by a power of q + 1 compared to the routine consensus. The difficult problem of selecting the optimal control gain is solved well by introducing a variable called convergence step. In addition, the ultra-fast formation achievement is studied on the basis of this new consensus protocol. Finally, the ultra-fast consensus with respect to a reference model and robust consensus is discussed. Some simulations are performed to illustrate the effectiveness of the theoretical results.
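
    To see why predicting several steps ahead can raise the asymptotic convergence factor to a power of q + 1, compare ordinary consensus iterations x(k+1) = Wx(k) with iterations that effectively apply the protocol matrix q + 1 times per update. The row-stochastic weight matrix below is an arbitrary example over a directed cycle (which contains a spanning tree), not the protocol from the article.

        import numpy as np

        # Ordinary consensus vs. a q-step-ahead variant that effectively applies the
        # protocol matrix (q+1) times per update (illustrative, not the article's protocol).
        W = np.array([[0.6, 0.4, 0.0, 0.0],
                      [0.0, 0.5, 0.5, 0.0],
                      [0.0, 0.0, 0.7, 0.3],
                      [0.4, 0.0, 0.0, 0.6]])   # row-stochastic, directed cycle

        q = 2
        W_pred = np.linalg.matrix_power(W, q + 1)

        x = np.array([1.0, -2.0, 4.0, 0.5])
        x_pred = x.copy()

        def disagreement(v):
            return v.max() - v.min()

        for _ in range(20):
            x = W @ x                          # routine consensus
            x_pred = W_pred @ x_pred           # predictive variant

        print("after 20 updates, disagreement (routine):   ", disagreement(x))
        print("after 20 updates, disagreement (predictive):", disagreement(x_pred))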

  4. Assistive devices alter gait patterns in Parkinson disease: advantages of the four-wheeled walker.

    PubMed

    Kegelmeyer, Deb A; Parthasarathy, Sowmya; Kostyk, Sandra K; White, Susan E; Kloos, Anne D

    2013-05-01

    Gait abnormalities are a hallmark of Parkinson's disease (PD) and contribute to fall risk. Therapy and exercise are often encouraged to increase mobility and decrease falls. As disease symptoms progress, assistive devices are often prescribed. There are no guidelines for choosing appropriate ambulatory devices. This unique study systematically examined the impact of a broad range of assistive devices on gait measures during walking in both a straight path and around obstacles in individuals with PD. Quantitative gait measures, including velocity, stride length, percent swing and double support time, and coefficients of variation were assessed in 27 individuals with PD with or without one of six different devices including canes, standard and wheeled walkers (two, four or U-Step). Data were collected using the GAITRite and on a figure-of-eight course. All devices, with the exception of four-wheeled and U-Step walkers significantly decreased gait velocity. The four-wheeled walker resulted in less variability in gait measures and had less impact on spontaneous unassisted gait patterns. The U-Step walker exhibited the highest variability across all parameters followed by the two-wheeled and standard walkers. Higher variability has been correlated with increased falls. Though subjects performed better on a figure-of-eight course using either the four-wheeled or the U-Step walker, the four-wheeled walker resulted in the most consistent improvement in overall gait variables. Laser light use on a U-Step walker did not improve gait measures or safety in figure-of-eight compared to other devices. Of the devices tested, the four-wheeled-walker offered the most consistent advantages for improving mobility and safety. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. On climate prediction: how much can we expect from climate memory?

    NASA Astrophysics Data System (ADS)

    Yuan, Naiming; Huang, Yan; Duan, Jianping; Zhu, Congwen; Xoplaki, Elena; Luterbacher, Jürg

    2018-03-01

    Slow variability in the climate system is an important source of climate predictability. However, it is still challenging for current dynamical models to fully capture this variability as well as its impacts on future climate. In this study, instead of simulating the internal multi-scale oscillations in dynamical models, we discuss the effects of internal variability in terms of climate memory. By decomposing the climate state x(t) at a given time point t into a memory part M(t) and a non-memory part ɛ(t), climate memory effects from the past 30 years on climate prediction are quantified. For variables with strong climate memory, a large fraction of the variance (over 20%) in x(t) is explained by the memory part M(t), and the effects of climate memory are non-negligible for most climate variables, with the exception of precipitation. For multi-step climate prediction, a power-law decay of the explained variance was found, indicating long-lasting climate memory effects. The variance explained by climate memory can remain higher than 10% for more than 10 time steps. Accordingly, past climate conditions can affect both short-term (monthly) and long-term (interannual, decadal, or even multidecadal) climate predictions. With the memory part M(t) precisely calculated from the Fractional Integral Statistical Model, one only needs to focus on the non-memory part ɛ(t), which is an important quantity that determines climate predictive skill.

  6. An improved maximum power point tracking method for a photovoltaic system

    NASA Astrophysics Data System (ADS)

    Ouoba, David; Fakkar, Abderrahim; El Kouari, Youssef; Dkhichi, Fayrouz; Oukarfi, Benyounes

    2016-06-01

    In this paper, an improved auto-scaling variable step-size Maximum Power Point Tracking (MPPT) method for a photovoltaic (PV) system is proposed. To achieve both a fast dynamic response and stable steady-state power, a first improvement was made to the step-size scaling function of the duty cycle that controls the converter. A second algorithm was then proposed to address the wrong decisions that may be made at an abrupt change in irradiation. The proposed auto-scaling variable step-size approach was compared with various other approaches from the literature, such as the classical fixed step-size, variable step-size and a recent auto-scaling variable step-size MPPT approach. The simulation results obtained with MATLAB/SIMULINK are given and discussed for validation.
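
    One common way to realize an auto-scaling variable step size is to scale the perturbation by the observed |ΔP/ΔV|, clipped between a minimum and a maximum step. The sketch below does this for a toy single-diode PV model whose operating voltage is adjusted directly; the PV parameters, the scaling constant and the omission of the converter and duty cycle are simplifications, not the method proposed in the paper.

        import numpy as np

        # Variable step-size perturb-and-observe MPPT on a toy single-diode PV model.
        # The "controller" adjusts the operating voltage directly; in a real system the
        # perturbation would act on the converter duty cycle.
        I_PH, I_0, N_VT = 8.0, 1e-9, 1.2            # photo-current, saturation current, n*Vt*Ns

        def pv_current(v):
            return I_PH - I_0 * (np.exp(v / N_VT) - 1.0)

        def pv_power(v):
            return v * pv_current(v)

        v, dv = 20.0, 0.5                           # initial operating point and step
        p_prev, v_prev = pv_power(v), v
        scale, dv_max, dv_min = 0.05, 1.0, 0.01     # auto-scaling parameters (illustrative)

        for _ in range(200):
            p = pv_power(v)
            dp = p - p_prev
            step = np.clip(scale * abs(dp / (v - v_prev + 1e-12)), dv_min, dv_max)
            direction = 1.0 if dp * (v - v_prev) > 0 else -1.0   # classic P&O decision
            v_prev, p_prev = v, p
            v += direction * step

        print(f"operating point: V = {v:.2f} V, P = {pv_power(v):.1f} W")

    Far from the maximum power point |ΔP/ΔV| is large and the step grows toward dv_max, giving a fast transient; near the maximum it shrinks toward dv_min, giving small steady-state oscillations.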

  7. Postural adjustment errors during lateral step initiation in older and younger adults

    PubMed Central

    Sparto, Patrick J.; Fuhrman, Susan I.; Redfern, Mark S.; Perera, Subashan; Jennings, J. Richard; Furman, Joseph M.

    2016-01-01

    The purpose was to examine age differences and varying levels of step response inhibition on the performance of a voluntary lateral step initiation task. Seventy older adults (70 – 94 y) and twenty younger adults (21 – 58 y) performed visually-cued step initiation conditions based on direction and spatial location of arrows, ranging from a simple choice reaction time task to a perceptual inhibition task that included incongruous cues about which direction to step (e.g. a left pointing arrow appearing on the right side of a monitor). Evidence of postural adjustment errors and step latencies were recorded from vertical ground reaction forces exerted by the stepping leg. Compared with younger adults, older adults demonstrated greater variability in step behavior, generated more postural adjustment errors during conditions requiring inhibition, and had greater step initiation latencies that increased more than younger adults as the inhibition requirements of the condition became greater. Step task performance was related to clinical balance test performance more than executive function task performance. PMID:25595953

  8. Postural adjustment errors during lateral step initiation in older and younger adults

    PubMed Central

    Sparto, Patrick J.; Fuhrman, Susan I.; Redfern, Mark S.; Perera, Subashan; Jennings, J. Richard; Furman, Joseph M.

    2014-01-01

    The purpose was to examine age differences and varying levels of step response inhibition on the performance of a voluntary lateral step initiation task. Seventy older adults (70 – 94 y) and twenty younger adults (21 – 58 y) performed visually-cued step initiation conditions based on direction and spatial location of arrows, ranging from a simple choice reaction time task to a perceptual inhibition task that included incongruous cues about which direction to step (e.g. a left pointing arrow appearing on the right side of a monitor). Evidence of postural adjustment errors and step latencies were recorded from vertical ground reaction forces exerted by the stepping leg. Compared with younger adults, older adults demonstrated greater variability in step behavior, generated more postural adjustment errors during conditions requiring inhibition, and had greater step initiation latencies that increased more than younger adults as the inhibition requirements of the condition became greater. Step task performance was related to clinical balance test performance more than executive function task performance. PMID:25183162

  9. Stages in Learning Motor Synergies: A View Based on the Equilibrium-Point Hypothesis

    PubMed Central

    Latash, Mark L.

    2009-01-01

    This review describes a novel view on stages in motor learning based on recent developments of the notion of synergies, the uncontrolled manifold hypothesis, and the equilibrium-point hypothesis (referent configuration) that allow to merge these notions into a single scheme of motor control. The principle of abundance and the principle of minimal final action form the foundation for analyses of natural motor actions performed by redundant sets of elements. Two main stages of motor learning are introduced corresponding to (1) discovery and strengthening of motor synergies stabilizing salient performance variable(s), and (2) their weakening when other aspects of motor performance are optimized. The first stage may be viewed as consisting of two steps, the elaboration of an adequate referent configuration trajectory and the elaboration of multi-joint (multi-muscle) synergies stabilizing the referent configuration trajectory. Both steps are expected to lead to more variance in the space of elemental variables that is compatible with a desired time profile of the salient performance variable (“good variability”). Adjusting control to other aspects of performance during the second stage (for example, esthetics, energy expenditure, time, fatigue, etc.) may lead to a drop in the “good variability”. Experimental support for the suggested scheme is reviewed. PMID:20060610

  10. Advanced Ceramic Technology for Space Applications at NASA MSFC

    NASA Technical Reports Server (NTRS)

    Alim, Mohammad A.

    2003-01-01

    The ceramic processing technology using conventional methods is applied to the making of the state-of-the-art ceramics known as smart ceramics or intelligent ceramics or electroceramics. The sol-gel and wet chemical processing routes are excluded in this investigation considering economic aspect and proportionate benefit of the resulting product. The use of ceramic ingredients in making coatings or devices employing vacuum coating unit is also excluded in this investigation. Based on the present information it is anticipated that the conventional processing methods provide identical performing ceramics when compared to that processed by the chemical routes. This is possible when sintering temperature, heating and cooling ramps, peak temperature (sintering temperature), soak-time (hold-time), etc. are considered as variable parameters. In addition, optional calcination step prior to the sintering operation remains as a vital variable parameter. These variable parameters constitute a sintering profile to obtain a sintered product. Also it is possible to obtain identical products for more than one sintering profile attributing to the calcination step in conjunction with the variables of the sintering profile. Overall, the state-of-the-art ceramic technology is evaluated for potential thermal and electrical insulation coatings, microelectronics and integrated circuits, discrete and integrated devices, etc. applications in the space program.

  11. User's guide to four-body and three-body trajectory optimization programs

    NASA Technical Reports Server (NTRS)

    Pu, C. L.; Edelbaum, T. N.

    1974-01-01

    A collection of computer programs and subroutines written in FORTRAN to calculate 4-body (sun-earth-moon-space) and 3-body (earth-moon-space) optimal trajectories is presented. The programs incorporate a variable step integration technique and a quadrature formula to correct single-step errors. They provide the capability to solve the initial value problem; the two-point boundary value problem of a transfer from a given initial position to a given final position in fixed time; the optimal 2-impulse transfer from an earth parking orbit of given inclination to a given final position and velocity in fixed time; and the optimal 3-impulse transfer from a given position to a given final position and velocity in fixed time.
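
    The variable step integration idea can be illustrated with a modern adaptive-step integrator: below, SciPy's RK45 integrates a planar two-body (Kepler) orbit in normalized units and the accepted step sizes are inspected directly. This is of course far simpler than the 3-body and 4-body optimization programs described, and the orbit is arbitrary.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Adaptive (variable) step-size integration of a planar Kepler orbit, mu = 1.
        def two_body(t, s):
            x, y, vx, vy = s
            r3 = (x * x + y * y) ** 1.5
            return [vx, vy, -x / r3, -y / r3]

        s0 = [1.0, 0.0, 0.0, 1.2]          # slightly eccentric orbit (circular speed would be 1.0)
        sol = solve_ivp(two_body, (0.0, 20.0), s0, method="RK45",
                        rtol=1e-9, atol=1e-12)

        steps = np.diff(sol.t)
        print(f"{steps.size} accepted steps, "
              f"min/max step = {steps.min():.2e}/{steps.max():.2e}")
        # The integrator takes smaller steps near perihelion, where the dynamics are
        # fastest, and larger steps near aphelion.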

  12. Multigrid for hypersonic viscous two- and three-dimensional flows

    NASA Technical Reports Server (NTRS)

    Turkel, E.; Swanson, R. C.; Vatsa, V. N.; White, J. A.

    1991-01-01

    The use of a multigrid method with central differencing to solve the Navier-Stokes equations for hypersonic flows is considered. The time dependent form of the equations is integrated with an explicit Runge-Kutta scheme accelerated by local time stepping and implicit residual smoothing. Variable coefficients are developed for the implicit process that removes the diffusion limit on the time step, producing significant improvement in convergence. A numerical dissipation formulation that provides good shock capturing capability for hypersonic flows is presented. This formulation is shown to be a crucial aspect of the multigrid method. Solutions are given for two-dimensional viscous flow over a NACA 0012 airfoil and three-dimensional flow over a blunt biconic.

  13. Discontinuous Galerkin Finite Element Method for Parabolic Problems

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.

    2004-01-01

    In this paper, we develop a time discretization scheme and its corresponding spatial discretization scheme, based upon the assumption of a certain weak singularity of $\| u_t(t) \|_{L_2(\Omega)} = \| u_t \|_2$, for the discontinuous Galerkin finite element method for one-dimensional parabolic problems. Optimal convergence rates in both the time and spatial variables are obtained. A discussion of an automatic time-step control method is also included.

  14. A stochastical event-based continuous time step rainfall generator based on Poisson rectangular pulse and microcanonical random cascade models

    NASA Astrophysics Data System (ADS)

    Pohle, Ina; Niebisch, Michael; Zha, Tingting; Schümberg, Sabine; Müller, Hannes; Maurer, Thomas; Hinz, Christoph

    2017-04-01

    Rainfall variability within a storm is of major importance for fast hydrological processes, e.g. surface runoff, erosion and solute dissipation from surface soils. To investigate and simulate the impacts of within-storm variabilities on these processes, long time series of rainfall with high resolution are required. Yet, observed precipitation records of hourly or higher resolution are in most cases available only for a small number of stations and only for a few years. To obtain long time series of alternating rainfall events and interstorm periods while conserving the statistics of observed rainfall events, the Poisson model can be used. Multiplicative microcanonical random cascades have been widely applied to disaggregate rainfall time series from coarse to fine temporal resolution. We present a new coupling approach of the Poisson rectangular pulse model and the multiplicative microcanonical random cascade model that preserves the characteristics of rainfall events as well as inter-storm periods. In the first step, a Poisson rectangular pulse model is applied to generate discrete rainfall events (duration and mean intensity) and inter-storm periods (duration). The rainfall events are subsequently disaggregated to high-resolution time series (user-specified, e.g. 10 min resolution) by a multiplicative microcanonical random cascade model. One of the challenges of coupling these models is to parameterize the cascade model for the event durations generated by the Poisson model. In fact, the cascade model is best suited to downscale rainfall data with constant time step such as daily precipitation data. Without starting from a fixed time step duration (e.g. daily), the disaggregation of events requires some modifications of the multiplicative microcanonical random cascade model proposed by Olsson (1998): Firstly, the parameterization of the cascade model for events of different durations requires continuous functions for the probabilities of the multiplicative weights, which we implemented through sigmoid functions. Secondly, the branching of the first and last box is constrained to preserve the rainfall event durations generated by the Poisson rectangular pulse model. The event-based continuous time step rainfall generator has been developed and tested using 10 min and hourly rainfall data of four stations in North-Eastern Germany. The model performs well in comparison to observed rainfall in terms of event durations and mean event intensities as well as wet spell and dry spell durations. It is currently being tested using data from other stations across Germany and in different climate zones. Furthermore, the rainfall event generator is being applied in modelling approaches aimed at understanding the impact of rainfall variability on hydrological processes. Reference Olsson, J.: Evaluation of a scaling cascade model for temporal rainfall disaggregation, Hydrology and Earth System Sciences, 2, 19.30
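
    A stripped-down version of the coupling can be sketched as follows: exponential interstorm gaps, storm durations and mean intensities from a Poisson rectangular pulse model, and a dyadic microcanonical cascade that splits each event's total depth with weights summing to one, so that mass is conserved exactly. The chosen distributions, the Beta-distributed weights and the fixed power-of-two branching are simplifications; the sigmoid parameterization and the constrained handling of the first and last box described above are not reproduced.

        import numpy as np

        rng = np.random.default_rng(42)

        def poisson_rectangular_pulses(n_events, mean_dry_h=30.0, mean_dur_h=6.0,
                                       mean_int_mm_h=1.5):
            """Alternating interstorm gaps and rectangular storm pulses (all exponential)."""
            events = []
            for _ in range(n_events):
                dry = rng.exponential(mean_dry_h)
                dur = rng.exponential(mean_dur_h)
                intensity = rng.exponential(mean_int_mm_h)
                events.append((dry, dur, intensity))
            return events

        def microcanonical_cascade(total_depth, n_levels=5, p_zero=0.2, beta_a=1.5):
            """Disaggregate a depth into 2**n_levels intervals, conserving mass exactly."""
            depths = np.array([total_depth])
            for _ in range(n_levels):
                w = rng.beta(beta_a, beta_a, size=depths.size)   # continuous weights
                zero = rng.random(depths.size) < p_zero          # intermittency: all mass
                w[zero] = rng.integers(0, 2, size=zero.sum())    # goes to one half of the box
                depths = np.column_stack((w * depths, (1 - w) * depths)).ravel()
            return depths

        for dry, dur, intensity in poisson_rectangular_pulses(3):
            fine = microcanonical_cascade(dur * intensity)
            dt_fine = dur / fine.size
            print(f"gap {dry:5.1f} h | event {dur:4.1f} h, depth {dur * intensity:5.2f} mm "
                  f"-> {fine.size} boxes of {dt_fine * 60:4.1f} min, "
                  f"mass error {fine.sum() - dur * intensity:+.1e}")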

  15. Effect of an Active Video Gaming Classroom Curriculum on Health-Related Fitness, School Day Step Counts, and Motivation in Sixth Graders.

    PubMed

    Fu, You; Burns, Ryan D

    2018-05-09

    The purpose of this study was to explore the effect of an active video gaming (AVG) classroom curriculum on health-related fitness, school day steps, and motivation in sixth graders. A convenience sample of 65 sixth graders were recruited from 2 classrooms from a school located in the Western United States. One classroom served as the comparison group (n = 32) that participated in active free play, and one classroom served as the intervention group (n = 33) that participated in an AVG curriculum for 30 minutes per day, 3 days per week, for 18 weeks. Cardiorespiratory endurance was assessed using Progressive Aerobic Cardiovascular Endurance Run laps. School day steps were recorded, and motivational variables were collected using questionnaires. Measures were collected at baseline and an 18-week posttest time point. There was a significant group × time interaction for Progressive Aerobic Cardiovascular Endurance Run laps (b = 20.7 laps; 95% confidence interval, 14.6 to 26.8; P < .001). No statistically significant interactions were found for step counts or any of the motivational variables. An 18-week AVG classroom curriculum improved cardiorespiratory endurance relative to the comparison group in sixth graders. This study supports the use of low-cost AVG curricula to improve the health-related fitness of youth.

  16. Asymmetry in Determinants of Running Speed During Curved Sprinting.

    PubMed

    Ishimura, Kazuhiro; Sakurai, Shinji

    2016-08-01

    This study investigates the potential asymmetries between inside and outside legs in determinants of curved running speed. To test these asymmetries, a deterministic model of curved running speed was constructed based on components of step length and frequency, including the distances and times of different step phases, takeoff speed and angle, velocities in different directions, and relative height of the runner's center of gravity. Eighteen athletes sprinted 60 m on the curved path of a 400-m track; trials were recorded using a motion-capture system. The variables were calculated following the deterministic model. The average speeds were identical between the 2 sides; however, the step length and frequency were asymmetric. In straight sprinting, there is a trade-off relationship between the step length and frequency; however, such a trade-off relationship was not observed in each step of curved sprinting in this study. Asymmetric vertical velocity at takeoff resulted in an asymmetric flight distance and time. The runners changed the running direction significantly during the outside foot stance because of the asymmetric centripetal force. Moreover, the outside leg had a larger tangential force and shorter stance time. These asymmetries between legs indicated the outside leg plays an important role in curved sprinting.

  17. Income and Physical Activity among Adults: Evidence from Self-Reported and Pedometer-Based Physical Activity Measurements

    PubMed Central

    Kari, Jaana T.; Pehkonen, Jaakko; Hirvensalo, Mirja; Yang, Xiaolin; Hutri-Kähönen, Nina; Raitakari, Olli T.; Tammelin, Tuija H.

    2015-01-01

    This study examined the relationship between income and physical activity by using three measures to illustrate daily physical activity: the self-reported physical activity index for leisure-time physical activity, pedometer-based total steps for overall daily physical activity, and pedometer-based aerobic steps that reflect continuous steps for more than 10 min at a time. The study population consisted of 753 adults from Finland (mean age 41.7 years; 64% women) who participated in 2011 in the follow-up of the ongoing Young Finns study. Ordinary least squares models were used to evaluate the associations between income and physical activity. The consistency of the results was explored by using register-based income information from Statistics Finland, employing the instrumental variable approach, and dividing the pedometer-based physical activity according to weekdays and weekend days. The results indicated that higher income was associated with higher self-reported physical activity for both genders. The results were robust to the inclusion of the control variables and the use of register-based income information. However, the pedometer-based results were gender-specific and depended on the measurement day (weekday vs. weekend day). In more detail, the association was positive for women and negative or non-existing for men. According to the measurement day, among women, income was positively associated with aerobic steps despite the measurement day and with totals steps measured on the weekend. Among men, income was negatively associated with aerobic steps measured on weekdays. The results indicate that there is an association between income and physical activity, but the association is gender-specific and depends on the measurement type of physical activity. PMID:26317865

  18. Income and Physical Activity among Adults: Evidence from Self-Reported and Pedometer-Based Physical Activity Measurements.

    PubMed

    Kari, Jaana T; Pehkonen, Jaakko; Hirvensalo, Mirja; Yang, Xiaolin; Hutri-Kähönen, Nina; Raitakari, Olli T; Tammelin, Tuija H

    2015-01-01

    This study examined the relationship between income and physical activity by using three measures to illustrate daily physical activity: the self-reported physical activity index for leisure-time physical activity, pedometer-based total steps for overall daily physical activity, and pedometer-based aerobic steps that reflect continuous steps for more than 10 min at a time. The study population consisted of 753 adults from Finland (mean age 41.7 years; 64% women) who participated in 2011 in the follow-up of the ongoing Young Finns study. Ordinary least squares models were used to evaluate the associations between income and physical activity. The consistency of the results was explored by using register-based income information from Statistics Finland, employing the instrumental variable approach, and dividing the pedometer-based physical activity according to weekdays and weekend days. The results indicated that higher income was associated with higher self-reported physical activity for both genders. The results were robust to the inclusion of the control variables and the use of register-based income information. However, the pedometer-based results were gender-specific and depended on the measurement day (weekday vs. weekend day). In more detail, the association was positive for women and negative or non-existing for men. According to the measurement day, among women, income was positively associated with aerobic steps despite the measurement day and with totals steps measured on the weekend. Among men, income was negatively associated with aerobic steps measured on weekdays. The results indicate that there is an association between income and physical activity, but the association is gender-specific and depends on the measurement type of physical activity.

  19. Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa; Ash, Robert L.

    1992-01-01

    The objective of this study was to develop a high-resolution-explicit-multi-block numerical algorithm, suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm has employed a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interface. Variable interpolations were written in the k-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting, and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order, central-difference operator. Two classes of explicit time integration has been investigated for solving the compressible inviscid/viscous flow problems--two-state predictor-corrector schemes, and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes have been modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and multigrid procedure were added to the explicit time stepping scheme to accelerate convergence to steady-state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C degree continuity of the grid across the block interface. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow). The emphasis of the test cases was validation of code, and assessment of performance, as well as demonstration of flexibility.
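
    The multistage time stepping referred to above has the generic form $u^{(k)} = u^n + \alpha_k \Delta t\, R(u^{(k-1)})$ for k = 1..m. The sketch below applies a common 4-stage coefficient set to linear advection with first-order upwind residuals; the coefficients and the scalar test problem are illustrative and are not the modified, optimized sets developed in the study.

        import numpy as np

        # Generic m-stage explicit scheme u^(k) = u^n + alpha_k * dt * R(u^(k-1))
        # applied to du/dt + a du/dx = 0 with first-order upwind residuals.
        a, nx, cfl = 1.0, 200, 0.8
        dx = 1.0 / nx
        dt = cfl * dx / a
        alphas = (0.25, 1.0 / 3.0, 0.5, 1.0)        # a common 4-stage coefficient set

        def residual(u):
            # upwind flux difference for a > 0 on a periodic grid
            return -a * (u - np.roll(u, 1)) / dx

        x = (np.arange(nx) + 0.5) * dx
        u = np.exp(-200.0 * (x - 0.3) ** 2)         # Gaussian pulse

        for _ in range(125):                        # advect the pulse by 0.5
            u0 = u.copy()
            for alpha in alphas:
                u = u0 + alpha * dt * residual(u)
            # after the last stage, u is the solution at the new time level

        print("pulse peak now near x =", x[np.argmax(u)])

    For a linear residual this coefficient set reproduces the classical fourth-order Runge-Kutta amplification factor, which is why such multistage schemes pair well with local time stepping and residual smoothing as convergence accelerators.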

  20. On the primary variable switching technique for simulating unsaturated-saturated flows

    NASA Astrophysics Data System (ADS)

    Diersch, H.-J. G.; Perrochet, P.

    Primary variable switching appears as a promising numerical technique for variably saturated flows. While the standard pressure-based form of the Richards equation can suffer from poor mass balance accuracy, the mixed form with its improved conservative properties can possess convergence difficulties for dry initial conditions. On the other hand, variable switching can overcome most of the stated numerical problems. The paper deals with variable switching for finite elements in two and three dimensions. The technique is incorporated in both an adaptive error-controlled predictor-corrector one-step Newton (PCOSN) iteration strategy and a target-based full Newton (TBFN) iteration scheme. Both schemes provide different behaviors with respect to accuracy and solution effort. Additionally, a simplified upstream weighting technique is used. Compared with conventional approaches the primary variable switching technique represents a fast and robust strategy for unsaturated problems with dry initial conditions. The impact of the primary variable switching technique is studied over a wide range of mostly 2D and partly difficult-to-solve problems (infiltration, drainage, perched water table, capillary barrier), where comparable results are available. It is shown that the TBFN iteration is an effective but error-prone procedure. TBFN sacrifices temporal accuracy in favor of accelerated convergence if aggressive time step sizes are chosen.
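
    A minimal sketch of the switching rule itself, assuming van Genuchten retention relations and an illustrative switching threshold: pressure head serves as the primary variable for (nearly) saturated nodes, and saturation for dry nodes. The Newton iteration, finite-element discretization, and upstream weighting of the paper are not reproduced.

    ```python
    # Hedged sketch of primary-variable switching with van Genuchten relations.
    import numpy as np

    ALPHA, N = 1.0, 2.0          # van Genuchten parameters (illustrative)
    M = 1.0 - 1.0 / N
    S_SWITCH = 0.99              # switching threshold on effective saturation (assumed)

    def saturation(psi):
        """Effective saturation S_e(psi) from the van Genuchten model."""
        return np.where(psi < 0.0, (1.0 + (ALPHA * np.abs(psi)) ** N) ** (-M), 1.0)

    def pressure_head(s_e):
        """Inverse relation psi(S_e) for unsaturated states (S_e < 1)."""
        return -((s_e ** (-1.0 / M) - 1.0) ** (1.0 / N)) / ALPHA

    def primary_variable(psi):
        """Return the (name, value) pair a Newton scheme would iterate on at this node."""
        s_e = float(saturation(psi))
        if s_e >= S_SWITCH:
            return "pressure_head", psi    # saturated or near-saturated node
        return "saturation", s_e           # dry node

    # Consistency check of the inverse relation, then the switch for three states.
    assert abs(pressure_head(float(saturation(-2.0))) + 2.0) < 1e-9
    for psi in (-10.0, -0.5, 0.2):
        print(psi, primary_variable(psi))
    ```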

  1. Influence of Age, Maturity, and Body Size on the Spatiotemporal Determinants of Maximal Sprint Speed in Boys.

    PubMed

    Meyers, Robert W; Oliver, Jon L; Hughes, Michael G; Lloyd, Rhodri S; Cronin, John B

    2017-04-01

    Meyers, RW, Oliver, JL, Hughes, MG, Lloyd, RS, and Cronin, JB. Influence of age, maturity, and body size on the spatiotemporal determinants of maximal sprint speed in boys. J Strength Cond Res 31(4): 1009-1016, 2017-The aim of this study was to investigate the influence of age, maturity, and body size on the spatiotemporal determinants of maximal sprint speed in boys. Three hundred and seventy-five boys (age: 13.0 ± 1.3 years) completed a 30-m sprint test, during which maximal speed, step length, step frequency, contact time, and flight time were recorded using an optical measurement system. Body mass, height, leg length, and a maturity offset represented the somatic variables. Step frequency accounted for the highest proportion of variance in speed (∼58%) in the pre-peak height velocity (pre-PHV) group, whereas step length explained the majority of the variance in speed (∼54%) in the post-PHV group. In the pre-PHV group, mass was negatively related to speed, step length, step frequency, and contact time; however, measures of stature had a positive influence on speed and step length yet a negative influence on step frequency. Speed and step length were also negatively influenced by mass in the post-PHV group, whereas leg length continued to positively influence step length. The results highlighted that pre-PHV boys may be deemed step frequency reliant, whereas post-PHV boys may be marginally step length reliant. Furthermore, the negative influence of body mass, both pre-PHV and post-PHV, suggests that training to optimize sprint performance in youth should include methods such as plyometric and strength training, where a high neuromuscular focus and the development of force production relative to body weight are key foci.

  2. Development of a Robust Identifier for NPPs Transients Combining ARIMA Model and EBP Algorithm

    NASA Astrophysics Data System (ADS)

    Moshkbar-Bakhshayesh, Khalil; Ghofrani, Mohammad B.

    2014-08-01

    This study introduces a novel identification method for the recognition of nuclear power plant (NPP) transients by combining the autoregressive integrated moving-average (ARIMA) model and a neural network with the error backpropagation (EBP) learning algorithm. The proposed method consists of three steps. First, an EBP-based identifier is adopted to distinguish the plant normal states from the faulty ones. In the second step, ARIMA models use the integrated (I) process to convert non-stationary data of the selected variables into stationary data. Subsequently, ARIMA processes, including autoregressive (AR), moving-average (MA), or autoregressive moving-average (ARMA), are used to forecast the time series of the selected plant variables. In the third step, to identify the type of transient, the forecasted time series are fed to a modular identifier developed using the latest advances in the EBP learning algorithm. Bushehr nuclear power plant (BNPP) transients are probed to analyze the ability of the proposed identifier. Recognition of a transient is based on the similarity of its statistical properties to the reference one, rather than on the values of the input patterns. Greater robustness against noisy data and an improved balance between memorization and generalization are salient advantages of the proposed identifier. Reduced false identification, dependence of identification solely on the sign of each output signal, selection of the plant variables for transient training independently of one another, and extendibility to the identification of additional transients without adverse effects are further merits of the proposed identifier.
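
    A minimal sketch of the three-step idea on synthetic data: an ARIMA model with an integrated term handles the non-stationary variable and produces a forecast window, and a backpropagation-trained network classifies windows as normal or transient. scikit-learn's MLPClassifier stands in for the paper's EBP identifier, and the features, labels, and data are placeholder assumptions.

    ```python
    # Hedged sketch: ARIMA forecasting of a non-stationary variable + MLP classification.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)

    # Step 2 analogue: difference a drifting (non-stationary) signal and forecast it.
    series = np.cumsum(rng.normal(0.1, 1.0, 300))       # toy non-stationary plant variable
    fitted = ARIMA(series, order=(2, 1, 1)).fit()       # the I(1) term removes the trend
    forecast = fitted.forecast(steps=20)

    # Steps 1/3 analogue: classify windows (0 = normal, 1 = transient-like) from
    # simple statistics of each window.
    def window_features(x):
        return [np.mean(x), np.std(x), x[-1] - x[0]]

    locs = rng.choice([0.0, 3.0], size=200)
    X = np.array([window_features(rng.normal(loc, 1.0, 20)) for loc in locs])
    y = (X[:, 0] > 1.5).astype(int)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
    print(clf.predict([window_features(forecast)]))
    ```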

  3. Computational Study of Axisymmetric Off-Design Nozzle Flows

    NASA Technical Reports Server (NTRS)

    DalBello, Teryn; Georgiadis, Nicholas; Yoder, Dennis; Keith, Theo

    2003-01-01

    Computational Fluid Dynamics (CFD) analyses of axisymmetric circular-arc boattail nozzles operating off-design at transonic Mach numbers have been completed. These computations span the very difficult transonic flight regime with shock-induced separations and strong adverse pressure gradients. External afterbody and internal nozzle pressure distributions computed with the Wind code are compared with experimental data. A range of turbulence models was examined, including the Explicit Algebraic Stress model. Computations have been completed at freestream Mach numbers of 0.9 and 1.2, and nozzle pressure ratios (NPR) of 4 and 6. Calculations completed with variable time-stepping (steady-state) did not converge to a true steady-state solution. Calculations obtained using constant time-stepping (time-accurate) showed smaller variations in flow properties than the steady-state solutions. The failure to converge to a steady-state solution was the result of using variable time-stepping with large-scale separations present in the flow. Nevertheless, time-averaged boattail surface pressure coefficients and internal nozzle pressures show reasonable agreement with experimental data. The SST turbulence model demonstrates the best overall agreement with experimental data.

  4. Two complementary approaches to quantify variability in heat resistance of spores of Bacillus subtilis.

    PubMed

    den Besten, Heidy M W; Berendsen, Erwin M; Wells-Bennik, Marjon H J; Straatsma, Han; Zwietering, Marcel H

    2017-07-17

    Realistic prediction of microbial inactivation in food requires quantitative information on the variability introduced by the microorganisms. Bacillus subtilis forms heat resistant spores, and in this study the impact of strain variability on spore heat resistance was quantified using 20 strains. In addition, experimental variability was quantified by using technical replicates per heat treatment experiment, and reproduction variability was quantified by using two biologically independent spore crops for each strain that were heat treated on different days. The decimal reduction times (D-values) and z-values were estimated by one-step and two-step model fitting procedures. Grouping of the 20 B. subtilis strains into two statistically distinguishable groups could be confirmed based on their spore heat resistance. The reproduction variability was higher than the experimental variability, but both variabilities were much lower than the strain variability. The model fitting approach did not significantly affect the quantification of variability. Remarkably, when strain variability in spore heat resistance was quantified using only the strains producing low-level heat resistant spores, this strain variability was comparable with the previously reported strain variability in heat resistance of vegetative cells of Listeria monocytogenes, although in an entirely different temperature range. Strains that produced spores with high-level heat resistance showed a temperature range for growth similar to that of strains that produced spores with low-level heat resistance. Strain variability affected the heat resistance of spores most, and therefore integration of this variability factor in the modelling of spore heat resistance will make predictions more realistic. Copyright © 2017. Published by Elsevier B.V.
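
    A minimal sketch of the two-step fitting procedure referred to above: a log-linear survivor fit at each temperature gives a D-value, and a second fit of log10(D) against temperature gives the z-value. All numbers below are illustrative placeholders, not data from the study.

    ```python
    # Hedged sketch: two-step estimation of D-values and the z-value.
    import numpy as np

    # Step 1: D-value at each temperature from log10(N) = log10(N0) - t / D.
    survivor_data = {                 # temperature (deg C) -> (times (min), log10 counts)
        100: (np.array([0, 2, 4, 6]), np.array([6.0, 5.1, 4.0, 3.1])),
        105: (np.array([0, 1, 2, 3]), np.array([6.0, 4.9, 3.8, 2.9])),
        110: (np.array([0.0, 0.5, 1.0, 1.5]), np.array([6.0, 5.0, 3.9, 3.0])),
    }
    temps, log_d = [], []
    for temp, (t, log_n) in survivor_data.items():
        slope, _ = np.polyfit(t, log_n, 1)     # log-linear survivor curve
        temps.append(temp)
        log_d.append(np.log10(-1.0 / slope))   # D = -1 / slope (min)

    # Step 2: z-value from log10(D) = a - T / z.
    slope_z, _ = np.polyfit(temps, log_d, 1)
    print("z-value (deg C):", -1.0 / slope_z)
    ```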

  5. Effects of a multicomponent exercise program in institutionalized elders with Alzheimer's disease.

    PubMed

    Sampaio, Arnaldina; Marques, Elisa A; Mota, Jorge; Carvalho, Joana

    2016-10-18

    This study examined the effect of a Multicomponent Training (MT) intervention on cognitive function, functional fitness and anthropometric variables in institutionalized patients with Alzheimer's disease (AD). Thirty-seven institutionalized elders (84.05 ± 5.58 years) clinically diagnosed with AD (mild and moderate stages) were divided into two groups: Experimental Group (EG, n = 19) and Control Group (CG, n = 18). The EG participated in a six-month supervised MT program (aerobic, muscular resistance, flexibility and postural exercises) of 45-55 minutes/session, twice/week. Cognitive function (MMSE), physical fitness (Senior Fitness Test) and anthropometric variables (Body Mass Index and Waist Circumference) were assessed before (M1), after three months (M2) and after six months (M3) of the experimental protocol. A two-way ANOVA, with repeated measures, revealed significant group and time interactions on cognitive function, chair stand, arm curl, 2-min step, 8-foot up-and-go (UG), chair sit-and-reach (CSR) and back scratch tests as well as waist circumference. Accordingly, for those variables a different response in each group was evident over time, supported by a significantly better EG performance in chair stand, arm curl, 2-min step, UG, CSR and back scratch tests from M1 to M3, and a significant increase in MMSE from M1 to M2. The CG's performance decreased over time (M1 to M3) in chair stand, arm curl, 2-min step, UG, CSR, back scratch and MMSE. Results suggest that MT programs may be an important non-pharmacological strategy to improve physical and cognitive functions in institutionalized AD patients. © The Author(s) 2016.

  6. Effect of 8 weeks of concurrent plyometric and running training on spatiotemporal and physiological variables of novice runners.

    PubMed

    Gómez-Molina, Josué; Ogueta-Alday, Ana; Camara, Jesus; Stickley, Christopher; García-López, Juan

    2018-03-01

    Concurrent plyometric and running training has the potential to improve running economy (RE) and performance through increasing muscle strength and power, but the possible effect on spatiotemporal parameters of running has not been studied yet. The aim of this study was to compare the effect of 8 weeks of concurrent plyometric and running training on spatiotemporal parameters and physiological variables of novice runners. Twenty-five male participants were randomly assigned to two training groups: a running group (RG) (n = 11) and a running + plyometric group (RPG) (n = 14). Both groups performed an 8-week running training programme, and only the RPG performed a concurrent plyometric training programme (two sessions per week). Anthropometric, physiological (VO2max, heart rate and RE) and spatiotemporal variables (contact and flight times, step rate and length) were registered before and after the intervention. In comparison to the RG, the RPG reduced step rate and increased flight times at the same running speeds (P < .05) while contact times remained constant. Significant increases from pre- to post-training (P < .05) were found in the RPG for the squat jump and five-bound test, while the RG remained unchanged. Peak speed, ventilatory threshold (VT) speed and respiratory compensation threshold (RCT) speed increased (P < .05) for both groups, although peak speed and VO2max increased more in the RPG than in the RG. In conclusion, concurrent plyometric and running training entails a reduction in step rate, as well as increases in VT speed, RCT speed, peak speed and VO2max. Athletes could benefit from plyometric training in order to improve their strength, which would contribute to them attaining higher running speeds.

  7. Ground Reaction Forces of the Lead and Trail Limbs when Stepping Over an Obstacle

    PubMed Central

    Bovonsunthonchai, Sunee; Khobkhun, Fuengfa; Vachalathiti, Roongtiwa

    2015-01-01

    Background Precise force generation and absorption during stepping over different obstacles need to be quantified for task accomplishment. This study aimed to quantify how the lead limb (LL) and trail limb (TL) generate and absorb forces while stepping over obstacles of various heights. Material/Methods Thirteen healthy young women participated in the study. Force data were collected from 2 force plates when participants stepped over obstacles. Two limbs (right LL and left TL) and 4 conditions of stepping (no obstacle, stepping over 5 cm, 20 cm, and 30 cm obstacle heights) were tested for main effect and interaction effect by 2-way ANOVA. Paired t-test and 1-way repeated-measure ANOVA were used to compare differences of variables between limbs and among stepping conditions, respectively. Results Main effects of limb were found in first peak vertical force, minimum vertical force, propulsive peak force, and propulsive impulse. Significant main effects of condition were found in time to minimum force, time to the second peak force, time to propulsive peak force, first peak vertical force, braking peak force, propulsive peak force, vertical impulse, braking impulse, and propulsive impulse. Interaction effects of limb and condition were found in first peak vertical force, propulsive peak force, braking impulse, and propulsive impulse. Conclusions Adaptations of force generation in the LL and TL were found to involve adaptability to an altered external environment during stepping in healthy young adults. PMID:26169293

  8. Impact of Preadmission Variables on USMLE Step 1 and Step 2 Performance

    ERIC Educational Resources Information Center

    Kleshinski, James; Khuder, Sadik A.; Shapiro, Joseph I.; Gold, Jeffrey P.

    2009-01-01

    Purpose: To examine the predictive ability of preadmission variables on United States Medical Licensing Examinations (USMLE) step 1 and step 2 performance, incorporating the use of a neural network model. Method: Preadmission data were collected on matriculants from 1998 to 2004. Linear regression analysis was first used to identify predictors of…

  9. An approach to predict Sudden Cardiac Death (SCD) using time domain and bispectrum features from HRV signal.

    PubMed

    Houshyarifar, Vahid; Chehel Amirani, Mehdi

    2016-08-12

    In this paper we present a method to predict Sudden Cardiac Arrest (SCA) with higher order spectral (HOS) and linear (time-domain) features extracted from the heart rate variability (HRV) signal. Predicting the occurrence of SCA is important in order to avoid the probability of Sudden Cardiac Death (SCD). This work attempts to make the prediction five minutes before SCA onset. The method consists of four steps: pre-processing, feature extraction, feature reduction, and classification. In the first step, the QRS complexes are detected from the electrocardiogram (ECG) signal and then the HRV signal is extracted. In the second step, bispectrum features of the HRV signal and time-domain features are obtained. Six features are extracted from the bispectrum and two features from the time domain. In the next step, these features are reduced to one feature by the linear discriminant analysis (LDA) technique. Finally, KNN and support vector machine-based classifiers are used to classify the HRV signals. We used two databases: the MIT/BIH Sudden Cardiac Death (SCD) Database and the PhysioBank Normal Sinus Rhythm (NSR) database. In this work we achieved prediction of SCD occurrence six minutes before SCA with an accuracy of over 91%.
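
    A minimal sketch of the classification chain (time-domain HRV features, LDA reduction to a single feature, then KNN), using synthetic RR-interval segments. The bispectrum (HOS) features, the SVM comparison, and the real ECG databases are not reproduced, and the feature choices below are assumptions.

    ```python
    # Hedged sketch: HRV time-domain features -> LDA (1 component) -> KNN classifier.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(2)

    def hrv_features(rr_ms):
        """Simple time-domain descriptors of one RR segment: mean RR, SDNN, RMSSD."""
        diffs = np.diff(rr_ms)
        return [rr_ms.mean(), rr_ms.std(), np.sqrt(np.mean(diffs ** 2))]

    # Synthetic segments: 'normal' sinus rhythm vs. a more erratic pre-arrest pattern.
    X, y = [], []
    for label, (mean_rr, jitter) in enumerate([(850, 30), (780, 80)]):
        for _ in range(60):
            rr = rng.normal(mean_rr, jitter, 300)
            X.append(hrv_features(rr))
            y.append(label)
    X, y = np.array(X), np.array(y)

    # Reduce the feature set to one discriminant, then classify with KNN.
    lda = LinearDiscriminantAnalysis(n_components=1)
    X_lda = lda.fit_transform(X, y)
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_lda, y)
    print("training accuracy:", knn.score(X_lda, y))
    ```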

  10. Correlates of Injury-forced Work Reduction for Massage Therapists and Bodywork Practitioners.

    PubMed

    Blau, Gary; Monos, Christopher; Boyer, Ed; Davis, Kathleen; Flanagan, Richard; Lopez, Andrea; Tatum, Donna S

    2013-01-01

    Injury-forced work reduction (IFWR) has been acknowledged as an all-too-common occurrence for massage therapists and bodywork practitioners (M & Bs). However, little prior research has specifically investigated demographic, work attitude, and perceptual correlates of IFWR among M & Bs. The aim was to test two hypotheses, H1 and H2. H1 is that the accumulated cost variables set (e.g., accumulated costs, continuing education costs) will account for a significant amount of IFWR variance beyond the control/demographic (e.g., social desirability response bias, gender, years in practice, highest education level) and work attitude/perception variables (e.g., job satisfaction, affective occupation commitment, occupation identification, limited occupation alternatives) sets. H2 is that the two exhaustion variables (i.e., physical exhaustion, work exhaustion) set will account for significant IFWR variance beyond the control/demographic, work attitude/perception, and accumulated cost variables sets. An online survey sample of 2,079 complete-data M & Bs was collected. Stepwise regression analysis was used to test the study hypotheses. The research design first controlled for the control/demographic (Step 1) and work attitude/perception variables sets (Step 2), before testing the successive incremental impact of two variable sets, accumulated costs (Step 3) and exhaustion variables (Step 4), for explaining IFWR. Results supported both study hypotheses: the accumulated cost variables set (H1) and the exhaustion variables set (H2) each significantly explained IFWR after the control/demographic and work attitude/perception variables sets. The most important correlate for explaining IFWR was higher physical exhaustion, but work exhaustion was also significant. It is not just physical "wear and tear", but also "mental fatigue", that can lead to IFWR for M & Bs. Being female, having more years in practice, and having higher continuing education costs were also significant correlates of IFWR. Lower overall levels of work exhaustion, physical exhaustion, and IFWR were found in the present sample. However, since both types of exhaustion significantly and positively impact IFWR, taking sufficient time between massages and, if possible, varying one's massage technique to replenish one's physical and mental energy seem important. Failure to take required continuing education units, due to high costs, also increases the risk of IFWR. Study limitations and future research issues are discussed.

  11. Enhanced conformational sampling via novel variable transformations and very large time-step molecular dynamics

    NASA Astrophysics Data System (ADS)

    Tuckerman, Mark

    2006-03-01

    One of the computational grand challenge problems is to develop methodology capable of sampling conformational equilibria in systems with rough energy landscapes. If met, many important problems, most notably protein folding, could be significantly impacted. In this talk, two new approaches for addressing this problem will be presented. First, it will be shown how molecular dynamics can be combined with a novel variable transformation designed to warp configuration space in such a way that barriers are reduced and attractive basins stretched. This method rigorously preserves equilibrium properties while leading to very large enhancements in sampling efficiency. Extensions of this approach to the calculation/exploration of free energy surfaces will be discussed. Next, a new very large time-step molecular dynamics method will be introduced that overcomes the resonances which plague many molecular dynamics algorithms. The performance of the methods is demonstrated on a variety of systems including liquid water, long polymer chains, simple protein models, and oligopeptides.
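
    For orientation, the sketch below shows the standard multiple time-step (RESPA-style) splitting that large time-step molecular dynamics methods build on: fast forces are integrated with a small inner step and slow forces with a large outer step. It is a toy one-particle example; the resonance-avoiding scheme and the variable transformations described in the talk are not reproduced, and the force constants are arbitrary.

    ```python
    # Hedged sketch: multiple time-step (RESPA-style) velocity Verlet splitting.
    import numpy as np

    K_FAST, K_SLOW, MASS = 100.0, 1.0, 1.0    # stiff and soft harmonic springs (assumed)

    def f_fast(x):
        return -K_FAST * x

    def f_slow(x):
        return -K_SLOW * x

    def respa_step(x, v, dt_outer, n_inner):
        dt_inner = dt_outer / n_inner
        v += 0.5 * dt_outer * f_slow(x) / MASS        # half kick with the slow force
        for _ in range(n_inner):                      # velocity Verlet on the fast force
            v += 0.5 * dt_inner * f_fast(x) / MASS
            x += dt_inner * v
            v += 0.5 * dt_inner * f_fast(x) / MASS
        v += 0.5 * dt_outer * f_slow(x) / MASS        # half kick with the slow force
        return x, v

    x, v = 1.0, 0.0
    for _ in range(1000):
        x, v = respa_step(x, v, dt_outer=0.05, n_inner=10)
    energy = 0.5 * MASS * v ** 2 + 0.5 * (K_FAST + K_SLOW) * x ** 2
    print("energy after 1000 outer steps:", energy)
    ```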

  12. An Accelerometer as an Alternative to a Force Plate for the Step-Up-and-Over Test.

    PubMed

    Bailey, Christopher A; Costigan, Patrick A

    2015-12-01

    The step-up-and-over test has been used successfully to examine knee function after knee injury. Knee function is quantified using the following variables extracted from force plate data: the maximal force exerted during the lift, the maximal impact force at landing, and the total time to complete the step. For various reasons, including space and cost, it is unlikely that all clinicians will have access to a force plate. The purpose of the study was to determine if the step-up-and-over test could be simplified by using an accelerometer. The step-up-and-over test was performed by 17 healthy young adults while being measured with both a force plate and a 3-axis accelerometer mounted at the low back. Results showed that the accelerometer and force plate measures were strongly correlated for all 3 variables (r = .90-.98, Ps < .001) and that the accelerometer values for the lift and impact indices were 6-7% higher (Ps < .01) and occurred 0.07-0.1 s later than the force plate (Ps < .05). The accelerometer returned values highly correlated to those from a force plate. Compared with a force plate, a wireless, 3-axis accelerometer is a less expensive and more portable system with which to measure the step-up-and-over test.

  13. Factors Associated With Ambulatory Activity in De Novo Parkinson Disease.

    PubMed

    Christiansen, Cory; Moore, Charity; Schenkman, Margaret; Kluger, Benzi; Kohrt, Wendy; Delitto, Anthony; Berman, Brian; Hall, Deborah; Josbeno, Deborah; Poon, Cynthia; Robichaud, Julie; Wellington, Toby; Jain, Samay; Comella, Cynthia; Corcos, Daniel; Melanson, Ed

    2017-04-01

    Objective ambulatory activity during daily living has not been characterized for people with Parkinson disease prior to initiation of dopaminergic medication. Our goal was to characterize ambulatory activity based on average daily step count and examine determinants of step count in nonexercising people with de novo Parkinson disease. We analyzed baseline data from a randomized controlled trial, which excluded people performing regular endurance exercise. Of 128 eligible participants (mean ± SD = 64.3 ± 8.6 years), 113 had complete accelerometer data, which were used to determine daily step count. Multiple linear regression was used to identify factors associated with average daily step count over 10 days. Candidate explanatory variable categories were (1) demographics/anthropometrics, (2) Parkinson disease characteristics, (3) motor symptom severity, (4) nonmotor and behavioral characteristics, (5) comorbidities, and (6) cardiorespiratory fitness. Average daily step count was 5362 ± 2890 steps per day. Five factors explained 24% of daily step count variability, with higher step count associated with higher cardiorespiratory fitness (10%), no fear/worry of falling (5%), lower motor severity examination score (4%), more recent time since Parkinson disease diagnosis (3%), and the presence of a cardiovascular condition (2%). Daily step count in nonexercising people recruited for this intervention trial with de novo Parkinson disease approached sedentary lifestyle levels. Further study is warranted to elucidate the factors explaining ambulatory activity, particularly cardiorespiratory fitness and fear/worry of falling. Clinicians should consider the costs and benefits of exercise and activity behavior interventions immediately after diagnosis of Parkinson disease to attenuate the health consequences of low daily step count. Video Abstract available for more insights from the authors (see Video, Supplemental Digital Content 1, http://links.lww.com/JNPT/A170).

  14. Arbitrary-step randomly delayed robust filter with application to boost phase tracking

    NASA Astrophysics Data System (ADS)

    Qin, Wutao; Wang, Xiaogang; Bai, Yuliang; Cui, Naigang

    2018-04-01

    The conventional filters such as the extended Kalman filter, unscented Kalman filter, and cubature Kalman filter assume that the measurement is available in real time and that the measurement noise is Gaussian white noise. In practice, however, both assumptions can be invalid. To solve this problem, a novel algorithm is proposed by taking the following four steps. First, the measurement model is modified with Bernoulli random variables to describe the random delay. Then, the expressions for the predicted measurement and covariance are reformulated, which removes the restriction that the maximum delay must be one or two steps and the assumption that the probabilities of the Bernoulli random variables taking the value one are equal. Next, the arbitrary-step randomly delayed high-degree cubature Kalman filter is derived based on the 5th-degree spherical-radial rule and the reformulated expressions. Finally, the arbitrary-step randomly delayed high-degree cubature Kalman filter is modified to the arbitrary-step randomly delayed high-degree cubature Huber-based filter using the Huber technique, which is essentially an M-estimator. The proposed filter is therefore robust not only to randomly delayed measurements but also to glint noise. The application to a boost phase tracking example demonstrates the superiority of the proposed algorithms.
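
    A minimal sketch of the two ingredients described above, on a scalar toy problem: (i) a measurement model in which a Bernoulli variable decides whether the current or a delayed sample is received, and (ii) a Huber reweighting of the measurement noise that bounds the influence of outliers. This is not the high-degree cubature filter of the paper, and the noise levels, delay probability, and Huber threshold are assumptions.

    ```python
    # Hedged sketch: Bernoulli-delayed measurements + Huber-reweighted Kalman update.
    import numpy as np

    rng = np.random.default_rng(3)
    Q, R, P_DELAY, HUBER_C = 0.01, 0.25, 0.3, 1.345

    def huber_weight(residual, sigma, c=HUBER_C):
        """Weight in (0, 1]: quadratic loss inside c*sigma, linear outside."""
        z = abs(residual) / sigma
        return 1.0 if z <= c else c / z

    # Simulate a random walk observed with a possible one-step delay and glint noise.
    x_true, delayed_buffer = 0.0, 0.0
    x_est, p_est = 0.0, 1.0
    for _ in range(200):
        x_true += rng.normal(0.0, np.sqrt(Q))
        z_now = x_true + rng.normal(0.0, np.sqrt(R)) + (5.0 if rng.random() < 0.05 else 0.0)
        z = delayed_buffer if rng.random() < P_DELAY else z_now   # Bernoulli delay
        delayed_buffer = z_now

        # Kalman prediction and Huber-reweighted update.
        p_pred = p_est + Q
        residual = z - x_est
        w = huber_weight(residual, np.sqrt(p_pred + R))
        gain = p_pred / (p_pred + R / w)            # gain is down-weighted for outliers
        x_est += gain * residual
        p_est = (1.0 - gain) * p_pred

    print("final estimate vs truth:", x_est, x_true)
    ```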

  15. Multi-time Scale Coordination of Distributed Energy Resources in Isolated Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayhorn, Ebony; Xie, Le; Butler-Purry, Karen

    2016-03-31

    In isolated power systems, including microgrids, distributed assets, such as renewable energy resources (e.g. wind, solar) and energy storage, can be actively coordinated to reduce dependency on fossil fuel generation. The key challenge of such coordination arises from the significant uncertainty and variability occurring at small time scales associated with increased penetration of renewables. Specifically, the problem is to ensure economic and efficient utilization of DERs while also meeting operational objectives such as adequate frequency performance. One possible solution is to reduce the time step at which tertiary controls are implemented and to ensure feedback and look-ahead capability are incorporated to handle variability and uncertainty. However, reducing the time step of tertiary controls necessitates investigating time-scale coupling with primary controls so as not to exacerbate system stability issues. In this paper, an optimal coordination (OC) strategy, which considers multiple time scales, is proposed for isolated microgrid systems with a mix of DERs. This coordination strategy is based on an online moving horizon optimization approach. The effectiveness of the strategy was evaluated in terms of economics, technical performance, and computation time by varying key parameters that significantly impact performance. The illustrative example with realistic scenarios on a simulated isolated microgrid test system suggests that the proposed approach is generalizable towards designing multi-time scale optimal coordination strategies for isolated power systems.

  16. Elderly Fallers Enhance Dynamic Stability Through Anticipatory Postural Adjustments during a Choice Stepping Reaction Time

    PubMed Central

    Tisserand, Romain; Robert, Thomas; Chabaud, Pascal; Bonnefoy, Marc; Chèze, Laurence

    2016-01-01

    In the case of disequilibrium, the capacity to step quickly is critical for avoiding falls in the elderly. This capacity can be simply assessed through the choice stepping reaction time test (CSRT), in which elderly fallers (F) take longer to step than elderly non-fallers (NF). However, the reasons why elderly F have longer stepping times remain unclear. The purpose of this study is to assess the characteristics of the anticipatory postural adjustments (APA) that elderly F develop in a stepping context and their consequences on dynamic stability. Forty-four community-dwelling elderly subjects (20 F and 24 NF) performed a CSRT in which kinematics and ground reaction forces were collected. Variables were analyzed using two-way repeated measures ANOVAs. Results for F compared to NF showed that stepping time is elongated, due to a longer APA phase. During APA, they seem to use two distinct balance strategies, depending on the axis: in the anteroposterior direction, we measured a smaller backward movement and slower peak velocity of the center of pressure (CoP); in the mediolateral direction, the CoP movement was similar in amplitude and peak velocity between groups but lasted longer. The biomechanical consequence of both strategies was an increased margin of stability (MoS) at foot-off in the respective direction. By elongating their APA, elderly F use a safer balance strategy that prioritizes dynamic stability conditions over the objective of the task. Such a choice in balance strategy probably stems from muscular limitations and/or a higher fear of falling and paradoxically indicates an increased risk of falling. PMID:27965561
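
    A minimal sketch of one common way to quantify the margin of stability mentioned above: Hof's extrapolated centre of mass, XcoM = CoM + vCoM/omega0, compared with the boundary of the base of support. The numbers are illustrative, and the paper's exact computation may differ in detail.

    ```python
    # Hedged sketch: margin of stability from the extrapolated centre of mass.
    import numpy as np

    G = 9.81

    def margin_of_stability(com, v_com, bos_edge, leg_length):
        """MoS (m) in one direction: positive means the XcoM lies inside the BoS edge."""
        omega0 = np.sqrt(G / leg_length)     # eigenfrequency of the inverted pendulum
        xcom = com + v_com / omega0          # extrapolated centre of mass
        return bos_edge - xcom

    # Example: mediolateral direction at foot-off (all positions in metres).
    print(margin_of_stability(com=0.02, v_com=0.10, bos_edge=0.12, leg_length=0.90))
    ```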

  17. User's instructions for the cardiovascular Walters model

    NASA Technical Reports Server (NTRS)

    Croston, R. C.

    1973-01-01

    The model is a combined, steady-state cardiovascular and thermal model. It was originally developed for interactive use, but was converted to batch mode simulation for the Sigma 3 computer. The purpose of the model is to compute steady-state circulatory and thermal variables in response to exercise work loads and environmental factors. During a computer simulation run, several selected variables are printed at each time step. End conditions are also printed at the completion of the run.

  18. Dyslexia Laws in the USA

    ERIC Educational Resources Information Center

    Youman, Martha; Mather, Nancy

    2013-01-01

    Throughout the various states of the USA, the appropriate identification of dyslexia and the timely provision of interventions are characterized by variability and inconsistency. Several states have recognized the existence of this disorder and the well-established need for services. These states have taken proactive steps to implement laws and…

  19. Modeling the Cienega de Santa Clara, Sonora, Mexico

    NASA Astrophysics Data System (ADS)

    Huckelbridge, K. H.; Hidalgo, H.; Dracup, J.; Ibarra Obando, S. E.

    2002-12-01

    The Cienega de Santa Clara is a created wetland located in the Colorado River Delta (CRD), in Sonora, Mexico. It is sustained by agricultural return flows from the Wellton-Mohawk Irrigation District in Arizona and the Mexicali Valley in Mexico. As one of the few wetlands remaining in the CRD, it provides critical habitat for several species of fish and birds, including several endangered species such as the desert pupfish (Cyprinodon macularius) and the Yuma clapper rail (Rallus longirostris yumanensis). However, this habitat may be in jeopardy if the quantity and quality of the agricultural inflows are significantly altered. This study seeks to develop a model that describes the dynamics of wetland hydrology, vegetation, and water quality as a function of inflow variability and salinity loading. The model is divided into four modules set up in sequence. For a given time step, the sequence begins with the first module, which utilizes basic diffusion equations to simulate mixing processes in the shallow wetland when the flow and concentration of the inflow deviate from the baseline. The second module develops a vegetated-area response to the resulting distribution of salinity in the wetland. Using the new area of vegetation cover determined by the second module and various meteorological variables, the third module calculates the evapotranspiration rate for the wetland using the Penman-Monteith equation. Finally, the fourth module takes the overall evapotranspiration rate, along with precipitation, inflow, and outflow, and calculates the new volume of the wetland using a water balance. This volume then establishes the initial variables for the next time step. The key outputs from the model are salinity concentration, area of vegetation cover, and wetland volume for each time step. Results from this model will illustrate how the wetland's hydrology, vegetation, and water quality are altered over time under various inflow scenarios. These outputs can ultimately be used to assess the impacts to wetland wildlife and overall ecosystem health, and to determine the best management strategy for the Cienega de Santa Clara.
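
    A minimal sketch of the four-module sequence run once per time step: salinity mixing, vegetation response, evapotranspiration, and a water balance. Every functional form and coefficient below is a placeholder assumption; in particular, the ET module is a crude stand-in for the Penman-Monteith calculation used in the model.

    ```python
    # Hedged sketch: one module cycle per time step with placeholder relationships.
    def mix_salinity(salinity, volume, q_in, c_in, dt):
        """Fully mixed salinity update from the inflow load (module 1, simplified)."""
        mass = salinity * volume + q_in * c_in * dt
        return mass / (volume + q_in * dt)

    def vegetated_area(salinity, a_max=4000.0, c_half=3.0):
        """Vegetation cover (ha) declining with salinity (module 2, placeholder)."""
        return a_max / (1.0 + salinity / c_half)

    def evapotranspiration(area_ha, et_rate_m_per_day=0.006):
        """Wetland ET volume per day (module 3, stand-in for Penman-Monteith)."""
        return area_ha * 1.0e4 * et_rate_m_per_day        # m3/day

    def water_balance(volume, q_in, q_out, precip, et, dt):
        """New storage volume (module 4)."""
        return volume + (q_in - q_out + precip - et) * dt

    volume, salinity = 2.0e7, 2.5                          # m3, g/L (illustrative)
    for _ in range(365):                                   # daily time steps for one year
        salinity = mix_salinity(salinity, volume, q_in=5.0e5, c_in=3.0, dt=1.0)
        area = vegetated_area(salinity)
        et = evapotranspiration(area)
        volume = water_balance(volume, q_in=5.0e5, q_out=1.0e5, precip=0.0, et=et, dt=1.0)
    print("end-of-year volume (m3): %.3g, salinity (g/L): %.2f" % (volume, salinity))
    ```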

  20. A High Space-Time Resolution Dataset Linking Meteorological Forcing and Hydro-Sedimentary Response in a Mesoscale Mediterranean Catchment (Auzon) of the Ardèche Region, France

    NASA Astrophysics Data System (ADS)

    Nord, G.; Braud, I.; Boudevillain, B.; Gérard, S.; Molinié, G.; Vandervaere, J. P.; Huza, J.; Le Coz, J.; Dramais, G.; Legout, C.; Berne, A.; Grazioli, J.; Raupach, T.; Van Baelen, J.; Wijbrans, A.; Delrieu, G.; Andrieu, J.; Caliano, M.; Aubert, C.; Teuling, R.; Le Boursicaud, R.; Branger, F.; Vincendon, B.; Horner, I.

    2014-12-01

    A comprehensive hydrometeorological dataset is presented spanning the period 1 Jan 2011-31 Dec 2014 to improve the understanding and simulation of the hydrological processes leading to flash floods in a mesoscale catchment (Auzon, 116 km2) of the Mediterranean region. The specificity of the dataset is its high space-time resolution, especially concerning rainfall and the hydrological response, which is particularly adapted to the highly spatially variable rainfall events that may occur in this region. This type of dataset is rare in the scientific literature because of the quantity and type of sensors for meteorology and surface hydrology. Rainfall data include continuous precipitation measured by rain-gages (5 min time step for the research network of 21 rain-gages and 1 h time step for the operational network of 9 rain-gages), S-band Doppler dual-polarization radar (1 km2, 5 min resolution), and disdrometers (11 sensors working at 1 min time step). During the special observation period (SOP-1) and enhanced observation period (Sep-Dec 2012, Sep-Dec 2013) of the HyMeX (Hydrological Cycle in the Mediterranean Experiment) project, two X-band radars provided precipitation measurements at very fine spatial and temporal scales (1 ha, 5 min). Meteorological data are taken from the operational surface weather observation stations of Meteo France at the hourly time resolution (6 stations in the region of interest). The monitoring of surface hydrology and suspended sediment is multi-scale and based on nested catchments. Three hydrometric stations measure water discharge and additional physico-chemical variables at a 2-10 min time resolution. Two experimental plots monitor overland flow and erosion at 1 min time resolution on a hillslope with a vineyard. A network of 11 gauges continuously measures water level and temperature in headwater subcatchments at a time resolution of 2-5 min. A network of soil moisture sensors enables the continuous measurement of soil volumetric water content at 20 min time resolution at 9 sites. Additionally, opportunistic observations (soil moisture measurements and stream gauging) were performed during floods between 2012 and 2014. The data are appropriate for understanding rainfall variability, improving areal rainfall estimations, and making progress in distributed hydrological modelling.

  2. Analysis of aggregated tick returns: Evidence for anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Weber, Philipp

    2007-01-01

    In order to investigate the origin of large price fluctuations, we analyze stock price changes of ten frequently traded NASDAQ stocks in the year 2002. Though the influence of the trading frequency on the aggregate return in a certain time interval is important, it cannot alone explain the heavy-tailed distribution of stock price changes. For this reason, we analyze intervals with a fixed number of trades in order to eliminate the influence of the trading frequency and investigate the relevance of other factors for the aggregate return. We show that in tick time the price follows a discrete diffusion process with a variable step width while the difference between the number of steps in positive and negative direction in an interval is Gaussian distributed. The step width is given by the return due to a single trade and is long-term correlated in tick time. Hence, its mean value can well characterize an interval of many trades and turns out to be an important determinant for large aggregate returns. We also present a statistical model reproducing the cumulative distribution of aggregate returns. For an accurate agreement with the empirical distribution, we also take into account asymmetries of the step widths in different directions together with cross correlations between these asymmetries and the mean step width as well as the signs of the steps.
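
    A minimal sketch of the statistical picture described above: within an interval of a fixed number of trades, the difference between up and down steps is Gaussian, and the aggregate return is roughly that difference times a heavy-tailed, slowly varying mean step width. The asymmetries and cross-correlations mentioned in the abstract are ignored, and all distributional choices below are simplifying assumptions.

    ```python
    # Hedged sketch: aggregate returns from a variable step width and a Gaussian step count.
    import numpy as np

    rng = np.random.default_rng(4)
    n_intervals, trades_per_interval = 20000, 100

    # Mean step width per interval: heavy-tailed and crudely persistent in time.
    raw = rng.pareto(3.0, n_intervals) + 1.0
    kernel = np.ones(50) / 50.0
    mean_width = np.convolve(raw, kernel, mode="same")

    # Gaussian difference between the numbers of up and down steps in each interval.
    step_diff = rng.normal(0.0, np.sqrt(trades_per_interval), n_intervals)

    aggregate_return = mean_width * step_diff
    excess = np.mean(aggregate_return ** 4) / np.mean(aggregate_return ** 2) ** 2
    print("kurtosis proxy (3 for a Gaussian):", excess)
    ```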

  3. Blepharoplasty markers: comparison of ink drying time and ink spread.

    PubMed

    Kim, Jenna M; Ehrlich, Michael S

    2017-04-01

    Marking of the eyelid is a crucial presurgical step in blepharoplasty. A number of markers are available for this purpose with variable ink characteristics. In this study, we measured the ink drying time and spread width of 13 markers used for preoperative marking for blepharoplasty. Based on the results, we propose markers that may be best suited for use in this procedure.

  4. Interrater reliability of videotaped observational gait-analysis assessments.

    PubMed

    Eastlack, M E; Arvidson, J; Snyder-Mackler, L; Danoff, J V; McGarvey, C L

    1991-06-01

    The purpose of this study was to determine the interrater reliability of videotaped observational gait-analysis (VOGA) assessments. Fifty-four licensed physical therapists with varying amounts of clinical experience served as raters. Three patients with rheumatoid arthritis who demonstrated an abnormal gait pattern served as subjects for the videotape. The raters analyzed each patient's most severely involved knee during the four subphases of stance for the kinematic variables of knee flexion and genu valgum. Raters were asked to determine whether these variables were inadequate, normal, or excessive. The temporospatial variables analyzed throughout the entire gait cycle were cadence, step length, stride length, stance time, and step width. Generalized kappa coefficients ranged from .11 to .52. Intraclass correlation coefficients (2,1) and (3,1) were slightly higher. Our results indicate that physical therapists' VOGA assessments are only slightly to moderately reliable and that improved interrater reliability of the assessments of physical therapists utilizing this technique is needed. Our data suggest that there is a need for greater standardization of gait-analysis training.

  5. From mess to mass: a methodology for calculating storm event pollutant loads with their uncertainties, from continuous raw data time series.

    PubMed

    Métadier, M; Bertrand-Krajewski, J-L

    2011-01-01

    With the increasing implementation of continuous monitoring of both discharge and water quality in sewer systems, large databases are now available. In order to manage large amounts of data and calculate the various variables and indicators of interest, it is necessary to apply automated methods for data processing. This paper deals with the processing of short time step turbidity time series to estimate TSS (Total Suspended Solids) and COD (Chemical Oxygen Demand) event loads in sewer systems during storm events, together with their associated uncertainties. The following steps are described: (i) sensor calibration, (ii) estimation of data uncertainties, (iii) correction of raw data, (iv) data pre-validation tests, (v) final validation, and (vi) calculation of TSS and COD event loads and estimation of their uncertainties. These steps have been implemented in an integrated software tool. Examples of results are given for a set of 33 storm events monitored in a stormwater separate sewer system.
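
    A minimal sketch of step (vi): a TSS event load computed from short time step turbidity and discharge series, with a Monte Carlo propagation of the turbidity-to-TSS calibration uncertainty. The series, calibration relation, and uncertainty values are synthetic placeholders, not those of the paper.

    ```python
    # Hedged sketch: event load with Monte Carlo uncertainty on the calibration.
    import numpy as np

    rng = np.random.default_rng(5)
    dt = 120.0                                        # s, 2-minute time step
    turbidity = rng.gamma(5.0, 20.0, 720)             # NTU over a 24 h storm record (toy)
    discharge = rng.gamma(3.0, 0.05, 720)             # m3/s (toy)

    # Assumed linear calibration TSS = a * turbidity + b, with uncertain coefficients.
    a_mean, a_sd, b_mean, b_sd = 1.1, 0.08, 5.0, 2.0  # mg/L per NTU, mg/L

    loads = []
    for _ in range(5000):                             # Monte Carlo replicates
        a = rng.normal(a_mean, a_sd)
        b = rng.normal(b_mean, b_sd)
        tss = a * turbidity + b                       # mg/L
        loads.append(np.sum(tss * discharge * dt) / 1.0e3)   # mg/L * m3 = g, then kg
    loads = np.array(loads)
    print("event load: %.1f kg (95%% interval %.1f-%.1f kg)"
          % (loads.mean(), *np.percentile(loads, [2.5, 97.5])))
    ```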

  6. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
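
    A minimal sketch of the sampling procedure underlying these coalescents: N i.i.d. Pareto(α) variables are normalized by their sum to give random offspring frequencies, from which a fixed-size next generation is drawn. The β-size-biasing and the limiting regimes discussed in the paper are not reproduced here.

    ```python
    # Hedged sketch: one generation of the normalized-Pareto reproduction law.
    import numpy as np

    rng = np.random.default_rng(6)
    N, alpha = 1000, 1.5

    weights = rng.pareto(alpha, N) + 1.0       # Pareto(alpha) samples with support >= 1
    freqs = weights / weights.sum()            # normalized random reproduction law
    parents = rng.choice(N, size=N, p=freqs)   # parents of the next generation
    print("largest family fraction:", np.bincount(parents).max() / N)
    ```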

  7. Step-by-step variability of swing phase trajectory area during steady state walking at a range of speeds

    PubMed Central

    Hurt, Christopher P.; Brown, David A.

    2018-01-01

    Background Step kinematic variability has been characterized during gait using spatial and temporal kinematic characteristics. However, people can adopt different trajectory paths both between individuals and even within individuals at different speeds. Single point measures such as minimum toe clearance (MTC) and step length (SL) do not necessarily account for the multiple paths that the foot may take during the swing phase to reach the same foot fall endpoint. The purpose of this study was to test a step-by-step foot trajectory area (SBS-FTA) variability measure that is able to characterize sagittal plane foot trajectories of varying areas, and compare this measure against MTC and SL variability at different speeds. We hypothesize that the SBS-FTA variability would demonstrate increased variability with speed. Second, we hypothesize that SBS-FTA would have a stronger curvilinear fit compared with the CV and SD of SL and MTC. Third, we hypothesize SBS-FTA would be more responsive to change in the foot trajectory at a given speed compared to SL and MTC. Fourth, SBS-FTA variability would not strongly co-vary with SL and MTC variability measures since it represents a different construct related to foot trajectory area variability. Methods We studied 15 nonimpaired individuals during walking at progressively faster speeds. We calculated SL, MTC, and SBS-FTA area. Results SBS-FTA variability increased with speed, had a stronger curvilinear fit compared with the CV and SD of SL and MTC, was more responsive at a given speed, and did not strongly co-vary with SL and MTC variability measures. Conclusion SBS foot trajectory area variability was sensitive to change with faster speeds, captured a relationship that the majority of the other measures did not demonstrate, and did not co-vary strongly with other measures that are also components of the trajectory. PMID:29370202
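
    A minimal sketch of a step-by-step trajectory-area measure in the spirit of the one described above: the sagittal-plane area enclosed by each swing-phase toe path (closed along the ground) is computed with the shoelace formula and summarized by its SD and CV across steps. The paper's exact SBS-FTA definition may differ, and the trajectories below are synthetic.

    ```python
    # Hedged sketch: per-step swing trajectory area and its variability across steps.
    import numpy as np

    rng = np.random.default_rng(7)

    def swing_area(x, y):
        """Polygon area (shoelace formula) of one swing-phase toe trajectory."""
        return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

    areas = []
    for _ in range(50):                               # 50 consecutive steps
        t = np.linspace(0.0, np.pi, 100)
        step_len = rng.normal(0.70, 0.03)             # m
        peak_height = rng.normal(0.12, 0.02)          # m
        x = step_len * t / np.pi
        y = peak_height * np.sin(t)                   # toe path returning to the ground
        areas.append(swing_area(x, y))

    areas = np.array(areas)
    print("mean area %.4f m^2, SD %.4f, CV %.1f%%"
          % (areas.mean(), areas.std(), 100 * areas.std() / areas.mean()))
    ```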

  8. Infilling and quality checking of discharge, precipitation and temperature data using a copula based approach

    NASA Astrophysics Data System (ADS)

    Anwar, Faizan; Bárdossy, András; Seidel, Jochen

    2017-04-01

    Estimating missing values in a time series of a hydrological variable is an everyday task for a hydrologist. Existing methods such as inverse distance weighting, multivariate regression, and kriging, though simple to apply, provide no indication of the quality of the estimated value and depend mainly on the values of neighboring stations at a given step in the time series. Copulas have the advantage of representing the pure dependence structure between two or more variables (given that the relationship between them is monotonic). They remove the need to transform the data before use or to specify functions that model the relationship between the considered variables. A copula-based approach is suggested to infill discharge, precipitation, and temperature data. As a first step the normal copula is used; subsequently, the necessity of using non-normal / non-symmetrical dependence structures is investigated. Discharge and temperature are treated as regular continuous variables and can be used without processing for infilling and quality checking. Due to its mixed distribution, precipitation has to be treated differently. This is done by assigning a discrete probability to the zeros and treating the rest as a continuous distribution. Building on the work of others, the normal copula is also utilized, along with infilling, to identify values in a time series that might be erroneous. This is done by treating the available value as missing, infilling it using the normal copula, and checking whether it lies within a confidence band (5 to 95% in our case) of the obtained conditional distribution. Hydrological data from two catchments, the Upper Neckar River (Germany) and the Santa River (Peru), are used to demonstrate the application for datasets with different data quality. The Python code used here is also made available on GitHub. The required input is the time series of a given variable at different stations.
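
    A minimal sketch of normal (Gaussian) copula infilling with a single neighbor station: values are mapped to normal scores through their empirical distributions, the conditional normal distribution at the target is formed, and the estimate plus a 5-95% band is mapped back through the target's empirical quantiles. The data are synthetic, and the full multi-station treatment and the mixed-distribution handling of precipitation zeros are not shown.

    ```python
    # Hedged sketch: bivariate normal-copula infilling with a confidence band.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)

    # Synthetic overlapping records at a target and a neighbor station.
    latent = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], 2000)
    target = np.exp(latent[:, 0])                     # skewed, discharge-like
    neighbor = np.exp(0.7 * latent[:, 1])

    def to_normal_score(value, sample):
        """Empirical CDF (Weibull plotting position) -> standard normal quantile."""
        rank = np.searchsorted(np.sort(sample), value, side="right")
        return stats.norm.ppf(rank / (len(sample) + 1.0))

    # Fit the copula correlation on normal scores of the paired record.
    z_t = np.array([to_normal_score(v, target) for v in target])
    z_n = np.array([to_normal_score(v, neighbor) for v in neighbor])
    rho = np.corrcoef(z_t, z_n)[0, 1]

    # Infill one missing target value from the neighbor's simultaneous observation.
    z_obs = to_normal_score(neighbor[100], neighbor)
    cond_mean, cond_sd = rho * z_obs, np.sqrt(1.0 - rho ** 2)
    probs = stats.norm.cdf([cond_mean - 1.645 * cond_sd, cond_mean, cond_mean + 1.645 * cond_sd])
    low, estimate, high = np.quantile(target, probs)  # back-transform via empirical quantiles
    print("infilled value %.2f (5-95%% band %.2f-%.2f)" % (estimate, low, high))
    ```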

  9. A technique for estimating time of concentration and storage coefficient values for Illinois streams

    USGS Publications Warehouse

    Graf, Julia B.; Garklavs, George; Oberg, Kevin A.

    1982-01-01

    Values of the unit hydrograph parameters time of concentration (TC) and storage coefficient (R) can be estimated for streams in Illinois by a two-step technique developed from data for 98 gaged basins in the State. The sum of TC and R is related to stream length (L) and main channel slope (S) by the relation $(TC + R)_e = 35.2L^{0.39}S^{-0.78}$. The variable R/(TC + R) is not significantly correlated with drainage area, slope, or length, but does exhibit a regional trend. Regional values of R/(TC + R) are used with the computed values of $(TC + R)_e$ to solve for estimated values of time of concentration (TCe) and storage coefficient (Re). The use of the variable R/(TC + R) is thought to account for variations in unit hydrograph parameters caused by physiographic variables such as basin topography, flood-plain development, and basin storage characteristics. (USGS)
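
    The two-step technique can be applied directly from the relations quoted above, as in the sketch below. The regional ratio R/(TC + R) and the units (stream length in miles, main channel slope in feet per mile, results in hours) are assumptions for illustration; the actual regional values and unit conventions are given in the report.

    ```python
    # Hedged sketch: estimate TCe and Re from the quoted regression and a regional ratio.
    def tc_and_r(length_mi, slope_ft_per_mi, regional_ratio):
        """Return (TCe, Re) in hours, assuming the units stated in the lead-in."""
        tc_plus_r = 35.2 * length_mi ** 0.39 * slope_ft_per_mi ** -0.78
        r_e = regional_ratio * tc_plus_r
        return tc_plus_r - r_e, r_e

    # Example: a 20-mile stream with a slope of 5 ft/mi and an assumed ratio of 0.55.
    print(tc_and_r(20.0, 5.0, 0.55))
    ```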

  10. Automated margin analysis of contemporary adhesive systems in vitro: evaluation of discriminatory variables.

    PubMed

    Heintze, Siegward D; Forjanic, Monika; Roulet, François-Jean

    2007-08-01

    The aims were to use an optical sensor to automatically evaluate the marginal seal of restorations placed with 21 adhesive systems from all four adhesive categories in cylindrical cavities of bovine dentin, applying different outcome variables, and to evaluate the discriminatory power of those variables. Twenty-one adhesive systems were evaluated: three 3-step etch-and-rinse systems, three 2-step etch-and-rinse systems, five 2-step self-etching systems, and ten 1-step self-etching systems. All adhesives were applied in cylindrical cavities in bovine dentin together with Tetric Ceram (n=8). In the control group, no adhesive system was used. After 24 h of storage in water at 37 degrees C, the surface was polished with 4000-grit SiC paper, and epoxy resin replicas were produced. An optical sensor (FRT MicroProf) created 100 profiles of the restoration margin, and an algorithm detected gaps and calculated their depths and widths. The following evaluation criteria were used: percentage of specimens without gaps, percentage of gap-free profiles in relation to all profiles per specimen, mean gap width, mean gap depth, largest gap, and the modified marginal integrity index MI. The statistical analysis was carried out on log-transformed data for all variables with ANOVA and post-hoc Tukey's test for multiple comparisons. The correlation between the variables was tested with regression analysis, and the pooled data according to the four adhesive categories were compared by applying the Mann-Whitney nonparametric test (p < 0.05). For all the variables that characterized the marginal adaptation, there was great variation from material to material. In general, the etch-and-rinse adhesive systems demonstrated the best marginal adaptation, followed by the 2-step self-etching and the 1-step self-etching adhesives; the latter showed the highest variability in test results between materials and within the same material. The only exception to this rule was Xeno IV, which showed a marginal adaptation that was comparable to that of the best 3-step etch-and-rinse systems. Except for the variables "largest gap" and "mean gap depth", all the other variables had a similar ability to discriminate between materials. Pooled data according to the four adhesive categories revealed statistically significant differences between the one-step self-etching systems and the other three systems, as well as between the two-step self-etching and three-step etch-and-rinse systems. With one exception, the one-step self-etching systems yielded the poorest marginal adaptation results and the highest variability between materials and within the same material. Except for the variable "largest gap", the percentage of continuous margin, mean gap width, mean gap depth, and the marginal integrity index MI were closely related to one another and showed, with the exception of "mean gap depth", similar discriminatory power.

  11. The Screening Tool of Feeding Problems Applied to Children (STEP-CHILD): Psychometric Characteristics and Associations with Child and Parent Variables

    ERIC Educational Resources Information Center

    Seiverling, Laura; Hendy, Helen M.; Williams, Keith

    2011-01-01

    The present study evaluated the 23-item Screening Tool for Feeding Problems (STEP; Matson & Kuhn, 2001) with a sample of children referred to a hospital-based feeding clinic to examine the scale's psychometric characteristics and then demonstrate how a children's revision of the STEP, the STEP-CHILD is associated with child and parent variables.…

  12. Addressing Spatial Dependence Bias in Climate Model Simulations—An Independent Component Analysis Approach

    NASA Astrophysics Data System (ADS)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2018-02-01

    Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
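
    A minimal sketch of the two-step idea on synthetic data: (1) transform multi-site series to statistically independent components with ICA, quantile-map each component onto its observed counterpart, and back-transform to restore spatial dependence; (2) quantile-map each site again at the grid scale. scikit-learn's FastICA and a simple empirical quantile mapping stand in for the paper's implementation, and the data generation is an assumption.

    ```python
    # Hedged sketch: ICA-based spatial correction followed by per-site quantile mapping.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(9)
    n_time, n_sites = 2000, 5

    # Synthetic "observed" and "modelled" fields: non-Gaussian sources mixed spatially,
    # with the model biased in mean and spread.
    mix = rng.normal(size=(n_sites, n_sites))
    obs = rng.gamma(2.0, 1.0, (n_time, n_sites)) @ mix.T
    model = 1.3 * rng.gamma(2.0, 1.0, (n_time, n_sites)) @ mix.T + 0.5

    def quantile_map(series, reference):
        """Empirical quantile mapping of 'series' onto the distribution of 'reference'."""
        ranks = np.argsort(np.argsort(series)) / (len(series) - 1.0)
        return np.quantile(reference, ranks)

    # Step 1: correct the independent components, then restore spatial dependence.
    ica = FastICA(n_components=n_sites, random_state=0)
    s_obs = ica.fit_transform(obs)
    s_model = ica.transform(model)
    s_corr = np.column_stack([quantile_map(s_model[:, j], s_obs[:, j]) for j in range(n_sites)])
    model_step1 = ica.inverse_transform(s_corr)

    # Step 2: grid-scale (per-site) quantile mapping.
    corrected = np.column_stack([quantile_map(model_step1[:, j], obs[:, j]) for j in range(n_sites)])
    print("mean bias before/after:", model.mean() - obs.mean(), corrected.mean() - obs.mean())
    ```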

  13. Incompressible spectral-element method: Derivation of equations

    NASA Technical Reports Server (NTRS)

    Deanna, Russell G.

    1993-01-01

    A fractional-step splitting scheme breaks the full Navier-Stokes equations into explicit and implicit portions amenable to the calculus of variations. Beginning with the functional forms of the Poisson and Helmholtz equations, we substitute finite expansion series for the dependent variables and derive the matrix equations for the unknown expansion coefficients. This method employs a new splitting scheme which differs from conventional three-step (nonlinear, pressure, viscous) schemes. The nonlinear step appears in the conventional, explicit manner; the difference occurs in the pressure step. Instead of solving for the pressure gradient using the nonlinear velocity, we add the viscous portion of the Navier-Stokes equation from the previous time step to the velocity before solving for the pressure gradient. By combining this 'predicted' pressure gradient with the nonlinear velocity in an explicit term, and using the Crank-Nicolson method for the viscous terms, we develop a Helmholtz equation for the final velocity.

  14. The Winds of B Supergiants

    NASA Technical Reports Server (NTRS)

    Fullerton, A. W.; Massa, D. L.; Prinja, R. K.; Owocki, S. P.; Cranmer, S. R.

    1998-01-01

    This report summarizes the progress of the work conducted under the program "The Winds of B Supergiants," carried out by Raytheon STX Corporation. The report consists of a journal article, "Wind variability in B supergiants III. Corotating spiral structures in the stellar wind of HD 64760." The first step in the project was the analysis of the 1996 time series of two B supergiants and an O star. These data were analyzed and reported on at the ESO workshop "Cyclical Variability in Stellar Winds."

  15. Equilibrium Solutions of the Logarithmic Hamiltonian Leapfrog for the N-body Problem

    NASA Astrophysics Data System (ADS)

    Minesaki, Yukitaka

    2018-04-01

    We prove that a second-order logarithmic Hamiltonian leapfrog for the classical general N-body problem (CGNBP) designed by Mikkola and Tanikawa and some higher-order logarithmic Hamiltonian methods based on symmetric multicompositions of the logarithmic algorithm exactly reproduce the orbits of elliptic relative equilibrium solutions in the original CGNBP. These methods are explicit symplectic methods. Before this proof, only some implicit discrete-time CGNBPs proposed by Minesaki had been analytically shown to trace the orbits of elliptic relative equilibrium solutions. The proof is therefore the first existence proof for explicit symplectic methods. Such logarithmic Hamiltonian methods with a variable time step can also precisely retain periodic orbits in the classical general three-body problem, which generic numerical methods with a constant time step cannot do.
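
    For orientation, here is a minimal sketch of a logarithmic-Hamiltonian leapfrog for the gravitational N-body problem in the commonly cited Mikkola-Tanikawa / Preto-Tremaine form, where drifts advance physical time by h/(T + p0) and kicks by h/(-U), with p0 the conserved momentum conjugate to time (set to -E initially). The fictitious step h is constant while the physical time step varies; the particular masses, orbit, and step size below are illustrative assumptions, and the higher-order multicompositions analysed in the paper are not shown.

      import numpy as np

      G = 1.0

      def kinetic(m, v):
          return 0.5 * np.sum(m[:, None] * v**2)

      def potential_and_accel(m, x):
          """Total potential energy U (< 0) and accelerations for all bodies."""
          n = len(m)
          U = 0.0
          a = np.zeros_like(x)
          for i in range(n):
              for j in range(i + 1, n):
                  d = x[j] - x[i]
                  r = np.linalg.norm(d)
                  U -= G * m[i] * m[j] / r
                  a[i] += G * m[j] * d / r**3
                  a[j] -= G * m[i] * d / r**3
          return U, a

      def logh_leapfrog(m, x, v, h, n_steps):
          p0 = -(kinetic(m, v) + potential_and_accel(m, x)[0])   # p0 = -E, held fixed
          t = 0.0
          for _ in range(n_steps):
              dt = 0.5 * h / (kinetic(m, v) + p0)   # half drift, physical step h/2 / (T + p0)
              x = x + dt * v
              t += dt
              U, a = potential_and_accel(m, x)      # full kick, physical step h / (-U)
              v = v + (h / (-U)) * a
              dt = 0.5 * h / (kinetic(m, v) + p0)   # second half drift
              x = x + dt * v
              t += dt
          return x, v, t

      # Equal-mass binary near apocentre of an e ~ 0.9, a = 1 relative orbit: the
      # physical time step shrinks automatically near pericentre.
      m = np.array([1.0, 1.0])
      x = np.array([[-0.95, 0.0, 0.0], [0.95, 0.0, 0.0]])
      v = np.array([[0.0, -0.1622, 0.0], [0.0, 0.1622, 0.0]])
      E0 = kinetic(m, v) + potential_and_accel(m, x)[0]
      xf, vf, tf = logh_leapfrog(m, x, v, h=0.005, n_steps=20000)
      E1 = kinetic(m, vf) + potential_and_accel(m, xf)[0]
      print(f"elapsed physical time {tf:.2f}, relative energy error {abs((E1 - E0) / E0):.2e}")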

  16. Muscle Activation Patterns in Infants with Myelomeningocele Stepping on a Treadmill

    PubMed Central

    Sansom, Jennifer K.; Teulier, Caroline; Smith, Beth A.; Moerchen, Victoria; Muraszko, Karin; Ulrich, Beverly D.

    2013-01-01

    Purpose To characterize how infants with myelomeningocele (MMC) activate lower limb muscles over the first year of life, without practice, while stepping on a motorized treadmill. Methods Twelve infants with MMC were tested longitudinally at 1, 6, and 12 months. Electromyography (EMG) was used to collect data from the tibialis anterior (TA), lateral gastrocnemius (LG), rectus femoris (RF), and biceps femoris (BF). Results Across the first year, infants showed no EMG activity for ~50% of the stride cycle, with poor rhythmicity and timing of the muscles when they were activated. Single muscle activation predominated; agonist-antagonist co-activation was low. The probability of individual muscle activity across the stride decreased with age. Conclusions Infants with MMC show high variability in the timing and duration of muscle activity, few complex combinations, and very little change over time. PMID:23685739

  17. Innovative method and equipment for personalized ventilation.

    PubMed

    Kalmár, F

    2015-06-01

    At the University of Debrecen, a new method and equipment for personalized ventilation have been developed. This equipment makes it possible to change the airflow direction during operation with a time frequency chosen by the user. The developed office desk with integrated air ducts and control system permits ventilation with 100% outdoor air, 100% recirculated air, or a mix of outdoor and recirculated air in a relative proportion set by the user. It was shown that better comfort can be assured in hot environments if the fresh airflow direction is variable. Analysis of the time step for changing the airflow direction showed that women prefer smaller time steps and that their thermal comfort votes are higher than men's. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. Round-robin comparison of methods for the detection of human enteric viruses in lettuce.

    PubMed

    Le Guyader, Françoise S; Schultz, Anna-Charlotte; Haugarreau, Larissa; Croci, Luciana; Maunula, Leena; Duizer, Erwin; Lodder-Verschoor, Froukje; von Bonsdorff, Carl-Henrik; Suffredini, Elizabetha; van der Poel, Wim M M; Reymundo, Rosanna; Koopmans, Marion

    2004-10-01

    Five methods for detecting human enteric virus contamination in lettuce were compared. To mimic the multiple contaminations observed after sewage exposure, samples were artificially contaminated with human calicivirus, poliovirus, and animal calicivirus strains at different concentrations. Nucleic acid extractions were done at the same time in the same laboratory to reduce assay-to-assay variability. The results showed that the two critical steps are the washing step and the removal of inhibitors. The more reliable methods (in terms of sensitivity, simplicity, and low cost) included an elution/concentration step and a commercial kit. The development of such sensitive methods for viral detection in foods other than shellfish is important for improving food safety.

  19. Oceanic and atmospheric conditions associated with the pentad rainfall over the southeastern peninsular India during the North-East Indian Monsoon season

    NASA Astrophysics Data System (ADS)

    Shanmugasundaram, Jothiganesh; Lee, Eungul

    2018-03-01

    The association of North-East Indian Monsoon rainfall (NEIMR) over the southeastern peninsular India with the oceanic and atmospheric conditions over the adjacent ocean regions at the pentad time step (five-day period) was investigated during the months of October to December for the period 1985-2014. Non-parametric correlation and composite analyses were carried out for the simultaneous and lagged time steps (up to four lags) of oceanic and atmospheric variables with pentad NEIMR. The results indicated that NEIMR was significantly correlated: 1) positively with both sea surface temperature (SST) leading by 1-4 pentads (lag 1-4 time steps) and latent heat flux (LHF) during the simultaneous, lag 1, and lag 2 time steps over the equatorial western Indian Ocean, and 2) positively with SST but negatively with LHF (less heat flux from ocean to atmosphere) during the simultaneous and all lagged time steps over the Bay of Bengal. Consistently, during the wet NEIMR pentads over the southeastern peninsular India, SST significantly increased over the Bay of Bengal during all time steps and over the equatorial western Indian Ocean during the lag 2-4 time steps, while LHF decreased over the Bay of Bengal (all time steps) and increased over the Indian Ocean (simultaneous, lag 1, and lag 2). The investigation of ocean-atmosphere interaction revealed that the enhanced LHF over the equatorial western Indian Ocean was related to increased atmospheric moisture demand and increased wind speed, whereas the reduced LHF over the Bay of Bengal was associated with decreased atmospheric moisture demand and decreased wind speed. The vertically integrated moisture flux and moisture transport vectors from 1000 to 850 hPa showed that moisture was carried away from the equatorial western Indian Ocean to the strong moisture convergence regions of the Bay of Bengal during the simultaneous and lag 1 time steps of wet NEIMR pentads. Further, the moisture over the Bay of Bengal was transported to the southeastern peninsular India through stronger cyclonic circulations, which were confirmed by the moisture transport vectors and positive vorticity. The identified ocean and atmosphere processes associated with wet NEIMR conditions could be a valuable scientific input for enhancing rainfall predictability, which has a huge socioeconomic value for the agriculture and water resource management sectors in the southeastern peninsular India.
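
    The lagged rank-correlation part of such an analysis is easy to sketch; the series below are synthetic stand-ins for areal rainfall and an oceanic variable such as SST, and the pentad aggregation and lag handling are illustrative assumptions rather than the authors' exact workflow.

      # Lag correlations at a pentad (5-day) time step between rainfall and an
      # ocean/atmosphere series; positive lag means the ocean variable leads.
      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(1)
      n_days = 30 * 90                       # ~30 OND seasons, purely synthetic
      rain_daily = rng.gamma(2.0, 2.0, n_days)
      sst_daily = 28 + 0.5 * np.convolve(rng.normal(size=n_days), np.ones(15) / 15, "same")

      def to_pentads(x):
          """Aggregate a daily series to non-overlapping 5-day (pentad) means."""
          n = len(x) // 5
          return x[: n * 5].reshape(n, 5).mean(axis=1)

      rain_p, sst_p = to_pentads(rain_daily), to_pentads(sst_daily)

      # Simultaneous and lagged (ocean leads rainfall by 1-4 pentads) rank correlations.
      for lag in range(5):
          r, p = spearmanr(sst_p[: len(sst_p) - lag], rain_p[lag:])
          print(f"lag {lag} pentad(s): rho = {r:+.2f}, p = {p:.3f}")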

  20. Intraindividual Stepping Reaction Time Variability Predicts Falls in Older Adults With Mild Cognitive Impairment.

    PubMed

    Bunce, David; Haynes, Becky I; Lord, Stephen R; Gschwind, Yves J; Kochan, Nicole A; Reppermund, Simone; Brodaty, Henry; Sachdev, Perminder S; Delbaere, Kim

    2017-06-01

    Reaction time measures have considerable potential to aid neuropsychological assessment in a variety of health care settings. One such measure, the intraindividual reaction time variability (IIV), is of particular interest as it is thought to reflect neurobiological disturbance. IIV is associated with a variety of age-related neurological disorders, as well as gait impairment and future falls in older adults. However, although persons diagnosed with Mild Cognitive Impairment (MCI) are at high risk of falling, the association between IIV and prospective falls is unknown. We conducted a longitudinal cohort study in cognitively intact (n = 271) and MCI (n = 154) community-dwelling adults aged 70-90 years. IIV was assessed through a variety of measures including simple and choice hand reaction time and choice stepping reaction time tasks (CSRT), the latter administered as a single task and also with a secondary working memory task. Logistic regression did not show an association between IIV on the hand-held tasks and falls. Greater IIV in both CSRT tasks, however, did significantly increase the risk of future falls. This effect was specific to the MCI group, with a stronger effect in persons exhibiting gait, posture, or physiological impairment. The findings suggest that increased stepping IIV may indicate compromised neural circuitry involved in executive function, gait, and posture in persons with MCI increasing their risk of falling. IIV measures have potential to assess neurobiological disturbance underlying physical and cognitive dysfunction in old age, and aid fall risk assessment and routine care in community and health care settings. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  1. Pharmacodynamics of Voriconazole in Children: Further Steps along the Path to True Individualized Therapy

    PubMed Central

    Huurneman, Luc J.; Neely, Michael; Veringa, Anette; Docobo Pérez, Fernando; Ramos-Martin, Virginia; Tissing, Wim J.; Alffenaar, Jan-Willem C.

    2016-01-01

    Voriconazole is the agent of choice for the treatment of invasive aspergillosis in children at least 2 years of age. The galactomannan index is a routinely used diagnostic marker for invasive aspergillosis and can be useful for following the clinical response to antifungal treatment. The aim of this study was to develop a pharmacokinetic-pharmacodynamic (PK-PD) mathematical model that links the pharmacokinetics of voriconazole with the galactomannan readout in children. Twelve children receiving voriconazole for treatment of proven, probable, and possible invasive fungal infections were studied. A previously published population PK model was used as the Bayesian prior. The PK-PD model was used to estimate the average area under the concentration-time curve (AUC) in each patient and the resultant galactomannan-time profile. The relationship between the ratio of the AUC to the concentration of voriconazole that induced half maximal killing (AUC/EC50) and the terminal galactomannan level was determined. The voriconazole concentration-time and galactomannan-time profiles were both highly variable. Despite this variability, the fit of the PK-PD model was good, enabling both the pharmacokinetics and pharmacodynamics to be described in individual children. (AUC/EC50)/15.4 predicted terminal galactomannan (P = 0.003), and a ratio of >6 suggested a lower terminal galactomannan level (P = 0.07). The construction of linked PK-PD models is the first step in developing control software that enables not only individualized voriconazole dosages but also individualized concentration targets to achieve suppression of galactomannan levels in a timely and optimally precise manner. Controlling galactomannan levels is a first critical step to maximizing clinical response and survival. PMID:26833158

  2. TRUST84. Sat-Unsat Flow in Deformable Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narasimhan, T.N.

    1984-11-01

    TRUST84 solves for transient and steady-state flow in variably saturated deformable media in one, two, or three dimensions. It can handle porous media, fractured media, or fractured-porous media. Boundary conditions may be an arbitrary function of time. Sources or sinks may be a function of time or of potential. The theoretical model considers a general three-dimensional field of flow in conjunction with a one-dimensional vertical deformation field. The governing equation expresses the conservation of fluid mass in an elemental volume that has a constant volume of solids. Deformation of the porous medium may be nonelastic. Permeability and the compressibility coefficients may be nonlinearly related to effective stress. Relationships between permeability and saturation with pore water pressure in the unsaturated zone may be characterized by hysteresis. The relation between pore pressure change and effective stress change may be a function of saturation. The basic calculational model of the conductive heat transfer code TRUMP is applied in TRUST84 to the flow of fluids in porous media. The model combines an integrated finite difference algorithm for numerically solving the governing equation with a mixed explicit-implicit iterative scheme in which the explicit changes in potential are first computed for all elements in the system, after which implicit corrections are made only for those elements for which the stable time-step is less than the time-step being used. Time-step sizes are automatically controlled to optimize the number of iterations, to control maximum change to potential during a time-step, and to obtain desired output information. Time derivatives, estimated on the basis of system behavior during the two previous time-steps, are used to start the iteration process and to evaluate nonlinear coefficients. Both heterogeneity and anisotropy can be handled.
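
    The automatic time-step control described above can be illustrated with a toy example: the step size is adjusted so that the maximum change in the solved-for potential per step stays near a target value. The explicit 1-D diffusion problem, the target value, and the growth limits below are illustrative assumptions and not TRUST84's actual mixed explicit-implicit algorithm.

      # Toy automatic time-step control: limit the maximum change in potential
      # per step and grow/shrink dt accordingly.
      import numpy as np

      nx, L, D = 101, 1.0, 1e-3
      dx = L / (nx - 1)
      head = np.zeros(nx)
      head[0] = 1.0                          # fixed-potential boundary condition

      dt, t, t_end = 1.0, 0.0, 200.0
      target_change, dt_max = 0.02, 50.0
      dt_stable = 0.5 * dx**2 / D            # explicit stability limit

      while t < t_end:
          dt = min(dt, dt_stable, dt_max, t_end - t)
          lap = np.zeros_like(head)
          lap[1:-1] = (head[2:] - 2 * head[1:-1] + head[:-2]) / dx**2
          dh = D * dt * lap
          max_change = np.abs(dh).max()
          if max_change > 2 * target_change:   # reject and retry with a smaller step
              dt *= 0.5
              continue
          head[1:-1] += dh[1:-1]
          t += dt
          # grow or shrink the next step so the maximum change approaches the target
          dt *= np.clip(target_change / max(max_change, 1e-12), 0.5, 2.0)

      print(f"finished at t = {t:.1f} with final dt = {dt:.4f}")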

  3. A new one-step procedure for pulmonary valve implantation of the melody valve: Simultaneous prestenting and valve implantation.

    PubMed

    Boudjemline, Younes

    2018-01-01

    To describe a new modification, the one-step procedure, that allows interventionists to pre-stent and implant a Melody valve simultaneously. Percutaneous pulmonary valve implantation (PPVI) is the standard of care for managing patients with dysfunctional right ventricular outflow tract, and the approach is standardized. Patients undergoing PPVI using the one-step procedure were identified in our database. Procedural data and radiation exposure were compared to those in a matched group of patients who underwent PPVI using the conventional two-step procedure. Between January 2016 and January 2017, PPVI was performed in 27 patients (median age/range, 19.1/10-55 years) using the one-step procedure involving manual crimping of one to three bare metal stents over the Melody valve. The stent and Melody valve were delivered successfully using the Ensemble delivery system. No complications occurred. All patients had excellent hemodynamic results (median/range post-PPVI right ventricular to pulmonary artery gradient, 9/0-20 mmHg). Valve function was excellent. Median procedural and fluoroscopic times were 56 and 10.2 min, respectively, which significantly differed from those of the two-step procedure group. Similarly, the dose area product (DAP), and radiation time were statistically lower in the one-step group than in the two-step group (P < 0.001 for all variables). After a median follow-up of 8 months (range, 3-14.7), no patient underwent reintervention, and no device dysfunction was observed. The one-step procedure is a safe modification that allows interventionists to prestent and implants the Melody valve simultaneously. It significantly reduces procedural and fluoroscopic times, and radiation exposure. © 2017 Wiley Periodicals, Inc.

  4. Remote sensing of desert dust aerosols over the Sahel : potential use for health impact studies

    NASA Astrophysics Data System (ADS)

    Deroubaix, A. D.; Martiny, N. M.; Chiapello, I. C.; Marticorena, B. M.

    2012-04-01

    Since the end of the 1970s, remote sensing has monitored desert dust aerosols through their absorption and scattering properties, allowing the construction of the long time series that are necessary for air quality and health impact studies. In the Sahel, a major health problem is the meningococcal meningitis (MM) epidemics that occur during the dry season: dust has been suspected to be crucial for understanding their onset and dynamics. The Aerosol Absorption Index (AI) is a semi-quantitative index derived from TOMS and OMI observations in the UV, available at spatial resolutions of 1° (1979-2005) and 0.25° (2005-today), respectively. Comparison of the OMI-AI with the AERONET Aerosol Optical Thickness (AOT) shows good agreement at a daily time-step (correlation ~0.7). The correlation of the OMI-AI with the Particulate Matter (PM) measurements of the Sahelian Dust Transect is lower (~0.4) at a daily time-step but increases at a weekly time-step (~0.6). The OMI-AI reproduces the dust seasonal cycle over the Sahel and we conclude that the OMI-AI product at a 0.25° spatial resolution is suitable for health impact studies, especially at a weekly epidemiological time-step. Although the AI is sensitive to aerosol altitude, it provides daily spatial information on dust. A preliminary analysis of the link between weekly OMI-AI and weekly WHO epidemiological data sets is presented for Mali and Niger, showing good agreement between the AI and the onset of the MM epidemics with a constant lag (between 1 and 2 weeks). The next step of this study is to analyse a longer AI time series built from the TOMS and OMI data sets. Based on the weekly PM/AI ratios at two stations of the Sahelian Dust Transect, a spatialized proxy for PM derived from the AI has been developed. The AI as a proxy for PM, together with other climate variables such as temperature, relative humidity, and wind (intensity and direction), could then be used to analyse the link between those variables and the MM epidemics in the most affected countries of Western Africa, an important step towards a forecasting tool for epidemic risk in the region.

  5. Investigation of dairy cattle ease of movement on new methyl methacrylate resin aggregate floorings.

    PubMed

    Franco-Gendron, N; Bergeron, R; Curilla, W; Conte, S; DeVries, T; Vasseur, E

    2016-10-01

    Freestall dairy farms commonly present issues with cattle slips and falls caused by smooth flooring and manure slurry. This study examined the effect of 4 new methyl methacrylate (MMA) resin aggregate flooring types (1-4) compared with rubber (positive) and concrete (negative control) on dairy cow (n=18) ease of movement when walking on straight and right-angled corridors. Our hypothesis was that cow ease of movement when walking on the MMA surfaces would be better than when walking on traction milled concrete, and at least as good as when walking on rubber. Cattle ease of movement was measured using kinematics, accelerometers, and visual observation of gait and associated behaviors. Stride length, swing time, stance time, and hoof height were obtained from kinematic evaluation. Acceleration and asymmetry of variance were measured with accelerometers. Locomotion score and behaviors associated with lameness, such as arch back, head bob, tracking up, step asymmetry, and reluctance to bear weight were visually observed. Stride length, swing time, stance time, and the number of steps taken were the only variables affected by flooring type. Differences between flooring types for these variables were tested using a generalized linear mixed model with cow as a random effect, week as a random block factor, and flooring type as a fixed effect. Multiple comparisons with a Scheffé adjustment were done to analyze differences among flooring types. Stride length was 0.14 m longer (better) on rubber when compared with concrete, and 0.11 and 0.17 m shorter on MMA 1 and 2 compared with rubber. On MMA 3 and 4, stride length did not differ from either rubber or concrete. Swing time was 0.04 s shorter (worse) on MMA 1 than on rubber, but did not differ from any other flooring. Stance time was 0.18 s longer (worse) on MMA 2 when compared with rubber, but it did not differ from any other treatment. The number of steps was higher on MMA 4 compared with rubber (4.57 vs. 3.95 steps), but did not differ from any other treatment. Of all the MMA floors tested, MMA 3 was the only one that was consistently as good as rubber (positive control). All 4 MMA floors never differed from concrete (negative control) in any of the ease of movement variables measured. These results suggest that MMA 3 may improve cow ease of movement, compared with the other MMA floors, but more research is required to confirm these findings. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  6. Time-Referenced Effects of an Internal vs. External Focus of Attention on Muscular Activity and Compensatory Variability

    PubMed Central

    Hossner, Ernst-Joachim; Ehrlenspiel, Felix

    2010-01-01

    The paralysis-by-analysis phenomenon, i.e., the finding that attending to the execution of one's movement impairs performance, has gathered a lot of attention over recent years (see Wulf, 2007, for a review). Explanations of this phenomenon, e.g., the hypotheses of constrained action (Wulf et al., 2001) or of step-by-step execution (Masters, 1992; Beilock et al., 2002), however, do not address the underlying mechanisms at the level of sensorimotor control. For this purpose, a “nodal-point hypothesis” is presented here with the core assumptions that skilled motor behavior is internally based on sensorimotor chains of nodal points, that attending to intermediate nodal points leads to a muscular re-freezing of the motor system at exactly and exclusively these points in time, and that this re-freezing is accompanied by the disruption of compensatory processes, resulting in an overall decrease of motor performance. Two experiments, on lever sequencing and basketball free throws, respectively, are reported that successfully tested these time-referenced predictions, i.e., showing that muscular activity is selectively increased and compensatory variability selectively decreased at movement-related nodal points if these points are in the focus of attention. PMID:21833285

  7. The way from microscopic many-particle theory to macroscopic hydrodynamics.

    PubMed

    Haussmann, Rudolf

    2016-03-23

    Starting from the microscopic description of a normal fluid in terms of any kind of local interacting many-particle theory we present a well defined step by step procedure to derive the hydrodynamic equations for the macroscopic phenomena. We specify the densities of the conserved quantities as the relevant hydrodynamic variables and apply the methods of non-equilibrium statistical mechanics with projection operator techniques. As a result we obtain time-evolution equations for the hydrodynamic variables with three kinds of terms on the right-hand sides: reversible, dissipative and fluctuating terms. In their original form these equations are completely exact and contain nonlocal terms in space and time which describe nonlocal memory effects. Applying a few approximations the nonlocal properties and the memory effects are removed. As a result we find the well known hydrodynamic equations of a normal fluid with Gaussian fluctuating forces. In the following we investigate if and how the time-inversion invariance is broken and how the second law of thermodynamics comes about. Furthermore, we show that the hydrodynamic equations with fluctuating forces are equivalent to stochastic Langevin equations and the related Fokker-Planck equation. Finally, we investigate the fluctuation theorem and find a modification by an additional term.

  8. Trend assessment: applications for hydrology and climate research

    NASA Astrophysics Data System (ADS)

    Kallache, M.; Rust, H. W.; Kropp, J.

    2005-02-01

    The assessment of trends in climatology and hydrology still is a matter of debate. Capturing typical properties of time series, like trends, is highly relevant for the discussion of potential impacts of global warming or flood occurrences. It provides indicators for the separation of anthropogenic signals and natural forcing factors by distinguishing between deterministic trends and stochastic variability. In this contribution river run-off data from gauges in Southern Germany are analysed regarding their trend behaviour by combining a deterministic trend component and a stochastic model part in a semi-parametric approach. In this way the trade-off between trend and autocorrelation structure can be considered explicitly. A test for a significant trend is introduced via three steps: First, a stochastic fractional ARIMA model, which is able to reproduce short-term as well as long-term correlations, is fitted to the empirical data. In a second step, wavelet analysis is used to separate the variability of small and large time-scales assuming that the trend component is part of the latter. Finally, a comparison of the overall variability to that restricted to small scales results in a test for a trend. The extraction of the large-scale behaviour by wavelet analysis provides a clue concerning the shape of the trend.
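
    The scale-separation idea behind the test can be sketched with the PyWavelets package: decompose the series, treat the coarsest scales as the candidate trend, and compare the small-scale variance fraction against surrogates from a stochastic null model. For brevity the sketch uses an AR(1) surrogate in place of the fractional ARIMA model of the paper, and the synthetic runoff series, wavelet, and decomposition level are assumptions.

      import numpy as np
      import pywt

      rng = np.random.default_rng(2)
      n = 1024
      # Synthetic "runoff": linear trend plus AR(1) variability.
      innov = rng.normal(size=n)
      ar = np.zeros(n)
      for i in range(1, n):
          ar[i] = 0.6 * ar[i - 1] + innov[i]
      runoff = 0.003 * np.arange(n) + ar

      wavelet, level = "db4", 6
      coeffs = pywt.wavedec(runoff, wavelet, level=level)
      # Large-scale (trend) component: keep only the approximation coefficients.
      trend = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], wavelet)[:n]
      ratio_obs = (runoff - trend).var() / runoff.var()   # small-scale variance fraction

      # AR(1) surrogates (simplified null): how small can this ratio get by chance?
      phi = np.corrcoef(runoff[:-1], runoff[1:])[0, 1]
      sigma = np.std(runoff) * np.sqrt(1 - phi**2)
      ratios = []
      for _ in range(200):
          e = rng.normal(scale=sigma, size=n)
          x = np.zeros(n)
          for i in range(1, n):
              x[i] = phi * x[i - 1] + e[i]
          c = pywt.wavedec(x, wavelet, level=level)
          tr = pywt.waverec([c[0]] + [np.zeros_like(ci) for ci in c[1:]], wavelet)[:n]
          ratios.append((x - tr).var() / x.var())

      p_value = np.mean(np.array(ratios) <= ratio_obs)
      print(f"small-scale variance fraction {ratio_obs:.2f}, surrogate p-value {p_value:.2f}")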

  9. Computed tear film and osmolarity dynamics on an eye-shaped domain

    PubMed Central

    Li, Longfei; Braun, Richard J.; Driscoll, Tobin A.; Henshaw, William D.; Banks, Jeffrey W.; King-Smith, P. Ewen

    2016-01-01

    The concentration of ions, or osmolarity, in the tear film is a key variable in understanding dry eye symptoms and disease. In this manuscript, we derive a mathematical model that couples osmolarity (treated as a single solute) and fluid dynamics within the tear film on a 2D eye-shaped domain. The model includes the physical effects of evaporation, surface tension, viscosity, ocular surface wettability, osmolarity, osmosis and tear fluid supply and drainage. The governing system of coupled non-linear partial differential equations is solved using the Overture computational framework, together with a hybrid time-stepping scheme, using a variable step backward differentiation formula and a Runge–Kutta–Chebyshev method that were added to the framework. The results of our numerical simulations provide new insight into the osmolarity distribution over the ocular surface during the interblink. PMID:25883248
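
    As a small, generic illustration of variable-step BDF time stepping of the kind mentioned above (not the Overture-based solver of the paper), SciPy's BDF integrator can be run on a stiff test problem and the accepted step sizes inspected; the Van der Pol oscillator and its stiffness parameter are assumptions chosen only for the demonstration.

      import numpy as np
      from scipy.integrate import solve_ivp

      mu = 1000.0                                   # stiffness parameter

      def vdp(t, y):
          # Stiff Van der Pol oscillator: y0' = y1, y1' = mu*(1 - y0^2)*y1 - y0
          return [y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]]

      sol = solve_ivp(vdp, (0.0, 3000.0), [2.0, 0.0], method="BDF", rtol=1e-6, atol=1e-9)
      steps = np.diff(sol.t)
      print(f"{sol.t.size} accepted steps; dt ranged from {steps.min():.2e} to {steps.max():.2e}")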

  10. Possible effects of anthropogenically-increased CO[sub 2] on the dynamics of climate: Implications for ice age cycles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saltzman, B.; Maasch, K.A.; Verbitsky, M.Ya.

    1993-06-07

    The authors look at the impact of an anthropogenic step increase in atmospheric carbon dioxide content on a dynamic model designed to look at long-term variations in climate. The model is one developed by Saltzman and Maasch, and Saltzman and Verbitsky, where four slow responding variables are considered to carry the climatic change information over the past 5 My. One of these variables is the carbon dioxide concentration in the atmosphere. If this step increase is maintained over a long period of time, what impact does this have on the present unstable regime where climate oscillates through ice age periods? Indications are that the climate shifts to a regime where the oscillations are much weaker than those which prevailed during the Pleistocene.

  11. Variable aperture-based ptychographical iterative engine method

    NASA Astrophysics Data System (ADS)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step, and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; the technique can therefore potentially be applied in a variety of scientific research fields.
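
    For orientation, a toy ePIE-style reconstruction in which a variable circular aperture plays the role of the probe is sketched below; the far-field model, binary known apertures, object, and iteration count are illustrative assumptions, and the authors' modified algorithm may differ in its update rule and aperture handling.

      import numpy as np

      N = 128
      yy, xx = np.mgrid[:N, :N] - N // 2
      r = np.hypot(xx, yy)

      # Ground-truth complex object with smooth amplitude and phase modulation.
      obj = (0.8 + 0.2 * np.cos(2 * np.pi * xx / N)) * np.exp(1j * 0.5 * np.sin(2 * np.pi * yy / N))

      radii = [12, 20, 28, 36, 44]                                  # variable aperture sizes
      apertures = [(r <= rad).astype(float) for rad in radii]
      patterns = [np.abs(np.fft.fft2(P * obj)) for P in apertures]  # simulated far-field amplitudes

      est = np.ones((N, N), dtype=complex)                          # initial object guess
      alpha = 1.0
      for _ in range(200):
          for P, meas in zip(apertures, patterns):
              psi = P * est                                         # exit wave for this aperture
              Psi = np.fft.fft2(psi)
              psi_corr = np.fft.ifft2(meas * np.exp(1j * np.angle(Psi)))  # replace modulus, keep phase
              est = est + alpha * P / (np.max(np.abs(P)) ** 2 + 1e-12) * (psi_corr - psi)

      res = np.mean([np.linalg.norm(np.abs(np.fft.fft2(P * est)) - m) / np.linalg.norm(m)
                     for P, m in zip(apertures, patterns)])
      print(f"mean relative diffraction-amplitude residual: {res:.3e}")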

  12. Evidence for a Time-Invariant Phase Variable in Human Ankle Control

    PubMed Central

    Gregg, Robert D.; Rouse, Elliott J.; Hargrove, Levi J.; Sensinger, Jonathon W.

    2014-01-01

    Human locomotion is a rhythmic task in which patterns of muscle activity are modulated by state-dependent feedback to accommodate perturbations. Two popular theories have been proposed for the underlying embodiment of phase in the human pattern generator: a time-dependent internal representation or a time-invariant feedback representation (i.e., reflex mechanisms). In either case the neuromuscular system must update or represent the phase of locomotor patterns based on the system state, which can include measurements of hundreds of variables. However, a much simpler representation of phase has emerged in recent designs for legged robots, which control joint patterns as functions of a single monotonic mechanical variable, termed a phase variable. We propose that human joint patterns may similarly depend on a physical phase variable, specifically the heel-to-toe movement of the Center of Pressure under the foot. We found that when the ankle is unexpectedly rotated to a position it would have encountered later in the step, the Center of Pressure also shifts forward to the corresponding later position, and the remaining portion of the gait pattern ensues. This phase shift suggests that the progression of the stance ankle is controlled by a biomechanical phase variable, motivating future investigations of phase variables in human locomotor control. PMID:24558485

  13. Sleep Duration, Sedentary Behavior, Physical Activity, and Quality of Life after Inpatient Stroke Rehabilitation.

    PubMed

    Ezeugwu, Victor E; Manns, Patricia J

    2017-09-01

    The aim of this study was to describe accelerometer-derived sleep duration, sedentary behavior, physical activity, and quality of life and their association with demographic and clinical factors within the first month after inpatient stroke rehabilitation. Thirty people with stroke (mean ± standard deviation, age: 63.8 ± 12.3 years, time since stroke: 3.6 ± 1.1 months) wore an activPAL3 Micro accelerometer (PAL Technologies, Glasgow, Scotland) continuously for 7 days to measure whole-day activity behavior. The Stroke Impact Scale and the Functional Independence Measure were used to assess quality of life and function, respectively. Sleep duration ranged from 6.6 to 11.6 hours/day. Fifteen participants engaged in long sleep greater than 9 hours/day. Participants spent 74.8% of waking hours in sedentary behavior, 17.9% standing, and 7.3% stepping. Of stepping time, only a median of 1.1 (interquartile range: .3-5.8) minutes were spent walking at a moderate-to-vigorous intensity (≥100 steps/minute). The time spent sedentary, the stepping time, and the number of steps differed significantly by the hemiparetic side (P < .05), but not by sex or the type of stroke. There were moderate to strong correlations between the stepping time and the number of steps with gait speed (Spearman r = .49 and .61 respectively, P < .01). Correlations between accelerometer-derived variables and age, time since stroke, and cognition were not significant. People with stroke sleep for longer than the normal duration, spend about three quarters of their waking hours in sedentary behaviors, and engage in minimal walking following stroke rehabilitation. Our findings provide a rationale for the development of behavior change strategies after stroke. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  14. Internal Wave Impact on the Performance of a Hypothetical Mine Hunting Sonar

    DTIC Science & Technology

    2014-10-01

    time steps) to simulate the propagation of the internal wave field through the mine field. Again the transmission loss and acoustic signal strength...dependent internal wave perturbed sound speed profile was evaluated by calculating the temporal variability of the signal excess (SE) of acoustic...internal wave perturbation of the sound speed profile, was calculated for a limited sound speed field time section. Acoustic signals were projected

  15. Selecting predictors for discriminant analysis of species performance: an example from an amphibious softwater plant.

    PubMed

    Vanderhaeghe, F; Smolders, A J P; Roelofs, J G M; Hoffmann, M

    2012-03-01

    Selecting an appropriate variable subset in linear multivariate methods is an important methodological issue for ecologists. Interest often exists in obtaining general predictive capacity or in finding causal inferences from predictor variables. Because of a lack of solid knowledge on the studied phenomenon, scientists explore predictor variables in order to find the most meaningful (i.e. discriminating) ones. As an example, we modelled the response of the amphibious softwater plant Eleocharis multicaulis using canonical discriminant function analysis. We asked how variables can be selected through comparison of several methods: univariate Pearson chi-square screening, principal components analysis (PCA) and step-wise analysis, as well as combinations of some methods. We expected PCA to perform best. The selected methods were evaluated through fit and stability of the resulting discriminant functions and through correlations between these functions and the predictor variables. The chi-square subset, at P < 0.05, followed by a step-wise sub-selection, gave the best results. In contrast to expectations, PCA performed poorly, as did step-wise analysis. The different chi-square subset methods all yielded ecologically meaningful variables, while probable noise variables were also selected by PCA and step-wise analysis. We advise against the simple use of PCA or step-wise discriminant analysis to obtain an ecologically meaningful variable subset; the former because it does not take into account the response variable, the latter because noise variables are likely to be selected. We suggest that univariate screening techniques are a worthwhile alternative for variable selection in ecology. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
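
    A minimal sketch of the two-stage selection that performed best (univariate chi-square screening at P < 0.05 followed by a step-wise sub-selection feeding a discriminant model) is given below; the simulated data, quartile binning, and scikit-learn selectors are assumptions standing in for the ecological data and software of the original study.

      import numpy as np
      from scipy.stats import chi2_contingency
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in for presence/absence data with environmental predictors.
      X, y = make_classification(n_samples=300, n_features=15, n_informative=4,
                                 n_redundant=3, n_clusters_per_class=1,
                                 class_sep=1.5, random_state=0)

      def chi_square_p(x, y, n_bins=4):
          """Pearson chi-square p-value between a quartile-binned predictor and the response."""
          edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
          xb = np.digitize(x, edges)
          table = np.array([[np.sum((xb == b) & (y == c)) for c in np.unique(y)]
                            for b in np.unique(xb)])
          return chi2_contingency(table)[1]

      # Stage 1: univariate chi-square screening at P < 0.05.
      pvals = np.array([chi_square_p(X[:, j], y) for j in range(X.shape[1])])
      screened = np.where(pvals < 0.05)[0]
      print("retained after chi-square screening:", screened)

      # Stage 2: forward step-wise sub-selection using the discriminant model itself.
      lda = LinearDiscriminantAnalysis()
      sfs = SequentialFeatureSelector(lda, n_features_to_select=3, direction="forward", cv=5)
      sfs.fit(X[:, screened], y)
      final = screened[sfs.get_support()]
      score = cross_val_score(lda, X[:, final], y, cv=5).mean()
      print("final variable subset:", final, "cross-validated accuracy:", round(score, 2))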

  16. Contribution of lower limb eccentric work and different step responses to balance recovery among older adults.

    PubMed

    Nagano, Hanatsu; Levinger, Pazit; Downie, Calum; Hayes, Alan; Begg, Rezaul

    2015-09-01

    Falls during walking reflect susceptibility to balance loss and the individual's capacity to recover stability. Balance can be recovered using either one step or multiple steps, but both responses are impaired with ageing. To investigate older adults' (n=15, 72.5±4.8 yrs) recovery step control, a tether-release procedure was devised to induce unanticipated forward balance loss. Three-dimensional position-time data combined with foot-ground reaction forces were used to measure balance recovery. Dependent variables were: margin of stability (MoS) and available response time (ART) for spatial and temporal balance measures in the transverse and sagittal planes; lower limb joint angles and joint negative/positive work; and spatio-temporal gait parameters. Relative to multi-step responses, single-step recovery was more effective in maintaining balance, indicated by greater MoS and longer ART. MoS in the sagittal plane measure and ART in the transverse plane distinguished single step responses from multiple steps. When MoS and ART were negative (<0), balance was not secured and additional steps would be required to establish the new base of support for balance recovery. Single-step responses demonstrated greater step length and velocity and, when the recovery foot landed, greater centre of mass downward velocity. Single-step strategies also showed greater ankle dorsiflexion, increased knee maximum flexion and more negative work at the ankle and knee. Collectively these findings suggest that single-step responses are more effective in forward balance recovery by directing falling momentum downward to be absorbed as lower limb eccentric work. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Generating linear regression model to predict motor functions by use of laser range finder during TUG.

    PubMed

    Adachi, Daiki; Nishiguchi, Shu; Fukutani, Naoto; Hotta, Takayuki; Tashiro, Yuto; Morino, Saori; Shirooka, Hidehiko; Nozaki, Yuma; Hirata, Hinako; Yamaguchi, Moe; Yorozu, Ayanori; Takahashi, Masaki; Aoyama, Tomoki

    2017-05-01

    The purpose of this study was to investigate which spatial and temporal parameters of the Timed Up and Go (TUG) test are associated with motor function in elderly individuals. This study included 99 community-dwelling women aged 72.9 ± 6.3 years. Step length, step width, single support time, variability of the aforementioned parameters, gait velocity, cadence, reaction time from the starting signal to the first step, and the minimum distance between the foot and a marker placed 3 m in front of the chair were measured using our analysis system. The 10-m walk test, five times sit-to-stand (FTSTS) test, and one-leg standing (OLS) test were used to assess motor function. Stepwise multivariate linear regression analysis was used to determine which TUG test parameters were associated with each motor function test. Finally, we calculated a predictive model for each motor function test using each regression coefficient. In stepwise linear regression analysis, step length and cadence were significantly associated with the 10-m walk test, FTSTS test and OLS test. Reaction time was associated with the FTSTS test, and step width was associated with the OLS test. Each predictive model showed a strong correlation with the 10-m walk test and OLS test (P < 0.01), although this correlation was not significantly higher than that of the TUG test time itself. We showed which TUG test parameters were associated with each motor function test. Moreover, the TUG test time, regarded as a measure of lower extremity function and mobility, has strong predictive ability for each motor function test. Copyright © 2017 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.

  18. A Conformational Transition in the Myosin VI Converter Contributes to the Variable Step Size

    PubMed Central

    Ovchinnikov, V.; Cecchini, M.; Vanden-Eijnden, E.; Karplus, M.

    2011-01-01

    Myosin VI (MVI) is a dimeric molecular motor that translocates backwards on actin filaments with a surprisingly large and variable step size, given its short lever arm. A recent x-ray structure of MVI indicates that the large step size can be explained in part by a novel conformation of the converter subdomain in the prepowerstroke state, in which a 53-residue insert, unique to MVI, reorients the lever arm nearly parallel to the actin filament. To determine whether the existence of the novel converter conformation could contribute to the step-size variability, we used a path-based free-energy simulation tool, the string method, to show that there is a small free-energy difference between the novel converter conformation and the conventional conformation found in other myosins. This result suggests that MVI can bind to actin with the converter in either conformation. Models of MVI/MV chimeric dimers show that the variability in the tilting angle of the lever arm that results from the two converter conformations can lead to step-size variations of ∼12 nm. These variations, in combination with other proposed mechanisms, could explain the experimentally determined step-size variability of ∼25 nm for wild-type MVI. Mutations to test the findings by experiment are suggested. PMID:22098742

  19. The Use of Lean Six Sigma Methodology in Increasing Capacity of a Chemical Production Facility at DSM.

    PubMed

    Meeuwse, Marco

    2018-03-30

    Lean Six Sigma is an improvement method, combining Lean, which focuses on removing 'waste' from a process, with Six Sigma, which is a data-driven approach, making use of statistical tools. Traditionally it is used to improve the quality of products (reducing defects), or processes (reducing variability). However, it can also be used as a tool to increase the productivity or capacity of a production plant. The Lean Six Sigma methodology is therefore an important pillar of continuous improvement within DSM. In the example shown here a multistep batch process is improved, by analyzing the duration of the relevant process steps, and optimizing the procedures. Process steps were performed in parallel instead of sequential, and some steps were made shorter. The variability was reduced, giving the opportunity to make a tighter planning, and thereby reducing waiting times. Without any investment in new equipment or technical modifications, the productivity of the plant was improved by more than 20%; only by changing procedures and the programming of the process control system.

  20. Basketball lay-up - foot loading characteristics and the number of trials necessary to obtain stable plantar pressure variables.

    PubMed

    Chua, YaoHui K; Quek, Raymond K K; Kong, Pui W

    2017-03-01

    This study aimed (1) to profile the plantar loading characteristics when performing the basketball lay-up in a realistic setting and (2) to determine the number of trials necessary to establish a stable mean for plantar loading variables during the lay-up. Thirteen university male basketball players [age: 23.0 (1.4) years, height: 1.75 (0.05) m, mass: 68.4 (8.6) kg] performed ten successful basketball lay-ups from a stationary position. Plantar loading variables were recorded using the Novel Pedar-X in-shoe system. Loading variables including peak force, peak pressure, and pressure-time integral were extracted from eight foot regions. Performance stability of plantar loading variables during the take-off and landing steps were assessed using the sequential averaging technique and intra-class correlation coefficient (ICC). High plantar loadings were experienced at the heel during the take-off steps, and both the heel and forefoot regions upon landing. The sequential estimation technique revealed a five-eight trial range to achieve a stable mean across all plantar loading variables, whereas ICC analysis was insensitive to inter-trial differences of repeated lay-up performances. Future studies and performance evaluation protocols on plantar loading during basketball lay-ups should include at least eight trials to ensure that the measurements obtained are sufficiently stable.
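
    The sequential averaging check for trial-to-trial stability can be sketched as follows: the cumulative mean is computed trial by trial, and the variable is deemed stable once the cumulative mean stays within a band of the mean of all trials for every remaining trial. The ±0.25 SD band and the synthetic pressure values below are assumptions for illustration; the study's exact criterion may differ.

      import numpy as np

      def trials_to_stability(values, band_sd=0.25):
          """Return the first trial (1-based) after which the cumulative mean stays
          within band_sd standard deviations of the overall mean."""
          values = np.asarray(values, dtype=float)
          overall_mean, overall_sd = values.mean(), values.std(ddof=1)
          cum_means = np.cumsum(values) / np.arange(1, len(values) + 1)
          inside = np.abs(cum_means - overall_mean) <= band_sd * overall_sd
          for i in range(len(values)):
              if inside[i:].all():
                  return i + 1
          return len(values)

      # Example: peak pressure (kPa) at one foot region over ten lay-up trials
      # (synthetic numbers for illustration).
      rng = np.random.default_rng(4)
      peak_pressure = 350 + rng.normal(0, 25, 10)
      print("trials needed for a stable mean:", trials_to_stability(peak_pressure))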

  1. Level walking in adults with and without Developmental Coordination Disorder: An analysis of movement variability.

    PubMed

    Du, Wenchong; Wilmut, Kate; Barnett, Anna L

    2015-10-01

    Several studies have shown that Developmental Coordination Disorder (DCD) is a condition that continues beyond childhood. Although adults with DCD report difficulties with dynamic balance, as well as frequent tripping and bumping into objects, there have been no specific studies on walking in this population. Some previous work has focused on walking in children with DCD but variation in the tasks and measures used has led to inconsistent findings. The aim of the current study therefore was to examine the characteristics of level walking in adults with and without DCD. Fifteen adults with DCD and 15 typically developing (TD) controls walked barefoot at a natural pace up and down an 11 m walkway for one minute. Foot placement measures and velocity and acceleration of the body were recorded, as well as measures of movement variability. The adults with DCD showed similar gait patterns to the TD group in terms of step length, step width, double support time and stride time. The DCD group also showed similar velocity and acceleration to the TD group in the medio-lateral, anterior-posterior and vertical direction. However, the DCD group exhibited greater variability in all foot placement and some body movement measures. The finding that adults with DCD have a reduced ability to produce consistent movement patterns is discussed in relation to postural control limitations and compared to variability of walking measures found in elderly populations. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Finite element computation of a viscous compressible free shear flow governed by the time dependent Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Cooke, C. H.; Blanchard, D. K.

    1975-01-01

    A finite element algorithm for solution of fluid flow problems characterized by the two-dimensional compressible Navier-Stokes equations was developed. The program is intended for viscous compressible high speed flow; hence, primitive variables are utilized. The physical solution was approximated by trial functions which at a fixed time are piecewise cubic on triangular elements. The Galerkin technique was employed to determine the finite-element model equations. A leapfrog time integration is used for marching asymptotically from initial to steady state, with iterated integrals evaluated by numerical quadratures. The nonsymmetric linear systems of equations governing time transition from step-to-step are solved using a rather economical block iterative triangular decomposition scheme. The concept was applied to the numerical computation of a free shear flow. Numerical results of the finite-element method are in excellent agreement with those obtained from a finite difference solution of the same problem.

  3. A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models

    NASA Astrophysics Data System (ADS)

    Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.

    2010-09-01

    For driving soil-vegetation-atmosphere transfer models or hydrological models, high-resolution atmospheric forcing data are needed. For most applications the resolution of atmospheric model output is too coarse. To avoid biases due to the non-linear processes, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline interpolation of the low-resolution data, (2) a so-called 'deterministic' part, based on statistical rules between high-resolution surface variables and the desired atmospheric near-surface variables, and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested on high-resolution model output (400 m horizontal grid spacing). A novel automatic search algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the atmospheric COSMO model, the disaggregation is able to adequately reconstruct the reference fields. Applying downscaling steps 1 and 2 decreases root mean square errors. Step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations and for a fully coupled model system.
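
    The three-step structure can be sketched schematically: (1) smooth interpolation of the coarse field, (2) a deterministic adjustment predicted from a high-resolution surface variable, and (3) autoregressive noise for the remaining subgrid variability. The lapse-rate-style rule, the AR(1)-type noise along one axis, and all grids and parameters below are illustrative stand-ins for the statistically derived downscaling rules of the paper.

      import numpy as np
      from scipy.ndimage import zoom

      rng = np.random.default_rng(5)
      n_coarse, factor = 10, 7                      # e.g. 2.8 km -> 400 m
      coarse_T = 285 + rng.normal(0, 1.5, (n_coarse, n_coarse))                 # coarse 2-m temperature
      elevation = rng.normal(0, 150, (n_coarse * factor, n_coarse * factor))    # high-res predictor

      # Step 1: spline-like interpolation of the coarse field to the fine grid.
      fine_T = zoom(coarse_T, factor, order=3)

      # Step 2: deterministic adjustment from the high-resolution surface predictor
      # (here a simple lapse-rate-style rule: -6.5 K per km of elevation anomaly).
      elev_anom = elevation - zoom(zoom(elevation, 1 / factor, order=1), factor, order=1)
      fine_T += -6.5e-3 * elev_anom

      # Step 3: add autocorrelated (AR(1)-type, row-wise for brevity) noise for the
      # unresolved subgrid variability.
      phi, sigma = 0.8, 0.3
      noise = rng.normal(0, sigma, fine_T.shape)
      for i in range(1, noise.shape[0]):
          noise[i] = phi * noise[i - 1] + np.sqrt(1 - phi**2) * noise[i]
      fine_T += noise

      print("fine-grid field:", fine_T.shape, "std of added subgrid part:",
            round(float((fine_T - zoom(coarse_T, factor, order=3)).std()), 2))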

  4. Modelling fatigue and the use of fatigue models in work settings.

    PubMed

    Dawson, Drew; Ian Noy, Y; Härmä, Mikko; Akerstedt, Torbjorn; Belenky, Gregory

    2011-03-01

    In recent years, theoretical models of the sleep and circadian system developed in laboratory settings have been adapted to predict fatigue and, by inference, performance. This is typically done using the timing of prior sleep and waking or working hours as the primary input and the time course of the predicted variables as the primary output. The aim of these models is to provide employers, unions and regulators with quantitative information on the likely average level of fatigue, or risk, associated with a given pattern of work and sleep with the goal of better managing the risk of fatigue-related errors and accidents/incidents. The first part of this review summarises the variables known to influence workplace fatigue and draws attention to the considerable variability attributable to individual and task variables not included in current models. The second part reviews the current fatigue models described in the scientific and technical literature and classifies them according to whether they predict fatigue directly by using the timing of prior sleep and wake (one-step models) or indirectly by using work schedules to infer an average sleep-wake pattern that is then used to predict fatigue (two-step models). The third part of the review looks at the current use of fatigue models in field settings by organizations and regulators. Given their limitations it is suggested that the current generation of models may be appropriate for use as one element in a fatigue risk management system. The final section of the review looks at the future of these models and recommends a standardised approach for their use as an element of the 'defenses-in-depth' approach to fatigue risk management. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. Flex Fuel Optimized SI and HCCI Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Guoming; Schock, Harold; Yang, Xiaojian

    The central objective of the proposed work is to demonstrate an HCCI (homogeneous charge compression ignition) capable SI (spark ignited) engine that is capable of fast and smooth mode transition between SI and HCCI combustion modes. The model-based control technique was used to develop and validate the proposed control strategy for the fast and smooth combustion mode transition based upon the developed control-oriented engine model; and an HCCI capable SI engine was designed and constructed using a production ready two-step valve-train with an electrical variable valve timing actuating system. Finally, smooth combustion mode transition was demonstrated on a metal engine within eight engine cycles. The Chrysler turbocharged 2.0L I4 direct injection engine was selected as the base engine for the project and the engine was modified to fit the two-step valve with electrical variable valve timing actuating system. To develop the model-based control strategy for stable HCCI combustion and smooth combustion mode transition between SI and HCCI combustion, a control-oriented real-time engine model was developed and implemented into the MSU HIL (hardware-in-the-loop) simulation environment. The developed model was used to study the engine actuating system requirement for the smooth and fast combustion mode transition and to develop the proposed mode transition control strategy. Finally, a single cylinder optical engine was designed and fabricated for studying the HCCI combustion characteristics. Optical engine combustion tests were conducted in both SI and HCCI combustion modes and the test results were used to calibrate the developed control-oriented engine model. Intensive GT-Power simulations were conducted to determine the optimal valve lift (high and low) and the cam phasing range. Delphi was selected to be the supplier for the two-step valve-train and Denso to be the electrical variable valve timing system supplier. A test bench was constructed to develop control strategies for the electrical variable valve timing (VVT) actuating system and satisfactory electrical VVT responses were obtained. The target engine control system was designed and fabricated at MSU for both single-cylinder optical and multi-cylinder metal engines. Finally, the developed control-oriented engine model was successfully implemented into the HIL simulation environment. The Chrysler 2.0L I4 DI engine was modified to fit the two-step valve with electrical variable valve timing actuating system. A used prototype engine was used as the base engine and the cylinder head was modified for the two-step valve with electrical VVT actuating system. Engine validation tests indicated that cylinder #3 had very high blow-by that could not be reduced with new pistons and rings. Due to the time constraint, it was decided to convert the four-cylinder engine into a single cylinder engine by blocking both intake and exhaust ports of the unused cylinders. The model-based combustion mode transition control algorithm was developed in the MSU HIL simulation environment and the Simulink based control strategy was implemented into the target engine controller. With both the single-cylinder metal engine and the control strategy ready, stable HCCI combustion was achieved with a COV of 2.1%. Motoring tests were conducted to validate the actuator transient operations including valve lift, electrical variable valve timing, electronic throttle, multiple spark and injection controls.
After the actuator operations were confirmed, a 15-cycle smooth combustion mode transition from SI to HCCI combustion was achieved, and a fast 8-cycle smooth combustion mode transition followed. With a fast electrical variable valve timing actuator, the number of engine cycles required for mode transition can be reduced down to five. It was also found that the combustion mode transition is sensitive to the charge air and engine coolant temperatures, and regulating the corresponding temperatures to the target levels during the combustion mode transition is the key to a smooth combustion mode transition. In summary, the proposed combustion mode transition strategy using the hybrid combustion mode that starts with the SI combustion and ends with the HCCI combustion was experimentally validated on a metal engine. The proposed model-based control approach made it possible to complete the SI-HCCI combustion mode transition within eight engine cycles utilizing the well controlled hybrid combustion mode. Without the intensive control-oriented engine modeling and HIL simulation study of using the hybrid combustion mode during the mode transition, it would be impossible to validate the proposed combustion mode transition strategy in a very short period.

  6. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, who designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multi-resolution approach are: 1) the computational capabilities of compactly supported Fup basis functions, which can resolve all spatial and temporal scales; 2) a multi-resolution representation of heterogeneity as well as of all other input and output variables; 3) an accurate, adaptive, and efficient solution strategy; and 4) semi-analytical properties that increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is separately analyzed, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy but also a description of subsurface processes that is closely tied to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding the classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements to the proposed methodology. Since state-of-the-art multiresolution approaches usually use the method of lines and adapt only in space, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only along lines where the solution changes rapidly. The Fup basis functions enable a continuous approximation in time, simple interpolation across different temporal lines, and local control of the time stepping. A critical aspect of time-integration accuracy is the construction of the spatial stencil used for accurate calculation of spatial derivatives. Since the common approach for wavelets and splines uses a finite-difference operator, we developed here a collocation operator that includes both solution values and the differential operator. In this way, the new, improved algorithm is adaptive in space and time, enabling accurate solutions of groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between collocation and finite volume approaches are discussed. Finally, results show the application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.

  7. The spinal control of locomotion and step-to-step variability in left-right symmetry from slow to moderate speeds

    PubMed Central

    Dambreville, Charline; Labarre, Audrey; Thibaudier, Yann; Hurteau, Marie-France

    2015-01-01

    When speed changes during locomotion, both temporal and spatial parameters of the pattern must adjust. Moreover, at slow speeds the step-to-step pattern becomes increasingly variable. The objectives of the present study were to assess if the spinal locomotor network adjusts both temporal and spatial parameters from slow to moderate stepping speeds and to determine if it contributes to step-to-step variability in left-right symmetry observed at slow speeds. To determine the role of the spinal locomotor network, the spinal cord of 6 adult cats was transected (spinalized) at low thoracic levels and the cats were trained to recover hindlimb locomotion. Cats were implanted with electrodes to chronically record electromyography (EMG) in several hindlimb muscles. Experiments began once a stable hindlimb locomotor pattern emerged. During experiments, EMG and bilateral video recordings were made during treadmill locomotion from 0.1 to 0.4 m/s in 0.05 m/s increments. Cycle and stance durations significantly decreased with increasing speed, whereas swing duration remained unaffected. Extensor burst duration significantly decreased with increasing speed, whereas sartorius burst duration remained unchanged. Stride length, step length, and the relative distance of the paw at stance offset significantly increased with increasing speed, whereas the relative distance at stance onset and both the temporal and spatial phasing between hindlimbs were unaffected. Both temporal and spatial step-to-step left-right asymmetry decreased with increasing speed. Therefore, the spinal cord is capable of adjusting both temporal and spatial parameters during treadmill locomotion, and it is responsible, at least in part, for the step-to-step variability in left-right symmetry observed at slow speeds. PMID:26084910

  8. Aerobic Steps As Measured by Pedometry and Their Relation to Central Obesity

    PubMed Central

    DUCHEČKOVÁ, Petra; FOREJT, Martin

    2014-01-01

    Abstract Background The purpose of this study was to examine the relation between daily steps and aerobic steps and anthropometric variables, using the waist-to-hip ratio (WHR) and waist-to-height ratio (WHtR). Methods The participants in this cross-sectional study had their measurements taken by a trained anthropologist and were then instructed to wear an Omron pedometer for seven consecutive days. A series of statistical tests (Mann-Whitney U test, Kruskal-Wallis ANOVA, multiple comparisons of z’ values and contingency tables) was performed in order to assess the relation between daily steps and aerobic steps and anthropometric variables. Results A total of 507 individuals (380 females and 127 males) participated in the study. The average daily number of steps and aerobic steps was significantly lower in individuals with risky WHR and WHtR compared with individuals with normal WHR (P=0.005) and WHtR (P=0.000). A comparison of age and anthropometric variables across aerobic-step activity categories was statistically significant for all studied parameters. According to the contingency tables for normal steps, there is a 5.75-fold higher risk of having WHtR>0.50 in the low-activity category compared with the high-activity category. Conclusions Both normal and aerobic steps are significantly associated with central obesity and other body composition variables. This result is important for older people, who are more likely to perform low-intensity rather than moderate- or high-intensity activities. Our results also indicate that the risk of having WHtR>0.50 can be reduced by almost a factor of six by increasing daily steps to over 8985 steps per day. PMID:25927036

  9. A Taxonomy of Instructional Strategies in Early Childhood Education; Toward a Developmental Theory of Instructional Design.

    ERIC Educational Resources Information Center

    Vance, Barbara

    This paper suggests two steps in instructional design for early childhood that can be derived from a recent major paper on instructional strategy taxonomy. These steps, together with the instructional design variables involved in each step, are reviewed relative to current research in child development and early education. The variables reviewed…

  10. An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.; Barnes, D. C.

    2011-08-01

    This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier "energy-conserving" explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom with regard to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.
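
    The energy-conserving character of a nonlinearly converged, time-centered update can be illustrated with a toy single-particle push in a fixed linear electrostatic field, iterated to a tight fixed-point tolerance at each step. This sketch only shows the time-centering and nonlinear-iteration idea under those simplifying assumptions; it omits the self-consistent Vlasov-Ampère field solve, orbit averaging, sub-stepping, and the Jacobian-free Newton-Krylov solver described in the abstract.

        # Toy illustration of a nonlinearly iterated, time-centered (Crank-Nicolson) push.
        # For the fixed linear field E(x) = -x used here (q = m = 1), the converged midpoint
        # update conserves the energy 0.5*v^2 + 0.5*x^2 to the iteration tolerance even at
        # large time steps. This is only a sketch of the time-centering idea, not the
        # Vlasov-Ampere PIC algorithm of the paper.

        def E(x):
            return -x  # hypothetical fixed harmonic electric field

        def implicit_push(x, v, dt, tol=1e-14, max_iter=100):
            x_new, v_new = x, v
            for _ in range(max_iter):
                x_mid = 0.5 * (x + x_new)            # time-centered position
                v_half = 0.5 * (v + v_new)           # time-centered velocity
                x_next = x + dt * v_half
                v_next = v + dt * E(x_mid)
                converged = abs(x_next - x_new) < tol and abs(v_next - v_new) < tol
                x_new, v_new = x_next, v_next
                if converged:
                    break
            return x_new, v_new

        x, v, dt = 1.0, 0.0, 0.5                     # dt is a sizeable fraction of the period
        energy0 = 0.5 * v**2 + 0.5 * x**2
        for _ in range(1000):
            x, v = implicit_push(x, v, dt)
        energy = 0.5 * v**2 + 0.5 * x**2
        print(f"relative energy error after 1000 large steps: {abs(energy - energy0) / energy0:.2e}")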

  11. Exact charge and energy conservation in implicit PIC with mapped computational meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Barnes, D. C.

    This paper discusses a novel fully implicit formulation for a one-dimensional electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls in earlier implicit PIC implementations. In particular, the formulation is stable against temporal (Courant-Friedrichs-Lewy) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical round-off for arbitrary implicit time steps (unlike the earlier energy-conserving explicit PIC formulation, which only conserves energy in the limit of arbitrarily small time steps). While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental to prevent particle tunneling (a deleterious effect for long-term accuracy). The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom with regard to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible.

  12. A hazard rate analysis of fertility using duration data from Malaysia.

    PubMed

    Chang, C

    1988-01-01

    Data from the Malaysia Fertility and Family Planning Survey (MFLS) of 1974 were used to investigate the effects of biological and socioeconomic variables on fertility based on the hazard rate model. Another study objective was to investigate the robustness of the findings of Trussell et al. (1985) by comparing the findings of this study with theirs. The hazard rate of conception for the jth fecundable spell of the ith woman, $h_{ij}$, is determined by duration dependence, $t_{ij}$, measured by the waiting time to conception; unmeasured heterogeneity, $HET_i$; time-invariant variables, $Y_i$ (race, cohort, education, age at marriage); and time-varying variables, $X_{ij}$ (age, parity, opportunity cost, income, child mortality, child sex composition). In this study, all the time-varying variables were constant over a spell. An asymptotic $\chi^2$ test for the equality of constant hazard rates across birth orders, allowing for time-invariant variables and heterogeneity, showed the importance of time-varying variables and duration dependence. Under the assumption of fixed-effects heterogeneity and a Weibull distribution for the duration of waiting time to conception, the empirical results revealed a negative parity effect, a negative impact from male children, and a positive effect from child mortality on the hazard rate of conception. The estimates of step functions for the hazard rate of conception showed parity-dependent fertility control, evidence of heterogeneity, and the possibility of nonmonotonic duration dependence. In a hazard rate model with piecewise-linear-segment duration dependence, socioeconomic variables such as cohort, child mortality, income, and race had significant effects, after controlling for the length of the preceding birth. The duration dependence was consistent with the common finding, i.e., first increasing and then decreasing at a slow rate. The effects of education and opportunity cost on fertility were insignificant.
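
    The hazard specification can be illustrated with a generic Weibull proportional-hazards form, in which a baseline hazard in the waiting time is scaled by an exponential function of covariates. The sketch below uses hypothetical covariate values and coefficient signs chosen to match the reported directions of effect; it is not the study's estimated model (heterogeneity terms and piecewise segments are omitted).

        # Hedged sketch of a Weibull proportional-hazards rate for waiting time to conception,
        # h(t) = (k/lam) * (t/lam)**(k-1) * exp(x.b). Generic textbook form for illustration;
        # not the exact specification estimated in the study.
        import numpy as np

        def weibull_hazard(t, shape_k, scale_lam, x, beta):
            """Baseline Weibull hazard scaled by covariates x with coefficients beta."""
            baseline = (shape_k / scale_lam) * (t / scale_lam) ** (shape_k - 1)
            return baseline * np.exp(np.dot(x, beta))

        # Hypothetical covariates: [parity, male_children, child_deaths]
        x = np.array([2.0, 1.0, 0.0])
        beta = np.array([-0.15, -0.10, 0.20])   # signs follow the reported directions of effect
        t_months = np.linspace(1, 36, 6)
        print(weibull_hazard(t_months, shape_k=1.3, scale_lam=12.0, x=x, beta=beta))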

  13. SAMPLING OSCILLOSCOPE

    DOEpatents

    Sugarman, R.M.

    1960-08-30

    An oscilloscope is designed for displaying transient signal waveforms having random time and amplitude distributions. The oscilloscope is a sampling device that selects for display a portion of only those waveforms having a particular range of amplitudes. For this purpose a pulse-height analyzer is provided to screen the pulses. A variable voltage-level shifter and a time-scale ramp-voltage generator take the pulse height relative to the start of the waveform. The variable voltage shifter produces a voltage level raised one step for each sequential signal waveform to be sampled, and this results in an unsmeared record of input signal waveforms. Appropriate delay devices permit each sampled waveform to pass its peak amplitude before the circuit selects it for display.

  14. Numerical solution of the wave equation with variable wave speed on nonconforming domains by high-order difference potentials

    NASA Astrophysics Data System (ADS)

    Britt, S.; Tsynkov, S.; Turkel, E.

    2018-02-01

    We solve the wave equation with variable wave speed on nonconforming domains with fourth order accuracy in both space and time. This is accomplished using an implicit finite difference (FD) scheme for the wave equation and solving an elliptic (modified Helmholtz) equation at each time step with fourth order spatial accuracy by the method of difference potentials (MDP). High-order MDP utilizes compact FD schemes on regular structured grids to efficiently solve problems on nonconforming domains while maintaining the design convergence rate of the underlying FD scheme. Asymptotically, the computational complexity of high-order MDP scales the same as that for FD.

  15. Development of an efficient computer code to solve the time-dependent Navier-Stokes equations. [for predicting viscous flow fields about lifting bodies

    NASA Technical Reports Server (NTRS)

    Harp, J. L., Jr.; Oatway, T. P.

    1975-01-01

    A research effort was conducted with the goal of reducing the computer time of a Navier-Stokes computer code for prediction of viscous flow fields about lifting bodies. A two-dimensional, time-dependent, laminar, transonic computer code (STOKES) was modified to incorporate a non-uniform time-step procedure. The non-uniform time step requires updating a zone only as often as required by its own stability criteria or those of its immediate neighbors. In the uniform time-step scheme, each zone is updated as often as required by the least stable zone of the finite difference mesh. Because program variables are updated less frequently, it was expected that the non-uniform time step would result in a reduction of execution time by a factor of five to ten. Available funding was exhausted prior to successful demonstration of the benefits to be derived from the non-uniform time-step method.

  16. An efficient and robust algorithm for two dimensional time dependent incompressible Navier-Stokes equations: High Reynolds number flows

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1991-01-01

    An algorithm is presented for unsteady two-dimensional incompressible Navier-Stokes calculations. This algorithm is based on the fourth order partial differential equation for incompressible fluid flow which uses the streamfunction as the only dependent variable. The algorithm is second order accurate in both time and space. It uses a multigrid solver at each time step. It is extremely efficient with respect to the use of both CPU time and physical memory. It is extremely robust with respect to Reynolds number.

  17. Advanced spectral methods for climatic time series

    USGS Publications Warehouse

    Ghil, M.; Allen, M.R.; Dettinger, M.D.; Ide, K.; Kondrashov, D.; Mann, M.E.; Robertson, A.W.; Saunders, A.; Tian, Y.; Varadi, F.; Yiou, P.

    2002-01-01

    The analysis of univariate or multivariate time series provides crucial information to describe, understand, and predict climatic variability. The discovery and implementation of a number of novel methods for extracting useful information from time series has recently revitalized this classical field of study. Considerable progress has also been made in interpreting the information so obtained in terms of dynamical systems theory. In this review we describe the connections between time series analysis and nonlinear dynamics, discuss signal-to-noise enhancement, and present some of the novel methods for spectral analysis. The various steps, as well as the advantages and disadvantages of these methods, are illustrated by their application to an important climatic time series, the Southern Oscillation Index. This index captures major features of interannual climate variability and is used extensively in its prediction. Regional and global sea surface temperature data sets are used to illustrate multivariate spectral methods. Open questions and further prospects conclude the review.
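
    As a minimal illustration of the basic spectral-analysis step that the more advanced methods refine, the sketch below computes a raw periodogram of a synthetic index with an interannual oscillation plus noise and reports the dominant period. The synthetic series is purely illustrative and is not the Southern Oscillation Index.

        # Raw periodogram of a synthetic monthly index with an ENSO-like ~4-year cycle.
        # Illustrative only; the advanced methods in the review (SSA, multitaper, etc.)
        # refine this basic step.
        import numpy as np

        rng = np.random.default_rng(0)
        n_months = 600                                    # 50 years of monthly values
        t = np.arange(n_months)
        index = np.sin(2 * np.pi * t / 48) + 0.8 * rng.standard_normal(n_months)

        spectrum = np.abs(np.fft.rfft(index - index.mean())) ** 2
        freqs = np.fft.rfftfreq(n_months, d=1.0)          # cycles per month

        peak = freqs[np.argmax(spectrum[1:]) + 1]         # skip the zero frequency
        print(f"dominant period ~ {1.0 / peak / 12:.1f} years")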

  18. Industrial implementation of spatial variability control by real-time SPC

    NASA Astrophysics Data System (ADS)

    Roule, O.; Pasqualini, F.; Borde, M.

    2016-10-01

    Advanced technology nodes require more and more information to get the wafer process set up well. The critical dimension of components decreases following Moore's law. At the same time, the intra-wafer dispersion linked to the spatial non-uniformity of tool processes cannot decrease in the same proportion. APC systems (Advanced Process Control) are being developed in the waferfab to automatically adjust and tune wafer processing, based on a large amount of process context information. They can generate and monitor complex intra-wafer process profile corrections between different process steps. This leads us to bring spatial variability under control, in real time, with our SPC system (Statistical Process Control). This paper will outline the architecture of an integrated process control system for shape monitoring in 3D, implemented in a waferfab.

  19. Navier-Stokes solution on the CYBER-203 by a pseudospectral technique

    NASA Technical Reports Server (NTRS)

    Lambiotte, J. J.; Hussaini, M. Y.; Bokhari, S.; Orszag, S. A.

    1983-01-01

    A three-level, time-split, mixed spectral/finite difference method for the numerical solution of the three-dimensional, compressible Navier-Stokes equations has been developed and implemented on the Control Data Corporation (CDC) CYBER-203. This method uses a spectral representation for the flow variables in the streamwise and spanwise coordinates, and central differences in the normal direction. The five dependent variables are interleaved one horizontal plane at a time, and the array of their values at the grid points of each horizontal plane is a typical vector in the computation. The code is organized so as to require, per time step, a single forward-backward pass through the entire data base. The one- and two-dimensional Fast Fourier Transforms are performed using software especially developed for the CYBER-203.

  20. Coupling Osmolarity Dynamics within Human Tear Film on an Eye-Shaped Domain

    NASA Astrophysics Data System (ADS)

    Li, Longfei; Braun, R. J.; Driscoll, T. A.; Henshaw, W. D.; Banks, J. W.; King-Smith, P. E.

    2013-11-01

    The concentration of ions in the tear film (osmolarity) is a key variable in understanding dry eye symptoms and disease. We derived a mathematical model that couples osmolarity (treated as a single solute) and fluid dynamics within the tear film on a 2D eye-shaped domain. The model concerns the physical effects of evaporation, surface tension, viscosity, ocular surface wettability, osmolarity, osmosis and tear fluid supply and drainage. We solved the governing system of coupled nonlinear PDEs using the Overture computational framework developed at LLNL, together with a new hybrid time stepping scheme (using variable step BDF and RKC) that was added to the framework. Results of our numerical simulations show good agreement with existing 1D models (for both tear film and osmolarity dynamics) and provide new insight about the osmolarity distribution over the ocular surface during the interblink.

  1. Variable aperture-based ptychographical iterative engine method.

    PubMed

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in various fields of scientific research. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  2. Use of distributed water level and soil moisture data in the evaluation of the PUMMA periurban distributed hydrological model: application to the Mercier catchment, France

    NASA Astrophysics Data System (ADS)

    Braud, Isabelle; Fuamba, Musandji; Branger, Flora; Batchabani, Essoyéké; Sanzana, Pedro; Sarrazin, Benoit; Jankowfsky, Sonja

    2016-04-01

    Distributed hydrological models are best used when their outputs are compared not only to the outlet discharge, but also to internal observed variables, so that they can be used as powerful hypothesis-testing tools. In this paper, the value of distributed networks of sensors for evaluating a distributed model and the underlying functioning hypotheses is explored. Two types of data are used: surface soil moisture and water level in streams. The model used in the study is the periurban PUMMA model (Peri-Urban Model for landscape Management, Jankowfsky et al., 2014), which is applied to the Mercier catchment (6.7 km2), a semi-rural catchment with 14% imperviousness, located close to Lyon, France, where distributed water level (13 locations) and surface soil moisture data (9 locations) are available. Model parameters are specified using in situ information or the results of previous studies, without any calibration, and the model is run for four years from January 1st 2007 to December 31st 2010 with a variable time step for rainfall and an hourly time step for reference evapotranspiration. The model evaluation protocol was guided by the available data and how they can be interpreted in terms of hydrological processes and constraints for the model components and parameters. We followed a stepwise approach. The first step was a simple model water balance assessment, without comparison to observed data. It can be interpreted as a basic quality check for the model, ensuring that it conserves mass, distinguishes between dry and wet years, and reacts to rainfall events. The second step was an evaluation against observed discharge data at the outlet, using classical performance criteria. It gives a general picture of the model performance and allows comparing it to other studies found in the literature. In the next steps (steps 3 to 6), the focus was placed on more specific hydrological processes. In step 3, distributed surface soil moisture data were used to assess the relevance of the simulated seasonal soil water storage dynamics. In step 4, we evaluated the base flow generation mechanisms in the model through comparison with continuous water level data transformed into stream intermittency statistics. In step 5, the water level data were used again, but at the event time scale, to evaluate the fast flow generation components through comparison of modelled and observed reaction and response times. Finally, in step 6, we studied the correlation between observed and simulated reaction and response times and various characteristics of the rainfall events (rain volume, intensity) and antecedent soil moisture, to see if the model was able to reproduce the observed features as described in Sarrazin (2012). The results show that the model is able to represent the soil water storage dynamics and stream intermittency satisfactorily. On the other hand, the model does not reproduce the response times or the difference in response between forested and agricultural areas. References: Jankowfsky et al., 2014. Assessing anthropogenic influence on the hydrology of small peri-urban catchments: Development of the object-oriented PUMMA model by integrating urban and rural hydrological models. J. Hydrol., 517, 1056-1071. Sarrazin, B., 2012. MNT et observations multi-locales du réseau hydrographique d'un petit bassin versant rural dans une perspective d'aide à la modélisation hydrologique. Ecole doctorale Terre, Univers, Environnement. l'Institut National Polytechnique de Grenoble, 269 pp (in French).
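
    As an example of the kind of classical performance criterion mentioned in step 2, the sketch below computes the Nash-Sutcliffe efficiency between observed and simulated discharge; whether this particular criterion matches those used in the study is an assumption, and the discharge values are placeholders.

        # Nash-Sutcliffe efficiency (NSE) as a classical discharge performance criterion.
        # The criterion choice and the data below are assumptions for illustration only.
        import numpy as np

        def nash_sutcliffe(observed, simulated):
            """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit."""
            observed = np.asarray(observed, dtype=float)
            simulated = np.asarray(simulated, dtype=float)
            return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

        obs = np.array([0.12, 0.30, 0.85, 0.40, 0.22, 0.15])   # hypothetical hourly discharge (m3/s)
        sim = np.array([0.10, 0.35, 0.70, 0.45, 0.25, 0.14])
        print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")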

  3. Comparison of IMRT planning with two-step and one-step optimization: a strategy for improving therapeutic gain and reducing the integral dose

    NASA Astrophysics Data System (ADS)

    Abate, A.; Pressello, M. C.; Benassi, M.; Strigari, L.

    2009-12-01

    The aim of this study was to evaluate the effectiveness and efficiency in inverse IMRT planning of one-step optimization with the step-and-shoot (SS) technique as compared to traditional two-step optimization using the sliding windows (SW) technique. The Pinnacle IMRT TPS allows both one-step and two-step approaches. The same beam setup for five head-and-neck tumor patients and dose-volume constraints were applied for all optimization methods. Two-step plans were produced converting the ideal fluence with or without a smoothing filter into the SW sequence. One-step plans, based on direct machine parameter optimization (DMPO), had the maximum number of segments per beam set at 8, 10, 12, producing a directly deliverable sequence. Moreover, the plans were generated whether a split-beam was used or not. Total monitor units (MUs), overall treatment time, cost function and dose-volume histograms (DVHs) were estimated for each plan. PTV conformality and homogeneity indexes and normal tissue complication probability (NTCP) that are the basis for improving therapeutic gain, as well as non-tumor integral dose (NTID), were evaluated. A two-sided t-test was used to compare quantitative variables. All plans showed similar target coverage. Compared to two-step SW optimization, the DMPO-SS plans resulted in lower MUs (20%), NTID (4%) as well as NTCP values. Differences of about 15-20% in the treatment delivery time were registered. DMPO generates less complex plans with identical PTV coverage, providing lower NTCP and NTID, which is expected to reduce the risk of secondary cancer. It is an effective and efficient method and, if available, it should be favored over the two-step IMRT planning.

  4. Origami Wheel Transformer: A Variable-Diameter Wheel Drive Robot Using an Origami Structure.

    PubMed

    Lee, Dae-Young; Kim, Sa-Reum; Kim, Ji-Suk; Park, Jae-Jun; Cho, Kyu-Jin

    2017-06-01

    A wheel drive mechanism is simple, stable, and efficient, but its mobility in unstructured terrain is seriously limited. Using a deformable wheel is one of the ways to increase the mobility of a wheel drive robot. By changing the radius of its wheels, the robot becomes able to pass over not only high steps but also narrow gaps. In this article, we propose a novel design for a variable-diameter wheel using an origami-based soft robotics design approach. By simply folding a patterned sheet into a wheel shape, a variable-diameter wheel was built without requiring lots of mechanical parts and a complex assembly process. The wheel's diameter can change from 30 to 68 mm, and it is light in weight at about 9.7 g. Although composed of soft materials (fabrics and films), the wheel can bear more than 400 times its weight. The robot was able to change the wheel's radius in response to terrain conditions, allowing it to pass over a 50-mm gap when the wheel is shrunk and a 50-mm step when the wheel is enlarged.

  5. Breaking the trade-off between efficiency and service.

    PubMed

    Frei, Frances X

    2006-11-01

    For manufacturers, customers are the open wallets at the end of the supply chain. But for most service businesses, they are key inputs to the production process. Customers introduce tremendous variability to that process, but they also complain about any lack of consistency and don't care about the company's profit agenda. Managing customer-introduced variability, the author argues, is a central challenge for service companies. The first step is to diagnose which type of variability is causing mischief: Customers may arrive at different times, request different kinds of service, possess different capabilities, make varying degrees of effort, and have different personal preferences. Should companies accommodate variability or reduce it? Accommodation often involves asking employees to compensate for the variations among customers--a potentially costly solution. Reduction often means offering a limited menu of options, which may drive customers away. Some companies have learned to deal with customer-introduced variability without damaging either their operating environments or customers' service experiences. Starbucks, for example, handles capability variability among its customers by teaching them the correct ordering protocol. Dell deals with arrival and request variability in its high-end server business by outsourcing customer service while staying in close touch with customers to discuss their needs and assess their experiences with third-party providers. The effective management of variability often requires a company to influence customers' behavior. Managers attempting that kind of intervention can follow a three-step process: diagnosing the behavioral problem, designing an operating role for customers that creates new value for both parties, and testing and refining approaches for influencing behavior.

  6. Finite-sample corrected generalized estimating equation of population average treatment effects in stepped wedge cluster randomized trials.

    PubMed

    Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B

    2017-04-01

    Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo, and it is logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and avoid parametric assumptions about error distributions, therefore, providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equation for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
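
    A minimal sketch of the marginal (population-average) modeling approach is shown below using the statsmodels GEE implementation with an exchangeable working correlation on a simulated parallel-arm cluster randomized dataset. The default robust sandwich variance is used; the small-sample corrections evaluated in the paper are not reproduced here, and the simulated data are purely illustrative.

        # GEE marginal mean model with an exchangeable working correlation (statsmodels).
        # Simulated parallel-arm cluster randomized data; small-sample corrections not shown.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n_clusters, n_per_cluster = 10, 30
        cluster = np.repeat(np.arange(n_clusters), n_per_cluster)
        treated = np.repeat(rng.integers(0, 2, n_clusters), n_per_cluster)   # cluster-level assignment
        cluster_effect = np.repeat(rng.normal(0, 0.3, n_clusters), n_per_cluster)
        lin_pred = -0.5 + 0.6 * treated + cluster_effect
        y = rng.binomial(1, 1 / (1 + np.exp(-lin_pred)))

        X = sm.add_constant(pd.DataFrame({"treated": treated}))
        model = sm.GEE(y, X, groups=cluster,
                       family=sm.families.Binomial(),
                       cov_struct=sm.cov_struct.Exchangeable())
        result = model.fit()                      # robust (sandwich) standard errors by default
        print(result.summary())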

  7. The Validity and Reliability of an iPhone App for Measuring Running Mechanics.

    PubMed

    Balsalobre-Fernández, Carlos; Agopyan, Hovannes; Morin, Jean-Benoit

    2017-07-01

    The purpose of this investigation was to analyze the validity of an iPhone application (Runmatic) for measuring running mechanics. To do this, 96 steps from 12 different runs at speeds ranging from 2.77-5.55 m·s^-1 were recorded simultaneously with Runmatic, as well as with an opto-electronic device installed on a motorized treadmill to measure the contact and aerial time of each step. Additionally, several running mechanics variables were calculated using the contact and aerial times measured, and previously validated equations. Several statistics were computed to test the validity and reliability of Runmatic in comparison with the opto-electronic device for the measurement of contact time, aerial time, vertical oscillation, leg stiffness, maximum relative force, and step frequency. The running mechanics values obtained with both the app and the opto-electronic device showed a high degree of correlation (r = .94-.99, p < .001). Moreover, there was very close agreement between instruments as revealed by the ICC(2,1) (ICC = 0.965-0.991). Finally, both Runmatic and the opto-electronic device showed almost identical reliability levels when measuring each set of 8 steps for every run recorded. In conclusion, Runmatic has been proven to be a highly reliable tool for measuring the running mechanics studied in this work.
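
    One way such variables can be derived from contact and aerial times is with a sine-wave (spring-mass) force model; the sketch below uses a commonly cited approximation of that kind, assumed here for illustration only, and is not necessarily the exact set of validated equations implemented in Runmatic.

        # Running mechanics estimated from contact and aerial times with a sine-wave
        # (spring-mass) force model. The formulas below are a commonly used approximation
        # assumed for illustration; they may differ from the equations used in the app.
        import math

        def running_mechanics(contact_s, aerial_s, mass_kg):
            g = 9.81
            step_freq = 1.0 / (contact_s + aerial_s)                           # steps per second
            f_max = mass_kg * g * (math.pi / 2) * (aerial_s / contact_s + 1)   # peak vertical force (N)
            dz_contact = f_max * contact_s**2 / (mass_kg * math.pi**2) - g * contact_s**2 / 8
            return {"step_frequency_hz": step_freq,
                    "max_relative_force_bw": f_max / (mass_kg * g),
                    "vertical_displacement_contact_m": dz_contact}

        print(running_mechanics(contact_s=0.24, aerial_s=0.11, mass_kg=70.0))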

  8. Experimental study on the stability and failure of individual step-pool

    NASA Astrophysics Data System (ADS)

    Zhang, Chendi; Xu, Mengzhen; Hassan, Marwan A.; Chartrand, Shawn M.; Wang, Zhaoyin

    2018-06-01

    Step-pools are one of the most common bedforms in mountain streams, the stability and failure of which play a significant role for riverbed stability and fluvial processes. Given this importance, flume experiments were performed with a manually constructed step-pool model. The experiments were carried out with a constant flow rate to study features of step-pool stability as well as failure mechanisms. The results demonstrate that motion of the keystone grain (KS) caused 90% of the total failure events. The pool reached its maximum depth and either exhibited relative stability for a period before step failure, which was called the stable phase, or the pool collapsed before its full development. The critical scour depth for the pool increased linearly with discharge until the trend was interrupted by step failure. Variability of the stable phase duration ranged by one order of magnitude, whereas variability of pool scour depth was constrained within 50%. Step adjustment was detected in almost all of the runs with step-pool failure and was one or two orders smaller than the diameter of the step stones. Two discharge regimes for step-pool failure were revealed: one regime captures threshold conditions and frames possible step-pool failure, whereas the second regime captures step-pool failure conditions and is the discharge of an exceptional event. In the transitional stage between the two discharge regimes, pool and step adjustment magnitude displayed relatively large variabilities, which resulted in feedbacks that extended the duration of step-pool stability. Step adjustment, which was a type of structural deformation, increased significantly before step failure. As a result, we consider step deformation as the direct explanation to step-pool failure rather than pool scour, which displayed relative stability during step deformations in our experiments.

  9. Langevin dynamics for vector variables driven by multiplicative white noise: A functional formalism

    NASA Astrophysics Data System (ADS)

    Moreno, Miguel Vera; Arenas, Zochil González; Barci, Daniel G.

    2015-04-01

    We discuss general multidimensional stochastic processes driven by a system of Langevin equations with multiplicative white noise. In particular, we address the problem of how time-reversed diffusion processes are affected by the variety of conventions available for dealing with stochastic integrals. We present a functional formalism to build up the generating functional of correlation functions without any type of discretization of the Langevin equations at any intermediate step. The generating functional is characterized by a functional integration over two sets of commuting variables, as well as Grassmann variables. In this representation, the time reversal transformation becomes a linear transformation in the extended variables, simplifying in this way the complexity introduced by the mixture of prescriptions and the associated calculus rules. The stochastic calculus is codified in our formalism in the structure of the Grassmann algebra. We study some examples such as higher-order-derivative Langevin equations and the functional representation of the micromagnetic stochastic Landau-Lifshitz-Gilbert equation.
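
    The practical consequence of the choice of prescription can be seen numerically: for the same scalar Langevin equation with multiplicative noise, an Euler-Maruyama discretization converges to the Ito solution while a Heun-type predictor-corrector converges to the Stratonovich solution, and their stationary means differ. The toy example below is only a numerical illustration of that point, not the functional formalism of the paper.

        # Ito vs. Stratonovich for dx = (1 - x) dt + 0.5 x dW. The Stratonovich reading
        # adds a spurious drift 0.5*b*b' = 0.125*x, so its stationary mean is roughly
        # 1/(1 - 0.125) ~= 1.14, versus ~1.0 for Ito. Toy numerical illustration only.
        import numpy as np

        rng = np.random.default_rng(2)

        def drift(x):
            return 1.0 - x            # mean-reverting drift

        def diffusion(x):
            return 0.5 * x            # multiplicative noise amplitude b(x)

        def simulate(stratonovich, n_steps=200_000, dt=1e-3, x0=1.0):
            x = x0
            samples = np.empty(n_steps)
            for i in range(n_steps):
                dw = rng.normal(0.0, np.sqrt(dt))
                if stratonovich:
                    # Heun predictor-corrector converges to the Stratonovich solution
                    x_pred = x + drift(x) * dt + diffusion(x) * dw
                    x = x + 0.5 * (drift(x) + drift(x_pred)) * dt \
                          + 0.5 * (diffusion(x) + diffusion(x_pred)) * dw
                else:
                    # Euler-Maruyama converges to the Ito solution
                    x = x + drift(x) * dt + diffusion(x) * dw
                samples[i] = x
            return samples

        print("Ito          mean ~", simulate(False)[20_000:].mean())
        print("Stratonovich mean ~", simulate(True)[20_000:].mean())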

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumuluru, Jaya Shankar; McCulloch, Richard Chet James

    In this work a new hybrid genetic algorithm was developed which combines a rudimentary adaptive steepest ascent hill climbing algorithm with a sophisticated evolutionary algorithm in order to optimize complex multivariate design problems. By combining a highly stochastic algorithm (evolutionary) with a simple deterministic optimization algorithm (adaptive steepest ascent), computational resources are conserved and the solution converges rapidly when compared to either algorithm alone. In genetic algorithms, natural selection is mimicked by random events such as breeding and mutation. In the adaptive steepest ascent algorithm, each variable is perturbed by a small amount and the variable that caused the most improvement is incremented by a small step. If the direction of most benefit is exactly opposite of the previous direction with the most benefit, then the step size is reduced by a factor of 2; thus the step size adapts to the terrain. A graphical user interface was created in MATLAB to provide an interface between the hybrid genetic algorithm and the user. Additional features such as bounding the solution space and weighting the objective functions individually are also built into the interface. The algorithm developed was tested to optimize the functions developed for a wood pelleting process. Using process variables (such as feedstock moisture content, die speed, and preheating temperature), pellet properties were appropriately optimized. Specifically, variables were found which maximized unit density, bulk density, tapped density, and durability while minimizing pellet moisture content and specific energy consumption. The time and computational resources required for the optimization were dramatically decreased using the hybrid genetic algorithm when compared to MATLAB's native evolutionary optimization tool.
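
    The adaptive steepest-ascent component lends itself to a compact sketch: each variable is perturbed in turn, the most beneficial move is taken, and the step size is halved when the best direction reverses. The code below is a minimal illustration with a hypothetical objective; the evolutionary half of the hybrid and the actual pellet-process response functions are omitted.

        # Adaptive steepest-ascent hill climbing: perturb each variable, step the most
        # beneficial one, and halve the step when the best direction reverses (and when no
        # perturbation helps, so the search can terminate). Hypothetical objective only;
        # the evolutionary half of the hybrid is not shown.
        import numpy as np

        def objective(x):
            # Hypothetical smooth surrogate to maximize (peak at x = [0.3, 0.7, 0.5])
            target = np.array([0.3, 0.7, 0.5])
            return -np.sum((x - target) ** 2)

        def adaptive_hill_climb(x0, step=0.1, min_step=1e-4, max_iter=1000):
            x = np.array(x0, dtype=float)
            last_move = np.zeros_like(x)
            for _ in range(max_iter):
                base = objective(x)
                gains = []
                for i in range(len(x)):
                    for direction in (+1.0, -1.0):
                        trial = x.copy()
                        trial[i] += direction * step
                        gains.append((objective(trial) - base, i, direction))
                best_gain, i_best, d_best = max(gains)
                move = np.zeros_like(x)
                move[i_best] = d_best
                if best_gain <= 0:
                    step *= 0.5                              # nothing helps: shrink and retry
                    if step < min_step:
                        break
                    continue
                if np.dot(move, last_move) < 0:              # best direction reversed
                    step *= 0.5                              # step size adapts to the terrain
                    if step < min_step:
                        break
                x[i_best] += d_best * step
                last_move = move
            return x

        print(adaptive_hill_climb([0.0, 0.0, 0.0]))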

  11. Analysis of the spatial and temporal variability of mountain snowpack and terrestrial water storage in the Upper Snake River, USA

    EPA Science Inventory

    The spatial and temporal relationships of winter snowpack and terrestrial water storage (TWS) in the Upper Snake River were analyzed for water years 2001–2010 at a monthly time step. We coupled a regionally validated snow model with gravimetric measurements of the Earth’s water...

  12. Assimilating a synthetic Kalman filter leaf area index series into the WOFOST model to improve regional winter wheat yield estimation

    USDA-ARS?s Scientific Manuscript database

    The scale mismatch between remotely sensed observations and crop growth model-simulated state variables decreases the reliability of crop yield estimates. To overcome this problem, we used a two-step data assimilation approach: first we generated a complete leaf area index (LAI) time series by combin...
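
    The core of a Kalman-filter-based LAI assimilation is the measurement update, in which the modelled state is nudged toward the observation in proportion to the Kalman gain. The scalar sketch below shows that generic update only; it is not the specific WOFOST coupling or the two-step scheme of the manuscript.

        # Generic scalar Kalman measurement update for an LAI state; illustration only,
        # not the WOFOST-specific assimilation of the manuscript.
        def kalman_update(lai_model, p_model, lai_obs, r_obs):
            """Blend modelled LAI (variance p_model) with observed LAI (variance r_obs)."""
            gain = p_model / (p_model + r_obs)           # Kalman gain
            lai_updated = lai_model + gain * (lai_obs - lai_model)
            p_updated = (1.0 - gain) * p_model           # reduced uncertainty after the update
            return lai_updated, p_updated

        print(kalman_update(lai_model=3.2, p_model=0.4, lai_obs=2.6, r_obs=0.2))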

  13. A Time Integration Algorithm Based on the State Transition Matrix for Structures with Time Varying and Nonlinear Properties

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2003-01-01

    A variable-order method of integrating the structural dynamics equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time-variant and nonlinear systems of equations. When the time variation of the system can be modeled exactly by a polynomial, it produces nearly exact solutions for a wide range of time step sizes. Solutions of a model nonlinear dynamic response exhibiting chaotic behavior have been computed. The accuracy of the method has been demonstrated by comparison with solutions obtained by established methods.
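
    For a linear time-invariant system the state transition matrix over one step is the matrix exponential of the system matrix times the step size, which propagates the state exactly regardless of step length. The sketch below illustrates that building block on a hypothetical damped oscillator; the variable-order treatment of time-varying and nonlinear terms developed in the report is not reproduced.

        # State-transition-matrix propagation x(t+dt) = expm(A*dt) x(t) for a linear system.
        # Hypothetical single-degree-of-freedom damped oscillator; not the report's
        # variable-order time-varying/nonlinear formulation.
        import numpy as np
        from scipy.linalg import expm

        omega, zeta = 2.0 * np.pi, 0.02           # natural frequency (rad/s), damping ratio
        A = np.array([[0.0, 1.0],
                      [-omega**2, -2.0 * zeta * omega]])

        dt = 0.05
        phi = expm(A * dt)                        # state transition matrix for one step

        x = np.array([1.0, 0.0])                  # initial displacement and velocity
        for _ in range(200):                      # march 10 s in steps of dt
            x = phi @ x
        print("displacement after 10 s:", x[0])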

  14. A real-time and closed-loop control algorithm for cascaded multilevel inverter based on artificial neural network.

    PubMed

    Wang, Libing; Mao, Chengxiong; Wang, Dan; Lu, Jiming; Zhang, Junfeng; Chen, Xun

    2014-01-01

    In order to control the cascaded H-bridge (CHB) converter with a staircase modulation strategy in a real-time manner, a real-time, closed-loop control algorithm based on an artificial neural network (ANN) for a three-phase CHB converter is proposed in this paper. It requires little computation time and memory, and it has two steps. In the first step, a hierarchical particle swarm optimizer with time-varying acceleration coefficients (HPSO-TVAC) is employed to minimize the total harmonic distortion (THD) and generate the optimal switching angles offline. In the second step, part of the optimal switching angles is used to train an ANN, and the well-designed ANN can then generate optimal switching angles in a real-time manner. Compared with a previous real-time algorithm, the proposed algorithm is suitable for a wider range of modulation indices and results in a smaller THD and a lower calculation time. Furthermore, the well-designed ANN is embedded into a closed-loop control algorithm for a CHB converter with variable direct-current (DC) voltage sources. Simulation results demonstrate that the proposed closed-loop control algorithm is able to quickly stabilize the load voltage and minimize the line current's THD (<5%) when subjected to DC-source or load disturbances. In the actual design stage, a switching angle pulse generation scheme is proposed, and experimental results verify its correctness.
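
    The quantity minimized offline in the first step can be illustrated with the standard Fourier-series expression for a staircase waveform built from switching angles, from which a total harmonic distortion value follows. The angles in the sketch below are placeholders, not the HPSO-TVAC optimized angles of the paper.

        # THD of a quarter-wave-symmetric staircase waveform with equal DC levels:
        # odd harmonic h has amplitude (4/(pi*h)) * sum(cos(h*theta_i)). The angles here
        # are placeholders, not the optimized angles of the paper.
        import numpy as np

        def staircase_harmonic(h, angles_rad):
            return (4.0 / (np.pi * h)) * np.sum(np.cos(h * angles_rad))

        def thd(angles_rad, max_harmonic=49):
            fundamental = staircase_harmonic(1, angles_rad)
            harmonics = [staircase_harmonic(h, angles_rad) for h in range(3, max_harmonic + 1, 2)]
            return np.sqrt(np.sum(np.square(harmonics))) / abs(fundamental)

        angles = np.radians([12.0, 25.0, 41.0, 58.0, 76.0])   # hypothetical switching angles
        print(f"THD = {100 * thd(angles):.2f} %")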

  15. Gait characteristics under different walking conditions: Association with the presence of cognitive impairment in community-dwelling older people

    PubMed Central

    Fransen, Erik; Perkisas, Stany; Verhoeven, Veronique; Beauchet, Olivier; Remmen, Roy

    2017-01-01

    Background Gait characteristics measured at usual pace may allow profiling in patients with cognitive problems. The influence of age, gender, leg length, modified speed or dual tasking is unclear. Methods Cross-sectional analysis was performed on a data registry containing demographic, physical and spatial-temporal gait parameters recorded in five walking conditions with a GAITRite® electronic carpet in community-dwelling older persons with memory complaints. Four cognitive stages were studied: cognitively healthy individuals, patients with mild cognitive impairment, patients with mild dementia and patients with advanced dementia. Results The association between spatial-temporal gait characteristics and cognitive stages was most prominent: in the entire study population, using gait speed, steps per meter (a translation of mean step length), swing time variability, normalised gait speed (corrected for leg length) and normalised steps per meter at all five walking conditions; in the 50-to-70-year-old participants, using step width at fast pace and steps per meter at usual pace; in the 70-to-80-year-old persons, using gait speed and normalised gait speed at usual pace, fast pace, animal walk and counting walk, or steps per meter and normalised steps per meter at all five walking conditions; and in the over-80-year-old participants, using gait speed, normalised gait speed, steps per meter and normalised steps per meter at fast pace and animal dual-task walking. Multivariable logistic regression analysis adjusted for gender predicted, in two compiled models, the presence of dementia or cognitive impairment with acceptable accuracy in persons with memory complaints. Conclusion Gait parameters in multiple walking conditions adjusted for age, gender and leg length showed a significant association with cognitive impairment. This study suggests that multifactorial gait analysis could be more informative than gait analysis with only one test or one variable. Using this type of gait analysis in clinical practice could facilitate screening for cognitive impairment. PMID:28570662

  16. Statistical assessment of DNA extraction reagent lot variability in real-time quantitative PCR

    USGS Publications Warehouse

    Bushon, R.N.; Kephart, C.M.; Koltun, G.F.; Francy, D.S.; Schaefer, F. W.; Lindquist, H.D. Alan

    2010-01-01

    Aims: The aim of this study was to evaluate the variability in lots of a DNA extraction kit using real-time PCR assays for Bacillus anthracis, Francisella tularensis and Vibrio cholerae. Methods and Results: Replicate aliquots of three bacteria were processed in duplicate with three different lots of a commercial DNA extraction kit. This experiment was repeated in triplicate. Results showed that cycle threshold values were statistically different among the different lots. Conclusions: Differences in DNA extraction reagent lots were found to be a significant source of variability for qPCR results. Steps should be taken to ensure the quality and consistency of reagents. Minimally, we propose that standard curves should be constructed for each new lot of extraction reagents, so that lot-to-lot variation is accounted for in data interpretation. Significance and Impact of the Study: This study highlights the importance of evaluating variability in DNA extraction procedures, especially when different reagent lots are used. Consideration of this variability in data interpretation should be an integral part of studies investigating environmental samples with unknown concentrations of organisms. © 2010 The Society for Applied Microbiology.

  17. The effect of cane use on the compensatory step following posterior perturbations.

    PubMed

    Hall, Courtney D; Jensen, Jody L

    2004-08-01

    The compensatory step is a critical component of the balance response and is impaired in older fallers. The purpose of this research was to examine whether utilization of a cane modified the compensatory step response following external posterior perturbations. Single subject withdrawal design was employed. Single subject statistical analysis--the standard deviation bandwidth-method--supplemented visual analysis of the data. Four older adults (range: 73-83 years) with balance impairment who habitually use a cane completed this study. Subjects received a series of sudden backward pulls that were large enough to elicit compensatory stepping. We examined the following variables both with and without cane use: timing of cane loading relative to step initiation and center of mass acceleration, stability margin, center of mass excursion and velocity, step length and width. No participant loaded the cane prior to initiation of the first compensatory step. There was no effect of cane use on the stability margin, nor was there an effect of cane use on center of mass excursion or velocity, or step length or width. These data suggest that cane use does not necessarily improve balance recovery following an external posterior perturbation when the individual is forced to rely on compensatory stepping. Instead these data suggest that the strongest factor in modifying step characteristics is experience with the perturbation.

  18. Meteorological factors for PM10 concentration levels in Northern Spain

    NASA Astrophysics Data System (ADS)

    Santurtún, Ana; Mínguez, Roberto; Villar-Fernández, Alejandro; González Hidalgo, Juan Carlos; Zarrabeitia, María Teresa

    2013-04-01

    Atmospheric particulate matter (PM) is made up of a mixture of solid and aqueous species which enter the atmosphere by anthropogenic and natural pathways. The levels and composition of ambient air PM depend on the climatology and on the geography (topography, soil cover, proximity to arid zones or to the coast) of a given region. Spain has particular difficulties in achieving compliance with the limit values established by the European Union (based on recommendations from the World Health Organization) for particulate matter on the order of 10 micrometers in diameter or less (PM10), but anthropogenic emissions are not solely responsible for this: some studies show that PM10 concentrations originating from these kinds of sources are similar to what is found in other European countries, while some of the geographical features of the Iberian Peninsula (such as African mineral dust intrusion, soil aridity or rainfall) have been shown to be a factor in higher PM concentrations. This work aims to describe PM10 concentration levels in Cantabria (Northern Spain) and their relationship with the following meteorological variables: rainfall, solar radiation, temperature, barometric pressure and wind speed. The data consist of daily series obtained from hourly records for the 2000-2010 period of PM10 concentrations from 4 different urban-background stations, and daily series of the meteorological variables provided by the Spanish National Meteorology Agency. The method used for establishing the relationships between these variables consists of several steps: i) fitting a non-stationary probability density function for each variable, accounting for long-term trends, seasonality during the year and possible seasonality during the week to distinguish between work days and weekend days, ii) using the marginal distribution function obtained, transforming the time series of historical values of each variable into a normalized Gaussian time series (this step allows time series models to be used consistently), iii) fitting a time series model (autoregressive moving average, ARMA) to the transformed historical values in order to eliminate the temporal autocorrelation structure of each stochastic process, obtaining a white noise for each variable, and finally, iv) calculating cross correlations between the white noises at different time lags. These cross correlations allow characterization of the true correlation between signals, avoiding the problems induced by data scaling or by the autocorrelation inherent to each signal. The results provide the relationship with, and the possible contribution to, PM10 concentration levels associated with each meteorological variable. This information can be used to improve the forecasting of PM10 concentration levels using existing meteorological forecasts.
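
    Steps ii) to iv) can be illustrated compactly: transform each series to Gaussian scores through its (here, empirical) marginal distribution, remove the autocorrelation with a low-order ARMA fit, and cross-correlate the resulting residuals. The sketch below uses synthetic stand-ins for the PM10 and wind series and replaces the non-stationary marginal fit of step i) with a simple rank-based transform.

        # Gaussianize, whiten with ARMA, then cross-correlate the residuals.
        # Synthetic stand-ins for the PM10 and wind series; step i) is simplified to a
        # rank-based transform.
        import numpy as np
        from scipy.stats import norm, rankdata
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(3)
        n = 1000

        def ar1(n, phi, scale):
            """Generate a simple AR(1) series to mimic autocorrelated daily data."""
            x = np.zeros(n)
            eps = rng.normal(0.0, scale, n)
            for i in range(1, n):
                x[i] = phi * x[i - 1] + eps[i]
            return x

        wind = 5.0 + ar1(n, 0.6, 1.0)                      # hypothetical daily wind speed
        pm10 = 40.0 - 3.0 * wind + ar1(n, 0.5, 4.0)        # PM10 lower on windy days

        def to_gaussian(series):
            """Step ii): rank-based transform of the series to standard normal scores."""
            u = (rankdata(series) - 0.5) / len(series)
            return norm.ppf(u)

        def whiten(series):
            """Step iii): fit a low-order ARMA model and keep its residuals (approximate white noise)."""
            return ARIMA(series, order=(1, 0, 1)).fit().resid

        e_wind = whiten(to_gaussian(wind))
        e_pm10 = whiten(to_gaussian(pm10))

        # Step iv): cross-correlate the whitened series (lag 0 shown here)
        print(f"lag-0 cross-correlation: {np.corrcoef(e_wind, e_pm10)[0, 1]:.2f}")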

  19. Timing variability of reach trajectories in left versus right hemisphere stroke.

    PubMed

    Freitas, Sandra Maria Sbeghen Ferreira; Gera, Geetanjali; Scholz, John Peter

    2011-10-24

    This study investigated trajectory timing variability in right and left stroke survivors and healthy controls when reaching to a centrally located target under a fixed target condition or when the target could suddenly change position after reach onset. Trajectory timing variability was investigated with a novel method based on dynamic programming that identifies the steps required to time warp one trial's acceleration time series to match that of a reference trial. Greater trajectory timing variability of both hand and joint motions was found for the paretic arm of stroke survivors compared to their non-paretic arm or either arm of controls. Overall, the non-paretic left arm of the LCVA group and the left arm of controls had higher timing variability than the non-paretic right arm of the RCVA group and right arm of controls. The shoulder and elbow joint warping costs were consistent predictors of the hand's warping cost for both left and right arms only in the LCVA group, whereas the relationship between joint and hand warping costs was relatively weak in control subjects and less consistent across arms in the RCVA group. These results suggest that the left hemisphere may be more involved in trajectory timing, although the results may be confounded by skill differences between the arms in these right hand dominant participants. On the other hand, arm differences did not appear to be related to differences in targeting error. The paretic left arm of the RCVA exhibited greater trajectory timing variability than the paretic right arm of the LCVA group. This difference was highly correlated with the level of impairment of the arms. Generally, the effect of target uncertainty resulted in slightly greater trajectory timing variability for all participants. The results are discussed in light of previous studies of hemispheric differences in the control of reaching, in particular, left hemisphere specialization for temporal control of reaching movements. Copyright © 2011 Elsevier B.V. All rights reserved.
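
    The dynamic-programming alignment underlying the warping-cost measure can be sketched with a standard dynamic time warping recursion that accumulates the cheapest alignment cost between a trial and a reference series. The implementation below is a generic illustration of the idea, not the authors' exact cost definition or acceleration preprocessing.

        # Generic dynamic time warping cost between a trial and a reference series.
        # Illustration of the dynamic-programming alignment idea; not the authors' exact
        # cost definition.
        import numpy as np

        def warping_cost(series, reference):
            n, m = len(series), len(reference)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = abs(series[i - 1] - reference[j - 1])
                    # extend the cheapest of: match, stretch the series, stretch the reference
                    cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
            return cost[n, m]

        t = np.linspace(0, 1, 100)
        reference = np.sin(2 * np.pi * 2 * t)                 # reference acceleration profile
        shifted = np.sin(2 * np.pi * 2 * (t - 0.05))          # same profile, slightly delayed
        print(f"warping cost vs reference:        {warping_cost(shifted, reference):.2f}")
        print(f"warping cost of reference itself: {warping_cost(reference, reference):.2f}")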

  20. TIMING VARIABILITY OF REACH TRAJECTORIES IN LEFT VERSUS RIGHT HEMISPHERE STROKE

    PubMed Central

    Freitas, Sandra Maria Sbeghen Ferreira; Gera, Geetanjali; Scholz, John Peter

    2011-01-01

    This study investigated trajectory timing variability in right and left stroke survivors and healthy controls when reaching to a centrally located target under a fixed target condition or when the target could suddenly change position after reach onset. Trajectory timing variability was investigated with a novel method based on dynamic programming that identifies the steps required to time warp one trial’s acceleration time series to match that of a reference trial. Greater trajectory timing variability of both hand and joint motions was found for the paretic arm of stroke survivors compared to their non-paretic arm or either arm of controls. Overall, the non-paretic left arm of the LCVA group and the left arm of controls had higher timing variability than the non-paretic right arm of the RCVA group and right arm of controls. The shoulder and elbow joint warping costs were consistent predictors of the hand’s warping cost for both left and right arms only in the LCVA group, whereas the relationship between joint and hand warping costs was relatively weak in control subjects and less consistent across arms in the RCVA group. These results suggest that the left hemisphere may be more involved in trajectory timing, although the results may be confounded by skill differences between the arms in these right hand dominant participants. On the other hand, arm differences did not appear to be related to differences in targeting error. The paretic left arm of the RCVA exhibited greater trajectory timing variability than the paretic right arm of the LCVA group. This difference was highly correlated with the level of impairment of the arms. Generally, the effect of target uncertainty resulted in slightly greater trajectory timing variability for all participants. The results are discussed in light of previous studies of hemispheric differences in the control of reaching, in particular, left hemisphere specialization for temporal control of reaching movements. PMID:21920508

  1. Defining process design space for a hydrophobic interaction chromatography (HIC) purification step: application of quality by design (QbD) principles.

    PubMed

    Jiang, Canping; Flansburg, Lisa; Ghose, Sanchayita; Jorjorian, Paul; Shukla, Abhinav A

    2010-12-15

    The concept of design space has been taking root under the quality by design paradigm as a foundation of in-process control strategies for biopharmaceutical manufacturing processes. This paper outlines the development of a design space for a hydrophobic interaction chromatography (HIC) process step. The design space included the impact of raw material lot-to-lot variability and variations in the feed stream from cell culture. A failure modes and effects analysis was employed as the basis for the process characterization exercise. During mapping of the process design space, the multi-dimensional combination of operational variables were studied to quantify the impact on process performance in terms of yield and product quality. Variability in resin hydrophobicity was found to have a significant influence on step yield and high-molecular weight aggregate clearance through the HIC step. A robust operating window was identified for this process step that enabled a higher step yield while ensuring acceptable product quality. © 2010 Wiley Periodicals, Inc.

  2. Effects of 16-week spinning and bicycle exercise on body composition, physical fitness and blood variables of middle school students

    PubMed Central

    Yoon, Jang-Gun; Kim, Seok-Hee; Rhyu, Hyun-Seung

    2017-01-01

    The purpose of this study was to investigate the effects of 16 weeks of spinning and bicycling exercise on body composition, physical fitness and blood variables in female adolescents. The subjects of this study were 24 female middle school students (12 spinning cycles, 12 general bicycles) attending Seoul Yeoksam middle school. Each group trained for 16 weeks, 3 times a week, 1 hr per day after school. Body composition, physical fitness (1,200 running, sit-ups, back strength, sit and reach, side-steps) and blood variables (low-density lipoprotein cholesterol, glucose, reactive oxygen species, and malondialdehyde) were examined before and after 16 weeks of training. As the results show, body weight did not show any significant difference; however, body mass index and % body fat differed significantly in the spinning group. Improvements in physical fitness factors were observed in both groups and were greater in the spinning group for sit-ups, back strength, and side steps. Blood parameters were significantly different between groups, while group-by-time interactions were significant for glucose and reactive oxygen species. In conclusion, this study suggests that 16 weeks of bicycle exercise produced positive changes in body composition, physical fitness and blood constituents, and that the spinning cycle is more beneficial than an ordinary bicycle. PMID:29114504

  3. Interface induced spin-orbit interaction in silicon quantum dots and prospects of scalability

    NASA Astrophysics Data System (ADS)

    Ferdous, Rifat; Wai, Kok; Veldhorst, Menno; Hwang, Jason; Yang, Henry; Klimeck, Gerhard; Dzurak, Andrew; Rahman, Rajib

    A scalable quantum computing architecture requires reproducibility of key qubit properties, such as resonance frequency and coherence time. Randomness in these properties would necessitate individual knowledge of each qubit in a quantum computer. Spin qubits hosted in silicon (Si) quantum dots (QDs) are promising building blocks for a large-scale quantum computer because of their long coherence times. The Stark shift of the electron g-factor in these QDs has been used to selectively address multiple qubits. Using atomistic tight-binding simulations, we investigated the effect of interface non-ideality on the Stark shift of the g-factor in a Si QD. We find that both the sign and the magnitude of the Stark shift change depending on the location of a monoatomic step at the interface relative to the dot center. Thus the presence of interface steps in these devices will cause variability in the electron g-factor and its Stark shift depending on the location of the qubit. This behavior will also cause varying sensitivity to charge noise from one qubit to another, which will randomize the dephasing times T2*. This predicted device-to-device variability was recently observed experimentally in three qubits fabricated at a Si/SiO2 interface, which validates the issues discussed.

  4. Effects of Imperfect Dynamic Clamp: Computational and Experimental Results

    PubMed Central

    Bettencourt, Jonathan C.; Lillis, Kyle P.; White, John A.

    2008-01-01

    In the dynamic clamp technique, a typically nonlinear feedback system delivers to an excitable cell an electrical current that represents the actions of “virtual” ion channels (e.g., channels that are gated by local membrane potential or by electrical activity in neighboring biological or virtual neurons). Since the conception of this technique, there have been a number of different implementations of dynamic clamp systems, each with differing levels of flexibility and performance. Embedded hardware-based systems typically offer feedback that is very fast and precisely timed, but these systems are often expensive and sometimes inflexible. PC-based systems, on the other hand, allow the user to write software that defines an arbitrarily complex feedback system, but feedback accuracy in PC-based systems can be degraded by imperfect real-time performance. Here we systematically evaluate the performance requirements for artificial dynamic clamp knock-in of transient sodium and delayed rectifier potassium conductances. Specifically, we examine the effects of controller time step duration, differential equation integration method, jitter (variability in time step), and latency (the time lag from reading inputs to updating outputs). Each of these control system flaws is artificially introduced in both simulated and real dynamic clamp experiments. We demonstrate that each of these errors affects dynamic clamp accuracy in a way that depends on the time constants and stiffness of the differential equations being solved. In simulations, time steps above 0.2 ms lead to catastrophic alteration of spike shape, but the frequency-vs.-current relationship is much more robust. Latency (the part of the time step that occurs between measuring membrane potential and injecting re-calculated membrane current) is a crucial factor as well. Experimental data are substantially more sensitive to inaccuracies than simulated data. PMID:18076999
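
    The sketch below illustrates, under stated assumptions, how the control-system flaws examined above enter a dynamic-clamp loop: a virtual conductance current is recomputed at a coarser controller interval, with jitter on that interval and a latency between reading the membrane potential and updating the injected current. The cell here is a passive membrane with illustrative parameters, not the sodium/potassium knock-in of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Membrane and "virtual" conductance parameters (illustrative, passive cell)
C, g_leak, E_leak = 1.0, 0.1, -65.0      # capacitance, leak conductance, reversal
g_virt, E_virt = 0.5, -80.0              # virtual conductance to knock in
I_drive = 1.0                            # constant depolarizing drive
dt_cell = 0.01                           # ms, fine integration of the cell
dt_ctrl = 0.2                            # ms, controller update interval
latency = 0.1                            # ms, lag from reading V to updating I
jitter_sd = 0.02                         # ms, variability of the update interval

V, I_inj = -65.0, 0.0
t, next_update = 0.0, 0.0
V_read, read_time = V, 0.0

while t < 100.0:
    # controller: sample V at (jittered) intervals, apply the new current later
    if t >= next_update:
        V_read, read_time = V, t
        next_update = t + dt_ctrl + jitter_sd * rng.standard_normal()
    if t >= read_time + latency:
        I_inj = -g_virt * (V_read - E_virt)   # current computed from a stale V
    # cell dynamics on a finer grid (forward Euler)
    V += dt_cell * (-g_leak * (V - E_leak) + I_inj + I_drive) / C
    t += dt_cell

print(f"final membrane potential: {V:.2f} mV")
```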

  5. Modeling temporal and large-scale spatial variability of soil respiration from soil water availability, temperature and vegetation productivity indices

    NASA Astrophysics Data System (ADS)

    Reichstein, Markus; Rey, Ana; Freibauer, Annette; Tenhunen, John; Valentini, Riccardo; Banza, Joao; Casals, Pere; Cheng, Yufu; Grünzweig, Jose M.; Irvine, James; Joffre, Richard; Law, Beverly E.; Loustau, Denis; Miglietta, Franco; Oechel, Walter; Ourcival, Jean-Marc; Pereira, Joao S.; Peressotti, Alessandro; Ponti, Francesca; Qi, Ye; Rambal, Serge; Rayment, Mark; Romanya, Joan; Rossi, Federica; Tedeschi, Vanessa; Tirone, Giampiero; Xu, Ming; Yakir, Dan

    2003-12-01

    Field-chamber measurements of soil respiration from 17 different forest and shrubland sites in Europe and North America were summarized and analyzed with the goal to develop a model describing seasonal, interannual and spatial variability of soil respiration as affected by water availability, temperature, and site properties. The analysis was performed at a daily and at a monthly time step. With the daily time step, the relative soil water content in the upper soil layer expressed as a fraction of field capacity was a good predictor of soil respiration at all sites. Among the site variables tested, those related to site productivity (e.g., leaf area index) correlated significantly with soil respiration, while carbon pool variables like standing biomass or the litter and soil carbon stocks did not show a clear relationship with soil respiration. Furthermore, it was evidenced that the effect of precipitation on soil respiration stretched beyond its direct effect via soil moisture. A general statistical nonlinear regression model was developed to describe soil respiration as dependent on soil temperature, soil water content, and site-specific maximum leaf area index. The model explained nearly two thirds of the temporal and intersite variability of soil respiration with a mean absolute error of 0.82 μmol m-2 s-1. The parameterized model exhibits the following principal properties: (1) At a relative amount of upper-layer soil water of 16% of field capacity, half-maximal soil respiration rates are reached. (2) The apparent temperature sensitivity of soil respiration measured as Q10 varies between 1 and 5 depending on soil temperature and water content. (3) Soil respiration under reference moisture and temperature conditions is linearly related to maximum site leaf area index. At a monthly timescale, we employed the approach by [2002] that used monthly precipitation and air temperature to globally predict soil respiration (T&P model). While this model was able to explain some of the month-to-month variability of soil respiration, it failed to capture the intersite variability, regardless of whether the original or a new optimized model parameterization was used. In both cases, the residuals were strongly related to maximum site leaf area index. Thus, for a monthly timescale, we developed a simple T&P&LAI model that includes leaf area index as an additional predictor of soil respiration. This extended but still simple model performed nearly as well as the more detailed time step model and explained 50% of the overall and 65% of the site-to-site variability. Consequently, better estimates of globally distributed soil respiration should be obtained with the new model driven by satellite estimates of leaf area index. Before application at the continental or global scale, this approach should be further tested in boreal, cold-temperate, and tropical biomes as well as for non-woody vegetation.
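
    A minimal sketch of the kind of model structure described above: a temperature response multiplied by a soil-water limitation that is half-maximal at 16% of field capacity and by a linear scaling with maximum site leaf area index. The functional forms and parameter values below are placeholders, not the fitted model of the paper.

```python
import numpy as np

def soil_respiration(T_soil, rsw, lai_max,
                     r_ref=3.0, e0=300.0, t_ref=15.0, t0=-46.0, k_sw=0.16):
    """Illustrative model: temperature response x soil-water limitation
    (half-maximal at rsw = k_sw, i.e. 16% of field capacity) x a linear
    scaling with maximum site LAI. Parameter values are placeholders."""
    f_temp = np.exp(e0 * (1.0 / (t_ref - t0) - 1.0 / (T_soil - t0)))
    f_water = rsw / (k_sw + rsw)          # hyperbolic water limitation
    f_site = lai_max / 5.0                # site-productivity scaling
    return r_ref * f_temp * f_water * f_site   # micromol m-2 s-1

# daily-time-step style evaluation for a warm, moderately moist day
print(f"{soil_respiration(T_soil=20.0, rsw=0.5, lai_max=4.0):.2f} micromol m-2 s-1")
```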

  6. A comparative modeling analysis of multiscale temporal variability of rainfall in Australia

    NASA Astrophysics Data System (ADS)

    Samuel, Jos M.; Sivapalan, Murugesu

    2008-07-01

    The effects of long-term natural climate variability and human-induced climate change on rainfall variability have become the focus of much concern and recent research efforts. In this paper, we present the results of a comparative analysis of observed multiscale temporal variability of rainfall in the Perth, Newcastle, and Darwin regions of Australia. This empirical and stochastic modeling analysis explores multiscale rainfall variability, i.e., ranging from short to long term, including within-storm patterns, and intra-annual, interannual, and interdecadal variabilities, using data taken from each of these regions. The analyses investigated how storm durations, interstorm periods, and average storm rainfall intensities differ for different climate states and demonstrated significant differences in this regard between the three selected regions. In Perth, the average storm intensity is stronger during La Niña years than during El Niño years, whereas in Newcastle and Darwin storm duration is longer during La Niña years. Increase of either storm duration or average storm intensity is the cause of higher average annual rainfall during La Niña years as compared to El Niño years. On the other hand, within-storm variability does not differ significantly between different ENSO states in all three locations. In the case of long-term rainfall variability, the statistical analyses indicated that in Newcastle the long-term rainfall pattern reflects the variability of the Interdecadal Pacific Oscillation (IPO) index, whereas in Perth and Darwin the long-term variability exhibits a step change in average annual rainfall (up in Darwin and down in Perth) which occurred around 1970. The step changes in Perth and Darwin and the switch in IPO states in Newcastle manifested differently in the three study regions in terms of changes in the annual number of rainy days or the average daily rainfall intensity or both. On the basis of these empirical data analyses, a stochastic rainfall time series model was developed that incorporates the entire range of multiscale variabilities observed in each region, including within-storm, intra-annual, interannual, and interdecadal variability. Such ability to characterize, model, and synthetically generate realistic time series of rainfall intensities is essential for addressing many hydrological problems, including estimation of flood and drought frequencies, pesticide risk assessment, and landslide frequencies.

  7. Implicit unified gas-kinetic scheme for steady state solutions in all flow regimes

    NASA Astrophysics Data System (ADS)

    Zhu, Yajun; Zhong, Chengwen; Xu, Kun

    2016-06-01

    This paper presents an implicit unified gas-kinetic scheme (UGKS) for non-equilibrium steady state flow computation. The UGKS is a direct modeling method for flow simulation in all regimes with the updates of both macroscopic flow variables and microscopic gas distribution function. By solving the macroscopic equations implicitly, a predicted equilibrium state can be obtained first through iterations. With the newly predicted equilibrium state, the evolution equation of the gas distribution function and the corresponding collision term can be discretized in a fully implicit way for fast convergence through iterations as well. The lower-upper symmetric Gauss-Seidel (LU-SGS) factorization method is implemented to solve both macroscopic and microscopic equations, which improves the efficiency of the scheme. Since the UGKS is a direct modeling method and its physical solution depends on the mesh resolution and the local time step, a physical time step needs to be fixed before using an implicit iterative technique with a pseudo-time marching step. Therefore, the physical time step in the current implicit scheme is determined in the same way as in the explicit UGKS for capturing the physical solution in all flow regimes, but the convergence to a steady state speeds up through the adoption of a numerical time step with large CFL number. Many numerical test cases in different flow regimes from low speed to hypersonic ones, such as the Couette flow, cavity flow, and the flow passing over a cylinder, are computed to validate the current implicit method. The overall efficiency of the implicit UGKS can be improved by one or two orders of magnitude in comparison with the explicit one.
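
    The UGKS itself is beyond a short snippet, but the core idea of converging toward a steady state with a large-CFL implicit pseudo-time iteration, rather than a stability-limited explicit one, can be illustrated on a model linear system; the matrix, step sizes and tolerances below are arbitrary.

```python
import numpy as np

# Model problem: drive du/dt = -A u + b to the steady state A u = b.
# An implicit update (I/dtau + A) du = r tolerates a very large pseudo-time
# step, so it converges in far fewer iterations than an explicit update,
# whose step is limited by stability.
rng = np.random.default_rng(0)
n = 50
A = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
b = rng.random(n)
u_exact = np.linalg.solve(A, b)

def march(dtau, implicit, max_iters=2000):
    u, I = np.zeros(n), np.eye(n)
    for it in range(max_iters):
        r = b - A @ u                                   # steady-state residual
        du = np.linalg.solve(I / dtau + A, r) if implicit else dtau * r
        u += du
        if np.linalg.norm(r) < 1e-10:
            break
    return it + 1, np.linalg.norm(u - u_exact)

print("explicit, dtau=0.4 :", march(0.4, implicit=False))
print("implicit, dtau=100 :", march(100.0, implicit=True))
```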

  8. Understanding the significance variables for fabrication of fish gelatin nanoparticles by Plackett-Burman design

    NASA Astrophysics Data System (ADS)

    Subara, Deni; Jaswir, Irwandi; Alkhatib, Maan Fahmi Rashid; Noorbatcha, Ibrahim Ali

    2018-01-01

    The aim of this experiment was to screen and understand the process variables in the fabrication of fish gelatin nanoparticles using a quality-by-design approach. The most influential process variables were screened using a Plackett-Burman design. Mean particle size, size distribution, and zeta potential were found to be approximately 240±9.76 nm, 0.3, and -9 mV, respectively. The statistical results showed that the concentration of acetone, the pH of the solution during the precipitation step, and the volume of cross-linker had the most significant effects on the particle size of the fish gelatin nanoparticles. Time and chemical consumption were also lower than in previous research. This study reveals the potential of quality by design for understanding the effects of process variables on fish gelatin nanoparticle production.
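
    A small sketch of a Plackett-Burman screening analysis of the kind described: a 12-run, 11-factor two-level design built from the commonly tabulated cyclic generator, with main effects estimated as high-minus-low response means. The response values are simulated and the factor labels are placeholders, not the actual experimental data.

```python
import numpy as np

# 12-run Plackett-Burman design for up to 11 two-level factors: the first
# row is the commonly tabulated cyclic generator, later rows are shifts of
# it, plus a final all-minus run. Responses below are simulated.
gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
design = np.vstack([np.roll(gen, i) for i in range(11)] + [-np.ones(11, dtype=int)])

rng = np.random.default_rng(1)
# hypothetical response (e.g. particle size in nm) driven by factors 1 and 4
response = 240 + 15 * design[:, 0] - 10 * design[:, 3] + rng.normal(0, 3, 12)

# main effect = mean response at the high level minus mean at the low level
effects = 2 * design.T @ response / len(response)
for j, eff in enumerate(effects, start=1):
    print(f"factor {j:2d}: main effect = {eff:+6.1f}")
```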

  9. TRUMP. Transient & S-State Temperature Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elrod, D.C.; Turner, W.D.

    1992-03-03

    TRUMP solves a general nonlinear parabolic partial differential equation describing flow in various kinds of potential fields, such as fields of temperature, pressure, or electricity and magnetism; simultaneously, it will solve two additional equations representing, in thermal problems, heat production by decomposition of two reactants having rate constants with a general Arrhenius temperature dependence. Steady-state and transient flow in one, two, or three dimensions are considered in geometrical configurations having simple or complex shapes and structures. Problem parameters may vary with spatial position, time, or primary dependent variables, temperature, pressure, or field strength. Initial conditions may vary with spatial position, and among the criteria that may be specified for ending a problem are upper and lower limits on the size of the primary dependent variable, upper limits on the problem time or on the number of time-steps or on the computer time, and attainment of steady state.
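
    The sketch below mimics, in a very reduced form, the kind of problem and stopping criteria listed above: explicit time stepping of a 1-D conduction problem with temperature-dependent properties, terminated on attainment of steady state or on limits on the number of time-steps or the problem time. It is not TRUMP's input format or numerics; all values are illustrative.

```python
import numpy as np

# 1-D transient conduction with temperature-dependent diffusivity, explicit
# forward-in-time stepping, and the three kinds of stopping criteria named
# above. Geometry, properties, and limits are illustrative, not TRUMP input.
n, dx, dt = 50, 0.01, 0.2                # nodes, m, s
T = np.full(n, 20.0)                     # initial temperature, C
T[0], T[-1] = 100.0, 20.0                # fixed boundary temperatures

def alpha(temp):                         # diffusivity varying with temperature
    return 1e-5 * (1.0 + 0.002 * temp)   # m^2/s

t, step = 0.0, 0
max_steps, max_time, steady_tol = 200_000, 5.0e4, 1e-6

while step < max_steps and t < max_time:
    lap = (T[:-2] - 2.0 * T[1:-1] + T[2:]) / dx**2
    dT = dt * alpha(T[1:-1]) * lap
    T[1:-1] += dT
    t, step = t + dt, step + 1
    if np.max(np.abs(dT)) < steady_tol:  # attainment of steady state
        break

print(f"stopped after {step} steps, t = {t:.0f} s, mid-plane T = {T[n // 2]:.1f} C")
```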

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elrod, D.C.; Turner, W.D.

    TRUMP solves a general nonlinear parabolic partial differential equation describing flow in various kinds of potential fields, such as fields of temperature, pressure, or electricity and magnetism; simultaneously, it will solve two additional equations representing, in thermal problems, heat production by decomposition of two reactants having rate constants with a general Arrhenius temperature dependence. Steady-state and transient flow in one, two, or three dimensions are considered in geometrical configurations having simple or complex shapes and structures. Problem parameters may vary with spatial position, time, or primary dependent variables, temperature, pressure, or field strength. Initial conditions may vary with spatial position, and among the criteria that may be specified for ending a problem are upper and lower limits on the size of the primary dependent variable, upper limits on the problem time or on the number of time-steps or on the computer time, and attainment of steady state.

  11. Predictable 'meta-mechanisms' emerge from feedbacks between transpiration and plant growth and cannot be simply deduced from short-term mechanisms.

    PubMed

    Tardieu, François; Parent, Boris

    2017-06-01

    Growth under water deficit is controlled by short-term mechanisms but, because of numerous feedbacks, the combination of these mechanisms over time often results in outputs that cannot be deduced from the simple inspection of individual mechanisms. It can be analysed with dynamic models in which causal relationships between variables are considered at each time-step, allowing calculation of outputs that are routed back to inputs for the next time-step and that can change the system itself. We first review physiological mechanisms involved in seven feedbacks of transpiration on plant growth, involving changes in tissue hydraulic conductance, stomatal conductance, plant architecture and underlying factors such as hormones or aquaporins. The combination of these mechanisms over time can result in non-straightforward conclusions as shown by examples of simulation outputs: 'overproduction of abscisic acid (ABA) can cause a lower concentration of ABA in the xylem sap', 'decreasing root hydraulic conductance when evaporative demand is maximum can improve plant performance' and 'rapid root growth can decrease yield'. Systems of equations simulating feedbacks over numerous time-steps result in logical and reproducible emergent properties that can be viewed as 'meta-mechanisms' at plant level, which have similar roles as mechanisms at cell level. © 2016 John Wiley & Sons Ltd.

  12. Quantum information processing with a travelling wave of light

    NASA Astrophysics Data System (ADS)

    Serikawa, Takahiro; Shiozawa, Yu; Ogawa, Hisashi; Takanashi, Naoto; Takeda, Shuntaro; Yoshikawa, Jun-ichi; Furusawa, Akira

    2018-02-01

    We exploit quantum information processing on a traveling wave of light, expecting emancipation from thermal noise, easy coupling to fiber communication, and potentially high operation speed. Although optical memories are technically challenging, we have an alternative approach for applying multi-step operations to traveling light, that is, continuous-variable one-way computation. So far our achievements include generation of a one-million-mode entangled chain in the time domain, mode engineering of nonlinear resource states, and real-time nonlinear feedforward. Although they are implemented with free-space optics, we are also investigating photonic integration and have performed quantum teleportation with a passive linear waveguide chip as a demonstration of entangling, measurement, and feedforward. We also suggest a loop-based architecture as another model of continuous-variable computing.

  13. A computer software system for the generation of global ocean tides including self-gravitation and crustal loading effects

    NASA Technical Reports Server (NTRS)

    Estes, R. H.

    1977-01-01

    A computer software system is described which computes global numerical solutions of the integro-differential Laplace tidal equations, including dissipation terms and ocean loading and self-gravitation effects, for arbitrary diurnal and semidiurnal tidal constituents. The integration algorithm features a successive approximation scheme for the integro-differential system, with forward differences in the time variable and central differences in the spatial variables. Solutions for the M2, S2, N2, K2, K1, O1, and P1 tidal constituents neglecting the effects of ocean loading and self-gravitation, and a converged M2 solution including ocean loading and self-gravitation effects, are presented in the form of cotidal and corange maps.
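
    A sketch of the finite-difference pattern described above (forward differences in time, central differences in space), demonstrated on a 1-D advection-diffusion model problem rather than on the Laplace tidal equations themselves; the grid spacing, time step and coefficients are illustrative and chosen inside the scheme's stability limits.

```python
import numpy as np

# Forward difference in time, central differences in space, applied to a
# 1-D advection-diffusion model problem.
nx, dx, dt = 200, 0.02, 0.001
c, D = 1.0, 0.1                             # advection speed, diffusivity
x = np.arange(nx) * dx
u = np.exp(-((x - 1.0) / 0.2) ** 2)         # initial disturbance

for _ in range(1000):                       # march forward in time to t = 1
    du_dx = (u[2:] - u[:-2]) / (2.0 * dx)               # central, 1st derivative
    d2u_dx2 = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2  # central, 2nd derivative
    u[1:-1] += dt * (-c * du_dx + D * d2u_dx2)          # forward step in time
    u[0], u[-1] = 0.0, 0.0                              # simple fixed boundaries

print(f"peak at x = {x[np.argmax(u)]:.2f}, amplitude = {u.max():.3f}")
```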

  14. Analysis of astronomical data from optical superconducting tunnel junctions

    NASA Astrophysics Data System (ADS)

    de Bruijne, J. H.; Reynolds, A. P.; Perryman, Michael A.; Favata, Fabio; Peacock, Anthony J.

    2002-06-01

    Currently operating optical superconducting tunnel junction (STJ) detectors, developed at the European Space Agency (ESA), can simultaneously measure the wavelength (Δλ = 50 nm at 500 nm) and arrival time (to within approximately 5 μs) of individual photons in the range 310 to 720 nm with an efficiency of approximately 70%, and with count rates of the order of 5000 photons s-1 per junction. A number of STJs placed in an array format generates 4-D data: photon arrival time, energy, and array element (X,Y). Such STJ cameras are ideally suited for, e.g., high-time-resolution spectrally resolved monitoring of variable sources or low-resolution spectroscopy of faint extragalactic objects. The reduction of STJ data involves detector efficiency correction, atmospheric extinction correction, sky background subtraction, and, unlike that of data from CCD-based systems, a more complex energy calibration, barycentric arrival time correction, energy range selection, and time binning; these steps are, in many respects, analogous to procedures followed in high-energy astrophysics. We discuss these calibration steps in detail using a representative observation of the cataclysmic variable UZ Fornacis; these data were obtained with ESA's S-Cam2 6 × 6-pixel device. We furthermore discuss issues related to telescope pointing and guiding, differential atmospheric refraction, and atmosphere-induced image motion and image smearing ('seeing') in the focal plane. We also present a simple and effective recipe for extracting the evolution of atmospheric seeing with time from any science exposure and discuss a number of caveats in the interpretation of STJ-based time-binned data, such as light curves and hardness ratio plots.

  15. Adaptive Finite Element Methods for Continuum Damage Modeling

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.

    1995-01-01

    The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators, based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.
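
    The sketch below shows one simple form of error-controlled adaptive time stepping with a predictor-corrector pair: the difference between the predictor and the corrector serves as a local error estimate that grows or shrinks the step. The test equation, tolerance and step-size rule are illustrative, not those of the damage-mechanics solver described above.

```python
import numpy as np

def f(t, y):
    return -50.0 * (y - np.cos(t))       # moderately stiff test equation

t, y, dt, t_end, tol = 0.0, 0.0, 1e-3, 2.0, 1e-5
accepted = 0
while t < t_end:
    dt = min(dt, t_end - t)
    y_pred = y + dt * f(t, y)                              # explicit predictor
    y_corr = y + 0.5 * dt * (f(t, y) + f(t + dt, y_pred))  # trapezoidal corrector
    err = abs(y_corr - y_pred)                             # local error estimate
    if err <= tol:                                         # accept the step
        t, y, accepted = t + dt, y_corr, accepted + 1
    # grow or shrink the step toward the tolerance (safety factor 0.9)
    dt *= min(2.0, max(0.2, 0.9 * np.sqrt(tol / max(err, 1e-16))))

print(f"{accepted} accepted steps, y(t_end) = {y:.5f}")
```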

  16. Student Failures on First-Year Medical Basic Science Courses and the USMLE Step 1: A Retrospective Study over a 20-Year Period

    ERIC Educational Resources Information Center

    Burns, E. Robert; Garrett, Judy

    2015-01-01

    Correlates of achievement in the basic science years in medical school and on the Step 1 of the United States Medical Licensing Examination® (USMLE®), (Step 1) in relation to preadmission variables have been the subject of considerable study. Preadmissions variables such as the undergraduate grade point average (uGPA) and Medical College Admission…

  17. 12-step affiliation and attendance following treatment for comorbid substance dependence and depression: a latent growth curve mediation model.

    PubMed

    Worley, Matthew J; Tate, Susan R; McQuaid, John R; Granholm, Eric L; Brown, Sandra A

    2013-01-01

    Among substance-dependent individuals, comorbid major depressive disorder (MDD) is associated with greater severity and poorer treatment outcomes, but little research has examined mediators of posttreatment substance use outcomes within this population. Using latent growth curve models, the authors tested relationships between individual rates of change in 12-step involvement and substance use, utilizing posttreatment follow-up data from a trial of group Twelve-Step Facilitation (TSF) and integrated cognitive-behavioral therapy (ICBT) for veterans with substance dependence and MDD. Although TSF patients were higher on 12-step affiliation and meeting attendance at end-of-treatment as compared with ICBT, they also experienced significantly greater reductions in these variables during the year following treatment, ending at similar levels as ICBT. Veterans in TSF also had significantly greater increases in drinking frequency during follow-up, and this group difference was mediated by their greater reductions in 12-step affiliation and meeting attendance. Patients with comorbid depression appear to have difficulty sustaining high levels of 12-step involvement after the conclusion of formal 12-step interventions, which predicts poorer drinking outcomes over time. Modifications to TSF and other formal 12-step protocols or continued therapeutic contact may be necessary to sustain 12-step involvement and reduced drinking for patients with substance dependence and MDD.

  18. Multi-site study of diffusion metric variability: effects of site, vendor, field strength, and echo time on regions-of-interest and histogram-bin analyses.

    PubMed

    Helmer, K G; Chou, M-C; Preciado, R I; Gimi, B; Rollins, N K; Song, A; Turner, J; Mori, S

    2016-02-27

    It is now common for magnetic-resonance-imaging (MRI) based multi-site trials to include diffusion-weighted imaging (DWI) as part of the protocol. It is also common for these sites to possess MR scanners of different manufacturers, different software and hardware, and different software licenses. These differences mean that scanners may not be able to acquire data with the same number of gradient amplitude values and number of available gradient directions. Variability can also occur in achievable b-values and minimum echo times. The challenge of a multi-site study, then, is to create a common protocol by understanding and then minimizing the effects of scanner variability and identifying reliable and accurate diffusion metrics. This study describes the effect of site, scanner vendor, field strength, and TE on two diffusion metrics: the first moment of the diffusion tensor field (mean diffusivity, MD), and the fractional anisotropy (FA) using two common analyses (region-of-interest and mean-bin value of whole brain histograms). The goal of the study was to identify sources of variability in diffusion-sensitized imaging and their influence on commonly reported metrics. The results demonstrate that the site, vendor, field strength, and echo time all contribute to variability in FA and MD, though to different extents. We conclude that characterization of the variability of DTI metrics due to site, vendor, field strength, and echo time is a worthwhile step in the construction of multi-center trials.

  19. Measurement effects of seasonal and monthly variability on pedometer-determined data.

    PubMed

    Kang, Minsoo; Bassett, David R; Barreira, Tiago V; Tudor-Locke, Catrine; Ainsworth, Barbara E

    2012-03-01

    The seasonal and monthly variability of pedometer-determined physical activity and its effects on accurate measurement have not been examined. The purpose of the study was to reduce measurement error in step-count data by controlling a) the length of the measurement period and b) the season or month of the year in which sampling was conducted. Twenty-three middle-aged adults were instructed to wear a Yamax SW-200 pedometer over 365 consecutive days. The step-count measurement periods of various lengths (eg, 2, 3, 4, 5, 6, 7 days, etc.) were randomly selected 10 times for each season and month. To determine accurate estimates of yearly step-count measurement, mean absolute percentage error (MAPE) and bias were calculated. The year-round average was considered as a criterion measure. A smaller MAPE and bias represent a better estimate. Differences in MAPE and bias among seasons were trivial; however, they varied among different months. The months in which seasonal changes occur presented the highest MAPE and bias. Targeting the data collection during certain months (eg, May) may reduce pedometer measurement error and provide more accurate estimates of year-round averages.
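
    A small sketch of the sampling procedure described above: measurement windows of varying length are drawn from a year of (here, simulated) daily step counts and compared with the year-round average via the mean absolute percentage error (MAPE) and bias; the seasonal pattern and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# One year of simulated daily step counts with a seasonal swing
days = np.arange(365)
steps = 9000 + 1500 * np.sin(2 * np.pi * (days - 120) / 365) \
        + rng.normal(0, 1800, 365)
criterion = steps.mean()                         # year-round average

def mape_and_bias(window_len, n_samples=10):
    starts = rng.integers(0, 365 - window_len, n_samples)
    means = np.array([steps[s:s + window_len].mean() for s in starts])
    mape = np.mean(np.abs(means - criterion) / criterion) * 100.0
    bias = np.mean(means - criterion)
    return mape, bias

for w in (2, 3, 4, 5, 6, 7, 14, 28):
    mape, bias = mape_and_bias(w)
    print(f"{w:2d}-day window: MAPE = {mape:4.1f}%, bias = {bias:+6.0f} steps/day")
```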

  20. Gait impairment precedes clinical symptoms in spinocerebellar ataxia type 6.

    PubMed

    Rochester, Lynn; Galna, Brook; Lord, Sue; Mhiripiri, Dadirayi; Eglon, Gail; Chinnery, Patrick F

    2014-02-01

    Spinocerebellar ataxia type 6 (SCA6) is an inherited ataxia with no established treatment. Gait ataxia is a prominent feature causing substantial disability. Understanding the evolution of the gait disturbance is a key step in developing treatment strategies. We studied 9 gait variables in 24 SCA6 (6 presymptomatic; 18 symptomatic) and 24 controls and correlated gait with clinical severity (presymptomatic and symptomatic). Discrete gait characteristics precede symptoms in SCA6 with significantly increased variability of step width and step time, whereas a more global gait deficit was evident in symptomatic individuals. Gait characteristics discriminated between presymptomatic and symptomatic individuals and were selectively associated with disease severity. This is the largest study to include a detailed characterization of gait in SCA6, including presymptomatic subjects, allowing changes across the disease spectrum to be compared. Selective gait disturbance is already present in SCA6 before clinical symptoms appear and gait characteristics are also sensitive to disease progression. Early gait disturbance likely reflects primary pathology distinct from secondary changes. These findings open the opportunity for early evaluation and sensitive measures of therapeutic efficacy using instrumented gait analysis which may have broader relevance for all degenerative ataxias. © 2013 Movement Disorder Society.

  1. Estimating heterotrophic respiration at large scales: Challenges, approaches, and next steps

    DOE PAGES

    Bond-Lamberty, Ben; Epron, Daniel; Harden, Jennifer; ...

    2016-06-27

    Heterotrophic respiration (HR), the aerobic and anaerobic processes mineralizing organic matter, is a key carbon flux but one impossible to measure at scales significantly larger than small experimental plots. This impedes our ability to understand carbon and nutrient cycles, benchmark models, or reliably upscale point measurements. Given that a new generation of highly mechanistic, genomic-specific global models is not imminent, we suggest that a useful step to improve this situation would be the development of Decomposition Functional Types (DFTs). Analogous to plant functional types (PFTs), DFTs would abstract and capture important differences in HR metabolism and flux dynamics, allowing modelers and experimentalists to efficiently group and vary these characteristics across space and time. We argue that DFTs should be initially informed by top-down expert opinion, but ultimately developed using bottom-up, data-driven analyses, and provide specific examples of potential dependent and independent variables that could be used. We present an example clustering analysis to show how annual HR can be broken into distinct groups associated with global variability in biotic and abiotic factors, and demonstrate that these groups are distinct from (but complementary to) already-existing PFTs. A similar analysis incorporating observational data could form the basis for future DFTs. Finally, we suggest next steps and critical priorities: collection and synthesis of existing data; more in-depth analyses combining open data with rigorous testing of analytical results; using point measurements and realistic forcing variables to constrain process-based models; and planning by the global modeling community for decoupling decomposition from fixed site data. These are all critical steps to build a foundation for DFTs in global models, thus providing the ecological and climate change communities with robust, scalable estimates of HR.
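
    As a toy version of the clustering analysis mentioned above, the sketch below groups simulated site-level annual heterotrophic respiration together with two abiotic drivers using a small k-means routine; the data, drivers, and number of clusters are invented for illustration and are not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated site-level data: mean annual temperature (C), precipitation (mm),
# and annual heterotrophic respiration HR (g C m-2 yr-1); all values invented.
n_sites = 300
mat = rng.uniform(-5.0, 25.0, n_sites)
map_mm = rng.uniform(200.0, 2500.0, n_sites)
hr = np.clip(150 + 20 * mat + 0.2 * map_mm + rng.normal(0, 60, n_sites), 10, None)

X = np.column_stack([mat, map_mm, hr])
X = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize before clustering

def kmeans(X, k=4, iters=50):
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):             # keep old center if a cluster empties
                centers[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans(X)
for j in range(4):
    sel = labels == j
    print(f"group {j}: n = {sel.sum():3d}, mean HR = {hr[sel].mean():5.0f} g C m-2 yr-1")
```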

  2. Estimating heterotrophic respiration at large scales: Challenges, approaches, and next steps

    USGS Publications Warehouse

    Bond-Lamberty, Ben; Epron, Daniel; Harden, Jennifer W.; Harmon, Mark E.; Hoffman, Forrest; Kumar, Jitendra; McGuire, Anthony David; Vargas, Rodrigo

    2016-01-01

    Heterotrophic respiration (HR), the aerobic and anaerobic processes mineralizing organic matter, is a key carbon flux but one impossible to measure at scales significantly larger than small experimental plots. This impedes our ability to understand carbon and nutrient cycles, benchmark models, or reliably upscale point measurements. Given that a new generation of highly mechanistic, genomic-specific global models is not imminent, we suggest that a useful step to improve this situation would be the development of “Decomposition Functional Types” (DFTs). Analogous to plant functional types (PFTs), DFTs would abstract and capture important differences in HR metabolism and flux dynamics, allowing modelers and experimentalists to efficiently group and vary these characteristics across space and time. We argue that DFTs should be initially informed by top-down expert opinion, but ultimately developed using bottom-up, data-driven analyses, and provide specific examples of potential dependent and independent variables that could be used. We present an example clustering analysis to show how annual HR can be broken into distinct groups associated with global variability in biotic and abiotic factors, and demonstrate that these groups are distinct from (but complementary to) already-existing PFTs. A similar analysis incorporating observational data could form the basis for future DFTs. Finally, we suggest next steps and critical priorities: collection and synthesis of existing data; more in-depth analyses combining open data with rigorous testing of analytical results; using point measurements and realistic forcing variables to constrain process-based models; and planning by the global modeling community for decoupling decomposition from fixed site data. These are all critical steps to build a foundation for DFTs in global models, thus providing the ecological and climate change communities with robust, scalable estimates of HR.

  3. Control Software for Piezo Stepping Actuators

    NASA Technical Reports Server (NTRS)

    Shields, Joel F.

    2013-01-01

    A control system has been developed for the Space Interferometer Mission (SIM) piezo stepping actuator. Piezo stepping actuators are novel because they offer extreme dynamic range (centimeter stroke with nanometer resolution) with power, thermal, mass, and volume advantages over existing motorized actuation technology. These advantages come with the added benefit of greatly reduced complexity in the support electronics. The piezo stepping actuator consists of three fully redundant sets of piezoelectric transducers (PZTs), two sets of brake PZTs, and one set of extension PZTs. These PZTs are used to grasp and move a runner attached to the optic to be moved. By proper cycling of the two brake and extension PZTs, both forward and backward moves of the runner can be achieved. Each brake can be configured for either a power-on or power-off state. For SIM, the brakes and gate of the mechanism are configured in such a manner that, at the end of the step, the actuator is in a parked or power-off state. The control software uses asynchronous sampling of an optical encoder to monitor the position of the runner. These samples are timed to coincide with the end of the previous move, which may consist of a variable number of steps. This sampling technique linearizes the device by avoiding input saturation of the actuator and makes latencies of the plant vanish. The software also estimates, in real time, the scale factor of the device and a disturbance caused by cycling of the brakes. These estimates are used to actively cancel the brake disturbance. The control system also includes feedback and feedforward elements that regulate the position of the runner to a given reference position. Convergence time for small- and medium-sized reference positions (less than 200 microns) to within 10 nanometers can be achieved in under 10 seconds. Convergence times for large moves (greater than 1 millimeter) are limited by the step rate.

  4. Barriers to early cochlear implantation.

    PubMed

    Dettman, Shani; Choo, Dawn; Dowell, Richard

    2016-01-01

    Identify variables associated with paediatric access to cochlear implants (CIs). Part 1. Trends over time for age at CI surgery (N = 802) and age at hearing aid (HA) fitting (n = 487) were examined with regard to periods before, during, and after newborn hearing screening (NHS). Part 2. Demographic factors were explored for 417 children implanted under 3 years of age. Part 3. Pre-implant steps for the first 20 children to receive CIs under 12 months were examined. Part 1. Age at HA fitting and CI surgery reduced over time, and were associated with NHS implementation. Part 2. For children implanted under 3 years, earlier age at HA fitting and higher family socio-economic status were associated with earlier CI. Progressive hearing loss was associated with later CIs. Children with a Connexin 26 diagnosis received CIs earlier than children with a premature / low birth weight history. Part 3. The longest pre-CI steps were Step 1: Birth to diagnosis/identification of hearing loss (mean 16.43 weeks), and Step 11: MRI scans to implant surgery (mean 15.05 weeks) for the first 20 infants with CIs under 12 months. NHS implementation was associated with reductions in age at device intervention in this cohort.

  5. A drift-diffusion checkpoint model predicts a highly variable and growth-factor-sensitive portion of the cell cycle G1 phase.

    PubMed

    Jones, Zack W; Leander, Rachel; Quaranta, Vito; Harris, Leonard A; Tyson, Darren R

    2018-01-01

    Even among isogenic cells, the time to progress through the cell cycle, or the intermitotic time (IMT), is highly variable. This variability has been a topic of research for several decades and numerous mathematical models have been proposed to explain it. Previously, we developed a top-down, stochastic drift-diffusion+threshold (DDT) model of a cell cycle checkpoint and showed that it can accurately describe experimentally-derived IMT distributions [Leander R, Allen EJ, Garbett SP, Tyson DR, Quaranta V. Derivation and experimental comparison of cell-division probability densities. J. Theor. Biol. 2014;358:129-135]. Here, we use the DDT modeling approach for both descriptive and predictive data analysis. We develop a custom numerical method for the reliable maximum likelihood estimation of model parameters in the absence of a priori knowledge about the number of detectable checkpoints. We employ this method to fit different variants of the DDT model (with one, two, and three checkpoints) to IMT data from multiple cell lines under different growth conditions and drug treatments. We find that a two-checkpoint model best describes the data, consistent with the notion that the cell cycle can be broadly separated into two steps: the commitment to divide and the process of cell division. The model predicts one part of the cell cycle to be highly variable and growth factor sensitive while the other is less variable and relatively refractory to growth factor signaling. Using experimental data that separates IMT into G1 vs. S, G2, and M phases, we show that the model-predicted growth-factor-sensitive part of the cell cycle corresponds to a portion of G1, consistent with previous studies suggesting that the commitment step is the primary source of IMT variability. These results demonstrate that a simple stochastic model, with just a handful of parameters, can provide fundamental insights into the biological underpinnings of cell cycle progression.
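
    A minimal sketch of a two-stage drift-diffusion+threshold generator of intermitotic times: each stage is the first-passage time of a drifted random walk to a threshold, and the two stage durations are summed. The drift, noise and threshold values are illustrative, not the fitted parameters of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_time(drift, sigma, threshold=1.0, dt=0.01):
    """Time for a drift-diffusion process started at 0 to first cross `threshold`."""
    x, t = 0.0, 0.0
    while x < threshold:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

# Two-checkpoint model: IMT = a highly variable, slow-drift stage (a portion
# of G1) plus a faster, less variable stage (the rest of the cycle).
imts = np.array([first_passage_time(0.10, 0.15) + first_passage_time(0.50, 0.10)
                 for _ in range(200)])
print(f"mean IMT = {imts.mean():.1f} h, CV = {imts.std() / imts.mean():.2f}")
```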

  6. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    DOE PAGES

    Finn, John M.

    2015-03-01

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.
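
    A minimal sketch of the implicit midpoint (IM) rule applied to field-line-like tracing in a divergence-free field, with the implicit step solved by fixed-point iteration; the ABC flow is used here as a stand-in test field and the step size is arbitrary, so this is only an illustration of the scheme, not the paper's implementation.

```python
import numpy as np

def field(x, A=1.0, B=0.8, C=0.6):
    """ABC flow: a standard divergence-free test field (stand-in for a field line)."""
    return np.array([A * np.sin(x[2]) + C * np.cos(x[1]),
                     B * np.sin(x[0]) + A * np.cos(x[2]),
                     C * np.sin(x[1]) + B * np.cos(x[0])])

def implicit_midpoint_step(x, h, iters=8):
    """One implicit-midpoint step, x_new = x + h*f((x + x_new)/2),
    solved by fixed-point iteration from an explicit Euler predictor."""
    x_new = x + h * field(x)
    for _ in range(iters):
        x_new = x + h * field(0.5 * (x + x_new))
    return x_new

x = np.array([0.3, 0.2, 0.1])
for _ in range(1000):
    x = implicit_midpoint_step(x, h=0.05)
print("end point of the traced line:", np.round(x, 3))
```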

  7. Fast exploration of an optimal path on the multidimensional free energy surface

    PubMed Central

    Chen, Changjun

    2017-01-01

    In a reaction, determination of an optimal path with a high reaction rate (or a low free energy barrier) is important for the study of the reaction mechanism. This is a complicated problem that involves many degrees of freedom. For simple models, one can build an initial path in the collective variable space by interpolation and then update the whole path iteratively during the optimization. However, such an interpolation method can be risky in a high-dimensional space for large molecules. On the path, steric clashes between neighboring atoms could cause extremely high energy barriers and thus fail the optimization. Moreover, performing simulations for all the snapshots on the path is also time-consuming. In this paper, we build and optimize the path by a growing method on the free energy surface. The method grows a path from the reactant and extends its length in the collective variable space step by step. The growing direction is determined by both the free energy gradient at the end of the path and the direction vector pointing at the product. With fewer snapshots on the path, this strategy lets the path avoid high-energy states in the growing process and saves precious simulation time at each iteration step. Applications show that the presented method is efficient enough to produce optimal paths on either the two-dimensional or the twelve-dimensional free energy surfaces of different small molecules. PMID:28542475
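
    The sketch below grows a path on a toy 2-D free-energy surface using the two ingredients named above, the gradient at the current end of the path and the direction toward the product. Combining them through the gradient component perpendicular to the product direction is one simple choice made here for illustration, not necessarily the authors' scheme, and the surface and weights are invented.

```python
import numpy as np

def free_energy(p):
    """Toy 2-D surface: minima near (-1, 0) and (1, 0) with a bump in between."""
    x, y = p
    return (x**2 - 1) ** 2 + y**2 + 3.0 * np.exp(-(x**2 + y**2) / 0.3)

def gradient(p, eps=1e-5):
    g = np.zeros(2)
    for i in range(2):
        d = np.zeros(2); d[i] = eps
        g[i] = (free_energy(p + d) - free_energy(p - d)) / (2 * eps)
    return g

reactant, product = np.array([-1.0, 0.02]), np.array([1.0, 0.0])
x, step, path = reactant.copy(), 0.05, [reactant.copy()]
for _ in range(400):
    to_product = (product - x) / np.linalg.norm(product - x)
    downhill = -gradient(x)
    # steer with the gradient component perpendicular to the product direction
    steer = downhill - np.dot(downhill, to_product) * to_product
    direction = to_product + 0.5 * steer
    x = x + step * direction / np.linalg.norm(direction)
    path.append(x.copy())
    if np.linalg.norm(product - x) < step:
        break

print(f"{len(path)} snapshots; barrier on path = {max(map(free_energy, path)):.2f}")
```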

  8. Space-Time Joint Interference Cancellation Using Fuzzy-Inference-Based Adaptive Filtering Techniques in Frequency-Selective Multipath Channels

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng

    2006-12-01

    An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and squared error variation into a forgetting factor. For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using a fuzzy-inference-controlled step size. This receiver is capable of providing both fast convergence/tracking capability and small steady-state misadjustment as compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms by at least 3 dB and 1.5 dB, respectively, in bit-error-rate (BER) for multipath fading channels.
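
    A sketch of a variable step-size LMS adaptive filter in the spirit described above: the step size is raised when the squared error is large (fast tracking) and lowered when it is small (low misadjustment). A simple error-driven rule stands in for the fuzzy-inference mapping, which is not reproduced here; the channel and signals are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown channel, white input, noisy desired signal (all simulated)
n_taps, n_samples = 8, 5000
h_true = rng.standard_normal(n_taps)
x = rng.standard_normal(n_samples)
d = np.convolve(x, h_true)[:n_samples] + 0.01 * rng.standard_normal(n_samples)

w = np.zeros(n_taps)
mu, mu_min, mu_max = 0.05, 0.001, 0.1
for n in range(n_taps - 1, n_samples):
    u = x[n - n_taps + 1:n + 1][::-1]         # current tap-input vector
    e = d[n] - w @ u                          # a-priori error
    # error-driven step size: large error -> fast tracking, small error ->
    # low steady-state misadjustment (stands in for the fuzzy mapping)
    mu = np.clip(0.97 * mu + 0.5 * e**2, mu_min, mu_max)
    w = w + mu * e * u                        # LMS update

print(f"final coefficient error: {np.linalg.norm(w - h_true):.3f}")
```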

  9. A cross-sectional study of the individual, social, and built environmental correlates of pedometer-based physical activity among elementary school children.

    PubMed

    McCormack, Gavin R; Giles-Corti, Billie; Timperio, Anna; Wood, Georgina; Villanueva, Karen

    2011-04-12

    Children who participate in regular physical activity obtain health benefits. Preliminary pedometer-based cut-points representing sufficient levels of physical activity among youth have been established; however limited evidence regarding correlates of achieving these cut-points exists. The purpose of this study was to identify correlates of pedometer-based cut-points among elementary school-aged children. A cross-section of children in grades 5-7 (10-12 years of age) were randomly selected from the most (n = 13) and least (n = 12) 'walkable' public elementary schools (Perth, Western Australia), stratified by socioeconomic status. Children (n = 1480; response rate = 56.6%) and parents (n = 1332; response rate = 88.8%) completed a survey, and steps were collected from children using pedometers. Pedometer data were categorized to reflect the sex-specific pedometer-based cut-points of ≥15000 steps/day for boys and ≥12000 steps/day for girls. Associations between socio-demographic characteristics, sedentary and active leisure-time behavior, independent mobility, active transportation and built environmental variables - collected from the child and parent surveys - and meeting pedometer-based cut-points were estimated (odds ratios: OR) using generalized estimating equations. Overall 927 children participated in all components of the study and provided complete data. On average, children took 11407 ± 3136 steps/day (boys: 12270 ± 3350 vs. girls: 10681 ± 2745 steps/day; p < 0.001) and 25.9% (boys: 19.1 vs. girls: 31.6%; p < 0.001) achieved the pedometer-based cut-points.After adjusting for all other variables and school clustering, meeting the pedometer-based cut-points was negatively associated (p < 0.05) with being male (OR = 0.42), parent self-reported number of different destinations in the neighborhood (OR 0.93), and a friend's (OR 0.62) or relative's (OR 0.44, boys only) house being at least a 10-minute walk from home. Achieving the pedometer-based cut-points was positively associated with participating in screen-time < 2 hours/day (OR 1.88), not being driven to school (OR 1.48), attending a school located in a high SES neighborhood (OR 1.33), the average number of steps among children within the respondent's grade (for each 500 step/day increase: OR 1.29), and living further than a 10-minute walk from a relative's house (OR 1.69, girls only). Comprehensive multi-level interventions that reduce screen-time, encourage active travel to/from school and foster a physically active classroom culture might encourage more physical activity among children.

  10. A cross-sectional study of the individual, social, and built environmental correlates of pedometer-based physical activity among elementary school children

    PubMed Central

    2011-01-01

    Background Children who participate in regular physical activity obtain health benefits. Preliminary pedometer-based cut-points representing sufficient levels of physical activity among youth have been established; however limited evidence regarding correlates of achieving these cut-points exists. The purpose of this study was to identify correlates of pedometer-based cut-points among elementary school-aged children. Method A cross-section of children in grades 5-7 (10-12 years of age) were randomly selected from the most (n = 13) and least (n = 12) 'walkable' public elementary schools (Perth, Western Australia), stratified by socioeconomic status. Children (n = 1480; response rate = 56.6%) and parents (n = 1332; response rate = 88.8%) completed a survey, and steps were collected from children using pedometers. Pedometer data were categorized to reflect the sex-specific pedometer-based cut-points of ≥15000 steps/day for boys and ≥12000 steps/day for girls. Associations between socio-demographic characteristics, sedentary and active leisure-time behavior, independent mobility, active transportation and built environmental variables - collected from the child and parent surveys - and meeting pedometer-based cut-points were estimated (odds ratios: OR) using generalized estimating equations. Results Overall 927 children participated in all components of the study and provided complete data. On average, children took 11407 ± 3136 steps/day (boys: 12270 ± 3350 vs. girls: 10681 ± 2745 steps/day; p < 0.001) and 25.9% (boys: 19.1 vs. girls: 31.6%; p < 0.001) achieved the pedometer-based cut-points. After adjusting for all other variables and school clustering, meeting the pedometer-based cut-points was negatively associated (p < 0.05) with being male (OR = 0.42), parent self-reported number of different destinations in the neighborhood (OR 0.93), and a friend's (OR 0.62) or relative's (OR 0.44, boys only) house being at least a 10-minute walk from home. Achieving the pedometer-based cut-points was positively associated with participating in screen-time < 2 hours/day (OR 1.88), not being driven to school (OR 1.48), attending a school located in a high SES neighborhood (OR 1.33), the average number of steps among children within the respondent's grade (for each 500 step/day increase: OR 1.29), and living further than a 10-minute walk from a relative's house (OR 1.69, girls only). Conclusions Comprehensive multi-level interventions that reduce screen-time, encourage active travel to/from school and foster a physically active classroom culture might encourage more physical activity among children. PMID:21486475

  11. Selected questions on biomechanical exposures for surveillance of upper-limb work-related musculoskeletal disorders

    PubMed Central

    Descatha, Alexis; Roquelaure, Yves; Evanoff, Bradley; Niedhammer, Isabelle; Chastang, Jean François; Mariot, Camille; Ha, Catherine; Imbernon, Ellen; Goldberg, Marcel; Leclerc, Annette

    2007-01-01

    Objective Questionnaires for assessment of biomechanical exposure are frequently used in surveillance programs, though few studies have evaluated which key questions are needed. We sought to reduce the number of variables on a surveillance questionnaire by identifying which variables best summarized biomechanical exposure in a survey of the French working population. Methods We used data from the 2002–2003 French experimental network of Upper-limb work-related musculoskeletal disorders (UWMSD), performed on 2685 subjects in which 37 variables assessing biomechanical exposures were available (divided into four ordinal categories, according to the task frequency or duration). Principal Component Analysis (PCA) with orthogonal rotation was performed on these variables. Variables closely associated with factors issued from PCA were retained, except those highly correlated to another variable (rho>0.70). In order to study the relevance of the final list of variables, correlations between a score based on retained variables (PCA score) and the exposure score suggested by the SALTSA group were calculated. The associations between the PCA score and the prevalence of UWMSD were also studied. In a final step, we added back to the list a few variables not retained by PCA, because of their established recognition as risk factors. Results According to the results of the PCA, seven interpretable factors were identified: posture exposures, repetitiveness, handling of heavy loads, distal biomechanical exposures, computer use, forklift operator specific task, and recovery time. Twenty variables strongly correlated with the factors obtained from PCA were retained. The PCA score was strongly correlated both with the SALTSA score and with UWMSD prevalence (p<0.0001). In the final step, six variables were reintegrated. Conclusion Twenty-six variables out of 37 were efficiently selected according to their ability to summarize major biomechanical constraints in a working population, with an approach combining statistical analyses and existing knowledge. PMID:17476519

  12. Uncertainty analysis as essential step in the establishment of the dynamic Design Space of primary drying during freeze-drying.

    PubMed

    Mortier, Séverine Thérèse F C; Van Bockstal, Pieter-Jan; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas

    2016-06-01

    Large molecules, such as biopharmaceuticals, are considered the key driver of growth for the pharmaceutical industry. Freeze-drying is the preferred way to stabilise these products when needed. However, it is an expensive, inefficient, time- and energy-consuming process. During freeze-drying, there are only two main process variables to be set, i.e. the shelf temperature and the chamber pressure, preferably in a dynamic way. This manuscript focuses on the essential use of uncertainty analysis for the determination and experimental verification of the dynamic primary drying Design Space for pharmaceutical freeze-drying. Traditionally, the chamber pressure and shelf temperature are kept constant during primary drying, leading to less optimal process conditions. In this paper it is demonstrated how a mechanistic model of the primary drying step gives the opportunity to determine the optimal dynamic values for both process variables during processing, resulting in a dynamic Design Space with a well-known risk of failure. This allows running the primary drying process step as time-efficiently as possible, thereby guaranteeing that the temperature at the sublimation front does not exceed the collapse temperature. The Design Space is the multidimensional combination and interaction of input variables and process parameters leading to the expected product specifications with a controlled (i.e., high) probability. Therefore, inclusion of parameter uncertainty is an essential part in the definition of the Design Space, although it is often neglected. To quantitatively assess the inherent uncertainty on the parameters of the mechanistic model, an uncertainty analysis was performed to establish the borders of the dynamic Design Space, i.e. a time-varying shelf temperature and chamber pressure, associated with a specific risk of failure. A risk of failure acceptance level of 0.01%, i.e. a 'zero-failure' situation, results in an increased primary drying process time compared to the deterministic dynamic Design Space; however, the risk of failure is under control. Experimental verification revealed that only a risk of failure acceptance level of 0.01% yielded a guaranteed zero-defect quality end-product. The computed process settings with a risk of failure acceptance level of 0.01% resulted in a decrease of more than half of the primary drying time in comparison with a regular, conservative cycle with fixed settings. Copyright © 2016. Published by Elsevier B.V.
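
    A sketch of the uncertainty-analysis step described above: uncertain model parameters are sampled, the probability that the sublimation-front temperature exceeds the collapse temperature is estimated for candidate shelf-temperature/chamber-pressure settings, and only settings whose risk of failure stays at or below 0.01% are retained. The surrogate model, parameter distributions and all numbers below are invented placeholders for the paper's mechanistic heat- and mass-transfer model.

```python
import numpy as np

rng = np.random.default_rng(0)

def front_temperature(T_shelf, P_ch, Kv, Rp):
    """Hypothetical linear surrogate (not the paper's model): the sublimation-
    front temperature rises with shelf temperature, chamber pressure, vial
    heat-transfer coefficient Kv and dried-layer resistance Rp."""
    return (-40.0 + 0.45 * (T_shelf + 20.0) + 60.0 * P_ch
            + 8.0 * (Kv - 1.0) + 2.0 * (Rp - 1.0))

T_collapse = -32.0            # C, illustrative critical product temperature
risk_level = 1e-4             # 0.01% acceptable probability of failure
n_draws = 100_000

# uncertain model parameters (distributions are invented for illustration)
Kv = rng.normal(1.0, 0.08, n_draws)
Rp = rng.normal(1.0, 0.15, n_draws)

def failure_probability(T_shelf, P_ch):
    return np.mean(front_temperature(T_shelf, P_ch, Kv, Rp) > T_collapse)

for T_shelf in range(-20, 21, 5):
    ok = [p for p in (0.05, 0.10, 0.15, 0.20)
          if failure_probability(T_shelf, p) <= risk_level]
    print(f"T_shelf = {T_shelf:+3d} C  acceptable P_ch (mbar): {ok}")
```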

  13. Mars dust storms - Interannual variability and chaos

    NASA Technical Reports Server (NTRS)

    Ingersoll, Andrew P.; Lyons, James R.

    1993-01-01

    The hypothesis is that the global climate system, consisting of atmospheric dust interacting with the circulation, produces its own interannual variability when forced at the annual frequency. The model has two time-dependent variables representing the amount of atmospheric dust in the northern and southern hemispheres, respectively. Absorption of sunlight by the dust drives a cross-equatorial Hadley cell that brings more dust into the heated hemisphere. The circulation decays when the dust storm covers the globe. Interannual variability manifests itself either as a periodic solution in which the period is a multiple of the Martian year, or as an aperiodic (chaotic) solution that never repeats. Both kinds of solution are found in the model, lending support to the idea that interannual variability is an intrinsic property of the global climate system. The next step is to develop a hierarchy of dust-circulation models capable of being integrated for many years.

  14. The test-retest reliability and minimal detectable change of spatial and temporal gait variability during usual over-ground walking for younger and older adults.

    PubMed

    Almarwani, Maha; Perera, Subashan; VanSwearingen, Jessie M; Sparto, Patrick J; Brach, Jennifer S

    2016-02-01

    Gait variability is a marker of gait performance and future mobility status in older adults. Reliability of gait variability has been examined mainly in community-dwelling older adults, who are likely to fluctuate over time. The purpose of this study was to compare test-retest reliability and determine the minimal detectable change (MDC) of spatial and temporal gait variability in younger and older adults. Forty younger adults (mean age = 26.6 ± 6.0 years) and 46 older adults (mean age = 78.1 ± 6.2 years) were included in the study. Gait characteristics were measured twice, approximately 1 week apart, using a computerized walkway (GaitMat II). Participants completed 4 passes on the GaitMat II at their self-selected walking speed. Test-retest reliability was calculated using intra-class correlation coefficients (ICCs(2,1)), 95% limits of agreement (95% LoA) in conjunction with Bland-Altman plots, relative limits of agreement (LoA%) and the standard error of measurement (SEM). The MDC at the 90% and 95% levels was also calculated. ICCs of gait variability ranged from 0.26 to 0.65 in younger and from 0.28 to 0.74 in older adults. The LoA% and SEM were consistently higher (i.e. less reliable) for all gait variables in older compared to younger adults except the SEM for step width. The MDC was consistently larger for all gait variables in older compared to younger adults except step width. ICCs were of limited utility due to restricted ranges in younger adults. Based on absolute reliability measures and the MDC, younger adults had greater test-retest reliability and smaller MDCs of spatial and temporal gait variability than older adults. Copyright © 2015 Elsevier B.V. All rights reserved.
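
    The absolute-reliability quantities used here follow standard formulas, SEM = SD * sqrt(1 - ICC) and MDC95 = SEM * 1.96 * sqrt(2), with ICC(2,1) obtained from the usual two-way random-effects mean squares. A minimal Python sketch with made-up two-session data (not the GaitMat II measurements) is shown below.

      import numpy as np

      def icc_2_1(data):
          # ICC(2,1) for an (n subjects x k sessions) array, two-way random effects.
          data = np.asarray(data, dtype=float)
          n, k = data.shape
          grand = data.mean()
          ms_rows = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
          ms_cols = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)
          ss_err = np.sum((data - data.mean(axis=1, keepdims=True)
                           - data.mean(axis=0, keepdims=True) + grand) ** 2)
          ms_err = ss_err / ((n - 1) * (k - 1))
          return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                       + k * (ms_cols - ms_err) / n)

      # Hypothetical step-time variability (SD, ms) for 8 subjects, 2 visits.
      visits = np.array([[22.0, 25.1], [31.5, 29.0], [18.2, 20.4], [27.3, 26.0],
                         [35.0, 38.2], [24.1, 22.8], [29.9, 33.5], [21.0, 19.6]])

      icc = icc_2_1(visits)
      sem = visits.std(ddof=1) * np.sqrt(1.0 - icc)   # standard error of measurement
      mdc95 = sem * 1.96 * np.sqrt(2.0)               # minimal detectable change (95%)
      print(f"ICC(2,1)={icc:.2f}  SEM={sem:.2f} ms  MDC95={mdc95:.2f} ms")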

  15. Mission Assignment Model and Simulation Tool for Different Types of Unmanned Aerial Vehicles

    DTIC Science & Technology

    2008-09-01

    Indexed excerpts: table of abbreviations and acronyms, AAA (Anti Aircraft Artillery), ATO (Air Tasking Order), BDA (Battle Damage Assessment), DES (Discrete Event Simulation) ... the clock is advanced in small, fixed time steps. Since the value of simulated time is important in DES, an internal variable, called the simulation clock ... Authors: Yücel Alver (Captain, Turkish Air Force; B.S., Turkish Air Force Academy, 2000) and Murat Özdoğan (1st Lieutenant, Turkish Air Force; B.S., Turkish ...)

  16. Heart Fibrillation and Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Kogan, B. Y.; Karplus, W. J.; Chudin, E. E.

    1997-01-01

    The Luo and Rudy 3 cardiac cell mathematical model is implemented on the parallel supercomputer CRAY T3D. The splitting algorithm, combined with a variable time step and an explicit method of integration, provides reasonable solution times and almost perfect scaling for rectilinear wave propagation. The computer simulation makes it possible to observe new phenomena: the break-up of spiral waves caused by intracellular calcium dynamics, and the non-uniformity of the calcium distribution in space during the onset of the spiral wave.
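
    A variable time step combined with an explicit integrator usually means a local-error-controlled step-size rule. The Python sketch below shows the generic idea of step doubling on a small excitable-membrane stand-in (the FitzHugh-Nagumo equations); it is not the Luo-Rudy model or the splitting algorithm of the paper.

      import numpy as np

      def fitzhugh_nagumo(t, y, i_ext=0.5, a=0.7, b=0.8, eps=0.08):
          # Two-variable excitable-cell stand-in for a full cardiac ionic model.
          v, w = y
          return np.array([v - v**3 / 3.0 - w + i_ext, eps * (v + a - b * w)])

      def adaptive_explicit(f, y0, t_end, dt=1e-3, tol=1e-5, dt_max=0.5):
          # Explicit Euler with step doubling: compare one full step against two
          # half steps and grow/shrink dt to keep the local error near tol.
          t, y = 0.0, np.asarray(y0, dtype=float)
          ts = [t]
          while t < t_end:
              dt = min(dt, t_end - t, dt_max)
              y_full = y + dt * f(t, y)
              y_half = y + 0.5 * dt * f(t, y)
              y_two = y_half + 0.5 * dt * f(t + 0.5 * dt, y_half)
              err = np.max(np.abs(y_two - y_full)) + 1e-30
              if err <= tol:                 # accept the more accurate result
                  t, y = t + dt, y_two
                  ts.append(t)
              dt *= min(2.0, max(0.2, 0.9 * tol / err))   # adapt the step size
          return np.array(ts)

      ts = adaptive_explicit(fitzhugh_nagumo, [-1.0, 1.0], t_end=100.0)
      steps = np.diff(ts)
      print(f"{len(ts) - 1} accepted steps, dt range {steps.min():.2e}..{steps.max():.2e}")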

  17. Identifying elderly people at risk for cognitive decline by using the 2-step test.

    PubMed

    Maruya, Kohei; Fujita, Hiroaki; Arai, Tomoyuki; Hosoi, Toshiki; Ogiwara, Kennichi; Moriyama, Shunnichiro; Ishibashi, Hideaki

    2018-01-01

    [Purpose] The purpose of this study was to verify the effectiveness of the 2-step test in predicting cognitive decline in elderly individuals. [Subjects and Methods] One hundred eighty-two participants aged over 65 years underwent the 2-step test, cognitive function tests and higher-level competence testing. Participants were classified as Robust, <1.3, and <1.1 using the locomotive syndrome risk-stage criteria for the 2-step test, and variables were compared between groups. In addition, ordered logistic analysis was used to analyze cognitive functions as independent variables in the three groups, using the 2-step test results as the dependent variable, with age, gender, etc. as adjustment factors. [Results] In the crude data, the <1.3 and <1.1 groups were older and displayed lower motor and cognitive functions than did the Robust group. Furthermore, the <1.3 group exhibited significantly lower memory retention than did the Robust group. The 2-step test was related to the Stroop test (β: 0.06, 95% confidence interval: 0.01-0.12). [Conclusion] The risk stage of the 2-step test is related to cognitive functions, even at an initial risk stage. The 2-step test may help with earlier detection and implementation of prevention measures for locomotive syndrome and mild cognitive impairment.

  18. Does the brain use sliding variables for the control of movements?

    PubMed

    Hanneton, S; Berthoz, A; Droulez, J; Slotine, J J

    1997-12-01

    Delays in the transmission of sensory and motor information prevent errors from being instantaneously available to the central nervous system (CNS) and can reduce the stability of a closed-loop control strategy. On the other hand, the use of a pure feedforward control (inverse dynamics) requires a perfect knowledge of the dynamic behavior of the body and of manipulated objects. Sensory feedback is essential both to accommodate unexpected errors and events and to compensate for uncertainties about the dynamics of the body. Experimental observations concerning the control of posture, gaze and limbs have shown that the CNS certainly uses a combination of closed-loop and open-loop control. Feedforward components of movement, such as eye saccades, occur intermittently and present a stereotyped kinematic profile. In visuo-manual tracking tasks, hand movements exhibit velocity peaks that occur intermittently. When a delay or slow dynamics are inserted in the visuo-manual control loop, intermittent step-and-hold movements appear clearly in the hand trajectory. In this study, we investigated strategies used by human subjects involved in the control of a particular dynamic system. We found strong evidence for substantial nonlinearities in the commands produced. The presence of step-and-hold movements seemed to be the major source of nonlinearities in the control loop. Furthermore, the stereotyped ballistic-like kinematics of these rapid and corrective movements suggests that they were produced in an open-loop way by the CNS. We analyzed the generation of ballistic movements in the light of sliding control theory assuming that they occurred when a sliding variable exceeded a constant threshold. In this framework, a sliding variable is defined as a composite variable (a combination of the instantaneous tracking error and its temporal derivatives) that fulfills a specific stability criterion. Based on this hypothesis and on the assumption of a constant reaction time, the tracking error and its derivatives should be correlated at a particular time lag before movement onset. A peak of correlation was found for a physiologically plausible reaction time, corresponding to a stable composite variable. The direction and amplitude of the ongoing stereotyped movements also seemed to be adjusted in order to minimize this variable. These findings suggest that, during visually guided movements, human subjects attempt to minimize such a composite variable and not the instantaneous error. This minimization seems to be obtained by the execution of stereotyped corrective movements.
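
    The central quantity in that interpretation is simple to state: a sliding (composite) variable combines the instantaneous tracking error with its derivative, for example s = de/dt + lambda * e, and a ballistic correction is launched whenever |s| exceeds a threshold. The Python sketch below implements only that trigger logic, with made-up signals, gain and threshold.

      import numpy as np

      LAMBDA = 4.0        # assumed weighting of the error relative to its derivative
      THRESHOLD = 0.15    # assumed trigger level for a ballistic correction
      DT = 0.01           # sampling interval (s)

      def sliding_variable(error):
          # s(t) = de/dt + lambda * e, computed from a sampled error signal.
          return np.gradient(error, DT) + LAMBDA * error

      # Toy target and (lagging) hand trajectories in a tracking task.
      t = np.arange(0.0, 5.0, DT)
      target = np.sin(1.2 * t)
      hand = np.sin(1.2 * t - 0.25)
      s = sliding_variable(target - hand)

      # Onsets of hypothetical corrective movements: upward crossings of the threshold.
      crossed = np.abs(s) > THRESHOLD
      onsets = np.flatnonzero(crossed[1:] & ~crossed[:-1]) + 1
      print("corrective-movement onsets (s):", np.round(t[onsets], 2))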

  19. A new and inexpensive non-bit-for-bit solution reproducibility test based on time step convergence (TSC1.0)

    NASA Astrophysics Data System (ADS)

    Wan, Hui; Zhang, Kai; Rasch, Philip J.; Singh, Balwinder; Chen, Xingyuan; Edwards, Jim

    2017-02-01

    A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided using the Community Atmosphere Model (CAM) version 5.3 that the proposed procedure can be used to distinguish rounding-level solution changes from impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive since it does not detect issues associated with diagnostic calculations that do not feed back to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.
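
    A stripped-down version of the idea fits in a few lines: run short ensembles of the reference and modified configurations, measure their difference, and fail the test when that difference exceeds the spread produced by a halved time step in the reference model. The Python sketch below uses synthetic end states and an assumed margin; it is a caricature of the procedure, not the CAM 5.3 implementation.

      import numpy as np

      def tsc_test(ref_runs, test_runs, ref_halfdt_runs, margin=1.0):
          # Non-bit-for-bit reproducibility check based on time step convergence.
          # Each *_runs argument is an (ensemble, state) array of short-run end
          # states; the test fails when the test-vs-reference difference exceeds
          # `margin` times the known time step sensitivity of the reference model.
          diff_test = np.sqrt(np.mean((test_runs - ref_runs) ** 2, axis=1))
          diff_dt = np.sqrt(np.mean((ref_halfdt_runs - ref_runs) ** 2, axis=1))
          return "PASS" if diff_test.max() <= margin * diff_dt.max() else "FAIL"

      rng = np.random.default_rng(2)
      state = rng.normal(size=(12, 500))                      # 12-member reference ensemble
      halfdt = state + 1e-3 * rng.normal(size=state.shape)    # time step sensitivity
      rounding = state + 1e-6 * rng.normal(size=state.shape)  # benign, rounding-level change
      compiler = state + 1e-2 * rng.normal(size=state.shape)  # answer-changing modification

      print("rounding-level change:", tsc_test(state, rounding, halfdt))
      print("answer-changing change:", tsc_test(state, compiler, halfdt))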

  20. Decadal predictions of Southern Ocean sea ice : testing different initialization methods with an Earth-system Model of Intermediate Complexity

    NASA Astrophysics Data System (ADS)

    Zunz, Violette; Goosse, Hugues; Dubinkina, Svetlana

    2013-04-01

    The sea ice extent in the Southern Ocean has increased since 1979 but the causes of this expansion have not been firmly identified. In particular, the contribution of internal variability and external forcing to this positive trend has not been fully established. In this region, the lack of observations and the overestimation of internal variability of the sea ice by contemporary General Circulation Models (GCMs) make it difficult to understand the behaviour of the sea ice. Nevertheless, if its evolution is governed by the internal variability of the system and if this internal variability is in some way predictable, a suitable initialization method should lead to simulation results that better fit the reality. Current GCM decadal predictions are generally initialized through a nudging towards some observed fields. This relatively simple method does not seem to be appropriate for the initialization of sea ice in the Southern Ocean. The present study aims at identifying an initialization method that could improve the quality of the predictions of Southern Ocean sea ice at decadal timescales. We use LOVECLIM, an Earth-system Model of Intermediate Complexity that allows us to perform, within a reasonable computational time, the large amount of simulations required to test systematically different initialization procedures. These involve three data assimilation methods: a nudging, a particle filter and an efficient particle filter. In a first step, simulations are performed in an idealized framework, i.e. data from a reference simulation of LOVECLIM are used instead of observations, hereinafter called pseudo-observations. In this configuration, the internal variability of the model obviously agrees with the one of the pseudo-observations. This allows us to get rid of the issues related to the overestimation of the internal variability by models compared to the observed one. This way, we can work out a suitable methodology to assess the efficiency of the initialization procedures tested. It also allows us to determine the upper limit of improvement that can be expected if more sophisticated initialization methods are used in decadal prediction simulations and if models have an internal variability agreeing with the observed one. Furthermore, since pseudo-observations are available everywhere at any time step, we also analyse the differences between simulations initialized with a complete dataset of pseudo-observations and the ones for which pseudo-observation data are not assimilated everywhere. In a second step, simulations are performed in a realistic framework, i.e. through the use of actual available observations. The same data assimilation methods are tested in order to check if more sophisticated methods can improve the reliability and the accuracy of decadal prediction simulations, even if they are performed with models that overestimate the internal variability of the sea ice extent in the Southern Ocean.

  1. The Motor and the Brake of the Trailing Leg in Human Walking: Leg Force Control Through Ankle Modulation and Knee Covariance

    PubMed Central

    Toney, Megan E.; Chang, Young-Hui

    2016-01-01

    Human walking is a complex task, and we lack a complete understanding of how the neuromuscular system organizes its numerous muscles and joints to achieve consistent and efficient walking mechanics. Focused control of select influential task-level variables may simplify the higher-level control of steady state walking and reduce demand on the neuromuscular system. As trailing leg power generation and force application can affect the mechanical efficiency of step-to-step transitions, we investigated how joint torques are organized to control leg force and leg power during human walking. We tested whether timing of trailing leg force control corresponded with timing of peak leg power generation. We also applied a modified uncontrolled manifold analysis to test whether individual or coordinated joint torque strategies most contributed to leg force control. We found that leg force magnitude was adjusted from step-to-step to maintain consistent leg power generation. Leg force modulation was primarily determined by adjustments in the timing of peak ankle plantar-flexion torque, while knee torque was simultaneously covaried to dampen the effect of ankle torque on leg force. We propose a coordinated joint torque control strategy in which the trailing leg ankle acts as a motor to drive leg power production while trailing leg knee torque acts as a brake to refine leg power production. PMID:27334888

  2. Kalman Filter Estimation of Spinning Spacecraft Attitude using Markley Variables

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph E.; Harman, Richard

    2004-01-01

    There are several different ways to represent spacecraft attitude and its time rate of change. For spinning or momentum-biased spacecraft, one particular representation has been put forward as a superior parameterization for numerical integration. Markley has demonstrated that these new variables have fewer rapidly varying elements for spinning spacecraft than other commonly used representations and provide advantages when integrating the equations of motion. The current work demonstrates how a Kalman filter can be devised to estimate the attitude using these new variables. The seven Markley variables are subject to one constraint condition, making the error covariance matrix singular. The filter design presented here explicitly accounts for this constraint by using a six-component error state in the filter update step. The reduced dimension error state is unconstrained and its covariance matrix is nonsingular.
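
    The design point described here, a constrained seven-element state updated through an unconstrained six-element error state, follows the generic error-state Kalman pattern sketched below in Python. The measurement model, covariances and the mapping from the error state back onto the full state are placeholders, not the Markley-variable filter itself.

      import numpy as np

      def error_state_update(x, P, z, h, H, R, retract):
          # Generic error-state Kalman measurement update.
          #   x            full (constrained) state, e.g. the seven Markley variables
          #   P            covariance of the reduced, unconstrained error state (6 x 6)
          #   h(x), H      measurement model and its Jacobian w.r.t. the error state
          #   retract(x,d) maps an error-state correction back onto the full state
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)      # gain in error-state space
          dx = K @ (z - h(x))                 # six-element correction
          x_new = retract(x, dx)              # re-impose the constraint
          P_new = (np.eye(P.shape[0]) - K @ H) @ P
          return x_new, P_new

      # Tiny numerical example with placeholder models.
      x = np.zeros(7); x[3] = 1.0
      P = 0.1 * np.eye(6)
      H = np.hstack([np.eye(3), np.zeros((3, 3))])
      R = 0.01 * np.eye(3)
      h = lambda state: state[:3]
      retract = lambda state, dx: state + np.concatenate([dx, [0.0]])  # placeholder mapping
      x, P = error_state_update(x, P, np.array([0.02, -0.01, 0.03]), h, H, R, retract)
      print(np.round(x, 3))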

  3. Bi-Level Integrated System Synthesis (BLISS)

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Agte, Jeremy S.; Sandusky, Robert R., Jr.

    1998-01-01

    BLISS is a method for optimization of engineering systems by decomposition. It separates the system level optimization, having a relatively small number of design variables, from the potentially numerous subsystem optimizations that may each have a large number of local design variables. The subsystem optimizations are autonomous and may be conducted concurrently. Subsystem and system optimizations alternate, linked by sensitivity data, producing a design improvement in each iteration. Starting from a best guess initial design, the method improves that design in iterative cycles, each cycle comprised of two steps. In step one, the system level variables are frozen and the improvement is achieved by separate, concurrent, and autonomous optimizations in the local variable subdomains. In step two, further improvement is sought in the space of the system level variables. Optimum sensitivity data link the second step to the first. The method prototype was implemented using MATLAB and iSIGHT programming software and tested on a simplified, conceptual level supersonic business jet design, and a detailed design of an electronic device. Satisfactory convergence and favorable agreement with the benchmark results were observed. Modularity of the method is intended to fit the human organization and map well on the computing technology of concurrent processing.
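
    The alternating structure of the method can be summarised as a short loop. The Python sketch below is schematic only: the subsystem optimiser, system-level optimiser and sensitivity routine are placeholder callables, not the MATLAB/iSIGHT prototype described here.

      def bliss_cycle(system_vars, local_vars, optimize_subsystem, optimize_system,
                      sensitivities, max_cycles=20, tol=1e-6):
          # Schematic BLISS iteration: alternate subsystem-level and system-level
          # optimisation steps, linked by optimum sensitivity data.
          history = []
          for cycle in range(max_cycles):
              # Step 1: freeze the system-level variables and improve each
              # subsystem in its own local-variable subdomain (these calls are
              # autonomous and could run concurrently).
              local_vars = [optimize_subsystem(i, system_vars, x_i)
                            for i, x_i in enumerate(local_vars)]

              # Optimum sensitivities of the subsystem optima with respect to the
              # system-level variables link step 1 to step 2.
              grads = sensitivities(system_vars, local_vars)

              # Step 2: seek further improvement in the system-level variable space.
              system_vars, objective = optimize_system(system_vars, grads)

              history.append(objective)
              if cycle > 0 and abs(history[-1] - history[-2]) < tol:
                  break
          return system_vars, local_vars, history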

  4. A Multi-Step Pathway Connecting Short Sleep Duration to Daytime Somnolence, Reduced Attention, and Poor Academic Performance: An Exploratory Cross-Sectional Study in Teenagers

    PubMed Central

    Perez-Lloret, Santiago; Videla, Alejandro J.; Richaudeau, Alba; Vigo, Daniel; Rossi, Malco; Cardinali, Daniel P.; Perez-Chada, Daniel

    2013-01-01

    Background: A multi-step causality pathway connecting short sleep duration to daytime somnolence and sleepiness leading to reduced attention and poor academic performance as the final result can be envisaged. However this hypothesis has never been explored. Objective: To explore consecutive correlations between sleep duration, daytime somnolence, attention levels, and academic performance in a sample of school-aged teenagers. Methods: We carried out a survey assessing sleep duration and daytime somnolence using the Pediatric Daytime Sleepiness Scale (PDSS). Sleep duration variables included weekdays' total sleep time, usual bedtimes, and the absolute weekday-to-weekend sleep time difference. Attention was assessed by the d2 test and by the coding subtest from the WISC-IV scale. Academic performance was obtained from literature and math grades. Structural equation modeling was used to assess the independent relationships between these variables, while controlling for confounding effects of other variables, in one single model. Standardized regression weights (SWR) for relationships between these variables are reported. Results: The study sample included 1,194 teenagers (mean age: 15 years; range: 13-17 y). Sleep duration was inversely associated with daytime somnolence (SWR = -0.36, p < 0.01) while sleepiness was negatively associated with attention (SWR = -0.13, p < 0.01). Attention scores correlated positively with academic results (SWR = 0.18, p < 0.01). Daytime somnolence correlated negatively with academic achievements (SWR = -0.16, p < 0.01). The model offered an acceptable fit according to usual measures (RMSEA = 0.0548, CFI = 0.874, NFI = 0.838). A Sobel test confirmed that short sleep duration influenced attention through daytime somnolence (p < 0.02), which in turn influenced academic achievements through reduced attention (p < 0.002). Conclusions: Poor academic achievements correlated with reduced attention, which in turn was related to daytime somnolence. Somnolence correlated with short sleep duration. Citation: Perez-Lloret S; Videla AJ; Richaudeau A; Vigo D; Rossi M; Cardinali DP; Perez-Chada D. A multi-step pathway connecting short sleep duration to daytime somnolence, reduced attention, and poor academic performance: an exploratory cross-sectional study in teenagers. J Clin Sleep Med 2013;9(5):469-473. PMID:23674938
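
    The Sobel test used to confirm the indirect paths has a simple closed form: for an indirect effect a*b, z = a*b / sqrt(b^2*SE_a^2 + a^2*SE_b^2). The Python helper below illustrates it with coefficients loosely based on the standardized weights reported above and made-up standard errors.

      import math

      def sobel_test(a, se_a, b, se_b):
          # Sobel z statistic and two-sided p-value for an indirect effect a*b,
          # where a and b are the two path coefficients and se_* their standard errors.
          z = (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
          p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
          return z, p

      # Hypothetical standard errors for the sleep-duration -> somnolence (a) and
      # somnolence -> attention (b) paths.
      z, p = sobel_test(a=-0.36, se_a=0.10, b=-0.13, se_b=0.04)
      print(f"Sobel z = {z:.2f}, p = {p:.4f}")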

  5. A Two-Step Approach to Analyze Satisfaction Data

    ERIC Educational Resources Information Center

    Ferrari, Pier Alda; Pagani, Laura; Fiorio, Carlo V.

    2011-01-01

    In this paper a two-step procedure based on Nonlinear Principal Component Analysis (NLPCA) and Multilevel models (MLM) for the analysis of satisfaction data is proposed. The basic hypothesis is that observed ordinal variables describe different aspects of a latent continuous variable, which depends on covariates connected with individual and...

  6. A multiple-block multigrid method for the solution of the three-dimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Atkins, Harold

    1991-01-01

    A multiple-block multigrid method for the solution of the three-dimensional Euler and Navier-Stokes equations is presented. The basic flow solver is a cell vertex method which employs central difference spatial approximations and Runge-Kutta time stepping. The use of local time stepping, implicit residual smoothing, multigrid techniques and variable-coefficient numerical dissipation, which together result in an efficient and robust scheme, is discussed. The multiblock strategy places the block loop within the Runge-Kutta loop such that accuracy and convergence are not affected by block boundaries. This has been verified by comparing the results of one- and two-block calculations in which the two-block grid is generated by splitting the one-block grid. Results are presented for both Euler and Navier-Stokes computations of wing/fuselage combinations.

  7. Effect of a lateral step-up exercise protocol on quadriceps and lower extremity performance.

    PubMed

    Worrell, T W; Borchert, B; Erner, K; Fritz, J; Leerar, P

    1993-12-01

    Closed kinetic chain exercises have been promoted as more functional and more appropriate than open kinetic chain exercises. Limited research exists demonstrating the effect of closed kinetic chain exercise on quadriceps and lower extremity performance. The purpose of this study was to determine the effect of a lateral step-up exercise protocol on isokinetic quadriceps peak torque and the following lower extremity activities: 1) leg press, 2) maximal step-up repetitions with body weight plus 25%, 3) hop for distance, and 4) 6-m timed hop. Twenty subjects participated in a 4-week training period, and 18 subjects served as controls. For the experimental group, a repeated measure ANOVA comparing pretest and posttest values revealed significant improvements in the leg press (p < or = .05), step-ups (p < or = .05), hop for distance (p < or = .05), and hop for time (p < or = .05) and no significant increase in isokinetic quadriceps peak torque (p > or = .05). Over the course of the training period, weight used for the step-up exercise increased (p < or = .05), repetitions decreased (p < or = .05), and step-up work did not change (p > or = .05). For the control group, no significant change (p > or = .05) occurred in any variable. The inability of the isokinetic dynamometer to detect increases in quadriceps performance is important because the isokinetic values are frequently used as criteria for return to functional activities. We conclude that closed kinetic chain testing and exercise provide additional means to assess and rehabilitate the lower extremity.

  8. Operational Demands of AAC Mobile Technology Applications on Programming Vocabulary and Engagement During Professional and Child Interactions.

    PubMed

    Caron, Jessica; Light, Janice; Drager, Kathryn

    2016-01-01

    Typically, the vocabulary in augmentative and alternative communication (AAC) technologies is pre-programmed by manufacturers or by parents and professionals outside of daily interactions. Because vocabulary needs are difficult to predict, young children who use aided AAC often do not have access to vocabulary concepts as the need and interest arise in their daily interactions, limiting their vocabulary acquisition and use. Ideally, parents and professionals would be able to add vocabulary to AAC technologies "just-in-time" as required during daily interactions. This study compared the effects of two AAC applications for mobile technologies, GoTalk Now (which required more programming steps) and EasyVSD (which required fewer programming steps), on the number of visual scene displays (VSDs) and hotspots created in 10-min interactions between eight professionals and preschool-aged children with typical development. The results indicated that all of the professionals were able to create VSDs and add vocabulary during interactions with the children, but they created more VSDs and hotspots with the app with fewer programming steps than with the one with more steps. Child engagement and programming participation levels were high with both apps, although higher levels for both variables were observed with the app with fewer programming steps. These results suggest that apps with fewer programming steps may reduce operational demands and better support professionals to (a) respond to the child's input, (b) use just-in-time programming during interactions, (c) provide access to more vocabulary, and (d) increase participation.

  9. Clustering of Variables for Mixed Data

    NASA Astrophysics Data System (ADS)

    Saracco, J.; Chavent, M.

    2016-05-01

    This chapter presents clustering of variables, whose aim is to lump together strongly related variables. The proposed approach works on a mixed data set, i.e. on a data set which contains numerical variables and categorical variables. Two algorithms for the clustering of variables are described: a hierarchical clustering and a k-means type clustering. A brief description of the PCAmix method (a principal component analysis for mixed data) is provided, since the calculation of the synthetic variables summarizing the obtained clusters of variables is based on this multivariate method. Finally, the R packages ClustOfVar and PCAmixdata are illustrated on real mixed data. The PCAmix and ClustOfVar approaches are first used for dimension reduction (step 1) before applying a standard clustering method in step 2 to obtain groups of individuals.
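
    For purely numerical variables the same idea can be sketched with a correlation-based dissimilarity and standard hierarchical clustering; the Python snippet below is a simplified stand-in for ClustOfVar/PCAmix (which are R packages and additionally handle categorical variables), using synthetic data.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      rng = np.random.default_rng(3)

      # Synthetic data: variables 0-2 share one latent factor, variables 3-5 another.
      latent = rng.normal(size=(200, 2))
      X = np.column_stack([latent[:, 0] + 0.3 * rng.normal(size=200) for _ in range(3)]
                          + [latent[:, 1] + 0.3 * rng.normal(size=200) for _ in range(3)])

      # Dissimilarity between variables: 1 - squared correlation, so strongly
      # related variables end up close together whatever the sign of the link.
      corr = np.corrcoef(X, rowvar=False)
      dissim = 1.0 - corr ** 2

      # Condensed upper-triangle distances, then hierarchical clustering of variables.
      iu = np.triu_indices_from(dissim, k=1)
      Z = linkage(dissim[iu], method="average")
      clusters = fcluster(Z, t=2, criterion="maxclust")
      print("cluster assignment per variable:", clusters)

      # In the spirit of PCAmix, a synthetic variable per cluster (e.g. the first
      # principal component of its members) would then summarise each group.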

  10. Uncertainty Quantification of Water Quality in Tamsui River in Taiwan

    NASA Astrophysics Data System (ADS)

    Kao, D.; Tsai, C.

    2017-12-01

    In Taiwan, modeling of non-point source pollution is unavoidably associated with uncertainty. The main purpose of this research is to better understand water contamination in the metropolitan Taipei area, and also to provide a new analysis method for government or companies to establish related control and design measures. In this research, three methods are utilized to carry out the uncertainty analysis step by step with Mike 21, which is widely used for hydrodynamics and water quality modeling, and the study area is the Tamsui River watershed. First, a sensitivity analysis is conducted, which can be used to rank the order of influential parameters and variables such as dissolved oxygen, nitrate, ammonia and phosphorus. Then we use first-order error analysis (FOEA) to determine the number of parameters that could significantly affect the variability of simulation results. Finally, a state-of-the-art method for uncertainty analysis called the perturbance moment method (PMM) is applied in this research, which is more efficient than Monte Carlo simulation (MCS). For MCS, the calculations may become cumbersome when involving multiple uncertain parameters and variables. For PMM, three representative points are used for each random variable, and the statistical moments (e.g., mean value, standard deviation) of the output can be presented by the representative points and perturbance moments based on the parallel axis theorem. With the assumption of independent parameters and variables, calculation time is significantly reduced for PMM as opposed to MCS for a comparable modeling accuracy.
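
    As a generic illustration of propagating uncertainty with three representative points per random variable, the Python sketch below uses a three-node Gauss-Hermite rule for independent normal inputs and a toy water-quality response; it is a stand-in for, not a reproduction of, the perturbance moment method or the Mike 21 model.

      import numpy as np
      from itertools import product

      def three_point_moments(model, means, stds):
          # Mean and standard deviation of model(x) for independent normal inputs,
          # using three Gauss-Hermite nodes per variable (mu and mu +/- sqrt(3)*sigma,
          # weights 2/3, 1/6, 1/6) combined in a full tensor product.
          nodes = [(m, m + np.sqrt(3.0) * s, m - np.sqrt(3.0) * s)
                   for m, s in zip(means, stds)]
          weights = (2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0)
          mean = sq = 0.0
          for combo in product(range(3), repeat=len(means)):
              w = np.prod([weights[i] for i in combo])
              y = model([nodes[k][i] for k, i in enumerate(combo)])
              mean += w * y
              sq += w * y * y
          return mean, np.sqrt(max(sq - mean**2, 0.0))

      # Toy dissolved-oxygen response to two uncertain rate coefficients (illustrative only).
      do_model = lambda p: 8.0 - 1.5 * p[0] + 0.8 * p[1] - 0.2 * p[0] * p[1]
      mu, sigma = three_point_moments(do_model, means=[2.0, 1.0], stds=[0.3, 0.2])
      print(f"DO mean = {mu:.2f} mg/L, std = {sigma:.2f} mg/L")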

  11. Uncertainty analysis of gross primary production partitioned from net ecosystem exchange measurements

    NASA Astrophysics Data System (ADS)

    Raj, R.; Hamm, N. A. S.; van der Tol, C.; Stein, A.

    2015-08-01

    Gross primary production (GPP), separated from flux tower measurements of net ecosystem exchange (NEE) of CO2, is used increasingly to validate process-based simulators and remote sensing-derived estimates of simulated GPP at various time steps. Proper validation should include the uncertainty associated with this separation at different time steps. This can be achieved by using a Bayesian framework. In this study, we estimated the uncertainty in GPP at half-hourly time steps. We used a non-rectangular hyperbola (NRH) model to separate GPP from flux tower measurements of NEE at the Speulderbos forest site, The Netherlands. The NRH model included the variables that influence GPP, in particular radiation and temperature. In addition, the NRH model provided a robust empirical relationship between radiation and GPP by including the degree of curvature of the light response curve. Parameters of the NRH model were fitted to the measured NEE data for every 10-day period during the growing season (April to October) in 2009. Adopting a Bayesian approach, we defined the prior distribution of each NRH parameter. Markov chain Monte Carlo (MCMC) simulation was used to update the prior distribution of each NRH parameter. This allowed us to estimate the uncertainty in the separated GPP at half-hourly time steps. This yielded the posterior distribution of GPP at each half hour and allowed the quantification of uncertainty. The time series of posterior distributions thus obtained allowed us to estimate the uncertainty at daily time steps. We compared the informative with non-informative prior distributions of the NRH parameters. The results showed that both choices of prior produced similar posterior distributions of GPP. This will provide relevant and important information for the validation of process-based simulators in the future. Furthermore, the obtained posterior distributions of NEE and the NRH parameters are of interest for a range of applications.
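
    The non-rectangular hyperbola light response has the standard closed form GPP(I) = (alpha*I + P_max - sqrt((alpha*I + P_max)^2 - 4*theta*alpha*I*P_max)) / (2*theta). The Python sketch below evaluates it and propagates parameter uncertainty by simple Monte Carlo sampling; the parameter distributions are assumptions standing in for the MCMC posterior of the paper.

      import numpy as np

      def nrh_gpp(par, alpha, p_max, theta):
          # Non-rectangular hyperbola light response: GPP as a function of PAR.
          s = alpha * par + p_max
          return (s - np.sqrt(s ** 2 - 4.0 * theta * alpha * par * p_max)) / (2.0 * theta)

      rng = np.random.default_rng(4)
      par = np.linspace(0.0, 1500.0, 7)            # incident radiation (umol m-2 s-1)

      # Assumed parameter distributions (initial slope, asymptote, curvature).
      alpha = rng.normal(0.04, 0.005, size=5000)
      p_max = rng.normal(25.0, 3.0, size=5000)
      theta = np.clip(rng.normal(0.8, 0.05, size=5000), 0.01, 0.99)

      samples = nrh_gpp(par[:, None], alpha, p_max, theta)   # (len(par), n_samples)
      lo, med, hi = np.percentile(samples, [2.5, 50.0, 97.5], axis=1)
      for p, a, m, b in zip(par, lo, med, hi):
          print(f"PAR={p:6.0f}  GPP={m:5.1f}  95% interval [{a:5.1f}, {b:5.1f}]")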

  12. Harvesting model uncertainty for the simulation of interannual variability

    NASA Astrophysics Data System (ADS)

    Misra, Vasubandhu

    2009-08-01

    An innovative modeling strategy is introduced to account for uncertainty in the convective parameterization (CP) scheme of a coupled ocean-atmosphere model. The methodology involves calling the CP scheme several times at every given time step of the model integration to pick the most probable convective state. Each call of the CP scheme is unique in that one of its critical parameter values (which is unobserved but required by the scheme) is chosen randomly over a given range. This methodology is tested with the relaxed Arakawa-Schubert CP scheme in the Center for Ocean-Land-Atmosphere Studies (COLA) coupled general circulation model (CGCM). Relative to the control COLA CGCM, this methodology shows improvement in the El Niño-Southern Oscillation simulation and the Indian summer monsoon precipitation variability.
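
    The strategy of calling the convective parameterization several times per time step with a randomly drawn critical parameter and retaining the most probable state can be sketched generically in Python; the toy scheme, the parameter range and the "most probable" criterion below (the member closest to the ensemble mean) are illustrative assumptions, not the relaxed Arakawa-Schubert implementation.

      import numpy as np

      rng = np.random.default_rng(5)

      def convection_scheme(state, critical_param):
          # Toy stand-in for a convective parameterization: the heating increment
          # depends on an unobserved critical parameter.
          return 0.1 * max(state["cape"] - critical_param, 0.0)

      def stochastic_convection(state, n_calls=10, param_range=(100.0, 400.0)):
          # Call the scheme n_calls times with a randomly drawn critical parameter
          # and keep the most probable candidate (here: closest to the ensemble mean).
          params = rng.uniform(param_range[0], param_range[1], size=n_calls)
          candidates = np.array([convection_scheme(state, p) for p in params])
          return candidates[np.argmin(np.abs(candidates - candidates.mean()))]

      state = {"cape": 350.0}
      for step in range(3):                   # three model time steps
          heating = stochastic_convection(state)
          state["cape"] -= 5.0 * heating      # crude feedback on the model state
          print(f"step {step}: heating={heating:.2f}, CAPE={state['cape']:.1f}")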

  13. Age-related changes in spatiotemporal characteristics of gait accompany ongoing lower limb linear growth in late childhood and early adolescence.

    PubMed

    Froehle, Andrew W; Nahhas, Ramzi W; Sherwood, Richard J; Duren, Dana L

    2013-05-01

    Walking gait is generally held to reach maturity, including walking at adult-like velocities, by 7-8 years of age. Lower limb length, however, is a major determinant of gait, and continues to increase until 13-15 years of age. This study used a sample from the Fels Longitudinal Study (ages 8-30 years) to test the hypothesis that walking with adult-like velocity on immature lower limbs results in the retention of immature gait characteristics during late childhood and early adolescence. There was no relationship between walking velocity and age in this sample, whereas the lower limb continued to grow, reaching maturity at 13.2 years in females and 15.6 years in males. Piecewise linear mixed models regression analysis revealed significant age-related trends in normalized cadence, initial double support time, single support time, base of support, and normalized step length in both sexes. Each trend reached its own, variable-specific age at maturity, after which the gait variables' relationships with age reached plateaus and did not differ significantly from zero. Offsets in ages at maturity occurred among the gait variables, and between the gait variables and lower limb length. The sexes also differed in their patterns of maturation. Generally, however, immature walkers of both sexes took more frequent and relatively longer steps than did mature walkers. These results support the hypothesis that maturational changes in gait accompany ongoing lower limb growth, with implications for diagnosing, preventing, and treating movement-related disorders and injuries during late childhood and early adolescence. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Carbon fluxes in tropical forest ecosystems: the value of Eddy-covariance data for individual-based dynamic forest gap models

    NASA Astrophysics Data System (ADS)

    Roedig, Edna; Cuntz, Matthias; Huth, Andreas

    2015-04-01

    The effects of climatic inter-annual fluctuations and human activities on the global carbon cycle are uncertain and currently a major issue in global vegetation models. Individual-based forest gap models, on the other hand, model vegetation structure and dynamics on a small spatial (<100 ha) and large temporal scale (>1000 years). They are well-established tools to reproduce successions of highly-diverse forest ecosystems and investigate disturbances such as logging or fire events. However, the parameterizations of the relationships between short-term climate variability and forest model processes are often uncertain in these models (e.g. daily variable temperature and gross primary production (GPP)) and cannot be constrained from forest inventories. We addressed this uncertainty and linked high-resolution Eddy-covariance (EC) data with an individual-based forest gap model. The forest model FORMIND was applied to three diverse tropical forest sites in the Amazonian rainforest. Species diversity was categorized into three plant functional types. The parameterizations for the steady state of biomass and forest structure were calibrated and validated with different forest inventories. The parameterizations of relationships between short-term climate variability and forest model processes were evaluated with EC-data on a daily time step. The validations of the steady state showed that the forest model could reproduce biomass and forest structures from forest inventories. The daily estimations of carbon fluxes showed that the forest model reproduces GPP as observed by the EC-method. Daily fluctuations of GPP were clearly reflected as a response to daily climate variability. Ecosystem respiration remains a challenge on a daily time step due to a simplified soil respiration approach. In the long term, however, the dynamic forest model is expected to estimate carbon budgets for highly-diverse tropical forests where EC-measurements are rare.

  15. A Systematic Approach to the Study of Accelerated weathering of Building Joint Sealants

    Treesearch

    Christopher C. White; Donald L. Hunston; Kar Tean Tan; James J. Filliben; Adam L. Pintar; Greg Schueneman

    2012-01-01

    An accurate service life prediction model is needed for building joint sealants in order to greatly reduce the time to market of a new product and reduce the risk of introducing a poorly performing product into the marketplace. A stepping stone to the success of this effort is the precise control of environmental variables in a laboratory accelerated test apparatus in...

  16. Design, development, and application of LANDIS-II, a spatial landscape simulation model with flexible temporal and spatial resolution

    Treesearch

    Robert M. Scheller; James B. Domingo; Brian R. Sturtevant; Jeremy S. Williams; Arnold Rudy; Eric J. Gustafson; David J. Mladenoff

    2007-01-01

    We introduce LANDIS-II, a landscape model designed to simulate forest succession and disturbances. LANDIS-II builds upon and preserves the functionality of previous LANDIS forest landscape simulation models. LANDIS-II is distinguished by the inclusion of variable time steps for different ecological processes; our use of a rigorous development and testing process used...

  17. SENSITIVITY OF THE REGIONAL WATER BALANCE IN THE COLUMBIA RIVER BASIN TO CLIMATE VARIABILITY: APPLICATION OF A SPATIALLY DISTRIBUTED WATER BALANCE MODEL

    EPA Science Inventory

    A one-dimensional water balance model was developed and used to simulate water balance for the Columbia River Basin. The model was run over a 10 km x 10 km grid for the United States' portion of the basin. The regional water balance was calculated using a monthly time-step for a re...

  18. Determining Directional Dependency in Causal Associations

    PubMed Central

    Pornprasertmanit, Sunthud; Little, Todd D.

    2014-01-01

    Directional dependency is a method to determine the likely causal direction of effect between two variables. This article aims to critique and improve upon the use of directional dependency as a technique to infer causal associations. We comment on several issues raised by von Eye and DeShon (2012), including: encouraging the use of the signs of skewness and excess kurtosis of both variables, discouraging the use of D'Agostino's K2, and encouraging the use of directional dependency to compare variables only within time points. We offer improved steps for determining directional dependency that fix the problems we note. Next, we discuss how to integrate directional dependency into longitudinal data analysis with two variables. We also examine the accuracy of directional dependency evaluations when several regression assumptions are violated. Directional dependency can suggest the direction of a relation if (a) the regression error in the population is normal, (b) an unobserved explanatory variable correlates with any of the variables at .2 or less, (c) a curvilinear relation between the two variables is not strong (standardized regression coefficient ≤ .2), (d) there are no bivariate outliers, and (e) both variables are continuous. PMID:24683282

  19. Effects of the voltage and time of anodization on modulation of the pore dimensions of AAO films for nanomaterials synthesis

    NASA Astrophysics Data System (ADS)

    Chahrour, Khaled M.; Ahmed, Naser M.; Hashim, M. R.; Elfadill, Nezar G.; Maryam, W.; Ahmad, M. A.; Bououdina, M.

    2015-12-01

    Highly ordered, hexagonal nanoporous anodic aluminum oxide (AAO) was successfully fabricated by two-step anodization of a 1 μm thick Al film pre-deposited onto a Si substrate. The growth mechanism of the porous AAO film was investigated through the anodization current-time behavior at different anodizing voltages and by visualizing the microstructure at each stage of the two-step anodization using cross-sectional and top-view FESEM imaging. Optimum conditions for the process variables, such as the annealing time of the as-deposited Al thin film and the pore-widening time of the porous AAO film, were experimentally determined to obtain AAO films with a uniformly distributed and vertically aligned porous microstructure. Pores with diameters ranging from 50 nm to 110 nm and thicknesses between 250 nm and 1400 nm were obtained by controlling the two main influential anodization parameters: the anodizing voltage and the time of the second-step anodization. X-ray diffraction analysis reveals an amorphous-to-crystalline phase transformation after annealing at temperatures above 800 °C. AFM images show optimum ordering of the porous AAO film anodized under the low-voltage condition. AAO films may be exploited as templates with the desired size distribution for the fabrication of CuO nanorod arrays. Such nanostructured materials exhibit unique properties and hold high potential for nanotechnology devices.

  20. Time Is Not on Our Side: How Radiology Practices Should Manage Customer Queues.

    PubMed

    Loving, Vilert A; Ellis, Richard L; Rippee, Robert; Steele, Joseph R; Schomer, Donald F; Shoemaker, Stowe

    2017-11-01

    As health care shifts toward patient-centered care, wait times have received increasing scrutiny as an important metric for patient satisfaction. Long queues form when radiology practices inefficiently service their customers, leading to customer dissatisfaction and a lower perception of value. This article describes a four-step framework for radiology practices to resolve problematic queues: (1) analyze factors contributing to queue formation; (2) improve processes to reduce service times; (3) reduce variability; (4) address the psychology of queues. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  1. International Conference on Computing Methods in Applied Sciences and Engineering (9th) Held in Paris, France on 29 January-2 February 1990

    DTIC Science & Technology

    1990-02-02

    Indexed excerpts: abbreviations - National Aero-Space Plane; NTC (no time counter); TSS-2 (Tethered Satellite System-2); VHS (variable hard sphere); VSL (viscous shock-layer) ... required at each time step to evaluate the mass fractions Y_i, it is shown in [21] that the matrix of this linear system is an M-matrix (see e.g. [42]), and ... first rewrite system (4.7)-(4.8) in the following form, separating the time-dependent, convective, diffusive and reactive terms ...

  2. On the performance of exponential integrators for problems in magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Einkemmer, Lukas; Tokman, Mayya; Loffeld, John

    2017-02-01

    Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying large scale behavior of laboratory and astrophysical plasmas. In many problems numerical solution of MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK-type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.
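
    For a semilinear stiff system y' = A y + g(y), the simplest member of this family is exponential Euler, y_{n+1} = exp(hA) y_n + h phi_1(hA) g(y_n) with phi_1(z) = (exp(z) - 1)/z. The dense-matrix Python sketch below illustrates that one scheme on a made-up stiff system; it is nothing like the EPIRK/EPIC or CVODE machinery compared in the paper.

      import numpy as np
      from scipy.linalg import expm

      def phi1(M):
          # phi_1(M) = M^{-1} (expm(M) - I), evaluated via an augmented-matrix
          # trick so it also works when M is (nearly) singular.
          n = M.shape[0]
          aug = np.zeros((2 * n, 2 * n))
          aug[:n, :n] = M
          aug[:n, n:] = np.eye(n)
          return expm(aug)[:n, n:]

      def exponential_euler(A, g, y0, h, n_steps):
          # y_{n+1} = e^{hA} y_n + h phi_1(hA) g(y_n) for y' = A y + g(y).
          E, P = expm(h * A), phi1(h * A)
          y = np.asarray(y0, dtype=float)
          for _ in range(n_steps):
              y = E @ y + h * (P @ g(y))
          return y

      # Stiff linear part plus a mild nonlinearity (illustrative only).
      A = np.array([[-1000.0, 1.0], [0.0, -0.5]])
      g = lambda y: np.array([0.0, 0.1 * np.sin(y[0])])
      print(exponential_euler(A, g, y0=[1.0, 1.0], h=0.1, n_steps=100))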

  3. Highly accurate adaptive TOF determination method for ultrasonic thickness measurement

    NASA Astrophysics Data System (ADS)

    Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing

    2018-04-01

    Determining the time of flight (TOF) is critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals can induce significant TOF determination errors. In this paper, an adaptive time delay estimation method has been developed to improve the accuracy of TOF determination. An improved variable step size adaptive algorithm with a comprehensive step size control function is proposed. Meanwhile, a cubic spline fitting approach is also employed to alleviate the restriction of the finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis. The simulation results demonstrated the performance advantage of the proposed TOF determination method over existing TOF determination methods. Compared with the conventional fixed step size algorithm and the Kwong and Aboulnasr algorithms, the steady-state mean square deviation of the proposed algorithm was generally lower, which makes the proposed algorithm more suitable for TOF determination. Further, ultrasonic thickness measurement experiments were performed on aluminum alloy plates with various thicknesses. They indicated that the proposed TOF determination method was more robust even under low SNR conditions, and that the ultrasonic thickness measurement accuracy could be significantly improved.
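
    The goal of sub-sample TOF resolution via spline fitting can be illustrated with a simpler cross-correlation estimator refined by a cubic spline around the correlation peak; the waveform, sampling rate and true delay in the Python sketch below are made-up numbers, and the variable step size adaptive filter itself is not reproduced.

      import numpy as np
      from scipy.interpolate import CubicSpline
      from scipy.signal import correlate

      FS = 50e6                               # sampling rate (Hz)
      t = np.arange(0, 20e-6, 1.0 / FS)

      def burst(delay):
          # Gaussian-windowed 5 MHz tone burst arriving at `delay` seconds.
          return np.exp(-((t - delay) / 0.5e-6) ** 2) * np.sin(2 * np.pi * 5e6 * (t - delay))

      rng = np.random.default_rng(6)
      tx = burst(2.0e-6)
      rx = burst(2.0e-6 + 3.731e-7) + 0.05 * rng.standard_normal(t.size)  # true TOF 373.1 ns

      # Coarse TOF from the cross-correlation peak (one-sample resolution)...
      xc = correlate(rx, tx, mode="full")
      lags = np.arange(-len(tx) + 1, len(tx))
      k = int(np.argmax(xc))

      # ...then refine to sub-sample precision with a cubic spline around the peak.
      window = slice(k - 5, k + 6)
      spline = CubicSpline(lags[window], xc[window])
      fine = np.linspace(lags[k] - 1, lags[k] + 1, 2001)
      tof = fine[np.argmax(spline(fine))] / FS
      print(f"estimated TOF: {tof * 1e9:.1f} ns")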

  4. Spatiotemporal dynamics of surface water networks across a global biodiversity hotspot: implications for conservation

    NASA Astrophysics Data System (ADS)

    Tulbure, Mirela G.; Kininmonth, Stuart; Broich, Mark

    2014-11-01

    The concept of habitat networks represents an important tool for landscape conservation and management at regional scales. Previous studies simulated degradation of temporally fixed networks but few quantified the change in network connectivity from disintegration of key features that undergo naturally occurring spatiotemporal dynamics. This is particularly of concern for aquatic systems, which typically show high natural spatiotemporal variability. Here we focused on the Swan Coastal Plain, a bioregion that encompasses a global biodiversity hotspot in Australia with over 1500 water bodies of high biodiversity. Using graph theory, we conducted a temporal analysis of water body connectivity over 13 years of variable climate. We derived large networks of surface water bodies using Landsat data (1999-2011). We generated an ensemble of 278 potential networks at three dispersal distances approximating the maximum dispersal distance of different water dependent organisms. We assessed network connectivity through several network topology metrics and quantified the resilience of the network topology during wet and dry phases. We identified 'stepping stone' water bodies across time and compared our networks with theoretical network models with known properties. Results showed a highly dynamic seasonal pattern of variability in network topology metrics. A decline in connectivity over the 13 years was noted with potential negative consequences for species with limited dispersal capacity. The networks described here resemble theoretical scale-free models, also known as the 'rich get richer' algorithm. The 'stepping stone' water bodies are located in the area around the Peel-Harvey Estuary, a Ramsar listed site, and some are located in a national park. Our results describe a powerful approach that can be implemented when assessing the connectivity for a particular organism with known dispersal distance. The approach of identifying the surface water bodies that act as 'stepping stones' over time may help prioritize surface water bodies that are essential for maintaining regional scale connectivity.

  5. Time-derivative preconditioning for viscous flows

    NASA Technical Reports Server (NTRS)

    Choi, Yunho; Merkle, Charles L.

    1991-01-01

    A time-derivative preconditioning algorithm that is effective over a wide range of flow conditions, from inviscid to very diffusive flows and from low speed to supersonic flows, was developed. This algorithm uses a viscous set of primary dependent variables to introduce well-conditioned eigenvalues and to avoid having a nonphysical time reversal for viscous flow. The resulting algorithm also provides a mechanism for controlling the inviscid and viscous time step parameters to be of order one for very diffusive flows, thereby ensuring rapid convergence for very viscous flows as well as for inviscid flows. Convergence capabilities are demonstrated through computation of a wide variety of problems.

  6. A Numerical Scheme for Ordinary Differential Equations Having Time Varying and Nonlinear Coefficients Based on the State Transition Matrix

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2002-01-01

    A variable order method of integrating initial value ordinary differential equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time variant and nonlinear systems of equations. While it is more complex than most other methods, it produces exact solutions at arbitrary time step size when the time variation of the system can be modeled exactly by a polynomial. Solutions to several nonlinear problems exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with an exact solution and with solutions obtained by established methods.
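
    For the linear time-invariant special case x' = A x + B u with u held constant over a step, the state-transition-matrix update is exact for any step size: x_{k+1} = e^{Ah} x_k + A^{-1}(e^{Ah} - I) B u_k. The Python sketch below shows only that one-step map on a made-up system; the variable-order treatment of time-varying and nonlinear coefficients in the paper is not reproduced.

      import numpy as np
      from scipy.linalg import expm

      def discretize(A, B, h):
          # Exact zero-order-hold discretisation via one augmented matrix exponential:
          # returns (Phi, Gamma) such that x_{k+1} = Phi x_k + Gamma u_k.
          n, m = B.shape
          aug = np.zeros((n + m, n + m))
          aug[:n, :n] = A * h
          aug[:n, n:] = B * h
          M = expm(aug)
          return M[:n, :n], M[:n, n:]

      # Lightly damped oscillator driven by a constant force over each step.
      A = np.array([[0.0, 1.0], [-4.0, -0.2]])
      B = np.array([[0.0], [1.0]])
      Phi, Gamma = discretize(A, B, h=0.25)        # exact regardless of step size

      x = np.array([1.0, 0.0])
      for _ in range(8):
          x = Phi @ x + Gamma @ np.array([0.5])    # constant input u = 0.5
      print(np.round(x, 4))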

  7. Dynamic investigation of the role of the surface sulfates in NO{sub x} reduction and SO{sub 2} oxidation over V{sub 2}O{sub 5}-WO{sub 3}/TiO{sub 2} catalysts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orsenigo, C.; Lietti, L.; Tronconi, E.

    1998-06-01

    Transient experiments performed over synthesized and commercial V{sub 2}O{sub 5}-WO{sub 3}/TiO{sub 2} catalysts during catalyst conditioning and during step changes of the operating variables (SO{sub 2} inlet concentration and temperature) show that conditioning of the catalyst is required to attain significant and reproducible steady-state data in both the reduction of NO{sub x} and the oxidation of SO{sub 2}. The response time of conditioning for NO{sub x} reduction is a few hours and that for SO{sub 2} oxidation is several hours. Fourier transform infrared spectroscopy, temperature-programmed decomposition, and thermogravimetric measurements showed that catalyst conditioning is associated with a slow process of buildup of sulfates: the different characteristic conditioning times observed in the reduction of NO{sub x} and in the oxidation of SO{sub 2} suggest that the buildup of sulfates occurs first at the vanadyl sites and later on at the exposed titania surface. Formation of sulfates at or near the vanadyl sites increases the reactivity in the de-NO{sub x} reaction, possibly due to the increase in the Brønsted and Lewis acidity of the catalyst, whereas the titania surface acts as an SO{sub 3} acceptor and affects the outlet SO{sub 3} concentration during catalyst conditioning for the SO{sub 2} oxidation reaction. The response time to step changes in SO{sub 2} concentration and temperature is a few hours in the case of SO{sub 2} oxidation and much shorter in the case of NO{sub x} reduction. The different time responses associated with conditioning and with step changes in the settings of the operating variables have been rationalized in terms of the different extent of perturbation of the sulfate coverage experienced by the catalyst.

  8. Method of controlling a variable geometry type turbocharger

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirabayashi, Y.

    1988-08-23

    This patent describes a method of controlling the supercharging pressure of a variable geometry type turbocharger having a bypass, comprising the following steps which are carried out successively: receiving signals from an engine speed sensor and from an engine knocking sensor; receiving a signal from a throttle valve sensor; judging whether or not an engine is being accelerated, and proceeding to the step below if the engine is being accelerated and to the step below if the engine is not being accelerated, i.e., if the engine is in a constant speed operation; determining a first correction value and proceeding to the step below; judging whether or not the engine is knocking, and proceeding to step (d) if knocking is occurring and to step (f) below if no knocking is occurring; determining a second correction value and proceeding to the next step; receiving signals from the engine speed sensor and from an airflow meter which measures the quantity of airflow to be supplied to the engine; calculating an airflow rate per engine revolution; determining a duty value according to the calculated airflow rate; transmitting the corrected duty value to control means for controlling the geometry of the variable geometry type turbocharger and the opening of the bypass of the turbocharger, thereby controlling the supercharging pressure of the turbocharger.

  9. Kinematic Constraints Associated with the Acquisition of Overarm Throwing Part I: Step and Trunk Actions

    ERIC Educational Resources Information Center

    Stodden, David F.; Langendorfer, Stephen J.; Fleisig, Glenn S.; Andrews, James R.

    2006-01-01

    The purposes of this study were to: (a) examine differences within specific kinematic variables and ball velocity associated with developmental component levels of step and trunk action (Roberton & Halverson, 1984), and (b) if the differences in kinematic variables were significantly associated with the differences in component levels, determine...

  10. Sustainability of a Targeted Intervention Package: First Step to Success in Oregon

    ERIC Educational Resources Information Center

    Loman, Sheldon L.; Rodriguez, Billie Jo; Horner, Robert H.

    2010-01-01

    Variables affecting the sustained implementation of evidence-based practices are receiving increased attention. A descriptive analysis of the variables associated with sustained implementation of First Step to Success (FSS), a targeted intervention for young students at risk for behavior disorders, is provided. Measures based on a conceptual model...

  11. Predictors of posttraumatic stress symptoms following childbirth

    PubMed Central

    2014-01-01

    Background Posttraumatic stress disorder (PTSD) following childbirth has gained growing attention in recent years. Although a number of predictors for PTSD following childbirth have been identified (e.g., history of sexual trauma, emergency caesarean section, low social support), only very few studies have tested predictors derived from current theoretical models of the disorder. This study first aimed to replicate the association of PTSD symptoms after childbirth with predictors identified in earlier research. Second, cognitive predictors derived from Ehlers and Clark's (2000) model of PTSD were examined. Methods N = 224 women who had recently given birth completed an online survey. In addition to computing single correlations between PTSD symptom severities and variables of interest, in a hierarchical multiple regression analysis posttraumatic stress symptoms were predicted by (1) prenatal variables, (2) birth-related variables, (3) postnatal social support, and (4) cognitive variables. Results Wellbeing during pregnancy and age were the only prenatal variables contributing significantly to the explanation of PTSD symptoms in the first step of the regression analysis. In the second step, the birth-related variables peritraumatic emotions and wellbeing during childbed significantly increased the explanation of variance. Despite showing significant bivariate correlations, social support entered in the third step did not predict PTSD symptom severities over and above the variables included in the first two steps. However, with the exception of peritraumatic dissociation, all cognitive variables emerged as powerful predictors and increased the amount of variance explained from 43% to a total amount of 68%. Conclusions The findings suggest that the prediction of PTSD following childbirth can be improved by focusing on variables derived from a current theoretical model of the disorder. PMID:25026966

  12. A descriptive study of step alignment and foot positioning relative to the tee by professional rugby union goal-kickers.

    PubMed

    Cockcroft, John; Van Den Heever, Dawie

    2016-01-01

    This study describes foot positioning during the final two steps of the approach to the ball amongst professional rugby goal-kickers. A 3D optical motion capture system was used to test 15 goal-kickers performing 10 goal-kicks. The distance and direction of each step, as well as individual foot contact positions relative to the tee, were measured. The intra- and inter-subject variability was calculated as well as the correlation (Pearson) between the measurements and participant anthropometrics. Inter-subject variability for the final foot position was lowest (placed 0.03 ± 0.07 m behind and 0.33 ± 0.03 m lateral to the tee) and highest for the penultimate step distance (0.666 ± 0.149 m), performed at an angle of 36.1 ± 8.5° external to the final step. The final step length was 1.523 ± 0.124 m, executed at an external angle of 35.5 ± 7.4° to the target line. The intra-subject variability was very low; distances and angles for the 10 kicks varied per participant by 1.6-3.1 cm and 0.7-1.6°, respectively. The results show that even though the participants had variability in their run-up to the tee, final foot position next to the tee was very similar and consistent. Furthermore, the inter- and intra-subject variability could not be attributed to differences in anthropometry. These findings may be useful as normative reference data for coaching, although further work is required to understand the role of other factors such as approach speed and body alignment.

  13. Robotic partial nephrectomy shortens warm ischemia time, reducing suturing time kinetics even for an experienced laparoscopic surgeon: a comparative analysis.

    PubMed

    Faria, Eliney F; Caputo, Peter A; Wood, Christopher G; Karam, Jose A; Nogueras-González, Graciela M; Matin, Surena F

    2014-02-01

    Laparoscopic and robotic partial nephrectomy (LPN and RPN) outcomes are strongly influenced by tumor complexity and the learning curve. We analyzed a consecutive experience with LPN and RPN to discern whether warm ischemia time (WIT) is in fact improved when accounting for these two confounding variables and, if so, by which particular component of WIT. This is a retrospective analysis of consecutive procedures performed by a single surgeon between 2002-2008 (LPN) and 2008-2012 (RPN). Specifically, the durations of individual steps, including tumor excision and suturing of the intrarenal defect and parenchyma, were recorded at the time of surgery. Multivariate and univariate analyses were used to evaluate the influence of the learning curve, tumor complexity, and the time kinetics of the individual steps on WIT. Additionally, we considered the effect of RPN on the learning curve. A total of 146 LPNs and 137 RPNs were included. Statistically significant differences in favor of RPN were found for renal function, WIT, suturing time, and renorrhaphy time (p < 0.05). In the univariate analysis, surgical procedure, learning curve, clinical tumor size, and RENAL nephrometry score were statistically significant predictors of WIT (p < 0.05). RPN decreased the WIT on average by approximately 7 min compared to LPN even when adjusting for learning curve, tumor complexity, and both together (p < 0.001). We found RPN was associated with a shorter WIT when controlling for the influence of the learning curve and tumor complexity. The time required for tumor excision was not shortened, but the time required for the suturing steps was significantly shortened.

  14. Gaze shifts during dual-tasking stair descent.

    PubMed

    Miyasike-daSilva, Veronica; McIlroy, William E

    2016-11-01

    To investigate the role of vision in stair locomotion, young adults descended a seven-step staircase during unrestricted walking (CONTROL) and while performing a concurrent visual reaction time (RT) task displayed on a monitor. The monitor was located at either 3.5 m (HIGH) or 0.5 m (LOW) above ground level at the end of the stairway, which either restricted (HIGH) or facilitated (LOW) the view of the stairs in the lower field of view as participants walked downstairs. Downward gaze shifts (recorded with an eye tracker) and gait speed were significantly reduced in HIGH and LOW compared with CONTROL. Gaze and locomotor behaviour were not different between HIGH and LOW. However, inter-individual variability increased in HIGH, in which participants combined different response characteristics including slower walking, handrail use, downward gaze, and/or increasing RTs. The fastest RTs occurred in the midsteps (non-transition steps). While gait and visual task performance were not statistically different prior to the top and bottom transition steps, gaze behaviour and RT were more variable prior to transition steps in HIGH. This study demonstrated that, in the presence of a visual task, people do not look down as often when walking downstairs and require only minimal adjustments provided that the view of the stairs is available in the lower field of view. The middle of the stairs seems to require less from executive function, whereas visual attention appears to be required to detect the last transition via gaze shifts or peripheral vision.

  15. GPS Imaging of Time-Variable Earthquake Hazard: The Hilton Creek Fault, Long Valley California

    NASA Astrophysics Data System (ADS)

    Hammond, W. C.; Blewitt, G.

    2016-12-01

    The Hilton Creek Fault, in Long Valley, California is a down-to-the-east normal fault that bounds the eastern edge of the Sierra Nevada/Great Valley microplate, and lies half inside and half outside the magmatically active caldera. Despite the dense coverage with GPS networks, the rapid and time-variable surface deformation attributable to sporadic magmatic inflation beneath the resurgent dome makes it difficult to use traditional geodetic methods to estimate the slip rate of the fault. While geologic studies identify cumulative offset, constrain timing of past earthquakes, and constrain a Quaternary slip rate to within 1-5 mm/yr, it is not currently possible to use geologic data to evaluate how the potential for slip correlates with transient caldera inflation. To estimate time-variable seismic hazard of the fault we estimate its instantaneous slip rate from GPS data using a new set of algorithms for robust estimation of velocity and strain rate fields and fault slip rates. From the GPS time series, we use the robust MIDAS algorithm to obtain time series of velocity that are highly insensitive to the effects of seasonality, outliers and steps in the data. We then use robust imaging of the velocity field to estimate a gridded time variable velocity field. Then we estimate fault slip rate at each time using a new technique that forms ad-hoc block representations that honor fault geometries, network complexity, connectivity, but does not require labor-intensive drawing of block boundaries. The results are compared to other slip rate estimates that have implications for hazard over different time scales. Time invariant long term seismic hazard is proportional to the long term slip rate accessible from geologic data. Contemporary time-invariant hazard, however, may differ from the long term rate, and is estimated from the geodetic velocity field that has been corrected for the effects of magmatic inflation in the caldera using a published model of a dipping ellipsoidal magma chamber. Contemporary time-variable hazard can be estimated from the time variable slip rate estimated from the evolving GPS velocity field.
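
    The robust trend estimation step can be illustrated with a median-of-pairwise-slopes velocity estimate in the spirit of MIDAS (pairs separated by roughly one year to suppress seasonality); this is a simplified sketch, not the published MIDAS algorithm, and the uncertainty scaling is illustrative.

```python
import numpy as np

def median_trend(t, y, pair_sep=1.0, tol=0.05):
    """Rough MIDAS-like velocity estimate: the median of slopes between
    observation pairs separated by ~pair_sep (one year), which suppresses
    seasonal signals and is insensitive to outliers and small steps.
    Illustrative sketch only, not the published MIDAS algorithm."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    slopes = []
    for i in range(len(t)):
        k = np.searchsorted(t, t[i] + pair_sep)   # first sample ~one year later
        if k < len(t) and (t[k] - t[i]) - pair_sep <= tol:
            slopes.append((y[k] - y[i]) / (t[k] - t[i]))
    slopes = np.asarray(slopes)
    v = np.median(slopes)
    mad = np.median(np.abs(slopes - v))           # robust scatter of the slopes
    return v, 1.4826 * mad / np.sqrt(max(len(slopes), 1))  # crude uncertainty

# usage: 6 years of daily positions with an annual cycle, a step, and a trend
t = np.arange(0, 6, 1 / 365.25)
y = 2.0 * t + 3.0 * np.sin(2 * np.pi * t) + np.where(t > 3, 5.0, 0.0)
v, sv = median_trend(t, y)
print(f"velocity ~ {v:.2f} +/- {sv:.2f} units/yr (true trend 2.0)")
```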

  16. A dataset of future daily weather data for crop modelling over Europe derived from climate change scenarios

    NASA Astrophysics Data System (ADS)

    Duveiller, G.; Donatelli, M.; Fumagalli, D.; Zucchini, A.; Nelson, R.; Baruth, B.

    2017-02-01

    Coupled atmosphere-ocean general circulation models (GCMs) simulate different realizations of possible future climates at global scale under contrasting scenarios of land-use and greenhouse gas emissions. Such data require several additional processing steps before it can be used to drive impact models. Spatial downscaling, typically by regional climate models (RCM), and bias-correction are two such steps that have already been addressed for Europe. Yet, the errors in resulting daily meteorological variables may be too large for specific model applications. Crop simulation models are particularly sensitive to these inconsistencies and thus require further processing of GCM-RCM outputs. Moreover, crop models are often run in a stochastic manner by using various plausible weather time series (often generated using stochastic weather generators) to represent climate time scale for a period of interest (e.g. 2000 ± 15 years), while GCM simulations typically provide a single time series for a given emission scenario. To inform agricultural policy-making, data on near- and medium-term decadal time scale is mostly requested, e.g. 2020 or 2030. Taking a sample of multiple years from these unique time series to represent time horizons in the near future is particularly problematic because selecting overlapping years may lead to spurious trends, creating artefacts in the results of the impact model simulations. This paper presents a database of consolidated and coherent future daily weather data for Europe that addresses these problems. Input data consist of daily temperature and precipitation from three dynamically downscaled and bias-corrected regional climate simulations of the IPCC A1B emission scenario created within the ENSEMBLES project. Solar radiation is estimated from temperature based on an auto-calibration procedure. Wind speed and relative air humidity are collected from historical series. From these variables, reference evapotranspiration and vapour pressure deficit are estimated ensuring consistency within daily records. The weather generator ClimGen is then used to create 30 synthetic years of all variables to characterize the time horizons of 2000, 2020 and 2030, which can readily be used for crop modelling studies.

  17. Salient in space, salient in time: Fixation probability predicts fixation duration during natural scene viewing.

    PubMed

    Einhäuser, Wolfgang; Nuthmann, Antje

    2016-09-01

    During natural scene viewing, humans typically attend and fixate selected locations for about 200-400 ms. Two variables characterize such "overt" attention: the probability of a location being fixated, and the fixation's duration. Both variables have been widely researched, but little is known about their relation. We use a two-step approach to investigate the relation between fixation probability and duration. In the first step, we use a large corpus of fixation data. We demonstrate that fixation probability (empirical salience) predicts fixation duration across different observers and tasks. Linear mixed-effects modeling shows that this relation is explained neither by joint dependencies on simple image features (luminance, contrast, edge density) nor by spatial biases (central bias). In the second step, we experimentally manipulate some of these features. We find that fixation probability from the corpus data still predicts fixation duration for this new set of experimental data. This holds even if stimuli are deprived of low-level image features, as long as higher level scene structure remains intact. Together, this shows a robust relation between fixation duration and probability, which does not depend on simple image features. Moreover, the study exemplifies the combination of empirical research on a large corpus of data with targeted experimental manipulations.

  18. Development of a fractional-step method for the unsteady incompressible Navier-Stokes equations in generalized coordinate systems

    NASA Technical Reports Server (NTRS)

    Rosenfeld, Moshe; Kwak, Dochan; Vinokur, Marcel

    1992-01-01

    A fractional step method is developed for solving the time-dependent three-dimensional incompressible Navier-Stokes equations in generalized coordinate systems. The primitive variable formulation uses the pressure, defined at the center of the computational cell, and the volume fluxes across the faces of the cells as the dependent variables, instead of the Cartesian components of the velocity. This choice is equivalent to using the contravariant velocity components in a staggered grid multiplied by the volume of the computational cell. The governing equations are discretized by finite volumes using a staggered mesh system. The solution of the continuity equation is decoupled from the momentum equations by a fractional step method which enforces mass conservation by solving a Poisson equation. This procedure, combined with the consistent approximations of the geometric quantities, is done to satisfy the discretized mass conservation equation to machine accuracy, as well as to gain the favorable convergence properties of the Poisson solver. The momentum equations are solved by an approximate factorization method, and a novel ZEBRA scheme with four-color ordering is devised for the efficient solution of the Poisson equation. Several two- and three-dimensional laminar test cases are computed and compared with other numerical and experimental results to validate the solution method. Good agreement is obtained in all cases.
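
    The decoupling of the continuity equation from the momentum equations via a pressure Poisson solve can be sketched with a minimal 2D projection step on a periodic Cartesian grid (a generic fractional-step illustration with an FFT Poisson solver, not the paper's generalized-coordinate, staggered finite-volume scheme):

```python
import numpy as np

def projection_step(u, v, dt, dx, nu=1e-3):
    """One fractional (projection) step for 2D incompressible flow on a
    periodic, collocated grid: an explicit advection-diffusion predictor,
    then a pressure Poisson solve that projects the velocity back onto the
    divergence-free space. Illustrative sketch only."""
    def ddx(f): return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * dx)
    def ddy(f): return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * dx)
    def lap(f): return (np.roll(f, -1, 0) + np.roll(f, 1, 0) +
                        np.roll(f, -1, 1) + np.roll(f, 1, 1) - 4 * f) / dx**2

    # 1) predictor: advance momentum without the pressure gradient
    us = u + dt * (-u * ddx(u) - v * ddy(u) + nu * lap(u))
    vs = v + dt * (-u * ddx(v) - v * ddy(v) + nu * lap(v))

    # 2) pressure Poisson equation div(grad p) = div(u*)/dt, solved by FFT
    #    with modified wavenumbers consistent with the central differences
    div = ddx(us) + ddy(vs)
    n = u.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k)
    k2 = (np.sin(kx * dx) / dx) ** 2 + (np.sin(ky * dx) / dx) ** 2
    zero = k2 == 0
    k2[zero] = 1.0
    p_hat = -np.fft.fft2(div / dt) / k2
    p_hat[zero] = 0.0
    p = np.real(np.fft.ifft2(p_hat))

    # 3) corrector: subtract the pressure gradient to enforce continuity
    return us - dt * ddx(p), vs - dt * ddy(p), p

# usage: one step from a Taylor-Green-like field on a 64x64 periodic box
n, L = 64, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x)
u, v = np.cos(X) * np.sin(Y), -np.sin(X) * np.cos(Y)
dx = L / n
u, v, p = projection_step(u, v, dt=1e-2, dx=dx)
div = (np.roll(u, -1, 1) - np.roll(u, 1, 1) + np.roll(v, -1, 0) - np.roll(v, 1, 0)) / (2 * dx)
print("max |divergence| after projection:", np.abs(div).max())
```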

  19. Comparison between variable and fixed dwell-time PN acquisition algorithms. [for synchronization in pseudonoise spread spectrum systems

    NASA Technical Reports Server (NTRS)

    Braun, W. R.

    1981-01-01

    Pseudo noise (PN) spread spectrum systems require a very accurate alignment between the PN code epochs at the transmitter and receiver. This synchronism is typically established through a two-step algorithm, including a coarse synchronization procedure and a fine synchronization procedure. A standard approach for the coarse synchronization is a sequential search over all code phases. The measurement of the power in the filtered signal is used to either accept or reject the code phase under test as the phase of the received PN code. This acquisition strategy, called a single dwell-time system, has been analyzed by Holmes and Chen (1977). A synopsis of the field of sequential analysis as it applies to the PN acquisition problem is provided. From this, the implementation of the variable dwell time algorithm as a sequential probability ratio test is developed. The performance of this algorithm is compared to the optimum detection algorithm and to the fixed dwell-time system.
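
    The variable dwell-time idea amounts to a sequential probability ratio test on the detector output for each candidate code phase; below is a generic Gaussian-mean SPRT sketch with Wald thresholds and illustrative statistics, not the specific receiver model analyzed in the paper.

```python
import numpy as np

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test of H1: mean=mu1 vs H0: mean=mu0
    on i.i.d. Gaussian observations. Returns ('H0'|'H1'|'undecided', samples used).
    Illustrative sketch of the variable dwell-time idea."""
    A = np.log((1 - beta) / alpha)      # accept-H1 threshold
    B = np.log(beta / (1 - alpha))      # accept-H0 threshold
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood ratio increment for one Gaussian sample
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= A:
            return "H1", n              # declare the code phase acquired
        if llr <= B:
            return "H0", n              # reject this phase, advance the search
    return "undecided", len(samples)

rng = np.random.default_rng(0)
# noise-only cell vs aligned cell: both decisions are reached quickly
print(sprt(rng.normal(0.0, 1.0, 500)))   # expected: ('H0', small n)
print(sprt(rng.normal(1.0, 1.0, 500)))   # expected: ('H1', small n)
```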

  20. Lanczos eigensolution method for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1991-01-01

    The theory, computational analysis, and applications are presented of a Lanczos algorithm on high performance computers. The computationally intensive steps of the algorithm are identified as the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplies. These computational steps are optimized to exploit the vector and parallel capabilities of high performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high speed civil transport. The sequential computational time for the panel problem executed on a CONVEX computer of 181.6 seconds was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem with 17,000 degrees of freedom was on the Cray Y-MP using an average of 3.63 processors.
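
    The Lanczos recurrence that these optimizations target can be sketched compactly; this dense NumPy illustration omits the sparse storage, factorization, and parallelization details discussed in the report.

```python
import numpy as np

def lanczos(A, m, rng=np.random.default_rng(0)):
    """m steps of the Lanczos recurrence for a symmetric matrix A.
    Returns the sorted eigenvalues of the m-by-m tridiagonal Ritz matrix,
    whose extreme values approximate extreme eigenvalues of A.
    Dense, no reorthogonalization -- an illustrative sketch only."""
    n = A.shape[0]
    q_prev = np.zeros(n)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    alpha, beta = [], [0.0]
    for _ in range(m):
        w = A @ q - beta[-1] * q_prev          # matrix-vector multiply step
        a = q @ w
        w -= a * q                             # orthogonalize against the current vector
        b = np.linalg.norm(w)
        alpha.append(a)
        beta.append(b)
        q_prev, q = q, w / b
    T = np.diag(alpha) + np.diag(beta[1:-1], 1) + np.diag(beta[1:-1], -1)
    return np.sort(np.linalg.eigvalsh(T))

# usage: extreme Ritz values converge to extreme eigenvalues of A
n = 200
A = np.random.default_rng(1).standard_normal((n, n))
A = (A + A.T) / 2
ritz = lanczos(A, m=40)
print("largest Ritz value:", ritz[-1], " true:", np.linalg.eigvalsh(A)[-1])
```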

  1. A Vertically Lagrangian Finite-Volume Dynamical Core for Global Models

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann

    2003-01-01

    A finite-volume dynamical core with a terrain-following Lagrangian control-volume discretization is described. The vertically Lagrangian discretization reduces the dimensionality of the physical problem from three to two, with the resulting dynamical system closely resembling that of the shallow water dynamical system. The 2D horizontal-to-Lagrangian-surface transport and dynamical processes are then discretized using the genuinely conservative flux-form semi-Lagrangian algorithm. Time marching is split-explicit, with a large time step for scalar transport and a small fractional time step for the Lagrangian dynamics, which permits the accurate propagation of fast waves. A mass, momentum, and total energy conserving algorithm is developed for mapping the state variables periodically from the floating Lagrangian control-volume to an Eulerian terrain-following coordinate for dealing with physical parameterizations and to prevent severe distortion of the Lagrangian surfaces. Deterministic baroclinic wave growth tests and long-term integrations using the Held-Suarez forcing are presented. The impact of the monotonicity constraint is discussed.

  2. Synthesis of walking sounds for alleviating gait disturbances in Parkinson's disease.

    PubMed

    Rodger, Matthew W M; Young, William R; Craig, Cathy M

    2014-05-01

    Managing gait disturbances in people with Parkinson's disease is a pressing challenge, as symptoms can contribute to injury and morbidity through an increased risk of falls. While drug-based interventions have limited efficacy in alleviating gait impairments, certain nonpharmacological methods, such as cueing, can also induce transient improvements to gait. The approach adopted here is to use computationally-generated sounds to help guide and improve walking actions. The first method described uses recordings of force data taken from the steps of a healthy adult which in turn were used to synthesize realistic gravel-footstep sounds that represented different spatio-temporal parameters of gait, such as step duration and step length. The second method described involves a novel method of sonifying, in real time, the swing phase of gait using real-time motion-capture data to control a sound synthesis engine. Both approaches explore how simple but rich auditory representations of action based events can be used by people with Parkinson's to guide and improve the quality of their walking, reducing the risk of falls and injury. Studies with Parkinson's disease patients are reported which show positive results for both techniques in reducing step length variability. Potential future directions for how these sound approaches can be used to manage gait disturbances in Parkinson's are also discussed.

  3. A high space-time resolution dataset linking meteorological forcing and hydro-sedimentary response in a mesoscale Mediterranean catchment (Auzon) of the Ardèche region, France

    NASA Astrophysics Data System (ADS)

    Nord, Guillaume; Boudevillain, Brice; Berne, Alexis; Branger, Flora; Braud, Isabelle; Dramais, Guillaume; Gérard, Simon; Le Coz, Jérôme; Legoût, Cédric; Molinié, Gilles; Van Baelen, Joel; Vandervaere, Jean-Pierre; Andrieu, Julien; Aubert, Coralie; Calianno, Martin; Delrieu, Guy; Grazioli, Jacopo; Hachani, Sahar; Horner, Ivan; Huza, Jessica; Le Boursicaud, Raphaël; Raupach, Timothy H.; Teuling, Adriaan J.; Uber, Magdalena; Vincendon, Béatrice; Wijbrans, Annette

    2017-03-01

    A comprehensive hydrometeorological dataset is presented spanning the period 1 January 2011-31 December 2014 to improve the understanding of the hydrological processes leading to flash floods and the relation between rainfall, runoff, erosion and sediment transport in a mesoscale catchment (Auzon, 116 km2) of the Mediterranean region. Badlands are present in the Auzon catchment and well connected to high-gradient channels of bedrock rivers which promotes the transfer of suspended solids downstream. The number of observed variables, the various sensors involved (both in situ and remote) and the space-time resolution ( ˜ km2, ˜ min) of this comprehensive dataset make it a unique contribution to research communities focused on hydrometeorology, surface hydrology and erosion. Given that rainfall is highly variable in space and time in this region, the observation system enables assessment of the hydrological response to rainfall fields. Indeed, (i) rainfall data are provided by rain gauges (both a research network of 21 rain gauges with a 5 min time step and an operational network of 10 rain gauges with a 5 min or 1 h time step), S-band Doppler dual-polarization radars (1 km2, 5 min resolution), disdrometers (16 sensors working at 30 s or 1 min time step) and Micro Rain Radars (5 sensors, 100 m height resolution). Additionally, during the special observation period (SOP-1) of the HyMeX (Hydrological Cycle in the Mediterranean Experiment) project, two X-band radars provided precipitation measurements at very fine spatial and temporal scales (1 ha, 5 min). (ii) Other meteorological data are taken from the operational surface weather observation stations of Météo-France (including 2 m air temperature, atmospheric pressure, 2 m relative humidity, 10 m wind speed and direction, global radiation) at the hourly time resolution (six stations in the region of interest). (iii) The monitoring of surface hydrology and suspended sediment is multi-scale and based on nested catchments. Three hydrometric stations estimate water discharge at a 2-10 min time resolution. Two of these stations also measure additional physico-chemical variables (turbidity, temperature, conductivity) and water samples are collected automatically during floods, allowing further geochemical characterization of water and suspended solids. Two experimental plots monitor overland flow and erosion at 1 min time resolution on a hillslope with vineyard. A network of 11 sensors installed in the intermittent hydrographic network continuously measures water level and water temperature in headwater subcatchments (from 0.17 to 116 km2) at a time resolution of 2-5 min. A network of soil moisture sensors enables the continuous measurement of soil volumetric water content at 20 min time resolution at 9 sites. Additionally, concomitant observations (soil moisture measurements and stream gauging) were performed during floods between 2012 and 2014. Finally, this dataset is considered appropriate for understanding the rainfall variability in time and space at fine scales, improving areal rainfall estimations and progressing in distributed hydrological and erosion modelling. DOI of the referenced dataset: doi:10.6096/MISTRALS-HyMeX.1438.

  4. Couples at risk for transmission of alcoholism: protective influences.

    PubMed

    Bennett, L A; Wolin, S J; Reiss, D; Teitelbaum, M A

    1987-03-01

    A two-generation, sociocultural model of the transmission of alcoholism in families was operationalized and tested. Sixty-eight married children of alcoholic parents and their spouses were interviewed regarding dinner-time and holiday ritual practices in their families of origin, and heritage and ritual practices in the couples' current generation. Coders rated transcribed interviews along 14 theory-derived predictor variables, nine for the family of origin and five for the current nuclear family. Multiple regression analysis was applied in a two-step hierarchical method, with the dependent variable being transmission of alcoholism to the couple. The 14 predictor variables contributed significantly (p < .01) to the couple's alcoholism outcome. A general theme of selective disengagement and reengagement for couples in families at risk for alcoholism recurrence is discussed.

  5. Adherence to Insulin Pump Behaviors in Young Children With Type 1 Diabetes Mellitus.

    PubMed

    Patton, Susana R; Driscoll, Kimberly A; Clements, Mark A

    2017-01-01

    Parents of young children are responsible for daily type 1 diabetes (T1DM) care, including insulin bolusing. For optimal insulin pump management, parents should enter a blood glucose result (SMBG) and a carbohydrate estimate (if food will be consumed) into the bolus advisor in their child's pump to assist in delivering the recommended insulin bolus. Previously, pump adherence behaviors were described in adolescents; we describe these behaviors in a sample of young children. Pump data covering 14-30 consecutive days were obtained for 116 children. We assessed adherence to essential pump behaviors (e.g., SMBG, carbohydrate entry, and insulin use) and adherence to the 3 Wizard/Bolus Advisor steps: SMBG, carbohydrate entry, and insulin bolus delivery. Parents completed SMBG ≥4 times on 99% of days, bolused insulin ≥3 times on 95% of days, and entered carbohydrates ≥3 times on 93% of days, but they corrected for hyperglycemia (≥250 mg/dl or 13.9 mmol/l) only 63% of the time. Parents completed Wizard/Bolus Advisor steps (SMBG, carbohydrate entry, insulin bolus) within 30 minutes for 43% of boluses. Inverse correlations were found between children's mean daily glucose and the percentage of days with ≥4 SMBG and ≥3 carbohydrate entries as well as the percentage of boluses where all Wizard/Bolus Advisor steps were completed. Parents of young children adhered to individual pump behaviors, but showed some variability in their adherence to Wizard/Bolus Advisor steps. Parents showed low adherence to recommendations to correct for hyperglycemia. As with adolescents, targeting pump behaviors in young children may have the potential to optimize glycemic control.

  6. Simulation and experimental design of a new advanced variable step size Incremental Conductance MPPT algorithm for PV systems.

    PubMed

    Loukriz, Abdelhamid; Haddadi, Mourad; Messalti, Sabir

    2016-05-01

    Improvement of the efficiency of photovoltaic systems based on new maximum power point tracking (MPPT) algorithms is the most promising solution due to its low cost and its easy implementation without equipment updating. Many MPPT methods with fixed step size have been developed. However, when atmospheric conditions change rapidly, the performance of conventional algorithms is reduced. In this paper, a new variable step size Incremental Conductance (IC) MPPT algorithm is proposed. Modeling and simulation of different operational conditions of the conventional IC and proposed methods are presented. The proposed method was developed and tested successfully on a photovoltaic system based on a Flyback converter and a control circuit using a dsPIC30F4011. Both simulation and experimental designs are provided in several aspects. A comparative study between the proposed variable step size and fixed step size IC MPPT methods under similar operating conditions is presented. The obtained results demonstrate the efficiency of the proposed MPPT algorithm in terms of speed of MPP tracking and accuracy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
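
    The core idea of scaling the perturbation with the distance from the maximum power point can be sketched as a generic variable step size incremental conductance update; the scaling factor N, duty-cycle limits, and the toy converter relation below are illustrative assumptions, not the authors' implementation.

```python
def ic_mppt_step(v, i, v_prev, i_prev, d, N=0.01, d_min=0.1, d_max=0.9):
    """One update of a variable step size Incremental Conductance MPPT.
    At the MPP, dP/dV = 0 (i.e. dI/dV = -I/V); the duty-cycle step is scaled
    by |dP/dV|, so it is large far from the MPP and shrinks near it.
    Illustrative sketch, not the published controller."""
    dv, di = v - v_prev, i - i_prev
    dp_dv = (i + v * di / dv) if dv != 0 else 0.0   # slope estimate of the P-V curve
    step = max(N * abs(dp_dv), 1e-3)                # adaptive step with a small floor
    if dv == 0:
        if di > 0:
            d -= step                               # irradiance rose: raise voltage
        elif di < 0:
            d += step
    else:
        if dp_dv > 0:        # left of the MPP: increase the operating voltage
            d -= step
        elif dp_dv < 0:      # right of the MPP: decrease the operating voltage
            d += step
    return min(max(d, d_min), d_max)

# usage: drive a toy P-V curve (P = Pmax - a*(V - Vmpp)^2) toward its peak,
# with a crude converter relation V = (1 - d) * Vdc (hypothetical plant)
a, vmpp, pmax, vdc = 0.5, 30.0, 200.0, 60.0
d, v_prev, i_prev = 0.7, 1e-3, 1e-3
for _ in range(50):
    v = (1 - d) * vdc
    p = max(pmax - a * (v - vmpp) ** 2, 0.0)
    i = p / v
    d = ic_mppt_step(v, i, v_prev, i_prev, d)
    v_prev, i_prev = v, i
print(f"settled near V = {(1 - d) * vdc:.1f} V (MPP at {vmpp:.1f} V)")
```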

  7. Sensitivity Equation Derivation for Transient Heat Transfer Problems

    NASA Technical Reports Server (NTRS)

    Hou, Gene; Chien, Ta-Cheng; Sheen, Jeenson

    2004-01-01

    The focus of the paper is on the derivation of sensitivity equations for transient heat transfer problems modeled by different discretization processes. Two examples will be used in this study to facilitate the discussion. The first example is a coupled, transient heat transfer problem that simulates the press molding process in fabrication of composite laminates. These state equations are discretized into standard h-version finite elements and solved by a multiple step, predictor-corrector scheme. The sensitivity analysis results based upon the direct and adjoint variable approaches will be presented. The second example is a nonlinear transient heat transfer problem solved by a p-version time-discontinuous Galerkin's Method. The resulting matrix equation of the state equation is simply in the form of Ax = b, representing a single step, time marching scheme. A direct differentiation approach will be used to compute the thermal sensitivities of a sample 2D problem.
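
    For a single-step system of the form Ax = b, direct differentiation with respect to a design parameter p gives A (dx/dp) = db/dp - (dA/dp) x, so the sensitivity solve reuses the factorization of A. A generic sketch with a hypothetical parameter-dependent matrix, verified against a finite-difference perturbation:

```python
import numpy as np

def direct_sensitivity(A, b, dA_dp, db_dp):
    """Direct differentiation of A(p) x = b(p):
        A dx/dp = db/dp - (dA/dp) x.
    Generic sketch of the approach, not the paper's thermal model."""
    x = np.linalg.solve(A, b)
    dx_dp = np.linalg.solve(A, db_dp - dA_dp @ x)
    return x, dx_dp

# usage: a tiny conduction-like system whose matrix scales linearly with a
# hypothetical parameter p (e.g. a conductivity)
p = 2.0
K = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
b = np.array([1.0, 0.0, 0.5])
A = p * K
x, dx_dp = direct_sensitivity(A, b, dA_dp=K, db_dp=np.zeros(3))
eps = 1e-6
x_fd = (np.linalg.solve((p + eps) * K, b) - x) / eps   # finite-difference check
print(np.allclose(dx_dp, x_fd, atol=1e-5))             # expected: True
```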

  8. Measure Guideline. Replacing Single-Speed Pool Pumps with Variable Speed Pumps for Energy Savings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, A.; Easley, S.

    2012-05-01

    This measure guideline evaluates potential energy savings by replacing traditional single-speed pool pumps with variable speed pool pumps, and provides a basic cost comparison between continued use of traditional pumps versus new pumps. A simple step-by-step process for inspecting the pool area and installing a new pool pump follows.

  9. Measure Guideline: Replacing Single-Speed Pool Pumps with Variable Speed Pumps for Energy Savings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, A.; Easley, S.

    2012-05-01

    The report evaluates potential energy savings by replacing traditional single-speed pool pumps with variable speed pool pumps, and provides a basic cost comparison between continued use of traditional pumps versus new pumps. A simple step-by-step process for inspecting the pool area and installing a new pool pump follows.

  10. Synthesis, characterisation and analytical application of Fe₃O₄@SiO₂@polyaminoquinoline magnetic nanocomposite for the extraction and pre-concentration of Cd(II) and Pb(II) in food samples.

    PubMed

    Manoochehri, Mahboobeh; Asgharinezhad, Ali Akbar; Shekari, Nafiseh

    2015-01-01

    This work describes a novel Fe₃O₄@SiO₂@polyaminoquinoline magnetic nanocomposite and its application in the pre-concentration of Cd(II) and Pb(II) ions. The parameters affecting the pre-concentration procedure were optimised by a Box-Behnken design through response surface methodology. Three variables (extraction time, magnetic sorbent amount and pH) were selected as the main factors affecting the sorption step, while four variables (type, volume and concentration of the eluent, and elution time) were selected as main factors in the optimisation study of the elution step. Following the sorption and elution of analytes, the ions were quantified by flame atomic absorption spectrometry (FAAS). The limits of detection were 0.1 and 0.7 ng ml⁻¹ for Cd(II) and Pb(II) ions, respectively. All the relative standard deviations were less than 7.6%. The sorption capacities of this new sorbent were 57 mg g⁻¹ for Cd(II) and 73 mg g⁻¹ for Pb(II). Ultimately, this nanocomposite was successfully applied to the rapid extraction of trace quantities of these heavy metal ions from seafood and agricultural samples, and satisfactory results were obtained.

  11. Development of a hardware-based AC microgrid for AC stability assessment

    NASA Astrophysics Data System (ADS)

    Swanson, Robert R.

    As more power electronic-based devices enable the development of high-bandwidth AC microgrids, the topic of microgrid power distribution stability has become of increased interest. Recently, researchers have proposed a relatively straightforward method to assess the stability of AC systems based upon the time-constants of sources, the net bus capacitance, and the rate limits of sources. In this research, a focus has been to develop a hardware test system to evaluate AC system stability. As a first step, a time domain model of a two converter microgrid was established in which a three phase inverter acts as a power source and an active rectifier serves as an adjustable constant power AC load. The constant power load can be utilized to create rapid power flow transients to the generating system. As a second step, the inverter and active rectifier were designed using a Smart Power Module IGBT for switching and an embedded microcontroller as a processor for algorithm implementation. The inverter and active rectifier were designed to operate simultaneously using a synchronization signal to ensure each respective local controller operates in a common reference frame. Finally, the physical system was created and initial testing performed to validate the hardware functionality as a variable amplitude and variable frequency AC system.

  12. Unified gas-kinetic scheme with multigrid convergence for rarefied flow study

    NASA Astrophysics Data System (ADS)

    Zhu, Yajun; Zhong, Chengwen; Xu, Kun

    2017-09-01

    The unified gas kinetic scheme (UGKS) is based on direct modeling of gas dynamics on the mesh size and time step scales. With the modeling of particle transport and collision in a time-dependent flux function in a finite volume framework, the UGKS can connect the flow physics smoothly from the kinetic particle transport to the hydrodynamic wave propagation. In comparison with the direct simulation Monte Carlo (DSMC) method, the current equation-based UGKS can implement implicit techniques in the updates of macroscopic conservative variables and microscopic distribution functions. The implicit UGKS significantly increases the convergence speed for steady flow computations, especially in the highly rarefied and near continuum regimes. In order to further improve the computational efficiency, for the first time, a geometric multigrid technique is introduced into the implicit UGKS, where the prediction step for the equilibrium state and the evolution step for the distribution function are both treated with multigrid acceleration. More specifically, a full approximate nonlinear system is employed in the prediction step for fast evaluation of the equilibrium state, and a correction linear equation is solved in the evolution step for the update of the gas distribution function. As a result, convergent speed has been greatly improved in all flow regimes from rarefied to the continuum ones. The multigrid implicit UGKS (MIUGKS) is used in the non-equilibrium flow study, which includes microflow, such as lid-driven cavity flow and the flow passing through a finite-length flat plate, and high speed one, such as supersonic flow over a square cylinder. The MIUGKS shows 5-9 times efficiency increase over the previous implicit scheme. For the low speed microflow, the efficiency of MIUGKS is several orders of magnitude higher than the DSMC. Even for the hypersonic flow at Mach number 5 and Knudsen number 0.1, the MIUGKS is still more than 100 times faster than the DSMC method for obtaining a convergent steady state solution.

  13. Outlier detection for particle image velocimetry data using a locally estimated noise variance

    NASA Astrophysics Data System (ADS)

    Lee, Yong; Yang, Hua; Yin, ZhouPing

    2017-03-01

    This work describes an adaptive, spatially variable threshold outlier detection algorithm for raw gridded particle image velocimetry data using a locally estimated noise variance. This method is an iterative procedure, and each iteration is composed of a reference vector field reconstruction step and an outlier detection step. We construct the reference vector field using a weighted adaptive smoothing method (Garcia 2010 Comput. Stat. Data Anal. 54 1167-78), and the weights are determined in the outlier detection step using a modified outlier detector (Ma et al 2014 IEEE Trans. Image Process. 23 1706-21). A hard decision on the final weights of the iteration can produce outlier labels of the field. The technical contribution is that, for the first time, the spatially variable threshold is embedded in the modified outlier detector with a locally estimated noise variance in an iterative framework. It turns out that a spatially variable threshold is preferable to a single, spatially constant threshold in complicated flows such as vortex flows or turbulent flows. Synthetic cellular vortical flows with simulated scattered or clustered outliers are adopted to evaluate the performance of our proposed method in comparison with popular validation approaches. This method also turns out to be beneficial in a real PIV measurement of turbulent flow. The experimental results demonstrated that the proposed method yields competitive performance in terms of outlier under-detection count and over-detection count. In addition, the outlier detection method is computationally efficient and adaptive, requires no user-defined parameters, and corresponding implementations are also provided in supplementary materials.
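
    The flavor of a spatially variable threshold can be illustrated by normalizing each vector's residual to the neighborhood median by a locally estimated noise scale (a simplified, non-iterative sketch in the spirit of normalized-median validation; the paper's weighted smoothing and iteration are omitted):

```python
import numpy as np

def local_outlier_map(u, win=1, k=3.0, eps=0.1):
    """Flag spurious vectors in one velocity component of a gridded PIV field.
    Each residual to the neighborhood median is compared against k times a
    *local* noise scale (median absolute deviation of the neighbors), i.e. a
    spatially variable threshold rather than one global constant. Sketch only."""
    ny, nx = u.shape
    flags = np.zeros_like(u, dtype=bool)
    for j in range(ny):
        for i in range(nx):
            j0, j1 = max(j - win, 0), min(j + win + 1, ny)
            i0, i1 = max(i - win, 0), min(i + win + 1, nx)
            nbr = np.delete(u[j0:j1, i0:i1].ravel(),
                            (j - j0) * (i1 - i0) + (i - i0))   # drop the centre point
            med = np.median(nbr)
            noise = np.median(np.abs(nbr - med)) + eps         # local noise estimate
            flags[j, i] = abs(u[j, i] - med) / noise > k
    return flags

# usage: a cellular vortical field with a few injected outliers
x = np.linspace(0, 2 * np.pi, 40)
X, Y = np.meshgrid(x, x)
u = np.sin(X) * np.cos(Y) + 0.05 * np.random.default_rng(0).standard_normal(X.shape)
u[10, 10] += 3.0
u[25, 5] -= 3.0
print("flagged points:", np.argwhere(local_outlier_map(u)))
```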

  14. DNA strand displacement system running logic programs.

    PubMed

    Rodríguez-Patón, Alfonso; Sainz de Murieta, Iñaki; Sosík, Petr

    2014-01-01

    The paper presents a DNA-based computing model which is enzyme-free and autonomous, not requiring human intervention during the computation. The model is able to perform iterated resolution steps with logical formulae in conjunctive normal form. The implementation is based on the technique of DNA strand displacement, with each clause encoded in a separate DNA molecule. Propositions are encoded by assigning a strand to each proposition p, and its complementary strand to the proposition ¬p; clauses are encoded by comprising different propositions in the same strand. The model allows logic programs composed of Horn clauses to be run by cascading resolution steps. The potential of the model is also demonstrated by its theoretical capability of solving SAT. The resulting SAT algorithm has a linear time complexity in the number of resolution steps, whereas its spatial complexity is exponential in the number of variables of the formula. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  15. Job Design and Ethnic Differences in Working Women’s Physical Activity

    PubMed Central

    Grzywacz, Joseph G.; Crain, A. Lauren; Martinson, Brian C.; Quandt, Sara A.

    2014-01-01

    Objective To document the role job control and schedule control play in shaping women’s physical activity, and how it delineates educational and racial variability in associations of job and social control with physical activity. Methods Prospective data were obtained from a community-based sample of working women (N = 302). Validated instruments measured job control and schedule control. Steps per day were assessed using New Lifestyles 800 activity monitors. Results Greater job control predicted more steps per day, whereas greater schedule control predicted fewer steps. Small indirect associations between ethnicity and physical activity were observed among women with a trade school degree or less but not for women with a college degree. Conclusions Low job control created barriers to physical activity among working women with a trade school degree or less. Greater schedule control predicted less physical activity, suggesting women do not use time “created” by schedule flexibility for personal health enhancement. PMID:24034681

  16. Job design and ethnic differences in working women's physical activity.

    PubMed

    Grzywacz, Joseph G; Crain, A Lauren; Martinson, Brian C; Quandt, Sara A

    2014-01-01

    To document the role job control and schedule control play in shaping women's physical activity, and how it delineates educational and racial variability in associations of job and social control with physical activity. Prospective data were obtained from a community-based sample of working women (N = 302). Validated instruments measured job control and schedule control. Steps per day were assessed using New Lifestyles 800 activity monitors. Greater job control predicted more steps per day, whereas greater schedule control predicted fewer steps. Small indirect associations between ethnicity and physical activity were observed among women with a trade school degree or less but not for women with a college degree. Low job control created barriers to physical activity among working women with a trade school degree or less. Greater schedule control predicted less physical activity, suggesting women do not use time "created" by schedule flexibility for personal health enhancement.

  17. Design of polynomial fuzzy observer-controller for nonlinear systems with state delay: sum of squares approach

    NASA Astrophysics Data System (ADS)

    Gassara, H.; El Hajjaji, A.; Chaabane, M.

    2017-07-01

    This paper investigates the problem of observer-based control for two classes of polynomial fuzzy systems with time-varying delay. The first class concerns a special case where the polynomial matrices do not depend on the estimated state variables. The second one is the general case where the polynomial matrices could depend on unmeasurable system states that will be estimated. For the last case, two design procedures are proposed. The first one gives the polynomial fuzzy controller and observer gains in two steps. In the second procedure, the designed gains are obtained using a single-step approach to overcome the drawback of a two-step procedure. The obtained conditions are presented in terms of sum of squares (SOS) which can be solved via the SOSTOOLS and a semi-definite program solver. Illustrative examples show the validity and applicability of the proposed results.

  18. Rapid Communication: Large exploitable genetic variability exists to shorten age at slaughter in cattle.

    PubMed

    Berry, D P; Cromie, A R; Judge, M M

    2017-10-01

    Apprehension among consumers is mounting on the efficiency by which cattle convert feedstuffs into human edible protein and energy as well as the consequential effects on the environment. Most (genetic) studies that attempt to address these issues have generally focused on efficiency metrics defined over a certain time period of an animal's life cycle, predominantly the period representing the linear phase of growth. The age at which an animal reaches the carcass specifications for slaughter, however, is also known to vary between breeds; less is known on the extent of the within-breed variability in age at slaughter. Therefore, the objective of the present study was to quantify the phenotypic and genetic variability in the age at which cattle reach a predefined carcass weight and subcutaneous fat cover. A novel trait, labeled here as the deviation in age at slaughter (DAGE), was represented by the unexplained variability from a statistical model, with age at slaughter as the dependent variable and with the fixed effects, among others, of carcass weight and fat score (scale 1 to 15 scored by video image analysis of the carcass at slaughter). Variance components for DAGE were estimated using either a 2-step approach (i.e., the DAGE phenotype derived first and then variance components estimated) or a 1-step approach (i.e., variance components for age at slaughter estimated directly in a mixed model that included the fixed effects of, among others, carcass weight and carcass fat score as well as a random direct additive genetic effect). The raw phenotypic SD in DAGE was 44.2 d. The genetic SD and heritability for DAGE estimated using the 1-step or 2-step models varied from 14.2 to 15.1 d and from 0.23 to 0.26 (SE 0.02), respectively. Assuming the (genetic) variability in the number of days from birth to reaching a desired carcass specifications can be exploited without any associated unfavorable repercussions, considerable potential exists to improve not only the (feed) efficiency of the animal and farm system but also the environmental footprint of the system. The beauty of the approach proposed, relative to strategies that select directly for the feed intake complex and enteric methane emissions, is that data on age at slaughter are generally readily available. Of course, faster gains may potentially be achieved if a dual objective of improving animal efficiency per day coupled with reduced days to slaughter was embarked on.
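
    The first stage of the 2-step approach, deriving the DAGE phenotype as the residual of a fixed-effects model for age at slaughter, can be sketched as follows; the column names and simulated records are hypothetical, and the genetic variance-component stage is only indicated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical records: age at slaughter (days), carcass weight (kg),
# carcass fat score (1-15); real data would include further fixed effects
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "carcass_wt": rng.normal(350, 40, n),
    "fat_score": rng.integers(1, 16, n),
})
df["age_days"] = (200 + 1.2 * df["carcass_wt"] - 4 * df["fat_score"]
                  + rng.normal(0, 44, n))       # residual SD chosen near the reported 44 d

# step 1 of the 2-step approach: DAGE = unexplained residual of age at
# slaughter after adjusting for carcass weight and fat score
model = smf.ols("age_days ~ carcass_wt + C(fat_score)", data=df).fit()
df["DAGE"] = model.resid
print("phenotypic SD of DAGE (days):", round(df["DAGE"].std(), 1))
# step 2 (not shown) would fit an animal model to DAGE with a pedigree or
# genomic relationship matrix to estimate the genetic variance.
```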

  19. Physical Activity Patterns in University Students: Do They Follow the Public Health Guidelines?

    PubMed Central

    Clemente, Filipe Manuel; Nikolaidis, Pantelis Theodoros; Martins, Fernando Manuel Lourenço; Mendes, Rui Sousa

    2016-01-01

    Physical activity is associated with health. The aim of this study was (a) to assess whether Portuguese university students meet the public health recommendations for physical activity and (b) to examine the effect of gender and day of the week on daily PA levels of university students. This observational cross-sectional study involved 126 (73 women) healthy Portuguese university students aged 18–23 years. Participants wore the ActiGraph wGT3X-BT accelerometer for seven consecutive days. Number of steps and time spent sedentary and in light, moderate and vigorous physical activity were recorded. The two-way MANOVA revealed that gender (p-value = 0.001; η² = 0.038; minimum effect) and day of the week (p-value = 0.001; η² = 0.174; minimum effect) had significant main effects on the physical activity variables. It was shown that during weekdays, male students walked more steps (65.14%), spent less time sedentary (6.77%) and in light activities (3.11%) and spent more time in moderate (136.67%) and vigorous activity (171.29%) in comparison with weekend days (p < 0.05). The descriptive analysis revealed that female students walked more steps (51.18%) and spent more time in moderate (125.70%) and vigorous (124.16%) activities during weekdays than on weekend days (p < 0.05). Female students did not achieve the recommended 10,000 steps/day on average during weekdays and weekend days. Only male students achieved this recommendation during weekdays. In summary, this study showed a high incidence of sedentary time in university students, mainly on weekend days. New strategies must be adopted to promote physical activity in this population, focusing on the change of sedentary behaviour. PMID:27022993

  20. Flow and residence times of dynamic river bank storage and sinuosity-driven hyporheic exchange

    USGS Publications Warehouse

    Gomez-Velez, J.D.; Wilson, J.L.; Cardenas, M.B.; Harvey, Judson

    2017-01-01

    Hydrologic exchange fluxes (HEFs) vary significantly along river corridors due to spatiotemporal changes in discharge and geomorphology. This variability results in the emergence of biogeochemical hot-spots and hot-moments that ultimately control solute and energy transport and ecosystem services from the local to the watershed scales. In this work, we use a reduced-order model to gain mechanistic understanding of river bank storage and sinuosity-driven hyporheic exchange induced by transient river discharge. This is the first time that a systematic analysis of both processes is presented and serves as an initial step to propose parsimonious, physics-based models for better predictions of water quality at the large watershed scale. The effects of channel sinuosity, alluvial valley slope, hydraulic conductivity, and river stage forcing intensity and duration are encapsulated in dimensionless variables that can be easily estimated or constrained. We find that the importance of perturbations in the hyporheic zone's flux, residence times, and geometry is mainly explained by two-dimensionless variables representing the ratio of the hydraulic time constant of the aquifer and the duration of the event (Γd) and the importance of the ambient groundwater flow ( ). Our model additionally shows that even systems with small sensitivity, resulting in small changes in the hyporheic zone extent, are characterized by highly variable exchange fluxes and residence times. These findings highlight the importance of including dynamic changes in hyporheic zones for typical HEF models such as the transient storage model.

  1. Age-related modifications in steering behaviour: effects of base-of-support constraints at the turn point.

    PubMed

    Paquette, Maxime R; Fuller, Jason R; Adkin, Allan L; Vallis, Lori Ann

    2008-09-01

    This study investigated the effects of altering the base of support (BOS) at the turn point on anticipatory locomotor adjustments during voluntary changes in travel direction in healthy young and older adults. Participants were required to walk at their preferred pace along a 3-m straight travel path and continue to walk straight ahead or turn 40 degrees to the left or right for an additional 2-m. The starting foot and occasionally the gait starting point were adjusted so that participants had to execute the turn using a cross-over step with a narrow BOS or a lead-out step with a wide BOS. Spatial and temporal gait variables, magnitudes of angular segmental movement, and timing and sequencing of body segment reorientation were similar despite executing the turn with a narrow or wide BOS. A narrow BOS during turning generated an increased step width in the step prior to the turn for both young and older adults. Age-related changes when turning included reduced step velocity and step length for older compared to young adults. Age-related changes in the timing and sequencing of body segment reorientation prior to the turn point were also observed. A reduction in walking speed and an increase in step width just prior to the turn, combined with a delay in motion of the center of mass suggests that older adults used a more cautious combined foot placement and hip strategy to execute changes in travel direction compared to young adults. The results of this study provide insight into mobility constraints during a common locomotor task in older adults.

  2. Association between the gait pattern characteristics of older people and their two-step test scores.

    PubMed

    Kobayashi, Yoshiyuki; Ogata, Toru

    2018-04-27

    The Two-Step test is one of three official tests authorized by the Japanese Orthopedic Association to evaluate the risk of locomotive syndrome (a condition of reduced mobility caused by an impairment of the locomotive organs). It has been reported that the Two-Step test score has a good correlation with one's walking ability; however, its association with the gait pattern of older people during normal walking is still unknown. Therefore, this study aims to clarify the associations between the gait patterns of older people observed during normal walking and their Two-Step test scores. We analyzed the whole waveforms obtained from the lower-extremity joint angles and joint moments of 26 older people in various stages of locomotive syndrome using principal component analysis (PCA). The PCA was conducted using a 260 × 2424 input matrix constructed from the participants' time-normalized pelvic and right-lower-limb-joint angles along three axes (ten trials of 26 participants, 101 time points, 4 angles, 3 axes, and 2 variable types per trial). The Pearson product-moment correlation coefficient between the scores of the principal component vectors (PCVs) and the scores of the Two-Step test revealed that only one PCV (PCV 2) among the 61 obtained relevant PCVs is significantly related to the score of the Two-Step test. We therefore concluded that the joint angles and joint moments related to PCV 2-ankle plantar-flexion, ankle plantar-flexor moments during the late stance phase, ranges of motion and moments on the hip, knee, and ankle joints in the sagittal plane during the entire stance phase-are the motions associated with the Two-Step test.
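
    The waveform PCA step, stacking time-normalized joint-angle and joint-moment curves into one wide trial-by-waveform matrix and extracting principal component vectors, can be sketched generically; the array below is a random placeholder shaped like the study's 260 × 2424 matrix, not real gait data.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative input: rows = trials (26 participants x 10 trials), columns =
# concatenated time-normalized waveforms (101 points x 4 joints x 3 axes x
# {angle, moment}). A random placeholder stands in for real gait data.
rng = np.random.default_rng(0)
n_trials, n_cols = 260, 101 * 4 * 3 * 2
waveforms = rng.standard_normal((n_trials, n_cols))

# PCA on the trial-by-waveform matrix: each principal component vector (PCV)
# is itself a full gait pattern, and each trial receives a score on it.
pca = PCA(n_components=10)
scores = pca.fit_transform(waveforms)            # shape (n_trials, 10)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))

# Per-participant mean scores could then be correlated (Pearson r) with the
# Two-Step test score, as done for PCV 2 in the study.
participant = np.repeat(np.arange(26), 10)       # assumed participant-major ordering
mean_scores = np.array([scores[participant == p].mean(axis=0) for p in range(26)])
print("per-participant score matrix:", mean_scores.shape)
```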

  3. Variability of microchip capillary electrophoresis with conductivity detection.

    PubMed

    Tantra, Ratna; Robinson, Kenneth; Sikora, Aneta

    2014-02-01

    Microfluidic CE with conductivity detection platforms could have an impact on the future development of smaller, faster and portable devices. However, for the purpose of reliable identification and quantification, there is a need to understand the degree of irreproducibility associated with the analytical technique. In this study, a protocol was developed to remove baseline drift problems sometimes observed in such devices. The protocol, which consisted of pre-conditioning steps prior to analysis, was used to further assess measurement variability from 24 individual microchips fabricated from six separate batches of glass substrate. Results show acceptable RSD percentage for retention time measurements but large variability in their corresponding peak areas (with some microchips having variability of ∼50%). Sources of variability were not related to substrate batch but possibly to a number of factors such as applied voltage fluctuations or variations in microchannel quality, for example surface roughness that will subsequently affect microchannel dimensions. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Identification of the period of stability in a balance test after stepping up using a simplified cumulative sum.

    PubMed

    Safieddine, Doha; Chkeir, Aly; Herlem, Cyrille; Bera, Delphine; Collart, Michèle; Novella, Jean-Luc; Dramé, Moustapha; Hewson, David J; Duchêne, Jacques

    2017-11-01

    Falls are a major cause of death in older people. One method used to predict falls is analysis of Centre of Pressure (CoP) displacement, which provides a measure of balance quality. The Balance Quality Tester (BQT) is a device based on a commercial bathroom scale that calculates instantaneous values of vertical ground reaction force (Fz) as well as the CoP in both anteroposterior (AP) and mediolateral (ML) directions. The entire testing process needs to take no longer than 12 s to ensure subject compliance, making it vital that calculations related to balance are only calculated for the period when the subject is static. In the present study, a method is presented to detect the stabilization period after a subject has stepped onto the BQT. Four different phases of the test are identified (stepping-on, stabilization, balancing, stepping-off), ensuring that subjects are static when parameters from the balancing phase are calculated. The method, based on a simplified cumulative sum (CUSUM) algorithm, could detect the change between unstable and stable stance. The time taken to stabilize significantly affected the static balance variables of surface area and trajectory velocity, and was also related to Timed-up-and-Go performance. Such a finding suggests that the time to stabilize could be a worthwhile parameter to explore as a potential indicator of balance problems and fall risk in older people. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
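
    A simplified CUSUM change detector applied to the vertical force signal illustrates how the stabilization point might be located (generic drift and threshold parameters on a synthetic signal; not the BQT's exact algorithm):

```python
import numpy as np

def stabilization_index(fz, drift=1.0, threshold=30.0):
    """Estimate the sample at which a force signal becomes stable.
    The signal is differenced (quiet stance ~ 0), robustly standardized,
    reversed, and a one-sided CUSUM detects where 'activity' begins in the
    reversed trace -- which is the stabilization point going forward.
    A simplified sketch, not the BQT's exact algorithm."""
    d = np.abs(np.diff(fz))
    d = (d - np.median(d)) / (np.median(np.abs(d - np.median(d))) + 1e-9)
    s, n = 0.0, len(d)
    for k, v in enumerate(d[::-1]):
        s = max(0.0, s + v - drift)      # decays to zero while the signal is quiet
        if s > threshold:
            return n - k                 # index in the forward signal
    return 0

# usage: synthetic Fz -- a stepping-on transient followed by quiet stance
fs = 100                                 # assumed sampling rate (Hz)
t = np.arange(0, 12, 1 / fs)
rng = np.random.default_rng(0)
fz = np.where(t < 2.5,
              700 * t / 2.5 + 80 * np.sin(2 * np.pi * 3 * t),
              700.0) + 2 * rng.standard_normal(t.size)
k = stabilization_index(fz)
print(f"stabilization detected at t = {k / fs:.2f} s")   # expect ~2.5 s
```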

  5. Reliability of fitness tests using methods and time periods common in sport and occupational management.

    PubMed

    Burnstein, Bryan D; Steele, Russell J; Shrier, Ian

    2011-01-01

    Fitness testing is used frequently in many areas of physical activity, but the reliability of these measurements under real-world, practical conditions is unknown. To evaluate the reliability of specific fitness tests using the methods and time periods used in the context of real-world sport and occupational management. Cohort study. Eighteen different Cirque du Soleil shows. Cirque du Soleil physical performers who completed 4 consecutive tests (6-month intervals) and were free of injury or illness at each session (n = 238 of 701 physical performers). Performers completed 6 fitness tests on each assessment date: dynamic balance, Harvard step test, handgrip, vertical jump, pull-ups, and 60-second jump test. We calculated the intraclass coefficient (ICC) and limits of agreement between baseline and each time point and the ICC over all 4 time points combined. Reliability was acceptable (ICC > 0.6) over an 18-month time period for all pairwise comparisons and all time points together for the handgrip, vertical jump, and pull-up assessments. The Harvard step test and 60-second jump test had poor reliability (ICC < 0.6) between baseline and other time points. When we excluded the baseline data and calculated the ICC for 6-month, 12-month, and 18-month time points, both the Harvard step test and 60-second jump test demonstrated acceptable reliability. Dynamic balance was unreliable in all contexts. Limit-of-agreement analysis demonstrated considerable intraindividual variability for some tests and a learning effect by administrators on others. Five of the 6 tests in this battery had acceptable reliability over an 18-month time frame, but the values for certain individuals may vary considerably from time to time for some tests. Specific tests may require a learning period for administrators.

  6. Variable speed wind turbine control by discrete-time sliding mode approach.

    PubMed

    Torchani, Borhen; Sellami, Anis; Garcia, Germain

    2016-05-01

    The aim of this paper is to propose a new design for variable speed wind turbine control using a discrete-time sliding mode approach. The methodology is designed for linear saturated systems, with the saturation constraint imposed on the input vector. To this end, a backstepping design procedure is followed to construct a suitable sliding manifold that guarantees the attainment of the stabilization control objective. The drivetrain mechanisms are treated under commonly adopted assumptions for the damping, shaft stiffness, and inertia effect of the gear. The objectives are to synthesize robust controllers that maximize the energy extracted from the wind while reducing mechanical loads, combining rotor speed tracking with an electromagnetic torque. Simulation results of the proposed scheme are presented. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finn, John M., E-mail: finn@lanl.gov

    2015-03-15

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012)], appears to work very well.
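
    The implicit midpoint rule for a flow x' = v(x) advances via x_{n+1} = x_n + h v((x_n + x_{n+1})/2); below is a minimal fixed-point implementation on a simple divergence-free field (a generic illustration, not the split-step or non-uniform time step schemes studied in the paper).

```python
import numpy as np

def implicit_midpoint_step(x, h, v, tol=1e-12, max_iter=50):
    """One implicit midpoint step for dx/dt = v(x):
        x_new = x + h * v((x + x_new) / 2),
    solved by fixed-point iteration. The scheme is self-adjoint (symmetric),
    second-order accurate, and reversible for reversible fields."""
    x_new = x + h * v(x)                 # explicit Euler predictor
    for _ in range(max_iter):
        x_mid = 0.5 * (x + x_new)
        x_next = x + h * v(x_mid)
        if np.linalg.norm(x_next - x_new) < tol:
            return x_next
        x_new = x_next
    return x_new

# usage: a divergence-free 2D rotation field; IM conserves the radius
v = lambda x: np.array([-x[1], x[0]])
x = np.array([1.0, 0.0])
h = 0.1
for _ in range(int(2 * np.pi / h) * 10):      # roughly 10 revolutions
    x = implicit_midpoint_step(x, h, v)
print("radius after ~10 revolutions:", np.linalg.norm(x))   # stays ~1.0
```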

  8. A new and inexpensive non-bit-for-bit solution reproducibility test based on time step convergence (TSC1.0)

    DOE PAGES

    Wan, Hui; Zhang, Kai; Rasch, Philip J.; ...

    2017-02-03

    A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided using the Community Atmosphere Model (CAM) version 5.3 that the proposed procedure can be used to distinguish rounding-level solution changes from impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive since it does not detect issues associated with diagnostic calculations that do not feedback to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.
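
    A schematic of the concept behind such a time-step-convergence test (not the CAM TSC1.0 implementation): an ensemble of short control runs defines the solution spread attributable to time-step sensitivity, and a candidate run fails when its difference from the reference exceeds that envelope. Data shapes, the margin, and the synthetic "state vectors" are invented.

```python
import numpy as np

def rmsd(a, b):
    """Root-mean-square difference between two model-state arrays."""
    return np.sqrt(np.mean((a - b) ** 2))

def tsc_like_test(reference, control_ensemble, candidate, margin=1.0):
    """Fail if the candidate's RMS difference from the reference exceeds the
    spread produced by the known time-step sensitivity of the control ensemble."""
    baseline = np.array([rmsd(run, reference) for run in control_ensemble])
    threshold = baseline.mean() + margin * baseline.std()
    return rmsd(candidate, reference) <= threshold, threshold

# Invented example: 1-D "state vectors" standing in for short simulations.
rng = np.random.default_rng(1)
reference = rng.standard_normal(1000)
controls = [reference + 1e-3 * rng.standard_normal(1000) for _ in range(8)]
good = reference + 1e-3 * rng.standard_normal(1000)     # rounding-level change
bad = reference + 5e-2 * rng.standard_normal(1000)      # climate-changing change

for name, run in [("good", good), ("bad", bad)]:
    passed, thr = tsc_like_test(reference, controls, run)
    print(f"{name}: {'PASS' if passed else 'FAIL'} (threshold {thr:.2e})")
```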

  9. The role of cool-flame chemistry in quasi-steady combustion and extinction of n-heptane droplets

    NASA Astrophysics Data System (ADS)

    Paczko, Guenter; Peters, Norbert; Seshadri, Kalyanasundaram; Williams, Forman Arthur

    2014-09-01

    Experiments on the combustion of large n-heptane droplets, performed by the National Aeronautics and Space Administration in the International Space Station, revealed a second stage of continued quasi-steady burning, supported by low-temperature chemistry, that follows radiative extinction of the first stage of burning, which is supported by normal hot-flame chemistry. The second stage of combustion experienced diffusive extinction, after which a large vapour cloud was observed to form around the droplet. In the present work, a 770-step reduced chemical-kinetic mechanism and a new 62-step skeletal chemical-kinetic mechanism, developed as an extension of an earlier 56-step mechanism, are employed to calculate the droplet burning rates, flame structures, and extinction diameters for this cool-flame regime. The calculations are performed for quasi-steady burning with the mixture fraction as the independent variable, which is then related to the physical variables of droplet combustion. The predictions with the new mechanism, which agree well with measured autoignition times, reveal that, in decreasing order of abundance, H2O, CO, H2O2, CH2O, and C2H4 are the principal reaction products during the low-temperature stage and that, during this stage, there is substantial leakage of n-heptane and O2 through the flame, and very little production of CO2 with no soot in the mechanism. The fuel leakage has been suggested to be the source of the observed vapour cloud that forms after flame extinction. While the new skeletal chemical-kinetic mechanism facilitates understanding of the chemical kinetics and predicts ignition times well, its predicted droplet diameters at extinction are appreciably larger than observed experimentally, but predictions with the 770-step reduced chemical-kinetic mechanism are in reasonably good agreement with experiment. The computations show how the key ketohydroperoxide compounds control the diffusion-flame structure and its extinction.

  10. Characterizing a Century of Climate and Hydrological Variability of a Mediterranean and Mountainous Watersheds: the Durance River Case-Study

    NASA Astrophysics Data System (ADS)

    Mathevet, T.; Kuentz, A.; Gailhard, J.; Andreassian, V.

    2013-12-01

    Improving the understanding of the hydrological variability of mountain watersheds is a major scientific issue for both researchers and water resources managers such as Electricite de France (Energy and Hydropower Company). The past and current context of climate variability enhances the interest in this topic, since multi-purpose water resources management is highly sensitive to this variability. The Durance River watershed (14000 km2), situated in the French Alps, is a good example of the complexity of this issue. It is characterized by a variety of hydrological processes (from snowy to Mediterranean regimes) and a wide range of anthropogenic influences (hydropower, irrigation, flood control, tourism and water supply), mixing potential causes of changes in its hydrological regimes. As water-related stakes are numerous in this watershed, improving knowledge of the hydrological variability of the Durance River appears to be essential. In this presentation, we focus on a methodology we developed to build long-term historical hydrometeorological time series, based on atmospheric reanalysis (20CR: 20th Century Reanalysis) and historical local observations. This methodology allowed us to generate precipitation, air temperature and streamflow time series at a daily time step for a sample of 22 watersheds, for the 1883-2010 period. These long-term streamflow reconstructions have been validated against ten long historical series of daily streamflows, beginning in the early 20th century, that archival searches brought to light. The reconstructions have rather good statistical properties, with good correlation (greater than 0.8) and limited mean and variance bias (less than 5%). These long-term hydrometeorological time series then allowed us to characterize the past variability in terms of available water resources, droughts and hydrological regime. These analyses help water resources managers to better appreciate the range of hydrological variability, which is usually greatly underestimated with the classical available time series (less than 50 years).

  11. Time trends in leisure time physical activity and physical fitness in the elderly: five-year follow-up of the Spanish National Health Survey (2006-2011).

    PubMed

    Casado-Pérez, Carmen; Hernández-Barrera, Valentín; Jiménez-García, Rodrigo; Fernández-de-las-Peñas, Cesar; Carrasco-Garrido, Pilar; López-de-Andrés, Ana; Jimenez-Trujillo, Ma Isabel; Palacios-Ceña, Domingo

    2015-04-01

    To estimate the trends in the practice of leisure-time physical activity, walking up 10 steps, and walking for 1 h, during the years 2006-2011, in elderly Spanish people. Observational study, retrospective analysis of Spanish National Health Surveys. We analysed data collected from the Spanish National Health Surveys conducted in 2006 (n=30,072) and 2011 (n=21,007), through self-reported information. The number of subjects aged ≥65 years included in the current study was n=5756 in 2006 (19.14%) and n=4617 in 2011 (21.97%). We included responses from adults aged 65 years and older. The main variables included leisure-time physical activity, walking up 10 steps, and walking for 1 h. We analysed socio-demographic characteristics, individuals' self-rated health status, lifestyle habits, co-morbid conditions and disability using multivariable logistic regression models. The total number of subjects was 10,373 (6076 women, 4297 men). The probability of self-reported capacity was significantly higher in 2006 than in 2011 for leisure-time physical activity, walking up 10 steps, and walking for 1 h for both sexes (women: OR 2.20, 95% CI 1.91-5.55; OR 2.50, 95% CI 1.99-3.14; OR 1.04, 95% CI 1.01-1.07; men: OR 2.20, 95% CI 1.91-2.55; OR 2.01, 95% CI 1.40-2.89; OR 1.05, 95% CI 1.0-1.1), respectively. Both sexes showed a significantly lower probability of performing leisure-time physical activity, walking up 10 steps, and walking for 1 h. Additionally, those over 80 years of age, on average, showed a poor or very poor perception of their health and presented with some type of disability. A decrease in the proportion of respondents who self-reported undertaking leisure-time physical activity, walking up 10 steps, and walking for 1 h was observed in the Spanish population over 65 years between 2006 and 2011. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  12. A high-order positivity-preserving single-stage single-step method for the ideal magnetohydrodynamic equations

    NASA Astrophysics Data System (ADS)

    Christlieb, Andrew J.; Feng, Xiao; Seal, David C.; Tang, Qi

    2016-07-01

    We propose a high-order finite difference weighted ENO (WENO) method for the ideal magnetohydrodynamics (MHD) equations. The proposed method is single-stage (i.e., it has no internal stages to store), single-step (i.e., it has no time history that needs to be stored), maintains a discrete divergence-free condition on the magnetic field, and has the capacity to preserve the positivity of the density and pressure. To accomplish this, we use a Taylor discretization of the Picard integral formulation (PIF) of the finite difference WENO method proposed in Christlieb et al. (2015) [23], where the focus is on a high-order discretization of the fluxes (as opposed to the conserved variables). We use the version where fluxes are expanded to third-order accuracy in time, and for the fluid variables space is discretized using the classical fifth-order finite difference WENO discretization. We use constrained transport in order to obtain divergence-free magnetic fields, which means that we simultaneously evolve the magnetohydrodynamic (that has an evolution equation for the magnetic field) and magnetic potential equations alongside each other, and set the magnetic field to be the (discrete) curl of the magnetic potential after each time step. In this work, we compute these derivatives to fourth-order accuracy. In order to retain a single-stage, single-step method, we develop a novel Lax-Wendroff discretization for the evolution of the magnetic potential, where we start with technology used for Hamilton-Jacobi equations in order to construct a non-oscillatory magnetic field. The end result is an algorithm that is similar to our previous work Christlieb et al. (2014) [8], but this time the time stepping is replaced through a Taylor method with the addition of a positivity-preserving limiter. Finally, positivity preservation is realized by introducing a parameterized flux limiter that considers a linear combination of high and low-order numerical fluxes. The choice of the free parameter is then given in such a way that the fluxes are limited towards the low-order solver until positivity is attained. Given the lack of additional degrees of freedom in the system, this positivity limiter lacks energy conservation where the limiter turns on. However, this ingredient can be dropped for problems where the pressure does not become negative. We present two and three dimensional numerical results for several standard test problems including a smooth Alfvén wave (to verify formal order of accuracy), shock tube problems (to test the shock-capturing ability of the scheme), Orszag-Tang, and cloud shock interactions. These results assert the robustness and verify the high-order of accuracy of the proposed scheme.
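
    The parameterized limiter described above blends high- and low-order fluxes; the sketch below is a much-simplified scalar cartoon of that idea, using a single global blending parameter theta chosen so the updated cell averages stay positive. The fluxes, grid, and data are invented and do not reproduce the WENO/constrained-transport scheme of the paper.

```python
import numpy as np

def limited_update(u, flux_high, flux_low, lam, eps=1e-12):
    """Blend high- and low-order interface fluxes, F = theta*F_high + (1-theta)*F_low,
    choosing the largest theta in [0, 1] that keeps the updated cell averages >= eps.
    A simplified, global-theta cartoon of a parameterized positivity limiter."""
    div_high = flux_high[1:] - flux_high[:-1]      # flux differences per cell
    div_low = flux_low[1:] - flux_low[:-1]
    u_high = u - lam * div_high                    # candidate high-order update
    u_low = u - lam * div_low                      # low-order update, assumed positive
    theta = np.ones_like(u)
    need_limit = u_high < eps
    denom = np.maximum(u_low - u_high, eps)
    theta[need_limit] = (u_low[need_limit] - eps) / denom[need_limit]
    th = np.clip(theta.min(), 0.0, 1.0)            # limit toward the low-order flux
    return u_low + th * (u_high - u_low), th

# Invented example: upwind (low-order) vs. central (high-order) fluxes for linear advection.
rng = np.random.default_rng(0)
u = np.abs(rng.standard_normal(50)) + 1e-3         # positive cell averages
a, lam = 1.0, 0.4                                  # advection speed, dt/dx
u_ext = np.concatenate(([u[-1]], u, [u[0]]))       # periodic ghost cells
flux_low = a * u_ext[:-1]                          # upwind flux at each interface
flux_high = 0.5 * a * (u_ext[:-1] + u_ext[1:])     # central flux at each interface
u_new, th = limited_update(u, flux_high, flux_low, lam)
print(f"theta used = {th:.3f}, min(u_new) = {u_new.min():.3e}")
```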

  13. GALA: group analysis leads to accuracy, a novel approach for solving the inverse problem in exploratory analysis of group MEG recordings

    PubMed Central

    Kozunov, Vladimir V.; Ossadtchi, Alexei

    2015-01-01

    Although MEG/EEG signals are highly variable between subjects, they allow characterizing systematic changes of cortical activity in both space and time. Traditionally a two-step procedure is used. The first step is a transition from sensor to source space by means of solving an ill-posed inverse problem for each subject individually. The second is mapping of cortical regions consistently active across subjects. In practice the first step often leads to a set of active cortical regions whose location and timecourses display a great amount of interindividual variability, hindering the subsequent group analysis. We propose Group Analysis Leads to Accuracy (GALA)—a solution that combines the two steps into one. GALA takes advantage of individual variations of cortical geometry and sensor locations. It exploits the ensuing variability in the electromagnetic forward model as a source of additional information. We assume that for different subjects functionally identical cortical regions are located in close proximity and partially overlap, and their timecourses are correlated. This relaxed similarity constraint on the inverse solution can be expressed within a probabilistic framework, allowing for an iterative algorithm solving the inverse problem jointly for all subjects. A systematic simulation study showed that GALA, as compared with the standard min-norm approach, improves accuracy of true activity recovery, when accuracy is assessed both in terms of spatial proximity of the estimated and true activations and correct specification of the spatial extent of the activated regions. This improvement, obtained without using any noise normalization techniques for either solution, was preserved for a wide range of between-subject variations in both spatial and temporal features of regional activation. The corresponding activation timecourses exhibit significantly higher similarity across subjects. Similar results were obtained for a real MEG dataset of face-specific evoked responses. PMID:25954141

  14. Associations between the Objectively Measured Office Environment and Workplace Step Count and Sitting Time: Cross-Sectional Analyses from the Active Buildings Study.

    PubMed

    Fisher, Abi; Ucci, Marcella; Smith, Lee; Sawyer, Alexia; Spinney, Richard; Konstantatou, Marina; Marmot, Alexi

    2018-06-01

    Office-based workers spend a large proportion of the day sitting and tend to have low overall activity levels. Despite some evidence that features of the external physical environment are associated with physical activity, little is known about the influence of the spatial layout of the internal environment on movement, and the majority of data use self-report. This study investigated associations between objectively-measured sitting time and activity levels and the spatial layout of office floors in a sample of UK office-based workers. Participants wore activPAL accelerometers for at least three consecutive workdays. Primary outcomes were steps and proportion of sitting time per working hour. Primary exposures were office spatial layout, which was objectively-measured by deriving key spatial variables: 'distance from each workstation to key office destinations', 'distance from participant's workstation to all other workstations', 'visibility of co-workers', and workstation 'closeness'. 131 participants from 10 organisations were included. Fifty-four per cent were female, 81% were white, and the majority had a managerial or professional role (72%) in their organisation. The average proportion of the working hour spent sitting was 0.7 (SD 0.15); participants took on average 444 (SD 210) steps per working hour. Models adjusted for confounders revealed significant negative associations between step count and distance from each workstation to all other office destinations (e.g., B = -4.66, 95% CI: -8.12, -1.12, p < 0.01) and nearest office destinations (e.g., B = -6.45, 95% CI: -11.88, -0.41, p < 0.05) and visibility of workstations when standing (B = -2.35, 95% CI: -3.53, -1.18, p < 0.001). The magnitude of these associations was small. There were no associations between spatial variables and sitting time per work hour. Contrary to our hypothesis, the further participants were from office destinations the less they walked, suggesting that changing the relative distance between workstations and other destinations on the same floor may not be the most fruitful target for promoting walking and reducing sitting in the workplace. However, reported effect sizes were very small and based on cross-sectional analyses. The approaches developed in this study could be applied to other office buildings to establish whether a specific office typology may yield more promising results.

  15. Modeling multivariate time series on manifolds with skew radial basis functions.

    PubMed

    Jamshidi, Arta A; Kirby, Michael J

    2011-01-01

    We present an approach for constructing nonlinear empirical mappings from high-dimensional domains to multivariate ranges. We employ radial basis functions and skew radial basis functions for constructing a model using data that are potentially scattered or sparse. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by a statistical hypothesis test that accounts for correlation in the multivariate range variables. The test is applied on training and validation data and reveals nonstatistical or geometric structure when it fails. At each step, the added function is fit to data contained in a spatiotemporally defined local region to determine the parameters--in particular, the scale of the local model. The scale of the function is determined by the zero crossings of the autocorrelation function of the residuals. The model parameters and the number of basis functions are determined automatically from the given data, and there is no need to initialize any ad hoc parameters save for the selection of the skew radial basis functions. Compactly supported skew radial basis functions are employed to improve model accuracy, order, and convergence properties. The extension of the algorithm to higher-dimensional ranges produces reduced-order models by exploiting the existence of correlation in the range variable data. Structure is tested not just in a single time series but between all pairs of time series. We illustrate the new methodologies using several illustrative problems, including modeling data on manifolds and the prediction of chaotic time series.
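
    A stripped-down illustration of growing an RBF model one basis function per step: each new (plain Gaussian, not skew) function is centered at the worst residual and all weights are refit by least squares. The scale rule and stopping criterion here are ad hoc stand-ins for the statistical tests and autocorrelation-based scale selection described above.

```python
import numpy as np

def gaussian_rbf(x, center, scale):
    return np.exp(-np.linalg.norm(x - center, axis=-1) ** 2 / (2 * scale ** 2))

def fit_rbf_model(X, y, max_funcs=10, tol=1e-3):
    """Greedy RBF regression: add one basis function per iteration at the
    location of the worst residual, then refit all weights by least squares."""
    centers, scales = [], []
    Phi = np.ones((len(X), 1))                     # start with a constant term
    weights = np.linalg.lstsq(Phi, y, rcond=None)[0]
    for _ in range(max_funcs):
        residual = y - Phi @ weights
        if np.sqrt(np.mean(residual ** 2)) < tol:
            break
        idx = int(np.argmax(np.abs(residual)))     # worst-fit training point
        centers.append(X[idx])
        scales.append(0.5 * np.std(X))             # ad hoc local scale
        phi_new = gaussian_rbf(X, centers[-1], scales[-1])[:, None]
        Phi = np.hstack([Phi, phi_new])
        weights = np.linalg.lstsq(Phi, y, rcond=None)[0]
    return centers, scales, weights, Phi

# Invented 1-D example: noisy samples of a nonlinear map.
rng = np.random.default_rng(2)
X = np.sort(rng.uniform(-3, 3, 200))[:, None]
y = np.sin(2 * X[:, 0]) + 0.05 * rng.standard_normal(200)
centers, scales, w, Phi = fit_rbf_model(X, y, max_funcs=15)
print(f"model uses {len(centers)} basis functions, "
      f"train RMSE = {np.sqrt(np.mean((y - Phi @ w) ** 2)):.3f}")
```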

  16. Should we consider steps with variable height for a safer stair negotiation in older adults?

    PubMed

    Kunzler, Marcos R; da Rocha, Emmanuel S; Dos Santos, Christielen S; Ceccon, Fernando G; Priario, Liver A; Carpes, Felipe P

    2018-01-01

    Effects of exercise on foot clearances are important. In older adults, variations in foot clearances during walking may lead to a fall, but there is a lack of information concerning stair negotiation in older adults. Whether post-exercise conditions change foot clearances between the steps of a staircase in older adults is still unknown. The aim was to determine differences in clearances when older adults negotiate different steps of a staircase before and after a session of aerobic exercise. Kinematics data from 30 older adults were acquired and the toe and heel clearances were determined for each step. Clearances were compared between the steps. Smaller clearances were found at the highest step during ascending and descending, which was not changed by exercise. Smaller clearances suggest a higher risk of tripping at the top of the staircase, regardless of exercise. A smaller step at the top of a short flight of stairs could reduce the chance of tripping in older adults. This suggests that steps with variable height could make stair negotiation safer in older adults. This hypothesis should be tested in further studies.

  17. Generalized semiparametric varying-coefficient models for longitudinal data

    NASA Astrophysics Data System (ADS)

    Qi, Li

    In this dissertation, we investigate generalized semiparametric varying-coefficient models for longitudinal data that can flexibly model three types of covariate effects: time-constant effects, time-varying effects, and covariate-varying effects, i.e., covariate effects that depend on other possibly time-dependent exposure variables. First, we consider the model that assumes the time-varying effects are unspecified functions of time while the covariate-varying effects are parametric functions of an exposure variable specified up to a finite number of unknown parameters. The estimation procedures are developed using multivariate local linear smoothing and generalized weighted least squares estimation techniques. The asymptotic properties of the proposed estimators are established. The simulation studies show that the proposed methods have satisfactory finite sample performance. Data from the ACTG 244 clinical trial of HIV-infected patients are used to examine the effects of antiretroviral treatment switching before and after the development of the 215 mutation. Our analysis shows a benefit of treatment switching before the 215 mutation develops. The proposed methods are also applied to the STEP study with MITT cases, showing that they have broad applications in medical research.

  18. Importance of the cutoff value in the quadratic adaptive integrate-and-fire model.

    PubMed

    Touboul, Jonathan

    2009-08-01

    The quadratic adaptive integrate-and-fire model (Izhikevich, 2003 , 2007 ) is able to reproduce various firing patterns of cortical neurons and is widely used in large-scale simulations of neural networks. This model describes the dynamics of the membrane potential by a differential equation that is quadratic in the voltage, coupled to a second equation for adaptation. Integration is stopped during the rise phase of a spike at a voltage cutoff value V(c) or when it blows up. Subsequently the membrane potential is reset, and the adaptation variable is increased by a fixed amount. We show in this note that in the absence of a cutoff value, not only the voltage but also the adaptation variable diverges in finite time during spike generation in the quadratic model. The divergence of the adaptation variable makes the system very sensitive to the cutoff: changing V(c) can dramatically alter the spike patterns. Furthermore, from a computational viewpoint, the divergence of the adaptation variable implies that the time steps for numerical simulation need to be small and adaptive. However, divergence of the adaptation variable does not occur for the quartic model (Touboul, 2008 ) and the adaptive exponential integrate-and-fire model (Brette & Gerstner, 2005 ). Hence, these models are robust to changes in the cutoff value.
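
    A minimal Euler simulation of a quadratic adaptive integrate-and-fire neuron of this general (Izhikevich-type) form, used here only to probe how the firing pattern shifts with the cutoff value; the regular-spiking parameter set is the commonly quoted one, while the input current, time step, and cutoff values are arbitrary choices for the demo.

```python
import numpy as np

def izhikevich_spike_times(v_cutoff, I=10.0, dt=0.05, t_max=500.0,
                           a=0.02, b=0.2, c=-65.0, d=8.0):
    """Euler simulation of a quadratic adaptive IF (Izhikevich-type) neuron;
    the spike cutoff v_cutoff is the quantity whose influence we probe."""
    v, u = -65.0, b * -65.0
    spikes = []
    for step in range(int(t_max / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= v_cutoff:                   # reset during the spike upstroke
            spikes.append(step * dt)
            v, u = c, u + d
    return np.array(spikes)

# Changing the cutoff changes when the reset occurs and the state at reset,
# which can alter the spike pattern; the note above shows that in the continuous
# model the adaptation variable itself diverges, making this sensitivity severe.
for vc in (30.0, 40.0, 60.0):
    st = izhikevich_spike_times(vc)
    rate = 1000.0 * len(st) / 500.0         # spikes per second
    print(f"cutoff {vc:5.1f} mV -> {len(st)} spikes ({rate:.1f} Hz)")
```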

  19. Physical Activity Patterns and Sedentary Behavior in Older Women With Urinary Incontinence: an Accelerometer-based Study.

    PubMed

    Chu, Christine M; Khanijow, Kavita D; Schmitz, Kathryn H; Newman, Diane K; Arya, Lily A; Harvie, Heidi S

    2018-01-10

    Objective physical activity data for women with urinary incontinence are lacking. We investigated the relationship between physical activity, sedentary behavior, and the severity of urinary symptoms in older community-dwelling women with urinary incontinence using accelerometers. This is a secondary analysis of a study that measured physical activity (step count, moderate-to-vigorous physical activity time) and sedentary behavior (percentage of sedentary time, number of sedentary bouts per day) using a triaxial accelerometer in older community-dwelling adult women not actively seeking treatment of their urinary symptoms. The relationship between urinary symptoms and physical activity variables was measured using linear regression. Our cohort of 35 community-dwelling women (median, age, 71 years) demonstrated low physical activity (median daily step count, 2168; range, 687-5205) and high sedentary behavior (median percentage of sedentary time, 74%; range, 54%-89%). Low step count was significantly associated with nocturia (P = 0.02). Shorter duration of moderate-to-vigorous physical activity time was significantly associated with nocturia (P = 0.001), nocturnal enuresis (P = 0.04), and greater use of incontinence products (P = 0.04). Greater percentage of time spent in sedentary behavior was also significantly associated with nocturia (P = 0.016). Low levels of physical activity are associated with greater nocturia and nocturnal enuresis. Sedentary behavior is a new construct that may be associated with lower urinary tract symptoms. Physical activity and sedentary behavior represent potential new targets for treating nocturnal urinary tract symptoms.

  20. One-step generation of continuous-variable quadripartite cluster states in a circuit QED system

    NASA Astrophysics Data System (ADS)

    Yang, Zhi-peng; Li, Zhen; Ma, Sheng-li; Li, Fu-li

    2017-07-01

    We propose a dissipative scheme for one-step generation of continuous-variable quadripartite cluster states in a circuit QED setup consisting of four superconducting coplanar waveguide resonators and a gap-tunable superconducting flux qubit. With external driving fields to adjust the desired qubit-resonator and resonator-resonator interactions, we show that continuous-variable quadripartite cluster states of the four resonators can be generated with the assistance of energy relaxation of the qubit. By comparison with the previous proposals, the distinct advantage of our scheme is that only one step of quantum operation is needed to realize the quantum state engineering. This makes our scheme simpler and more feasible in experiment. Our result may have useful application for implementing quantum computation in solid-state circuit QED systems.

  1. An easy-to-use calculating machine to simulate steady state and non-steady-state preparative separations by multiple dual mode counter-current chromatography with semi-continuous loading of feed mixtures.

    PubMed

    Kostanyan, Artak E; Shishilov, Oleg N

    2018-06-01

    Multiple dual mode counter-current chromatography (MDM CCC) separation processes with semi-continuous large sample loading consist of a succession of two counter-current steps: with "x" phase (first step) and "y" phase (second step) flow periods. A feed mixture dissolved in the "x" phase is continuously loaded into a CCC machine at the beginning of the first step of each cycle over a constant time with the volumetric rate equal to the flow rate of the pure "x" phase. An easy-to-use calculating machine is developed to simulate the chromatograms and the amounts of solutes eluted with the phases at each cycle for steady-state (the duration of the flow periods of the phases is kept constant for all the cycles) and non-steady-state (with variable duration of alternating phase elution steps) separations. Using the calculating machine, the separation of mixtures containing up to five components can be simulated and designed. Examples of the application of the calculating machine for the simulation of MDM CCC processes are discussed. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Two-step sensitivity testing of parametrized and regionalized life cycle assessments: methodology and case study.

    PubMed

    Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie

    2013-06-04

    Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.
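
    A bare-bones version of the elementary-effects (Morris-style) screening step, written directly in NumPy with a simplified one-at-a-time design rather than a dedicated SA library; the stand-in "LCA model" and its parameter names are invented for illustration.

```python
import numpy as np

def elementary_effects(model, n_params, n_repeats=50, delta=0.1, seed=0):
    """One-at-a-time elementary-effects screening (a simplified radial design):
    mu_star ranks overall influence, sigma flags interactions/nonlinearity."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((n_repeats, n_params))
    for r in range(n_repeats):
        base = rng.uniform(0.0, 1.0 - delta, size=n_params)
        f0 = model(base)
        for i in range(n_params):
            pert = base.copy()
            pert[i] += delta
            ee[r, i] = (model(pert) - f0) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)   # mu_star, sigma

# Invented stand-in for an LCA impact model with 4 normalized parameters.
def toy_lca_model(x):
    cf_ivory_coast, cf_ghana, yield_factor, transport = x
    return 5.0 * cf_ivory_coast + 3.0 * cf_ghana + 0.2 * yield_factor ** 2 + 0.05 * transport

mu_star, sigma = elementary_effects(toy_lca_model, n_params=4)
for name, m, s in zip(["CF Ivory Coast", "CF Ghana", "yield", "transport"], mu_star, sigma):
    print(f"{name:15s}  mu* = {m:6.3f}   sigma = {s:6.3f}")
```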

  3. Facility Composer Design Wizards: A Method for Extensible Codified Design Logic Based on Explicit Facility Criteria

    DTIC Science & Technology

    2004-11-01

    institutionalized approaches to solving problems, company/client specific mission priorities (for example, State Department vs. Army Reserve and... independent variables that let the user leave a particular step before finishing all the items, and to return at a later time without any data loss. One... Sales, Main Exchange, Miscellaneous Shops, Post Office, Restaurant, and Theater.) Authorized customers served 04 Other criteria provided by the

  4. Physical Activity in Hemodialysis Patients Measured by Triaxial Accelerometer

    PubMed Central

    Gomes, Edimar Pedrosa; Reboredo, Maycon Moura; Carvalho, Erich Vidal; Teixeira, Daniel Rodrigues; Carvalho, Laís Fernanda Caldi d'Ornellas; Filho, Gilberto Francisco Ferreira; de Oliveira, Julio César Abreu; Sanders-Pinheiro, Helady; Chebli, Júlio Maria Fonseca; de Paula, Rogério Baumgratz; Pinheiro, Bruno do Valle

    2015-01-01

    Different factors can contribute to a sedentary lifestyle among hemodialysis (HD) patients, including the period they spend on dialysis. The aim of this study was to evaluate characteristics of physical activities in daily life in this population by using an accurate triaxial accelerometer and to correlate these characteristics with physiological variables. Nineteen HD patients were evaluated using the DynaPort accelerometer and compared to nineteen control individuals, regarding the time spent in different activities and positions of daily life and the number of steps taken. HD patients were more sedentary than control individuals, spending less time walking or standing and more time lying down. The sedentary behavior was more pronounced on dialysis days. According to the number of steps taken per day, 47.4% of hemodialysis patients were classified as sedentary, against 10.5% in the control group. Hemoglobin level, lower extremity muscle strength, and the physical functioning domain of the SF-36 questionnaire correlated significantly with walking time and active time. Looking closely at the patterns of activity in daily life, HD patients are more sedentary, especially on dialysis days. These patients should be motivated to increase their physical activity. PMID:26090432

  5. DYCAST: A finite element program for the crash analysis of structures

    NASA Technical Reports Server (NTRS)

    Pifko, A. B.; Winter, R.; Ogilvie, P.

    1987-01-01

    DYCAST is a nonlinear structural dynamic finite element computer code developed for crash simulation. The element library contains stringers, beams, membrane skin triangles, plate bending triangles and spring elements. Changes in structural stiffness are accounted for by plasticity and very large deflections. Material nonlinearities are accommodated by one of three options: elastic-perfectly plastic, elastic-linear hardening plastic, or elastic-nonlinear hardening plastic of the Ramberg-Osgood type. Geometric nonlinearities are handled in an updated Lagrangian formulation by reforming the structure into its deformed shape after small time increments while accumulating deformations, strains, and forces. The nonlinearities due to combined loadings are maintained, and stiffness variations due to structural failures are computed. Numerical time integrators available are fixed-step central difference, modified Adams, Newmark-beta, and Wilson-theta. The last three have a variable time step capability, which is controlled internally by a solution convergence error measure. Other features include: multiple time-load history tables to subject the structure to time-dependent loading; gravity loading; initial pitch, roll, yaw, and translation of the structural model with respect to the global system; a bandwidth optimizer as a pre-processor; and deformed plots and graphics as post-processors.
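
    A compact sketch of the variable-time-step idea for implicit structural dynamics, not DYCAST itself: a Newmark-beta (average acceleration) integrator for an invented single-DOF hardening spring, where the step is halved whenever the Newton iterations inside a step fail to converge and is cautiously regrown afterwards.

```python
import numpy as np

# Single-DOF system: m*a + c*v + f_int(x) = p(t), with a hardening spring (all values invented).
m, c = 1.0, 0.05
k0, k3 = 1.0, 5.0
f_int = lambda x: k0 * x + k3 * x ** 3          # internal restoring force
k_tan = lambda x: k0 + 3 * k3 * x ** 2          # tangent stiffness
p = lambda t: 2.0 * np.sin(1.3 * t)             # external load history

beta, gamma = 0.25, 0.5                         # average-acceleration Newmark

def newmark_step(x, v, a, t, dt, tol=1e-10, max_iter=10):
    """One Newmark-beta step solved by Newton iteration on the new displacement.
    Returns (x, v, a, converged)."""
    x_new = x                                    # predictor: start from the old state
    for _ in range(max_iter):
        a_new = (x_new - x - dt * v) / (beta * dt**2) - (0.5 - beta) / beta * a
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        res = m * a_new + c * v_new + f_int(x_new) - p(t + dt)
        if abs(res) < tol:
            return x_new, v_new, a_new, True
        k_eff = m / (beta * dt**2) + c * gamma / (beta * dt) + k_tan(x_new)
        x_new -= res / k_eff                     # Newton update
    return x, v, a, False

t, dt, t_end = 0.0, 0.05, 20.0
x, v = 0.0, 0.0
a = (p(0.0) - c * v - f_int(x)) / m
steps = 0
while t < t_end:
    x_n, v_n, a_n, ok = newmark_step(x, v, a, t, dt)
    if ok:
        x, v, a, t = x_n, v_n, a_n, t + dt
        dt = min(dt * 1.2, 0.05)                 # cautiously grow the step again
        steps += 1
    else:
        dt *= 0.5                                # convergence failure: halve the step
print(f"reached t = {t:.2f} in {steps} accepted steps, final x = {x:+.4f}")
```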

  6. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    NASA Astrophysics Data System (ADS)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation are sometimes unstable when forward modeling of seismic waves uses large time steps over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling by applying the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method is called the symplectic Fourier finite-difference (symplectic FFD) method, and offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling of strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method overcomes the residual qSV-wave artifact of seismic modeling in anisotropic media and maintains the stability of wavefield propagation for large time steps.

  7. Single-step method for β-galactosidase assays in Escherichia coli using a 96-well microplate reader.

    PubMed

    Schaefer, Jorrit; Jovanovic, Goran; Kotta-Loizou, Ioly; Buck, Martin

    2016-06-15

    Historically, the lacZ gene is one of the most universally used reporters of gene expression in molecular biology. Its activity can be quantified using an artificial substrate, o-nitrophenyl-β-d-galactopyranoside (ONPG). However, the traditional method for measuring LacZ activity (first described by J. H. Miller in 1972) can be challenging for a large number of samples, is prone to variability, and involves hazardous compounds for lysis (e.g., chloroform, toluene). Here we describe a single-step assay using a 96-well microplate reader with a proven alternative cell permeabilization method. This modified protocol reduces handling time by 90%. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  8. [A focused sound field measurement system by LabVIEW].

    PubMed

    Jiang, Zhan; Bai, Jingfeng; Yu, Ying

    2014-05-01

    In this paper, according to the requirements of focused sound field measurement, a focused sound field measurement system was established based on the LabVIEW virtual instrument platform. The system can automatically search for the focus position of the sound field and adjust the scanning path according to the size of the focal region. Three-dimensional sound field scanning time was reduced from 888 hours with a uniform step to 9.25 hours with a variable step, improving the efficiency of focused sound field measurement. There is a certain deviation between the measurement results and the theoretical calculation results: the focal-plane -6 dB width difference rate was 3.691%, and the beam-axis -6 dB length difference rate was 12.937%.

  9. Improvements of the particle-in-cell code EUTERPE for petascaling machines

    NASA Astrophysics Data System (ADS)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Kleiber, Ralf; Castejón, Francisco; Cela, José M.

    2011-09-01

    In the present work we report some performance measures and computational improvements recently carried out using the gyrokinetic code EUTERPE (Jost, 2000 [1] and Jost et al., 1999 [2]), which is based on the general particle-in-cell (PIC) method. The scalability of the code has been studied for up to sixty thousand processing elements and some steps towards a complete hybridization of the code were made. As a numerical example, non-linear simulations of Ion Temperature Gradient (ITG) instabilities have been carried out in screw-pinch geometry and the results are compared with earlier works. A parametric study of the influence of variables (step size of the time integrator, number of markers, grid size) on the quality of the simulation is presented.

  10. Terrain and refractivity effects on non-optical paths

    NASA Astrophysics Data System (ADS)

    Barrios, Amalia E.

    1994-07-01

    The split-step parabolic equation (SSPE) has been used extensively to model tropospheric propagation over the sea, but recent efforts have extended this method to propagation over arbitrary terrain. At the Naval Command, Control and Ocean Surveillance Center (NCCOSC), Research, Development, Test and Evaluation Division, a split-step Terrain Parabolic Equation Model (TPEM) has been developed that takes into account variable terrain and range-dependent refractivity profiles. While TPEM has been previously shown to compare favorably with measured data and other existing terrain models, two alternative methods to model radiowave propagation over terrain, implemented within TPEM, will be presented that give a two to ten-fold decrease in execution time. These two methods are also shown to agree well with measured data.
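
    A toy one-dimensional split-step Fourier parabolic-equation marcher in the spirit of the SSPE mentioned above: diffraction is handled in the vertical-wavenumber domain and refraction as a phase screen. The frequency, refractivity profile, grid, and crude absorbing layer are all invented, and terrain and surface boundary handling are omitted.

```python
import numpy as np

# Split-step Fourier marching of the narrow-angle parabolic equation
# du/dx = (i/(2 k0)) d2u/dz2 + i k0 (n(z) - 1) u, with everything in SI units.
c0 = 3e8
freq = 3e9                                  # 3 GHz (invented)
k0 = 2 * np.pi * freq / c0

nz, z_max = 2048, 1024.0                    # vertical grid
dz = z_max / nz
z = np.arange(nz) * dz
kz = 2 * np.pi * np.fft.fftfreq(nz, d=dz)   # vertical wavenumbers (rad/m)

# Invented refractivity: a standard-atmosphere-like gradient expressed as n(z) - 1.
n_minus_1 = 320e-6 - 39e-9 * z

# Gaussian antenna beam at x = 0, centered 50 m above the surface.
u = np.exp(-((z - 50.0) / 10.0) ** 2).astype(complex)

dx, n_steps = 100.0, 500                    # 100 m range step, 50 km total range
diffraction = np.exp(-1j * kz ** 2 * dx / (2 * k0))
refraction = np.exp(1j * k0 * n_minus_1 * dx)
# Crude absorbing layer over the top quarter of the domain to suppress reflections.
absorber = np.exp(-(np.maximum(z - 0.75 * z_max, 0.0) / (0.25 * z_max)) ** 2)

for _ in range(n_steps):
    u = np.fft.ifft(diffraction * np.fft.fft(u))   # free-space (diffraction) half of the split
    u = refraction * u                              # refractive phase screen
    u *= absorber                                   # damp energy near the upper boundary

print(f"|u| at 100 m height after 50 km: {abs(u[int(100 / dz)]):.3e}")
```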

  11. Real time implementation and control validation of the wind energy conversion system

    NASA Astrophysics Data System (ADS)

    Sattar, Adnan

    The purpose of this thesis is to analyze the dynamic and transient characteristics of wind energy conversion systems, including stability issues, in a real-time environment using the Real Time Digital Simulator (RTDS). Among the different power system simulation tools available on the market, the RTDS is one of the most powerful. The RTDS simulator has a graphical user interface called RSCAD, which contains a detailed component model library for both power system and control-relevant analysis. The hardware is based upon digital signal processors mounted in racks. The RTDS simulator has the advantage of interfacing real-world signals from external devices and is therefore used to test protection and control system equipment. Dynamic and transient characteristics of fixed and variable speed wind turbine generating systems (WTGSs) are analyzed in this thesis. A Static Synchronous Compensator (STATCOM), as a flexible AC transmission system (FACTS) device, is used to enhance the fault ride through (FRT) capability of the fixed speed wind farm. A two-level voltage source converter based STATCOM is modeled in both the VSC small time-step and the VSC large time-step of the RTDS. The simulation results of the RTDS model system are compared with those of the off-line EMTP-type software PSCAD/EMTDC. A new operational scheme for a MW-class grid-connected variable speed wind turbine driven permanent magnet synchronous generator (VSWT-PMSG) is developed. The VSWT-PMSG uses fully controlled frequency converters for grid interfacing and thus has the ability to control the real and reactive powers simultaneously. The frequency converters are modeled in the VSC small time-step of the RTDS, and a realistic three-phase grid is adopted in the RSCAD simulation through the use of the optical analogue-digital converter (OADC) card of the RTDS. Steady-state and LVRT studies are carried out to validate the proposed operational scheme. Simulation results show good agreement with the real-time simulation software and thus can be used to validate the controllers for real-time operation. Integration of a Battery Energy Storage System (BESS) with a wind farm can smooth its intermittent power fluctuations. The work also focuses on the real-time implementation of a Sodium Sulfur (NaS) type BESS. The BESS is integrated with the STATCOM. The main advantage of this system is that it can also provide reactive power support to the system along with real power exchange from the BESS unit. The BESS integrated with the STATCOM is modeled in the VSC small time-step of the RTDS. A cascaded vector control scheme is used for the control of the STATCOM, and a suitable controller is developed to control the charging/discharging of the NaS type BESS. Results are compared with the laboratory-standard power system software PSCAD/EMTDC, and the advantages of using the RTDS in dynamic and transient characteristic analyses of wind farms are also demonstrated clearly.

  12. ENSO activity during the last climate cycle using IFA

    NASA Astrophysics Data System (ADS)

    Leduc, Guillaume; Vidal, Laurence; Thirumalai, Kaustubh

    2017-04-01

    The El Niño / Southern Oscillation (ENSO) is the principal mode of interannual climate variability and affects key climate parameters such as low-latitude rainfall variability. Anticipating future ENSO variability under anthropogenic forcing is vital due to its profound socioeconomic impact. Fossil corals suggest that 20th century ENSO variance is particularly high as compared to other time periods of the Holocene (Cobb et al., 2013, Science), the Last Glacial Maximum (Ford et al., 2015, Science) and the last glacial period (Tudhope et al., 2001, Science). Yet, recent climate modeling experiments suggest an increase in the frequency of both El Niño (Cai et al., 2014, Nature Climate Change) and La Niña (Cai et al., 2015, Nature Climate Change) events. We have expanded an Individual Foraminifera Analysis (IFA) dataset using the thermocline-dwelling N. dutertrei on a marine core collected in the Panama Basin (Leduc et al., 2009, Paleoceanography), that has proven to be a skillful way to reconstruct the ENSO (Thirumalai et al., 2013, Paleoceanography). Our new IFA dataset comprehensively covers the Holocene, the last deglaciation and Termination II (MIS5/6) time windows. We will also use previously published data from the Marine Isotope Stage 3 (MIS3). Our dataset confirms variable ENSO intensity during the Holocene and weaker activity during LGM than during the Holocene. As a next step, ENSO activity will be discussed with respect to the contrasting climatic background of the analysed time windows (millenial-scale variability, Terminations).

  13. Dynamic Modeling of the Main Blow in Basic Oxygen Steelmaking Using Measured Step Responses

    NASA Astrophysics Data System (ADS)

    Kattenbelt, Carolien; Roffel, B.

    2008-10-01

    In the control and optimization of basic oxygen steelmaking, it is important to have an understanding of the influence of control variables on the process. However, important process variables such as the composition of the steel and slag cannot be measured continuously. The decarburization rate and the accumulation rate of oxygen, which can be derived from the generally measured waste gas flow and composition, are an indication of changes in steel and slag composition. The influence of the control variables on the decarburization rate and the accumulation rate of oxygen can best be determined in the main blow period. In this article, the measured step responses of the decarburization rate and the accumulation rate of oxygen to step changes in the oxygen blowing rate, lance height, and the addition rate of iron ore during the main blow are presented. These measured step responses are subsequently used to develop a dynamic model for the main blow. The model consists of an iron oxide and a carbon balance and an additional equation describing the influence of the lance height and the oxygen blowing rate on the decarburization rate. With this simple dynamic model, the measured step responses can be explained satisfactorily.

  14. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    ERIC Educational Resources Information Center

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…

  15. Annual Research Review: Reaction time variability in ADHD and autism spectrum disorders: measurement and mechanisms of a proposed trans-diagnostic phenotype

    PubMed Central

    Karalunas, Sarah L.; Geurts, Hilde M.; Konrad, Kerstin; Bender, Stephan; Nigg, Joel T.

    2014-01-01

    Background Intraindividual variability in reaction time (RT) has received extensive discussion as an indicator of cognitive performance, a putative intermediate phenotype of many clinical disorders, and a possible trans-diagnostic phenotype that may elucidate shared risk factors for mechanisms of psychiatric illnesses. Scope and Methodology Using the examples of attention deficit hyperactivity disorder (ADHD) and autism spectrum disorders (ASD), we discuss RT variability. We first present a new meta-analysis of RT variability in ASD with and without comorbid ADHD. We then discuss potential mechanisms that may account for RT variability and statistical models that disentangle the cognitive processes affecting RTs. We then report a second meta-analysis comparing ADHD and non-ADHD children on diffusion model parameters. We consider how findings inform the search for neural correlates of RT variability. Findings Results suggest that RT variability is increased in ASD only when children with comorbid ADHD are included in the sample. Furthermore, RT variability in ADHD is explained by moderate to large increases (d = 0.63–0.99) in the ex-Gaussian parameter τ and the diffusion parameter drift rate, as well as by smaller differences (d = 0.32) in the diffusion parameter of nondecision time. The former may suggest problems in state regulation or arousal and difficulty detecting signal from noise, whereas the latter may reflect contributions from deficits in motor organization or output. The neuroimaging literature converges with this multicomponent interpretation and also highlights the role of top-down control circuits. Conclusion We underscore the importance of considering the interactions between top-down control, state regulation (e.g. arousal), and motor preparation when interpreting RT variability and conclude that decomposition of the RT signal provides superior interpretive power and suggests mechanisms convergent with those implicated using other cognitive paradigms. We conclude with specific recommendations for the field for next steps in the study of RT variability in neurodevelopmental disorders. PMID:24628425

  16. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    USGS Publications Warehouse

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

    Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before estimating the amount of precipitation separately on wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
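
    A compact sketch of the proposed two-step estimation idea using scikit-learn on synthetic data: a logistic regression first predicts precipitation occurrence at the target site from neighboring-gauge predictors, and a separate linear regression, fitted on wet days only, supplies the amounts. The predictors and data-generating process are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(3)

# Invented daily data: 3 neighboring gauges as predictors, one target site (mm/day).
n_days = 2000
neighbors = np.maximum(rng.gamma(0.6, 6.0, (n_days, 3)) - 3.0, 0.0)
wet = (neighbors.mean(axis=1) + rng.normal(0, 1.0, n_days)) > 1.0      # true occurrence
target = np.where(wet, 0.8 * neighbors.mean(axis=1) + rng.gamma(1.0, 1.0, n_days), 0.0)

train = slice(0, 1500)
test = slice(1500, None)

# Step 1: occurrence model (wet / dry) via logistic regression.
occ_model = LogisticRegression(max_iter=1000).fit(neighbors[train], target[train] > 0)

# Step 2: amount model fitted on wet days only.
wet_train = target[train] > 0
amt_model = LinearRegression().fit(neighbors[train][wet_train], target[train][wet_train])

# Combine: predicted amounts only where occurrence is predicted, never negative.
pred_wet = occ_model.predict(neighbors[test])
pred = np.where(pred_wet, amt_model.predict(neighbors[test]), 0.0)
pred = np.maximum(pred, 0.0)

rmse = np.sqrt(np.mean((pred - target[test]) ** 2))
print(f"two-step estimate RMSE on held-out days: {rmse:.2f} mm/day")
```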

  17. Structures of the recurrence plot of heart rate variability signal as a tool for predicting the onset of paroxysmal atrial fibrillation.

    PubMed

    Mohebbi, Maryam; Ghassemian, Hassan; Asl, Babak Mohammadzadeh

    2011-05-01

    This paper aims to propose an effective paroxysmal atrial fibrillation (PAF) predictor based on analysis of the heart rate variability (HRV) signal. Predicting the onset of PAF with non-invasive techniques is clinically important and can be invaluable in order to avoid useless therapeutic interventions and to minimize the risks for the patients. The method consists of four steps: preprocessing, feature extraction, feature reduction, and classification. In the first step, the QRS complexes are detected from the electrocardiogram (ECG) signal and then the HRV signal is extracted. In the next step, the recurrence plot (RP) of the HRV signal is obtained and six features are extracted to characterize the basic patterns of the RP. These features consist of the length of the longest diagonal segments, average length of the diagonal lines, entropy, trapping time, length of the longest vertical line, and recurrence trend. In the third step, these features are reduced to three features by the linear discriminant analysis (LDA) technique. Using LDA not only reduces the number of input features, but also increases the classification accuracy by selecting the most discriminating features. Finally, a support vector machine-based classifier is used to classify the HRV signals. The performance of the proposed method in prediction of PAF episodes was evaluated using the Atrial Fibrillation Prediction Database, which consists of both 30-minute ECG recordings ending just prior to the onset of PAF and segments at least 45 min distant from any PAF events. The obtained sensitivity, specificity, and positive predictivity were 96.55%, 100%, and 100%, respectively.
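
    As an illustration of the recurrence-plot step, the sketch below embeds a synthetic RR-interval series with a time delay, thresholds the pairwise distances into a recurrence matrix, and extracts one feature from the family listed above (the longest vertical line). The embedding parameters, threshold rule, and series itself are invented.

```python
import numpy as np

def embed(x, dim=3, tau=1):
    """Time-delay embedding of a 1-D series into dim-dimensional state vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def recurrence_plot(x, dim=3, tau=1, eps=None):
    """Binary recurrence matrix R[i, j] = 1 if states i and j are closer than eps."""
    states = embed(x, dim, tau)
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    if eps is None:
        eps = 0.2 * dists.std()              # ad hoc threshold choice
    return (dists < eps).astype(int)

def longest_vertical_line(rp):
    """Length of the longest vertical run of recurrent points in the plot."""
    best = 0
    for col in rp.T:
        run = 0
        for v in col:
            run = run + 1 if v else 0
            best = max(best, run)
    return best

# Synthetic RR-interval series (seconds): slow modulation plus beat-to-beat noise.
rng = np.random.default_rng(4)
t = np.arange(600)
rr = 0.8 + 0.05 * np.sin(2 * np.pi * t / 60) + 0.02 * rng.standard_normal(600)

rp = recurrence_plot(rr, dim=3, tau=2)
print(f"recurrence rate = {rp.mean():.3f}, longest vertical line = {longest_vertical_line(rp)}")
```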

  18. A stochastic locomotor control model for the nurse shark, Ginglymostoma cirratum.

    PubMed

    Gerald, K B; Matis, J H; Kleerekoper, H

    1978-06-12

    The locomotor behavior of the nurse shark (Ginglymostoma cirratum) is characterized by 17 variables (frequency and ratios of left, right, and total turns; their radians; straight paths (steps); distance travelled; and velocity). Within each of these variables there is an internal time dependency, the structure of which was elaborated together with an improved statistical model predicting their behavior within 90% confidence limits. The model allows for the sensitive detection of subtle locomotor responses to sensory stimulation, as values of variables may exceed the established confidence limits within minutes after onset of the stimulus. The locomotor activity is well described by an autoregression time series model and can be predicted by only seven variables. Six of these form two independently operating clusters. The first one consists of the number of right turns, the distance travelled and the mean velocity; the second one of the mean size of right turns, of left turns, and of all turns. The same clustering is obtained independently by a cluster analysis of cross-sections of the seven time series. It is apparent that, among a total of 17 locomotor variables, seven behave as individually independent agents, presumably controlled by seven separate and independent centers. The output of each center can only be predicted by its own behavior. In spite of the individual independence of the seven variables, their internal structure is similar in important aspects, which may result from control by a common command center. The shark locomotor model differs in important aspects from the one previously constructed for the goldfish. The interdependence of the locomotor variables in both species may be related to the control mechanisms postulated by von Holst for the coordination of rhythmic fin movements in fishes. A locomotor control model for the nurse shark is proposed.

  19. Childhood malnutrition in Egypt using geoadditive Gaussian and latent variable models.

    PubMed

    Khatab, Khaled

    2010-04-01

    Major progress has been made over the last 30 years in reducing the prevalence of malnutrition amongst children less than 5 years of age in developing countries. However, approximately 27% of children under the age of 5 in these countries are still malnourished. This work focuses on the childhood malnutrition in one of the biggest developing countries, Egypt. This study examined the association between bio-demographic and socioeconomic determinants and the malnutrition problem in children less than 5 years of age using the 2003 Demographic and Health survey data for Egypt. In the first step, we use separate geoadditive Gaussian models with the continuous response variables stunting (height-for-age), underweight (weight-for-age), and wasting (weight-for-height) as indicators of nutritional status in our case study. In a second step, based on the results of the first step, we apply the geoadditive Gaussian latent variable model for continuous indicators in which the 3 measurements of the malnutrition status of children are assumed as indicators for the latent variable "nutritional status".

  20. Clustering of longitudinal data by using an extended baseline: A new method for treatment efficacy clustering in longitudinal data.

    PubMed

    Schramm, Catherine; Vial, Céline; Bachoud-Lévi, Anne-Catherine; Katsahian, Sandrine

    2018-01-01

    Heterogeneity in treatment efficacy is a major concern in clinical trials. Clustering may help to identify the treatment responders and the non-responders. In the context of longitudinal cluster analyses, sample size and variability of the times of measurements are the main issues with the current methods. Here, we propose a new two-step method for the Clustering of Longitudinal data by using an Extended Baseline. The first step relies on a piecewise linear mixed model for repeated measurements with a treatment-time interaction. The second step clusters the random predictions and considers several parametric (model-based) and non-parametric (partitioning, ascendant hierarchical clustering) algorithms. A simulation study compares all options of the clustering of longitudinal data by using an extended baseline method with the latent-class mixed model. The clustering of longitudinal data by using an extended baseline method with the two model-based algorithms was the most robust option. The clustering of longitudinal data by using an extended baseline method with all the non-parametric algorithms failed when there were unequal variances of treatment effect between clusters or when the subgroups had unbalanced sample sizes. The latent-class mixed model failed when the between-patient slope variability was high. Two real data sets on neurodegenerative disease and on obesity illustrate the clustering of longitudinal data by using an extended baseline method and show how clustering may help to identify the marker(s) of the treatment response. The application of the clustering of longitudinal data by using an extended baseline method in exploratory analysis as the first stage before setting up stratified designs can provide a better estimation of treatment effect in future clinical trials.
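
    A much-simplified sketch of the two-step idea: step one summarizes each patient's trajectory by an individual slope and level (here by per-subject least squares rather than the piecewise linear mixed model), and step two clusters those summaries. The synthetic cohort and the choice of two clusters are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

# Invented longitudinal data: 60 treated patients with irregular visit times (months).
n_pat = 60
true_group = rng.integers(0, 2, n_pat)                     # responders vs. non-responders
records = []
for pid in range(n_pat):
    times = np.sort(rng.uniform(0, 24, rng.integers(4, 8)))
    slope = -0.8 if true_group[pid] else 0.1               # decline only in responders
    scores = 50 + slope * times + rng.normal(0, 2.0, len(times))
    records.append((times, scores))

# Step 1: per-patient summary of the trend (individual slope and intercept).
summaries = []
for times, scores in records:
    slope, intercept = np.polyfit(times, scores, 1)
    summaries.append([slope, intercept])
summaries = np.array(summaries)

# Step 2: cluster the individual summaries to separate treatment responders.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(summaries)
agreement = max(np.mean(labels == true_group), np.mean(labels != true_group))
print(f"cluster/true-group agreement: {agreement:.2f}")
```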

  1. Multi-Objective Control Optimization for Greenhouse Environment Using Evolutionary Algorithms

    PubMed Central

    Hu, Haigen; Xu, Lihong; Wei, Ruihua; Zhu, Bingkun

    2011-01-01

    This paper investigates the issue of tuning Proportional-Integral-Derivative (PID) controller parameters for a greenhouse climate control system using an Evolutionary Algorithm (EA) based on multiple performance measures such as good static and dynamic performance specifications and smooth control action. A model of nonlinear thermodynamic laws between numerous system variables affecting the greenhouse climate is formulated. The proposed tuning scheme is tested for greenhouse climate control by minimizing the integrated time square error (ITSE) and the control increment or rate in a simulation experiment. The results show that by tuning the gain parameters the controllers can achieve good step-response performance, with small overshoot, fast settling time, and reduced rise time and steady-state error. Moreover, the approach can be applied to tuning systems with different properties, such as strong interactions among variables, nonlinearities and conflicting performance criteria. The results indicate that multi-objective optimization algorithms provide an effective and promising tuning method for complex greenhouse production. PMID:22163927
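
    A self-contained toy of the approach: a simple evolutionary search tunes PID gains to minimize an ITSE-plus-control-increment cost on the step response of a stand-in two-time-constant plant. The plant, cost weights, and EA settings are all invented and far simpler than a greenhouse climate model.

```python
import numpy as np

def step_response_cost(gains, dt=0.1, t_end=120.0, w_du=0.01):
    """Simulate a unit setpoint step with a PID loop around a stand-in plant
    (two cascaded first-order lags) and return ITSE plus a control-increment penalty."""
    kp, ki, kd = gains
    x1 = x2 = 0.0                       # plant states (time constants 5 s and 2 s)
    integ, prev_err, prev_u = 0.0, 0.0, 0.0
    cost = 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        err = 1.0 - x2                  # setpoint = 1
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = np.clip(kp * err + ki * integ + kd * deriv, 0.0, 10.0)
        x1 += dt * (-x1 + u) / 5.0
        x2 += dt * (-x2 + x1) / 2.0
        cost += t * err**2 * dt + w_du * (u - prev_u)**2
        prev_err, prev_u = err, u
    return cost

def evolve_pid(pop_size=30, generations=40, seed=6):
    """A (mu + lambda)-style evolutionary search over (Kp, Ki, Kd)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform([0, 0, 0], [10, 1, 10], size=(pop_size, 3))
    for _ in range(generations):
        costs = np.array([step_response_cost(g) for g in pop])
        parents = pop[np.argsort(costs)[: pop_size // 3]]                 # truncation selection
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        children = np.abs(children + rng.normal(0, 0.2, children.shape))  # Gaussian mutation
        pop = np.vstack([parents, children])
    costs = np.array([step_response_cost(g) for g in pop])
    return pop[np.argmin(costs)], costs.min()

best, best_cost = evolve_pid()
print(f"best gains Kp={best[0]:.2f}, Ki={best[1]:.2f}, Kd={best[2]:.2f}, cost={best_cost:.2f}")
```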

  2. A novel and simple test of gait adaptability predicts gold standard measures of functional mobility in stroke survivors.

    PubMed

    Hollands, K L; Pelton, T A; van der Veen, S; Alharbi, S; Hollands, M A

    2016-01-01

    Although there is evidence that stroke survivors have reduced gait adaptability, the underlying mechanisms and the relationship to functional recovery are largely unknown. We explored the relationships between walking adaptability and clinical measures of balance, motor recovery and functional ability in stroke survivors. Stroke survivors (n=42) stepped to targets placed on a 6 m walkway to elicit step lengthening, shortening and narrowing on paretic and non-paretic sides. The number of targets missed during six walks and target stepping speed were recorded. Fugl-Meyer (FM), Berg Balance Scale (BBS), self-selected walking speed (SSWS) and single support (SS) and step length (SL) symmetry (using GaitRite when not walking to targets) were also assessed. Stepwise multiple-linear regression was used to model the relationships between: total targets missed, number missed with paretic and non-paretic legs, target stepping speed, and each clinical measure. Regression revealed a significant model for each outcome variable that included only one independent variable. The number of targets missed by the paretic limb was a significant predictor of FM (F(1,40)=6.54, p=0.014). Speed of target stepping was a significant predictor of both BBS (F(1,40)=26.36, p<0.0001) and SSWS (F(1,40)=37.00, p<0.0001). No variables were significant predictors of SL or SS asymmetry. Speed of target stepping was significantly predictive of BBS and SSWS, and paretic targets missed predicted FM, suggesting that fast target stepping requires good balance and accurate stepping demands good paretic leg function. The relationships between these parameters indicate gait adaptability is a clinically meaningful target for measurement and treatment of functionally adaptive walking ability in stroke survivors. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. The synchronisation of lower limb responses with a variable metronome: the effect of biomechanical constraints on timing.

    PubMed

    Chen, Hui-Ya; Wing, Alan M; Pratt, David

    2006-04-01

    Stepping in time with a metronome has been reported to improve pathological gait. Although there have been many studies of finger tapping synchronisation tasks with a metronome, the specific details of the influences of metronome timing on walking remain unknown. As a preliminary to studying pathological control of gait timing, we designed an experiment with four synchronisation tasks: unilateral heel tapping in sitting, bilateral heel tapping in sitting, bilateral heel tapping in standing, and stepping on the spot, in order to examine the influence of biomechanical constraints on metronome timing. These four conditions allow study of the effects of bilateral co-ordination and maintenance of balance on timing. Eight neurologically normal participants made heel tapping and stepping responses in synchrony with a metronome producing 500 ms interpulse intervals. In each trial comprising 40 intervals, one interval, selected at random between intervals 15 and 30, was lengthened or shortened, which resulted in a shift in phase of all subsequent metronome pulses. Performance measures were the speed of compensation for the phase shift, in terms of the temporal difference between the response and the metronome pulse, i.e. asynchrony, and the standard deviation of the asynchronies and inter-response intervals during steady-state synchronisation. The speed of compensation decreased as the demands of maintaining balance increased. The standard deviation varied across conditions but was not related to the compensation speed. The implications of these findings for metronome assisted gait are discussed in terms of a first-order linear correction account of synchronisation.
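
    The first-order linear correction account mentioned at the end can be illustrated with a small simulation: on each response, a fraction alpha of the current asynchrony is corrected on the next interval, and a single metronome interval is lengthened mid-trial, shifting the phase of all later pulses as in the experiment. The correction gain and noise level below are illustrative assumptions, not estimates from the study.

    ```python
    # Minimal simulation of first-order linear phase correction with one
    # metronome phase shift; alpha and the motor noise level are made up.
    import numpy as np

    rng = np.random.default_rng(2)

    n_intervals = 40
    base = 500.0                      # ms, metronome interpulse interval
    perturb_at = 20                   # index of the lengthened interval
    shift = 60.0                      # ms added to that interval

    intervals = np.full(n_intervals, base)
    intervals[perturb_at] += shift
    pulses = np.cumsum(intervals)     # metronome pulse times with a phase shift

    alpha = 0.4                       # phase-correction gain (0 = no correction)
    motor_sd = 10.0                   # ms, timing noise

    responses = np.zeros(n_intervals)
    asynchrony = np.zeros(n_intervals)
    responses[0] = pulses[0]
    for k in range(1, n_intervals):
        a_prev = responses[k - 1] - pulses[k - 1]
        # Next response: one base interval later, minus a partial correction.
        responses[k] = responses[k - 1] + base - alpha * a_prev + rng.normal(0, motor_sd)
        asynchrony[k] = responses[k] - pulses[k]

    print("asynchrony just after the shift (ms):",
          np.round(asynchrony[perturb_at:perturb_at + 6], 1))
    ```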

  4. Effect of water hardness on cardiovascular mortality: an ecological time series approach.

    PubMed

    Lake, I R; Swift, L; Catling, L A; Abubakar, I; Sabel, C E; Hunter, P R

    2010-12-01

    Numerous studies have suggested an inverse relationship between drinking water hardness and cardiovascular disease. However, the weight of evidence is insufficient for the WHO to implement a health-based guideline for water hardness. This study followed WHO recommendations to assess the feasibility of using ecological time series data from areas exposed to step changes in water hardness to investigate this issue. Monthly time series of cardiovascular mortality data, subdivided by age and sex, were systematically collected from areas reported to have undergone step changes in water hardness, calcium and magnesium in England and Wales between 1981 and 2005. Time series methods were used to investigate the effect of water hardness changes on mortality. No evidence was found of an association between step changes in drinking water hardness or drinking water calcium and cardiovascular mortality. The lack of areas with large populations and a reasonable change in magnesium levels precludes a definitive conclusion about the impact of this cation. We use our results on the variability of the series to consider the data requirements (size of population, time of water hardness change) for such a study to have sufficient power. Only data from areas with large populations (>500,000) are likely to be able to detect a change of the size suggested by previous studies (rate ratio of 1.06). Ecological time series studies of populations exposed to changes in drinking water hardness may not be able to provide conclusive evidence on the links between water hardness and cardiovascular mortality unless very large populations are studied. Investigations of individuals may be more informative.

  5. Development and external validation of new ultrasound-based mathematical models for preoperative prediction of high-risk endometrial cancer.

    PubMed

    Van Holsbeke, C; Ameye, L; Testa, A C; Mascilini, F; Lindqvist, P; Fischerova, D; Frühauf, F; Fransis, S; de Jonge, E; Timmerman, D; Epstein, E

    2014-05-01

    To develop and validate strategies, using new ultrasound-based mathematical models, for the prediction of high-risk endometrial cancer and compare them with strategies using previously developed models or the use of preoperative grading only. Women with endometrial cancer were prospectively examined using two-dimensional (2D) and three-dimensional (3D) gray-scale and color Doppler ultrasound imaging. More than 25 ultrasound, demographic and histological variables were analyzed. Two logistic regression models were developed: one 'objective' model using mainly objective variables; and one 'subjective' model including subjective variables (i.e. subjective impression of myometrial and cervical invasion, preoperative grade and demographic variables). The following strategies were validated: a one-step strategy using only preoperative grading and two-step strategies using preoperative grading as the first step and one of the new models, subjective assessment or previously developed models as a second step. One-hundred and twenty-five patients were included in the development set and 211 were included in the validation set. The 'objective' model retained preoperative grade and minimal tumor-free myometrium as variables. The 'subjective' model retained preoperative grade and subjective assessment of myometrial invasion. On external validation, the performance of the new models was similar to that on the development set. Sensitivity for the two-step strategy with the 'objective' model was 78% (95% CI, 69-84%) at a cut-off of 0.50, 82% (95% CI, 74-88%) for the strategy with the 'subjective' model and 83% (95% CI, 75-88%) for that with subjective assessment. Specificity was 68% (95% CI, 58-77%), 72% (95% CI, 62-80%) and 71% (95% CI, 61-79%) respectively. The two-step strategies detected up to twice as many high-risk cases as preoperative grading only. The new models had a significantly higher sensitivity than did previously developed models, at the same specificity. Two-step strategies with 'new' ultrasound-based models predict high-risk endometrial cancers with good accuracy and do this better than do previously developed models. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.

  6. Scalable asynchronous execution of cellular automata

    NASA Astrophysics Data System (ADS)

    Folino, Gianluigi; Giordano, Andrea; Mastroianni, Carlo

    2016-10-01

    The performance and scalability of cellular automata, when executed on parallel/distributed machines, are limited by the necessity of synchronizing all the nodes at each time step, i.e., a node can execute only after the execution of the previous step at all the other nodes. However, these synchronization requirements can be relaxed: a node can execute one step after synchronizing only with the adjacent nodes. In this fashion, different nodes can execute different time steps. This can be notably advantageous in many novel and increasingly popular applications of cellular automata, such as smart city applications, simulation of natural phenomena, etc., in which the execution times can be different and variable, due to the heterogeneity of machines and/or data and/or executed functions. Indeed, a longer execution time at a node does not slow down the execution at all the other nodes but only at the neighboring nodes. This is particularly advantageous when the nodes that act as bottlenecks vary during the application execution. The goal of the paper is to analyze the benefits that can be achieved with the described asynchronous implementation of cellular automata, when compared to the classical all-to-all synchronization pattern. The performance and scalability have been evaluated through a Petri net model, as this model is very useful for representing the synchronization barrier among nodes. We examined the usual case in which the territory is partitioned into a number of regions, and the computation associated with a region is assigned to a computing node. We considered both the cases of mono-dimensional and two-dimensional partitioning. The results show that the advantage obtained through the asynchronous execution, when compared to the all-to-all synchronous approach, is notable, and it can be as large as 90% in terms of speedup.
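
    The relaxed synchronization rule can be illustrated with a toy event-driven simulation of a one-dimensional partition: each region may start its next step as soon as both neighbours have completed the current one, so neighbouring regions never differ by more than one step. This is a sequential sketch with made-up random execution costs (region 0 deliberately slow), not the Petri net model or a distributed implementation.

    ```python
    # Toy simulation of neighbour-only synchronization in a 1-D region partition.
    import heapq
    import numpy as np

    rng = np.random.default_rng(3)

    n_regions = 8
    target_steps = 50
    steps = np.zeros(n_regions, dtype=int)   # completed time steps per region
    running = [False] * n_regions
    events = []                              # (finish_time, region) heap
    clock = 0.0
    max_spread = 0

    def ready(i):
        """Region i may start step steps[i]+1 once both neighbours have completed steps[i]."""
        left_ok = i == 0 or steps[i - 1] >= steps[i]
        right_ok = i == n_regions - 1 or steps[i + 1] >= steps[i]
        return left_ok and right_ok

    def try_start(i):
        if not running[i] and steps[i] < target_steps and ready(i):
            cost = rng.exponential(1.0) * (3.0 if i == 0 else 1.0)  # region 0 is the slow node
            heapq.heappush(events, (clock + cost, i))
            running[i] = True

    for i in range(n_regions):
        try_start(i)

    while events:
        clock, i = heapq.heappop(events)
        running[i] = False
        steps[i] += 1
        max_spread = max(max_spread, steps.max() - steps.min())
        # Completing a step may unblock this region and its two neighbours.
        for j in (i - 1, i, i + 1):
            if 0 <= j < n_regions:
                try_start(j)

    print("simulated completion time:", round(clock, 1))
    print("largest step lag between fastest and slowest region:", int(max_spread))
    ```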

  7. Dual-task as a predictor of falls in older people with mild cognitive impairment and mild Alzheimer's disease: a prospective cohort study.

    PubMed

    Gonçalves, Jessica; Ansai, Juliana Hotta; Masse, Fernando Arturo Arriagada; Vale, Francisco Assis Carvalho; Takahashi, Anielle Cristhine de Medeiros; Andrade, Larissa Pires de

    2018-04-04

    A dual-task tool with a challenging and daily secondary task, which involves executive functions, could facilitate the screening for risk of falls in older people with mild cognitive impairment or mild Alzheimer's disease. To verify if a motor-cognitive dual-task test could predict falls in older people with mild cognitive impairment or mild Alzheimer's disease, and to establish cutoff scores for the tool for both groups. A prospective study was conducted with community-dwelling older adults, including 40 with mild cognitive impairment and 38 with mild Alzheimer's disease. The dual-task test consisted of the Timed Up and Go Test combined with a motor-cognitive task of using a phone to make a call. Falls were recorded over six months using a calendar and monthly telephone calls, and the participants were categorized as fallers or non-fallers. In the mild cognitive impairment group, fallers presented higher values for time (35.2 s), number of steps (33.7 steps) and motor task cost (116%) on the dual-task compared with non-fallers. Time, number of steps and motor task cost were significantly associated with falls in people with mild cognitive impairment. Multivariate analysis identified a higher number of steps on the test as independently associated with falls. A time greater than 23.88 s (sensitivity=80%; specificity=61%) and a number of steps over 29.50 (sensitivity=65%; specificity=83%) indicated a risk of falls in the mild cognitive impairment group. Among people with Alzheimer's disease, no differences in dual-task performance between fallers and non-fallers were found and no variable of the tool was able to predict falls. The dual-task predicts falls only in older people with mild cognitive impairment. Copyright © 2018 Associação Brasileira de Pesquisa e Pós-Graduação em Fisioterapia. Publicado por Elsevier Editora Ltda. All rights reserved.

  8. PETOOL: MATLAB-based one-way and two-way split-step parabolic equation tool for radiowave propagation over variable terrain

    NASA Astrophysics Data System (ADS)

    Ozgun, Ozlem; Apaydin, Gökhan; Kuzuoglu, Mustafa; Sevgi, Levent

    2011-12-01

    A MATLAB-based one-way and two-way split-step parabolic equation software tool (PETOOL) has been developed with a user-friendly graphical user interface (GUI) for the analysis and visualization of radio-wave propagation over variable terrain and through homogeneous and inhomogeneous atmosphere. The tool has a unique feature over existing one-way parabolic equation (PE)-based codes, because it utilizes the two-way split-step parabolic equation (SSPE) approach with a wide-angle propagator, which is a recursive forward-backward algorithm to incorporate both forward and backward waves into the solution in the presence of variable terrain. First, the formulation of the classical one-way SSPE and the relatively novel two-way SSPE is presented, with particular emphasis on their capabilities and limitations. Next, the structure and the GUI capabilities of the PETOOL software tool are discussed in detail. The calibration of PETOOL is performed and demonstrated via analytical comparisons and/or representative canonical tests performed against Geometrical Optics (GO) + the Uniform Theory of Diffraction (UTD). The tool can be used for research and/or educational purposes to investigate the effects of a variety of user-defined terrain and range-dependent refractivity profiles on electromagnetic wave propagation.

    Program summary
    Program title: PETOOL (Parabolic Equation Toolbox)
    Catalogue identifier: AEJS_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJS_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 143 349
    No. of bytes in distributed program, including test data, etc.: 23 280 251
    Distribution format: tar.gz
    Programming language: MATLAB (MathWorks Inc.) 2010a. Partial Differential Toolbox and Curve Fitting Toolbox required
    Computer: PC
    Operating system: Windows XP and Vista
    Classification: 10
    Nature of problem: Simulation of radio-wave propagation over variable terrain on the Earth's surface, and through homogeneous and inhomogeneous atmosphere.
    Solution method: The program implements the one-way and two-way Split-Step Parabolic Equation (SSPE) algorithms, with a wide-angle propagator. The SSPE is, in general, an initial-value problem starting from a reference range (typically from an antenna), and marching out in range by obtaining the field along the vertical direction at each range step, through the use of step-by-step Fourier transformations. The two-way algorithm incorporates the backward-propagating waves into the standard one-way SSPE by utilizing an iterative forward-backward scheme for modeling multipath effects over a staircase-approximated terrain.
    Unusual features: This is the first software package implementing a recursive forward-backward SSPE algorithm to account for the multipath effects during radio-wave propagation, and enabling the user to easily analyze and visualize the results of the two-way propagation with GUI capabilities.
    Running time: Problem dependent. Typically, it is about 1.5 ms (for conducting ground) and 4 ms (for lossy ground) per range step for a vertical field profile of vector length 1500, on an Intel Core 2 Duo 1.6 GHz with 2 GB RAM under Windows Vista.
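
    The basic split-step marching idea described under "Solution method" can be sketched in a few lines: diffraction is applied to the vertical field profile in the spectral domain and refraction as a phase screen in the spatial domain, one range step at a time. This is a bare-bones one-way, narrow-angle illustration with an exponential reference atmosphere and a Gaussian aperture; it is not PETOOL's wide-angle two-way algorithm and ignores terrain and boundary treatment entirely.

    ```python
    # One-way, narrow-angle split-step Fourier marching for the parabolic equation.
    import numpy as np

    c0 = 3e8
    freq = 3e9                          # 3 GHz
    k0 = 2 * np.pi * freq / c0          # free-space wavenumber (rad/m)

    nz, dz = 1024, 0.5                  # vertical grid
    dx = 50.0                           # range step (m)
    z = np.arange(nz) * dz
    kz = 2 * np.pi * np.fft.fftfreq(nz, d=dz)   # transverse wavenumbers

    # Initial field: a Gaussian antenna aperture centred at 50 m height.
    u = np.exp(-((z - 50.0) / 5.0) ** 2).astype(complex)

    # Exponential reference atmosphere as a stand-in refractivity profile.
    n_refr = 1.0 + 315e-6 * np.exp(-z / 7350.0)

    diffraction = np.exp(-1j * kz**2 * dx / (2.0 * k0))     # spectral propagator
    refraction = np.exp(1j * k0 * (n_refr - 1.0) * dx)      # phase screen

    for step in range(200):             # march 10 km in range
        u = np.fft.ifft(diffraction * np.fft.fft(u))        # diffraction half of the step
        u = refraction * u                                   # refraction half of the step

    print("peak field magnitude after 10 km:", round(float(np.abs(u).max()), 4))
    ```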

  9. Variability of rainfall over Lake Kariba catchment area in the Zambezi river basin, Zimbabwe

    NASA Astrophysics Data System (ADS)

    Muchuru, Shepherd; Botai, Joel O.; Botai, Christina M.; Landman, Willem A.; Adeola, Abiodun M.

    2016-04-01

    In this study, average monthly and annual rainfall totals recorded for the period 1970 to 2010 from a network of 13 stations across the Lake Kariba catchment area of the Zambezi river basin were analyzed in order to characterize the spatial-temporal variability of rainfall across the catchment area. In the analysis, the data were subjected to intervention and homogeneity analysis using the Cumulative Summation (CUSUM) technique and step-change analysis using the rank-sum test. Furthermore, rainfall variability was characterized by trend analysis using the non-parametric Mann-Kendall statistic. Additionally, the rainfall series were decomposed and the spectral characteristics derived using Cross Wavelet Transform (CWT) and Wavelet Coherence (WC) analysis. The advantage of using the wavelet-based parameters is that they vary in time and can therefore be used to quantitatively detect time-scale-dependent correlations and phase shifts between rainfall time series at various localized time-frequency scales. The annual and seasonal rainfall series were homogeneous and demonstrated no apparent significant shifts. According to the inhomogeneity classification, the rainfall series recorded across the Lake Kariba catchment area belonged to categories A (useful) and B (doubtful), i.e., zero to one and two absolute tests, respectively, rejected the null hypothesis (at the 5% significance level). Lastly, the long-term variability of the rainfall series across the Lake Kariba catchment area exhibited non-significant positive and negative trends with coherent oscillatory modes that are constantly locked in phase in the Morlet wavelet space.
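
    For reference, the Mann-Kendall trend statistic used above can be computed as in the minimal sketch below (no correction for ties or serial correlation); the annual series is synthetic and only illustrates the calculation.

    ```python
    # Minimal Mann-Kendall trend test for an annual rainfall series.
    import numpy as np
    from scipy.stats import norm

    def mann_kendall(x):
        x = np.asarray(x, dtype=float)
        n = len(x)
        # S statistic: sum of signs over all ordered pairs.
        s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance, no ties assumed
        if s > 0:
            z = (s - 1) / np.sqrt(var_s)
        elif s < 0:
            z = (s + 1) / np.sqrt(var_s)
        else:
            z = 0.0
        p = 2 * (1 - norm.cdf(abs(z)))                # two-sided p-value
        return s, z, p

    rng = np.random.default_rng(4)
    annual_rain = 700 + 0.5 * np.arange(41) + rng.normal(0, 80, 41)   # 1970-2010, weak trend
    s, z, p = mann_kendall(annual_rain)
    print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.3f}")
    ```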

  10. Walk Ratio (Step Length/Cadence) as a Summary Index of Neuromotor Control of Gait: Application to Multiple Sclerosis

    ERIC Educational Resources Information Center

    Rota, Viviana; Perucca, Laura; Simone, Anna; Tesio, Luigi

    2011-01-01

    In healthy adults, the step length/cadence ratio [walk ratio (WR) in mm/(steps/min) and normalized for height] is known to be constant around 6.5 mm/(step/min). It is a speed-independent index of the overall neuromotor gait control, in as much as it reflects energy expenditure, balance, between-step variability, and attentional demand. The speed…

  11. Analysis of model development strategies: predicting ventral hernia recurrence.

    PubMed

    Holihan, Julie L; Li, Linda T; Askenasy, Erik P; Greenberg, Jacob A; Keith, Jerrod N; Martindale, Robert G; Roth, J Scott; Liang, Mike K

    2016-11-01

    There have been many attempts to identify variables associated with ventral hernia recurrence; however, it is unclear which statistical modeling approach results in models with greatest internal and external validity. We aim to assess the predictive accuracy of models developed using five common variable selection strategies to determine variables associated with hernia recurrence. Two multicenter ventral hernia databases were used. Database 1 was randomly split into "development" and "internal validation" cohorts. Database 2 was designated "external validation". The dependent variable for model development was hernia recurrence. Five variable selection strategies were used: (1) "clinical"-variables considered clinically relevant, (2) "selective stepwise"-all variables with a P value <0.20 were assessed in a step-backward model, (3) "liberal stepwise"-all variables were included and step-backward regression was performed, (4) "restrictive internal resampling," and (5) "liberal internal resampling." Variables were included with P < 0.05 for the Restrictive model and P < 0.10 for the Liberal model. A time-to-event analysis using Cox regression was performed using these strategies. The predictive accuracy of the developed models was tested on the internal and external validation cohorts using Harrell's C-statistic where C > 0.70 was considered "reasonable". The recurrence rate was 32.9% (n = 173/526; median/range follow-up, 20/1-58 mo) for the development cohort, 36.0% (n = 95/264, median/range follow-up 20/1-61 mo) for the internal validation cohort, and 12.7% (n = 155/1224, median/range follow-up 9/1-50 mo) for the external validation cohort. Internal validation demonstrated reasonable predictive accuracy (C-statistics = 0.772, 0.760, 0.767, 0.757, 0.763), while on external validation, predictive accuracy dipped precipitously (C-statistic = 0.561, 0.557, 0.562, 0.553, 0.560). Predictive accuracy was equally adequate on internal validation among models; however, on external validation, all five models failed to demonstrate utility. Future studies should report multiple variable selection techniques and demonstrate predictive accuracy on external data sets for model validation. Copyright © 2016 Elsevier Inc. All rights reserved.
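
    One of the strategies above, the "liberal stepwise" backward elimination with external validation by Harrell's C, can be sketched as follows using the lifelines package. The cohorts, covariates, and thresholds here are hypothetical and synthetic; only the p > 0.10 elimination rule and the C-statistic check mirror the description in the abstract.

    ```python
    # Sketch of liberal stepwise (backward) Cox model selection with external C-statistic.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter
    from lifelines.utils import concordance_index

    rng = np.random.default_rng(5)

    def make_cohort(n):
        X = pd.DataFrame(rng.normal(size=(n, 4)),
                         columns=["bmi", "defect_width", "smoker", "prior_repair"])
        hazard = np.exp(0.6 * X["defect_width"] + 0.4 * X["smoker"])
        time = rng.exponential(24.0 / hazard)            # months to recurrence
        X["time"] = np.minimum(time, 60)                 # administrative censoring at 60 months
        X["recurrence"] = (time < 60).astype(int)
        return X

    development = make_cohort(500)
    external = make_cohort(400)

    covariates = ["bmi", "defect_width", "smoker", "prior_repair"]
    while True:
        cph = CoxPHFitter().fit(development[covariates + ["time", "recurrence"]],
                                duration_col="time", event_col="recurrence")
        pvals = cph.summary["p"]
        if pvals.max() <= 0.10 or len(covariates) == 1:
            break
        covariates.remove(pvals.idxmax())                # drop the weakest variable

    # Harrell's C on the external cohort (negative hazard: higher risk = shorter time).
    risk = cph.predict_partial_hazard(external)
    c_ext = concordance_index(external["time"], -risk, external["recurrence"])
    print("retained variables:", covariates)
    print("external C-statistic:", round(c_ext, 3))
    ```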

  12. An inexpensive frequency-modulated (FM) audio monitor of time-dependent analog parameters.

    PubMed

    Langdon, R B; Jacobs, R S

    1980-02-01

    The standard method for quantification and presentation of an experimental variable in real time is the use of a visual display on the ordinate of an oscilloscope screen or chart recorder. This paper describes a relatively simple electronic circuit, using commercially available and inexpensive integrated circuits (ICs), which generates an audible tone, the pitch of which varies in proportion to a running variable of interest. This device, which we call an "Audioscope," can accept as input the monitor output from any instrument that expresses an experimental parameter as a dc voltage. The Audioscope is particularly useful in implanting microelectrodes intracellularly. It may also serve as the first step in data recording on magnetic tape and/or in data analysis and reduction by electronic circuitry. We estimate that this device can be built, with two-channel capability, for less than $50, and in less than 10 hr by an experienced electronics technician.

  13. Modeling and predicting intertidal variations of the salinity field in the Bay/Delta

    USGS Publications Warehouse

    Knowles, Noah; Uncles, Reginald J.

    1995-01-01

    One approach to simulating daily to monthly variability in the bay is the development of an intertidal model using tidally averaged equations and a time step on the order of a day. An intertidal numerical model of the bay's physics, capable of portraying seasonal and inter-annual variability, would have several uses. Observations are limited in time and space, so simulation could help fill the gaps. Also, the ability to simulate multi-year episodes (e.g., an extended drought) could provide insight into the response of the ecosystem to such events. Finally, such a model could be used in a forecast mode wherein predicted delta flow is used as model input, and the predicted salinity distribution is output, with estimates days to months in advance. This note briefly introduces such a tidally averaged model (Uncles and Peterson, in press) and a corresponding predictive scheme for baywide forecasting.

  14. How valid are wearable physical activity trackers for measuring steps?

    PubMed

    An, Hyun-Sung; Jones, Gregory C; Kang, Seoung-Ki; Welk, Gregory J; Lee, Jung-Min

    2017-04-01

    Wearable activity trackers have become popular for tracking individuals' daily physical activity, but little information is available to substantiate the validity of these devices for step counts. Thirty-five healthy individuals completed three conditions of activity tracker measurement: walking/jogging on a treadmill, walking over-ground on an indoor track, and a 24-hour free-living condition. Participants wore 10 activity trackers at the same time for both the treadmill and over-ground protocols. Of these 10 activity trackers, three were randomly selected for the 24-hour free-living condition. Correlations of steps measured to steps observed were r = 0.84 and r = 0.67 for the treadmill and over-ground protocols, respectively. The mean MAPE (mean absolute percentage error) score for all devices and speeds on the treadmill was 8.2% against manually counted steps. The MAPE for step counts was higher for over-ground walking (9.9%) and even higher for the 24-hour free-living period (18.48%). Equivalence testing for step count measurement showed significant equivalence within ±5% for the Fitbit Zip, Withings Pulse, and Jawbone UP24 and within ±10% for the Basis B1 band, Garmin VivoFit, and SenseWear Armband Mini. The results show that the Fitbit Zip and Withings Pulse provided the most accurate measures of step count under all three conditions (i.e. treadmill, over-ground, and 24-hour), and that there was considerable variability in accuracy across monitors, speeds, and conditions.
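
    For clarity, the two summary measures above can be computed as in the short sketch below: mean absolute percentage error (MAPE) of device step counts against manually counted steps, and a crude check of whether the mean device/criterion ratio stays within ±5% or ±10%. The counts are made up, and this ratio check is only a rough stand-in for the confidence-interval-based equivalence testing used in the study.

    ```python
    # MAPE and a crude +/-5% / +/-10% equivalence check for step counts.
    import numpy as np

    observed = np.array([812, 1020, 954, 1105, 889, 978])     # manual counts
    device = np.array([798, 1010, 930, 1150, 870, 990])       # tracker counts

    ape = np.abs(device - observed) / observed * 100.0
    print(f"MAPE = {ape.mean():.1f}%")

    ratio = device.mean() / observed.mean()
    for band in (0.05, 0.10):
        within = (1 - band) <= ratio <= (1 + band)
        print(f"mean ratio {ratio:.3f} within +/-{band:.0%}: {within}")
    ```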

  15. A Variable Step-Size Proportionate Affine Projection Algorithm for Identification of Sparse Impulse Response

    NASA Astrophysics Data System (ADS)

    Liu, Ligang; Fukumoto, Masahiro; Saiki, Sachio; Zhang, Shiyong

    2009-12-01

    Proportionate adaptive algorithms have been proposed recently to accelerate convergence for the identification of sparse impulse responses. When the excitation signal is colored, especially speech, proportionate NLMS algorithms exhibit slow convergence. The proportionate affine projection algorithm (PAPA) is expected to solve this problem by using more information in the input signals. However, its steady-state performance is limited by the constant step-size parameter. In this article, we propose a variable step-size PAPA obtained by canceling the a posteriori estimation error. This yields fast convergence, using a large step size when the identification error is large, and then considerably decreases the steady-state misalignment by using a small step size after the adaptive filter has converged. Simulation results show that the proposed approach can greatly improve the steady-state misalignment without sacrificing the fast convergence of PAPA.
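
    A rough sketch of the general setting is given below: a proportionate affine projection filter identifying a sparse impulse response from colored input, with a simple energy-based variable step size. The step-size rule here is a simplified stand-in for the paper's a-posteriori-error-cancelling rule, and the echo path, signals, and parameters are all synthetic.

    ```python
    # Proportionate affine projection adaptive filter with a simple variable step size.
    import numpy as np

    rng = np.random.default_rng(6)

    L, P = 64, 4                       # filter length, projection order
    N = 5000
    h = np.zeros(L); h[[5, 20, 41]] = [0.8, -0.5, 0.3]     # sparse impulse response

    # Coloured excitation: white noise through a one-pole filter.
    white = rng.normal(size=N)
    x = np.zeros(N)
    for n in range(1, N):
        x[n] = 0.9 * x[n - 1] + white[n]
    d = np.convolve(x, h)[:N] + rng.normal(scale=0.01, size=N)

    w = np.zeros(L)
    delta, rho, mu_max = 1e-3, 0.01, 0.5
    misalignment = []

    for n in range(L + P, N):
        # Input matrix: columns are the P most recent length-L input vectors.
        X = np.column_stack([x[n - p - L + 1:n - p + 1][::-1] for p in range(P)])
        e = d[n - P + 1:n + 1][::-1] - X.T @ w              # a priori error vector

        # Proportionate gains (PNLMS-style) favour large coefficients.
        gamma = np.maximum(rho * max(np.max(np.abs(w)), 1e-2), np.abs(w))
        g = gamma / gamma.mean()

        mu = mu_max * (e @ e) / (e @ e + 1.0)               # simple variable step size
        GX = g[:, None] * X
        w += mu * GX @ np.linalg.solve(X.T @ GX + delta * np.eye(P), e)
        misalignment.append(np.sum((w - h) ** 2) / np.sum(h ** 2))

    print("final misalignment (dB):", round(10 * np.log10(misalignment[-1]), 1))
    ```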

  16. Multicomponent physical exercise with simultaneous cognitive training to enhance dual-task walking of older adults: a secondary analysis of a 6-month randomized controlled trial with 1-year follow-up.

    PubMed

    Eggenberger, Patrick; Theill, Nathan; Holenstein, Stefan; Schumacher, Vera; de Bruin, Eling D

    2015-01-01

    About one-third of people older than 65 years fall at least once a year. Physical exercise has been previously demonstrated to improve gait, enhance physical fitness, and prevent falls. Nonetheless, the addition of cognitive training components may potentially increase these effects, since cognitive impairment is related to gait irregularities and fall risk. We hypothesized that simultaneous cognitive-physical training would lead to greater improvements in dual-task (DT) gait compared to exclusive physical training. Elderly persons older than 70 years and without cognitive impairment were randomly assigned to the following groups: 1) virtual reality video game dancing (DANCE), 2) treadmill walking with simultaneous verbal memory training (MEMORY), or 3) treadmill walking (PHYS). Each program was complemented with strength and balance exercises. Two 1-hour training sessions per week over 6 months were applied. Gait variables, functional fitness (Short Physical Performance Battery, 6-minute walk), and fall frequencies were assessed at baseline, after 3 months and 6 months, and at 1-year follow-up. Multiple regression analyses with planned comparisons were carried out. Eighty-nine participants were randomized to three groups initially; 71 completed the training and 47 were available at 1-year follow-up. DANCE/MEMORY showed a significant advantage compared to PHYS in DT costs of step time variability at fast walking (P=0.044). Training-specific gait adaptations were found on comparing DANCE and MEMORY: DANCE reduced step time at fast walking (P=0.007) and MEMORY reduced gait variability in DT and DT costs at preferred walking speed (both trend P=0.062). Global linear time effects showed improved gait (P<0.05), functional fitness (P<0.05), and reduced fall frequency (-77%, P<0.001). Only single-task fast walking, gait variability at preferred walking speed, and Short Physical Performance Battery were reduced at follow-up (all P<0.05 or trend). Long-term multicomponent cognitive-physical and exclusive physical training programs demonstrated similar potential to counteract age-related decline in physical functioning.

  17. Multicomponent physical exercise with simultaneous cognitive training to enhance dual-task walking of older adults: a secondary analysis of a 6-month randomized controlled trial with 1-year follow-up

    PubMed Central

    Eggenberger, Patrick; Theill, Nathan; Holenstein, Stefan; Schumacher, Vera; de Bruin, Eling D

    2015-01-01

    Background About one-third of people older than 65 years fall at least once a year. Physical exercise has been previously demonstrated to improve gait, enhance physical fitness, and prevent falls. Nonetheless, the addition of cognitive training components may potentially increase these effects, since cognitive impairment is related to gait irregularities and fall risk. We hypothesized that simultaneous cognitive–physical training would lead to greater improvements in dual-task (DT) gait compared to exclusive physical training. Methods Elderly persons older than 70 years and without cognitive impairment were randomly assigned to the following groups: 1) virtual reality video game dancing (DANCE), 2) treadmill walking with simultaneous verbal memory training (MEMORY), or 3) treadmill walking (PHYS). Each program was complemented with strength and balance exercises. Two 1-hour training sessions per week over 6 months were applied. Gait variables, functional fitness (Short Physical Performance Battery, 6-minute walk), and fall frequencies were assessed at baseline, after 3 months and 6 months, and at 1-year follow-up. Multiple regression analyses with planned comparisons were carried out. Results Eighty-nine participants were randomized to three groups initially; 71 completed the training and 47 were available at 1-year follow-up. DANCE/MEMORY showed a significant advantage compared to PHYS in DT costs of step time variability at fast walking (P=0.044). Training-specific gait adaptations were found on comparing DANCE and MEMORY: DANCE reduced step time at fast walking (P=0.007) and MEMORY reduced gait variability in DT and DT costs at preferred walking speed (both trend P=0.062). Global linear time effects showed improved gait (P<0.05), functional fitness (P<0.05), and reduced fall frequency (−77%, P<0.001). Only single-task fast walking, gait variability at preferred walking speed, and Short Physical Performance Battery were reduced at follow-up (all P<0.05 or trend). Conclusion Long-term multicomponent cognitive–physical and exclusive physical training programs demonstrated similar potential to counteract age-related decline in physical functioning. PMID:26604719

  18. The impact of land use change and hydroclimatic variability on landscape connectivity dynamics across surface water networks at subcontinental scale

    NASA Astrophysics Data System (ADS)

    Tulbure, M. G.; Bishop-Taylor, R.; Broich, M.

    2017-12-01

    Land use (LU) change and hydroclimatic variability affect spatiotemporal landscape connectivity dynamics, important for species movement and dispersal. Despite the fact that LU change can strongly influence dispersal potential over time, prior research has only focused on the impacts of dynamic changes in the distribution of potential habitats. We used 8 time-steps of historical LU together with a Landsat-derived time-series of surface water habitat dynamics (1986-2011) over the Murray-Darling Basin (MDB), a region with extreme hydroclimatic variability, impacted by LU changes. To assess how changing LU and hydroclimatic variability affect landscape connectivity across time, we compared 4 scenarios, namely one where both climate and LU are dynamic over time, one where climate is kept steady (i.e. a median surface water extent layer), and two scenarios where LU is kept steady (i.e. resistance values associated with the most recent or the first LU layer). We used circuit theory to assign `resistance' costs to landscape features, and graph-theory network analysis with surface water habitats as `nodes' connected by dispersal paths or `edges'. Findings comparing a dry and an average season show large differences in the number of nodes (14,581 vs 21,544) and resistance distances. The combined effect of LU change and landscape wetness was lower than expected, likely a function of the large, MDB-wide, aggregation scale. Spatially explicit analyses are expected to identify areas where the synergistic effect of LU change and landscape wetness greatly reduces or increases landscape connectivity, as well as areas where the two effects cancel each other out.

  19. Long-wave Irradiance Measurement and Modeling during Snowmelt, a Case Study in the Yukon Territory, Canada

    NASA Astrophysics Data System (ADS)

    Sicart, J.; Essery, R.; Pomeroy, J.

    2004-12-01

    At high latitudes, long-wave radiation emitted by the atmosphere and solar radiation can provide similar amounts of energy for snowmelt due to the low solar elevation and the high albedo of snow. This paper investigates temporal and spatial variations of long-wave irradiance at the snow surface in an open sub-Arctic environment. Measurements were conducted in the Wolf Creek Research Basin, Yukon Territory, Canada (60°36'N, 134°57'W) during the springs of 2002, 2003 and 2004. The main causes of temporal variability are air temperature and cloud cover, especially in the beginning of the melting period when the atmosphere is still cold. Spatial variability was investigated through a sensitivity study to sky view factors and to temperatures of surrounding terrain. The formula of Brutsaert gives a useful estimation of the clear-sky irradiance at hourly time steps. Emission by clouds was parameterized at the daily time scale from the atmospheric attenuation of solar radiation. The inclusion of air temperature variability does not much improve the calculation of cloud emission.
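
    For reference, Brutsaert's clear-sky formula mentioned above is commonly written as an atmospheric emissivity eps = 1.24 (e_a / T_a)^(1/7), with vapour pressure e_a in hPa and air temperature T_a in kelvin, combined with the Stefan-Boltzmann law. The sketch below applies it to illustrative inputs; the cloud-emission parameterisation from solar attenuation used in the paper is not reproduced.

    ```python
    # Clear-sky downward long-wave irradiance from Brutsaert's (1975) formula.
    SIGMA = 5.670374419e-8          # Stefan-Boltzmann constant, W m-2 K-4

    def brutsaert_clear_sky(t_air_k, vapour_pressure_hpa):
        """Clear-sky downward long-wave irradiance (W m-2)."""
        emissivity = 1.24 * (vapour_pressure_hpa / t_air_k) ** (1.0 / 7.0)
        return emissivity * SIGMA * t_air_k ** 4

    # Example: a cold spring morning vs a mild afternoon at the snow surface.
    for t_c, e_hpa in [(-5.0, 3.0), (8.0, 7.0)]:
        lw = brutsaert_clear_sky(t_c + 273.15, e_hpa)
        print(f"T = {t_c:5.1f} C, e_a = {e_hpa:.1f} hPa -> L_down ~ {lw:6.1f} W m-2")
    ```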

  20. Shallow-water sloshing in a moving vessel with variable cross-section and wetting-drying using an extension of George's well-balanced finite volume solver

    NASA Astrophysics Data System (ADS)

    Alemi Ardakani, Hamid; Bridges, Thomas J.; Turner, Matthew R.

    2016-06-01

    A class of augmented approximate Riemann solvers due to George (2008) [12] is extended to solve the shallow-water equations in a moving vessel with variable bottom topography and variable cross-section with wetting and drying. A class of Roe-type upwind solvers for the system of balance laws is derived which respects the steady-state solutions. The numerical solutions of the new adapted augmented f-wave solvers are validated against the Roe-type solvers. The theory is extended to solve the shallow-water flows in moving vessels with arbitrary cross-section with influx-efflux boundary conditions motivated by the shallow-water sloshing in the ocean wave energy converter (WEC) proposed by Offshore Wave Energy Ltd. (OWEL) [1]. A fractional step approach is used to handle the time-dependent forcing functions. The numerical solutions are compared to an extended new Roe-type solver for the system of balance laws with a time-dependent source function. The shallow-water sloshing finite volume solver can be coupled to a Runge-Kutta integrator for the vessel motion.
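
    The fractional-step idea for the time-dependent forcing can be illustrated with a much simpler scheme than the paper's: each time step first advances the homogeneous 1-D shallow-water equations with a local Lax-Friedrichs finite-volume update, then integrates the vessel-forcing source term. Flat bottom, uniform cross-section, reflective walls, and a made-up sway acceleration are assumed; none of the f-wave machinery, wetting/drying or variable geometry is included.

    ```python
    # Fractional-step (splitting) update for forced 1-D shallow-water sloshing.
    import numpy as np

    g = 9.81
    nx, length = 200, 1.0
    dx = length / nx

    h = np.full(nx, 0.1)          # still water depth (m)
    hu = np.zeros(nx)             # momentum

    def forcing(t):
        """Horizontal acceleration of the vessel (m/s^2), a simple sway motion."""
        return 0.2 * np.sin(2.0 * np.pi * t / 3.0)

    def llf_flux(hL, huL, hR, huR):
        uL, uR = huL / hL, huR / hR
        fL = np.array([huL, huL * uL + 0.5 * g * hL**2])
        fR = np.array([huR, huR * uR + 0.5 * g * hR**2])
        smax = max(abs(uL) + np.sqrt(g * hL), abs(uR) + np.sqrt(g * hR))
        return 0.5 * (fL + fR) - 0.5 * smax * np.array([hR - hL, huR - huL])

    t, t_end = 0.0, 2.0
    while t < t_end:
        c = np.abs(hu / h) + np.sqrt(g * h)
        dt = 0.4 * dx / c.max()                   # CFL-limited time step

        # Reflective walls: ghost cells mirror depth and negate momentum.
        hg = np.concatenate(([h[0]], h, [h[-1]]))
        hug = np.concatenate(([-hu[0]], hu, [-hu[-1]]))

        # Step 1: homogeneous hyperbolic update with LLF interface fluxes.
        flux = np.array([llf_flux(hg[i], hug[i], hg[i + 1], hug[i + 1]) for i in range(nx + 1)])
        h -= dt / dx * (flux[1:, 0] - flux[:-1, 0])
        hu -= dt / dx * (flux[1:, 1] - flux[:-1, 1])

        # Step 2: time-dependent source term (vessel acceleration acting on momentum).
        hu -= dt * h * forcing(t)
        t += dt

    print("free-surface range after 2 s:", round(float(h.max() - h.min()), 4))
    ```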

  1. Interlaboratory comparison of real-time pcr protocols for quantification of general fecal indicator bacteria

    USGS Publications Warehouse

    Shanks, O.C.; Sivaganesan, M.; Peed, L.; Kelty, C.A.; Blackwood, A.D.; Greene, M.R.; Noble, R.T.; Bushon, R.N.; Stelzer, E.A.; Kinzelman, J.; Anan'Eva, T.; Sinigalliano, C.; Wanless, D.; Griffith, J.; Cao, Y.; Weisberg, S.; Harwood, V.J.; Staley, C.; Oshima, K.H.; Varma, M.; Haugland, R.A.

    2012-01-01

    The application of quantitative real-time PCR (qPCR) technologies for the rapid identification of fecal bacteria in environmental waters is being considered for use as a national water quality metric in the United States. The transition from research tool to a standardized protocol requires information on the reproducibility and sources of variation associated with qPCR methodology across laboratories. This study examines interlaboratory variability in the measurement of enterococci and Bacteroidales concentrations from standardized, spiked, and environmental sources of DNA using the Entero1a and GenBac3 qPCR methods, respectively. Comparisons are based on data generated from eight different research facilities. Special attention was placed on the influence of the DNA isolation step and effect of simplex and multiplex amplification approaches on interlaboratory variability. Results suggest that a crude lysate is sufficient for DNA isolation unless environmental samples contain substances that can inhibit qPCR amplification. No appreciable difference was observed between simplex and multiplex amplification approaches. Overall, interlaboratory variability levels remained low (<10% coefficient of variation) regardless of qPCR protocol. © 2011 American Chemical Society.

  2. Perceptual-motor regulation in locomotor pointing while approaching a curb.

    PubMed

    Andel, Steven van; Cole, Michael H; Pepping, Gert-Jan

    2018-02-01

    Locomotor pointing is a task that has been the focus of research in the context of sport (e.g. long jumping and cricket) as well as normal walking. Collectively, these studies have produced a broad understanding of locomotor pointing, but generalizability has been limited to laboratory-type tasks and/or tasks with high spatial demands. The current study aimed to generalize previous findings in locomotor pointing to the common daily task of approaching and stepping onto a curb. Sixteen people completed 33 repetitions of a task that required them to walk up to and step onto a curb. Information about their foot placement was collected using a combination of measures derived from a pressure-sensitive walkway and video data. Variables related to perceptual-motor regulation were analyzed on an inter-trial, intra-step and inter-step level. Similar to previous studies, analysis of the foot placements showed that variability in foot placement decreased as the participants drew closer to the curb. Regulation seemed to be initiated earlier in this study compared to previous studies, as shown by a decreasing variability in foot placement as early as eight steps before reaching the curb. Furthermore, it was shown that when walking up to the curb, most people regulated their walk so as to achieve minimal variability in the foot placement on top of the curb, rather than in the foot placement in front of the curb. Combined, these results showed a strong perceptual-motor coupling in the task of approaching and stepping up a curb, rendering this task a suitable test for perceptual-motor regulation in walking. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Assimilating AmeriFlux Site Data into the Community Land Model with Carbon-Nitrogen Coupling via the Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Pettijohn, J. C.; Law, B. E.; Williams, M. D.; Stoeckli, R.; Thornton, P. E.; Hudiburg, T. M.; Thomas, C. K.; Martin, J.; Hill, T. C.

    2009-12-01

    The assimilation of terrestrial carbon, water and nutrient cycle measurements into land surface models of these processes is fundamental to improving our ability to predict how these ecosystems may respond to climate change. A combination of measurements and models, each with their own systematic biases, must be considered when constraining the nonlinear behavior of these coupled dynamics. As such, we use the sequential Ensemble Kalman Filter (EnKF) to assimilate eddy covariance (EC) and other site-level AmeriFlux measurements into the NCAR Community Land Model with Carbon-Nitrogen coupling (CLM-CN v3.5), run in single-column mode at a 30-minute time step, to improve estimates of relatively unconstrained model state variables and parameters. Specifically, we focus on a semi-arid ponderosa pine site (US-ME2) in the Pacific Northwest to identify the mechanisms by which this ecosystem responds to severe late summer drought. Our EnKF analysis includes water, carbon, energy and nitrogen state variables (e.g., 10 volumetric soil moisture levels (0-3.43 m), ponderosa pine and shrub evapotranspiration and net ecosystem exchange of carbon dioxide stocks and flux components, snow depth, etc.) and associated parameters (e.g., PFT-level rooting distribution parameters, maximum subsurface runoff coefficient, soil hydraulic conductivity decay factor, snow aging parameters, maximum canopy conductance, C:N ratios, etc.). The effectiveness of the EnKF in constraining state variables and associated parameters is sensitive to their relative frequencies, in that C-N state variables and parameters with long time constants require similarly long time series in the analysis. We apply the EnKF kernel perturbation routine to disrupt premature convergence of covariances, which has been found in recent studies to be a problem more characteristic of low-frequency vegetation state variables and parameters than high-frequency ones more heavily coupled with highly varying climate (e.g., shallow soil moisture, snow depth). Preliminary results demonstrate that the assimilation of EC and other available AmeriFlux site physical, chemical and biological data significantly helps quantify and reduce CLM-CN model uncertainties and helps to constrain ‘hidden’ states and parameters that are essential in the coupled water, carbon, energy and nutrient dynamics of these sites. Such site-level calibration of CLM-CN is an initial step in identifying model deficiencies and in forecasts of future ecosystem responses to climate change.
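
    The joint state-parameter update at the heart of such an analysis can be sketched with a generic stochastic (perturbed-observation) EnKF step on a toy augmented state of [soil moisture, rooting parameter]. A linear proxy forecast creates the state-parameter covariance that the filter exploits; nothing below is specific to CLM-CN or the AmeriFlux observations used in the study, and all numbers are illustrative.

    ```python
    # Generic stochastic EnKF analysis step for a toy augmented [state; parameter] vector.
    import numpy as np

    rng = np.random.default_rng(7)

    n_ens = 50
    # Forecast ensemble: the (toy) forecast makes soil moisture depend on the
    # rooting parameter, creating the cross-covariance used in the update.
    param = rng.normal(2.0, 0.5, n_ens)                        # rooting-distribution parameter
    soil_m = 0.32 - 0.04 * param + rng.normal(0, 0.02, n_ens)  # volumetric soil moisture
    ensemble = np.vstack([soil_m, param])                      # augmented state [observed; parameter]

    H = np.array([[1.0, 0.0]])     # we observe soil moisture only
    obs = 0.18
    obs_var = 0.02 ** 2

    def enkf_analysis(ens, H, y, r_var):
        """Stochastic (perturbed-observation) EnKF analysis step."""
        anomalies = ens - ens.mean(axis=1, keepdims=True)
        P = anomalies @ anomalies.T / (ens.shape[1] - 1)       # sample covariance
        S = H @ P @ H.T + r_var                                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                          # Kalman gain
        y_pert = y + rng.normal(0.0, np.sqrt(r_var), ens.shape[1])
        return ens + K @ (y_pert - H @ ens)

    analysis = enkf_analysis(ensemble, H, obs, obs_var)
    print("forecast mean [soil moisture, parameter]:", ensemble.mean(axis=1).round(3))
    print("analysis mean [soil moisture, parameter]:", analysis.mean(axis=1).round(3))
    ```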

  4. Construction of Optimally Reduced Empirical Model by Spatially Distributed Climate Data

    NASA Astrophysics Data System (ADS)

    Gavrilov, A.; Mukhin, D.; Loskutov, E.; Feigin, A.

    2016-12-01

    We present an approach to empirical reconstruction of the evolution operator in stochastic form from space-distributed time series. The main problem in empirical modeling consists in choosing appropriate phase variables that efficiently reduce the dimension of the model with minimal loss of information about the system's dynamics, which leads to a more robust model and a better quality of reconstruction. For this purpose, we incorporate two key steps in the model. The first step is a standard preliminary reduction of the dimension of the observed time series by decomposition via a certain empirical basis (e.g., an empirical orthogonal function basis or its nonlinear or spatio-temporal generalizations). The second step is the construction of an evolution operator from the principal components (PCs), i.e., the time series obtained by the decomposition. In this step, we introduce a new way of reducing the dimension of the embedding in which the evolution operator is constructed. It is based on choosing proper combinations of delayed PCs to take into account the most significant spatio-temporal couplings. The evolution operator is sought as a nonlinear random mapping parameterized by artificial neural networks (ANNs). A Bayesian approach is used to learn the model and to find the optimal hyperparameters: the number of PCs, the dimension of the embedding, and the degree of nonlinearity of the ANN. Results of applying the method to climate data (sea surface temperature, sea level pressure), together with a comparison against the same method based on a non-reduced embedding, are presented. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS).

  5. Molecular Dynamics of Flexible Polar Cations in a Variable Confined Space: Toward Exceptional Two-Step Nonlinear Optical Switches.

    PubMed

    Xu, Wei-Jian; He, Chun-Ting; Ji, Cheng-Min; Chen, Shao-Li; Huang, Rui-Kang; Lin, Rui-Biao; Xue, Wei; Luo, Jun-Hua; Zhang, Wei-Xiong; Chen, Xiao-Ming

    2016-07-01

    The changeable molecular dynamics of flexible polar cations in the variable confined space between inorganic chains brings about a new type of two-step nonlinear optical (NLO) switch with genuine "off-on-off" second harmonic generation (SHG) conversion between one NLO-active state and two NLO-inactive states. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Multilevel resistive information storage and retrieval

    DOEpatents

    Lohn, Andrew; Mickel, Patrick R.

    2016-08-09

    The present invention relates to resistive random-access memory (RRAM or ReRAM) systems, as well as methods of employing multiple state variables to form degenerate states in such memory systems. The methods herein allow for precise write and read steps to form multiple state variables, and these steps can be performed electrically. Such an approach allows for multilevel, high density memory systems with enhanced information storage capacity and simplified information retrieval.

  7. Effect of Microstructure on Time Dependent Fatigue Crack Growth Behavior In a P/M Turbine Disk Alloy

    NASA Technical Reports Server (NTRS)

    Telesman, Ignacy J.; Gabb, T. P.; Bonacuse, P.; Gayda, J.

    2008-01-01

    A study was conducted to determine the processes which govern hold time crack growth behavior in the LSHR disk P/M superalloy. Nineteen different heat treatments of this alloy were evaluated by systematically controlling the cooling rate from the supersolvus solutioning step and applying various single and double step aging treatments. The resulting hold time crack growth rates varied by more than two orders of magnitude. It was shown that the associated stress relaxation behavior for these heat treatments was closely correlated with the crack growth behavior. As stress relaxation increased, the hold time crack growth resistance was also increased. The size of the tertiary gamma' in the general microstructure was found to be the key microstructural variable controlling both the hold time crack growth behavior and stress relaxation. No relationship between the presence of grain boundary M23C6 carbides and hold time crack growth was identified which further brings into question the importance of the grain boundary phases in determining hold time crack growth behavior. The linear elastic fracture mechanics parameter, Kmax, is unable to account for visco-plastic redistribution of the crack tip stress field during hold times and thus is inadequate for correlating time dependent crack growth data. A novel methodology was developed which captures the intrinsic crack driving force and was able to collapse hold time crack growth data onto a single curve.

  8. Quantifying in-stream nitrate reaction rates using continuously-collected water quality data

    Treesearch

    Matthew Miller; Anthony Tesoriero; Paul Capel

    2016-01-01

    High frequency in situ nitrate data from three streams of varying hydrologic condition, land use, and watershed size were used to quantify the mass loading of nitrate to streams from two sources – groundwater discharge and event flow – at a daily time step for one year. These estimated loadings were used to quantify temporally-variable in-stream nitrate processing ...

  9. Human-in-the-loop Bayesian optimization of wearable device parameters

    PubMed Central

    Malcolm, Philippe; Speeckaert, Jozefien; Siviy, Christoper J.; Walsh, Conor J.; Kuindersma, Scott

    2017-01-01

    The increasing capabilities of exoskeletons and powered prosthetics for walking assistance have paved the way for more sophisticated and individualized control strategies. In response to this opportunity, recent work on human-in-the-loop optimization has considered the problem of automatically tuning control parameters based on realtime physiological measurements. However, the common use of metabolic cost as a performance metric creates significant experimental challenges due to its long measurement times and low signal-to-noise ratio. We evaluate the use of Bayesian optimization—a family of sample-efficient, noise-tolerant, and global optimization methods—for quickly identifying near-optimal control parameters. To manage experimental complexity and provide comparisons against related work, we consider the task of minimizing metabolic cost by optimizing walking step frequencies in unaided human subjects. Compared to an existing approach based on gradient descent, Bayesian optimization identified a near-optimal step frequency with a faster time to convergence (12 minutes, p < 0.01), smaller inter-subject variability in convergence time (± 2 minutes, p < 0.01), and lower overall energy expenditure (p < 0.01). PMID:28926613
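
    A toy version of the human-in-the-loop loop described above is sketched below: a Gaussian process is fitted to noisy "metabolic cost" observations at previously tested step frequencies, and the next frequency is chosen by expected improvement. The quadratic cost curve, noise level, kernel, and number of iterations are synthetic assumptions standing in for real respirometry measurements and the authors' exact setup.

    ```python
    # Toy Bayesian-optimization loop over walking step frequency (minimization).
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(8)

    def measure_cost(freq_spm):
        """Pretend metabolic cost (W/kg) with measurement noise; optimum near 112 spm."""
        return 3.0 + 0.002 * (freq_spm - 112.0) ** 2 + rng.normal(0, 0.05)

    grid = np.linspace(90, 130, 200).reshape(-1, 1)      # candidate step frequencies
    X = list(rng.uniform(95, 125, 3))                    # three initial trials
    y = [measure_cost(f) for f in X]

    kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.05**2)
    for it in range(10):
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(
            np.array(X).reshape(-1, 1), np.array(y))
        mu, sd = gp.predict(grid, return_std=True)
        best = min(y)
        # Expected improvement (minimization form).
        z = (best - mu) / np.maximum(sd, 1e-9)
        ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)
        nxt = float(grid[np.argmax(ei), 0])
        X.append(nxt)
        y.append(measure_cost(nxt))

    print("best step frequency found:", round(X[int(np.argmin(y))], 1), "spm")
    ```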

  10. Antigen Masking During Fixation and Embedding, Dissected

    PubMed Central

    Scalia, Carla Rossana; Boi, Giovanna; Bolognesi, Maddalena Maria; Riva, Lorella; Manzoni, Marco; DeSmedt, Linde; Bosisio, Francesca Maria; Ronchi, Susanna; Leone, Biagio Eugenio; Cattoretti, Giorgio

    2016-01-01

    Antigen masking in routinely processed tissue is a poorly understood process caused by multiple factors. We sought to dissect the effect on antigenicity of each step of processing by using frozen sections as proxies of the whole tissue. An equivalent extent of antigen masking occurs across variable fixation times at room temperature. Most antigens benefit from longer fixation times (>24 hr) for optimal detection after antigen retrieval (AR; for example, Ki-67, bcl-2, ER). The transfer to a graded alcohol series results in an enhanced staining effect, reproduced by treating the sections with detergents, possibly because of a better access of the polymeric immunohistochemical detection system to tissue structures. A second round of masking occurs upon entering the clearing agent, mostly at the paraffin embedding step. This may depend on the non-freezable water removal. AR fully reverses the masking due both to the fixation time and the paraffin embedding. AR itself destroys some epitopes which do not survive routine processing. Processed frozen sections are a tool to investigate fixation and processing requirements for antigens in routine specimens. PMID:27798289

  11. Variability in Costs across Hospital Wards. A Study of Chinese Hospitals

    PubMed Central

    Adam, Taghreed; Evans, David B.; Ying, Bian; Murray, Christopher J. L.

    2014-01-01

    Introduction Analysts estimating the costs or cost-effectiveness of health interventions requiring hospitalization often cut corners because they lack data and the costs of undertaking full step-down costing studies are high. They sometimes use the costs taken from a single hospital, sometimes use simple rules of thumb for allocating total hospital costs between general inpatient care and the outpatient department, and sometimes use the average cost of an inpatient bed-day instead of a ward-specific cost. Purpose In this paper we explore for the first time the extent and the causes of variation in ward-specific costs across hospitals, using data from China. We then use the resulting model to show how ward-specific costs for hospitals outside the data set could be estimated using information on the determinants identified in the paper. Methodology Ward-specific costs estimated using step-down costing methods from 41 hospitals in 12 provinces of China were used. We used seemingly unrelated regressions to identify the determinants of variability in the ratio of the costs of specific wards to that of the outpatient department, and explain how this can be used to generate ward-specific unit costs. Findings Ward-specific unit costs varied considerably across hospitals, ranging from 1 to 24 times the unit cost in the outpatient department — average unit costs are not a good proxy for costs at specialty wards in general. The most important sources of variability were the number of staff and the level of capacity utilization. Practice Implications More careful hospital costing studies are clearly needed. In the meantime, we have shown that in China it is possible to estimate ward-specific unit costs taking into account key determinants of variability in costs across wards. This might well be a better alternative than using simple rules of thumb or using estimates from a single study. PMID:24874566

  12. A multi-step pathway connecting short sleep duration to daytime somnolence, reduced attention, and poor academic performance: an exploratory cross-sectional study in teenagers.

    PubMed

    Perez-Lloret, Santiago; Videla, Alejandro J; Richaudeau, Alba; Vigo, Daniel; Rossi, Malco; Cardinali, Daniel P; Perez-Chada, Daniel

    2013-05-15

    A multi-step causal pathway can be envisaged connecting short sleep duration to daytime somnolence and sleepiness, leading to reduced attention and, ultimately, poor academic performance. However, this hypothesis has never been explored. To explore consecutive correlations between sleep duration, daytime somnolence, attention levels, and academic performance in a sample of school-aged teenagers. We carried out a survey assessing sleep duration and daytime somnolence using the Pediatric Daytime Sleepiness Scale (PDSS). Sleep duration variables included weekdays' total sleep time, usual bedtimes, and the absolute weekday-to-weekend sleep time difference. Attention was assessed by the d2 test and by the coding subtest of the WISC-IV scale. Academic performance was obtained from literature and math grades. Structural equation modeling was used to assess the independent relationships between these variables, while controlling for confounding effects of other variables, in a single model. Standardized regression weights (SWR) for relationships between these variables are reported. The study sample included 1,194 teenagers (mean age: 15 years; range: 13-17 y). Sleep duration was inversely associated with daytime somnolence (SWR = -0.36, p < 0.01) while sleepiness was negatively associated with attention (SWR = -0.13, p < 0.01). Attention scores correlated positively with academic results (SWR = 0.18, p < 0.01). Daytime somnolence correlated negatively with academic achievements (SWR = -0.16, p < 0.01). The model offered an acceptable fit according to usual measures (RMSEA = 0.0548, CFI = 0.874, NFI = 0.838). A Sobel test confirmed that short sleep duration influenced attention through daytime somnolence (p < 0.02), which in turn influenced academic achievements through reduced attention (p < 0.002). Poor academic achievements correlated with reduced attention, which in turn was related to daytime somnolence. Somnolence correlated with short sleep duration.
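
    One link of this multi-step pathway can be illustrated as a simple mediation analysis with a hand-computed Sobel test (sleep duration -> somnolence -> attention). The paper fits a full structural equation model; the reduced example below uses two OLS regressions on synthetic data purely to show how the Sobel statistic is formed.

    ```python
    # Two-regression mediation sketch with a Sobel test for the indirect effect.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy.stats import norm

    rng = np.random.default_rng(9)
    n = 1200

    sleep = rng.normal(7.0, 1.0, n)                               # hours per night
    somnolence = 20 - 1.5 * sleep + rng.normal(0, 2.0, n)         # PDSS-like score
    attention = 100 - 0.8 * somnolence + rng.normal(0, 5.0, n)    # d2-like score
    df = pd.DataFrame({"sleep": sleep, "somnolence": somnolence, "attention": attention})

    # Path a: sleep -> somnolence;  path b: somnolence -> attention (controlling for sleep).
    fit_a = smf.ols("somnolence ~ sleep", df).fit()
    fit_b = smf.ols("attention ~ somnolence + sleep", df).fit()

    a, sa = fit_a.params["sleep"], fit_a.bse["sleep"]
    b, sb = fit_b.params["somnolence"], fit_b.bse["somnolence"]

    sobel_z = (a * b) / np.sqrt(b**2 * sa**2 + a**2 * sb**2)
    p = 2 * (1 - norm.cdf(abs(sobel_z)))
    print(f"indirect effect a*b = {a*b:.2f}, Sobel z = {sobel_z:.2f}, p = {p:.4f}")
    ```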

  13. A linear stepping endovascular intervention robot with variable stiffness and force sensing.

    PubMed

    He, Chengbin; Wang, Shuxin; Zuo, Siyang

    2018-05-01

    Robotic-assisted endovascular intervention surgery has attracted significant attention and interest in recent years. However, limited designs have focused on the variable stiffness mechanism of the catheter shaft. A flexible catheter needs to be partially switched to a rigid state that can hold its shape against external forces to achieve a stable and effective insertion procedure. Furthermore, driving the catheter in a way similar to manual procedures has the potential to make full use of the extensive experience gained from conventional catheter navigation. Besides the driving method, force sensing is another significant factor for endovascular intervention. This paper presents a variable-stiffness catheterization system that can provide a stable and accurate endovascular intervention procedure with a linear stepping mechanism whose operation mode is similar to conventional catheter navigation. A specially designed shape-memory polymer tube with a water-cooling structure is used to achieve variable stiffness of the catheter. In addition, four FBG sensors are attached to the catheter tip to monitor the tip contact force with temperature compensation. Experimental results show that the actuation unit is able to deliver linear and rotational motions. We have shown the feasibility of FBG force sensing to reduce the effect of temperature and detect the tip contact force. The designed catheter can change its stiffness locally, and its stiffness is remarkably increased in the rigid state, in which it can hold its shape against a [Formula: see text] load. The prototype has also been validated with a vascular phantom, demonstrating the potential clinical value of the system. The proposed system provides important insights into the design of a compact robotic-assisted catheter incorporating an effective variable stiffness mechanism and real-time force sensing for intraoperative endovascular intervention.

  14. Using Equation-Free Computation to Accelerate Network-Free Stochastic Simulation of Chemical Kinetics.

    PubMed

    Lin, Yen Ting; Chylek, Lily A; Lemons, Nathan W; Hlavacek, William S

    2018-06-21

    The chemical kinetics of many complex systems can be concisely represented by reaction rules, which can be used to generate reaction events via a kinetic Monte Carlo method that has been termed network-free simulation. Here, we demonstrate accelerated network-free simulation through a novel approach to equation-free computation. In this process, variables are introduced that approximately capture system state. Derivatives of these variables are estimated using short bursts of exact stochastic simulation and finite differencing. The variables are then projected forward in time via a numerical integration scheme, after which a new exact stochastic simulation is initialized and the whole process repeats. The projection step increases efficiency by bypassing the firing of numerous individual reaction events. As we show, the projected variables may be defined as populations of building blocks of chemical species. The maximal number of connected molecules included in these building blocks determines the degree of approximation. Equation-free acceleration of network-free simulation is found to be both accurate and efficient.
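    The coarse projective-integration loop described above can be sketched as follows; the stochastic simulator (`burst`), the coarse-variable map (`coarse_of`), the lifting step (`restrict_lift`), and the step sizes are user-supplied placeholders standing in for the authors' implementation, not reproductions of it:

    ```python
    import numpy as np

    def projective_step(state, coarse_of, burst, restrict_lift,
                        dt_burst, n_burst, dt_project):
        """One equation-free step: short exact-simulation burst, finite-difference
        estimate of the coarse time derivative, then forward projection."""
        # 1. Short burst of exact stochastic simulation to estimate derivatives.
        traj = [coarse_of(state)]
        for _ in range(n_burst):
            state = burst(state, dt_burst)        # e.g., network-free Gillespie steps
            traj.append(coarse_of(state))
        traj = np.asarray(traj)

        # 2. Finite-difference slope of the coarse variables over the burst.
        t = np.arange(len(traj)) * dt_burst
        slope = np.polyfit(t, traj, 1)[0]

        # 3. Project the coarse variables forward, skipping many reaction events.
        coarse_new = traj[-1] + dt_project * slope

        # 4. Lift back to a consistent fine-scale state and continue.
        return restrict_lift(state, coarse_new)
    ```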

  15. Multicentennial record of Labrador Sea primary productivity and sea-ice variability archived in coralline algal barium

    PubMed Central

    Chan, P.; Halfar, J.; Adey, W.; Hetzinger, S.; Zack, T.; Moore, G.W.K.; Wortmann, U. G.; Williams, B.; Hou, A.

    2017-01-01

    Accelerated warming and melting of Arctic sea ice have been associated with significant increases in phytoplankton productivity in recent years. Here, utilizing a multiproxy approach, we reconstruct an annually resolved record of Labrador Sea productivity related to sea-ice variability in Labrador, Canada, that extends well into the Little Ice Age (LIA; 1646 AD). Barium-to-calcium ratios (Ba/Ca) and carbon isotopes (δ13C) measured in long-lived coralline algae demonstrate significant correlations with both observational and proxy records of sea-ice variability, and show persistent patterns of co-variability broadly consistent with the timing and phasing of the Atlantic Multidecadal Oscillation (AMO). Results indicate reduced productivity in the Subarctic Northwest Atlantic associated with AMO cool phases during the LIA, followed by a step-wise increase from 1910 to present levels that is unprecedented in the last 363 years. Increasing phytoplankton productivity is expected to fundamentally alter marine ecosystems as warming and freshening are projected to intensify over the coming century. PMID:28569839

  16. Variables influencing food perception reviewed for consumer-oriented product development.

    PubMed

    Sijtsema, Siet; Linnemann, Anita; van Gaasbeek, Ton; Dagevos, Hans; Jongen, Wim

    2002-01-01

    Consumer wishes have to be translated into product characteristics to implement consumer-oriented product development. Before this step can be made, insight into the food-related behavior and perception of consumers is necessary to make the right, useful, and successful translation. Food choice behavior and consumers' perception are studied in many disciplines. Models of food behavior and preferences were therefore studied from a multidisciplinary perspective. Nearly all models structure the determinants related to the person, the food, and the environment. Consequently, the overview of models was used as a basis to structure the variables influencing food perception into a model for consumer-oriented product development. To this new model, referred to as the food perception model, other variables such as time and place, as part of the consumption moment, were added. These are important variables influencing consumers' perception and are therefore of increasing importance to consumer-oriented product development. In further research, the presented food perception model is used as a tool to implement successful consumer-oriented product development.

  17. Using the Quantile Mapping to improve a weather generator

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Themessl, M.; Gobiet, A.

    2012-04-01

    We developed a weather generator (WG) using statistical and stochastic methods, among them quantile mapping (QM), Monte Carlo sampling, auto-regression, and empirical orthogonal functions (EOFs). One of the important steps in the WG is the use of QM, through which all variables, whatever their original distributions, are transformed into normally distributed variables. The WG can therefore operate on normally distributed variables, which greatly facilitates the treatment of random numbers within it. Monte Carlo sampling and auto-regression are used to generate the realizations; EOFs are employed to preserve spatial relationships and the relationships between different meteorological variables. We have established a complete model named WGQM (weather generator and quantile mapping), which can be applied flexibly to generate daily or hourly time series. For example, with 30-year daily (hourly) data and 100-year monthly (daily) data as input, reasonably realistic 100-year daily (hourly) data can be produced. Evaluation experiments with WGQM have been carried out for the area of Austria, and the evaluation results will be presented.
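    A minimal sketch of the quantile-mapping step that sends an arbitrarily distributed variable to a standard normal one via its empirical CDF, and back again; this is a generic illustration of the transform, not the WGQM code (the gamma-distributed toy variable is an assumption):

    ```python
    import numpy as np
    from scipy.stats import norm

    def to_normal(x):
        """Map data to a standard normal variable through its empirical quantiles."""
        ranks = np.argsort(np.argsort(x)) + 1          # 1..n ranks
        probs = (ranks - 0.5) / len(x)                 # plotting positions in (0, 1)
        return norm.ppf(probs)

    def from_normal(z, x_ref):
        """Inverse mapping: send standard-normal values back to the reference
        distribution by interpolating its empirical quantiles."""
        probs = norm.cdf(z)
        return np.quantile(x_ref, probs)

    rng = np.random.default_rng(0)
    precip = rng.gamma(shape=0.8, scale=3.0, size=1000)   # skewed toy variable
    z = to_normal(precip)                                  # work in normal space
    precip_back = from_normal(z, precip)                   # back-transform
    ```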

  18. Accelerometric assessment of different dimensions of natural walking during the first year after stroke: Recovery of amount, distribution, quality and speed of walking.

    PubMed

    Sánchez, Marina Castel; Bussmann, Johannes; Janssen, Wim; Horemans, Herwin; Chastin, Sebastian; Heijenbrok, Majanka; Stam, Henk

    2015-09-01

    To describe the course of walking behaviour over a period of 1 year after stroke, using accelerometry, and to compare 1-year data with those from a healthy group. One-year follow-up cohort study. Twenty-three stroke patients and 20 age-matched healthy subjects. Accelerometer assessments were made in the participants' daily environment for 8 h/day during the 1st (T1), 12th (T2) and 48th (T3) weeks after stroke, and at one time-point in healthy subjects. Primary outcomes were: percentage of time walking and upright (amount); mean duration and number of walking periods (distribution); step regularity and gait symmetry (quality); and walking speed. Time walking, time upright, and the number of walking bouts increased from T1 to T2 (p < 0.01) and then levelled off (p > 0.30). The mean duration of walking periods showed no significant improvement (p > 0.30) during any phase. Step regularity, gait symmetry and gait speed tended to increase consistently from T1 to T3. At T3, the amount and distribution variables reached the level of the healthy group, but significant differences remained (p < 0.02) in step regularity and gait speed. In this cohort, different outcomes of walking behaviour showed different patterns and levels of recovery, which supports the multi-dimensional character of gait.

  19. Objective Assessment of Fall Risk in Parkinson's Disease Using a Body-Fixed Sensor Worn for 3 Days

    PubMed Central

    Weiss, Aner; Herman, Talia; Giladi, Nir; Hausdorff, Jeffrey M.

    2014-01-01

    Background Patients with Parkinson's disease (PD) have a high fall risk. Previous approaches to evaluating fall risk are based on self-report or on testing at a given time point and may, therefore, be insufficient to optimally capture fall risk. We tested, for the first time, whether metrics derived from 3-day continuous recordings are associated with fall risk in PD. Methods and Materials 107 patients (Hoehn & Yahr stage: 2.6±0.7) wore a small, body-fixed sensor (3D accelerometer) on the lower back for 3 days. Walking quantity (e.g., steps per 3 days) and quality (e.g., frequency-derived measures of gait variability) were determined. Subjects were classified as fallers or non-fallers based on fall history. Subjects were also followed for one year to evaluate predictors of the transition from non-faller to faller. Results The 3-day acceleration-derived measures were significantly different between fallers and non-fallers and were significantly correlated with previously validated measures of fall risk. Walking quantity was similar in the two groups. In contrast, the fallers walked with higher step-to-step variability; e.g., the anterior-posterior width of the dominant frequency was larger (p = 0.012) in the fallers (0.78±0.17 Hz) compared to the non-fallers (0.71±0.07 Hz). Among subjects who reported no falls in the year prior to testing, sensor-derived measures predicted the time to first fall (p = 0.0034), whereas many traditional measures did not. Cox regression analysis showed that the anterior-posterior width was significantly (p = 0.0039) associated with time to fall during the follow-up period, even after adjusting for traditional measures. Conclusions/Significance These findings indicate that a body-fixed sensor worn continuously can evaluate fall risk in PD. This sensor-based approach was able to identify the transition from non-faller to faller, whereas many traditional metrics were not. This approach may facilitate earlier detection of fall risk and may, in the future, help reduce the high costs associated with falls. PMID:24801889

  20. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
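    A hedged sketch of one VISSA-style iteration with weighted binary matrix sampling; the cross-validation scorer and the weight-update rule are simplified placeholders rather than the published algorithm:

    ```python
    import numpy as np

    def wbms_iteration(X, y, weights, score, n_models=500, top_frac=0.1, rng=None):
        """Draw random sub-models with variable-inclusion probabilities `weights`,
        keep the best fraction, and return updated inclusion frequencies."""
        rng = rng or np.random.default_rng()
        n_vars = X.shape[1]
        # Weighted binary matrix: each row is one sub-model (1 = variable included).
        B = (rng.random((n_models, n_vars)) < weights).astype(int)
        B[B.sum(axis=1) == 0, 0] = 1                      # avoid empty sub-models
        # `score` is a user-supplied error measure (e.g., cross-validated RMSE).
        scores = np.array([score(X[:, row == 1], y) for row in B])
        best = B[np.argsort(scores)[: int(top_frac * n_models)]]  # lower = better
        return best.mean(axis=0)                          # new inclusion frequencies

    # Iterating shrinks the variable space: weights that fall to 0 drop variables,
    # while weights that reach 1 always retain them.
    ```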

  1. The geriatric depression scale and the timed up and go test predict fear of falling in community-dwelling elderly women with type 2 diabetes mellitus: a cross-sectional study.

    PubMed

    Moreira, Bruno de Souza; Dos Anjos, Daniela Maria da Cruz; Pereira, Daniele Sirineu; Sampaio, Rosana Ferreira; Pereira, Leani Souza Máximo; Dias, Rosângela Corrêa; Kirkwood, Renata Noce

    2016-03-03

    Fear of falling is a common and potentially disabling problem among older adults. However, little is known about this condition in older adults with diabetes mellitus. The aims of this study were to investigate the impact of the fear of falling on clinical, functional and gait variables in older women with type 2 diabetes and to identify which variables could predict the fear of falling in this population. Ninety-nine community-dwelling older women with type 2 diabetes (aged 65 to 89 years) were stratified into two groups based on their Falls Efficacy Scale-International score. Participants with a score < 23 were assigned to the group without the fear of falling (n = 50) and those with a score ≥ 23 were assigned to the group with the fear of falling (n = 49). Clinical data included demographics, anthropometrics, number of diseases and medications, physical activity level, fall history, frailty level, cognition, depressive symptoms, fasting glucose level and disease duration. Functional measures included the Timed Up and Go test (TUG), the five times sit-to-stand test (5-STS) and handgrip strength. Gait parameters were obtained using the GAITRite® system. Participants with a fear of falling were frailer and presented more depressive symptoms and worse performance on the TUG and 5-STS tests compared with those without a fear of falling. The group with the fear of falling also walked with lower velocity, cadence and step length, and with increased step time and swing time variability. The multivariate regression analysis showed that the likelihood of having a fear of falling increased 1.34 times (OR 1.34, 95 % CI 1.11-1.61) for a one-point increase in the Geriatric Depression Scale (GDS-15) score and 1.36 times (OR 1.36, 95 % CI 1.07-1.73) for each one-second increase in TUG performance. The fear of falling in community-dwelling older women with type 2 diabetes mellitus is associated with frailty, depressive symptoms, and deficits in dynamic balance, functional mobility and gait. Furthermore, both the GDS-15 and the TUG test predict a fear of falling in this population. Therefore, these instruments should be considered during the assessment of diabetic older women with a fear of falling.
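    The reported odds ratios follow directly from the logistic-regression coefficients; a tiny illustration of that conversion (the coefficient and standard error below are hypothetical values chosen to reproduce the GDS-15 estimate, not figures from the study):

    ```python
    import math

    # Hypothetical logit coefficient and SE for a 1-point increase in GDS-15
    b, se = 0.293, 0.095
    odds_ratio = math.exp(b)                                   # ~ 1.34
    ci = (math.exp(b - 1.96 * se), math.exp(b + 1.96 * se))    # ~ (1.11, 1.61)
    print(f"OR = {odds_ratio:.2f}, 95% CI = {ci[0]:.2f}-{ci[1]:.2f}")
    ```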

  2. Numerical modeling of solar irradiance on earth's surface

    NASA Astrophysics Data System (ADS)

    Mera, E.; Gutierez, L.; Da Silva, L.; Miranda, E.

    2016-05-01

    Ground-based modeling and estimation of solar radiation involves the equation of time, the Earth-Sun distance, the solar declination, and the calculation of surface irradiance. Many studies have reported that these theoretical equations alone do not provide accurate estimates of radiation, so many authors have applied corrections through calibration against field pyranometers (solarimeters) or against satellite data, the latter being a poorer technique because it does not differentiate between radiation and radiant kinetic effects. Given the above, and taking advantage of a properly calibrated ground weather station at the Susques Salar in Jujuy Province, Republic of Argentina, the variable in question was modeled through the following process: 1. theoretical modeling; 2. graphical study of the theoretical and measured data; 3. adjustment of the primary calibration through hourly data segmentation, a horizontal shift, and the addition of an asymptotic constant; 4. analysis of scatter plots and contrast of the series. Following these steps, the modeled data were obtained as follows. Step One: theoretical data were generated. Step Two: the theoretical data were shifted by 5 hours. Step Three: an asymptote was applied to all negative emissivity values, and the Excel Solver algorithm was applied to the least-squares minimization between measured and modeled values, yielding new asymptote values and the corresponding reformulation of the theoretical data; a constant value was also added by month over a set time range (4:00 pm to 6:00 pm). Step Four: the coefficients of the modeling equation showed monthly correlations between measured and theoretical data ranging from 0.7 to 0.9.
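    A hedged sketch of the calibration idea (time shift plus an asymptotic floor and a least-squares fit between modeled and measured irradiance); the parameterization is a simplified stand-in for the Excel Solver procedure described above, not a reproduction of it:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def calibrate(theoretical, measured, shift_hours, dt_hours=1.0):
        """Shift the theoretical series in time, apply an asymptotic floor to
        negative values, and fit a scale and offset by least squares."""
        shift = int(round(shift_hours / dt_hours))
        model = np.roll(theoretical, shift)

        def residuals(p):
            scale, offset, floor = p
            adjusted = np.maximum(scale * model + offset, floor)
            return adjusted - measured

        fit = least_squares(residuals, x0=[1.0, 0.0, 0.0])
        return fit.x  # calibrated (scale, offset, floor)
    ```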

  3. Elite sprinting: are athletes individually step-frequency or step-length reliant?

    PubMed

    Salo, Aki I T; Bezodis, Ian N; Batterham, Alan M; Kerwin, David G

    2011-06-01

    The aim of this study was to investigate the step characteristics among the very best 100-m sprinters in the world to understand whether elite athletes are individually more reliant on step frequency (SF) or step length (SL). A total of 52 male elite-level 100-m races were recorded from publicly available television broadcasts, with the 11 analyzed athletes each performing in 10 or more races. For each run of each athlete, the average SF and SL over the whole 100-m distance were analyzed. To determine any SF or SL reliance for an individual athlete, the 90% confidence interval (CI) for the difference between the SF-time and SL-time relationships was derived using a criterion nonparametric bootstrapping technique. Athletes performed these races with various combinations of SF and SL reliance. Athlete A10 yielded the highest positive CI difference (SL reliance), with a value of 1.05 (CI = 0.50-1.53). The largest negative difference (SF reliance) occurred for athlete A11 at -0.60, with a CI range of -1.20 to 0.03. Previous studies have generally identified only one of these variables to be the main reason for faster running velocities. However, this study showed that there is a large variation in performance patterns among elite athletes and, overall, SF or SL reliance is a highly individual occurrence. It is proposed that athletes should take this reliance into account in their training, with SF-reliant athletes needing to keep their neural system ready for fast leg turnover and SL-reliant athletes requiring more concentration on maintaining strength levels.
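    A minimal sketch of a bootstrap comparison of the SF-time and SL-time relationships for one athlete; using correlation magnitudes as the effect measure and a percentile interval is a simplifying assumption, not the study's exact criterion procedure:

    ```python
    import numpy as np

    def reliance_ci(times, sf, sl, n_boot=10000, alpha=0.10, rng=None):
        """90% bootstrap CI for the difference between the SL-time and SF-time
        correlations across one athlete's races (positive -> SL reliant)."""
        rng = rng or np.random.default_rng(0)
        n = len(times)
        diffs = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, n, n)                       # resample races
            r_sf = np.corrcoef(times[idx], sf[idx])[0, 1]
            r_sl = np.corrcoef(times[idx], sl[idx])[0, 1]
            diffs[i] = abs(r_sl) - abs(r_sf)
        return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    ```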

  4. Predicting Dynamic Postural Instability Using Center of Mass Time-to-Contact Information

    PubMed Central

    Hasson, Christopher J.; Van Emmerik, Richard E.A.; Caldwell, Graham E.

    2008-01-01

    Our purpose was to determine whether spatiotemporal measures of center of mass motion relative to the base of support boundary could predict stepping strategies after upper-body postural perturbations in humans. We expected that inclusion of center of mass acceleration in such time-to-contact (TtC) calculations would give better predictions and more advanced warning of perturbation severity. TtC measures were compared with traditional postural variables, which don’t consider support boundaries, and with an inverted pendulum model of dynamic stability developed by Hof et al. (2005). A pendulum was used to deliver sequentially increasing perturbations to 10 young adults, who were strapped to a wooden backboard that constrained motion to sagittal plane rotation about the ankle joint. Subjects were instructed to resist the perturbations, stepping only if necessary to prevent a fall. Peak center of mass and center of pressure velocity and acceleration demonstrated linear increases with postural challenge. In contrast, boundary relevant minimum TtC values decreased nonlinearly with postural challenge, enabling prediction of stepping responses using quadratic equations. When TtC calculations incorporated center of mass acceleration, the quadratic fits were better and gave more accurate predictions of the TtC values that would trigger stepping responses. In addition, TtC minima occurred earlier with acceleration inclusion, giving more advanced warning of perturbation severity. Our results were in agreement with TtC predictions based on Hof’s model, and suggest that TtC may function as a control parameter, influencing the postural control system’s decision to transition from a stationary base of support to a stepping strategy. PMID:18556003
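    Time-to-contact with the base-of-support boundary can be written as the smallest positive root of a constant-acceleration extrapolation of the center of mass toward that boundary; the one-dimensional sketch below illustrates the acceleration-inclusive calculation, not the study's full spatiotemporal formulation:

    ```python
    import math

    def time_to_contact(pos, vel, acc, boundary):
        """Smallest positive time at which pos + vel*t + 0.5*acc*t**2 reaches the
        boundary; returns inf if the extrapolated CoM never gets there."""
        c = pos - boundary
        if abs(acc) < 1e-9:                       # velocity-only TtC
            t = -c / vel if vel != 0 else math.inf
            return t if t > 0 else math.inf
        disc = vel**2 - 2 * acc * c               # discriminant of 0.5*acc*t^2 + vel*t + c = 0
        if disc < 0:
            return math.inf
        roots = [(-vel + s * math.sqrt(disc)) / acc for s in (+1, -1)]
        positive = [t for t in roots if t > 0]
        return min(positive) if positive else math.inf
    ```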

  5. Music and metronome cues produce different effects on gait spatiotemporal measures but not gait variability in healthy older adults.

    PubMed

    Wittwer, Joanne E; Webster, Kate E; Hill, Keith

    2013-02-01

    Rhythmic auditory cues including music and metronome beats have been used, sometimes interchangeably, to improve disordered gait arising from a range of clinical conditions. There has been limited investigation into whether there are optimal cue types. Different cue types have produced inconsistent effects across groups which differed in both age and clinical condition. The possible effect of normal ageing on the response to different cue types has not been reported for gait. The aim of this study was to determine the effects of both rhythmic music and metronome cues on gait spatiotemporal measures (including variability) in healthy older people. Twelve women and seven men (>65 years) walked on an instrumented walkway at a comfortable pace and then in time to rhythmic music and metronome cues at their comfortable-pace stepping frequency. Music but not metronome cues produced a significant increase in group mean gait velocity of 4.6 cm/s, due mostly to a significant increase in group mean stride length of 3.1 cm. Both cue types produced a significant but small increase in cadence of 1 step/min. Mean spatiotemporal variability was low at baseline and did not increase with either cue type, suggesting the cues did not disrupt gait timing. Study findings suggest music and metronome cues may not be used interchangeably, and cue type as well as frequency should be considered when evaluating effects of rhythmic auditory cueing on gait. Further work is required to determine whether optimal cue types and frequencies to improve walking in different clinical groups can be identified. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Acute Effects of Plyometric Intervention—Performance Improvement and Related Changes in Sprinting Gait Variability.

    PubMed

    Maćkała, Krzysztof; Fostiak, Marek

    2015-07-01

    The purpose of this study was to examine the effect of a short high-intensity plyometric program on the improvement of explosive power of the lower extremities and sprint performance, as well as changes in sprinting stride variability, in male sprinters. Fourteen healthy male sprinters (mean ± SD: age: 18.07 ± 0.73 years, body mass: 73 ± 9.14 kg, height: 180.57 ± 8.16 cm, and best 100 m: 10.89 ± 0.23 s) participated in the experiment. The experimental protocol included vertical jumps (squat jump and countermovement jump) and horizontal jumps (standing long jump and standing triple jump) to assess lower-body power, a 20-m flying-start sprint to assess maximal running velocity and to evaluate the variability of 10 running steps, and a 60-m starting-block sprint. All analyzed parameters were obtained using the OptoJump-Microgate system (OptoJump, Italy). The short-term plyometric training program significantly increased the explosive power of the lower extremities, with improvements in both vertical and horizontal jumps; however, the vertical jumps increased much more than the horizontal. The 20-m sprint improvements were derived from an increase in stride frequency from 4.31 to 4.39 Hz, owing to a decrease in ground contact time from 138 to 133 milliseconds; this did not translate into changes in step length. Therefore, the significantly increased stride frequency (1.8%), which is a specific expression of the reduction in ground contact time during the support phase, resulted in an increase in speed. A training volume of 2 weeks (6 sessions) of high-intensity plyometric exercises (between 180 and 250 jumps per session) can be recommended as a short-term strategy to optimize the probability of achieving strong improvements in explosive power and sprint velocity performance.

  7. Assessment of stability during gait in patients with spinal deformity-A preliminary analysis using the dynamic stability margin.

    PubMed

    Simon, Anne-Laure; Lugade, Vipul; Bernhardt, Kathie; Larson, A Noelle; Kaufman, Kenton

    2017-06-01

    Daily living activities are dynamic, requiring spinal motion through space. Current assessment of spinal deformities is based on static measurements from full-spine standing radiographs. Tools to assess dynamic stability during gait might be useful to enhance the standard evaluation. The aim of this study was to evaluate dynamic gait imbalance in patients with spinal deformity using the dynamic stability margin (DSM). Twelve normal subjects and 17 patients with spinal deformity were prospectively recruited. A kinematic 3D gait analysis was performed for the control group (CG) and the spinal deformity group (SDG). The DSM (distance between the extrapolated center of mass and the base of support) and time-distance parameters were calculated for the right and left sides during gait. The relationship between DSM and step length was assessed using three variables: gait stability, symmetry, and consistency. The accuracy of the variables was validated by a discriminant analysis. Patients with spinal deformity exhibited gait instability according to the DSM (0.25 m versus 0.31 m), with decreased velocity (1.1 m/s versus 1.3 m/s) and decreased step length (0.32 m versus 0.38 m). According to the discriminant analysis, gait stability was the most accurate variable (area under the curve, AUC = 0.98), followed by gait symmetry and consistency. However, gait consistency showed 100% specificity, sensitivity, and accuracy. The DSM showed that patients with spinal malalignment exhibit decreased gait stability, symmetry, and consistency, in addition to changes in gait time-distance parameters. Additional work is required to determine how to apply the DSM for preoperative and postoperative spinal deformity management. Copyright © 2017. Published by Elsevier B.V.
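    The DSM builds on the extrapolated center of mass of Hof's inverted-pendulum model; a minimal sketch of the underlying quantities (the leg length, sign convention, and example numbers are illustrative assumptions, not values from the study):

    ```python
    import math

    G = 9.81  # gravitational acceleration, m/s^2

    def extrapolated_com(com_pos, com_vel, leg_length):
        """Hof's extrapolated centre of mass: XCoM = CoM + v / omega0,
        with omega0 = sqrt(g / l) the pendulum eigenfrequency."""
        omega0 = math.sqrt(G / leg_length)
        return com_pos + com_vel / omega0

    def dynamic_stability_margin(com_pos, com_vel, bos_boundary, leg_length):
        """Signed distance between the base-of-support boundary and the XCoM;
        positive values indicate the XCoM lies inside the boundary."""
        return bos_boundary - extrapolated_com(com_pos, com_vel, leg_length)

    # Example: anterior boundary 0.32 m ahead of the CoM, forward CoM velocity 1.1 m/s
    print(dynamic_stability_margin(0.0, 1.1, 0.32, leg_length=0.9))
    ```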

  8. Measurement of circulating cell-derived microparticles by flow cytometry: sources of variability within the assay.

    PubMed

    Ayers, Lisa; Kohler, Malcolm; Harrison, Paul; Sargent, Ian; Dragovic, Rebecca; Schaap, Marianne; Nieuwland, Rienk; Brooks, Susan A; Ferry, Berne

    2011-04-01

    Circulating cell-derived microparticles (MPs) have been implicated in several disease processes and elevated levels are found in many pathological conditions. The detection and accurate measurement of MPs, although attracting widespread interest, is hampered by a lack of standardisation. The aim of this study was to establish a reliable flow cytometric assay to measure distinct subtypes of MPs in disease and to identify any significant causes of variability in MP quantification. Circulating MPs within plasma were identified by their phenotype (platelet, endothelial or leukocyte) and annexin-V positivity (AnnV+). The influence of key variables (i.e. time between venepuncture and centrifugation, washing steps, the number of centrifugation steps, freezing/long-term storage and temperature of thawing) on MP measurement was investigated. Increasing the time between venepuncture and centrifugation led to increased MP levels. Washing samples resulted in decreased AnnV+ MPs (P=0.002) and platelet-derived MPs (PMPs) (P=0.002). Double centrifugation of MPs prior to freezing decreased the numbers of AnnV+ MPs (P=0.0004) and PMPs (P=0.0004). A single freeze-thaw cycle of samples led to an increase in AnnV+ MPs (P=0.0020) and PMPs (P=0.0039). Long-term storage of MP samples at -80 °C resulted in decreased MP levels. This study found that minor protocol changes significantly affected MP levels. This is one of the first studies attempting to standardise a method for obtaining and measuring circulating MPs. Standardisation will be essential for the successful development of MP technologies, allowing direct comparison of results between studies and leading to a greater understanding of MPs in disease. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.

  9. Variable-mesh method of solving differential equations

    NASA Technical Reports Server (NTRS)

    Van Wyk, R.

    1969-01-01

    Multistep predictor-corrector method for numerical solution of ordinary differential equations retains high local accuracy and convergence properties. In addition, the method was developed in a form conducive to the generation of effective criteria for the selection of subsequent step sizes in step-by-step solution of differential equations.
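    A minimal predictor-corrector with step-size selection driven by the predictor-corrector discrepancy illustrates the general idea; this generic sketch (explicit Euler predictor, trapezoidal corrector) is not the multistep scheme of the report:

    ```python
    def adaptive_pc(f, t, y, t_end, h=1e-2, tol=1e-6):
        """Integrate y' = f(t, y) with an Euler predictor and trapezoidal corrector,
        adapting h from the |corrector - predictor| error estimate."""
        while t < t_end:
            h = min(h, t_end - t)
            y_pred = y + h * f(t, y)                              # predictor
            y_corr = y + 0.5 * h * (f(t, y) + f(t + h, y_pred))   # corrector
            err = abs(y_corr - y_pred)
            if err <= tol or h < 1e-12:
                t, y = t + h, y_corr                              # accept the step
            h *= 0.9 * (tol / max(err, 1e-15)) ** 0.5             # grow/shrink step
        return y

    # Example: y' = -y, y(0) = 1, so y(1) should be close to exp(-1)
    print(adaptive_pc(lambda t, y: -y, 0.0, 1.0, 1.0))
    ```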

  10. Flow resistance dynamics in step‐pool stream channels: 1. Large woody debris and controls on total resistance

    USGS Publications Warehouse

    Wilcox, Andrew C.; Wohl, Ellen E.

    2006-01-01

    Flow resistance dynamics in step‐pool channels were investigated through physical modeling using a laboratory flume. Variables contributing to flow resistance in step‐pool channels were manipulated in order to measure the effects of various large woody debris (LWD) configurations, steps, grains, discharge, and slope on total flow resistance. This entailed nearly 400 flume runs, organized into a series of factorial experiments. Factorial analyses of variance indicated significant two‐way and three‐way interaction effects between steps, grains, and LWD, illustrating the complexity of flow resistance in these channels. Interactions between steps and LWD resulted in substantially greater flow resistance for steps with LWD than for steps lacking LWD. LWD position contributed to these interactions, whereby LWD pieces located near the lip of steps, analogous to step‐forming debris in natural channels, increased the effective height of steps and created substantially higher flow resistance than pieces located farther upstream on step treads. Step geometry and LWD density and orientation also had highly significant effects on flow resistance. Flow resistance dynamics and the resistance effect of bed roughness configurations were strongly discharge‐dependent; discharge had both highly significant main effects on resistance and highly significant interactions with all other variables.

  11. Time-Series Monitoring of Open Star Clusters

    NASA Astrophysics Data System (ADS)

    Hojaev, A. S.; Semakov, D. G.

    2006-08-01

    Star clusters, especially compact ones (with diameters of a few to ten arcmin), are suitable targets for searching for light variability in an orchestra of stars by means of an ordinary Cassegrain telescope plus a CCD system. Special patrolling with short, fixed-time exposures and mmag accuracy can also be used to study stellar oscillations for a group of stars simultaneously. The latter can be carried out both separately from one site and within international campaigns. Detection and study of the optical variability of X-ray sources, including X-ray binaries with compact objects, may also result from long-term monitoring of such clusters. We present a program of open star cluster monitoring with the Zeiss 1-m RCC telescope of Maidanak Observatory, which has recently been automated. In combination with the good seeing at this observatory (see, e.g., Sarazin, M. 1999, URL http://www.eso.org/gen-fac/pubs/astclim/), the automated telescope equipped with the available large-format (2K×2K) CCD camera AP-10 will allow homogeneous time series to be collected for analysis. We started this program in 2001 and obtained a set of patrol observations with the Zeiss 0.6-m telescope and AP-10 camera in 2003. Seven compact open clusters in the Milky Way (NGC 7801, King 1, King 13, King 18, King 20, Berkeley 55, IC 4996) have been monitored for stellar variability, and some photometric results will be presented. A few interesting variables have been discovered, and dozens more are suspected of variability in these clusters for the first time. We have taken steps to join the Whole Earth Telescope effort in its future campaigns.

  12. Revealing structure and evolution within the corona of the Seyfert galaxy I Zw 1

    NASA Astrophysics Data System (ADS)

    Wilkins, D. R.; Gallo, L. C.; Silva, C. V.; Costantini, E.; Brandt, W. N.; Kriss, G. A.

    2017-11-01

    X-ray spectral timing analysis is presented of XMM-Newton observations of the narrow-line Seyfert 1 galaxy I Zwicky 1 taken in 2015 January. After exploring the effect of background flaring on timing analyses, X-ray time lags between the reflection-dominated 0.3-1.0 keV band and the continuum-dominated 1.0-4.0 keV band are measured, indicative of reverberation off the inner accretion disc. The reverberation lag time is seen to vary as a step function in frequency; across the lower frequency components of the variability, 3 × 10⁻⁴ to 1.2 × 10⁻³ Hz, a lag of 160 s is measured, but the lag shortens to (59 ± 4) s above 1.2 × 10⁻³ Hz. The lag-energy spectrum reveals differing profiles between these ranges, with a change in the dip that marks the earliest-arriving photons. The low-frequency signal indicates reverberation of X-rays emitted from a corona extended at low height over the disc, while at high frequencies, variability is generated in a collimated core of the corona through which luminosity fluctuations propagate upwards. Principal component analysis of the variability supports this interpretation, showing uncorrelated variation in the spectral slope of two power-law continuum components. The distinct evolution of the two components of the corona is seen as a flare passes inwards from the extended to the collimated portion. An increase in variability in the extended corona was found preceding the initial increase in X-ray flux. Variability from the extended corona was seen to die away as the flare passed into the collimated core, leading to a second, sharper increase in the X-ray count rate.
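    Frequency-dependent lags of this kind are conventionally obtained from the phase of the cross spectrum between the two energy bands; the generic sketch below shows that calculation (band names, sampling step, averaging length, and the sign convention are placeholders, not the authors' pipeline):

    ```python
    import numpy as np
    from scipy.signal import csd

    def lag_spectrum(soft, hard, dt):
        """Time lag of the soft (e.g., 0.3-1 keV) light curve relative to the hard
        (e.g., 1-4 keV) one as a function of Fourier frequency, from the
        phase of the averaged cross spectrum."""
        f, Pxy = csd(soft, hard, fs=1.0 / dt, nperseg=len(soft) // 8)
        phase = np.angle(Pxy)
        with np.errstate(divide="ignore", invalid="ignore"):
            lag = phase / (2.0 * np.pi * f)   # seconds; sign convention is assumed
        return f[1:], lag[1:]                 # drop the zero-frequency bin
    ```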

  13. A database of 10 min average measurements of solar radiation and meteorological variables in Ostrava, Czech Republic

    NASA Astrophysics Data System (ADS)

    Opálková, Marie; Navrátil, Martin; Špunda, Vladimír; Blanc, Philippe; Wald, Lucien

    2018-04-01

    A database containing 10 min means of solar irradiance measured on a horizontal plane in several ultraviolet and visible bands from July 2014 to December 2016 at three stations in the area of the city of Ostrava (Czech Republic) is presented. The database contains time series of 10 min average irradiances or photosynthetic photon flux densities measured in the following spectral bands: 280-315 nm (UVB); 315-380 nm (UVA); 400-700 nm (photosynthetically active radiation, PAR); 510-700 nm; 600-700 nm; 610-680 nm; 690-780 nm; and 400-1100 nm. A series of meteorological variables including relative air humidity and air temperature at the surface is also provided at the same 10 min time step at all three stations, and precipitation is provided for two stations. Air pressure, wind speed, wind direction, and concentrations of the air pollutants PM10, SO2, NOx, NO, NO2 were measured at the 1 h time step at a fourth station owned by the Public Health Institute of Ostrava. The details of the experimental sites and instruments used for the measurements are given. Special attention is given to data quality, and the original approach to data quality that was established is described in detail. About 130 000 records for each of the three stations are available in the database. This database offers a unique ensemble of variables with high temporal resolution and is a reliable source of radiation data in relation to environment and vegetation in highly polluted industrial cities of the northern mid-latitudes. The database has been placed on the PANGAEA repository (https://doi.org/10.1594/PANGAEA.879722) and contains individual data files for each station.

  14. A Multi-Scale Distribution Model for Non-Equilibrium Populations Suggests Resource Limitation in an Endangered Rodent

    PubMed Central

    Bean, William T.; Stafford, Robert; Butterfield, H. Scott; Brashares, Justin S.

    2014-01-01

    Species distributions are known to be limited by biotic and abiotic factors at multiple temporal and spatial scales. Species distribution models (SDMs), however, frequently assume a population at equilibrium in both time and space. Studies of habitat selection have repeatedly shown the difficulty of estimating resource selection if the scale or extent of analysis is incorrect. Here, we present a multi-step approach to estimate the realized and potential distribution of the endangered giant kangaroo rat. First, we estimate the potential distribution by modeling suitability at a range-wide scale using static bioclimatic variables. We then examine annual changes in extent at the population level. We define "available" habitat based on the total suitable potential distribution at the range-wide scale. Then, within the available habitat, we model changes in population extent driven by multiple measures of resource availability. By modeling distributions for a population with robust estimates of population extent through time, and with ecologically relevant predictor variables, we improved the predictive ability of SDMs and revealed an unanticipated relationship between population extent and precipitation at multiple scales. At the range-wide scale, the best model indicated the giant kangaroo rat was limited to areas that received little to no precipitation in the summer months. In contrast, the best model for shorter time scales showed a positive relationship with resource abundance, driven by precipitation, in the current and previous year. These results suggest that the distribution of the giant kangaroo rat was limited to the wettest parts of the drier areas within the study region. This multi-step approach reinforces the differing relationships species may have with environmental variables at different scales, provides a novel method for defining "available" habitat in habitat selection studies, and suggests a way to create distribution models at spatial and temporal scales relevant to theoretical and applied ecologists. PMID:25237807

  15. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large numbers of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need to couple two different temporal scales, given that in hydrosystem modelling monthly simulation steps are typically adopted, yet a faithful representation of the energy balance (i.e. energy production vs. demand) requires a much finer resolution (e.g. hourly). Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk with satisfactory accuracy. To address these issues, we propose an effective and efficient modelling framework, the key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial decrease of the required number of function evaluations for detecting the optimal management policy, using an innovative, surrogate-assisted global optimization approach.
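    A toy version of the linearized single-time-step allocation: one reservoir release split between a water demand node, a turbine, and spill, solved as a linear program. The network, costs, capacities, and units below are illustrative assumptions only, not the framework's actual formulation:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Decision variables for one time step: x = [release_to_demand, release_to_turbine, spill]
    cost = np.array([-3.0, -2.0, 0.1])     # reward delivery and energy, lightly penalise spill

    storage, inflow, demand = 40.0, 10.0, 12.0   # hypothetical volumes (hm3) for this step
    A_ub = [[1, 1, 1]]                     # total outflow cannot exceed available water
    b_ub = [storage + inflow]
    bounds = [(0, demand),                 # cannot deliver more than the demand
              (0, 25.0),                   # turbine capacity for this step
              (0, None)]                   # spill is unbounded above

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    print(res.x)                           # optimal allocation for this time step
    ```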

  16. United States Forest Disturbance Trends Observed Using Landsat Time Series

    NASA Technical Reports Server (NTRS)

    Masek, Jeffrey G.; Goward, Samuel N.; Kennedy, Robert E.; Cohen, Warren B.; Moisen, Gretchen G.; Schleeweis, Karen; Huang, Chengquan

    2013-01-01

    Disturbance events strongly affect the composition, structure, and function of forest ecosystems; however, existing U.S. land management inventories were not designed to monitor disturbance. To begin addressing this gap, the North American Forest Dynamics (NAFD) project has examined a geographic sample of 50 Landsat satellite image time series to assess trends in forest disturbance across the conterminous United States for 1985-2005. The geographic sample design used a probability-based scheme to encompass major forest types and maximize geographic dispersion. For each sample location disturbance was identified in the Landsat series using the Vegetation Change Tracker (VCT) algorithm. The NAFD analysis indicates that, on average, 2.77 Mha/yr of forests were disturbed annually, representing 1.09%/yr of US forestland. These satellite-based national disturbance rate estimates tend to be lower than those derived from land management inventories, reflecting both methodological and definitional differences. In particular the VCT approach used with a biennial time step has limited sensitivity to low-intensity disturbances. Unlike prior satellite studies, our biennial forest disturbance rates vary by nearly a factor of two between high and low years. High western US disturbance rates were associated with active fire years and insect activity, while variability in the east is more strongly related to harvest rates in managed forests. We note that generating a geographic sample based on representing forest type and variability may be problematic since the spatial pattern of disturbance does not necessarily correlate with forest type. We also find that the prevalence of diffuse, non-stand-clearing disturbance in US forests makes the application of a biennial geographic sample problematic. Future satellite-based studies of disturbance at regional and national scales should focus on wall-to-wall analyses with an annual time step for improved accuracy.

  17. Combined effect of whole-body vibration and ambient lighting on human discomfort, heart rate, and reaction time.

    PubMed

    Monazzam, Mohammad Reza; Shoja, Esmaeil; Zakerian, Seyed Abolfazl; Foroushani, Abbas Rahimi; Shoja, Mohsen; Gharaee, Masoumeh; Asgari, Amin

    2018-07-01

    This study aimed to investigate the effects of whole-body vibration and ambient lighting, as well as their combined effect, on human discomfort, heart rate, and reaction time in laboratory conditions. Forty-four men were recruited, with an average age of 25.4 ± 1.9 years. Each participant was subjected to 12 experimental steps, each lasting five minutes, covering four vibration accelerations along the X, Y, and Z axes at a fixed frequency and three lighting intensities of 50, 500, and 1000 lx. At each step, subjects took a computerized visual reaction test and their heart rate was recorded with a pulse oximeter. In addition, the discomfort rate of the subjects was measured using the Borg scale. Increasing vibration acceleration significantly increased the discomfort rate and heart rate but not the reaction time. Insufficient lighting caused more discomfort in the subjects, but there was no significant correlation between lighting intensity and heart rate or reaction time. The results also showed that the combined effect of vibration and lighting had no significant effect on any of the discomfort, heart rate, or reaction time variables. Whole-body vibration is therefore a more important factor than lighting in the development of human subjective and physiological reactions. Accordingly, consideration of the level of vibration to which an individual is exposed in vibration-prone workplaces plays an important role in reducing human discomfort, but its interaction with ambient lighting does not have a significant effect on human subjective and physiological responses.

  18. Hilbert-Huang spectral analysis for characterizing the intrinsic time-scales of variability in decennial time-series of surface solar radiation

    NASA Astrophysics Data System (ADS)

    Bengulescu, Marc; Blanc, Philippe; Wald, Lucien

    2016-04-01

    An analysis of the variability of the surface solar irradiance (SSI) at different local time-scales is presented in this study. Since geophysical signals, such as long-term measurements of the SSI, are often produced by the non-linear interaction of deterministic physical processes that may also be under the influence of non-stationary external forcings, the Hilbert-Huang transform (HHT), an adaptive, noise-assisted, data-driven technique, is employed to extract locally - in time and in space - the embedded intrinsic scales at which a signal oscillates. The transform consists of two distinct steps. First, by means of the Empirical Mode Decomposition (EMD), the time-series is "de-constructed" into a finite number - often small - of zero-mean components that have distinct temporal scales of variability, termed hereinafter the Intrinsic Mode Functions (IMFs). The signal model of the components is an amplitude-modulation/frequency-modulation (AM-FM) one, and can also be thought of as an extension of a Fourier series having both time-varying amplitude and frequency. Following the decomposition, Hilbert spectral analysis is then employed on the IMFs, yielding a time-frequency-energy representation that portrays changes in the spectral contents of the original data with respect to time. As measurements of surface solar irradiance may be contaminated by the manifestation of different types of stochastic processes (i.e., noise), the identification of real, physical processes against this background of random fluctuations is of interest. To this end, an adaptive background noise null hypothesis is assumed, based on the robust statistical properties of the EMD when applied to time-series of different classes of noise (e.g. white, red or fractional Gaussian). Since the algorithm acts as an efficient constant-Q dyadic, "wavelet-like" filter bank, the different noise inputs are decomposed into components having the same spectral shape, but translated to the next lower octave in the spectral domain. Thus, when the sampling step is increased, the spectral shape of the IMFs cannot remain at its original position, due to the new lower Nyquist frequency, and is instead pushed toward the lower scaled frequency. Based on these features, the identification of potential signals within the data should become possible without any prior knowledge of the background noises. When applying the above outlined procedure to decennial time-series of surface solar irradiance, only the component that has an annual time-scale of variability is shown to have statistical properties that diverge from those of noise. Nevertheless, the noise-like components are not completely devoid of information, as it is found that their AM components have a non-null rank correlation coefficient with the annual mode, i.e. the background noise intensity seems to be modulated by the seasonal cycle. The findings have possible implications for the modelling and forecasting of the surface solar irradiance, by discriminating its deterministic from its quasi-stochastic constituents at distinct local time-scales.
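    A hedged sketch of the two HHT steps, using the PyEMD package (assumed available as `pip install EMD-signal`) for the decomposition and scipy's analytic signal for the Hilbert spectral step; this reproduces the general workflow only, not the authors' noise-testing procedure:

    ```python
    import numpy as np
    from PyEMD import EMD                 # assumed dependency: pip install EMD-signal
    from scipy.signal import hilbert

    def hht(signal, dt):
        """Empirical Mode Decomposition followed by Hilbert spectral analysis:
        returns the IMFs and their instantaneous amplitude and frequency."""
        imfs = EMD()(signal)                          # intrinsic mode functions
        analytic = hilbert(imfs, axis=-1)             # analytic signal per IMF
        amplitude = np.abs(analytic)                  # AM part
        phase = np.unwrap(np.angle(analytic), axis=-1)
        frequency = np.diff(phase, axis=-1) / (2.0 * np.pi * dt)   # FM part (Hz)
        return imfs, amplitude, frequency
    ```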

  19. An adaptive tau-leaping method for stochastic simulations of reaction-diffusion systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Padgett, Jill M. A.; Ilie, Silvana, E-mail: silvana@ryerson.ca

    2016-03-15

    Stochastic modelling is critical for studying many biochemical processes in a cell, in particular when some reacting species have low population numbers. For many such cellular processes the spatial distribution of the molecular species plays a key role. The evolution of spatially heterogeneous biochemical systems with some species in low amounts is accurately described by the mesoscopic model of the Reaction-Diffusion Master Equation. The Inhomogeneous Stochastic Simulation Algorithm provides an exact strategy to numerically solve this model, but it is computationally very expensive on realistic applications. We propose a novel adaptive time-stepping scheme for the tau-leaping method for approximating the solution of the Reaction-Diffusion Master Equation. This technique combines effective strategies for variable time-stepping with path preservation to reduce the computational cost, while maintaining the desired accuracy. The numerical tests on various examples arising in applications show the improved efficiency achieved by the new adaptive method.
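    A minimal tau-leaping kernel for a well-mixed system illustrates the method being accelerated; the leap-size rule here is a crude bound on relative propensity change (parameter `eps`), not the adaptive, diffusion-aware scheme proposed in the paper:

    ```python
    import numpy as np

    def tau_leap(x, stoich, propensities, t_end, eps=0.03, rng=None):
        """Tau-leaping: fire Poisson numbers of each reaction over a leap tau
        chosen so that propensities change little within the leap."""
        rng = rng or np.random.default_rng()
        t = 0.0
        while t < t_end:
            a = propensities(x)                       # reaction propensities
            a0 = a.sum()
            if a0 == 0:
                break
            tau = min(eps * max(x.sum(), 1) / a0, t_end - t)   # crude leap size
            k = rng.poisson(a * tau)                  # reaction firings in [t, t+tau)
            x = np.maximum(x + stoich.T @ k, 0)       # update populations
            t += tau
        return x

    # Example: A -> B with rate constant 1.0; each stoich row is one reaction's state change
    stoich = np.array([[-1, 1]])
    print(tau_leap(np.array([1000, 0]), stoich, lambda x: np.array([1.0 * x[0]]), 1.0))
    ```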

  20. An oilspill trajectory analysis model with a variable wind deflection angle

    USGS Publications Warehouse

    Samuels, W.B.; Huang, N.E.; Amstutz, D.E.

    1982-01-01

    The oilspill trajectory movement algorithm consists of a vector sum of the surface drift component due to wind and the surface current component. In the U.S. Geological Survey oilspill trajectory analysis model, the surface drift component is assumed to be 3.5% of the wind speed and is rotated 20 degrees clockwise to account for Coriolis effects in the Northern Hemisphere. Field and laboratory data suggest, however, that the deflection angle of the surface drift current can be highly variable. An empirical formula, based on field observations and theoretical arguments relating wind speed to deflection angle, was used to calculate a new deflection angle at each time step in the model. Comparisons of oilspill contact probabilities to coastal areas calculated for constant and variable deflection angles showed that the model is insensitive to this changing angle at low wind speeds. At high wind speeds, some statistically significant differences in contact probabilities did appear. © 1982.
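    A minimal sketch of the trajectory update: surface drift taken as 3.5% of the wind vector, rotated clockwise by a deflection angle that may be recomputed from wind speed at every time step, plus the surface current. The specific angle formula below is a hypothetical placeholder, not the empirical USGS relation:

    ```python
    import numpy as np

    WIND_FACTOR = 0.035          # drift speed as a fraction of wind speed

    def deflection_angle(wind_speed):
        """Placeholder for the empirical wind-speed-dependent deflection angle (deg);
        the constant-angle model would simply return 20 degrees."""
        return 20.0 * np.exp(-wind_speed / 20.0) + 5.0   # hypothetical form

    def advect(pos, wind, current, dt):
        """One time step of the spill centroid: rotated wind drift plus current."""
        theta = -np.radians(deflection_angle(np.linalg.norm(wind)))  # clockwise turn
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        drift = WIND_FACTOR * rot @ wind
        return pos + dt * (drift + current)

    # Example: 10 m/s easterly wind, weak current, one-hour step (positions in metres)
    pos = advect(np.array([0.0, 0.0]), wind=np.array([10.0, 0.0]),
                 current=np.array([0.1, -0.05]), dt=3600.0)
    ```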

  1. A Viscoelastic Hybrid Shell Finite Element

    NASA Technical Reports Server (NTRS)

    Johnson, Arthur

    1999-01-01

    An elastic large displacement thick-shell hybrid finite element is modified to allow for the calculation of viscoelastic stresses. Internal strain variables are introduced at the element's stress nodes and are employed to construct a viscous material model. First order ordinary differential equations relate the internal strain variables to the corresponding elastic strains at the stress nodes. The viscous stresses are computed from the internal strain variables using viscous moduli which are a fraction of the elastic moduli. The energy dissipated by the action of the viscous stresses is included in the mixed variational functional. Nonlinear quasi-static viscous equilibrium equations are then obtained. Previously developed Taylor expansions of the equilibrium equations are modified to include the viscous terms. A predictor-corrector time marching solution algorithm is employed to solve the algebraic-differential equations. The viscous shell element is employed to numerically simulate a stair-step loading and unloading of an aircraft tire in contact with a frictionless surface.
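    The internal-strain-variable idea can be illustrated in one dimension with a single relaxing variable obeying a first-order ODE; this generic scalar sketch (explicit time marching, made-up moduli and relaxation time) is not the hybrid shell element formulation itself:

    ```python
    def viscoelastic_stress(strain_history, dt, E_elastic, E_viscous, tau):
        """March a scalar internal strain variable q with dq/dt = (strain - q)/tau;
        total stress = elastic part + viscous part carried by (strain - q)."""
        q = 0.0
        stresses = []
        for eps in strain_history:
            q += dt * (eps - q) / tau                 # update internal variable
            stresses.append(E_elastic * eps + E_viscous * (eps - q))
        return stresses

    # Step strain held for 1 s then removed, showing stress relaxation and recovery
    history = [0.01] * 100 + [0.0] * 100
    sigma = viscoelastic_stress(history, dt=0.01, E_elastic=1e6, E_viscous=5e5, tau=0.2)
    ```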

  2. Spatial generalised linear mixed models based on distances.

    PubMed

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.

  3. Variable selection with stepwise and best subset approaches

    PubMed Central

    2016-01-01

    While purposeful selection is performed partly by software and partly by hand, the stepwise and best subset approaches are automatically performed by software. Two R functions, stepAIC() and bestglm(), are well designed for stepwise and best subset regression, respectively. The stepAIC() function begins with a full or null model, and the method for stepwise regression can be specified in the direction argument with the character values "forward", "backward" and "both". The bestglm() function begins with a data frame containing explanatory variables and response variables. The response variable should be in the last column. A variety of goodness-of-fit criteria can be specified in the IC argument. The Bayesian information criterion (BIC) usually results in a more parsimonious model than the Akaike information criterion. PMID:27162786

  4. Crop status evaluations and yield predictions

    NASA Technical Reports Server (NTRS)

    Haun, J. R.

    1975-01-01

    A model was developed for predicting the day on which 50 percent of the wheat crop is planted in North Dakota. This model incorporates location as an independent variable. The Julian date when 50 percent of the crop was planted for the nine divisions of North Dakota over seven years was regressed on the 49 variables through the step-down multiple regression procedure. This procedure begins with all of the independent variables and sequentially removes variables that are below a predetermined level of significance after each step. The prediction equation was tested on daily data. The accuracy of the model is considered satisfactory for finding the historic dates on which to initiate the yield prediction model. Growth prediction models were also developed for spring wheat.
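    A hedged sketch of a step-down (backward elimination) procedure of the kind described above: start with all candidate predictors and repeatedly drop the least significant one until every remaining variable meets the threshold. The use of statsmodels OLS is an assumption about tooling for illustration, not the original implementation:

    ```python
    import numpy as np
    import statsmodels.api as sm

    def step_down(X, y, alpha=0.05):
        """Backward elimination by p-value on an ordinary least squares model.
        X is a 2-D array of candidate predictors, y the planting-date response."""
        cols = list(range(X.shape[1]))
        while cols:
            model = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
            pvals = model.pvalues[1:]                 # skip the intercept
            worst = int(np.argmax(pvals))
            if pvals[worst] <= alpha:                 # all predictors significant
                return cols, model
            cols.pop(worst)                           # remove the weakest predictor
        return cols, None
    ```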

  5. Expected values for pedometer-determined physical activity in older populations

    PubMed Central

    2009-01-01

    The purpose of this review is to update expected values for pedometer-determined physical activity in free-living healthy older populations. A search of the literature published since 2001 began with a keyword (pedometer, "step counter," "step activity monitor" or "accelerometer AND steps/day") search of PubMed, Cumulative Index to Nursing & Allied Health Literature (CINAHL), SportDiscus, and PsychInfo. An iterative process was then undertaken to abstract and verify studies of pedometer-determined physical activity (captured in terms of steps taken; distance only was not accepted) in free-living adult populations described as ≥ 50 years of age (studies that included samples which spanned this threshold were not included unless they provided at least some appropriately age-stratified data) and not specifically recruited based on any chronic disease or disability. We identified 28 studies representing at least 1,343 males and 3,098 females ranging in age from 50–94 years. Eighteen (or 64%) of the studies clearly identified using a Yamax pedometer model. Monitoring frames ranged from 3 days to 1 year; the modal length of time was 7 days (17 studies, or 61%). Mean pedometer-determined physical activity ranged from 2,015 steps/day to 8,938 steps/day. In those studies reporting such data, consistent patterns emerged: males generally took more steps/day than similarly aged females, steps/day decreased across study-specific age groupings, and BMI-defined normal weight individuals took more steps/day than overweight/obese older adults. The range of 2,000–9,000 steps/day likely reflects the true variability of physical activity behaviors in older populations. More explicit patterns, for example sex- and age-specific relationships, remain to be informed by future research endeavors. PMID:19706192

  6. Angular measurements of the dynein ring reveal a stepping mechanism dependent on a flexible stalk

    PubMed Central

    Lippert, Lisa G.; Dadosh, Tali; Hadden, Jodi A.; Karnawat, Vishakha; Diroll, Benjamin T.; Murray, Christopher B.; Holzbaur, Erika L. F.; Schulten, Klaus; Reck-Peterson, Samara L.; Goldman, Yale E.

    2017-01-01

    The force-generating mechanism of dynein differs from the force-generating mechanisms of other cytoskeletal motors. To examine the structural dynamics of dynein’s stepping mechanism in real time, we used polarized total internal reflection fluorescence microscopy with nanometer accuracy localization to track the orientation and position of single motors. By measuring the polarized emission of individual quantum nanorods coupled to the dynein ring, we determined the angular position of the ring and found that it rotates relative to the microtubule (MT) while walking. Surprisingly, the observed rotations were small, averaging only 8.3°, and were only weakly correlated with steps. Measurements at two independent labeling positions on opposite sides of the ring showed similar small rotations. Our results are inconsistent with a classic power-stroke mechanism, and instead support a flexible stalk model in which interhead strain rotates the rings through bending and hinging of the stalk. Mechanical compliances of the stalk and hinge determined based on a 3.3-μs molecular dynamics simulation account for the degree of ring rotation observed experimentally. Together, these observations demonstrate that the stepping mechanism of dynein is fundamentally different from the stepping mechanisms of other well-studied MT motors, because it is characterized by constant small-scale fluctuations of a large but flexible structure fully consistent with the variable stepping pattern observed as dynein moves along the MT. PMID:28533393

  7. Does a microprocessor-controlled prosthetic knee affect stair ascent strategies in persons with transfemoral amputation?

    PubMed

    Aldridge Whitehead, Jennifer M; Wolf, Erik J; Scoville, Charles R; Wilken, Jason M

    2014-10-01

    Stair ascent can be difficult for individuals with transfemoral amputation because of the loss of knee function. Most individuals with transfemoral amputation use either a step-to-step (nonreciprocal, advancing one stair at a time) or skip-step strategy (nonreciprocal, advancing two stairs at a time), rather than a step-over-step (reciprocal) strategy, because step-to-step and skip-step allow the leading intact limb to do the majority of work. A new microprocessor-controlled knee (Ottobock X2(®)) uses flexion/extension resistance to allow step-over-step stair ascent. We compared self-selected stair ascent strategies between conventional and X2(®) prosthetic knees, examined between-limb differences, and differentiated stair ascent mechanics between X2(®) users and individuals without amputation. We also determined which factors are associated with differences in knee position during initial contact and swing within X2(®) users. Fourteen individuals with transfemoral amputation participated in stair ascent sessions while using conventional and X2(®) knees. Ten individuals without amputation also completed a stair ascent session. Lower-extremity stair ascent joint angles, moments, and powers, as well as ground reaction forces, were calculated using inverse dynamics during ascent with a self-selected strategy and cadence and during step-over-step ascent at a controlled cadence. One individual with amputation self-selected a step-over-step strategy while using a conventional knee, while 10 individuals self-selected a step-over-step strategy while using X2(®) knees. Individuals with amputation used greater prosthetic knee flexion during initial contact (32.5°, p = 0.003) and swing (68.2°, p = 0.001) with higher intersubject variability while using X2(®) knees compared to conventional knees (initial contact: 1.6°, swing: 6.2°). The increased prosthetic knee flexion while using X2(®) knees normalized knee kinematics to individuals without amputation during swing (88.4°, p = 0.179) but not during initial contact (65.7°, p = 0.002). Prosthetic knee flexion during initial contact and swing was positively correlated with prosthetic limb hip power during pull-up (r = 0.641, p = 0.046) and push-up/early swing (r = 0.993, p < 0.001), respectively. Participants with transfemoral amputation were more likely to self-select a step-over-step strategy similar to that of individuals without amputation while using X2(®) knees than conventional prostheses. Additionally, the increased prosthetic knee flexion used with X2(®) knees placed large power demands on the hip during pull-up and push-up/early swing. A modified strategy that uses less knee flexion can be used to allow step-over-step ascent in individuals with less hip strength.

  8. Kinematic, muscular, and metabolic responses during exoskeletal-, elliptical-, or therapist-assisted stepping in people with incomplete spinal cord injury.

    PubMed

    Hornby, T George; Kinnaird, Catherine R; Holleran, Carey L; Rafferty, Miriam R; Rodriguez, Kelly S; Cain, Julie B

    2012-10-01

    Robotic-assisted locomotor training has demonstrated some efficacy in individuals with neurological injury and is slowly gaining clinical acceptance. Both exoskeletal devices, which control individual joint movements, and elliptical devices, which control endpoint trajectories, have been utilized with specific patient populations and are available commercially. No studies have directly compared training efficacy or patient performance during stepping between devices. The purpose of this study was to evaluate kinematic, electromyographic (EMG), and metabolic responses during elliptical- and exoskeletal-assisted stepping in individuals with incomplete spinal cord injury (SCI) compared with therapist-assisted stepping. A prospective, cross-sectional, repeated-measures design was used. Participants with incomplete SCI (n=11) performed 3 separate bouts of exoskeletal-, elliptical-, or therapist-assisted stepping. Unilateral hip and knee sagittal-plane kinematics, lower-limb EMG recordings, and oxygen consumption were compared across stepping conditions and with control participants (n=10) during treadmill stepping. Exoskeletal stepping kinematics closely approximated normal gait patterns, whereas significantly greater hip and knee flexion postures were observed during elliptical-assisted stepping. Measures of kinematic variability indicated consistent patterns in control participants and during exoskeletal-assisted stepping, whereas therapist- and elliptical-assisted stepping kinematics were more variable. Despite specific differences, EMG patterns generally were similar across stepping conditions in the participants with SCI. In contrast, oxygen consumption was consistently greater during therapist-assisted stepping. Limitations included a small sample size, lack of ability to evaluate kinetics during stepping, unilateral EMG recordings, and sagittal-plane kinematics. Despite specific differences in kinematics and EMG activity, metabolic activity was similar during stepping in each robotic device. Understanding potential differences and similarities in stepping performance with robotic assistance may be important in delivery of repeated locomotor training using robotic or therapist assistance and for consumers of robotic devices.

  9. The detailed measurement of foot clearance by young adults during stair descent.

    PubMed

    Telonio, A; Blanchet, S; Maganaris, C N; Baltzopoulos, V; McFadyen, B J

    2013-04-26

    Foot clearance is an important variable for understanding safe stair negotiation, but few studies have provided detailed measures of it. This paper presents a new method to calculate minimal shoe clearance during stair descent and compares it to previous literature. Seventeen healthy young subjects descended a five step staircase with step treads of 300 mm and step heights of 188 mm. Kinematic data were collected with an Optotrak system (model 3020) and three non-colinear infrared markers on the feet. Ninety points were digitized on the foot sole prior to data collection using a 6 marker probe and related to the triad of markers on the foot. The foot sole was reconstructed using the Matlab (version 7.0) "meshgrid" function and minimal distance to each step edge was calculated for the heel, toe and foot sole. Results showed significant differences in minimum clearance between sole, heel and toe, with the shoe sole being the closest and the toe the furthest. While the hind foot sole was closest for 69% of the time, the actual minimum clearance point on the sole did vary across subjects and staircase steps. This new method, and the findings on healthy young subjects, can be applied to future studies of other populations and staircase dimensions. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Why risk managers need information about spatio-temporal variability of natural hazards. Examples from practice

    NASA Astrophysics Data System (ADS)

    Zischg, Andreas

    2013-04-01

    Integrated risk management consists of risk prevention, early warning, intervention during an event and restoration/re-construction after an event. The prevention phase consists of land use planning measures with a long-term time horizon and of structural measures that sometimes have a lifespan of more than 30-50 years. In this case, it is important to analyse the long-term evolution of natural risks due to climate change or land use change. Besides this, the spatial and temporal variability of a natural hazard process during the course of an event is also important. The shift from "static" hazard and risk assessment towards a "dynamic" assessment offers benefits for improving the intervention phase in risk management. This contribution describes some examples and points out the benefits of this shift for risk management. One example is the variable disposition of small alpine catchments for runoff and its relevance for early warning. The disposition for runoff depends on the actual status of environmental variables such as soil moisture and the snowpack characteristics. A feasibility study showed how the monitoring of soil moisture and the status of the snowpack can be incorporated into a rule base for describing the temporal variability of the disposition for high runoff in alpine catchments. The study showed that this information about the system state of alpine catchments can be used to improve the assessment of the consequences of a weather forecast for risk management. Another example is the use of snowpack and weather monitoring and traffic intensity measurements for avalanche risk management on alpine roads. Here, the information about the spatio-temporal variability of the snow avalanches and the presence of vehicles can be used to improve the procedures for road closure and re-opening. Another example is the preparation of intervention plans for fire brigades and other relief units during urban floods. The simulation of the temporal evolution of a single flood event (time horizon of 0-24 hours) provides information for the elaboration of the intervention tactic. The following questions can be answered only by knowing the temporal and spatial evolution during an event itself: Which intervention priorities have to be set if the resources of the relief units are limited? Which early interventions could turn out to be unhelpful because the object to be protected will be flooded at a later stage anyway? What is the time available for setting up object protection measures and other flood protection measures? The most important factor in implementing the theory in practice is the focus on the interlinkages between the simulation of all possible scenarios in advance (scenario techniques, analysing the time-steps in flood simulation), the monitoring system (now-casting, real-time data), the scenarios of intervention measures and their interdependency with the hazard scenarios. The interlinkages can be set up and described with the expert system approach.

  11. On cat's eyes and multiple disjoint cells natural convection flow in tall tilted cavities

    NASA Astrophysics Data System (ADS)

    Báez, Elsa; Nicolás, Alfredo

    2014-10-01

    Natural convection fluid flow in air-filled tall tilted cavities is studied numerically with a direct projection method applied to the unsteady Boussinesq approximation in primitive variables. The study is focused on the so-called cat's eyes and multiple disjoint cells as the aspect ratio A and the angle of inclination ϕ of the cavity vary. Results have already been reported with primitive and stream function-vorticity variables. The former are validated against the latter, which in turn were validated through mesh size and time-step independence studies. The new results, complemented with the previous ones, reveal the invariant fluid motion and heat transfer properties of this thermal phenomenon, which is the novelty here.

  12. On the convergence of a fully discrete scheme of LES type to physically relevant solutions of the incompressible Navier-Stokes

    NASA Astrophysics Data System (ADS)

    Berselli, Luigi C.; Spirito, Stefano

    2018-06-01

    Obtaining reliable numerical simulations of turbulent fluids is a challenging problem in computational fluid mechanics. The large eddy simulation (LES) models are efficient tools to approximate turbulent fluids, and an important step in the validation of these models is the ability to reproduce relevant properties of the flow. In this paper, we consider a fully discrete approximation of the Navier-Stokes-Voigt model by an implicit Euler algorithm (with respect to the time variable) and a Fourier-Galerkin method (in the space variables). We prove the convergence to weak solutions of the incompressible Navier-Stokes equations satisfying the natural local entropy condition, hence selecting the so-called physically relevant solutions.

  13. Event-driven Monte Carlo: Exact dynamics at all time scales for discrete-variable models

    NASA Astrophysics Data System (ADS)

    Mendoza-Coto, Alejandro; Díaz-Méndez, Rogelio; Pupillo, Guido

    2016-06-01

    We present an algorithm for the simulation of the exact real-time dynamics of classical many-body systems with discrete energy levels. In the same spirit of kinetic Monte Carlo methods, a stochastic solution of the master equation is found, with no need to define any other phase-space construction. However, unlike existing methods, the present algorithm does not assume any particular statistical distribution to perform moves or to advance the time, and thus is a unique tool for the numerical exploration of fast and ultra-fast dynamical regimes. By decomposing the problem in a set of two-level subsystems, we find a natural variable step size, that is well defined from the normalization condition of the transition probabilities between the levels. We successfully test the algorithm with known exact solutions for non-equilibrium dynamics and equilibrium thermodynamical properties of Ising-spin models in one and two dimensions, and compare to standard implementations of kinetic Monte Carlo methods. The present algorithm is directly applicable to the study of the real-time dynamics of a large class of classical Markovian chains, and particularly to short-time situations where the exact evolution is relevant.
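
    To make the role of the variable step size concrete, the sketch below implements a conventional rejection-free kinetic Monte Carlo (n-fold way) update for a one-dimensional Ising chain, in which the waiting time of each event is drawn from an exponential distribution set by the total transition rate. It is a minimal baseline of the kind the abstract compares against, under assumed parameters (chain length, temperature, Glauber rates), not the authors' event-driven algorithm.

```python
# Minimal rejection-free kinetic Monte Carlo (n-fold way) sketch for a 1D Ising chain.
# Illustrates the conventional variable-time-step approach the abstract compares against;
# temperature, chain size, and the Glauber rate model are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
N, beta, J = 64, 0.7, 1.0
spins = rng.choice([-1, 1], size=N)

def flip_rates(s):
    # Glauber flip rate of each spin given its two neighbours (periodic chain).
    dE = 2.0 * J * s * (np.roll(s, 1) + np.roll(s, -1))
    return 1.0 / (1.0 + np.exp(beta * dE))

t, t_end = 0.0, 100.0
while t < t_end:
    rates = flip_rates(spins)
    R = rates.sum()
    # Variable time step: exponential waiting time set by the total rate.
    t += rng.exponential(1.0 / R)
    # Choose which spin flips, with probability proportional to its rate.
    i = rng.choice(N, p=rates / R)
    spins[i] *= -1

print("final magnetization:", spins.mean())
```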

  14. Transport induced by mean-eddy interaction: II. Analysis of transport processes

    NASA Astrophysics Data System (ADS)

    Ide, Kayo; Wiggins, Stephen

    2015-03-01

    We present a framework for the analysis of transport processes resulting from the mean-eddy interaction in a flow. The framework is based on the Transport Induced by the Mean-Eddy Interaction (TIME) method presented in a companion paper (Ide and Wiggins, 2014) [1]. The TIME method estimates the (Lagrangian) transport across stationary (Eulerian) boundaries defined by chosen streamlines of the mean flow. Our framework proceeds after first carrying out a sequence of preparatory steps that link the flow dynamics to the transport processes. This includes the construction of the so-called "instantaneous flux" as the Hovmöller diagram. Transport processes are studied by linking the signals of the instantaneous flux field to the dynamical variability of the flow. This linkage also reveals how the variability of the flow contributes to the transport. The spatio-temporal analysis of the flux diagram can be used to assess the efficiency of the variability in transport processes. We apply the method to the double-gyre ocean circulation model in the situation where the Rossby-wave mode dominates the dynamic variability. The spatio-temporal analysis shows that the inter-gyre transport is controlled by the circulating eddy vortices in the fast eastward jet region, whereas the basin-scale Rossby waves have very little impact.

  15. Auditory observation of stepping actions can cue both spatial and temporal components of gait in Parkinson's disease patients.

    PubMed

    Young, William R; Rodger, Matthew W M; Craig, Cathy M

    2014-05-01

    A common behavioural symptom of Parkinson's disease (PD) is reduced step length (SL). Whilst sensory cueing strategies can be effective in increasing SL and reducing gait variability, current cueing strategies conveying spatial or temporal information are generally confined to the use of either visual or auditory cue modalities, respectively. We describe a novel cueing strategy using ecologically-valid 'action-related' sounds (footsteps on gravel) that convey both spatial and temporal parameters of a specific action within a single cue. The current study used a real-time imitation task to examine whether PD affects the ability to re-enact changes in spatial characteristics of stepping actions, based solely on auditory information. In a second experimental session, these procedures were repeated using synthesized sounds derived from recordings of the kinetic interactions between the foot and walking surface. A third experimental session examined whether adaptations observed when participants walked to action-sounds were preserved when participants imagined either real recorded or synthesized sounds. Whilst healthy control participants were able to re-enact significant changes in SL in all cue conditions, these adaptations, in conjunction with reduced variability of SL, were only observed in the PD group when walking to, or imagining, the recorded sounds. The findings show that while recordings of stepping sounds convey action information that allows PD patients to re-enact and imagine spatial characteristics of gait, synthesis of sounds purely from gait kinetics is insufficient to evoke similar changes in behaviour, perhaps indicating that PD patients have a higher threshold to cue sensorimotor resonant responses. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. An empirical method to cluster objective nebulizer adherence data among adults with cystic fibrosis.

    PubMed

    Hoo, Zhe H; Campbell, Michael J; Curley, Rachael; Wildman, Martin J

    2017-01-01

    The purpose of using preventative inhaled treatments in cystic fibrosis is to improve health outcomes. Therefore, understanding the relationship between adherence to treatment and health outcome is crucial. Temporal variability, as well as absolute magnitude of adherence affects health outcomes, and there is likely to be a threshold effect in the relationship between adherence and outcomes. We therefore propose a pragmatic algorithm-based clustering method of objective nebulizer adherence data to better understand this relationship, and potentially, to guide clinical decisions. This clustering method consists of three related steps. The first step is to split adherence data for the previous 12 months into four 3-monthly sections. The second step is to calculate mean adherence for each section and to score the section based on mean adherence. The third step is to aggregate the individual scores to determine the final cluster ("cluster 1" = very low adherence; "cluster 2" = low adherence; "cluster 3" = moderate adherence; "cluster 4" = high adherence), and taking into account adherence trend as represented by sequential individual scores. The individual scores should be displayed along with the final cluster for clinicians to fully understand the adherence data. We present three cases to illustrate the use of the proposed clustering method. This pragmatic clustering method can deal with adherence data of variable duration (ie, can be used even if 12 months' worth of data are unavailable) and can cluster adherence data in real time. Empirical support for some of the clustering parameters is not yet available, but the suggested classifications provide a structure to investigate parameters in future prospective datasets in which there are accurate measurements of nebulizer adherence and health outcomes.
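
    A minimal sketch of the three-step procedure is given below. The per-section score thresholds and the score-to-cluster aggregation rule are hypothetical placeholders, since the abstract does not state the exact cut-offs or how the adherence trend is weighed.

```python
# Sketch of the three-step clustering of nebulizer adherence data described above.
# The section score thresholds and the score-to-cluster mapping are hypothetical
# placeholders; the abstract does not specify the exact cut-offs used.
import numpy as np

def score_section(mean_adherence):
    # Hypothetical scoring of a 3-month section by its mean adherence (%).
    if mean_adherence < 20:
        return 1   # very low
    elif mean_adherence < 50:
        return 2   # low
    elif mean_adherence < 80:
        return 3   # moderate
    return 4       # high

def cluster_adherence(daily_adherence):
    """daily_adherence: array of daily adherence percentages, most recent last."""
    days = np.asarray(daily_adherence, dtype=float)
    # Step 1: split the most recent 12 months (or whatever is available) into four sections.
    sections = [s for s in np.array_split(days[-365:], 4) if len(s) > 0]
    # Step 2: mean adherence and score per section.
    scores = [score_section(s.mean()) for s in sections]
    # Step 3: aggregate individual scores into a final cluster (here simply the rounded
    # mean score); a real implementation would also weigh the sequential trend.
    final_cluster = int(round(np.mean(scores)))
    return scores, final_cluster

scores, cluster = cluster_adherence(np.random.default_rng(1).uniform(0, 100, 365))
print("section scores:", scores, "-> cluster", cluster)
```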

  17. Structures of the Recurrence Plot of Heart Rate Variability Signal as a Tool for Predicting the Onset of Paroxysmal Atrial Fibrillation

    PubMed Central

    Mohebbi, Maryam; Ghassemian, Hassan; Asl, Babak Mohammadzadeh

    2011-01-01

    This paper aims to propose an effective paroxysmal atrial fibrillation (PAF) predictor which is based on the analysis of the heart rate variability (HRV) signal. Predicting the onset of PAF, based on non-invasive techniques, is clinically important and can be invaluable in order to avoid useless therapeutic interventions and to minimize the risks for the patients. This method consists of four steps: preprocessing, feature extraction, feature reduction, and classification. In the first step, the QRS complexes are detected from the electrocardiogram (ECG) signal and then the HRV signal is extracted. In the next step, the recurrence plot (RP) of the HRV signal is obtained and six features are extracted to characterize the basic patterns of the RP. These features consist of the length of the longest diagonal segments, the average length of the diagonal lines, entropy, trapping time, the length of the longest vertical line, and the recurrence trend. In the third step, these features are reduced to three features by the linear discriminant analysis (LDA) technique. Using LDA not only reduces the number of the input features, but also increases the classification accuracy by selecting the most discriminating features. Finally, a support vector machine-based classifier is used to classify the HRV signals. The performance of the proposed method in prediction of PAF episodes was evaluated using the Atrial Fibrillation Prediction Database, which consists of both 30-minute ECG recordings ending just prior to the onset of PAF and segments at least 45 min distant from any PAF events. The obtained sensitivity, specificity, and positive predictivity were 96.55%, 100%, and 100%, respectively. PMID:22606666
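
    The pipeline structure (recurrence-plot features, LDA reduction, SVM classification) can be sketched as follows, with synthetic stand-ins for the six RP features; the QRS detection, HRV extraction, and RP computation steps are not reproduced. Note that with only two classes, LDA yields a single discriminant component in this toy setting.

```python
# Schematic of the feature-reduction and classification steps (LDA -> SVM) on synthetic
# stand-ins for the six recurrence-plot features; not the authors' full PAF predictor.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# 100 HRV segments x 6 RP features (longest diagonal, mean diagonal length, entropy, ...).
X = rng.normal(size=(100, 6))
y = rng.integers(0, 2, size=100)          # 1 = segment preceding a PAF episode
X[y == 1] += 0.8                          # synthetic class separation

clf = make_pipeline(
    StandardScaler(),
    LinearDiscriminantAnalysis(n_components=1),   # feature-reduction step (1 component for 2 classes)
    SVC(kernel="rbf"),                            # classification step
)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```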

  18. Estimating heterotrophic respiration at large scales: challenges, approaches, and next steps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond-Lamberty, Benjamin; Epron, Daniel; Harden, Jennifer W.

    2016-06-27

    Heterotrophic respiration (HR), the aerobic and anaerobic processes mineralizing organic matter, is a key carbon flux but one impossible to measure at scales significantly larger than small experimental plots. This impedes our ability to understand carbon and nutrient cycles, benchmark models, or reliably upscale point measurements. Given that a new generation of highly mechanistic, genomic-specific global models is not imminent, we suggest that a useful step to improve this situation would be the development of "Decomposition Functional Types" (DFTs). Analogous to plant functional types (PFTs), DFTs would abstract and capture important differences in HR metabolism and flux dynamics, allowing models to efficiently group and vary these characteristics across space and time. We argue that DFTs should be initially informed by top-down expert opinion, but ultimately developed using bottom-up, data-driven analyses, and provide specific examples of potential dependent and independent variables that could be used. We present and discuss an example clustering analysis to show how model-produced annual HR can be broken into distinct groups associated with global variability in biotic and abiotic factors, and demonstrate that these groups are distinct from already-existing PFTs. A similar analysis, incorporating observational data, could form a basis for future DFTs. Finally, we suggest next steps and critical priorities: collection and synthesis of existing data; more in-depth analyses combining open data with high-performance computing; rigorous testing of analytical results; and planning by the global modeling community for decoupling decomposition from fixed site data. These are all critical steps to build a foundation for DFTs in global models, thus providing the ecological and climate change communities with robust, scalable estimates of HR at large scales.
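
    The kind of clustering analysis mentioned above can be sketched as a simple k-means grouping of grid cells by annual HR and candidate abiotic drivers. The feature set, synthetic values, and the choice of five clusters are assumptions for illustration only, not the analysis performed in the paper.

```python
# Illustrative grouping of model grid cells into candidate "Decomposition Functional Types"
# by k-means on annual heterotrophic respiration and two abiotic drivers (synthetic data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_cells = 5000
features = np.column_stack([
    rng.gamma(2.0, 200.0, n_cells),      # annual HR (g C m-2 yr-1), synthetic
    rng.normal(10.0, 8.0, n_cells),      # mean annual temperature (deg C)
    rng.gamma(2.0, 400.0, n_cells),      # annual precipitation (mm)
])

X = StandardScaler().fit_transform(features)          # put variables on a common scale
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
for k in range(5):
    print(f"candidate DFT {k}: {np.sum(labels == k)} grid cells")
```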

  19. Generic and sequence-variant specific molecular assays for the detection of the highly variable Grapevine leafroll-associated virus 3.

    PubMed

    Chooi, Kar Mun; Cohen, Daniel; Pearson, Michael N

    2013-04-01

    Grapevine leafroll-associated virus 3 (GLRaV-3) is an economically important virus, which is found in all grapevine growing regions worldwide. Its accurate detection in nursery and field samples is of high importance for certification schemes and disease management programmes. To reduce false negatives that can be caused by sequence variability, a new universal primer pair was designed against a divergent sequence data set, targeting the open reading frame 4 (heat shock protein 70 homologue gene), and optimised for conventional one-step RT-PCR and one-step SYBR Green real-time RT-PCR assays. In addition, primer pairs for the simultaneous detection of specific GLRaV-3 variants from groups 1, 2, 6 (specifically NZ-1) and the outlier NZ2 variant, and the generic detection of variants from groups 1 to 5 were designed and optimised as a conventional one-step multiplex RT-PCR assay using the plant nad5 gene as an internal control (i.e. one-step hexaplex RT-PCR). Results showed that the generic and variant-specific assays detected in vitro RNA transcripts over a range of 1×10^1 to 1×10^8 copies of amplicon per μl diluted in healthy total RNA from Vitis vinifera cv. Cabernet Sauvignon. Furthermore, the assays were employed effectively to screen 157 germplasm and 159 commercial field samples. These results demonstrate that the GLRaV-3 generic and variant-specific assays are prospective tools that will be beneficial for certification schemes and disease management programmes, as well as biological and epidemiological studies of the divergent GLRaV-3 populations. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Real‐time monitoring and control of the load phase of a protein A capture step

    PubMed Central

    Rüdt, Matthias; Brestrich, Nina; Rolinger, Laura

    2016-01-01

    ABSTRACT The load phase in preparative Protein A capture steps is commonly not controlled in real‐time. The load volume is generally based on an offline quantification of the monoclonal antibody (mAb) prior to loading and on a conservative column capacity determined by resin‐life time studies. While this results in a reduced productivity in batch mode, the bottleneck of suitable real‐time analytics has to be overcome in order to enable continuous mAb purification. In this study, Partial Least Squares Regression (PLS) modeling on UV/Vis absorption spectra was applied to quantify mAb in the effluent of a Protein A capture step during the load phase. A PLS model based on several breakthrough curves with variable mAb titers in the HCCF was successfully calibrated. The PLS model predicted the mAb concentrations in the effluent of a validation experiment with a root mean square error (RMSE) of 0.06 mg/mL. The information was applied to automatically terminate the load phase, when a product breakthrough of 1.5 mg/mL was reached. In a second part of the study, the sensitivity of the method was further increased by only considering small mAb concentrations in the calibration and by subtracting an impurity background signal. The resulting PLS model exhibited a RMSE of prediction of 0.01 mg/mL and was successfully applied to terminate the load phase, when a product breakthrough of 0.15 mg/mL was achieved. The proposed method has hence potential for the real‐time monitoring and control of capture steps at large scale production. This might enhance the resin capacity utilization, eliminate time‐consuming offline analytics, and contribute to the realization of continuous processing. Biotechnol. Bioeng. 2017;114: 368–373. © 2016 The Authors. Biotechnology and Bioengineering published by Wiley Periodicals, Inc. PMID:27543789
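
    A minimal sketch of the monitoring idea, assuming synthetic linear spectra and an arbitrary breakthrough threshold, is shown below: a PLS model is calibrated on spectra with known mAb concentrations and then used to terminate a simulated load phase once the predicted effluent concentration crosses the threshold.

```python
# Minimal sketch of PLS-based breakthrough monitoring: calibrate a PLS model on UV/Vis
# spectra with known mAb concentrations, then stop a simulated load phase when the
# predicted effluent concentration crosses a threshold. Spectra, noise level, and the
# 1.5 mg/mL criterion are synthetic/illustrative choices.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
wavelengths, n_cal = 200, 60
conc_cal = rng.uniform(0.0, 2.0, n_cal)                     # mg/mL in calibration set
basis = rng.normal(size=wavelengths)                        # synthetic pure-component spectrum
spectra_cal = np.outer(conc_cal, basis) + 0.05 * rng.normal(size=(n_cal, wavelengths))

pls = PLSRegression(n_components=3).fit(spectra_cal, conc_cal)

# Simulated load phase: effluent concentration slowly rises as the column saturates.
threshold = 1.5                                             # mg/mL breakthrough criterion
for minute, true_conc in enumerate(np.linspace(0.0, 2.0, 120)):
    spectrum = true_conc * basis + 0.05 * rng.normal(size=wavelengths)
    pred = pls.predict(spectrum.reshape(1, -1)).item()
    if pred >= threshold:
        print(f"terminate load at minute {minute}: predicted {pred:.2f} mg/mL")
        break
```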

  1. Multi-site Stochastic Simulation of Daily Streamflow with Markov Chain and KNN Algorithm

    NASA Astrophysics Data System (ADS)

    Mathai, J.; Mujumdar, P.

    2017-12-01

    A key focus of this study is to develop a method which is physically consistent with the hydrologic processes and can capture short-term characteristics of the daily hydrograph as well as the correlation of streamflow in the temporal and spatial domains. In complex water resource systems, flow fluctuations at small time intervals require that discretisation be done at small time scales such as daily scales. Also, simultaneous generation of synthetic flows at different sites in the same basin is required. We propose a method to equip water managers with a streamflow generator within a stochastic streamflow simulation framework. The motivation for the proposed method is to generate sequences that extend beyond the variability represented in the historical record of streamflow time series. The method has two steps: in step 1, daily flow is generated independently at each station by a two-state Markov chain, with rising limb increments randomly sampled from a Gamma distribution and the falling limb modelled as an exponential recession, and in step 2, the streamflow generated in step 1 is input to a nonparametric K-nearest neighbor (KNN) time series bootstrap resampler. The KNN model, being data driven, does not require assumptions on the dependence structure of the time series. A major limitation of KNN-based streamflow generators is that they do not produce new values, but merely reshuffle the historical data to generate realistic streamflow sequences. However, daily flow generated using the Markov chain approach is capable of generating a rich variety of streamflow sequences. Furthermore, the rising and falling limbs of the daily hydrograph represent different physical processes, and hence they need to be modelled individually. Thus, our method combines the strengths of the two approaches. We show the utility of the method and the improvement over the traditional KNN by simulating daily streamflow sequences at 7 locations in the Godavari River basin in India.
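
    The two-step construction can be illustrated at a single site as follows; the transition probabilities, Gamma parameters, and recession constant are invented for the sketch, and the KNN resampler is reduced to a lag-1 nearest-neighbour bootstrap.

```python
# Single-site sketch of the two-step generator described above: (1) a two-state Markov
# chain produces rising/falling spells, with Gamma-distributed rising-limb increments and
# an exponential recession on falling limbs; (2) a K-nearest-neighbour bootstrap resamples
# day-to-day transitions from the synthetic record. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_days = 1000
p_next_rise = np.array([[0.7, 0.3],    # row 0: current state = rising
                        [0.4, 0.6]])   # row 1: current state = falling
k_shape, k_scale, recession = 2.0, 5.0, 0.85

# Step 1: Markov-chain / Gamma / recession generator.
q = np.empty(n_days)
q[0], state = 20.0, 0
for t in range(1, n_days):
    state = 0 if rng.random() < p_next_rise[state, 0] else 1
    if state == 0:                                  # rising limb
        q[t] = q[t - 1] + rng.gamma(k_shape, k_scale)
    else:                                           # falling limb (exponential recession)
        q[t] = q[t - 1] * recession
q = np.maximum(q, 0.1)

# Step 2: lag-1 KNN bootstrap of the step-1 series.
def knn_resample(series, length, k=5):
    out = [series[0]]
    for _ in range(length - 1):
        dist = np.abs(series[:-1] - out[-1])
        nbrs = np.argsort(dist)[:k]                 # k most similar historical days
        out.append(series[rng.choice(nbrs) + 1])    # follow the chosen day's successor
    return np.array(out)

synthetic = knn_resample(q, 365)
print("mean flow:", synthetic.mean(), "max flow:", synthetic.max())
```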

  2. Gaia DR1 documentation Chapter 6: Variability

    NASA Astrophysics Data System (ADS)

    Eyer, L.; Rimoldini, L.; Guy, L.; Holl, B.; Clementini, G.; Cuypers, J.; Mowlavi, N.; Lecoeur-Taïbi, I.; De Ridder, J.; Charnas, J.; Nienartowicz, K.

    2017-12-01

    This chapter describes the photometric variability processing of the Gaia DR1 data. Coordination Unit 7 is responsible for the variability analysis of over a billion celestial sources, in particular for the definition, design, development, validation and provision of a software package for the data processing of photometrically variable objects. Data Processing Centre Geneva (DPCG) responsibilities cover all issues related to the computational part of the CU7 analysis. These span hardware provisioning, including selection, deployment and optimisation of suitable hardware; choosing and developing the software architecture; defining data and scientific workflows; as well as operational activities such as configuration management, data import, time series reconstruction, storage and processing handling, visualisation and data export. CU7/DPCG is also responsible for interaction with other DPCs and CUs, software and programming training for the CU7 members, scientific software quality control, and management of the software and data lifecycle. Details about the specific data treatment steps of the Gaia DR1 data products are found in Eyer et al. (2017) and are not repeated here. The variability content of the Gaia DR1 focusses on a subsample of Cepheids and RR Lyrae stars around the South ecliptic pole, showcasing the performance of the Gaia photometry with respect to variable objects.

  3. New insights into time series analysis. II - Non-correlated observations

    NASA Astrophysics Data System (ADS)

    Ferreira Lopes, C. E.; Cross, N. J. G.

    2017-08-01

    Context. Statistical parameters are used to draw conclusions in a vast number of fields, such as finance, weather, industry, and science. These parameters are also used to identify variability patterns in photometric data to select non-stochastic variations that are indicative of astrophysical effects. New, more efficient, selection methods are mandatory to analyze the huge amount of astronomical data. Aims: We seek to improve the current methods used to select non-stochastic variations in non-correlated data. Methods: We used standard and new data-mining parameters to analyze non-correlated data to find the best way to discriminate between stochastic and non-stochastic variations. A new approach that includes a modified Strateva function was used to select non-stochastic variations. Monte Carlo simulations and public time-domain data were used to estimate its accuracy and performance. Results: We introduce 16 modified statistical parameters covering different features of statistical distributions such as average, dispersion, and shape parameters. Many dispersion and shape parameters are unbound parameters, i.e., equations that do not require the calculation of the average. Unbound parameters are computed with a single loop, hence decreasing running time. Moreover, the majority of these parameters have lower errors than previous parameters, which is mainly observed for distributions with few measurements. A set of non-correlated variability indices, sample size corrections, and a new noise model along with tests of different apertures and cut-offs on the data (BAS approach) are introduced. The number of mis-selections is reduced by about 520% using a single waveband and 1200% combining all wavebands. On the other hand, the even-mean also improves the correlated indices introduced in Paper I. The mis-selection rate is reduced by about 18% if the even-mean is used instead of the mean to compute the correlated indices in the WFCAM database. Even-statistics allows us to improve the effectiveness of both correlated and non-correlated indices. Conclusions: The selection of non-stochastic variations is improved by non-correlated indices. The even-averages provide a better estimation of the mean and median for almost all statistical distributions analyzed. The correlated variability indices, which are proposed in the first paper of this series, are also improved if the even-mean is used. The even-parameters will also be useful for classifying light curves in the last step of this project. We consider that the first step of this project, where we set new techniques and methods that provide a huge improvement in the efficiency of selecting variable stars, is now complete. Many of these techniques may be useful for a large number of fields. Next, we will commence a new step of this project regarding the analysis of period search methods.

  4. Iterative spectral methods and spectral solutions to compressible flows

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Zang, T. A.

    1982-01-01

    A spectral multigrid scheme is described which can solve pseudospectral discretizations of self-adjoint elliptic problems in O(N log N) operations. An iterative technique for efficiently implementing semi-implicit time-stepping for pseudospectral discretizations of Navier-Stokes equations is discussed. This approach can handle variable coefficient terms in an effective manner. Pseudospectral solutions of compressible flow problems are presented. These include one dimensional problems and two dimensional Euler solutions. Results are given both for shock-capturing approaches and for shock-fitting ones.

  5. TOWARD QUANTITATIVE OPTICAL COHERENCE TOMOGRAPHY ANGIOGRAPHY: Visualizing Blood Flow Speeds in Ocular Pathology Using Variable Interscan Time Analysis.

    PubMed

    Ploner, Stefan B; Moult, Eric M; Choi, WooJhon; Waheed, Nadia K; Lee, ByungKun; Novais, Eduardo A; Cole, Emily D; Potsaid, Benjamin; Husvogt, Lennart; Schottenhamml, Julia; Maier, Andreas; Rosenfeld, Philip J; Duker, Jay S; Hornegger, Joachim; Fujimoto, James G

    2016-12-01

    Currently available optical coherence tomography angiography systems provide information about blood flux but only limited information about blood flow speed. The authors develop a method for mapping the previously proposed variable interscan time analysis (VISTA) algorithm into a color display that encodes relative blood flow speed. Optical coherence tomography angiography was performed with a 1,050 nm, 400 kHz A-scan rate, swept source optical coherence tomography system using a 5 repeated B-scan protocol. Variable interscan time analysis was used to compute the optical coherence tomography angiography signal from B-scan pairs having 1.5 millisecond and 3.0 milliseconds interscan times. The resulting VISTA data were then mapped to a color space for display. The authors evaluated the VISTA visualization algorithm in normal eyes (n = 2), nonproliferative diabetic retinopathy eyes (n = 6), proliferative diabetic retinopathy eyes (n = 3), geographic atrophy eyes (n = 4), and exudative age-related macular degeneration eyes (n = 2). All eyes showed blood flow speed variations, and all eyes with pathology showed abnormal blood flow speeds compared with controls. The authors developed a novel method for mapping VISTA into a color display, allowing visualization of relative blood flow speeds. The method was found useful, in a small case series, for visualizing blood flow speeds in a variety of ocular diseases and serves as a step toward quantitative optical coherence tomography angiography.

  6. A Practical Framework Toward Prediction of Breaking Force and Disintegration of Tablet Formulations Using Machine Learning Tools.

    PubMed

    Akseli, Ilgaz; Xie, Jingjin; Schultz, Leon; Ladyzhynsky, Nadia; Bramante, Tommasina; He, Xiaorong; Deanne, Rich; Horspool, Keith R; Schwabe, Robert

    2017-01-01

    Enabling the paradigm of quality by design requires the ability to quantitatively correlate material properties and process variables to measurable product performance attributes. Conventional, quality-by-test methods for determining tablet breaking force and disintegration time usually involve destructive tests, which consume a significant amount of time and labor and provide limited information. Recent advances in material characterization, statistical analysis, and machine learning have provided multiple tools that have the potential to develop nondestructive, fast, and accurate approaches in drug product development. In this work, a methodology to predict the breaking force and disintegration time of tablet formulations using nondestructive ultrasonics and machine learning tools was developed. The input variables to the model include intrinsic properties of the formulation and extrinsic process variables influencing the tablet during manufacturing. The model has been applied to predict breaking force and disintegration time using small quantities of active pharmaceutical ingredient and prototype formulation designs. The novel approach presented is a step forward toward rational design of a robust drug product based on insight into the performance of common materials during formulation and process development. It may also help expedite the drug product development timeline and reduce active pharmaceutical ingredient usage while improving the efficiency of the overall process. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
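
    The modelling idea, stripped of the ultrasonic measurement component, can be sketched as a multi-output regression from formulation and process variables to breaking force and disintegration time; the feature names, synthetic data, and choice of a random forest are assumptions for illustration only.

```python
# Generic sketch of the modelling idea: map formulation and process variables to breaking
# force and disintegration time with a machine-learning regressor. Features, synthetic
# relationships, and the random-forest choice are illustrative, not the paper's model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
n = 300
X = np.column_stack([
    rng.uniform(50, 300, n),     # compaction pressure surrogate
    rng.uniform(0.1, 0.5, n),    # binder fraction
    rng.uniform(50, 250, n),     # API particle size (um)
])
breaking_force = 0.05 * X[:, 0] + 20 * X[:, 1] - 0.01 * X[:, 2] + rng.normal(0, 0.5, n)
disintegration = 2.0 * X[:, 1] * X[:, 0] / 100 + rng.normal(0, 0.5, n)
Y = np.column_stack([breaking_force, disintegration])    # two targets predicted jointly

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, Y_tr)
print("R^2 on held-out tablets:", model.score(X_te, Y_te))
```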

  7. Statistical Frequency-Dependent Analysis of Trial-to-Trial Variability in Single Time Series by Recurrence Plots.

    PubMed

    Tošić, Tamara; Sellers, Kristin K; Fröhlich, Flavio; Fedotenkova, Mariia; Beim Graben, Peter; Hutt, Axel

    2015-01-01

    For decades, research in neuroscience has supported the hypothesis that brain dynamics exhibits recurrent metastable states connected by transients, which together encode fundamental neural information processing. To understand the system's dynamics it is important to detect such recurrence domains, but it is challenging to extract them from experimental neuroscience datasets due to the large trial-to-trial variability. The proposed methodology extracts recurrent metastable states in univariate time series by transforming datasets into their time-frequency representations and computing recurrence plots based on instantaneous spectral power values in various frequency bands. Additionally, a new statistical inference analysis compares different trial recurrence plots with corresponding surrogates to obtain statistically significant recurrent structures. This combination of methods is validated by applying it to two artificial datasets. In a final study of visually-evoked Local Field Potentials in partially anesthetized ferrets, the methodology is able to reveal recurrence structures of neural responses with trial-to-trial variability. Focusing on different frequency bands, the δ-band activity is much less recurrent than α-band activity. Moreover, α-activity is susceptible to pre-stimuli, while δ-activity is much less sensitive to pre-stimuli. This difference in recurrence structures in different frequency bands indicates diverse underlying information processing steps in the brain.

  8. Statistical Frequency-Dependent Analysis of Trial-to-Trial Variability in Single Time Series by Recurrence Plots

    PubMed Central

    Tošić, Tamara; Sellers, Kristin K.; Fröhlich, Flavio; Fedotenkova, Mariia; beim Graben, Peter; Hutt, Axel

    2016-01-01

    For decades, research in neuroscience has supported the hypothesis that brain dynamics exhibits recurrent metastable states connected by transients, which together encode fundamental neural information processing. To understand the system's dynamics it is important to detect such recurrence domains, but it is challenging to extract them from experimental neuroscience datasets due to the large trial-to-trial variability. The proposed methodology extracts recurrent metastable states in univariate time series by transforming datasets into their time-frequency representations and computing recurrence plots based on instantaneous spectral power values in various frequency bands. Additionally, a new statistical inference analysis compares different trial recurrence plots with corresponding surrogates to obtain statistically significant recurrent structures. This combination of methods is validated by applying it to two artificial datasets. In a final study of visually-evoked Local Field Potentials in partially anesthetized ferrets, the methodology is able to reveal recurrence structures of neural responses with trial-to-trial variability. Focusing on different frequency bands, the δ-band activity is much less recurrent than α-band activity. Moreover, α-activity is susceptible to pre-stimuli, while δ-activity is much less sensitive to pre-stimuli. This difference in recurrence structures in different frequency bands indicates diverse underlying information processing steps in the brain. PMID:26834580
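
    The core construction, without the surrogate-based significance test, can be sketched as follows: compute a time-frequency representation of one trial, take instantaneous power in a chosen band, and threshold pairwise distances to obtain a recurrence plot. The synthetic signal, band limits, and threshold are arbitrary choices.

```python
# Sketch of a frequency-dependent recurrence plot for a single time series: spectrogram,
# band-limited instantaneous power, then a thresholded pairwise-distance matrix.
import numpy as np
from scipy.signal import spectrogram

fs = 250.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) * (1 + 0.5 * np.sin(2 * np.pi * 0.3 * t)) \
    + 0.5 * np.random.default_rng(0).normal(size=t.size)

f, tt, Sxx = spectrogram(x, fs=fs, nperseg=128, noverlap=96)
band = (f >= 8) & (f <= 12)                     # alpha-band power (assumed band limits)
power = Sxx[band].mean(axis=0)

# Recurrence plot: R[i, j] = 1 if the band power at times i and j is similar.
d = np.abs(power[:, None] - power[None, :])
threshold = 0.1 * d.max()                       # arbitrary similarity threshold
R = (d <= threshold).astype(int)
print("recurrence rate:", R.mean())
```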

  9. The role of environmental variables in structuring landscape-scale species distributions in seafloor habitats.

    PubMed

    Kraan, Casper; Aarts, Geert; Van der Meer, Jaap; Piersma, Theunis

    2010-06-01

    Ongoing statistical sophistication allows a shift from describing species' spatial distributions toward statistically disentangling the possible roles of environmental variables in shaping species distributions. Based on a landscape-scale benthic survey in the Dutch Wadden Sea, we show the merits of spatially explicit generalized estimating equations (GEE). The intertidal macrozoobenthic species, Macoma balthica, Cerastoderma edule, Marenzelleria viridis, Scoloplos armiger, Corophium volutator, and Urothoe poseidonis served as test cases, with median grain-size and inundation time as typical environmental explanatory variables. GEEs outperformed spatially naive generalized linear models (GLMs), and removed much residual spatial structure, indicating the importance of median grain-size and inundation time in shaping landscape-scale species distributions in the intertidal. GEE regression coefficients were smaller than those attained with GLM, and GEE standard errors were larger. The best fitting GEE for each species was used to predict species' density in relation to median grain-size and inundation time. Although no drastic changes were noted compared to previous work that described habitat suitability for benthic fauna in the Wadden Sea, our predictions provided more detailed and unbiased estimates of the determinants of species-environment relationships. We conclude that spatial GEEs offer the necessary methodological advances to further steps toward linking pattern to process.
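
    A minimal sketch of such a spatially blocked GEE, assuming hypothetical column names, synthetic counts, and an exchangeable working correlation, is given below using statsmodels.

```python
# Sketch of a GEE of the kind described above: counts of a benthic species modelled
# against median grain size and inundation time, with sampling blocks as the grouping
# variable. Data, variable names, and the exchangeable working correlation are
# illustrative assumptions, not the authors' exact specification.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 400
df = pd.DataFrame({
    "block": rng.integers(0, 40, n),                    # spatial sampling block
    "grain_size": rng.normal(180, 40, n),               # median grain size (um)
    "inundation": rng.uniform(0.2, 0.8, n),             # fraction of time inundated
})
eta = -2.0 + 0.01 * df["grain_size"] + 1.5 * df["inundation"]
df["count"] = rng.poisson(np.exp(eta - eta.mean() + 1.0))

model = sm.GEE.from_formula(
    "count ~ grain_size + inundation",
    groups="block",
    data=df,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```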

  10. JCDSA: a joint covariate detection tool for survival analysis on tumor expression profiles.

    PubMed

    Wu, Yiming; Liu, Yanan; Wang, Yueming; Shi, Yan; Zhao, Xudong

    2018-05-29

    Survival analysis on tumor expression profiles has always been a key issue for subsequent biological experimental validation. It is crucial to select features that closely correspond to survival time. Furthermore, it is important to select features that best discriminate between low-risk and high-risk groups of patients. Common features derived from the two aspects may provide candidate variables for prognosis of cancer. Based on this two-step feature selection strategy, we develop a joint covariate detection tool for survival analysis on tumor expression profiles. Significant features, which are not only consistent with survival time but also associated with the categories of patients with different survival risks, are chosen. Using the miRNA expression data (Level 3) of 548 patients with glioblastoma multiforme (GBM) as an example, miRNA candidates for prognosis of cancer are selected. The reliability of the miRNAs selected using this tool is demonstrated by 100 simulations. Furthermore, it is discovered that significant covariates are not directly composed of individually significant variables. Joint covariate detection provides a viewpoint for selecting variables which are not individually but jointly significant. In addition, it helps to select features which are not only consistent with survival time but also associated with prognosis risk. The software is available at http://bio-nefu.com/resource/jcdsa.
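
    A hedged stand-in for the two-aspect screen is sketched below: a feature is kept only if it is associated with survival time in a univariate Cox model and also separates a median-split low-/high-value grouping in a log-rank test. This illustrates the general strategy only, not the JCDSA algorithm itself, and the simulated miRNA features are placeholders.

```python
# Illustrative two-aspect feature screen in the spirit of joint covariate detection:
# (1) univariate Cox association with survival time, (2) log-rank separation of a
# median-split grouping. Thresholds and simulated data are assumptions for the sketch.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
n = 200
time = rng.exponential(100, n)
event = rng.integers(0, 2, n)
features = pd.DataFrame({f"miR_{i}": rng.normal(size=n) for i in range(20)})
features["miR_0"] += 0.01 * time                    # one feature weakly tied to survival

selected = []
for name in features.columns:
    df = pd.DataFrame({"T": time, "E": event, name: features[name]})
    cox_p = CoxPHFitter().fit(df, duration_col="T", event_col="E").summary.loc[name, "p"]
    hi = features[name] > features[name].median()
    lr_p = logrank_test(time[hi], time[~hi], event[hi], event[~hi]).p_value
    if cox_p < 0.05 and lr_p < 0.05:
        selected.append(name)
print("jointly significant candidates:", selected)
```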

  11. Reliability of Fitness Tests Using Methods and Time Periods Common in Sport and Occupational Management

    PubMed Central

    Burnstein, Bryan D.; Steele, Russell J.; Shrier, Ian

    2011-01-01

    Context: Fitness testing is used frequently in many areas of physical activity, but the reliability of these measurements under real-world, practical conditions is unknown. Objective: To evaluate the reliability of specific fitness tests using the methods and time periods used in the context of real-world sport and occupational management. Design: Cohort study. Setting: Eighteen different Cirque du Soleil shows. Patients or Other Participants: Cirque du Soleil physical performers who completed 4 consecutive tests (6-month intervals) and were free of injury or illness at each session (n = 238 of 701 physical performers). Intervention(s): Performers completed 6 fitness tests on each assessment date: dynamic balance, Harvard step test, handgrip, vertical jump, pull-ups, and 60-second jump test. Main Outcome Measure(s): We calculated the intraclass correlation coefficient (ICC) and limits of agreement between baseline and each time point and the ICC over all 4 time points combined. Results: Reliability was acceptable (ICC > 0.6) over an 18-month time period for all pairwise comparisons and all time points together for the handgrip, vertical jump, and pull-up assessments. The Harvard step test and 60-second jump test had poor reliability (ICC < 0.6) between baseline and other time points. When we excluded the baseline data and calculated the ICC for 6-month, 12-month, and 18-month time points, both the Harvard step test and 60-second jump test demonstrated acceptable reliability. Dynamic balance was unreliable in all contexts. Limits-of-agreement analysis demonstrated considerable intraindividual variability for some tests and a learning effect by administrators on others. Conclusions: Five of the 6 tests in this battery had acceptable reliability over an 18-month time frame, but the values for certain individuals may vary considerably from time to time for some tests. Specific tests may require a learning period for administrators. PMID:22488138
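
    For reference, the reliability statistic can be computed directly from a subjects-by-sessions score matrix; the sketch below implements a two-way random-effects, single-measurement ICC(2,1) on synthetic data standing in for repeated fitness tests.

```python
# Worked example of the reliability statistic used above: ICC(2,1) computed from a
# subjects x sessions score matrix (Shrout-Fleiss two-way random effects, single score).
import numpy as np

def icc_2_1(scores):
    """scores: (n subjects, k sessions) array."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)            # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)            # between sessions
    resid = scores - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))                  # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(9)
true_score = rng.normal(50, 10, size=(60, 1))                        # 60 performers
sessions = true_score + rng.normal(0, 4, size=(60, 4))               # 4 repeated tests
print("ICC(2,1) =", round(icc_2_1(sessions), 3))
```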

  12. Predictors of symptom congruence among patients with acute myocardial infarction.

    PubMed

    Fox-Wasylyshyn, Susan

    2012-01-01

    The extent of congruence between one's symptom experience and preconceived ideas about the nature of myocardial infarction symptoms (ie, symptom congruence) can influence when acute myocardial infarction (AMI) patients seek medical care. Lengthy delays impede timely receipt of medical interventions and result in greater morbidity and mortality. However, little is known about the factors that contribute to symptom congruence. Hence, the purpose of this study was to examine how AMI patients' symptom experiences and patients' demographic and clinical characteristics contribute to symptom congruence. Secondary data analyses were performed on interview data that were collected from 135 AMI patients. Hierarchical multiple regression analyses were used to examine how specific symptom attributes and demographic and clinical characteristics contribute to symptom congruence. Chest pain/discomfort and other symptom variables (type and location) were included in step 1 of the analysis, whereas symptom severity and demographic and clinical factors were included in step 2. In a second analysis, quality descriptors of discomfort replaced chest pain/discomfort in step 1. Although chest pain/discomfort, and quality descriptors of heaviness and cutting were significant in step 1 of their respective analyses, all became nonsignificant when the variables in step 2 were added to the analyses. Severe discomfort (β = .29, P < .001), history of AMI (β = .21, P < .01), and male sex (β = .17, P < .05) were significant predictors of symptom congruence in the first analysis. Only severe discomfort (β = .23, P < .01) and history of AMI (β = .17, P < .05) were predictive of symptom congruence in the second analysis. Although the location and quality of discomfort were important components of symptom congruence, symptom severity outweighed their importance. Nonsevere symptoms were less likely to meet the expectations of AMI symptoms by those experiencing this event. Those without a previous history of AMI also experienced lower levels of symptom congruence. Implications pertaining to these findings are discussed.
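
    The hierarchical strategy amounts to comparing nested regression models; a sketch with placeholder variable names and simulated data is shown below, reporting the change in R-squared when the step 2 predictors are added.

```python
# Sketch of a hierarchical (two-step) regression: a step-1 model with symptom-type
# variables, then a step-2 model adding severity and clinical/demographic factors.
# Variable names and simulated data are placeholders, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 135
df = pd.DataFrame({
    "chest_pain": rng.integers(0, 2, n),
    "severe_discomfort": rng.integers(0, 2, n),
    "prior_ami": rng.integers(0, 2, n),
    "male": rng.integers(0, 2, n),
})
df["congruence"] = (0.2 * df["chest_pain"] + 0.8 * df["severe_discomfort"]
                    + 0.5 * df["prior_ami"] + rng.normal(0, 1, n))

step1 = smf.ols("congruence ~ chest_pain", data=df).fit()
step2 = smf.ols("congruence ~ chest_pain + severe_discomfort + prior_ami + male",
                data=df).fit()
print(f"step 1 R^2 = {step1.rsquared:.3f}")
print(f"step 2 R^2 = {step2.rsquared:.3f} (delta = {step2.rsquared - step1.rsquared:.3f})")
```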

  13. Variability of total step activity in children with cerebral palsy: influence of definition of a day on participant retention within the study.

    PubMed

    Wilson, Nichola C; Mudge, Suzie; Stott, N Susan

    2016-08-20

    Activity monitoring is important to establish accurate daily physical activity levels in children with cerebral palsy (CP). However, few studies address issues around inclusion or exclusion of step count data; in particular, how a valid day should be defined and what impact different lengths of monitoring have on retention of participant data within a study. This study assessed how different 'valid day' definitions influenced inclusion of participant data in final analyses and the subsequent variability of the data. Sixty-nine children with CP were fitted with a StepWatch™ Activity Monitor and instructed to wear it for a week. Data analysis used two broad definitions of a day, based on either the number of steps in a 24 h monitoring period or the number of hours of recorded activity in a 24 h monitoring period. Eight children either did not use the monitor or used it for only 1 day. The remaining 61 children provided 2 valid days of monitoring, defined as >100 recorded steps per 24 h period, and 55 (90 %) completed 2 valid days of monitoring with ≥10 h recorded activity per 24 h period. Performance variability in daily step count was lower across 2 days of monitoring when a valid day was defined as ≥10 h recorded activity per 24 h period (ICC = 0.765) and higher when a valid day was defined as >100 recorded steps per 24 h period (ICC = 0.62). Only 46 participants (75 %) completed 5 days of monitoring with >100 recorded steps per 24 h period and only 23 (38 %) achieved 5 days of monitoring with ≥10 h recorded activity per 24 h period. Datasets of participants who functioned at GMFCS level II were differentially excluded when the criterion for inclusion in the final analysis was 5 valid days of ≥10 h recorded activity per 24 h period, leaving only 8 of 32 participant datasets retained in the study. We conclude that changes in the definition of a valid day have significant impacts on both the inclusion of participant data in the final analysis and the measured variability of total step count.
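
    The impact of the two definitions can be reproduced on simulated daily summaries as follows; the wear-time distribution and column names are assumptions for illustration only.

```python
# Sketch of applying the two 'valid day' definitions to StepWatch-style daily summaries
# and counting how many participants are retained under each rule.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
rows = []
for pid in range(69):
    for day in range(7):
        hours_worn = rng.choice([0, 4, 8, 11, 13], p=[0.1, 0.1, 0.2, 0.3, 0.3])
        steps = int(rng.gamma(2.0, 2000.0)) if hours_worn > 0 else 0
        rows.append({"participant": pid, "day": day, "steps": steps, "hours": hours_worn})
daily = pd.DataFrame(rows)

daily["valid_steps"] = daily["steps"] > 100          # definition 1: >100 steps / 24 h
daily["valid_hours"] = daily["hours"] >= 10          # definition 2: >=10 h activity / 24 h

valid_days = daily.groupby("participant")[["valid_steps", "valid_hours"]].sum()
for min_days in (2, 5):
    kept = (valid_days >= min_days).sum()
    print(f">= {min_days} valid days retained:",
          f"{kept['valid_steps']} (steps rule) vs {kept['valid_hours']} (hours rule)")
```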

  14. 100 or 30 years after Janeway or Bartter, Healthwatch helps avoid 'flying blind'.

    PubMed

    Cornélissen, Germaine; Halberg, Franz; Bakken, Earl; Singh, Ram B; Otsuka, Kuniaki; Tomlinson, Brian; Delcourt, Alain; Toussaint, Guy; Bathina, Srilakshmi; Schwartzkopff, Othild; Wang, Zhengrong; Tarquini, Roberto; Perfetto, Federico; Pantaleoni, Giancarlo; Jozsa, Rita; Delmore, Patrick A; Nolley, Ellis

    2004-10-01

    Longitudinal records of blood pressure (BP) and heart rate (HR) around the clock for days, weeks, months, years, and even decades obtained by manual self-measurements (during waking) and/or automatically by ambulatory monitoring reveal, in addition to well-known large within-day variation, also considerable day-to-day variability in most people, whether normotensive or hypertensive. As a first step, the circadian rhythm is considered along with gender differences and changes as a function of age to derive time-specified reference values (chronodesms), while reference values accumulate to also account for the circaseptan variation. Chronodesms serve for the interpretation of single measurements and of circadian and other rhythm parameters. Refined diagnoses can thus be obtained, namely MESOR-hypertension when the chronome-adjusted mean value (MESOR) of BP is above the upper limit of acceptability, excessive pulse pressure (EPP) when the difference in MESOR between the systolic (S) and diastolic (D) BP is too large, CHAT (circadian hyper-amplitude tension) when the circadian BP amplitude is excessive, DHRV (decreased heart rate variability) when the standard deviation (SD) of HR is below the acceptable range, and/or ecphasia when the overall high values recurring each day occur at an odd time (a condition also contributing to the risk associated with 'non-dipping'). A non-parametric approach consisting of a computer comparison of the subject's profile with the time-varying limits of acceptability further serves as a guide to optimize the efficacy of any needed treatment by timing its administration (chronotherapy) and selecting a treatment schedule best suited to normalize abnormal patterns in BP and/or HR. The merit of the proposed chronobiological approach to BP screening, diagnosis and therapy (chronotheranostics) is assessed in the light of outcome studies. Elevated risk associated with abnormal patterns of BP and/or HR variability, even when most if not all measurements lie within the range of acceptable values, becomes amenable to treatment as a critical step toward prevention (prehabilitation) to reduce the need for rehabilitation (the latter often after costly surgical intervention).
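
    The rhythm parameters underlying these definitions (MESOR, circadian amplitude, acrophase) come from a cosinor fit; a minimal single-component sketch on simulated blood-pressure self-measurements is shown below. Reference limits (chronodesms) and the non-parametric profile comparison are not implemented here.

```python
# Minimal single-component cosinor sketch: estimate MESOR, circadian amplitude, and
# acrophase of a blood-pressure series by least squares. Data and the fixed 24 h period
# are simulated/assumed for illustration.
import numpy as np

rng = np.random.default_rng(6)
t = np.sort(rng.uniform(0, 72, 120))                 # measurement times (hours over 3 days)
true = 120 + 12 * np.cos(2 * np.pi * (t - 15) / 24)  # systolic BP with a 15:00 peak
y = true + rng.normal(0, 5, t.size)

omega = 2 * np.pi / 24
X = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])
mesor, beta, gamma = np.linalg.lstsq(X, y, rcond=None)[0]
amplitude = np.hypot(beta, gamma)
acrophase_h = (np.arctan2(gamma, beta) / omega) % 24  # clock time of the fitted peak

print(f"MESOR = {mesor:.1f} mmHg, amplitude = {amplitude:.1f} mmHg, "
      f"acrophase ≈ {acrophase_h:.1f} h")
```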

  15. An Intelligent Weather Station

    PubMed Central

    Mestre, Gonçalo; Ruano, Antonio; Duarte, Helder; Silva, Sergio; Khosravani, Hamid; Pesteh, Shabnam; Ferreira, Pedro M.; Horta, Ricardo

    2015-01-01

    Accurate measurements of global solar radiation, atmospheric temperature and relative humidity, as well as the availability of the predictions of their evolution over time, are important for different areas of applications, such as agriculture, renewable energy and energy management, or thermal comfort in buildings. For this reason, an intelligent, light-weight, self-powered and portable sensor was developed, using a nearest-neighbors (NEN) algorithm and artificial neural network (ANN) models as the time-series predictor mechanisms. The hardware and software design of the implemented prototype are described, as well as the forecasting performance related to the three atmospheric variables, using both approaches, over a prediction horizon of 48-steps-ahead. PMID:26690433

  16. An Intelligent Weather Station.

    PubMed

    Mestre, Gonçalo; Ruano, Antonio; Duarte, Helder; Silva, Sergio; Khosravani, Hamid; Pesteh, Shabnam; Ferreira, Pedro M; Horta, Ricardo

    2015-12-10

    Accurate measurements of global solar radiation, atmospheric temperature and relative humidity, as well as the availability of the predictions of their evolution over time, are important for different areas of applications, such as agriculture, renewable energy and energy management, or thermal comfort in buildings. For this reason, an intelligent, light-weight, self-powered and portable sensor was developed, using a nearest-neighbors (NEN) algorithm and artificial neural network (ANN) models as the time-series predictor mechanisms. The hardware and software design of the implemented prototype are described, as well as the forecasting performance related to the three atmospheric variables, using both approaches, over a prediction horizon of 48-steps-ahead.
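
    The nearest-neighbour part of such a predictor can be sketched as a lag embedding plus k-NN regression that outputs all 48 steps at once; the synthetic hourly temperature series, lag length, and k are assumed values, and the ANN branch is omitted.

```python
# Sketch of a nearest-neighbour multi-step time-series predictor: embed the series into
# lag vectors and let a k-NN regressor predict the next 48 hourly values directly.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(10)
hours = np.arange(24 * 200)
temp = 15 + 8 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)

lags, horizon = 48, 48
X = np.array([temp[i:i + lags] for i in range(temp.size - lags - horizon)])
Y = np.array([temp[i + lags:i + lags + horizon] for i in range(temp.size - lags - horizon)])

knn = KNeighborsRegressor(n_neighbors=5).fit(X[:-1], Y[:-1])
forecast = knn.predict(X[-1:])                       # 48-steps-ahead prediction
print("RMSE over the last window:", np.sqrt(np.mean((forecast[0] - Y[-1]) ** 2)))
```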

  17. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.

    1989-01-01

    The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction multiple data) computer, the CONNECTION Machine, is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the elements, with an exchange of nodal forces at each time step. The architectural and C* programming language features of the CONNECTION Machine are also summarized. Various alternate data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the CONNECTION Machine is capable of outperforming the CRAY XMP/14.
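
    To make the idea of element-wise storage with a per-step exchange of nodal forces concrete, the sketch below advances a 1-D chain of linear spring elements with explicit time integration: each element computes its internal forces from its own copy of the nodal displacements, and the forces are then scattered (exchanged) back to the shared nodes. This is a serial toy model of the data flow only, not the CONNECTION Machine code.

      import numpy as np

      n_elem, k, m, dt, n_steps = 10, 100.0, 1.0, 1e-3, 1000
      n_node = n_elem + 1
      conn = np.column_stack([np.arange(n_elem), np.arange(1, n_node)])  # element -> its two nodes

      u = np.zeros(n_node)          # nodal displacements
      v = np.zeros(n_node)          # nodal velocities
      u[-1] = 0.01                  # small initial perturbation at the free end

      for _ in range(n_steps):
          # "allocate nodal variables to the elements": gather element-local displacements
          u_elem = u[conn]                                        # shape (n_elem, 2)
          stretch = u_elem[:, 1] - u_elem[:, 0]
          f_elem = np.column_stack([k * stretch, -k * stretch])   # element-end forces
          # "exchange of nodal forces": scatter-add element forces back to shared nodes
          f = np.zeros(n_node)
          np.add.at(f, conn, f_elem)
          f[0] = 0.0                                              # fixed boundary node
          # explicit time-integration update
          v += dt * f / m
          u += dt * v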

  18. Additive schemes for certain operator-differential equations

    NASA Astrophysics Data System (ADS)

    Vabishchevich, P. N.

    2010-12-01

    Unconditionally stable finite difference schemes for the time approximation of first-order operator-differential systems with self-adjoint operators are constructed. Such systems arise in many applied problems, for example, in connection with nonstationary problems for the system of Stokes (Navier-Stokes) equations. Stability conditions in the corresponding Hilbert spaces are obtained for two-level weighted operator-difference schemes. Additive (splitting) schemes are proposed that involve the solution of simple problems at each time step. The results are used to construct splitting schemes with respect to the spatial variables for the nonstationary Navier-Stokes equations for an incompressible fluid. The capabilities of the additive schemes are illustrated using a two-dimensional model problem as an example.
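
    For orientation, a two-level weighted scheme for the abstract problem $B\,du/dt + A u = f$ with $B = B^* > 0$, $A = A^* \ge 0$, and the simplest additive (component-wise splitting) variant for $A = A_1 + A_2$, can be written as follows; this is the standard textbook form of such schemes, given here only as a reminder and not necessarily in the paper's exact notation:

      $B\,\frac{y^{n+1}-y^{n}}{\tau} + A\left(\sigma y^{n+1} + (1-\sigma)\,y^{n}\right) = f^{n},$

      $B\,\frac{y^{n+1/2}-y^{n}}{\tau} + A_1\,y^{n+1/2} = f_1^{n}, \qquad B\,\frac{y^{n+1}-y^{n+1/2}}{\tau} + A_2\,y^{n+1} = f_2^{n},$

    with unconditional stability of the weighted scheme for $\sigma \ge 1/2$; the splitting variant replaces one implicit solve with the full operator $A$ by two simpler solves with $A_1$ and $A_2$ at each time step.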

  19. Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model

    NASA Astrophysics Data System (ADS)

    Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko

    2015-04-01

    One of the famous paradoxes of the Greek philosopher Zeno of Elea (~450 BC) is the one with the arrow: if one shoots an arrow and cuts its motion into such small time steps that at every step the arrow is standing still, the arrow is motionless, because a concatenation of non-moving parts does not create motion. Nowadays this reasoning can be refuted easily, because we know that motion is a change in space over time, which thus by definition depends on both time and space. If one disregards time by cutting it into infinitely small steps, motion is also excluded. This example shows that time and space are linked and therefore hard to evaluate separately. As hydrologists we want to understand and predict the motion of water, which means we have to look both in space and in time. In hydrological models we can account for space by using spatially explicit models. With increasing computational power and increased data availability from, e.g., satellites, it has become easier to apply models at a higher spatial resolution. Increasing the resolution of hydrological models is also labelled as one of the 'Grand Challenges' in hydrology by Wood et al. (2011) and Bierkens et al. (2014), who call for global modelling at hyperresolution (~1 km and smaller). A literature survey of 242 peer-reviewed articles in which the Variable Infiltration Capacity (VIC) model was used showed that the grid spacing at which the model is applied has decreased over the past 17 years: from 0.5-2 degrees when the model was first developed to 1/8 and even 1/32 degree nowadays. On the other hand, the survey showed that the time step at which the model is calibrated and/or validated has remained the same over those 17 years: mainly daily or monthly. Klemeš (1983) stresses that space and time scales are connected, so refining the spatial scale should also imply refining the temporal scale. Is it worth the effort of refining your model from 1 degree to 1/24 degree if, in the end, you only look at monthly runoff? In this study an attempt is made to link time and space scales in the VIC model, in order to study the added value of a higher-resolution model for different time steps. To do this, four different VIC models were constructed for the Thur basin in North-Eastern Switzerland (1700 km²), a tributary of the Rhine: one lumped model and three spatially distributed models with resolutions of 1x1 km, 5x5 km, and 10x10 km, respectively. All models are run at an hourly time step and are aggregated and calibrated for different time steps (hourly, daily, monthly, yearly) using a novel Hierarchical Latin Hypercube Sampling technique (Vořechovský, 2014). For each time and space scale, several diagnostics (Nash-Sutcliffe efficiency, Kling-Gupta efficiency, discharge quantiles, etc.) are calculated in order to compare model performance across time and space scales for extreme events such as floods and droughts. In addition, the effect of time and space scale on the parameter distributions can be studied. In the end we hope to identify optimal combinations of time and space scales.
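
    The comparison across time scales described here rests on aggregating the hourly simulation to coarser steps and recomputing skill scores. Below is a minimal sketch of that bookkeeping, assuming simulated and observed discharge stored as hourly numpy arrays; the Nash-Sutcliffe and Kling-Gupta formulas are the standard ones, and the series are synthetic.

      import numpy as np

      def aggregate(x, steps_per_block):
          """Mean-aggregate an hourly series to blocks of `steps_per_block` hours."""
          n = (x.size // steps_per_block) * steps_per_block
          return x[:n].reshape(-1, steps_per_block).mean(axis=1)

      def nse(sim, obs):
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def kge(sim, obs):
          r = np.corrcoef(sim, obs)[0, 1]
          alpha = sim.std() / obs.std()          # variability ratio
          beta = sim.mean() / obs.mean()         # bias ratio
          return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

      # hypothetical hourly discharge series for one year
      hours = 24 * 365
      obs = 5 + 2 * np.sin(2 * np.pi * np.arange(hours) / (24 * 365)) + np.random.gamma(2, 0.5, hours)
      sim = obs + np.random.normal(0, 0.8, hours)

      for label, block in [("hourly", 1), ("daily", 24), ("monthly", 24 * 30)]:
          s, o = aggregate(sim, block), aggregate(obs, block)
          print(f"{label:8s}  NSE={nse(s, o):.3f}  KGE={kge(s, o):.3f}")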

  20. Solution of elliptic partial differential equations by fast Poisson solvers using a local relaxation factor. 2: Two-step method

    NASA Technical Reports Server (NTRS)

    Chang, S. C.

    1986-01-01

    A two-step semidirect procedure is developed to accelerate the one-step procedure described in NASA TP-2529. For a set of constant coefficient model problems, the acceleration factor increases from 1 to 2 as the one-step procedure convergence rate decreases from + infinity to 0. It is also shown numerically that the two-step procedure can substantially accelerate the convergence of the numerical solution of many partial differential equations (PDE's) with variable coefficients.

  1. Global trends in vegetation phenology from 32-year GEOV1 leaf area index time series

    NASA Astrophysics Data System (ADS)

    Verger, Aleixandre; Baret, Frédéric; Weiss, Marie; Filella, Iolanda; Peñuelas, Josep

    2013-04-01

    Phenology is a critical component in understanding ecosystem response to climate variability. Long-term data records from global mapping satellite platforms are valuable tools for monitoring vegetation responses to climate change at the global scale. Phenology satellite products and trend detection from satellite time series are expected to improve our understanding of climate forcing on vegetation dynamics. The capacity to monitor ecosystem responses to global climate change was evaluated in this study from the 32-year time series of global Leaf Area Index (LAI) recently produced within the geoland2 project. The long-term GEOV1 LAI products were derived from NOAA/AVHRR (1981 to 2000) and SPOT/VGT (1999 to the present) with specific emphasis on consistency and continuity. Since mid-November, GEOV1 LAI products have been freely available to the scientific community at the geoland2 portal (www.geoland2.eu/core-mapping-services/biopar.html). These products are distributed at a dekadal (10-day) time step, at 0.05° for the period 1981-2000 and at 1/112° for 2000-2012. The use of GEOV1 data covering a long time period and providing information at dense time steps is expected to increase the reliability of trend detection. In this study, GEOV1 LAI time series aggregated at 0.5° spatial resolution are used. The CACAO (Consistent Adjustment of the Climatology to Actual Observations) method (Verger et al., 2013) was applied to characterize seasonal anomalies as well as to identify trends. For a given pixel, CACAO computes, for each season, the time shift and the amplitude difference between the current temporal profile and the climatology computed over the 32 years. These CACAO parameters quantify shifts in the timing of seasonal phenology and inter-annual variations in magnitude relative to the average climatology. Interannual variations in the timing of the Start of Season and End of Season, the Season Length, and the LAI level at the peak of the growing season are analyzed. Trend analysis with robust statistical tests of significance is conducted. Climate variables (precipitation, temperature, radiation) are then used to interpret the anomaly patterns detected in the vegetation response.
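
    In the spirit of the CACAO decomposition, the sketch below estimates, for one season of one pixel, the time shift and the amplitude scaling that best align the multi-year climatology with the current year's LAI profile. It is a simplified illustration (a discrete lag search plus a least-squares scale) on invented dekadal data, not the operational algorithm of Verger et al. (2013).

      import numpy as np

      def shift_and_amplitude(current, climatology, max_lag=5):
          """Return (lag, scale) minimizing ||current - scale * shifted_climatology||^2
          over integer lags in [-max_lag, max_lag] (lag counted in time steps, e.g. dekads)."""
          best = (0, 1.0, np.inf)
          for lag in range(-max_lag, max_lag + 1):
              shifted = np.roll(climatology, lag)
              scale = np.dot(current, shifted) / np.dot(shifted, shifted)
              sse = np.sum((current - scale * shifted) ** 2)
              if sse < best[2]:
                  best = (lag, scale, sse)
          return best[0], best[1]

      # hypothetical dekadal LAI profiles for one growing season (36 dekads = 1 year)
      t = np.arange(36)
      climatology = 0.5 + 3.0 * np.exp(-0.5 * ((t - 18) / 5.0) ** 2)
      current = 1.1 * np.roll(climatology, 2) + np.random.normal(0, 0.05, t.size)

      lag, scale = shift_and_amplitude(current, climatology)
      print(f"timing anomaly: {lag} dekads, amplitude anomaly: x{scale:.2f}")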

  2. Quantification of the inherent uncertainty in the relaxation modulus and creep compliance of asphalt mixes

    NASA Astrophysics Data System (ADS)

    Kassem, Hussein A.; Chehab, Ghassan R.; Najjar, Shadi S.

    2017-08-01

    Advanced material characterization of asphalt concrete is essential for realistic and accurate performance prediction of flexible pavements. However, such characterization requires rigorous testing regimes that involve mechanical testing of a large number of laboratory samples at various conditions and set-ups. Advanced measurement instrumentation, together with meticulous and accurate data analysis and analytical representation, is also of high importance. These steps, as well as the heterogeneous nature of asphalt concrete (AC), are major sources of inherent variability. Thus, it is imperative to model and quantify the variability of the required asphalt material properties, mainly the linear viscoelastic response functions such as the relaxation modulus, E(t), and the creep compliance, D(t). The objective of this paper is to characterize the inherent uncertainty of both E(t) and D(t) over the time domain of their master curves. This is achieved through a probabilistic framework using Monte Carlo simulations and First Order approximations, utilizing $E^*$ data for six AC mixes with at least eight replicates per mix. The study shows that the inherent variability in E(t) and D(t), represented by the coefficient of variation (COV), is low at small reduced times and increases with increasing reduced time. At small reduced times, the COVs of E(t) and D(t) are similar in magnitude; however, the differences become significant at large reduced times. Additionally, the probability distributions and COVs of E(t) and D(t) are mix-dependent. Finally, a case study is considered in which the inherent uncertainty in D(t) is forward-propagated to assess the effect of variability on the predicted number of cycles to fatigue failure of an asphalt mix.
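
    A compact way to reproduce the flavour of such a Monte Carlo exercise is to treat the master-curve coefficients as random, sample them, and track the coefficient of variation of E(t) across reduced time. The sketch below uses a generic sigmoidal master-curve form and entirely made-up coefficient statistics; it is only an illustration of the propagation step, not the mixes, data or fitting procedure of the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def modulus_master_curve(log_tr, delta, alpha, beta, gamma):
          """Generic sigmoid: log10 E = delta + alpha / (1 + exp(beta + gamma * log_tr))."""
          return 10.0 ** (delta + alpha / (1.0 + np.exp(beta + gamma * log_tr)))

      # assumed coefficient means (delta, alpha, beta, gamma) and a 5% coefficient of variation
      means = np.array([1.5, 2.8, -1.0, 0.5])
      samples = rng.normal(means, 0.05 * np.abs(means), size=(2000, 4))

      log_tr = np.linspace(-6, 6, 25)               # log10 of reduced time
      E = np.array([modulus_master_curve(log_tr, *p) for p in samples])

      cov = E.std(axis=0) / E.mean(axis=0)          # coefficient of variation vs reduced time
      for lt, c in zip(log_tr[::6], cov[::6]):
          print(f"log10(t_r)={lt:+.1f}  COV(E)={100 * c:.1f}%")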

  3. Integrated Microfluidic Devices for Automated Microarray-Based Gene Expression and Genotyping Analysis

    NASA Astrophysics Data System (ADS)

    Liu, Robin H.; Lodes, Mike; Fuji, H. Sho; Danley, David; McShea, Andrew

    Microarray assays typically involve multistage sample processing and fluidic handling, which are generally labor-intensive and time-consuming. Automation of these processes would improve robustness, reduce run-to-run and operator-to-operator variation, and reduce costs. In this chapter, a fully integrated and self-contained microfluidic biochip device that has been developed to automate the fluidic handling steps for microarray-based gene expression or genotyping analysis is presented. The device consists of a semiconductor-based CustomArray® chip with 12,000 features and a microfluidic cartridge. The CustomArray was manufactured using a semiconductor-based in situ synthesis technology. The micro-fluidic cartridge consists of microfluidic pumps, mixers, valves, fluid channels, and reagent storage chambers. Microarray hybridization and subsequent fluidic handling and reactions (including a number of washing and labeling steps) were performed in this fully automated and miniature device before fluorescent image scanning of the microarray chip. Electrochemical micropumps were integrated in the cartridge to provide pumping of liquid solutions. A micromixing technique based on gas bubbling generated by electrochemical micropumps was developed. Low-cost check valves were implemented in the cartridge to prevent cross-talk of the stored reagents. Gene expression study of the human leukemia cell line (K562) and genotyping detection and sequencing of influenza A subtypes have been demonstrated using this integrated biochip platform. For gene expression assays, the microfluidic CustomArray device detected sample RNAs with a concentration as low as 0.375 pM. Detection was quantitative over more than three orders of magnitude. Experiment also showed that chip-to-chip variability was low indicating that the integrated microfluidic devices eliminate manual fluidic handling steps that can be a significant source of variability in genomic analysis. The genotyping results showed that the device identified influenza A hemagglutinin and neuraminidase subtypes and sequenced portions of both genes, demonstrating the potential of integrated microfluidic and microarray technology for multiple virus detection. The device provides a cost-effective solution to eliminate labor-intensive and time-consuming fluidic handling steps and allows microarray-based DNA analysis in a rapid and automated fashion.

  4. Hydrological fine-structure evolution as a proxy of water mass property changes in the Tyrrhenian Sea

    NASA Astrophysics Data System (ADS)

    Durante, Sara; Schroeder, Katrin; Sparnocchia, Stefania; Mazzei, Luca; Borghini, Mireno; Pierini, Stefano

    2017-04-01

    The variability of the Tyrrhenian basin water mass properties, as inferred from the evolution of the typical step-like profile of the water column, is analyzed from 2003 to 2016. The dataset contains hydrological time series obtained at two deep control stations at a depth of about 3500 m. The study follows the evolution of double diffusion processes (a coherent basin feature) that lead to well-defined and permanent staircases. In each profile, four main steps can be recognized between 400 m and 2500 m in both conservative temperature (CT) and absolute salinity (SA), the main one having a thickness of about 400 m. The Tyrrhenian Sea is not a particularly dynamic basin compared with other areas of the Mediterranean Sea, yet the staircases show large hydrological and depth changes. In particular, an increase in CT and SA and an uplift of the steps are observed in the second part of the time series. Such changes can be due to both internal and external forcing. To discern the nature of the forcing, a suitable method [1] has been applied to our case study. Changes in SA are found to be similar along both isobars and neutral surfaces, so they can be ascribed to an external forcing. On the other hand, CT shows different trends along isobars and neutral surfaces: this suggests that internal forcing can play an important role. The new Western Mediterranean Deep Water, formed in the Gulf of Lion during the severe winter of 2004-2005 and later winters (the so-called Western Mediterranean Transition [2]), is suggested to be an external forcing producing the observed variability. Oscillatory movements of the neutral surfaces can also be observed after 2010. Computation of heat and salt fluxes (both for the whole water column and for each single step) sheds light on the conservative character of the hydrological parameters of the step system. [1] Bindoff, N.L., McDougall, T.J., 1994. J. Phys. Oceanogr. 24, 1137-1152. [2] Schroeder, K., G. P. Gasparini, M. Tangherlini, and M. Astraldi, 2006. Geophys. Res. Lett., 33, L21607, doi:10.1029/2006GL027121.

  5. Transient effects of harsh luminous conditions on the visual performance of aviators in a civil aircraft cockpit.

    PubMed

    Yang, Biao; Lin, Yandan; Sun, Yaojie

    2013-03-01

    The aim of this work was to examine how harsh luminous conditions in a cockpit, such as lightning in a thunderstorm or direct sunlight immediately after an aircraft passes through clouds, may affect the visual performance of pilots, and how to improve it. Such lighting conditions can result in the temporary visual impairment of aviators, which may greatly increase the risk of accidents. Tests were carried out in a full-scale simulator cockpit in which two kinds of dynamic lighting scenes, namely pulse changed and step changed lighting, were used to represent harsh luminous conditions. Visual acuity (VA), reaction time (RT) and identification accuracy (IA) were recorded as dependent variables. Data analysis results indicate that standardized VA values decreased significantly in both pulsing and step conditions in comparison with the dark condition. Standardized RT values increased significantly in the step condition; on the contrary, less reaction time was observed in the pulsing condition. Such effects could be reduced by an ambient illumination provided by a fluorescent lamp in both conditions. The results are to be used as a principle for optimizing lighting design with a thunderstorm light. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  6. Development and acceleration of unstructured mesh-based cfd solver

    NASA Astrophysics Data System (ADS)

    Emelyanov, V.; Karpenko, A.; Volkov, K.

    2017-06-01

    The study was undertaken as part of a larger effort to establish a common computational fluid dynamics (CFD) code for the simulation of internal and external flows, and involves some basic validation studies. The governing equations are solved with a finite volume code on unstructured meshes. The computational procedure involves reconstruction of the solution in each control volume and extrapolation of the unknowns to find the flow variables on the faces of the control volume, solution of the Riemann problem for each face of the control volume, and evolution of the time step. The nonlinear CFD solver works in an explicit time-marching fashion based on a three-step Runge-Kutta stepping procedure. Convergence to a steady state is accelerated by the use of a geometric technique and by the application of Jacobi preconditioning for high-speed flows, with a separate low-Mach-number preconditioning method for low-speed flows. The CFD code is implemented on graphics processing units (GPUs). The speedup of the GPU solution with respect to the solution on central processing units (CPUs) is compared for different meshes and different methods of distributing the input data into blocks. The results obtained provide a promising perspective for designing a GPU-based software framework for CFD applications.
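
    The three-step Runge-Kutta time marching mentioned here can be illustrated on a scalar semi-discrete problem du/dt = R(u). The sketch below applies the common three-stage strong-stability-preserving form to linear advection with a first-order upwind residual on a periodic grid; it is meant only to show the stepping structure, not the solver's actual scheme or GPU kernels.

      import numpy as np

      def residual(u, dx, a=1.0):
          """First-order upwind residual R(u) = -a du/dx on a periodic grid."""
          return -a * (u - np.roll(u, 1)) / dx

      def rk3_step(u, dt, dx):
          """Three-stage (SSP) Runge-Kutta update from u^n to u^{n+1}."""
          u1 = u + dt * residual(u, dx)
          u2 = 0.75 * u + 0.25 * (u1 + dt * residual(u1, dx))
          return u / 3.0 + 2.0 / 3.0 * (u2 + dt * residual(u2, dx))

      # advect a Gaussian pulse once around a periodic domain of length 1
      n, L = 200, 1.0
      dx = L / n
      x = np.arange(n) * dx
      u = np.exp(-200 * (x - 0.5) ** 2)
      dt = 0.5 * dx                      # CFL number 0.5 for unit advection speed
      for _ in range(int(L / dt)):
          u = rk3_step(u, dt, dx)
      print("peak after one period:", u.max())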

  7. Age-related cognitive task effects on gait characteristics: do different working memory components make a difference?

    PubMed

    Qu, Xingda

    2014-10-27

    Though it is well recognized that gait characteristics are affected by concurrent cognitive tasks, how different working memory components contribute to dual task effects on gait is still unknown. The objective of the present study was to investigate dual-task effects on gait characteristics, specifically the application of cognitive tasks involving different working memory components. In addition, we also examined age-related differences in such dual-task effects. Three cognitive tasks (i.e. 'Random Digit Generation', 'Brooks' Spatial Memory', and 'Counting Backward') involving different working memory components were examined. Twelve young (6 males and 6 females, 20 ~ 25 years old) and 12 older participants (6 males and 6 females, 60 ~ 72 years old) took part in two phases of experiments. In the first phase, each cognitive task was defined at three difficulty levels, and perceived difficulty was compared across tasks. The cognitive tasks perceived to be equally difficult were selected for the second phase. In the second phase, four testing conditions were defined, corresponding to a baseline and the three equally difficult cognitive tasks. Participants walked on a treadmill at their self-selected comfortable speed in each testing condition. Body kinematics were collected during treadmill walking, and gait characteristics were assessed using spatial-temporal gait parameters. Application of the concurrent Brooks' Spatial Memory task led to longer step times compared to the baseline condition. Larger step width variability was observed in both the Brooks' Spatial Memory and Counting Backward dual-task conditions than in the baseline condition. In addition, cognitive task effects on step width variability differed between two age groups. In particular, the Brooks' Spatial Memory task led to significantly larger step width variability only among older adults. These findings revealed that cognitive tasks involving the visuo-spatial sketchpad interfered with gait more severely in older versus young adults. Thus, dual-task training, in which a cognitive task involving the visuo-spatial sketchpad (e.g. the Brooks' Spatial Memory task) is concurrently performed with walking, could be beneficial to mitigate impairments in gait among older adults.

  8. Uncertainty and Variability

    EPA Pesticide Factsheets

    EPA ExpoBox is a toolbox for exposure assessors. Its purpose is to provide a compendium of exposure assessment and risk characterization tools that will present comprehensive step-by-step guidance and links to relevant exposure assessment databases.

  9. Controlling acrylamide in French fry and potato chip models and a mathematical model of acrylamide formation: acrylamide: acidulants, phytate and calcium.

    PubMed

    Park, Yeonhwa; Yang, Heewon; Storkson, Jayne M; Albright, Karen J; Liu, Wei; Lindsay, Robert C; Pariza, Michael W

    2005-01-01

    We previously reported that in potato chip and French fry models, the formation of acrylamide can be reduced by controlling pH during processing steps, either by organic (acidulants) or inorganic acids. Use of phytate, a naturally occurring chelator, with or without Ca++ (or divalent ions), can reduce acrylamide formation in both models. However, since phytate itself is acidic, the question remains as to whether the effect of phytate is due to pH alone or to additional effects. In the French fry model, the effects on acrylamide formation of pH, phytate, and/or Ca++ in various combinations were tested in either blanching or soaking (after blanching) steps. All treatments significantly reduced acrylamide levels compared to control. Among variables tested, pH may be the single most important factor for reducing acrylamide levels, while there were independent effects of phytate and/or Ca++ in this French fry model. We also developed a mathematical formula to estimate the final concentration of acrylamide in a potato chip model, using variables that can affect acrylamide formation: glucose and asparagine concentrations, cut potato surface area and shape, cooking temperature and time, and other processing conditions.

  10. Overload From Anxiety: A Non-Motor Cause for Gait Impairments in Parkinson's Disease.

    PubMed

    Ehgoetz Martens, Kaylena A; Silveira, Carolina R A; Intzandt, Brittany N; Almeida, Quincy J

    2018-01-01

    Threatening situations lead to observable gait deficits in individuals with Parkinson's disease (PD) who suffer from high trait anxiety levels. The specific characteristics of gait that are affected appear to be similar to behaviors observed while walking during a dual-task (DT) condition. Yet, it remains unclear whether anxiety is similar to a cognitive load. If it were, then those with PD who have high trait anxiety might be expected to be more susceptible to DT interference during walking. Thus, the overall aim of this study was to evaluate whether trait anxiety influences gait during single-task (ST) and DT walking. Seventy participants (high-anxiety PD [HA-PD], N=26; low-anxiety PD [LA-PD], N=26; healthy control [HC], N=18) completed three ST and three DT walking trials on a data-collecting carpet. The secondary task consisted of digit monitoring while walking. Results showed that during both ST and DT gait, the HA-PD group demonstrated significant reductions in walking speed and step length, as well as increased step length variability and step time variability compared with healthy controls and the LA-PD group. Notably, ST walking in the HA-PD group resembled (i.e., it was not significantly different from) the gait behaviors seen during a DT in the LA-PD and HC groups. These results suggest that trait anxiety may consume processing resources and limit the ability to compensate for gait impairments in PD.

  11. Concurrent generation of multivariate mixed data with variables of dissimilar types.

    PubMed

    Amatya, Anup; Demirtas, Hakan

    2016-01-01

    Data sets originating from a wide range of research studies are composed of multiple variables that are correlated and of dissimilar types, primarily count, binary/ordinal and continuous attributes. The present paper builds on previous work on multivariate data generation and develops a framework for generating multivariate mixed data with a pre-specified correlation matrix. The generated data consist of components that are marginally count, binary, ordinal and continuous, where the count and continuous variables follow the generalized Poisson and normal distributions, respectively. The use of the generalized Poisson distribution provides a flexible mechanism that allows for the under- and over-dispersed count variables generally encountered in practice. A step-by-step algorithm is provided and its performance is evaluated using simulated and real-data scenarios.
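
    One widely used way to realize this kind of algorithm is the NORTA idea sketched below: draw correlated standard normals with a target correlation matrix and push each margin through the inverse CDF of the desired distribution. For brevity the count margin here uses an ordinary Poisson rather than the generalized Poisson of the paper, and the intermediate normal correlation is used directly rather than being adjusted to hit the target exactly, so treat it as an approximation of the authors' framework rather than their method.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      def mixed_data(n, corr):
          """Generate n rows of (count, binary, ordinal, continuous) data from
          correlated standard normals (NORTA-style transform)."""
          L = np.linalg.cholesky(corr)
          z = rng.standard_normal((n, 4)) @ L.T        # correlated N(0,1) margins
          u = stats.norm.cdf(z)                        # correlated uniforms
          count = stats.poisson.ppf(u[:, 0], mu=3.0)            # count margin
          binary = (u[:, 1] > 0.6).astype(int)                  # P(1) = 0.4
          ordinal = np.digitize(u[:, 2], [0.3, 0.7, 0.9])       # 4 ordered categories
          continuous = stats.norm.ppf(u[:, 3], loc=10.0, scale=2.0)
          return np.column_stack([count, binary, ordinal, continuous])

      target = np.array([[1.0, 0.3, 0.2, 0.4],
                         [0.3, 1.0, 0.2, 0.3],
                         [0.2, 0.2, 1.0, 0.2],
                         [0.4, 0.3, 0.2, 1.0]])
      X = mixed_data(5000, target)
      print(np.round(np.corrcoef(X, rowvar=False), 2))   # roughly recovers the target pattern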

  12. Continuation Power Flow with Variable-Step Variable-Order Nonlinear Predictor

    NASA Astrophysics Data System (ADS)

    Kojima, Takayuki; Mori, Hiroyuki

    This paper proposes a new continuation power flow calculation method for drawing P-V curves in power systems. The continuation power flow calculation successively evaluates power flow solutions by changing a specified value of the power flow problem. In recent years, power system operators have become quite concerned with voltage instability due to the appearance of deregulated and competitive power markets. The continuation power flow calculation plays an important role in understanding load characteristics in the sense of static voltage instability. In this paper, a new continuation power flow with a variable-step variable-order (VSVO) nonlinear predictor is proposed. The proposed method evaluates optimal predicted points conforming to the features of P-V curves. The proposed method is successfully applied to the IEEE 118-bus and IEEE 300-bus systems.
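
    The essence of a variable-step variable-order nonlinear predictor is to extrapolate the next point on the P-V curve from several previously corrected solutions with a polynomial whose order (and step) can be adapted to how sharply the curve bends. The sketch below shows only that predictor step on stored (load parameter, voltage) pairs from a synthetic nose curve; the corrector (a constrained power flow solve) and the paper's actual step/order control rules are not reproduced.

      import numpy as np

      def vsvo_predict(lams, volts, order, step):
          """Predict the voltage at lam = lams[-1] + step by fitting a polynomial
          of the given order through the last (order + 1) corrected points."""
          k = order + 1
          coeffs = np.polyfit(lams[-k:], volts[-k:], order)
          lam_next = lams[-1] + step
          return lam_next, np.polyval(coeffs, lam_next)

      # previously corrected points on a synthetic nose-shaped P-V curve V = sqrt(1 - lam)
      lams = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
      volts = np.sqrt(1.0 - lams)

      for order in (1, 2, 3):             # higher order captures the curvature near the nose better
          lam_next, v_pred = vsvo_predict(lams, volts, order=order, step=0.1)
          print(f"order {order}: predicted V({lam_next:.1f}) = {v_pred:.3f} "
                f"(true {np.sqrt(1.0 - lam_next):.3f})")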

  13. Prediction of chromatographic relative retention time of polychlorinated biphenyls from the molecular electronegativity distance vector.

    PubMed

    Liu, Shu-Shen; Liu, Yan; Yin, Da-Qian; Wang, Xiao-Dong; Wang, Lian-Sheng

    2006-02-01

    Using the molecular electronegativity distance vector (MEDV) descriptors derived directly from the molecular topological structures, the gas chromatographic relative retention times (RRTs) of 209 polychlorinated biphenyls (PCBs) on the SE-54 stationary phase were predicted. A five-variable regression equation with a correlation coefficient of 0.9964 and a root mean square error of 0.0152 was developed. The descriptors included in the equation represent the degree of chlorination (nCl), the non-ortho index (Ino), and interactions between three pairs of atom types, i.e., the atom groups -C= and -C=, -C= and >C=, and -C= and -Cl. It was shown that the retention times of all 209 PCB congeners can be accurately predicted as long as there are more than 50 calibration compounds. In the same way, the MEDV descriptors were also used to develop five- or six-variable models of the RRTs of PCBs on the other 18 stationary phases, and the correlation coefficients in both the modeling stage and the LOO cross-validation step were not lower than 0.99 except for two models.
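
    The modelling step reported here is ordinary multiple linear regression on a handful of descriptors, validated by leave-one-out (LOO) cross-validation. A generic sketch of that validation loop on synthetic data is given below; the descriptor values and coefficients are invented, and the MEDV descriptors themselves are not computed here.

      import numpy as np

      rng = np.random.default_rng(2)

      # synthetic stand-in: 209 "compounds" described by 5 descriptors
      n, p = 209, 5
      X = rng.normal(size=(n, p))
      true_coef = np.array([0.30, 0.15, -0.10, 0.05, 0.20])
      y = 0.5 + X @ true_coef + rng.normal(0, 0.02, n)   # stand-in "relative retention times"

      def fit(X, y):
          A = np.column_stack([np.ones(len(y)), X])
          beta, *_ = np.linalg.lstsq(A, y, rcond=None)
          return beta

      def predict(beta, X):
          return beta[0] + X @ beta[1:]

      # leave-one-out cross-validation: refit with each observation held out in turn
      loo_pred = np.empty(n)
      for i in range(n):
          mask = np.arange(n) != i
          beta = fit(X[mask], y[mask])
          loo_pred[i] = predict(beta, X[i:i + 1])[0]

      r_fit = np.corrcoef(predict(fit(X, y), X), y)[0, 1]
      q_loo = np.corrcoef(loo_pred, y)[0, 1]
      print(f"calibration r = {r_fit:.4f}, LOO cross-validated r = {q_loo:.4f}")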

  14. Recent advancements in GRACE mascon regularization and uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Loomis, B. D.; Luthcke, S. B.

    2017-12-01

    The latest release of the NASA Goddard Space Flight Center (GSFC) global time-variable gravity mascon product applies a new regularization strategy along with new methods for estimating noise and leakage uncertainties. The critical design component of mascon estimation is the construction of the applied regularization matrices, and different strategies exist between the different centers that produce mascon solutions. The new approach from GSFC directly applies the pre-fit Level 1B inter-satellite range-acceleration residuals in the design of time-dependent regularization matrices, which are recomputed at each step of our iterative solution method. We summarize this new approach, demonstrating the simultaneous increase in recovered time-variable gravity signal and reduction in the post-fit inter-satellite residual magnitudes, until solution convergence occurs. We also present our new approach for estimating mascon noise uncertainties, which are calibrated to the post-fit inter-satellite residuals. Lastly, we present a new technique for end users to quickly estimate the signal leakage errors for any selected grouping of mascons, and we test the viability of this leakage assessment procedure on the mascon solutions produced by other processing centers.

  15. Gait variability in community dwelling adults with Alzheimer disease.

    PubMed

    Webster, Kate E; Merory, John R; Wittwer, Joanne E

    2006-01-01

    Studies have shown that measures of gait variability are associated with falling in older adults. However, few studies have measured gait variability in people with Alzheimer disease, despite the high incidence of falls in Alzheimer disease. The purpose of this study was to compare gait variability of community-dwelling older adults with Alzheimer disease and control subjects at various walking speeds. Ten subjects with mild-moderate Alzheimer disease and ten matched control subjects underwent gait analysis using an electronic walkway. Participants were required to walk at self-selected slow, preferred, and fast speeds. Stride length and step width variability were determined using the coefficient of variation. Results showed that stride length variability was significantly greater in the Alzheimer disease group compared with the control group at all speeds. In both groups, increases in walking speed were significantly correlated with decreases in stride length variability. Step width variability was significantly reduced in the Alzheimer disease group compared with the control group at slow speed only. In conclusion, there is an increase in stride length variability in Alzheimer disease at all walking speeds that may contribute to the increased incidence of falls in Alzheimer disease.

  16. Some critical issues in the characterization of nanoscale thermal conductivity by molecular dynamics analysis

    NASA Astrophysics Data System (ADS)

    Ehsan Khaled, Mohammad; Zhang, Liangchi; Liu, Weidong

    2018-07-01

    The nanoscale thermal conductivity of a material can be significantly different from its value at the macroscale. Although a number of studies using the equilibrium molecular dynamics (EMD) with Green–Kubo (GK) formula have been conducted for nano-conductivity predictions, there are many problems in the analysis that have made the EMD results unreliable or misleading. This paper aims to clarify such critical issues through a thorough investigation on the effect and determination of the vital physical variables in the EMD-GK analysis, using the prediction of the nanoscale thermal conductivity of Si as an example. The study concluded that to have a reliable prediction, quantum correction, time step, simulation time, correlation time and system size are all crucial.
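
    For reference, the Green-Kubo expression evaluated in such EMD studies relates the thermal conductivity to the time integral of the heat-current autocorrelation function; in the commonly quoted isotropic form (with $\mathbf{J}$ the microscopic heat current, $V$ the system volume, $T$ the temperature and $k_B$ Boltzmann's constant, i.e. standard symbols rather than the paper's own notation),

      $\kappa = \dfrac{1}{3 V k_B T^{2}} \int_0^{\infty} \langle \mathbf{J}(0)\cdot\mathbf{J}(t)\rangle\,dt \approx \dfrac{\Delta t}{3 V k_B T^{2}} \sum_{m=0}^{M} \langle \mathbf{J}(0)\cdot\mathbf{J}(m\,\Delta t)\rangle,$

    which makes explicit why the time step $\Delta t$, the correlation (cut-off) time $M\,\Delta t$, the length of the simulation entering the ensemble average, and the system size all influence the predicted conductivity.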

  17. Strategies for Interactive Visualization of Large Scale Climate Simulations

    NASA Astrophysics Data System (ADS)

    Xie, J.; Chen, C.; Ma, K.; Parvis

    2011-12-01

    With the advances in computational methods and supercomputing technology, climate scientists are able to perform large-scale simulations at unprecedented resolutions. These simulations produce data that are time-varying, multivariate, and volumetric, and the data may contain thousands of time steps with each time step having billions of voxels and each voxel recording dozens of variables. Visualizing such time-varying 3D data to examine correlations between different variables thus becomes a daunting task. We have been developing strategies for interactive visualization and correlation analysis of multivariate data. The primary task is to find connection and correlation among data. Given the many complex interactions among the Earth's oceans, atmosphere, land, ice and biogeochemistry, and the sheer size of observational and climate model data sets, interactive exploration helps identify which processes matter most for a particular climate phenomenon. We may consider time-varying data as a set of samples (e.g., voxels or blocks), each of which is associated with a vector of representative or collective values over time. We refer to such a vector as a temporal curve. Correlation analysis thus operates on temporal curves of data samples. A temporal curve can be treated as a two-dimensional function where the two dimensions are time and data value. It can also be treated as a point in the high-dimensional space. In this case, to facilitate effective analysis, it is often necessary to transform temporal curve data from the original space to a space of lower dimensionality. Clustering and segmentation of temporal curve data in the original or transformed space provides us a way to categorize and visualize data of different patterns, which reveals connection or correlation of data among different variables or at different spatial locations. We have employed the power of GPU to enable interactive correlation visualization for studying the variability and correlations of a single or a pair of variables. It is desired to create a succinct volume classification that summarizes the connection among all correlation volumes with respect to various reference locations. Providing a reference location must correspond to a voxel position, the number of correlation volumes equals the total number of voxels. A brute-force solution takes all correlation volumes as the input and classifies their corresponding voxels according to their correlation volumes' distance. For large-scale time-varying multivariate data, calculating all these correlation volumes on-the-fly and analyzing the relationships among them is not feasible. We have developed a sampling-based approach for volume classification in order to reduce the computation cost of computing the correlation volumes. Users are able to employ their domain knowledge in selecting important samples. The result is a static view that captures the essence of correlation relationships; i.e., for all voxels in the same cluster, their corresponding correlation volumes are similar. This sampling-based approach enables us to obtain an approximation of correlation relations in a cost-effective manner, thus leading to a scalable solution to investigate large-scale data sets. These techniques empower climate scientists to study large data from their simulations.
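
    The correlation-volume construction described above reduces, for a single pair of variables, to computing the Pearson correlation between the temporal curve at a chosen reference voxel and the temporal curve at every other voxel. A minimal sketch on a synthetic 3-D time-varying field is shown below; the array shapes and variable names are illustrative only and this is not the authors' GPU implementation.

      import numpy as np

      rng = np.random.default_rng(3)

      def correlation_volume(field, ref_index):
          """Pearson correlation between the reference voxel's temporal curve and
          every voxel's temporal curve. `field` has shape (T, X, Y, Z)."""
          T = field.shape[0]
          curves = field.reshape(T, -1)                  # one temporal curve per voxel
          curves = curves - curves.mean(axis=0)
          ref = curves[:, np.ravel_multi_index(ref_index, field.shape[1:])]
          num = curves.T @ ref
          denom = np.linalg.norm(curves, axis=0) * np.linalg.norm(ref)
          return (num / denom).reshape(field.shape[1:])

      # synthetic data: 100 time steps on a 20x20x20 grid sharing a coherent oscillation
      t = np.arange(100)
      signal = np.sin(2 * np.pi * t / 25)
      field = 0.5 * rng.standard_normal((100, 20, 20, 20)) + signal[:, None, None, None]
      corr = correlation_volume(field, ref_index=(10, 10, 10))
      print("correlation at a distant voxel:", round(float(corr[0, 0, 0]), 2))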

  18. Correlation of USMLE Step 1 scores with performance on dermatology in-training examinations.

    PubMed

    Fening, Katherine; Vander Horst, Anthony; Zirwas, Matthew

    2011-01-01

    Although United States Medical Licensing Examination (USMLE) Step 1 was not designed to predict resident performance, scores are used to compare residency applicants. Multiple studies have displayed a significant correlation among Step 1 scores, in-training examination (ITE) scores, and board passage, although no such studies have been performed in dermatology. The purpose of this study is to determine if this correlation exists in dermatology, and how much of the variability in ITE scores is a result of differences in Step 1 scores. This study also seeks to determine if it is appropriate to individualize expectations for resident ITE performance. This project received institutional review board exemption. From 5 dermatology residency programs (86 residents), we collected Step 1 and ITE scores for each of the 3 years of dermatology residency, and recorded passage/failure on boards. Bivariate Pearson correlation analysis was used to assess correlation between USMLE and ITE scores. Ordinary least squares regression was computed to determine how much USMLE scores contribute to ITE variability. USMLE and ITE score correlations were highly significant (P < .001). Correlation coefficients with USMLE were: 0.467, 0.541, and 0.527 for ITE in years 1, 2, and 3, respectively. Variability in ITE scores caused by differences in USMLE scores were: ITE first-year residency = 21.8%, ITE second-year residency = 29.3%, and ITE third-year residency = 27.8%. This study had a relatively small sample size, with data from only 5 programs. There is a moderate correlation between USMLE and ITE scores, with USMLE scores explaining ∼26% of the variability in ITE scores. Copyright © 2009 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.
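
    The reported percentages of ITE-score variability explained by Step 1 scores are simply the squared correlation coefficients, which provides a quick arithmetic check on the numbers:

      $0.467^{2} \approx 0.218, \qquad 0.541^{2} \approx 0.293, \qquad 0.527^{2} \approx 0.278,$

    matching the 21.8%, 29.3% and 27.8% quoted for the first-, second- and third-year in-training examinations, and averaging to the roughly 26% cited in the conclusion.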

  19. Status of whitebarkpine in the Greater Yellowstone Ecosystem: A step-trend analysis comparing 2004-2007 to 2008-2011

    USGS Publications Warehouse

    Shanahan, Erin; Irvine, Kathryn M.; Roberts, Dave; Litt, Andrea R.; Legg, Kristin; Daley, Rob; Chambers, Nina

    2014-01-01

    Whitebark pine (Pinus albicaulis) is a foundation and keystone species in upper subalpine environments of the northern Rocky Mountains that strongly influences the biodiversity and productivity of high-elevation ecosystems (Tomback et al. 2001, Ellison et al. 2005). Throughout its historic range, whitebark pine has decreased significantly as a major component of high-elevation forests. As a result, it is critical to understand the challenges to whitebark pine—not only at the tree and stand level, but also as these factors influence the distribution of whitebark pine across the Greater Yellowstone Ecosystem (GYE). In 2003, the National Park Service (NPS) Greater Yellowstone Inventory & Monitoring Network identified whitebark pine as one of twelve significant natural resource indicators or vital signs to monitor (Jean et al. 2005, Fancy et al. 2009) and initiated a long-term, collaborative monitoring program. Partners in this effort include the U.S. Geological Survey, U.S. Forest Service, and Montana State University with representatives from each comprising the Greater Yellowstone Whitebark Pine Monitoring Working Group. The objectives of the monitoring program are to assess trends in (1) the proportion of live, whitebark pine trees (>1.4-m tall) infected with white pine blister rust (blister rust); (2) to document blister rust infection severity by the occurrence and location of persisting and new infections; (3) to determine mortality of whitebark pine trees and describe potential factors contributing to the death of trees; and (4) to assess the multiple components of the recruitment of understory whitebark pine into the reproductive population. In this report we summarize the past eight years (2004-2011) of whitebark pine status and trend monitoring in the GYE. Our study area encompasses six national forests (NF), two national parks (NP), as well as state and private lands in portions of Wyoming, Montana, and Idaho; this area is collectively described as the GYE here and in other studies. The sampling design is a probabilistic, twostage cluster design with stands of whitebark pine as the primary units and 10x50 m belt transects as the secondary units. Primary sampling units (stands) were selected randomly from a sample frame of approximately 10,770 mapped pure and mixed whitebark pine stands ≥2.0 hectares in the GYE (Dixon 1997, Landenburger 2012). From 2004 through 2007 (monitoring transect establishment or initial time-step), we established 176 permanent belt transects (secondary sampling units=176) in 150 whitebark pine stands and permanently marked approximately 4,740 individual trees >1.4 m tall to monitor long-term changes in blister rust infection and survival rates. Between 2008 and 2011 (revisit time-step), these same 176 transects were surveyed and again all previously tagged trees were observed for changes in blister rust infection and survival status. Objective 1. Using a combined ratio estimator, we estimated the proportion of live trees infected in the GYE in the initial time-step (2004-2007) to be 0.22 (0.031 SE). Following the completion of all surveys in the revisit time-step (2008-2011), we estimated the proportion of live trees infected with white pine blister rust as 0.23 (0.028 SE; Table 2). We detected no significant change in the proportion of trees infected in the GYE between the two time-steps. Objective 2. We documented blister rust canker locations as occurring in the canopy or bole. 
We compared changes in canker position between the initial time-step (2004-2007) and the revisit time-step (2008-2011) in order to assess changes in infection severity. This analysis included the 3,795 trees tagged during the initial time-step that were located and documented as alive at the end of the revisit time-step. At the end of the revisit time-step, we found 1,217 trees infected with blister rust. This includes the 287 newly tagged trees in the revisit time step of which 14 had documented infections. Of these 1,217 trees, 780 trees were infected with blister rust in both time steps. Trees with only canopy cankers made up approximately 43% (519 trees) of the total number of trees infected with blister rust at the end of the revisit time-step, while trees with only bole cankers comprised 20% (252 trees), and those with both canopy and bole cankers included 37% (446 trees) of the infected sample. A bole infection is considered to be more consequential than a canopy canker, as it compromises not only the overall longevity of the tree, but its functional capacity for reproductive output as well (Kendall and Arno 1990, Campbell and Antos 2000, McDonald and Hoff 2001, Schwandt and Kegley 2004). In addition to infection location, we also documented infection transition between the canopy and bole. Of the 780 live trees that were infected with blister rust in both time-steps, approximately 31% (242) maintained canopy cankers and 36% (281) retained bole infections at the end of the revisit time-step. Infection transition from canopy to bole occurred in 30% (234) of the revisit time-step trees while 3% (23) transitioned from bole to canopy infections during this period. Objective 3. To determine whitebark pine mortality, we resurveyed all belt transects to reassess the life status of permanently tagged trees >1.4 m tall. We compared the total number of live tagged trees recorded during monitoring transect establishment to the total number of resurveyed dead tagged trees recorded during the revisit time-step and identified all potential mortality-influencing conditions (blister rust, mountain pine beetle, fire and other). By the end of the revisit time-step, we observed a total of 975 dead tagged whitebark pine trees; using a ratio estimator, this represents a loss of approximately 20% (SE=4.35%) of the original live tagged tree population (GYWPMWG 2012). Objective 4. To investigate the proportion of live, reproducing tagged trees, we divided the total number of positively identified cone-bearing trees by the total number of live trees in the tagged tree sample at the end of the revisit time-step. To approximate the average density of recruitment trees per stand, trees ≤1.4 m tall were summed by stand (within the 500 m² transect area) and divided by the total number of stands. Reproducing trees made up approximately 24% (996 trees) of the total live tagged population at the end of the revisit time-step. Differentiating between whitebark pine and limber pine seedlings or saplings is problematic given the absence of cones or cone scars. Therefore, understory summaries as presented in this report may include individuals of both species when they are sympatric in a stand. The average density of small trees ≤1.4 m tall was 53 understory trees per 500 m². Raw counts of these understory individuals ranged from 0-635 small trees per belt transect. In addition, a total of 287 trees were added to the tagged tree population by the end of 2011. 
These newly tagged trees were individuals that, upon subsequent revisits, had reached a height of >1.4 m and were therefore added to the sample. Throughout the past decade in the GYE, monitoring has helped document shifts in whitebark pine forests; whitebark pine stands have been impacted by insects, pathogens, wildland fire, and other disturbance events. Blister rust infection is ubiquitous throughout the ecosystem and infection proportions are variable across the region. And while we have documented mortality of whitebark pine, we have also recorded considerable recruitment. We provide this first step-trend report as a quantifiable baseline for understanding the state of whitebark pine in the GYE. Many aspects of whitebark pine health are highly variable across the range of its distribution in the GYE. Through sustained implementation of the monitoring program, we will continue efforts to document and quantify whitebark pine forest dynamics as they arise under periodic upsurges in insect and pathogen activity, fire episodes, and climatic events in the GYE. Since its inception, this monitoring program has persisted as one of the only sustained long-term efforts in the GYE with the singular purpose of tracking the health and status of this prominent keystone species.

  20. Executive and Attention Functioning Among Children in the Pandas Subgroup

    PubMed Central

    Hirschtritt, Matthew E.; Hammond, Christopher J.; Luckenbaugh, David; Buhle, Jason; Thurm, Audrey E.; Casey, B. J.; Swedo, Susan E.

    2009-01-01

    Evidence from past studies indicates that adults and children with Obsessive-Compulsive Disorder (OCD) and Tourette syndrome (TS) experience subtle neuropsychological deficits. Less is known about neuropsychological functioning of children and adolescents with a symptom course consistent with the PANDAS (Pediatric Autoimmune Neuropsychiatric Disorders Associated with Streptococcal infection) subgroup of OCD and tics. To provide such information, we administered three tests of attention control and two of executive function to 67 children and adolescents (ages 5–16) diagnosed with OCD and/or tics and a symptom course consistent with the PANDAS subgroup and 98 healthy volunteers (HV) matched by age, sex, and IQ. In a paired comparison of the two groups, the PANDAS subjects were less accurate than HV in a test of response suppression. Further, in a two-step linear regression analysis of the PANDAS group in which clinical variables were added stepwise into the model and in the second step matching variables (age, sex, and IQ) were added, IQ emerged as a predictor of performance on this task. In the same analysis, ADHD diagnosis and age emerged as predictors of response time in a continuous performance task. Subdividing the PANDAS group by primary psychiatric diagnosis revealed that subjects with TS or OCD with tics exhibited a longer response time compared to controls than subjects with OCD only, replicating previous findings within TS and OCD. This study demonstrates that children with PANDAS exhibit neuropsychological profiles similar to those of their primary psychiatric diagnosis. PMID:18622810
