Feedback controlled optics with wavefront compensation
NASA Technical Reports Server (NTRS)
Breckenridge, William G. (Inventor); Redding, David C. (Inventor)
1993-01-01
The sensitivity model of a complex optical system obtained by linear ray tracing is used to compute a control gain matrix by imposing the mathematical condition for minimizing the total wavefront error at the optical system's exit pupil. The most recent deformations or error states of the controlled segments or optical surfaces of the system are then assembled as an error vector, and the error vector is transformed by the control gain matrix to produce the exact control variables which will minimize the total wavefront error at the exit pupil of the optical system. These exact control variables are then applied to the actuators controlling the various optical surfaces in the system, causing an immediate reduction in the total wavefront error observed at the exit pupil of the optical system.
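As a rough illustration of the gain-matrix idea described in this abstract (not the patented implementation; the matrix sizes, random sensitivity values, and variable names below are assumptions), the least-squares control that minimizes total wavefront error can be sketched with a pseudoinverse:

```python
import numpy as np

# Minimal sketch: S is a linearized sensitivity matrix (from ray tracing)
# mapping actuator commands to wavefront changes at the exit pupil. The
# least-squares gain matrix is the pseudoinverse of S; applying it to the
# current error vector gives commands that minimize the residual wavefront error.
n_wavefront, n_actuators = 200, 12          # illustrative dimensions
rng = np.random.default_rng(0)
S = rng.normal(size=(n_wavefront, n_actuators))

G = np.linalg.pinv(S)                       # control gain matrix
e = rng.normal(size=n_wavefront)            # assembled wavefront error vector
u = -G @ e                                  # control variables sent to the actuators
residual = e + S @ u                        # error remaining after correction
print(np.linalg.norm(e), np.linalg.norm(residual))
```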
Schultze, A E; Irizarry, A R
2017-02-01
Veterinary clinical pathologists are well positioned via education and training to assist in investigations of unexpected results or increased variation in clinical pathology data. Errors in testing and unexpected variability in clinical pathology data are sometimes referred to as "laboratory errors." These alterations may occur in the preanalytical, analytical, or postanalytical phases of studies. Most of the errors or variability in clinical pathology data occur in the preanalytical or postanalytical phases. True analytical errors occur within the laboratory and are usually the result of operator or instrument error. Analytical errors are often ≤10% of all errors in diagnostic testing, and the frequency of these types of errors has decreased in the last decade. Analytical errors and increased data variability may result from instrument malfunctions, inability to follow proper procedures, undetected failures in quality control, sample misidentification, and/or test interference. This article (1) illustrates several different types of analytical errors and situations within laboratories that may result in increased variability in data, (2) provides recommendations regarding prevention of testing errors and techniques to control variation, and (3) provides a list of references that describe and advise how to deal with increased data variability.
Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C
2010-01-01
Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, its root cause was identified and eliminated to prevent set-up errors. Set up errors were measured for medial-lateral (ml), cranial-caudal (cc) and anterior-posterior (ap) dimensions and then the upper control limits were calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using sub-groups of 3 patients, three times each shift. These values were plotted on a control chart in real time. Control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on a X chart, helped monitor the set-up process stability and, if and when the stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified and corrective action was taken before proceeding with the treatment. SPC protocol focuses on controlling the variability due to assignable cause instead of focusing on patient-to-patient variability which normally does not exist. Compared to weekly sampling of set-up error in each and every patient, which may only ensure that just those sampled sessions were set-up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces the control costs. Copyright © 2009 SECA. Published by Elsevier Espana. All rights reserved.
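A minimal sketch of the X-bar charting step described above, assuming subgroups of three set-up error measurements and the standard control-chart constant for that subgroup size (the numbers are simulated, not the clinic's data):

```python
import numpy as np

# Illustrative sketch, not the published protocol: X-bar control limits for
# set-up errors measured in subgroups of 3 patients. A2 = 1.023 is the
# standard control-chart constant for subgroup size n = 3; the data are simulated.
rng = np.random.default_rng(1)
subgroups = rng.normal(loc=0.0, scale=2.0, size=(30, 3))   # set-up errors in mm

xbar = subgroups.mean(axis=1)                   # subgroup means
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)
grand_mean, mean_range = xbar.mean(), ranges.mean()

A2 = 1.023                                      # control-chart constant for n = 3
ucl = grand_mean + A2 * mean_range
lcl = grand_mean - A2 * mean_range
out_of_control = np.flatnonzero((xbar > ucl) | (xbar < lcl))
print(f"UCL = {ucl:.2f} mm, LCL = {lcl:.2f} mm, flagged subgroups: {out_of_control}")
```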
Plessen, Kerstin J.; Allen, Elena A.; Eichele, Heike; van Wageningen, Heidi; Høvik, Marie Farstad; Sørensen, Lin; Worren, Marius Kalsås; Hugdahl, Kenneth; Eichele, Tom
2016-01-01
Background We examined the blood-oxygen level–dependent (BOLD) activation in brain regions that signal errors and their association with intraindividual behavioural variability and adaptation to errors in children with attention-deficit/hyperactivity disorder (ADHD). Methods We acquired functional MRI data during a Flanker task in medication-naive children with ADHD and healthy controls aged 8–12 years and analyzed the data using independent component analysis. For components corresponding to performance monitoring networks, we compared activations across groups and conditions and correlated them with reaction times (RT). Additionally, we analyzed post-error adaptations in behaviour and motor component activations. Results We included 25 children with ADHD and 29 controls in our analysis. Children with ADHD displayed reduced activation to errors in cingulo-opercular regions and higher RT variability, but no differences in interference control. Larger BOLD amplitude to error trials significantly predicted reduced RT variability across all participants. Neither group showed evidence of post-error response slowing; however, post-error adaptation in motor networks was significantly reduced in children with ADHD. This adaptation was inversely related to activation of the right-lateralized ventral attention network (VAN) on error trials and to task-driven connectivity between the cingulo-opercular system and the VAN. Limitations Our study was limited by the modest sample size and imperfect matching across groups. Conclusion Our findings show a deficit in cingulo-opercular activation in children with ADHD that could relate to reduced signalling for errors. Moreover, the reduced orienting of the VAN signal may mediate deficient post-error motor adaptations. Pinpointing general performance monitoring problems to specific brain regions and operations in error processing may help to guide the targets of future treatments for ADHD. PMID:26441332
Longitudinal Growth Curves of Brain Function Underlying Inhibitory Control through Adolescence
Foran, William; Velanova, Katerina; Luna, Beatriz
2013-01-01
Neuroimaging studies suggest that developmental improvements in inhibitory control are primarily supported by changes in prefrontal executive function. However, studies are contradictory with respect to how activation in prefrontal regions changes with age, and they have yet to analyze longitudinal data using growth curve modeling, which allows characterization of dynamic processes of developmental change, individual differences in growth trajectories, and variables that predict any interindividual variability in trajectories. In this study, we present growth curves modeled from longitudinal fMRI data collected over 302 visits (across ages 9 to 26 years) from 123 human participants. Brain regions within circuits known to support motor response control, executive control, and error processing (i.e., aspects of inhibitory control) were investigated. Findings revealed distinct developmental trajectories for regions within each circuit and indicated that a hierarchical pattern of maturation of brain activation supports the gradual emergence of adult-like inhibitory control. Mean growth curves of activation in motor response control regions revealed no changes with age, although interindividual variability decreased with development, indicating equifinality with maturity. Activation in certain executive control regions decreased with age until adolescence, and variability was stable across development. Error-processing activation in the dorsal anterior cingulate cortex showed continued increases into adulthood and no significant interindividual variability across development, and was uniquely associated with task performance. These findings provide evidence that continued maturation of error-processing abilities supports the protracted development of inhibitory control over adolescence, while motor response control regions provide early-maturing foundational capacities and suggest that some executive control regions may buttress immature networks as error processing continues to mature. PMID:24227721
Sanderson, Eleanor; Macdonald-Wallis, Corrie; Davey Smith, George
2018-01-01
Abstract Background Negative control exposure studies are increasingly being used in epidemiological studies to strengthen causal inference regarding an exposure-outcome association when unobserved confounding is thought to be present. Negative control exposure studies contrast the magnitude of association of the negative control, which has no causal effect on the outcome but is associated with the unmeasured confounders in the same way as the exposure, with the magnitude of the association of the exposure with the outcome. A markedly larger effect of the exposure on the outcome than the negative control on the outcome strengthens inference that the exposure has a causal effect on the outcome. Methods We investigate the effect of measurement error in the exposure and negative control variables on the results obtained from a negative control exposure study. We do this in models with continuous and binary exposure and negative control variables using analysis of the bias of the estimated coefficients and Monte Carlo simulations. Results Our results show that measurement error in either the exposure or negative control variables can bias the estimated results from the negative control exposure study. Conclusions Measurement error is common in the variables used in epidemiological studies; these results show that negative control exposure studies cannot be used to precisely determine the size of the effect of the exposure variable, or adequately adjust for unobserved confounding; however, they can be used as part of a body of evidence to aid inference as to whether a causal effect of the exposure on the outcome is present. PMID:29088358
Sanderson, Eleanor; Macdonald-Wallis, Corrie; Davey Smith, George
2018-04-01
Negative control exposure studies are increasingly being used in epidemiological studies to strengthen causal inference regarding an exposure-outcome association when unobserved confounding is thought to be present. Negative control exposure studies contrast the magnitude of association of the negative control, which has no causal effect on the outcome but is associated with the unmeasured confounders in the same way as the exposure, with the magnitude of the association of the exposure with the outcome. A markedly larger effect of the exposure on the outcome than of the negative control on the outcome strengthens inference that the exposure has a causal effect on the outcome. We investigate the effect of measurement error in the exposure and negative control variables on the results obtained from a negative control exposure study. We do this in models with continuous and binary exposure and negative control variables, using analysis of the bias of the estimated coefficients and Monte Carlo simulations. Our results show that measurement error in either the exposure or negative control variables can bias the estimated results from the negative control exposure study. Measurement error is common in the variables used in epidemiological studies; these results show that negative control exposure studies cannot be used to precisely determine the size of the effect of the exposure variable, or adequately adjust for unobserved confounding; however, they can be used as part of a body of evidence to aid inference as to whether a causal effect of the exposure on the outcome is present.
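The following Monte Carlo sketch illustrates the general phenomenon the authors analyze, under an assumed linear data-generating model (it is not their simulation code): classical measurement error attenuates both the exposure-outcome and negative-control-outcome associations, which can distort the comparison a negative control study relies on.

```python
import numpy as np

# Assumed linear data-generating model: U confounds both X (true causal effect
# on Y) and the negative control NC (no causal effect). Classical measurement
# error is then added to X and NC and the crude associations are compared.
rng = np.random.default_rng(2)
n = 100_000
U = rng.normal(size=n)                       # unmeasured confounder
X = U + rng.normal(size=n)                   # exposure, causal effect 0.5 on Y
NC = U + rng.normal(size=n)                  # negative control exposure
Y = 0.5 * X + U + rng.normal(size=n)

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

X_err = X + rng.normal(scale=1.0, size=n)    # classical measurement error
NC_err = NC + rng.normal(scale=1.0, size=n)

print("exposure association:        ", slope(X, Y), "->", slope(X_err, Y))
print("negative control association:", slope(NC, Y), "->", slope(NC_err, Y))
```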
Fitzgerald, John S; Johnson, LuAnn; Tomkinson, Grant; Stein, Jesse; Roemmich, James N
2018-05-01
Mechanography during the vertical jump may enhance screening and help determine mechanistic causes underlying changes in physical performance. Utility of jump mechanography for evaluation is limited by scant test-retest reliability data on force-time variables. This study examined the test-retest reliability of eight jump execution variables assessed from mechanography. Thirty-two women (mean±SD: age 20.8 ± 1.3 yr) and 16 men (age 22.1 ± 1.9 yr) attended a familiarization session and two testing sessions, all one week apart. Participants performed two variations of the squat jump, with squat depth either self-selected or controlled to 80° knee flexion using a goniometer. Test-retest reliability was quantified as the systematic error (using effect size between jumps), random error (using coefficients of variation), and test-retest correlations (using intra-class correlation coefficients). Overall, jump execution variables demonstrated acceptable reliability, evidenced by small systematic errors (mean±95%CI: 0.2 ± 0.07), moderate random errors (mean±95%CI: 17.8 ± 3.7%), and very strong test-retest correlations (range: 0.73-0.97). Differences in random errors between controlled and self-selected protocols were negligible (mean±95%CI: 1.3 ± 2.3%). Jump execution variables demonstrated acceptable reliability, with no meaningful differences between the controlled and self-selected jump protocols. To simplify testing, a self-selected jump protocol can be used to assess force-time variables with negligible impact on measurement error.
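For readers unfamiliar with these reliability statistics, the sketch below computes generic versions of them from simulated two-session data; the exact formulas and data handling in the study may differ.

```python
import numpy as np

# Simulated two-session data; generic versions of the statistics named above.
rng = np.random.default_rng(3)
session1 = rng.normal(2.0, 0.3, size=48)            # e.g. a force-time variable
session2 = session1 + rng.normal(0.0, 0.1, size=48) # retest with random error

diff = session2 - session1
pooled_sd = np.concatenate([session1, session2]).std(ddof=1)
effect_size = diff.mean() / pooled_sd               # systematic error (standardized)
typical_error = diff.std(ddof=1) / np.sqrt(2)       # within-subject random error
cv_percent = 100 * typical_error / np.mean([session1.mean(), session2.mean()])
r = np.corrcoef(session1, session2)[0, 1]           # test-retest correlation
print(f"ES = {effect_size:.2f}, CV = {cv_percent:.1f}%, r = {r:.2f}")
```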
Johnson, Reva E; Kording, Konrad P; Hargrove, Levi J; Sensinger, Jonathon W
2017-06-01
In this paper we asked the question: if we artificially raise the variability of torque control signals to match that of EMG, do subjects make similar errors and have similar uncertainty about their movements? We answered this question using two experiments in which subjects used three different control signals: torque, torque+noise, and EMG. First, we measured error on a simple target-hitting task in which subjects received visual feedback only at the end of their movements. We found that even when the signal-to-noise ratio was equal across EMG and torque+noise control signals, EMG resulted in larger errors. Second, we quantified uncertainty by measuring the just-noticeable difference of a visual perturbation. We found that for equal errors, EMG resulted in higher movement uncertainty than both torque and torque+noise. The differences suggest that performance and confidence are influenced by more than just the noisiness of the control signal, and suggest that other factors, such as the user's ability to incorporate feedback and develop accurate internal models, also have significant impacts on the performance and confidence of a person's actions. We theorize that users have difficulty distinguishing between random and systematic errors for EMG control, and future work should examine in more detail the types of errors made with EMG control.
Espino-Hernandez, Gabriela; Gustafson, Paul; Burstyn, Igor
2011-05-14
In epidemiological studies, explanatory variables are frequently subject to measurement error. The aim of this paper is to develop a Bayesian method to correct for measurement error in multiple continuous exposures in individually matched case-control studies. This is a topic that has not been widely investigated. The new method is illustrated using data from an individually matched case-control study of the association between thyroid hormone levels during pregnancy and exposure to perfluorinated acids. The objective of the motivating study was to examine the risk of maternal hypothyroxinemia due to exposure to three perfluorinated acids measured on a continuous scale. Results from the proposed method are compared with those obtained from a naive analysis. Using a Bayesian approach, the developed method considers a classical measurement error model for the exposures, as well as the conditional logistic regression likelihood as the disease model, together with a random-effect exposure model. Proper and diffuse prior distributions are assigned, and results from a quality control experiment are used to estimate the perfluorinated acids' measurement error variability. As a result, posterior distributions and 95% credible intervals of the odds ratios are computed. A sensitivity analysis of the method's performance in this particular application with different measurement error variability was performed. The proposed Bayesian method to correct for measurement error is feasible and can be implemented using statistical software. For the study on perfluorinated acids, a comparison of the inferences which are corrected for measurement error to those which ignore it indicates that little adjustment is manifested for the level of measurement error actually exhibited in the exposures. Nevertheless, a sensitivity analysis shows that more substantial adjustments arise if larger measurement errors are assumed. In individually matched case-control studies, the use of conditional logistic regression likelihood as a disease model in the presence of measurement error in multiple continuous exposures can be justified by having a random-effect exposure model. The proposed method can be successfully implemented in WinBUGS to correct individually matched case-control studies for several mismeasured continuous exposures under a classical measurement error model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yunlong; Wang, Aiping; Guo, Lei
This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with that of mean-square-error minimization in the simulation results.
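A small sketch of the Parzen-window entropy estimate that this kind of controller minimizes (Renyi's quadratic entropy with a Gaussian kernel); the kernel width and error samples below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Parzen-window estimate of Renyi's quadratic entropy of the tracking errors:
# the "information potential" is the mean of a Gaussian kernel (width sigma*sqrt(2))
# over all pairwise error differences, and the entropy is its negative log.
def quadratic_error_entropy(errors, sigma=0.5):
    e = np.asarray(errors, dtype=float)
    diffs = e[:, None] - e[None, :]
    kernel = np.exp(-diffs**2 / (4 * sigma**2)) / (2 * sigma * np.sqrt(np.pi))
    return -np.log(kernel.mean())

rng = np.random.default_rng(4)
print(quadratic_error_entropy(rng.normal(0, 1.0, 200)))   # broad error distribution
print(quadratic_error_entropy(rng.normal(0, 0.2, 200)))   # tighter errors -> lower entropy
```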
Noise-Enhanced Eversion Force Sense in Ankles With or Without Functional Instability.
Ross, Scott E; Linens, Shelley W; Wright, Cynthia J; Arnold, Brent L
2015-08-01
Force sense impairments are associated with functional ankle instability. Stochastic resonance stimulation (SRS) may have implications for correcting these force sense deficits. To determine if SRS improved force sense. Case-control study. Research laboratory. Twelve people with functional ankle instability (age = 23 ± 3 years, height = 174 ± 8 cm, mass = 69 ± 10 kg) and 12 people with stable ankles (age = 22 ± 2 years, height = 170 ± 7 cm, mass = 64 ± 10 kg). The eversion force sense protocol required participants to reproduce a targeted muscle tension (10% of maximum voluntary isometric contraction). This protocol was assessed under SRSon and SRSoff (control) conditions. During SRSon, random subsensory mechanical noise was applied to the lower leg at a customized optimal intensity for each participant. Constant error, absolute error, and variable error measures quantified accuracy, overall performance, and consistency of force reproduction, respectively. With SRS, we observed main effects for force sense absolute error (SRSoff = 1.01 ± 0.67 N, SRSon = 0.69 ± 0.42 N) and variable error (SRSoff = 1.11 ± 0.64 N, SRSon = 0.78 ± 0.56 N) (P < .05). No other main effects or treatment-by-group interactions were found (P > .05). Although SRS reduced the overall magnitude (absolute error) and variability (variable error) of force sense errors, it had no effect on the directionality (constant error). Clinically, SRS may enhance muscle tension ability, which could have treatment implications for ankle stability.
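The three error measures named above have standard definitions; a minimal sketch with made-up force-reproduction trials:

```python
import numpy as np

# Made-up force-reproduction trials against an illustrative 10% MVIC target.
target = 10.0
trials = np.array([10.8, 9.4, 10.3, 11.1, 9.7])

errors = trials - target
constant_error = errors.mean()          # directional bias (accuracy)
absolute_error = np.abs(errors).mean()  # overall error magnitude
variable_error = errors.std(ddof=1)     # trial-to-trial consistency
print(constant_error, absolute_error, variable_error)
```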
Does the brain use sliding variables for the control of movements?
Hanneton, S; Berthoz, A; Droulez, J; Slotine, J J
1997-12-01
Delays in the transmission of sensory and motor information prevent errors from being instantaneously available to the central nervous system (CNS) and can reduce the stability of a closed-loop control strategy. On the other hand, the use of a pure feedforward control (inverse dynamics) requires a perfect knowledge of the dynamic behavior of the body and of manipulated objects. Sensory feedback is essential both to accommodate unexpected errors and events and to compensate for uncertainties about the dynamics of the body. Experimental observations concerning the control of posture, gaze and limbs have shown that the CNS certainly uses a combination of closed-loop and open-loop control. Feedforward components of movement, such as eye saccades, occur intermittently and present a stereotyped kinematic profile. In visuo-manual tracking tasks, hand movements exhibit velocity peaks that occur intermittently. When a delay or slow dynamics is inserted in the visuo-manual control loop, intermittent step-and-hold movements appear clearly in the hand trajectory. In this study, we investigated strategies used by human subjects involved in the control of a particular dynamic system. We found strong evidence for substantial nonlinearities in the commands produced. The presence of step-and-hold movements seemed to be the major source of nonlinearities in the control loop. Furthermore, the stereotyped ballistic-like kinematics of these rapid and corrective movements suggests that they were produced in an open-loop way by the CNS. We analyzed the generation of ballistic movements in the light of sliding control theory, assuming that they occurred when a sliding variable exceeded a constant threshold. In this framework, a sliding variable is defined as a composite variable (a combination of the instantaneous tracking error and its temporal derivatives) that fulfills a specific stability criterion. Based on this hypothesis and on the assumption of a constant reaction time, the tracking error and its derivatives should be correlated at a particular time lag before movement onset. A peak of correlation was found for a physiologically plausible reaction time, corresponding to a stable composite variable. The direction and amplitude of the ongoing stereotyped movements also seemed to be adjusted in order to minimize this variable. These findings suggest that, during visually guided movements, human subjects attempt to minimize such a composite variable and not the instantaneous error. This minimization seems to be obtained by the execution of stereotyped corrective movements.
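A toy sketch of the sliding-variable hypothesis described above, with an assumed composite variable s = de/dt + lambda*e and an arbitrary threshold (lambda, the threshold, and the simulated error trace are illustrative, not the study's estimates):

```python
import numpy as np

# The composite (sliding) variable s = de/dt + lambda*e is monitored, and a
# ballistic correction is assumed to be triggered whenever |s| crosses a fixed
# threshold; here we only detect those trigger times on a simulated error trace.
dt, lam, threshold = 0.01, 4.0, 0.8
t = np.arange(0.0, 5.0, dt)
rng = np.random.default_rng(5)
# simulated tracking error: slow drift plus a random-walk component
e = 0.3 * np.sin(1.5 * t) + 0.05 * rng.standard_normal(t.size).cumsum() * np.sqrt(dt)

e_dot = np.gradient(e, dt)
s = e_dot + lam * e                          # sliding (composite) variable
crossings = np.flatnonzero((np.abs(s[1:]) > threshold) & (np.abs(s[:-1]) <= threshold))
print("corrective movements triggered at t =", np.round(t[crossings + 1], 2))
```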
NASA Astrophysics Data System (ADS)
Cao, Lu; Li, Hengnian
2016-10-01
In the satellite attitude estimation problem, serious model errors always exist and hinder the estimation performance of the attitude determination and control system (ADCS), especially for a small satellite with low-precision sensors. To deal with this problem, a new algorithm for attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. The strategy is based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time and estimates the model errors on-line in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noise; it therefore has the advantage of handling various kinds of model errors or noises. The UT sampling strategy is anticipated to further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low-precision sensors than the traditional unscented Kalman filter (UKF).
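A generic unscented-transform sketch (the sampling step the UPVSF shares with the UKF, not the UPVSF itself); the parameter choices, state, and test function are assumptions for illustration:

```python
import numpy as np

# Generic unscented transform: build 2n+1 sigma points from a mean and
# covariance, push them through a nonlinear function, and recover the
# transformed mean and covariance from weighted sums.
def unscented_transform(x_mean, P, f, alpha=1.0, beta=2.0, kappa=0.0):
    n = x_mean.size
    lam = alpha**2 * (n + kappa) - n
    sqrtP = np.linalg.cholesky((n + lam) * P)
    sigma = np.vstack([x_mean, x_mean + sqrtP.T, x_mean - sqrtP.T])  # 2n+1 points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    y = np.array([f(p) for p in sigma])
    y_mean = wm @ y
    y_cov = (wc[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov

m, C = unscented_transform(np.array([0.1, -0.2]), np.eye(2) * 0.01, lambda x: np.sin(x))
print(m, C)
```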
Hood, Donald C; Anderson, Susan C; Wall, Michael; Raza, Ali S; Kardon, Randy H
2009-09-01
Retinal nerve fiber layer (RNFL) thickness and visual field loss data from patients with glaucoma were analyzed in the context of a model, to better understand individual variation in structure versus function. Optical coherence tomography (OCT) RNFL thickness and standard automated perimetry (SAP) visual field loss were measured in the arcuate regions of one eye of 140 patients with glaucoma and 82 normal control subjects. An estimate of within-individual (measurement) error was obtained by repeat measures made on different days within a short period in 34 patients and 22 control subjects. A linear model, previously shown to describe the general characteristics of the structure-function data, was extended to predict the variability in the data. For normal control subjects, between-individual error (individual differences) accounted for 87% and 71% of the total variance in OCT and SAP measures, respectively. SAP within-individual error increased and then decreased with increased SAP loss, whereas OCT error remained constant. The linear model with variability (LMV) described much of the variability in the data. However, 12.5% of the patients' points fell outside the 95% boundary. An examination of these points revealed factors that can contribute to the overall variability in the data. These factors include epiretinal membranes, edema, individual variation in field-to-disc mapping, and the location of blood vessels and degree to which they are included by the RNFL algorithm. The model and the partitioning of within- versus between-individual variability helped elucidate the factors contributing to the considerable variability in the structure-versus-function data.
A neural fuzzy controller learning by fuzzy error propagation
NASA Technical Reports Server (NTRS)
Nauck, Detlef; Kruse, Rudolf
1992-01-01
In this paper, we describe a procedure to integrate techniques for the adaptation of membership functions in a linguistic variable based fuzzy control environment by using neural network learning principles. This is an extension to our work. We solve this problem by defining a fuzzy error that is propagated back through the architecture of our fuzzy controller. According to this fuzzy error and the strength of its antecedent each fuzzy rule determines its amount of error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. By this we get an unsupervised learning technique that enables a fuzzy controller to adapt to a control task by knowing just about the global state and the fuzzy error.
Topographic analysis of individual activation patterns in medial frontal cortex in schizophrenia
Stern, Emily R.; Welsh, Robert C.; Fitzgerald, Kate D.; Taylor, Stephan F.
2009-01-01
Individual variability in the location of neural activations poses a unique problem for neuroimaging studies employing group averaging techniques to investigate the neural bases of cognitive and emotional functions. This may be especially challenging for studies examining patient groups, which often have limited sample sizes and increased intersubject variability. In particular, medial frontal cortex (MFC) dysfunction is thought to underlie performance monitoring dysfunction among patients with schizophrenia, yet previous studies using group averaging to compare schizophrenic patients to controls have yielded conflicting results. To examine individual activations in MFC associated with two aspects of performance monitoring, interference and error processing, functional magnetic resonance imaging (fMRI) data were acquired while 17 patients with schizophrenia and 21 healthy controls performed an event-related version of the multi-source interference task. Comparisons of averaged data revealed few differences between the groups. By contrast, topographic analysis of individual activations for errors showed that control subjects exhibited activations spanning across both posterior and anterior regions of MFC while patients primarily activated posterior MFC, possibly reflecting an impaired emotional response to errors in schizophrenia. This discrepancy between topographic and group-averaged results may be due to the significant dispersion among individual activations, particularly among healthy controls, highlighting the importance of considering intersubject variability when interpreting the medial frontal response to error commission. PMID:18819107
Chieffi, Sergio; Messina, Giovanni; Messina, Antonietta; Villano, Ines; Monda, Vincenzo; Ambra, Ferdinando Ivano; Garofalo, Elisabetta; Romano, Felice; Mollica, Maria Pina; Monda, Marcellino; Iavarone, Alessandro
2017-01-01
Previous studies suggested that the occipitoparietal stream orients attention toward the near/lower space and is involved in immediate reaching, whereas the occipitotemporal stream orients attention toward the far/upper space and is involved in delayed reaching. In the present study, we investigated the role of the occipitotemporal stream in attention orienting and delayed reaching in a patient (GP) with bilateral damage to the occipitoparietal areas and optic ataxia. GP and healthy controls took part in three experiments. In experiment 1, the participants bisected lines oriented along radial, vertical, and horizontal axes. GP bisected radial lines farther, and vertical lines higher, than the controls, consistent with an attentional bias toward the far/upper space and neglect of the near/lower space. Experiment 2 consisted of two tasks: (1) an immediate reaching task, in which GP reached target locations under visual control, and (2) a delayed visual reaching task, in which GP and controls were asked to reach to remembered target locations that had been presented visually. We measured constant and variable distance and direction errors. In the immediate reaching task, GP accurately reached target locations. In the delayed reaching task, GP overshot remembered target locations, whereas the controls undershot them. Furthermore, variable errors were greater in GP than in the controls. In experiment 3, GP and controls performed a delayed proprioceptive reaching task. Constant reaching errors did not differ between GP and the controls. However, variable direction errors were greater in GP than in the controls. We suggest that the occipitoparietal damage, and the relatively intact occipitotemporal region, produced in GP an attentional orienting bias toward the far/upper space (experiment 1). In turn, the attentional bias selectively shifted remembered visual (experiment 2), but not proprioceptive (experiment 3), target locations toward the far space. As a whole, these findings further support the hypothesis of an involvement of the occipitotemporal stream in delayed reaching. Furthermore, the observation that in both delayed reaching tasks the variable errors were greater in GP than in the controls suggested that optic ataxia involves not only a visuo-motor but also a proprioceptivo-motor integration deficit. PMID:28620345
Dimensional control of die castings
NASA Astrophysics Data System (ADS)
Karve, Aniruddha Ajit
The demand for net shape die castings, which require little or no machining, is steadily increasing. Stringent customer requirements are forcing die casters to deliver high quality castings in increasingly short lead times. Dimensional conformance to customer specifications is an inherent part of die casting quality. The dimensional attributes of a die casting are essentially dependent upon many factors--the quality of the die and the degree of control over the process variables being the two major sources of dimensional error in die castings. This study focused on investigating the nature and the causes of dimensional error in die castings. The two major components of dimensional error, i.e., dimensional variability and die allowance, were studied. The major effort of this study was to qualitatively and quantitatively study the effects of casting geometry and process variables on die casting dimensional variability and die allowance. This was accomplished by detailed dimensional data collection at production die casting sites. Robust feature characterization schemes were developed to describe complex casting geometry in quantitative terms. Empirical modeling was utilized to quantify the effects of the casting variables on dimensional variability and die allowance for die casting features. A number of casting geometry and process variables were found to affect dimensional variability in die castings. The dimensional variability was evaluated by comparisons with current published dimensional tolerance standards. The casting geometry was found to play a significant role in influencing the die allowance of the features measured. The predictive models developed for dimensional variability and die allowance were evaluated to test their effectiveness. Finally, the relative impact of all the components of dimensional error in die castings was put into perspective, and general guidelines for effective dimensional control in the die casting plant were laid out. The results of this study will contribute to enhancement of dimensional quality and lead time compression in the die casting industry, thus making it competitive with other net shape manufacturing processes.
Effects of robotically modulating kinematic variability on motor skill learning and motivation
Reinkensmeyer, David J.
2015-01-01
It is unclear how the variability of kinematic errors experienced during motor training affects skill retention and motivation. We used force fields produced by a haptic robot to modulate the kinematic errors of 30 healthy adults during a period of practice in a virtual simulation of golf putting. On day 1, participants became relatively skilled at putting to a near and far target by first practicing without force fields. On day 2, they warmed up at the task without force fields, then practiced with force fields that either reduced or augmented their kinematic errors and were finally assessed without the force fields active. On day 3, they returned for a long-term assessment, again without force fields. A control group practiced without force fields. We quantified motor skill as the variability in impact velocity at which participants putted the ball. We quantified motivation using a self-reported, standardized scale. Only individuals who were initially less skilled benefited from training; for these people, practicing with reduced kinematic variability improved skill more than practicing in the control condition. This reduced kinematic variability also improved self-reports of competence and satisfaction. Practice with increased kinematic variability worsened these self-reports as well as enjoyment. These negative motivational effects persisted on day 3 in a way that was uncorrelated with actual skill. In summary, robotically reducing kinematic errors in a golf putting training session improved putting skill more for less skilled putters. Robotically increasing kinematic errors had no performance effect, but decreased motivation in a persistent way. PMID:25673732
Effects of robotically modulating kinematic variability on motor skill learning and motivation.
Duarte, Jaime E; Reinkensmeyer, David J
2015-04-01
It is unclear how the variability of kinematic errors experienced during motor training affects skill retention and motivation. We used force fields produced by a haptic robot to modulate the kinematic errors of 30 healthy adults during a period of practice in a virtual simulation of golf putting. On day 1, participants became relatively skilled at putting to a near and far target by first practicing without force fields. On day 2, they warmed up at the task without force fields, then practiced with force fields that either reduced or augmented their kinematic errors and were finally assessed without the force fields active. On day 3, they returned for a long-term assessment, again without force fields. A control group practiced without force fields. We quantified motor skill as the variability in impact velocity at which participants putted the ball. We quantified motivation using a self-reported, standardized scale. Only individuals who were initially less skilled benefited from training; for these people, practicing with reduced kinematic variability improved skill more than practicing in the control condition. This reduced kinematic variability also improved self-reports of competence and satisfaction. Practice with increased kinematic variability worsened these self-reports as well as enjoyment. These negative motivational effects persisted on day 3 in a way that was uncorrelated with actual skill. In summary, robotically reducing kinematic errors in a golf putting training session improved putting skill more for less skilled putters. Robotically increasing kinematic errors had no performance effect, but decreased motivation in a persistent way. Copyright © 2015 the American Physiological Society.
Poston, Brach; Van Gemmert, Arend W.A.; Sharma, Siddharth; Chakrabarti, Somesh; Zavaremi, Shahrzad H.; Stelmach, George
2013-01-01
The minimum variance theory proposes that motor commands are corrupted by signal-dependent noise and smooth trajectories with low noise levels are selected to minimize endpoint error and endpoint variability. The purpose of the study was to determine the contribution of trajectory smoothness to the endpoint accuracy and endpoint variability of rapid multi-joint arm movements. Young and older adults performed arm movements (4 blocks of 25 trials) as fast and as accurately as possible to a target with the right (dominant) arm. Endpoint accuracy and endpoint variability along with trajectory smoothness and error were quantified for each block of trials. Endpoint error and endpoint variance were greater in older adults compared with young adults, but decreased at a similar rate with practice for the two age groups. The greater endpoint error and endpoint variance exhibited by older adults were primarily due to impairments in movement extent control and not movement direction control. The normalized jerk was similar for the two age groups, but was not strongly associated with endpoint error or endpoint variance for either group. However, endpoint variance was strongly associated with endpoint error for both the young and older adults. Finally, trajectory error was similar for both groups and was weakly associated with endpoint error for the older adults. The findings are not consistent with the predictions of the minimum variance theory, but support and extend previous observations that movement trajectories and endpoints are planned independently. PMID:23584101
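Trajectory smoothness in such studies is often quantified with a dimensionless (normalized) jerk; a sketch of one common formulation applied to a synthetic minimum-jerk trajectory follows (the study's exact normalization may differ):

```python
import numpy as np

# Dimensionless jerk of a sampled 1-D endpoint trajectory (one common
# normalization; the study's exact formula may differ). Applied here to a
# synthetic minimum-jerk movement of 0.2 m in 0.5 s.
def normalized_jerk(position, dt):
    velocity = np.gradient(position, dt)
    accel = np.gradient(velocity, dt)
    jerk = np.gradient(accel, dt)
    duration = dt * (len(position) - 1)
    amplitude = abs(position[-1] - position[0])
    return np.sqrt(0.5 * np.sum(jerk**2) * dt * duration**5 / amplitude**2)

t = np.linspace(0.0, 0.5, 251)
tau = t / 0.5
min_jerk = 0.2 * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
print(normalized_jerk(min_jerk, t[1] - t[0]))
```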
Trial-to-trial adaptation in control of arm reaching and standing posture
Pienciak-Siewert, Alison; Horan, Dylan P.
2016-01-01
Classical theories of motor learning hypothesize that adaptation is driven by sensorimotor error; this is supported by studies of arm and eye movements that have shown that trial-to-trial adaptation increases with error. Studies of postural control have shown that anticipatory postural adjustments increase with the magnitude of a perturbation. However, differences in adaptation have been observed between the two modalities, possibly due to either the inherent instability or sensory uncertainty in standing posture. Therefore, we hypothesized that trial-to-trial adaptation in posture should be driven by error, similar to what is observed in arm reaching, but the nature of the relationship between error and adaptation may differ. Here we investigated trial-to-trial adaptation of arm reaching and postural control concurrently; subjects made reaching movements in a novel dynamic environment of varying strengths, while standing and holding the handle of a force-generating robotic arm. We found that error and adaptation increased with perturbation strength in both arm and posture. Furthermore, in both modalities, adaptation showed a significant correlation with error magnitude. Our results indicate that adaptation scales proportionally with error in the arm and near proportionally in posture. In posture only, adaptation was not sensitive to small error sizes, which were similar in size to errors experienced in unperturbed baseline movements due to inherent variability. This finding may be explained as an effect of uncertainty about the source of small errors. Our findings suggest that in rehabilitation, postural error size should be considered relative to the magnitude of inherent movement variability. PMID:27683888
Trial-to-trial adaptation in control of arm reaching and standing posture.
Pienciak-Siewert, Alison; Horan, Dylan P; Ahmed, Alaa A
2016-12-01
Classical theories of motor learning hypothesize that adaptation is driven by sensorimotor error; this is supported by studies of arm and eye movements that have shown that trial-to-trial adaptation increases with error. Studies of postural control have shown that anticipatory postural adjustments increase with the magnitude of a perturbation. However, differences in adaptation have been observed between the two modalities, possibly due to either the inherent instability or sensory uncertainty in standing posture. Therefore, we hypothesized that trial-to-trial adaptation in posture should be driven by error, similar to what is observed in arm reaching, but the nature of the relationship between error and adaptation may differ. Here we investigated trial-to-trial adaptation of arm reaching and postural control concurrently; subjects made reaching movements in a novel dynamic environment of varying strengths, while standing and holding the handle of a force-generating robotic arm. We found that error and adaptation increased with perturbation strength in both arm and posture. Furthermore, in both modalities, adaptation showed a significant correlation with error magnitude. Our results indicate that adaptation scales proportionally with error in the arm and near proportionally in posture. In posture only, adaptation was not sensitive to small error sizes, which were similar in size to errors experienced in unperturbed baseline movements due to inherent variability. This finding may be explained as an effect of uncertainty about the source of small errors. Our findings suggest that in rehabilitation, postural error size should be considered relative to the magnitude of inherent movement variability. Copyright © 2016 the American Physiological Society.
State-of-the-Art pH Electrode Quality Control for Measurements of Acidic, Low Ionic Strength Waters.
ERIC Educational Resources Information Center
Stapanian, Martin A.; Metcalf, Richard C.
1990-01-01
Described is the derivation of the relationship between the pH measurement error and the resulting percentage error in hydrogen ion concentration including the use of variable activity coefficients. The relative influence of the ionic strength of the solution on the percentage error is shown. (CW)
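The core relationship is easy to reproduce: holding the activity coefficient fixed (the article additionally treats it as variable), a pH reading error of delta corresponds to a fractional hydrogen-ion error of 10^(-delta) - 1, independent of the true pH. A short worked sketch:

```python
# Holding the activity coefficient constant, the fractional error in [H+]
# caused by a pH reading error delta is 10**(-delta) - 1, for any true pH.
true_pH = 4.60                      # illustrative acidic, low ionic strength sample
for delta in (0.01, 0.05, 0.10):
    measured = true_pH + delta
    pct_error = (10 ** (true_pH - measured) - 1) * 100
    print(f"pH error {delta:+.2f} -> hydrogen ion error {pct_error:+.1f}%")
```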
Therrien, Amanda S; Wolpert, Daniel M; Bastian, Amy J
2016-01-01
Reinforcement and error-based processes are essential for motor learning, with the cerebellum thought to be required only for the error-based mechanism. Here we examined learning and retention of a reaching skill under both processes. Control subjects learned similarly from reinforcement and error-based feedback, but showed much better retention under reinforcement. To apply reinforcement to cerebellar patients, we developed a closed-loop reinforcement schedule in which task difficulty was controlled based on recent performance. This schedule produced substantial learning in cerebellar patients and controls. Cerebellar patients varied in their learning under reinforcement but fully retained what was learned. In contrast, they showed complete lack of retention in error-based learning. We developed a mechanistic model of the reinforcement task and found that learning depended on a balance between exploration variability and motor noise. While the cerebellar and control groups had similar exploration variability, the patients had greater motor noise and hence learned less. Our results suggest that cerebellar damage indirectly impairs reinforcement learning by increasing motor noise, but does not interfere with the reinforcement mechanism itself. Therefore, reinforcement can be used to learn and retain novel skills, but optimal reinforcement learning requires a balance between exploration variability and motor noise. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain.
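A heavily simplified toy model in the spirit of the balance described above (the update rule, reward criterion, and all parameters are assumptions, not the authors' model): exploration noise drives learning from binary reward, while added motor noise makes the reward less informative about the exploratory perturbation and slows learning.

```python
import numpy as np

# Toy reinforcement learner: the aim is nudged toward exploratory perturbations
# that were followed by "success" (outcome closer to the target than the
# current aim). Motor noise corrupts the outcome, making the binary reward a
# poorer guide to the exploration and slowing learning.
def simulate(motor_sd, explore_sd=1.0, target=5.0, trials=300, lr=0.8, seed=0):
    rng = np.random.default_rng(seed)
    aim = 0.0
    for _ in range(trials):
        exploration = rng.normal(0.0, explore_sd)
        outcome = aim + exploration + rng.normal(0.0, motor_sd)
        if abs(outcome - target) < abs(aim - target):   # binary reward
            aim += lr * exploration                     # reinforce that exploration
    return aim

print("low motor noise: ", simulate(motor_sd=0.5))
print("high motor noise:", simulate(motor_sd=3.0))
```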
Therrien, Amanda S.; Wolpert, Daniel M.
2016-01-01
Abstract See Miall and Galea (doi: 10.1093/awv343 ) for a scientific commentary on this article. Reinforcement and error-based processes are essential for motor learning, with the cerebellum thought to be required only for the error-based mechanism. Here we examined learning and retention of a reaching skill under both processes. Control subjects learned similarly from reinforcement and error-based feedback, but showed much better retention under reinforcement. To apply reinforcement to cerebellar patients, we developed a closed-loop reinforcement schedule in which task difficulty was controlled based on recent performance. This schedule produced substantial learning in cerebellar patients and controls. Cerebellar patients varied in their learning under reinforcement but fully retained what was learned. In contrast, they showed complete lack of retention in error-based learning. We developed a mechanistic model of the reinforcement task and found that learning depended on a balance between exploration variability and motor noise. While the cerebellar and control groups had similar exploration variability, the patients had greater motor noise and hence learned less. Our results suggest that cerebellar damage indirectly impairs reinforcement learning by increasing motor noise, but does not interfere with the reinforcement mechanism itself. Therefore, reinforcement can be used to learn and retain novel skills, but optimal reinforcement learning requires a balance between exploration variability and motor noise. PMID:26626368
USDA-ARS's Scientific Manuscript database
An aerial variable-rate application system consisting of a DGPS-based guidance system, automatic flow controller, and hydraulically controlled pump/valve was evaluated for response time to rapidly changing flow requirements and accuracy of application. Spray deposition position error was evaluated ...
Carrez, Laurent; Bouchoud, Lucie; Fleury-Souverain, Sandrine; Combescure, Christophe; Falaschi, Ludivine; Sadeghipour, Farshid; Bonnabry, Pascal
2017-03-01
Background and objectives Centralized chemotherapy preparation units have established systematic strategies to avoid errors. Our work aimed to evaluate the accuracy of manual preparations associated with different control methods. Method A simulation study in an operational setting used phenylephrine and lidocaine as markers. Each operator prepared syringes that were controlled using a different method during each of three sessions (no control, visual double-checking, and gravimetric control). Eight reconstitutions and dilutions were prepared in each session, with variable doses and volumes, using different concentrations of stock solutions. Results were analyzed according to qualitative (choice of stock solution) and quantitative criteria (accurate, <5% deviation from the target concentration; weakly accurate, 5%-10%; inaccurate, 10%-30%; wrong, >30% deviation). Results Eleven operators carried out 19 sessions. No final preparation (n = 438) contained a wrong drug. The protocol involving no control failed to detect 1 of 3 dose errors made and double-checking failed to detect 3 of 7 dose errors. The gravimetric control method detected all 5 out of 5 dose errors. The accuracy of the doses measured was equivalent across the control methods (p = 0.63, Kruskal-Wallis). The final preparations ranged from 58% to 60% accurate, 25% to 27% weakly accurate, 14% to 17% inaccurate and 0.9% wrong. A high variability was observed between operators. Discussion Gravimetric control was the only method able to detect all dose errors, but it did not improve dose accuracy. A dose accuracy with <5% deviation cannot always be guaranteed using manual production. Automation should be considered in the future.
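The accuracy categories reported above map directly onto percentage deviation from the target concentration; a small sketch of that classification (threshold values taken from the abstract, example numbers invented):

```python
# Classification of a measured concentration against its target using the
# deviation thresholds reported in the abstract (example numbers are invented).
def classify_deviation(measured, target):
    deviation = abs(measured - target) / target * 100
    if deviation < 5:
        return "accurate"
    if deviation <= 10:
        return "weakly accurate"
    if deviation <= 30:
        return "inaccurate"
    return "wrong"

print(classify_deviation(9.6, 10.0))   # 4% deviation  -> accurate
print(classify_deviation(6.5, 10.0))   # 35% deviation -> wrong
```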
NASA Astrophysics Data System (ADS)
Fletcher, S. J.; Kleist, D.; Ide, K.
2017-12-01
As the resolution of operational global numerical weather prediction systems approaches the meso-scale, the assumption of Gaussianity for the errors at these scales may no longer be valid. It is also true that synoptic variables that are positive definite in behavior, for example humidity, cannot be optimally analyzed with a Gaussian error structure, where the increment could force the full field to go negative. In this presentation we present initial work on implementing a mixed Gaussian-lognormal approximation for the temperature and moisture variables in both the ensemble and variational components of the NCEP GSI hybrid EnVAR. We also lay the foundation for applying the lognormal approximation to cloud-related control variables, to allow a possibly more consistent assimilation of cloudy radiances.
Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan
To evaluate the cost-effectiveness of an automated medication system (AMS) implemented in a Danish hospital setting. An economic evaluation was performed alongside a controlled before-and-after effectiveness study with one control ward and one intervention ward. The primary outcome measure was the number of errors in the medication administration process observed prospectively before and after implementation. To determine the difference in proportion of errors after implementation of the AMS, logistic regression was applied with the presence of error(s) as the dependent variable. Time, group, and interaction between time and group were the independent variables. The cost analysis used the hospital perspective with a short-term incremental costing approach. The total 6-month costs with and without the AMS were calculated as well as the incremental costs. The number of avoided administration errors was related to the incremental costs to obtain the cost-effectiveness ratio expressed as the cost per avoided administration error. The AMS resulted in a statistically significant reduction in the proportion of errors in the intervention ward compared with the control ward. The cost analysis showed that the AMS increased the ward's 6-month cost by €16,843. The cost-effectiveness ratio was estimated at €2.01 per avoided administration error, €2.91 per avoided procedural error, and €19.38 per avoided clinical error. The AMS was effective in reducing errors in the medication administration process at a higher overall cost. The cost-effectiveness analysis showed that the AMS was associated with affordable cost-effectiveness rates. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Multi-muscle FES force control of the human arm for arbitrary goals.
Schearer, Eric M; Liao, Yu-Wei; Perreault, Eric J; Tresch, Matthew C; Memberg, William D; Kirsch, Robert F; Lynch, Kevin M
2014-05-01
We present a method for controlling a neuroprosthesis for a paralyzed human arm using functional electrical stimulation (FES) and characterize the errors of the controller. The subject has surgically implanted electrodes for stimulating muscles in her shoulder and arm. Using input/output data, a model mapping muscle stimulations to isometric endpoint forces measured at the subject's hand was identified. We inverted the model of this redundant and coupled multiple-input multiple-output system by minimizing muscle activations and used this inverse for feedforward control. The magnitude of the total root mean square error over a grid in the volume of achievable isometric endpoint force targets was 11% of the total range of achievable forces. Major sources of error were random error due to trial-to-trial variability and model bias due to nonstationary system properties. Because the muscles working collectively are the actuators of the skeletal system, the quantification of errors in force control guides designs of motion controllers for multi-joint, multi-muscle FES systems that can achieve arbitrary goals.
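A hedged sketch of the inversion idea for such a redundant, coupled system (the force map, bounds, and target are invented, not the identified subject-specific model): given a linear map from muscle activations to isometric endpoint force, one can take the minimum-norm pseudoinverse solution or a bounded least-squares solution that keeps activations physiological.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Invented 2-axis force map for 4 muscles (N per unit activation); the real
# system is identified from stimulation/force data and is higher-dimensional.
A = np.array([[4.0, -1.0,  2.5, 0.5],
              [1.0,  3.5, -0.5, 2.0]])
F_target = np.array([3.0, 2.5])                 # desired isometric endpoint force (N)

u_minnorm = np.linalg.pinv(A) @ F_target        # minimum-norm activations (unbounded)
u_bounded = lsq_linear(A, F_target, bounds=(0.0, 1.0)).x  # activations kept in [0, 1]
print(u_minnorm)
print(u_bounded, "->", A @ u_bounded)
```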
Comparison of Predictive Modeling Methods of Aircraft Landing Speed
NASA Technical Reports Server (NTRS)
Diallo, Ousmane H.
2012-01-01
Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing necessary controller interventions for avoiding separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction model are used to build a multi-regression technique of the response surface equation (RSE). Data obtained from operations of a major airline for a passenger transport aircraft type to the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed error prediction by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to the most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced number of variables cases, the standard deviation of the neural network model's errors represents over a 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state-of-the-art.
General Aviation Avionics Statistics.
1980-12-01
designed to produce standard errors on these variables at levels specified by the FAA. No controls were placed on the standard errors of the non-design... Transponder Encoding Requirement and Mode C Automatic Altitude Reporting Capability (has been deleted); Two-way Radio; VOR or TACAN Receiver. Remaining 42
Which Measures of Online Control Are Least Sensitive to Offline Processes?
de Grosbois, John; Tremblay, Luc
2018-02-28
A major challenge to the measurement of online control is the contamination by offline, planning-based processes. The current study examined the sensitivity of four measures of online control to offline changes in reaching performance induced by prism adaptation and terminal feedback. These measures included the squared Z scores (Z²) of correlations of limb position at 75% movement time versus movement end, variable error, time after peak velocity, and a frequency-domain analysis (pPower). The results indicated that variable error and time after peak velocity were sensitive to the prism adaptation. Furthermore, only the Z² values were biased by the terminal feedback. Ultimately, the current study has demonstrated the sensitivity of limb kinematic measures to offline control processes and that pPower analyses may yield the most suitable measure of online control.
Treleaven, Julia; Takasaki, Hiroshi
2015-02-01
Subjective visual vertical (SVV) assesses visual dependence for spatial orientation via vertical perception testing. Using the computerized rod-and-frame test (CRFT), SVV is thought to be an important measure of cervical proprioception and might be greater in those with whiplash associated disorder (WAD), but to date research findings are inconsistent. The aim of this study was to investigate the most sensitive SVV error measurement to detect group differences between no neck pain control, idiopathic neck pain (INP) and WAD subjects. Cross-sectional study. Neck Disability Index (NDI), Dizziness Handicap Inventory short form (DHIsf) and the average constant error (CE), absolute error (AE), root mean square error (RMSE), and variable error (VE) of the SVV were obtained from 142 subjects (48 asymptomatic, 36 INP, 42 WAD). The INP group had significantly (p < 0.03) greater VE and RMSE when compared to both the control and WAD groups. There were no differences seen between the WAD and controls. The results demonstrated that people with INP (not WAD) had an altered strategy for maintaining the perception of vertical by increasing variability of performance. This may be due to the complexity of the task. Further, the SVV performance was not related to reported pain or dizziness handicap. These findings are inconsistent with other measures of cervical proprioception in neck pain and more research is required before the SVV can be considered an important measure and utilized clinically. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu
2017-03-01
In-situ intelligent manufacturing for large-volume equipment requires industrial robots with high-accuracy absolute positioning and orientation steering control. Conventional robots mainly employ offline calibration to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly, so offline calibration cannot acquire a robot's actual parameters and control its absolute pose with high accuracy within a large workspace in real time. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six-degrees-of-freedom laser tracking measurement, which adopts comprehensive compensation and correction of differential movement variables. First, the pose steering control system and robot kinematics error model are constructed, and then the pose error compensation mechanism and algorithm are introduced in detail. By accurately acquiring the position and orientation of the robot end-tool, mapping the pose error through the computed Jacobian matrix of the joint variables, and correcting the joint variables, real-time online absolute pose compensation for an industrial robot is implemented in simulations and experimental tests. The average positioning error is 0.048 mm and the orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible and that the online absolute accuracy of the robot is sufficiently enhanced.
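The core differential-correction step described above can be sketched as mapping a measured 6-DOF pose error through the Jacobian pseudoinverse to a joint correction. This is a minimal illustration only; the Jacobian values, units, and function name are assumptions, not the paper's implementation.

```python
import numpy as np

# Minimal sketch: map a measured end-tool pose error to a joint-variable
# correction via the pseudoinverse of the manipulator Jacobian.
def joint_correction(jacobian: np.ndarray, pose_error: np.ndarray) -> np.ndarray:
    """pose_error: 6-vector [dx, dy, dz, drx, dry, drz] from the laser tracker."""
    return np.linalg.pinv(jacobian) @ pose_error

# Hypothetical 6x6 Jacobian of a 6-joint robot at the current configuration.
J = np.eye(6) + 0.05 * np.random.default_rng(1).standard_normal((6, 6))
dx = np.array([0.04, -0.02, 0.01, 0.001, -0.0005, 0.0008])  # assumed mm / rad
dq = joint_correction(J, dx)
print("joint corrections:", dq)
```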
Mitchell, W G; Chavez, J M; Baker, S A; Guzman, B L; Azen, S P
1990-07-01
Maturation of sustained attention was studied in a group of 52 hyperactive elementary school children and 152 controls using a microcomputer-based test formatted to resemble a video game. In nonhyperactive children, both simple and complex reaction time decreased with age, as did variability of response time. Omission errors were extremely infrequent on simple reaction time and decreased with age on the more complex tasks. Commission errors had an inconsistent relationship with age. Hyperactive children were slower, more variable, and made more errors on all segments of the game than did controls. Both motor speed and calculated mental speed were slower in hyperactive children, with greater discrepancy for responses directed to the nondominant hand, suggesting that a selective right hemisphere deficit may be present in hyperactives. A summary score (number of individual game scores above the 95th percentile) of 4 or more detected 60% of hyperactive subjects with a false positive rate of 5%. Agreement with the Matching Familiar Figures Test was 75% in the hyperactive group.
Head repositioning accuracy to neutral: a comparative study of error calculation.
Hill, Robert; Jensen, Pål; Baardsen, Tor; Kulvik, Kristian; Jull, Gwendolen; Treleaven, Julia
2009-02-01
Deficits in cervical proprioception have been identified in subjects with neck pain through the measure of head repositioning accuracy (HRA). Nevertheless, there appears to be no general consensus regarding the measure of error used for calculating HRA. This study investigated four different mathematical methods of measuring error to determine if there were any differences in their ability to discriminate between a control group and subjects with a whiplash-associated disorder. The four methods for measuring cervical joint position error were calculated using a previous data set consisting of 50 subjects with whiplash complaining of dizziness (WAD D), 50 subjects with whiplash not complaining of dizziness (WAD ND) and 50 control subjects. The results indicated that no one measure of HRA uniquely detected or defined the differences between the whiplash and control groups. Constant error (CE) was significantly different between the whiplash and control groups in the extension trials (p<0.05). Absolute errors (AEs) and root mean square errors (RMSEs) demonstrated differences between the two WAD groups in rotation trials (p<0.05). No differences were seen with variable error (VE). The results suggest that a combination of AE (or RMSE) and CE is probably the most suitable measure for analysis of HRA.
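The four error constructs compared above (and in the SVV abstract earlier) follow standard definitions. The sketch below computes them from a set of invented signed repositioning errors; the data are purely illustrative.

```python
import numpy as np

# Illustrative computation of constant, absolute, root mean square, and
# variable error from signed repositioning errors (degrees, made-up values).
errors = np.array([2.1, -1.5, 3.0, 0.5, -2.2, 1.8])

ce = errors.mean()                      # constant error: signed bias
ae = np.abs(errors).mean()              # absolute error: mean magnitude
rmse = np.sqrt((errors ** 2).mean())    # root mean square error
ve = errors.std(ddof=0)                 # variable error: SD about the CE

# Note that RMSE^2 = CE^2 + VE^2, which is why AE/RMSE and CE together
# capture both bias and variability.
print(f"CE={ce:.2f}  AE={ae:.2f}  RMSE={rmse:.2f}  VE={ve:.2f}")
```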
NASA Technical Reports Server (NTRS)
Hamer, H. A.; Johnson, K. G.
1986-01-01
An analysis was performed to determine the effects of model error on the control of a large flexible space antenna. Control was achieved by employing two three-axis control-moment gyros (CMG's) located on the antenna column. State variables were estimated by including an observer in the control loop that used attitude and attitude-rate sensors on the column. Errors were assumed to exist in the individual model parameters: modal frequency, modal damping, mode slope (control-influence coefficients), and moment of inertia. Their effects on control-system performance were analyzed either for (1) nulling initial disturbances in the rigid-body modes, or (2) nulling initial disturbances in the first three flexible modes. The study includes the effects on stability, time to null, and control requirements (defined as maximum torque and total momentum), as well as on the accuracy of obtaining initial estimates of the disturbances. The effects on the transients of the undisturbed modes are also included. The results, which are compared for decoupled and linear quadratic regulator (LQR) control procedures, are shown in tabular form, parametric plots, and as sample time histories of modal-amplitude and control responses. Results of the analysis showed that the effects of model errors on the control-system performance were generally comparable for both control procedures. The effect of mode-slope error was the most serious of all model errors.
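The abstract above compares decoupled and linear quadratic regulator (LQR) procedures. As a hedged illustration of the LQR step only, the sketch below computes a state-feedback gain for a generic, made-up second-order mode; the model matrices and weights are placeholders, not the antenna model or the study's parameters.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Generic LQR gain computation: K = R^{-1} B^T P, with P from the CARE.
def lqr_gain(A, B, Q, R):
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Toy 2-state, 1-input system standing in for one flexible mode (illustrative).
A = np.array([[0.0, 1.0], [-4.0, -0.1]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])
print("LQR gain:", lqr_gain(A, B, Q, R))
```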
Synthesis of robust nonlinear autopilots using differential game theory
NASA Technical Reports Server (NTRS)
Menon, P. K. A.
1991-01-01
A synthesis technique for handling unmodeled disturbances in nonlinear control law synthesis was advanced using differential game theory. Two types of modeling inaccuracies can be included in the formulation. The first is a bias-type error, while the second is the scale-factor-type error in the control variables. The disturbances were assumed to satisfy an integral inequality constraint. Additionally, it was assumed that they act in such a way as to maximize a quadratic performance index. Expressions for optimal control and worst-case disturbance were then obtained using optimal control theory.
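As a hedged sketch of the kind of min-max formulation described above (quadratic performance index, integral-inequality-bounded disturbance), the LaTeX below uses generic weights Q and R, an attenuation parameter γ, and an energy bound d; none of these symbols are taken from the paper itself.

```latex
% Illustrative differential-game (min--max) formulation, not the paper's exact equations:
% the control u minimizes and the disturbance w maximizes a soft-constrained index,
% with w restricted by an integral inequality.
\min_{u}\;\max_{w}\; J
  = \int_{0}^{t_f}\!\left( x^{\mathsf T} Q x + u^{\mathsf T} R u
      - \gamma^{2}\, w^{\mathsf T} w \right)\mathrm{d}t,
\qquad
\int_{0}^{t_f} w^{\mathsf T} w \,\mathrm{d}t \le d^{2}.
```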
Generalized Background Error covariance matrix model (GEN_BE v2.0)
NASA Astrophysics Data System (ADS)
Descombes, G.; Auligné, T.; Vandenberghe, F.; Barker, D. M.
2014-07-01
The specification of state background error statistics is a key component of data assimilation since it affects the impact observations will have on the analysis. In the variational data assimilation approach, applied in geophysical sciences, the dimensions of the background error covariance matrix (B) are usually too large to be explicitly determined and B needs to be modeled. Recent efforts to include new variables in the analysis, such as cloud parameters and chemical species, have required the development of the code to GENerate the Background Errors (GEN_BE) version 2.0 for the Weather Research and Forecasting (WRF) community model, to allow for a simpler, flexible, robust, and community-oriented framework that gathers methods used by meteorological operational centers and researchers. We present the advantages of this new design for the data assimilation community by performing benchmarks and showing some of the new features on data assimilation test cases. As data assimilation for clouds remains a challenge, we present a multivariate approach that includes hydrometeors in the control variables and new correlated errors. In addition, the GEN_BE v2.0 code is employed to diagnose error parameter statistics for chemical species, which shows that it is a tool flexible enough to incorporate new control variables. While the background error statistics generation code was first developed for atmospheric research, the new version (GEN_BE v2.0) can be easily extended to other domains of science and used as a testbed for diagnosing and newly modeling B. Initially developed for variational data assimilation, the model of the B matrix may be useful for variational ensemble hybrid methods as well.
Generalized background error covariance matrix model (GEN_BE v2.0)
NASA Astrophysics Data System (ADS)
Descombes, G.; Auligné, T.; Vandenberghe, F.; Barker, D. M.; Barré, J.
2015-03-01
The specification of state background error statistics is a key component of data assimilation since it affects the impact observations will have on the analysis. In the variational data assimilation approach, applied in geophysical sciences, the dimensions of the background error covariance matrix (B) are usually too large to be explicitly determined and B needs to be modeled. Recent efforts to include new variables in the analysis such as cloud parameters and chemical species have required the development of the code to GENerate the Background Errors (GEN_BE) version 2.0 for the Weather Research and Forecasting (WRF) community model. GEN_BE allows for a simpler, flexible, robust, and community-oriented framework that gathers methods used by some meteorological operational centers and researchers. We present the advantages of this new design for the data assimilation community by performing benchmarks of different modeling of B and showing some of the new features in data assimilation test cases. As data assimilation for clouds remains a challenge, we present a multivariate approach that includes hydrometeors in the control variables and new correlated errors. In addition, the GEN_BE v2.0 code is employed to diagnose error parameter statistics for chemical species, which shows that it is a tool flexible enough to implement new control variables. While the generation of the background errors statistics code was first developed for atmospheric research, the new version (GEN_BE v2.0) can be easily applied to other domains of science and chosen to diagnose and model B. Initially developed for variational data assimilation, the model of the B matrix may be useful for variational ensemble hybrid methods as well.
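One common way to make B tractable, in the spirit of the modeling described above, is to estimate it from forecast differences and retain only the leading eigenmodes. The sketch below is a generic illustration with invented dimensions; it is not the GEN_BE v2.0 implementation.

```python
import numpy as np

# Hedged sketch: low-rank model of a background error covariance B estimated
# from forecast differences, keeping only the leading eigenmodes.
rng = np.random.default_rng(0)
n_state, n_samples, n_modes = 100, 40, 10

forecast_diffs = rng.standard_normal((n_samples, n_state))  # e.g. forecast differences
B_full = np.cov(forecast_diffs, rowvar=False)

# Eigen-decompose and keep the leading modes to reduce the control space.
eigvals, eigvecs = np.linalg.eigh(B_full)
order = np.argsort(eigvals)[::-1][:n_modes]
U = eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))

B_reduced = U @ U.T    # B is approximated as U U^T in the reduced control space
print("retained variance fraction:", eigvals[order].sum() / eigvals.sum())
```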
Variable gain for a wind turbine pitch control
NASA Technical Reports Server (NTRS)
Seidel, R. C.; Birchenough, A. G.
1981-01-01
The gain variation is made in the software logic of the pitch angle controller. The gain level is changed depending upon the level of power error. The control uses a low gain, giving low pitch activity, for the majority of the time. If the power exceeds a ten percent offset above rated, the gain is increased to limit power more effectively. The variable gain control functioned well in tests on the Mod-0 wind turbine.
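A minimal sketch of this gain-switching logic is shown below. The threshold matches the ten percent offset described above, but the gain values, function name, and command scaling are assumptions, not the Mod-0 controller's actual settings.

```python
# Illustrative variable-gain pitch logic: low gain normally, high gain when the
# power error exceeds a 10% offset above rated.
def pitch_rate_command(power_error_frac: float,
                       low_gain: float = 0.5,
                       high_gain: float = 2.0,
                       threshold: float = 0.10) -> float:
    """power_error_frac: (power - rated) / rated; returns a pitch-rate command."""
    gain = high_gain if power_error_frac > threshold else low_gain
    return gain * power_error_frac

print(pitch_rate_command(0.05))   # below threshold -> low gain
print(pitch_rate_command(0.15))   # above the 10% offset -> high gain
```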
Multi-Objective Control Optimization for Greenhouse Environment Using Evolutionary Algorithms
Hu, Haigen; Xu, Lihong; Wei, Ruihua; Zhu, Bingkun
2011-01-01
This paper investigates tuning the Proportional-Integral-Derivative (PID) controller parameters for a greenhouse climate control system using an Evolutionary Algorithm (EA) based on multiple performance measures, such as good static and dynamic performance specifications and smooth control action. A model of the nonlinear thermodynamic relations between the numerous system variables affecting the greenhouse climate is formulated. The proposed tuning scheme is tested for greenhouse climate control by minimizing the integrated time square error (ITSE) and the control increment or rate in a simulation experiment. The results show that by tuning the gain parameters the controllers can achieve good step-response performance, such as small overshoot, fast settling time, short rise time, and small steady-state error. In addition, the scheme can be applied to tuning systems with different properties, such as strong interactions among variables, nonlinearities, and conflicting performance criteria. The results indicate that multi-objective optimization is an effective and promising tuning method for complex greenhouse production. PMID:22163927
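The ITSE criterion used above is straightforward to evaluate for a candidate gain set by simulating a step response and accumulating the time-weighted squared error. The sketch below does this for an assumed first-order plant; the plant, time constants, and gains are placeholders, not the greenhouse model of the paper.

```python
import numpy as np

# Illustrative ITSE evaluation for one candidate PID gain set (an EA would
# minimize this cost over many candidates). Plant and parameters are invented.
def itse_cost(kp, ki, kd, setpoint=1.0, dt=0.01, t_end=20.0, tau=5.0):
    t = np.arange(0.0, t_end, dt)
    y, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for ti in t:
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (u - y) / tau          # simple first-order plant response
        cost += ti * err**2 * dt         # ITSE accumulates t * e(t)^2
        prev_err = err
    return cost

print(itse_cost(kp=4.0, ki=1.0, kd=0.5))
```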
Sex differences in the shoulder joint position sense acuity: a cross-sectional study.
Vafadar, Amir K; Côté, Julie N; Archambault, Philippe S
2015-09-30
Work-related musculoskeletal disorders (WMSDs) are the most expensive form of work disability. Female sex has been considered an individual risk factor for the development of WMSDs, specifically in the neck and shoulder region. One of the factors that might contribute to the higher injury rate in women is possible differences in neuromuscular control. Accordingly, the purpose of this study was to estimate the effect of sex on shoulder joint position sense acuity (as a part of shoulder neuromuscular control) in healthy individuals. Twenty-eight healthy participants, 14 females and 14 males, were recruited for this study. To test position sense acuity, subjects were asked to flex their dominant shoulder to one of three pre-defined angle ranges (low, mid and high ranges) with eyes closed, hold their arm in that position for three seconds, go back to the starting position and then immediately replicate the same joint flexion angle, while the difference between the reproduced and original angle was taken as the measure of position sense error. The errors were measured using a Vicon motion capture system. Subjects reproduced nine positions in total (3 ranges × 3 trials each). Calculation of absolute repositioning error (magnitude of error) showed no significant difference between men and women (p-value ≥ 0.05). However, the analysis of the direction of error (constant error) showed a significant difference between the sexes, as women tended to mostly overestimate the target, whereas men tended to both overestimate and underestimate the target (p-value ≤ 0.01, observed power = 0.79). The results also showed that men had a significantly larger variable error, indicating more variability in their position sense, compared to women (p-value ≤ 0.05, observed power = 0.78). Differences observed in the constant JPS error suggest that men and women might use different neuromuscular control strategies in the upper limb. In addition, the higher JPS variability observed in men might be one of the factors contributing to their lower rate of musculoskeletal disorders compared to women. The results of this study showed that shoulder position sense, as part of the neuromuscular control system, differs between men and women. This finding can help us better understand the reasons behind the higher rate of musculoskeletal disorders in women, especially in working environments.
Maurer, Willi; Jones, Byron; Chen, Ying
2018-05-10
In a 2×2 crossover trial for establishing average bioequivalence (ABE) of a generic agent and a currently marketed drug, the recommended approach to hypothesis testing is the two one-sided test (TOST) procedure, which depends, among other things, on the estimated within-subject variability. The power of this procedure, and therefore the sample size required to achieve a minimum power, depends on having a good estimate of this variability. When there is uncertainty, it is advisable to plan the design in two stages, with an interim sample size reestimation after the first stage, using an interim estimate of the within-subject variability. One method and 3 variations of doing this were proposed by Potvin et al. Using simulation, the operating characteristics, including the empirical type I error rate, of the 4 variations (called Methods A, B, C, and D) were assessed by Potvin et al and Methods B and C were recommended. However, none of these 4 variations formally controls the type I error rate of falsely claiming ABE, even though the amount of inflation produced by Method C was considered acceptable. A major disadvantage of assessing type I error rate inflation using simulation is that unless all possible scenarios for the intended design and analysis are investigated, it is impossible to be sure that the type I error rate is controlled. Here, we propose an alternative, principled method of sample size reestimation that is guaranteed to control the type I error rate at any given significance level. This method uses a new version of the inverse-normal combination of p-values test, in conjunction with standard group sequential techniques, that is more robust to large deviations in initial assumptions regarding the variability of the pharmacokinetic endpoints. The sample size reestimation step is based on significance levels and power requirements that are conditional on the first-stage results. This necessitates a discussion and exploitation of the peculiar properties of the power curve of the TOST testing procedure. We illustrate our approach with an example based on a real ABE study and compare the operating characteristics of our proposed method with those of Method B of Potvin et al. Copyright © 2018 John Wiley & Sons, Ltd.
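For reference, the underlying TOST decision can be sketched as two one-sided t-tests on the log scale against the usual 80%-125% limits. The sketch below uses generic summary statistics (an assumed log-mean difference, its standard error, and degrees of freedom); it illustrates the standard single-stage TOST only, not the paper's two-stage reestimation procedure.

```python
import numpy as np
from scipy import stats

# Hedged sketch of the TOST procedure for average bioequivalence on the log scale.
def tost_abe(diff_log_means, se_diff, df, theta=np.log(1.25), alpha=0.05):
    """Two one-sided tests of H0: |mu_T - mu_R| >= theta on log-transformed data."""
    t_lower = (diff_log_means + theta) / se_diff   # test against -theta
    t_upper = (theta - diff_log_means) / se_diff   # test against +theta
    p_lower = stats.t.sf(t_lower, df)
    p_upper = stats.t.sf(t_upper, df)
    return max(p_lower, p_upper) < alpha           # ABE concluded only if both reject

# Example with assumed summary statistics from a 2x2 crossover.
print(tost_abe(diff_log_means=0.05, se_diff=0.06, df=22))
```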
Nguyen, Hung P.; Dingwell, Jonathan B.
2012-01-01
Determining how the human nervous system contends with neuro-motor noise is vital to understanding how humans achieve accurate goal-directed movements. Experimentally, people learning skilled tasks tend to reduce variability in distal joint movements more than in proximal joint movements. This suggests that they might be imposing greater control over distal joints than proximal joints. However, the reasons for this remain unclear, largely because it is not experimentally possible to directly manipulate either the noise or the control at each joint independently. Therefore, this study used a 2 degree-of-freedom torque driven arm model to determine how different combinations of noise and/or control independently applied at each joint affected the reaching accuracy and the total work required to make the movement. Signal-dependent noise was simultaneously and independently added to the shoulder and elbow torques to induce endpoint errors during planar reaching. Feedback control was then applied, independently and jointly, at each joint to reduce endpoint error due to the added neuromuscular noise. Movement direction and the inertia distribution along the arm were varied to quantify how these biomechanical variations affected the system performance. Endpoint error and total net work were computed as dependent measures. When each joint was independently subjected to noise in the absence of control, endpoint errors were more sensitive to distal (elbow) noise than to proximal (shoulder) noise for nearly all combinations of reaching direction and inertia ratio. The effects of distal noise on endpoint errors were more pronounced when inertia was distributed more toward the forearm. In contrast, the total net work decreased as mass was shifted to the upper arm for reaching movements in all directions. When noise was present at both joints and joint control was implemented, controlling the distal joint alone reduced endpoint errors more than controlling the proximal joint alone for nearly all combinations of reaching direction and inertia ratio. Applying control only at the distal joint was more effective at reducing endpoint errors when more of the mass was more proximally distributed. Likewise, controlling the distal joint alone required less total net work than controlling the proximal joint alone for nearly all combinations of reaching distance and inertia ratio. It is more efficient to reduce endpoint error and energetic cost by selectively applying control to reduce variability in the distal joint than the proximal joint. The reasons for this arise from the biomechanical configuration of the arm itself. PMID:22757504
Impact of Forecast and Model Error Correlations In 4dvar Data Assimilation
NASA Astrophysics Data System (ADS)
Zupanski, M.; Zupanski, D.; Vukicevic, T.; Greenwald, T.; Eis, K.; Vonder Haar, T.
A weak-constraint 4DVAR data assimilation system has been developed at the Cooperative Institute for Research in the Atmosphere (CIRA), Colorado State University. It is based on NCEP's ETA 4DVAR system, and it is fully parallel (MPI coding). CIRA's 4DVAR system is aimed at satellite data assimilation research, with current focus on assimilation of cloudy radiances and microwave satellite measurements. The most important improvement over the previous 4DVAR system is the degree of generality introduced into the new algorithm, namely for applications with different NWP models (e.g., RAMS, WRF, ETA, etc.) and for the choice of control variable. In current applications, the non-hydrostatic RAMS model and its adjoint are used, including all microphysical processes. The control variable includes potential temperature, velocity potential and stream function, vertical velocity, and seven mixing ratios with respect to all water phases. Since the statistics of the microphysical components of the control variable are not well known, special attention will be paid to the impact of the forecast and model (prior) error correlations on the 4DVAR analysis. In particular, the sensitivity of the analysis with respect to decorrelation length will be examined. The prior error covariances are modelled using the compactly supported, space-limited correlations developed at NASA DAO.
Proprioceptive deficit in patients with complete tearing of the anterior cruciate ligament.
Godinho, Pedro; Nicoliche, Eduardo; Cossich, Victor; de Sousa, Eduardo Branco; Velasques, Bruna; Salles, José Inácio
2014-01-01
To investigate the existence of proprioceptive deficits between the injured limb and the uninjured (i.e. contralateral normal) limb, in individuals who suffered complete tearing of the anterior cruciate ligament (ACL), using a strength reproduction test. Sixteen patients with complete tearing of the ACL participated in the study. A voluntary maximum isometric strength test was performed, with reproduction of the muscle strength in the limb with complete tearing of the ACL and the healthy contralateral limb, with the knee flexed at 60°. The target intensity used for the procedure was 20% of the voluntary maximum isometric strength. The proprioceptive performance was determined by means of absolute error, variable error and constant error values. Significant differences were found between the control group and ACL group for the variables of absolute error (p = 0.05) and constant error (p = 0.01). No difference was found in relation to variable error (p = 0.83). Our data corroborate the hypothesis that there is a proprioceptive deficit in subjects with complete tearing of the ACL in the injured limb, in comparison with the uninjured limb, during evaluation of the sense of strength. This deficit can be explained in terms of partial or total loss of the mechanoreceptors of the ACL.
Candela, L.; Olea, R.A.; Custodio, E.
1988-01-01
Groundwater quality observation networks are examples of discontinuous sampling on variables presenting spatial continuity and highly skewed frequency distributions. Anywhere in the aquifer, lognormal kriging provides estimates of the variable being sampled and a standard error of the estimate. The average and the maximum standard error within the network can be used to dynamically improve the network sampling efficiency or find a design able to assure a given reliability level. The approach does not require the formulation of any physical model for the aquifer or any actual sampling of hypothetical configurations. A case study is presented using the network monitoring salty water intrusion into the Llobregat delta confined aquifer, Barcelona, Spain. The variable chloride concentration used to trace the intrusion exhibits sudden changes within short distances which make the standard error fairly invariable to changes in sampling pattern and to substantial fluctuations in the number of wells. © 1988.
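The estimate-plus-standard-error output that drives the network design above can be illustrated with a minimal ordinary kriging system. The exponential variogram parameters, well coordinates, and chloride values below are invented for the example and are not the Llobregat delta data.

```python
import numpy as np

# Hedged sketch of ordinary kriging at one location, returning the estimate
# and the kriging standard error (illustrative data and variogram).
def variogram(h, sill=1.0, rng_=2000.0, nugget=0.05):
    return nugget + sill * (1.0 - np.exp(-3.0 * h / rng_))

def ordinary_krige(xy, z, target):
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    np.fill_diagonal(A[:n, :n], 0.0)     # gamma(0) = 0 on the diagonal
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - target, axis=1))
    w = np.linalg.solve(A, b)            # kriging weights plus Lagrange multiplier
    estimate = w[:n] @ z
    variance = w @ b                     # kriging variance (includes Lagrange term)
    return estimate, np.sqrt(max(variance, 0.0))

xy = np.array([[0., 0.], [1000., 0.], [0., 1500.], [1200., 900.]])  # well coords (m)
z_log = np.log(np.array([250., 900., 120., 600.]))                  # assumed log chloride
est, se = ordinary_krige(xy, z_log, np.array([500., 500.]))
print(f"log-estimate={est:.2f}, standard error={se:.2f}")
```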
Williams, Larry J; O'Boyle, Ernest H
2015-09-01
A persistent concern in the management and applied psychology literature is the effect of common method variance on observed relations among variables. Recent work (i.e., Richardson, Simmering, & Sturman, 2009) evaluated 3 analytical approaches to controlling for common method variance, including the confirmatory factor analysis (CFA) marker technique. Their findings indicated significant problems with this technique, especially with nonideal marker variables (those with theoretical relations with substantive variables). Based on their simulation results, Richardson et al. concluded that not correcting for method variance provides more accurate estimates than using the CFA marker technique. We reexamined the effects of using marker variables in a simulation study and found the degree of error in estimates of a substantive factor correlation was relatively small in most cases, and much smaller than the error associated with making no correction. Further, in instances in which the error was large, the correlations between the marker and substantive scales were higher than those found in organizational research with marker variables. We conclude that in most practical settings, the CFA marker technique yields parameter estimates close to their true values, and the criticisms made by Richardson et al. are overstated. (c) 2015 APA, all rights reserved.
Emken, Jeremy L; Benitez, Raul; Reinkensmeyer, David J
2007-03-28
A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. The assist-as-needed algorithm proposed here can limit error during the learning of a dynamic motor task. The algorithm encourages learning by decreasing its assistance as a function of the ongoing progression of movement error. This type of algorithm is well suited for helping people learn dynamic tasks for which large kinematic errors are dangerous or discouraging, and thus may prove useful for robot-assisted movement training of walking or reaching following neurologic injury.
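The assist-as-needed law described above reduces to a simple recursion: assistance decays through a forgetting factor and grows only with errors larger than the natural movement variability. The parameter values and deadband below are illustrative assumptions, not the values identified in the study.

```python
# Hedged sketch of an error-based assist-as-needed update with a forgetting
# factor and an error weighting (deadband) that ignores natural variability.
def update_assistance(u_prev: float, error: float,
                      forget: float = 0.9, gain: float = 1.5,
                      deadband: float = 0.01) -> float:
    """Next-step robot assistance from previous assistance and current error."""
    weighted_error = 0.0 if abs(error) < deadband else error
    return forget * u_prev + gain * weighted_error

u = 0.0
for e in [0.05, 0.03, 0.015, 0.008, 0.005]:   # errors shrinking as the subject learns
    u = update_assistance(u, e)
    print(f"error={e:.3f}  assistance={u:.3f}")  # assistance fades toward zero
```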
Gu, Xiaosi; Kirk, Ulrich; Lohrenz, Terry M; Montague, P Read
2014-08-01
Computational models of reward processing suggest that foregone or fictive outcomes serve as important information sources for learning and augment those generated by experienced rewards (e.g. reward prediction errors). An outstanding question is how these learning signals interact with top-down cognitive influences, such as cognitive reappraisal strategies. Using a sequential investment task and functional magnetic resonance imaging, we show that the reappraisal strategy selectively attenuates the influence of fictive, but not reward prediction error signals on investment behavior; such behavioral effect is accompanied by changes in neural activity and connectivity in the anterior insular cortex, a brain region thought to integrate subjective feelings with high-order cognition. Furthermore, individuals differ in the extent to which their behaviors are driven by fictive errors versus reward prediction errors, and the reappraisal strategy interacts with such individual differences; a finding also accompanied by distinct underlying neural mechanisms. These findings suggest that the variable interaction of cognitive strategies with two important classes of computational learning signals (fictive, reward prediction error) represent one contributing substrate for the variable capacity of individuals to control their behavior based on foregone rewards. These findings also expose important possibilities for understanding the lack of control in addiction based on possibly foregone rewarding outcomes. Copyright © 2013 The Authors. Human Brain Mapping Published by Wiley Periodicals, Inc.
Paul Sullins, D
2017-12-01
Because of classification errors reported by the National Center for Health Statistics, an estimated 42% of the same-sex married partners in the sample for this study are misclassified different-sex married partners, thus calling into question findings regarding same-sex married parents. Including biological parentage as a control variable suppresses same-sex/different-sex differences, thus obscuring the data error. Parentage is not appropriate as a control because it correlates nearly perfectly (+.97, gamma) with the same-sex/different-sex distinction and is invariant for the category of joint biological parents.
Error response test system and method using test mask variable
NASA Technical Reports Server (NTRS)
Gender, Thomas K. (Inventor)
2006-01-01
An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.
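The test-mask idea above can be sketched as a flag the test system sets to divert a normal code path into an error path. The mask names, sensor function, and injected fault below are hypothetical; this is not the patented implementation.

```python
# Illustrative sketch of a test mask variable added to the application under
# test: normally inert, but set by the test system to inject an error.
TEST_MASK_NONE = 0x00
TEST_MASK_SENSOR_FAULT = 0x01

test_mask = TEST_MASK_NONE          # normal operation: no error injected

def read_sensor(raw_value: float) -> float:
    """Application-under-test code path with the test mask hook."""
    if test_mask & TEST_MASK_SENSOR_FAULT:
        return float("nan")          # injected error the application must handle
    return raw_value

print(read_sensor(3.2))              # normal operation
test_mask = TEST_MASK_SENSOR_FAULT   # test system injects the error
print(read_sensor(3.2))              # error response can now be monitored
```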
Grane, Venke Arntsberg; Endestad, Tor; Pinto, Arnfrid Farbu; Solbakk, Anne-Kristin
2014-01-01
We investigated performance-derived measures of executive control, and their relationship with self- and informant reported executive functions in everyday life, in treatment-naive adults with newly diagnosed Attention Deficit Hyperactivity Disorder (ADHD; n = 36) and in healthy controls (n = 35). Sustained attentional control and response inhibition were examined with the Test of Variables of Attention (T.O.V.A.). Delayed responses, increased reaction time variability, and higher omission error rate to Go signals in ADHD patients relative to controls indicated fluctuating levels of attention in the patients. Furthermore, an increment in NoGo commission errors when Go stimuli increased relative to NoGo stimuli suggests reduced inhibition of task-irrelevant stimuli in conditions demanding frequent responding. The ADHD group reported significantly more cognitive and behavioral executive problems than the control group on the Behavior Rating Inventory of Executive Function-Adult Version (BRIEF-A). There were overall not strong associations between task performance and ratings of everyday executive function. However, for the ADHD group, T.O.V.A. omission errors predicted self-reported difficulties on the Organization of Materials scale, and commission errors predicted informant reported difficulties on the same scale. Although ADHD patients endorsed more symptoms of depression and anxiety on the Achenbach System of Empirically Based Assessment (ASEBA) than controls, ASEBA scores were not significantly associated with T.O.V.A. performance scores. Altogether, the results indicate multifaceted alteration of attentional control in adult ADHD, and accompanying subjective difficulties with several aspects of executive function in everyday living. The relationships between the two sets of data were modest, indicating that the measures represent non-redundant features of adult ADHD. PMID:25545156
Temporal Prediction Errors Affect Short-Term Memory Scanning Response Time.
Limongi, Roberto; Silva, Angélica M
2016-11-01
The Sternberg short-term memory scanning task has been used to unveil cognitive operations involved in time perception. Participants produce time intervals during the task, and the researcher explores how task performance affects interval production - where time estimation error is the dependent variable of interest. The perspective of predictive behavior regards time estimation error as a temporal prediction error (PE), an independent variable that controls cognition, behavior, and learning. Based on this perspective, we investigated whether temporal PEs affect short-term memory scanning. Participants performed temporal predictions while they maintained information in memory. Model inference revealed that PEs affected memory scanning response time independently of the memory-set size effect. We discuss the results within the context of formal and mechanistic models of short-term memory scanning and predictive coding, a Bayes-based theory of brain function. We state the hypothesis that our finding could be associated with weak frontostriatal connections and weak striatal activity.
Design of a self-adaptive fuzzy PID controller for piezoelectric ceramics micro-displacement system
NASA Astrophysics Data System (ADS)
Zhang, Shuang; Zhong, Yuning; Xu, Zhongbao
2008-12-01
In order to improve the control precision of a piezoelectric ceramics (PZT) micro-displacement system, a self-adaptive fuzzy Proportional-Integral-Derivative (PID) controller is designed that combines the traditional digital PID controller with fuzzy control. The algorithm builds a fuzzy control rule table from the fuzzy control rules and fuzzy reasoning; through this table, the PID parameters can be adjusted online during real-time control. Furthermore, automatic selection of the control mode is achieved according to the magnitude of the error. The controller combines the good dynamic capability of fuzzy control with the high steady-state precision of PID control, applying fuzzy control and PID control in different segments of time. In the initial and middle stages of the system transition process, that is, when the error is larger than a set threshold, fuzzy control is used to adjust the control variable, making full use of the fast response of fuzzy control. When the error is smaller than the threshold and the system is approaching steady state, PID control is adopted to eliminate the static error. The problems of PZT in the field of precise positioning are thereby overcome. The experimental results show that the approach is correct and practicable.
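The segmented strategy above can be sketched as a controller that switches on an error threshold: a high-gain action standing in for the fuzzy stage when the error is large, and PID near steady state. Gains, threshold, and time step are illustrative assumptions, and the "fuzzy" branch is deliberately simplified to a proportional surrogate, not a real rule-table inference.

```python
# Minimal sketch of error-threshold switching between a fuzzy-like high-gain
# action and PID control (parameters are invented for the example).
class SegmentedController:
    def __init__(self, threshold=0.5, kp=2.0, ki=0.8, kd=0.01, k_fuzzy=4.0, dt=0.01):
        self.threshold, self.kp, self.ki, self.kd = threshold, kp, ki, kd
        self.k_fuzzy, self.dt = k_fuzzy, dt
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, error: float) -> float:
        if abs(error) > self.threshold:            # transition stage: fast, fuzzy-style action
            u = self.k_fuzzy * error
        else:                                      # near steady state: PID removes static error
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            u = self.kp * error + self.ki * self.integral + self.kd * derivative
        self.prev_error = error
        return u

ctrl = SegmentedController()
for e in [1.2, 0.8, 0.4, 0.1, 0.02]:
    print(f"error={e:.2f}  control={ctrl.step(e):.3f}")
```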
Hsieh, Shulan; Li, Tzu-Hsien; Tsai, Ling-Ling
2010-04-01
To examine whether monetary incentives attenuate the negative effects of sleep deprivation on cognitive performance in a flanker task that requires higher-level cognitive-control processes, including error monitoring. Twenty-four healthy adults aged 18 to 23 years were randomly divided into 2 subject groups: one received and the other did not receive monetary incentives for performance accuracy. Both subject groups performed a flanker task and underwent electroencephalographic recordings for event-related brain potentials after normal sleep and after 1 night of total sleep deprivation in a within-subject, counterbalanced, repeated-measures study design. Monetary incentives significantly enhanced the response accuracy and reaction time variability under both normal sleep and sleep-deprived conditions, and they reduced the effects of sleep deprivation on the subjective effort level, the amplitude of the error-related negativity (an error-related event-related potential component), and the latency of the P300 (an event-related potential variable related to attention processes). However, monetary incentives could not attenuate the effects of sleep deprivation on any measures of behavior performance, such as the response accuracy, reaction time variability, or posterror accuracy adjustments; nor could they reduce the effects of sleep deprivation on the amplitude of the Pe, another error-related event-related potential component. This study shows that motivation incentives selectively reduce the effects of total sleep deprivation on some brain activities, but they cannot attenuate the effects of sleep deprivation on performance decrements in tasks that require high-level cognitive-control processes. Thus, monetary incentives and sleep deprivation may act through both common and different mechanisms to affect cognitive performance.
Accounting for measurement error: a critical but often overlooked process.
Harris, Edward F; Smith, Richard N
2009-12-01
Due to instrument imprecision and human inconsistencies, measurements are not free of error. Technical error of measurement (TEM) is the variability encountered between dimensions when the same specimens are measured at multiple sessions. A goal of a data collection regimen is to minimise TEM. The few studies that actually quantify TEM, regardless of discipline, report that it is substantial and can affect results and inferences. This paper reviews some statistical approaches for identifying and controlling TEM. Statistically, TEM is part of the residual ('unexplained') variance in a statistical test, so accounting for TEM, which requires repeated measurements, enhances the chances of finding a statistically significant difference if one exists. The aim of this paper was to review and discuss common statistical designs relating to types of error and statistical approaches to error accountability. This paper addresses issues of landmark location, validity, technical and systematic error, analysis of variance, scaled measures and correlation coefficients in order to guide the reader towards correct identification of true experimental differences. Researchers commonly infer characteristics about populations from comparatively restricted study samples. Most inferences are statistical and, aside from concerns about adequate accounting for known sources of variation with the research design, an important source of variability is measurement error. Variability in locating landmarks that define variables is obvious in odontometrics, cephalometrics and anthropometry, but the same concerns about measurement accuracy and precision extend to all disciplines. With increasing accessibility to computer-assisted methods of data collection, the ease of incorporating repeated measures into statistical designs has improved. Accounting for this technical source of variation increases the chance of finding biologically true differences when they exist.
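One common way to quantify the repeated-measurement variability discussed above is Dahlberg's formula, TEM = sqrt(sum(d^2) / 2N), applied to duplicate sessions. The measurement values in the sketch are invented for the example; the formula is a standard choice rather than one prescribed by this particular paper.

```python
import numpy as np

# Hedged sketch: technical error of measurement (TEM) from duplicate sessions.
session1 = np.array([31.2, 28.5, 30.1, 29.8, 32.0])   # mm, first measurement
session2 = np.array([31.0, 28.9, 30.4, 29.5, 31.8])   # mm, repeat measurement

d = session1 - session2
tem = np.sqrt(np.sum(d ** 2) / (2 * len(d)))                      # Dahlberg's formula
relative_tem = 100 * tem / np.mean(np.concatenate([session1, session2]))
print(f"TEM = {tem:.3f} mm  ({relative_tem:.2f}% of the mean)")
```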
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng
2006-12-01
An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and squared error variation, denoted by (e², Δe²), into a forgetting factor λ. For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using a fuzzy-inference-controlled step size μ. This receiver is capable of providing both fast convergence/tracking capability and small steady-state misadjustment compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform, respectively, other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms by at least 3 dB and 1.5 dB in bit-error-rate (BER) performance for multipath fading channels.
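A minimal variable step-size LMS sketch is shown below. The fuzzy inference stage is replaced by a crude squared-error rule for the step size, and the toy channel-identification setup is an assumption; this is not the receiver structure of the paper.

```python
import numpy as np

# Hedged sketch of a variable step-size LMS adaptive filter: the step size
# grows with the squared error (a stand-in for the fuzzy inference stage).
rng = np.random.default_rng(0)
n, taps = 2000, 8
x = rng.standard_normal(n)                       # toy transmitted signal
h = np.array([1.0, 0.5, -0.2])                   # assumed multipath channel
d = np.convolve(x, h)[:n] + 0.05 * rng.standard_normal(n)

w = np.zeros(taps)
mu, mu_min, mu_max = 0.05, 0.005, 0.15
for k in range(taps, n):
    u = x[k - taps + 1:k + 1][::-1]              # regressor [x[k], ..., x[k-taps+1]]
    e = d[k] - w @ u
    # Larger squared error -> larger (bounded) step size, smaller near convergence.
    mu = np.clip(0.9 * mu + 0.5 * e * e, mu_min, mu_max)
    w = w + mu * e * u
print("final step size:", round(float(mu), 4), "leading taps:", np.round(w[:3], 3))
```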
Development of the functional simulator for the Galileo attitude and articulation control system
NASA Technical Reports Server (NTRS)
Namiri, M. K.
1983-01-01
A simulation program for verifying and checking the performance of the Galileo Spacecraft's Attitude and Articulation Control Subsystem's (AACS) flight software is discussed. The program, which is called Functional Simulator (FUNSIM), provides a simple method of interfacing user-supplied mathematical models coded in FORTRAN which describe spacecraft dynamics, sensors, and actuators; this is done with the AACS flight software, coded in HAL/S (High-level Advanced Language/Shuttle). It is thus able to simulate the AACS flight software accurately to the HAL/S statement level in the environment of a mainframe computer system. FUNSIM also has a command and data subsystem (CDS) simulator. It is noted that the input/output data and timing are simulated with the same precision as the flight microprocessor. FUNSIM uses a variable stepsize numerical integration algorithm complete with individual error bound control on the state variables to solve the equations of motion. The program has been designed to provide both line printer and matrix dot plotting of the variables requested in the run section and to provide error diagnostics.
Hens, Koen; Berth, Mario; Armbruster, Dave; Westgard, Sten
2014-07-01
Six Sigma metrics were used to assess the analytical quality of automated clinical chemistry and immunoassay tests in a large Belgian clinical laboratory and to explore the importance of the source used for estimation of the allowable total error. Clinical laboratories are continually challenged to maintain analytical quality. However, it is difficult to measure assay quality objectively and quantitatively. The Sigma metric is a single number that estimates quality based on the traditional parameters used in the clinical laboratory: allowable total error (TEa), precision, and bias. In this study, Sigma metrics were calculated for 41 clinical chemistry assays for serum and urine on five ARCHITECT c16000 chemistry analyzers. Controls at two analyte concentrations were tested and Sigma metrics were calculated using three different TEa targets (Ricos biological variability, CLIA, and RiliBÄK). Sigma metrics varied with analyte concentration, the TEa target, and across analyzers. Sigma values identified those assays that are analytically robust and require minimal quality control rules and those that exhibit more variability and require more complex rules. The analyzer-to-analyzer variability was assessed on the basis of Sigma metrics. Six Sigma is a more efficient way to control quality, but the lack of TEa targets for many analytes and the sometimes inconsistent TEa targets from different sources are important variables for the interpretation and the application of Sigma metrics in a routine clinical laboratory. Sigma metrics are a valuable means of comparing the analytical quality of two or more analyzers to ensure the comparability of patient test results.
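The Sigma metric itself is typically computed as (TEa - |bias|) / CV with all terms in percent, which is why the choice of TEa source matters so much. The assay bias, CV, and TEa values below are invented to illustrate how the same assay can score differently against different targets.

```python
# Hedged sketch of the Sigma metric calculation used in laboratory QC.
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    return (tea_pct - abs(bias_pct)) / cv_pct

# Same hypothetical assay evaluated against three different TEa sources.
for source, tea in [("biological variability", 6.9), ("CLIA", 10.0), ("RiliBÄK", 11.0)]:
    print(f"{source:>22}: sigma = {sigma_metric(tea, bias_pct=1.2, cv_pct=1.8):.1f}")
```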
Sliding mode control for Mars entry based on extended state observer
NASA Astrophysics Data System (ADS)
Lu, Kunfeng; Xia, Yuanqing; Shen, Ganghui; Yu, Chunmei; Zhou, Liuyu; Zhang, Lijun
2017-11-01
This paper addresses high-precision Mars entry guidance and control approach via sliding mode control (SMC) and Extended State Observer (ESO). First, differential flatness (DF) approach is applied to the dynamic equations of the entry vehicle to represent the state variables more conveniently. Then, the presented SMC law can guarantee the property of finite-time convergence of tracking error, which requires no information on high uncertainties that are estimated by ESO, and the rigorous proof of tracking error convergence is given. Finally, Monte Carlo simulation results are presented to demonstrate the effectiveness of the suggested approach.
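The role of the ESO above is to estimate the lumped uncertainty as an extra state so the controller can cancel it. The sketch below is a generic linear ESO for a toy second-order plant with an unknown constant disturbance; the bandwidth parameterization, gains, and plant are illustrative assumptions, not the Mars entry model.

```python
import numpy as np

# Hedged sketch of a linear extended state observer (ESO): the disturbance is
# estimated as the third state z3 and could be cancelled in a feedback law.
dt, w0 = 0.001, 30.0                     # integration step and observer bandwidth
l1, l2, l3 = 3 * w0, 3 * w0**2, w0**3    # standard bandwidth-based ESO gains
z = np.zeros(3)                          # [position est., velocity est., disturbance est.]

def eso_step(z, y_meas, u, b0=1.0):
    e = y_meas - z[0]
    z1_dot = z[1] + l1 * e
    z2_dot = z[2] + b0 * u + l2 * e
    z3_dot = l3 * e
    return z + dt * np.array([z1_dot, z2_dot, z3_dot])

# Toy plant x1' = x2, x2' = u + d with an unknown constant disturbance d = 2.
x = np.zeros(2)
for _ in range(5000):
    u, disturbance = 0.0, 2.0
    x = x + dt * np.array([x[1], u + disturbance])
    z = eso_step(z, x[0], u)
print("estimated disturbance:", round(float(z[2]), 3))   # should approach 2.0
```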
A Strategy to Use Soft Data Effectively in Randomized Controlled Clinical Trials.
ERIC Educational Resources Information Center
Kraemer, Helena Chmura; Thiemann, Sue
1989-01-01
Sees soft data, measures having substantial intrasubject variability due to errors of measurement or response inconsistency, as important measures of response in randomized clinical trials. Shows that using intensive design and slope of response on time as outcome measure maximizes sample retention and decreases within-group variability, thus…
Time-division multiplexer uses digital gates
NASA Technical Reports Server (NTRS)
Myers, C. E.; Vreeland, A. E.
1977-01-01
Device eliminates errors caused by analog gates in multiplexing a large number of channels at high frequency. System was designed for use in aerospace work to multiplex signals for monitoring such variables as fuel consumption, pressure, temperature, strain, and stress. Circuit may be useful in monitoring variables in process control and medicine as well.
Lexical and phonological variability in preschool children with speech sound disorder.
Macrae, Toby; Tyler, Ann A; Lewis, Kerry E
2014-02-01
The authors of this study examined relationships between measures of word and speech error variability and between these and other speech and language measures in preschool children with speech sound disorder (SSD). In this correlational study, 18 preschool children with SSD, age-appropriate receptive vocabulary, and normal oral motor functioning and hearing were assessed across 2 sessions. Experimental measures included word and speech error variability, receptive vocabulary, nonword repetition (NWR), and expressive language. Pearson product–moment correlation coefficients were calculated among the experimental measures. The correlation between word and speech error variability was slight and nonsignificant. The correlation between word variability and receptive vocabulary was moderate and negative, although nonsignificant. High word variability was associated with small receptive vocabularies. The correlations between speech error variability and NWR and between speech error variability and the mean length of children's utterances were moderate and negative, although both were nonsignificant. High speech error variability was associated with poor NWR and language scores. High word variability may reflect unstable lexical representations, whereas high speech error variability may reflect indistinct phonological representations. Preschool children with SSD who show abnormally high levels of different types of speech variability may require slightly different approaches to intervention.
Floré, Katelijne M J; Fiers, Tom; Delanghe, Joris R
2008-01-01
In recent years, a number of point-of-care testing (POCT) glucometers have been introduced on the market. We investigated the analytical variability (lot-to-lot variation, calibration error, inter-instrument and inter-operator variability) of glucose POCT systems in a university hospital environment and compared these results with the analytical needs required for tight glucose monitoring. The reference hexokinase method was compared to different POCT systems based on glucose oxidase (blood gas instruments) or glucose dehydrogenase (handheld glucometers). Based upon daily internal quality control data, total errors were calculated for the various glucose methods and the analytical variability of the glucometers was estimated. The total error of the glucometers exceeded by far the desirable analytical specifications (based on a biological variability model). Lot-to-lot variation, inter-instrument variation and inter-operator variability contributed approximately equally to total variance. Because the distribution of hematocrit values in a hospital environment is broad, converting blood glucose into plasma values using a fixed factor further increases variance. The percentage of outliers exceeded the ISO 15197 criteria in a broad glucose concentration range. Total analytical variation of handheld glucometers is larger than expected. Clinicians should be aware that the variability of glucose measurements obtained by blood gas instruments is lower than that of results obtained with handheld glucometers on capillary blood.
A Comparison of Exposure Control Procedures in CATS Using the GPC Model
ERIC Educational Resources Information Center
Leroux, Audrey J.; Dodd, Barbara G.
2016-01-01
The current study compares the progressive-restricted standard error (PR-SE) exposure control method with the Sympson-Hetter, randomesque, and no exposure control (maximum information) procedures using the generalized partial credit model with fixed- and variable-length CATs and two item pools. The PR-SE method administered the entire item pool…
The impact of 14-nm photomask uncertainties on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-04-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine, with a simulation sensitivity study, the impact of errors in the representation of photomask properties, including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, while changes in the other variables are postulated, highlighting the need for improved metrology and awareness.
Safety margins in older adults increase with improved control of a dynamic object
Hasson, Christopher J.; Sternad, Dagmar
2014-01-01
Older adults face decreasing motor capabilities due to pervasive neuromuscular degradations. As a consequence, errors in movement control increase. Thus, older individuals should maintain larger safety margins than younger adults. While this has been shown for object manipulation tasks, several reports on whole-body activities, such as posture and locomotion, demonstrate age-related reductions in safety margins. This is despite increased costs for control errors, such as a fall. We posit that this paradox could be explained by the dynamic challenge presented by the body or also an external object, and that age-related reductions in safety margins are in part due to a decreased ability to control dynamics. To test this conjecture we used a virtual ball-in-cup task that had challenging dynamics, yet afforded an explicit rendering of the physics and safety margin. The hypotheses were: (1) When manipulating an object with challenging dynamics, older adults have smaller safety margins than younger adults. (2) Older adults increase their safety margins with practice. Nine young and 10 healthy older adults practiced moving the virtual ball-in-cup to a target location in exactly 2 s. The accuracy and precision of the timing error quantified skill, and the ball energy relative to an escape threshold quantified the safety margin. Compared to the young adults, older adults had increased timing errors, greater variability, and decreased safety margins. With practice, both young and older adults improved their ability to control the object with decreased timing errors and variability, and increased their safety margins. These results suggest that safety margins are related to the ability to control dynamics, and may explain why in tasks with simple dynamics older adults use adequate safety margins, but in more complex tasks, safety margins may be inadequate. Further, the results indicate that task-specific training may improve safety margins in older adults. PMID:25071566
Estimation of the probability of success in petroleum exploration
Davis, J.C.
1977-01-01
A probabilistic model for oil exploration can be developed by assessing the conditional relationship between perceived geologic variables and the subsequent discovery of petroleum. Such a model includes two probabilistic components, the first reflecting the association between a geologic condition (structural closure, for example) and the occurrence of oil, and the second reflecting the uncertainty associated with the estimation of geologic variables in areas of limited control. Estimates of the conditional relationship between geologic variables and subsequent production can be found by analyzing the exploration history of a "training area" judged to be geologically similar to the exploration area. The geologic variables are assessed over the training area using an historical subset of the available data, whose density corresponds to the present control density in the exploration area. The success or failure of wells drilled in the training area subsequent to the time corresponding to the historical subset provides empirical estimates of the probability of success conditional upon geology. Uncertainty in perception of geological conditions may be estimated from the distribution of errors made in geologic assessment using the historical subset of control wells. These errors may be expressed as a linear function of distance from available control. Alternatively, the uncertainty may be found by calculating the semivariogram of the geologic variables used in the analysis: the two procedures will yield approximately equivalent results. The empirical probability functions may then be transferred to the exploration area and used to estimate the likelihood of success of specific exploration plays. These estimates will reflect both the conditional relationship between the geological variables used to guide exploration and the uncertainty resulting from lack of control. The technique is illustrated with case histories from the mid-Continent area of the U.S.A. © 1977 Plenum Publishing Corp.
NASA Astrophysics Data System (ADS)
Kim, Shin-Woo; Noh, Nam-Kyu; Lim, Gyu-Ho
2013-04-01
This study presents the introduction of retrospective optimal interpolation (ROI) and its application with the Weather Research and Forecasting (WRF) model. Song et al. (2009) proposed the ROI method, an optimal interpolation (OI) scheme that gradually assimilates observations over the analysis window to obtain a variance-minimum estimate of the atmospheric state at the initial time of the analysis window. The assimilation window of the ROI algorithm is gradually lengthened, similar to that of quasi-static variational assimilation (QSVA; Pires et al., 1996). Unlike the QSVA method, however, the ROI method assimilates data at post-analysis times using a perturbation method (Verlaan and Heemink, 1997) without an adjoint model. Song and Lim (2011) improved this method by incorporating eigen-decomposition and covariance inflation. The computational cost of ROI can be reduced by the eigen-decomposition of the background error covariance, which concentrates the ROI analyses on the error variances of the governing eigenmodes by transforming the control variables into eigenspace. A total energy norm is used for the normalization of each control variable. In this study, the ROI method is applied to the WRF model in an Observing System Simulation Experiment (OSSE) to validate the algorithm and investigate its capability. Horizontal wind, pressure, potential temperature, and water vapor mixing ratio are used as control variables and observations. First, a single-profile assimilation experiment is performed. Subsequently, OSSEs are performed using a virtual observing system consisting of synop, ship, and sonde data. The difference between forecast errors with and without assimilation grows with time, indicating that assimilation by ROI improves the forecast. The characteristics and strengths/weaknesses of the ROI method are also investigated by conducting experiments with the 3D-Var (three-dimensional variational) and 4D-Var (four-dimensional variational) methods. At the initial time, ROI produces a larger forecast error than 4D-Var. However, the difference between the two results decreases gradually with time, and ROI shows a clearly better result (i.e., smaller forecast error) than 4D-Var after the 9-hour forecast.
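For readers unfamiliar with optimal interpolation, the sketch below shows a single OI analysis step in Python. It is a minimal illustration only, omitting the ROI-specific machinery (growing assimilation window, eigen-decomposition of the background error covariance, covariance inflation), and the matrices and values are made-up toy inputs.

```python
import numpy as np

def oi_update(x_b, B, H, R, y):
    """One optimal-interpolation analysis step.
    x_b: background state; B: background error covariance;
    H: observation operator; R: observation error covariance; y: observations."""
    innovation = y - H @ x_b                   # observation-minus-background
    S = H @ B @ H.T + R                        # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)             # gain matrix
    return x_b + K @ innovation                # analysis state

# Toy two-variable example with one observation (illustrative values only)
x_b = np.array([1.0, 0.5])
B = np.array([[1.0, 0.2], [0.2, 0.8]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
y = np.array([1.4])
print(oi_update(x_b, B, H, R, y))
```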
Age-Related Changes in Bimanual Instrument Playing with Rhythmic Cueing
Kim, Soo Ji; Cho, Sung-Rae; Yoo, Ga Eul
2017-01-01
Deficits in bimanual coordination of older adults have been demonstrated to significantly limit their functioning in daily life. As a bimanual sensorimotor task, instrument playing has great potential for motor and cognitive training in advanced age. While the process of matching a person's repetitive movements to auditory rhythmic cueing during instrument playing was documented to involve motor and attentional control, investigation into whether the level of cognitive functioning influences the ability to rhythmically coordinate movement to an external beat in older populations is relatively limited. Therefore, the current study aimed to examine how timing accuracy during bimanual instrument playing with rhythmic cueing differed depending on the degree of participants' cognitive aging. Twenty-one young adults, 20 healthy older adults, and 17 older adults with mild dementia participated in this study. Each participant tapped an electronic drum in time to the rhythmic cueing provided, using both hands simultaneously and in alternation. During bimanual instrument playing with rhythmic cueing, the mean and variability of synchronization errors were measured and compared across the groups and the tempo of cueing during each type of tapping task. Correlations of these timing parameters with cognitive measures were also analyzed. The results showed that the group factor resulted in significant differences in the synchronization error-related parameters. During bimanual tapping tasks, cognitive decline resulted in differences in synchronization errors between younger adults and older adults with mild dementia. Also, in terms of variability of synchronization errors, younger adults showed significant differences in maintaining timing performance from older adults with and without mild dementia, which may be attributed to decreased processing speed for bimanual coordination due to aging. Significant correlations were observed between variability of synchronization errors and performance on cognitive tasks involving executive control and cognitive flexibility when bimanual coordination was required in response to external timing cues at adjusted tempi. Also, significant correlations with cognitive measures were more prevalent in variability of synchronization errors during alternating tapping compared to simultaneous tapping. The current study supports the view that bimanual tapping may be predictive of cognitive processing in older adults. Also, tempo and type of movement required for instrument playing both involve cognitive and motor loads at different levels, and such variables could be important factors for determining the complexity of the task and the involved task requirements for interventions using instrument playing. PMID:29085309
Analysis of turbojet-engine controls for afterburning starting
NASA Technical Reports Server (NTRS)
Phillips, W. E., Jr.
1956-01-01
A simulation procedure is developed for studying the effects of an afterburner start on a controlled turbojet engine. The afterburner start is represented by introducing a step decrease in the effective exhaust-nozzle area, after which the control returns the controlled engine variables to their initial values. The degree and speed with which the control acts are a measure of the effectiveness of the particular control system. Data are presented from five systems investigated using an electronic analog computer and the developed simulation procedure. These systems are compared with respect to steady-state errors, speed of response, and transient deviations of the system variables.
NASA Astrophysics Data System (ADS)
Burman, Erik; Hansbo, Peter; Larson, Mats G.
2018-03-01
Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems developed in the setting of convection-dominated convection-diffusion problems, can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson’s equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and error in the measurements.
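For orientation, a schematic statement of the Tikhonov-regularized optimal control problem of the kind discussed above is given below; the particular choice of norms, observation set ω, and state equation (Poisson with homogeneous Dirichlet data) are illustrative assumptions, not the exact formulation of the paper.

```latex
\min_{q,\,u}\; J_\alpha(q,u) \;=\; \tfrac{1}{2}\,\lVert u - \tilde u \rVert_{L^2(\omega)}^{2}
\;+\; \tfrac{\alpha}{2}\,\lVert q \rVert_{L^2(\Omega)}^{2}
\quad\text{subject to}\quad -\Delta u = q \ \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega,
```

where \(\tilde u\) denotes the measured data on the observation set \(\omega\) and \(\alpha > 0\) is the regularization weight; the stabilized finite element approach discussed above instead adds mesh-dependent stabilization terms at the discrete level rather than this \(\alpha\)-weighted penalty.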
Emken, Jeremy L; Benitez, Raul; Reinkensmeyer, David J
2007-01-01
Background A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Methods Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. Results We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. Conclusion The assist-as-needed algorithm proposed here can limit error during the learning of a dynamic motor task. The algorithm encourages learning by decreasing its assistance as a function of the ongoing progression of movement error. This type of algorithm is well suited for helping people learn dynamic tasks for which large kinematic errors are dangerous or discouraging, and thus may prove useful for robot-assisted movement training of walking or reaching following neurologic injury. PMID:17391527
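The controller structure described above can be sketched as a simple discrete update; the gains, the forgetting factor, and the dead-band width below are illustrative assumptions rather than the values identified in the study.

```python
def weight(error, deadband=0.01):
    """Error weighting: ignore errors inside the natural variability band."""
    return 0.0 if abs(error) < deadband else error

def update_assistance(u_prev, error, forget=0.9, gain=0.5):
    """Error-based assist-as-needed update with a forgetting factor:
    assistance decays whenever the weighted error stays small."""
    return forget * u_prev + gain * weight(error)

u = 0.0
for e in [0.05, 0.03, 0.01, 0.004, 0.0]:   # kinematic errors shrinking with learning
    u = update_assistance(u, e)
    print(round(u, 4))
```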
Refractive Errors in Patients with Migraine Headache.
Gunes, Alime; Demirci, Seden; Tok, Levent; Tok, Ozlem; Koyuncuoglu, Hasan; Yurekli, Vedat Ali
2016-01-01
To evaluate refractive errors in patients with migraine headache and to compare with healthy subjects. This prospective case-control study includes patients with migraine and age- and sex-matched healthy subjects. Clinical and demographic characteristics of the patients were noted. Detailed ophthalmological examinations were performed containing spherical refractive error, astigmatic refractive error, spherical equivalent (SE), anisometropia, best-corrected visual acuity, intraocular pressure, slit lamp biomicroscopy, fundus examination, axial length, anterior chamber depth, and central corneal thickness. Spectacle use in migraine and control groups was compared. Also, the relationship between refractive components and migraine headache variables was investigated. Seventy-seven migraine patients with mean age of 33.27 ± 8.84 years and 71 healthy subjects with mean age of 31.15 ± 10.45 years were enrolled (p = 0.18). The migraine patients had higher degrees of astigmatic refractive error, SE, and anisometropia when compared with the control subjects (p = 0.01, p = 0.03, p = 0.02, respectively). Migraine patients may have higher degrees of astigmatism, SE, and anisometropia. Therefore, they should have ophthalmological examinations regularly to ensure that their refractive errors are appropriately corrected.
ERIC Educational Resources Information Center
Uebel, Henrik; Albrecht, Bjorn; Asherson, Philip; Borger, Norbert A.; Butler, Louise; Chen, Wai; Christiansen, Hanna; Heise, Alexander; Kuntsi, Jonna; Schafer, Ulrike; Andreou, Penny; Manor, Iris; Marco, Rafaela; Miranda, Ana; Mulligan, Aisling; Oades, Robert D.; van der Meere, Jaap; Faraone, Stephen V.; Rothenberger, Aribert; Banaschewski, Tobias
2010-01-01
Background: Attention-deficit hyperactivity disorder (ADHD) is one of the most common and highly heritable child psychiatric disorders. There is strong evidence that children with ADHD show slower and more variable responses in tasks such as Go/Nogo tapping aspects of executive functions like sustained attention and response control which may be…
Szekér, Szabolcs; Vathy-Fogarassy, Ágnes
2018-01-01
Logistic regression based propensity score matching is a widely used method in case-control studies to select the individuals of the control group. This method creates a suitable control group if all factors affecting the output variable are known. However, if relevant latent variables exist as well, which are not taken into account during the calculations, the quality of the control group is uncertain. In this paper, we present a statistics-based study in which we try to determine the relationship between the accuracy of the logistic regression model and the uncertainty of the dependent variable of the control group defined by propensity score matching. Our analyses show that there is a linear correlation between the fit of the logistic regression model and the uncertainty of the output variable. In certain cases, a latent binary explanatory variable can result in a relative error of up to 70% in the prediction of the outcome variable. The observed phenomenon calls the attention of analysts to an important point, which must be taken into account when drawing conclusions.
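A minimal sketch of logistic-regression propensity score matching of the kind described above (1:1 nearest-neighbour matching without a caliper, using scikit-learn); the variable names and matching details are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def match_controls(X, is_case):
    """X: covariate matrix; is_case: boolean array marking the cases.
    Returns indices of the selected control individuals."""
    ps = LogisticRegression(max_iter=1000).fit(X, is_case).predict_proba(X)[:, 1]
    cases = np.where(is_case)[0]
    controls = np.where(~is_case)[0]
    chosen, used = [], set()
    for i in cases:
        # pick the closest control (by propensity score) not yet used
        for j in controls[np.argsort(np.abs(ps[controls] - ps[i]))]:
            if j not in used:
                chosen.append(j)
                used.add(j)
                break
    return np.array(chosen)
```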
Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.
2014-01-01
Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
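For comparison with the procedures discussed above, the sketch below implements the classical Benjamini-Hochberg linear step-up procedure for FDR control; it is not the resampling-based empirical Bayes method itself, and the p-values shown are made up.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of rejected hypotheses, controlling the FDR at level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * (np.arange(1, m + 1) / m)
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])       # largest index satisfying the step-up bound
        reject[order[:k + 1]] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20]))
```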
Synthesis of hover autopilots for rotary-wing VTOL aircraft
NASA Technical Reports Server (NTRS)
Hall, W. E.; Bryson, A. E., Jr.
1972-01-01
The practical situation is considered where imperfect information on only a few rotor and fuselage state variables is available. Filters are designed to estimate all the state variables from noisy measurements of fuselage pitch/roll angles and from noisy measurements of both fuselage and rotor pitch/roll angles. The mean square response of the vehicle to a very gusty, random wind is computed using various filter/controllers and is found to be quite satisfactory although, of course, not so good as when one has perfect information (idealized case). The second part of the report considers precision hover over a point on the ground. A vehicle model without rotor dynamics is used and feedback signals in position and integral of position error are added. The mean square response of the vehicle to a very gusty, random wind is computed, assuming perfect information feedback, and is found to be excellent. The integral error feedback gives zero position error for a steady wind, and smaller position error for a random wind.
A Bayesian Measurement Error Model for Misaligned Radiographic Data
Lennox, Kristin P.; Glascoe, Lee G.
2013-09-06
An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.
A Divided Attention Experiment with Pervasively Hyperactive Children.
ERIC Educational Resources Information Center
van der Meere, Jaap; Sergeant, Joseph
1987-01-01
Task performance of 12 pervasive hyperactives and controls (ages 8-13) was studied in a divided attention reaction time experiment. Hyperactives were slower than controls, had more variable reaction times, and made more frequent errors. Task inefficiency in hyperactives could not be explained by a divided attention deficiency or impulsive…
Feedback attitude sliding mode regulation control of spacecraft using arm motion
NASA Astrophysics Data System (ADS)
Shi, Ye; Liang, Bin; Xu, Dong; Wang, Xueqian; Xu, Wenfu
2013-09-01
The problem of spacecraft attitude regulation based on the reaction of arm motion has attracted extensive attention from both engineering and academic fields. Most solutions to the manipulator's motion tracking problem achieve only asymptotic stabilization, so these controllers cannot realize precise attitude regulation because of the existence of non-holonomic constraints. Thus, sliding mode control algorithms are adopted to stabilize the tracking error with zero transient process. Due to the switching action of the variable structure controller, once the tracking error reaches the designed hyper-plane, it is restricted to this plane permanently, even in the presence of external disturbances; thus, precise attitude regulation can be achieved. Furthermore, taking the non-zero initial tracking errors and the chattering phenomenon into consideration, saturation functions are used to replace sign functions to smooth the control torques. The relations between the upper bounds of the tracking errors and the controller parameters are derived to reveal the physical characteristics of the controller. Mathematical models of a free-floating space manipulator are established and simulations are conducted in the end. The results show that the spacecraft's attitude can be regulated to the desired position using the proposed algorithm, with a steady-state error of 0.0002 rad. In addition, the joint tracking trajectory is smooth, and the joint tracking errors converge to zero quickly with a satisfactory continuous joint control input. The proposed research provides a feasible solution for spacecraft attitude regulation using arm motion, and improves the precision of spacecraft attitude regulation.
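The smoothing step described above, replacing the sign function with a saturation function inside a boundary layer, can be sketched as follows; the sliding-surface definition and the gains lam, K, and phi are illustrative assumptions, not the controller of the paper.

```python
import numpy as np

def smc_control(e, e_dot, lam=2.0, K=5.0, phi=0.05):
    """Sliding-mode control with a boundary layer: s = e_dot + lam*e,
    and sat(s/phi) replaces sign(s) to reduce chattering."""
    s = e_dot + lam * e
    return -K * np.clip(s / phi, -1.0, 1.0)
```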
Metin, Baris; Roeyers, Herbert; Wiersema, Jan R; van der Meere, Jaap; Sonuga-Barke, Edmund
2012-12-15
According to the state regulation deficit model, event rate (ER) is an important determinant of performance of children with attention-deficit/hyperactivity disorder (ADHD). Fast ER is predicted to create overactivation and produce errors of commission, whereas slow ER is thought to create underactivation marked by slow and variable reaction times (RT) and errors of omission. To test these predictions, we conducted a systematic search of the literature to identify all reports of comparisons of ADHD and control individuals' performance on Go/No-Go tasks published between 2000 and 2011. In one analysis, we included all trials with at least two event rates and calculated the difference between ER conditions. In a second analysis, we used metaregression to test for the moderating role of ER on ADHD versus control differences seen across Go/No-Go studies. There was a significant and disproportionate slowing of reaction time in ADHD relative to controls on trials with slow event rates in both meta-analyses. For commission errors, the effect sizes were larger on trials with fast event rates. No ER effects were seen for RT variability. There were also general effects of ADHD on performance for all variables that persisted after effects of ER were taken into account. The results provide support for the state regulation deficit model of ADHD by showing the differential effects of fast and slow ER. The lack of an effect of ER on RT variability suggests that this behavioral characteristic may not be a marker of cognitive energetic effects in ADHD. Copyright © 2012 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Compensated gain control circuit for buck regulator command charge circuit
Barrett, David M.
1996-01-01
A buck regulator command charge circuit includes a compensated-gain control signal for compensating for changes in the component values in order to achieve optimal voltage regulation. The compensated-gain control circuit includes an automatic-gain control circuit for generating a variable-gain control signal. The automatic-gain control circuit is formed of a precision rectifier circuit, a filter network, an error amplifier, and an integrator circuit.
Compensated gain control circuit for buck regulator command charge circuit
Barrett, D.M.
1996-11-05
A buck regulator command charge circuit includes a compensated-gain control signal for compensating for changes in the component values in order to achieve optimal voltage regulation. The compensated-gain control circuit includes an automatic-gain control circuit for generating a variable-gain control signal. The automatic-gain control circuit is formed of a precision rectifier circuit, a filter network, an error amplifier, and an integrator circuit. 5 figs.
Feedforward control strategies of subjects with transradial amputation in planar reaching.
Metzger, Anthony J; Dromerick, Alexander W; Schabowsky, Christopher N; Holley, Rahsaan J; Monroe, Brian; Lum, Peter S
2010-01-01
The rate of upper-limb amputations is increasing, and the rejection rate of prosthetic devices remains high. People with upper-limb amputation do not fully incorporate prosthetic devices into their activities of daily living. By understanding the reaching behaviors of prosthesis users, researchers can alter prosthetic devices and develop training protocols to improve the acceptance of prosthetic limbs. By observing the reaching characteristics of the nondisabled arms of people with amputation, we can begin to understand how the brain alters its motor commands after amputation. We asked subjects to perform rapid reaching movements to two targets with and without visual feedback. Subjects performed the tasks with both their prosthetic and nondisabled arms. We calculated endpoint error, trajectory error, and variability and compared them with those of nondisabled control subjects. We found no significant abnormalities in the prosthetic limb. However, we found an abnormal leftward trajectory error (in right arms) in the nondisabled arm of prosthetic users in the vision condition. In the no-vision condition, the nondisabled arm displayed abnormal leftward endpoint errors and abnormally higher endpoint variability. In the vision condition, peak velocity was lower and movement duration was longer in both arms of subjects with amputation. These abnormalities may reflect the cortical reorganization associated with limb loss.
Pranata, Adrian; Perraton, Luke; El-Ansary, Doa; Clark, Ross; Fortin, Karine; Dettmann, Tim; Brandham, Robert; Bryant, Adam
2017-07-01
The ability to control lumbar extensor force output is necessary for daily activities. However, it is unknown whether this ability is impaired in chronic low back pain patients. Similarly, it is unknown whether lumbar extensor force control is related to the disability levels of chronic low back pain patients. Thirty-three chronic low back pain and 20 healthy people performed a lumbar extension force-matching task in which they increased and decreased their force output to match a variable target force within 20%-50% of maximal voluntary isometric contraction. Force control was quantified as the root-mean-square error between participants' force output and the target force across the entire force curve, and separately during its increasing and decreasing portions. Within- and between-group differences in force-matching error and the relationship between the back pain group's force-matching results and their Oswestry Disability Index scores were assessed using ANCOVA and linear regression, respectively. The back pain group demonstrated more overall force-matching error (mean difference = 1.60 [0.78, 2.43], P<0.01) and more force-matching error while increasing force output (mean difference = 2.19 [1.01, 3.37], P<0.01) than the control group. The back pain group demonstrated more force-matching error while increasing than while decreasing force output (mean difference = 1.74, P<0.001, 95%CI [0.87, 2.61]). A unit increase in force-matching error while decreasing force output is associated with a 47% increase in Oswestry score in the back pain group (R² = 0.19, P = 0.006). Lumbar extensor muscle force control is compromised in chronic low back pain patients. Force-matching error predicts disability, confirming the validity of our force control protocol for chronic low back pain patients. Copyright © 2017 Elsevier Ltd. All rights reserved.
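A minimal sketch of the force-matching error measure used above: root-mean-square error between the produced and target forces, computed over the whole trial and separately over the increasing and decreasing portions of the target curve; the array names are illustrative.

```python
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

def force_matching_errors(force, target):
    rising = np.gradient(target) >= 0          # increasing portion of the target curve
    return {
        "overall": rmse(force, target),
        "increasing": rmse(force[rising], target[rising]),
        "decreasing": rmse(force[~rising], target[~rising]),
    }
```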
NASA Astrophysics Data System (ADS)
Suparman, Yusep; Folmer, Henk; Oud, Johan H. L.
2014-01-01
Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper, we develop a constrained autoregression-structural equation model (ASEM) to handle both types of problems. Standard panel data models to handle omitted variables bias are based on the assumption that the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. To get insight into the consequences of measurement error and omitted variables, we compare the ASEM estimates with the outcomes of (1) a standard SEM, which does not account for omitted variables, (2) a constrained autoregression model, which does not account for measurement error, and (3) a fixed effects hedonic model, which ignores measurement error and time-varying omitted variables. The differences between the ASEM estimates and the outcomes of the three alternative approaches are substantial.
NASA Astrophysics Data System (ADS)
Chen, Syuan-Yi; Gong, Sheng-Sian
2017-09-01
This study aims to develop an adaptive high-precision control system for controlling the speed of a vane-type air motor (VAM) pneumatic servo system. In practice, the rotor speed of a VAM depends on the input mass air flow, which can be controlled by the effective orifice area (EOA) of an electronic throttle valve (ETV). As the control variable of a second-order pneumatic system is the integral of the EOA, an observation-based adaptive dynamic sliding-mode control (ADSMC) system is proposed to derive the differential of the control variable, namely, the EOA control signal. In the ADSMC system, a proportional-integral-derivative fuzzy neural network (PIDFNN) observer is used to achieve an ideal dynamic sliding-mode control (DSMC), and a supervisor compensator is designed to eliminate the approximation error. As a result, the ADSMC incorporates the robustness of a DSMC and the online learning ability of a PIDFNN. To ensure the convergence of the tracking error, a Lyapunov-based analytical method is employed to obtain the adaptive algorithms required to tune the control parameters of the online ADSMC system. Finally, our experimental results demonstrate the precision and robustness of the ADSMC system for highly nonlinear and time-varying VAM pneumatic servo systems.
1996-2007 Interannual Spatio-Temporal Variability in Snowmelt in Two Montane Watersheds
NASA Astrophysics Data System (ADS)
Jepsen, S. M.; Molotch, N. P.; Rittger, K. E.
2009-12-01
Snowmelt is a primary water source for ecosystems within, and urban/agricultural centers near, mountain regions. Stream chemistry from montane catchments is controlled by the flowpaths of water from snowmelt and the timing and duration of snow coverage. A process level understanding of the variability in these processes requires an understanding of the effect of changing climate and anthropogenic loading on spatio-temporal snowmelt patterns. With this as our objective, we are applying a snow reconstruction model to two well-studied montane watersheds, Tokopah Basin (TOK), California and Green Lakes Valley (GLV), Colorado, to examine interannual variability in the timing and location of snowmelt in response to variable climate conditions during the period from 1996 to 2007. The reconstruction model back solves for snowmelt by combining surface energy fluxes, inferred from meteorological data, with sequences of melt season snow images derived from satellite data (i.e., snowmelt depletion curves). Preliminary model results for 2002 were tested against measured snow water equivalent (SWE) and hydrograph data for the two watersheds. The computed maximum SWE averaged over TOK and GLV were 94 cm (~+17% error) and 50.2 cm (~+1% error), respectively. We present an analysis of interannual variability in these errors, in addition to reconstructed snowmelt maps over different land cover types under changing climate conditions between 1996-2007, focusing on the variability with interannual variation in climate.
Variable Structure PID Control to Prevent Integrator Windup
NASA Technical Reports Server (NTRS)
Hall, C. E.; Hodel, A. S.; Hung, J. Y.
1999-01-01
PID controllers are frequently used to control systems requiring zero steady-state error while maintaining requirements for settling time and robustness (gain/phase margins). PID controllers suffer significant loss of performance due to short-term integrator wind-up when used in systems with actuator saturation. We examine several existing and proposed methods for the prevention of integrator wind-up in both continuous and discrete time implementations.
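As a point of reference, the sketch below shows one common anti-windup scheme (conditional integration: the integrator is frozen whenever the actuator is saturated and the error would push it further into saturation). It is not the variable-structure method examined in the report, and the gains and limits are illustrative assumptions.

```python
def pid_step(e, e_prev, integ, dt, kp=1.0, ki=0.5, kd=0.1, u_min=-1.0, u_max=1.0):
    """One discrete PID step with conditional-integration anti-windup.
    Returns the (saturated) control output and the updated integrator state."""
    deriv = (e - e_prev) / dt
    u_unsat = kp * e + ki * integ + kd * deriv
    u = min(max(u_unsat, u_min), u_max)
    saturated = u != u_unsat
    if not (saturated and e * u_unsat > 0):    # freeze the integrator during windup
        integ += e * dt
    return u, integ
```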
Goode, C; LeRoy, J; Allen, D G
2007-01-01
This study reports on a multivariate analysis of the moving bed biofilm reactor (MBBR) wastewater treatment system at a Canadian pulp mill. The modelling approach involved a data overview by principal component analysis (PCA) followed by partial least squares (PLS) modelling with the objective of explaining and predicting changes in the BOD output of the reactor. Over two years of data with 87 process measurements were used to build the models. Variables were collected from the MBBR control scheme as well as upstream in the bleach plant and in digestion. To account for process dynamics, a variable lagging approach was used for variables with significant temporal correlations. It was found that wood type pulped at the mill was a significant variable governing reactor performance. Other important variables included flow parameters, faults in the temperature or pH control of the reactor, and some potential indirect indicators of biomass activity (residual nitrogen and pH out). The most predictive model was found to have an RMSEP value of 606 kgBOD/d, representing a 14.5% average error. This was a good fit, given the measurement error of the BOD test. Overall, the statistical approach was effective in describing and predicting MBBR treatment performance.
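A minimal sketch of the modelling pipeline described above, using scikit-learn: lag selected upstream variables to capture process dynamics, then fit a PLS model predicting the reactor's BOD output. The column names, the single-sample lag, and the number of components are illustrative assumptions.

```python
import pandas as pd
from sklearn.cross_decomposition import PLSRegression

def fit_pls(df, y_col="BOD_out", lagged_cols=("flow", "temperature"), lag=1,
            n_components=3):
    data = df.copy()
    for col in lagged_cols:                    # lag variables with temporal correlation
        data[f"{col}_lag{lag}"] = data[col].shift(lag)
    data = data.dropna()
    X = data.drop(columns=[y_col])
    y = data[y_col]
    return PLSRegression(n_components=n_components).fit(X, y)
```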
Sample sizes to control error estimates in determining soil bulk density in California forest soils
Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber
2016-01-01
Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observation (n), for predicting the soil bulk density with a...
The Relationship between Intelligence and Performance on the Test of Variables of Attention (TOVA).
ERIC Educational Resources Information Center
Weyandt, Lisa L.; Mitzlaff, Linda; Thomas, Laura
2002-01-01
This study, with 17 young adults with attention deficit hyperactivity disorder (ADHD) and 62 without ADHD, found no significant correlations between full scale IQ and scores on the Test of Variables of Attention (TOVA). However, analysis of variance revealed that subjects with ADHD made more errors of omission on the TOVA than did controls.…
On the development of voluntary and reflexive components in human saccade generation.
Fischer, B; Biscaldi, M; Gezeck, S
1997-04-18
The saccadic performance of a large number (n = 281) of subjects of different ages (8-70 years) was studied applying two saccade tasks: the prosaccade overlap (PO) task and the antisaccade gap (AG) task. From the PO task, the mean reaction times and the percentage of express saccades were determined for each subject. From the AG task, the mean reaction time of the correct antisaccades and of the erratic prosaccades were measured. In addition, we determined the error rate and the mean correction time, i.e. the time between the end of the first erratic prosaccade and the following corrective antisaccade. These variables were measured separately for stimuli presented (in random order) at the right or left side. While strong correlations were seen between variables for the right and left sides, considerable side asymmetries were obtained from many subjects. A factor analysis revealed that the seven variables (six eye movement variables plus age) were mainly determined by only two factors, V and F. The V factor was dominated by the variables from the AG task (reaction time, correction time, error rate) the F factor by variables from the PO task (reaction time, percentage express saccades) and the reaction time of the errors (prosaccades!) from the AG task. The relationship between the percentage number of express saccades and the percentage number of errors was completely asymmetric: high numbers of express saccades were accompanied by high numbers of errors but not vice versa. Only the variables in the V factor covaried with age. A fast decrease of the antisaccade reaction time (by 50 ms), of the correction times (by 70 ms) and of the error rate (from 60 to 22%) was observed between age 9 and 15 years, followed by a further period of slower decrease until age 25 years. The mean time a subject needed to reach the side opposite to the stimulus as required by the antisaccade task decreased from approximately 350 to 250 ms until age 15 years and decreased further by 20 ms before it increased again to approximately 280 ms. At higher ages, there was a slight indication for a return development. Subjects with high error rates had long antisaccade latencies and needed a long time to reach the opposite side on error trials. The variables obtained from the PO task varied also significantly with age but by smaller amounts. The results are discussed in relation to the subsystems controlling saccade generation: a voluntary and a reflex component the latter being suppressed by active fixation. Both systems seem to develop differentially. The data offer a detailed baseline for clinical studies using the pro- and antisaccade tasks as an indication of functional impairments, circumscribed brain lesions, neurological and psychiatric diseases and cognitive deficits.
A provisional effective evaluation when errors are present in independent variables
NASA Technical Reports Server (NTRS)
Gurin, L. S.
1983-01-01
Algorithms are examined for evaluating the parameters of a regression model when there are errors in the independent variables. The algorithms are fast and the estimates they yield are stable with respect to the correlation of errors and measurements of both the dependent variable and the independent variables.
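One standard way to handle errors in the independent variable of a simple linear model is total least squares (orthogonal regression); the sketch below is a generic illustration of that idea, not the specific algorithms examined in the report.

```python
import numpy as np

def tls_line(x, y):
    """Fit y = a*x + b while treating both x and y as error-prone."""
    xm, ym = x.mean(), y.mean()
    A = np.column_stack([x - xm, y - ym])
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    nx, ny = Vt[-1]                            # normal direction of the best-fit line
    a = -nx / ny
    return a, ym - a * xm
```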
Optimizing pattern recognition-based control for partial-hand prosthesis application.
Earley, Eric J; Adewuyi, Adenike A; Hargrove, Levi J
2014-01-01
Partial-hand amputees often retain good residual wrist motion, which is essential for functional activities involving use of the hand. Thus, a crucial design criterion for a myoelectric, partial-hand prosthesis control scheme is that it allows the user to retain residual wrist motion. Pattern recognition (PR) of electromyographic (EMG) signals is a well-studied method of controlling myoelectric prostheses. However, wrist motion degrades a PR system's ability to correctly predict hand-grasp patterns. We studied the effects of (1) window length and number of hand-grasps, (2) static and dynamic wrist motion, and (3) EMG muscle source on the ability of a PR-based control scheme to classify functional hand-grasp patterns. Our results show that training PR classifiers with both extrinsic and intrinsic muscle EMG yields a lower error rate than training with either group by itself (p<0.001); and that training in only variable wrist positions, with only dynamic wrist movements, or with both variable wrist positions and movements results in lower error rates than training in only the neutral wrist position (p<0.001). Finally, our results show that both an increase in window length and a decrease in the number of grasps available to the classifier significantly decrease classification error (p<0.001). These results remained consistent whether the classifier selected or maintained a hand-grasp.
Chen, Yi-Ching; Lin, Linda L.; Lin, Yen-Ting; Hu, Chia-Ling; Hwang, Ing-Shiou
2017-01-01
Error amplification (EA) feedback is a promising approach to advance visuomotor skill. As error detection and visuomotor processing at short time scales decline with age, this study examined whether older adults could benefit from EA feedback that included higher-frequency information to guide a force-tracking task. Fourteen young and 14 older adults performed low-level static isometric force-tracking with visual guidance of typical visual feedback and EA feedback containing augmented high-frequency errors. Stabilogram diffusion analysis was used to characterize force fluctuation dynamics. Also, the discharge behaviors of motor units and pooled motor unit coherence were assessed following the decomposition of multi-channel surface electromyography (EMG). EA produced different behavioral and neurophysiological impacts on young and older adults. Older adults exhibited inferior task accuracy with EA feedback than with typical visual feedback, but not young adults. Although stabilogram diffusion analysis revealed that EA led to a significant decrease in critical time points for both groups, EA potentiated the critical point of force fluctuations <ΔFc2>, short-term effective diffusion coefficients (Ds), and short-term exponent scaling only for the older adults. Moreover, in older adults, EA added to the size of discharge variability of motor units and discharge regularity of cumulative discharge rate, but suppressed the pooled motor unit coherence in the 13–35 Hz band. Virtual EA alters the strategic balance between open-loop and closed-loop controls for force-tracking. Contrary to expectations, the prevailing use of closed-loop control with EA that contained high-frequency error information enhanced the motor unit discharge variability and undermined the force steadiness in the older group, concerning declines in physiological complexity in the neurobehavioral system and the common drive to the motoneuronal pool against force destabilization. PMID:29167637
Interference elimination in digital controllers of automation systems of oil and gas complex
NASA Astrophysics Data System (ADS)
Solomentsev, K. Yu; Fugarov, D. D.; Purchina, O. A.; Poluyan, A. Y.; Nesterchuk, V. V.; Petrenkova, S. B.
2018-05-01
This article considers problems that arise in the development of digital controllers for automatic control systems. In the presence of interference, and also at high sampling frequencies, digital differentiation produces a large error because the derivative is computed as the difference of two nearly equal values. A differentiation method is proposed that reduces this error by averaging the difference quotient over a series of values. A block diagram for implementing this differentiation method in controller design is presented.
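The idea of averaging difference quotients can be sketched as follows; the window length n is an illustrative assumption.

```python
import numpy as np

def naive_derivative(x, dt):
    """Plain first difference: amplifies noise when successive samples are close."""
    return np.diff(x) / dt

def averaged_derivative(x, dt, n=8):
    """Average n successive difference quotients to suppress the error
    caused by subtracting two nearly equal, noisy values."""
    d = np.diff(x) / dt
    return np.convolve(d, np.ones(n) / n, mode="valid")
```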
Climatological Modeling of Monthly Air Temperature and Precipitation in Egypt through GIS Techniques
NASA Astrophysics Data System (ADS)
El Kenawy, A.
2009-09-01
This paper describes a method for modeling and mapping four climatic variables (maximum temperature, minimum temperature, mean temperature, and total precipitation) in Egypt using a multiple regression approach implemented in a GIS environment. In this model, a set of variables including latitude, longitude, elevation within a distance of 5, 10, and 15 km, slope, aspect, distance to the Mediterranean Sea, distance to the Red Sea, distance to the Nile, ratio between land and water masses within a radius of 5, 10, and 15 km, the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Water Index (NDWI), the Normalized Difference Temperature Index (NDTI), and reflectance are included as independent variables. These variables were integrated as raster layers in MiraMon software at a spatial resolution of 1 km. The climatic variables were treated as dependent variables and averaged from 39 quality-controlled and homogenized series distributed across the entire country during the period 1957-2006. For each climatic variable, digital and objective maps were obtained using the multiple regression coefficients at monthly, seasonal, and annual timescales. The accuracy of these maps was assessed through cross-validation between predicted and observed values using a set of statistics including the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), mean bias error (MBE), and Willmott's D statistic. The resulting maps are valuable for their spatial resolution as well as for the number of observatories involved in the analysis.
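A minimal sketch of the regression-and-validation step using scikit-learn: fit a multiple linear regression of a monthly climate variable on the geographic and terrain predictors, then compute the cross-validation statistics named above (Willmott's D omitted for brevity). The predictor matrix and fold count are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

def cross_validate_climate_model(X, y, folds=10):
    pred = cross_val_predict(LinearRegression(), X, y, cv=folds)
    resid = pred - y
    return {
        "R2": float(np.corrcoef(pred, y)[0, 1] ** 2),
        "RMSE": float(np.sqrt(np.mean(resid ** 2))),
        "MAE": float(np.mean(np.abs(resid))),
        "MBE": float(np.mean(resid)),
    }
```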
Development of adaptive observation strategy using retrospective optimal interpolation
NASA Astrophysics Data System (ADS)
Noh, N.; Kim, S.; Song, H.; Lim, G.
2011-12-01
Retrospective optimal interpolation (ROI) is a method used to minimize cost functions with multiple minima without using adjoint models. Song and Lim (2011) performed experiments to reduce the computational cost of implementing ROI by transforming the control variables into the eigenvectors of the background error covariance. We adapt the ROI algorithm to compute sensitivity estimates of severe weather events over the Korean peninsula. The eigenvectors of the ROI algorithm are modified every time observations are assimilated. This implies that the modified eigenvectors show the error distribution of the control variables updated by assimilating observations, so we can estimate the effects of specific observations. In order to verify the adaptive observation strategy, high-impact weather over the Korean peninsula is simulated and interpreted using the WRF modeling system, and sensitive regions for each high-impact weather event are calculated. The effects of assimilation for each observation type are discussed.
Pointing control using a moving base of support.
Hondzinski, Jan M; Kwon, Taegyong
2009-07-01
The purposes of this study were to determine whether gaze direction provides a control signal for movement direction for a pointing task requiring a step and to gain insight into why discrepancies previously identified in the literature for endpoint accuracy with gaze directed eccentrically exist. Straight arm pointing movements were performed to real and remembered target locations, either toward or 30 degrees eccentric to gaze direction. Pointing occurred in normal room lighting or darkness while subjects sat, stood still or side-stepped left or right. Trunk rotation contributed 22-65% to gaze orientations when it was not constrained. Error differences for different target locations explained discrepancies among previous experiments. Variable pointing errors were influenced by gaze direction, while mean systematic pointing errors and trunk orientations were influenced by step direction. These data support the use of a control strategy that relies on gaze direction and equilibrium inputs for whole-body goal-directed movements.
A practical method of estimating standard error of age in the fission track dating method
Johnson, N.M.; McGee, V.E.; Naeser, C.W.
1979-01-01
A first-order approximation formula for the propagation of error in the fission track age equation is given by P_A = C[P_s^2 + P_i^2 + P_Φ^2 - 2rP_sP_i]^(1/2), where P_A, P_s, P_i, and P_Φ are the percentage errors of age, of spontaneous track density, of induced track density, and of neutron dose, respectively, and C is a constant. The correlation, r, between spontaneous and induced track densities is a crucial element in the error analysis, acting generally to improve the standard error of age. In addition, the correlation parameter r is instrumental in specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method. © 1979.
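A small helper evaluating the propagation formula above; the input values are illustrative, and C is taken as 1 for simplicity.

```python
def age_percent_error(Ps, Pi, Pphi, r, C=1.0):
    """Percentage error of the fission-track age from the percentage errors of the
    spontaneous density (Ps), induced density (Pi), and neutron dose (Pphi),
    given the correlation r between spontaneous and induced track densities."""
    return C * (Ps**2 + Pi**2 + Pphi**2 - 2.0 * r * Ps * Pi) ** 0.5

print(age_percent_error(Ps=5.0, Pi=4.0, Pphi=2.0, r=0.6))   # smaller than the r = 0 case
```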
Descarreaux, Martin; Mayrand, Nancy; Raymond, Jean
2007-01-01
A number of recent scientific publications suggest that patients suffering from whiplash-associated disorders (WADs) exhibit sensorimotor deficits in the control of head and neck movements. The main objective of the present study was to evaluate if subjects with WADs can produce isometric neck extension and flexion forces with precision, variability, and a mode of control similar to the values of healthy subjects. A control group study with repeated measures. Neck force production parameters and neuromuscular control were measured in 17 whiplash and 14 control subjects. The experimental group included subjects who had a history of persistent neck pain or disability after a motor vehicle accident. Pain levels were assessed on a standard 100-mm visual analog pain scale at the beginning and end of the experiment. Each whiplash subject completed the neck disability index and the short-form 36 health survey (SF-36) questionnaire before the experiment. All subjects were asked to exert flexion and extension forces against a fixed head harness. Kinetic variables included time to peak force, time to peak force variability, peak force variability, and absolute error in peak force. Surface electrodes were applied bilaterally over the sternocleidomastoideus and paraspinal muscles. Electromyography (EMG)-dependent variables included EMG burst duration and amplitude using numerical integrated techniques. The average time to peak force was significantly longer for whiplash subjects than for the healthy controls. A significant increase in peak force variability was also observed in the whiplash group, and no group differences were noted for absolute error. Heightened muscular activity was seen in both paraspinal muscles, even though it only reached statistical significance for the left paraspinal muscle. Our results show that the whiplash subjects involved in the study were able to produce isometric forces with spatial precision similar to healthy controls using a motor strategy in which the time to peak force is increased. This trade-off between spatial precision and time to peak force probably reflects an adaptation aimed at limiting pain and further injuries.
Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A
2015-03-15
The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.
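The kind of simulation described above can be sketched as follows: a continuous confounder is dichotomized at its median before adjustment, and the Type-I error rate for a truly null exposure effect is estimated. The sample size, effect sizes, and number of replicates are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def one_replicate(n=500):
    c = rng.normal(size=n)                      # continuous confounder
    x = 0.5 * c + rng.normal(size=n)            # exposure (no true effect on the outcome)
    y = 0.5 * c + rng.normal(size=n)            # outcome driven only by the confounder
    c_cat = (c > np.median(c)).astype(float)    # categorized (dichotomized) confounder
    design = sm.add_constant(np.column_stack([x, c_cat]))
    p_exposure = sm.OLS(y, design).fit().pvalues[1]
    return p_exposure < 0.05

rejections = [one_replicate() for _ in range(1000)]
print(np.mean(rejections))                      # noticeably above the nominal 0.05
```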
Wonnemann, Meinolf; Frömke, Cornelia; Koch, Armin
2015-01-01
We investigated different evaluation strategies for bioequivalence trials with highly variable drugs on their resulting empirical type I error and empirical power. The classical 'unscaled' crossover design with average bioequivalence evaluation, the Add-on concept of the Japanese guideline, and the current 'scaling' approach of EMA were compared. Simulation studies were performed based on the assumption of a single dose drug administration while changing the underlying intra-individual variability. Inclusion of Add-on subjects following the Japanese concept led to slight increases of the empirical α-error (≈7.5%). For the approach of EMA we noted an unexpected tremendous increase of the rejection rate at a geometric mean ratio of 1.25. Moreover, we detected error rates slightly above the pre-set limit of 5% even at the proposed 'scaled' bioequivalence limits. With the classical 'unscaled' approach and the Japanese guideline concept the goal of reduced subject numbers in bioequivalence trials of HVDs cannot be achieved. On the other hand, widening the acceptance range comes at the price that quite a number of products will be accepted bioequivalent that had not been accepted in the past. A two-stage design with control of the global α therefore seems the better alternative.
Heavner, Karyn; Burstyn, Igor
2015-08-24
Variation in the odds ratio (OR) resulting from selection of cutoffs for categorizing continuous variables is rarely discussed. We present results for the effect of varying cutoffs used to categorize a mismeasured exposure in a simulated population in the context of autism spectrum disorders research. Simulated cohorts were created with three distinct exposure-outcome curves and three measurement error variances for the exposure. ORs were calculated using logistic regression for 61 cutoffs (mean ± 3 standard deviations) used to dichotomize the observed exposure. ORs were calculated for five categories with a wide range for the cutoffs. For each scenario and cutoff, the OR, sensitivity, and specificity were calculated. The three exposure-outcome relationships had distinctly shaped OR (versus cutoff) curves, but increasing measurement error obscured the shape. At extreme cutoffs, there was non-monotonic oscillation in the ORs that cannot be attributed to "small numbers." Exposure misclassification following categorization of the mismeasured exposure was differential, as predicted by theory. Sensitivity was higher among cases and specificity among controls. Cutoffs chosen for categorizing continuous variables can have profound effects on study results. When measurement error is not too great, the shape of the OR curve may provide insight into the true shape of the exposure-disease relationship.
Suppressing relaxation in superconducting qubits by quasiparticle pumping.
Gustavsson, Simon; Yan, Fei; Catelani, Gianluigi; Bylander, Jonas; Kamal, Archana; Birenbaum, Jeffrey; Hover, David; Rosenberg, Danna; Samach, Gabriel; Sears, Adam P; Weber, Steven J; Yoder, Jonilyn L; Clarke, John; Kerman, Andrew J; Yoshihara, Fumiki; Nakamura, Yasunobu; Orlando, Terry P; Oliver, William D
2016-12-23
Dynamical error suppression techniques are commonly used to improve coherence in quantum systems. They reduce dephasing errors by applying control pulses designed to reverse erroneous coherent evolution driven by environmental noise. However, such methods cannot correct for irreversible processes such as energy relaxation. We investigate a complementary, stochastic approach to reducing errors: Instead of deterministically reversing the unwanted qubit evolution, we use control pulses to shape the noise environment dynamically. In the context of superconducting qubits, we implement a pumping sequence to reduce the number of unpaired electrons (quasiparticles) in close proximity to the device. A 70% reduction in the quasiparticle density results in a threefold enhancement in qubit relaxation times and a comparable reduction in coherence variability. Copyright © 2016, American Association for the Advancement of Science.
Furmanek, Mariusz P.; Słomka, Kajetan J.; Sobiesiak, Andrzej; Rzepko, Marian; Juras, Grzegorz
2018-01-01
The proprioceptive information received from mechanoreceptors is potentially responsible for controlling the joint position and force differentiation. However, it is unknown whether cryotherapy influences this complex mechanism. Previously reported results are not universally conclusive and sometimes even contradictory. The main objective of this study was to investigate the impact of local cryotherapy on knee joint position sense (JPS) and force production sense (FPS). The study group consisted of 55 healthy participants (age: 21 ± 2 years, body height: 171.2 ± 9 cm, body mass: 63.3 ± 12 kg, BMI: 21.5 ± 2.6). Local cooling was achieved with the use of gel-packs cooled to -2 ± 2.5°C and applied simultaneously over the knee joint and the quadriceps femoris muscle for 20 minutes. JPS and FPS were evaluated using the Biodex System 4 Pro apparatus. Repeated measures analysis of variance (ANOVA) did not show any statistically significant changes in the JPS and FPS following the application of cryotherapy for any of the analyzed variables: the JPS’s absolute error (p = 0.976), its relative error (p = 0.295), and its variable error (p = 0.489); the FPS’s absolute error (p = 0.688), its relative error (p = 0.193), and its variable error (p = 0.123). The results indicate that local cooling does not affect proprioceptive acuity of the healthy knee joint. They also suggest that local, limited cooling before low-velocity physical activity did not present a health or injury risk in this particular study group. PMID:29599858
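For reference, the three error measures reported above are standard motor-control statistics computed from repeated reproduction trials; a minimal sketch with made-up numbers (not study data):

```python
import numpy as np

target = 45.0                                                  # target knee angle (degrees), illustrative
reproduced = np.array([44.1, 47.3, 45.8, 42.9, 46.5, 44.7])    # reproduced angles across trials

errors = reproduced - target
ce = errors.mean()            # constant error: signed bias relative to the target
ae = np.abs(errors).mean()    # absolute error: overall accuracy regardless of direction
ve = errors.std(ddof=1)       # variable error: trial-to-trial consistency

print(f"CE = {ce:+.2f} deg, AE = {ae:.2f} deg, VE = {ve:.2f} deg")
```

The same definitions apply to force production sense, with torque replacing joint angle.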
TRAINING ERRORS AND RUNNING RELATED INJURIES: A SYSTEMATIC REVIEW
Buist, Ida; Sørensen, Henrik; Lind, Martin; Rasmussen, Sten
2012-01-01
Purpose: The purpose of this systematic review was to examine the link between training characteristics (volume, duration, frequency, and intensity) and running related injuries. Methods: A systematic search was performed in PubMed, Web of Science, Embase, and SportDiscus. Studies were included if they examined novice, recreational, or elite runners between the ages of 18 and 65. Exposure variables were training characteristics defined as volume, distance or mileage, time or duration, frequency, intensity, speed or pace, or similar terms. The outcome of interest was Running Related Injuries (RRI) in general or specific RRI in the lower extremity or lower back. Methodological quality was evaluated using quality assessment tools of 11 to 16 items. Results: After examining 4561 titles and abstracts, 63 articles were identified as potentially relevant. Finally, nine retrospective cohort studies, 13 prospective cohort studies, six case-control studies, and three randomized controlled trials were included. The mean quality score was 44.1%. Conflicting results were reported on the relationships between volume, duration, intensity, and frequency and RRI. Conclusion: It was not possible to identify which training errors were related to running related injuries. Still, well supported data on which training errors relate to or cause running related injuries is highly important for determining proper prevention strategies. If methodological limitations in measuring training variables can be resolved, more work can be conducted to define training and the interactions between different training variables, create several hypotheses, test the hypotheses in a large scale prospective study, and explore cause and effect relationships in randomized controlled trials. Level of evidence: 2a PMID:22389869
Leach, Julia M; Mancini, Martina; Peterka, Robert J; Hayes, Tamara L; Horak, Fay B
2014-09-29
The Nintendo Wii balance board (WBB) has generated significant interest in its application as a postural control measurement device in both the clinical and (basic, clinical, and rehabilitation) research domains. Although the WBB has been proposed as an alternative to the "gold standard" laboratory-grade force plate, additional research is necessary before the WBB can be considered a valid and reliable center of pressure (CoP) measurement device. In this study, we used the WBB and a laboratory-grade AMTI force plate (AFP) to simultaneously measure the CoP displacement of a controlled dynamic load, which has not been done before. A one-dimensional inverted pendulum was displaced at several different displacement angles and load heights to simulate a variety of postural sway amplitudes and frequencies (<1 Hz). Twelve WBBs were tested to address the issue of inter-device variability. There was a significant effect of sway amplitude, frequency, and direction on the WBB's CoP measurement error, with an increase in error as both sway amplitude and frequency increased and a significantly greater error in the mediolateral (ML) (compared to the anteroposterior (AP)) sway direction. There was no difference in error across the 12 WBB's, supporting low inter-device variability. A linear calibration procedure was then implemented to correct the WBB's CoP signals and reduce measurement error. There was a significant effect of calibration on the WBB's CoP signal accuracy, with a significant reduction in CoP measurement error (quantified by root-mean-squared error) from 2-6 mm (before calibration) to 0.5-2 mm (after calibration). WBB-based CoP signal calibration also significantly reduced the percent error in derived (time-domain) CoP sway measures, from -10.5% (before calibration) to -0.05% (after calibration) (percent errors averaged across all sway measures and in both sway directions). In this study, we characterized the WBB's CoP measurement error under controlled, dynamic conditions and implemented a linear calibration procedure for WBB CoP signals that is recommended to reduce CoP measurement error and provide more reliable estimates of time-domain CoP measures. Despite our promising results, additional work is necessary to understand how our findings translate to the clinical and rehabilitation research domains. Once the WBB's CoP measurement error is fully characterized in human postural sway (which differs from our simulated postural sway in both amplitude and frequency content), it may be used to measure CoP displacement in situations where lower accuracy and precision is acceptable.
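A minimal sketch of the kind of per-axis linear calibration described above, fitted by least squares against the force-plate reference; the gain, offset, and noise values are synthetic, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
cop_ref = np.linspace(-60, 60, 500)                         # reference CoP from the force plate (mm)
cop_wbb = 1.08 * cop_ref + 3.0 + rng.normal(0, 1.5, 500)    # balance board CoP with gain/offset error

# fit cop_ref ~ a * cop_wbb + b and use the fitted line as the calibration mapping
a, b = np.polyfit(cop_wbb, cop_ref, deg=1)
cop_cal = a * cop_wbb + b

rmse_before = np.sqrt(np.mean((cop_wbb - cop_ref) ** 2))
rmse_after = np.sqrt(np.mean((cop_cal - cop_ref) ** 2))
print(f"RMSE before calibration: {rmse_before:.2f} mm; after: {rmse_after:.2f} mm")
```

In practice the calibration would be fitted per device and per axis, consistent with the low inter-device variability reported above.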
Integrator Windup Protection-Techniques and a STOVL Aircraft Engine Controller Application
NASA Technical Reports Server (NTRS)
KrishnaKumar, K.; Narayanaswamy, S.
1997-01-01
Integrators are included in the feedback loop of a control system to eliminate the steady state errors in the commanded variables. The integrator windup problem arises if the control actuators encounter operational limits before the steady state errors are driven to zero by the integrator. The typical effects of windup are large system oscillations, high steady state error, and a delayed system response following the windup. In this study, methods to prevent integrator windup are examined to provide Integrator Windup Protection (IWP) for an engine controller of a Short Take-Off and Vertical Landing (STOVL) aircraft. A unified performance index is defined to optimize the performance of the Conventional Anti-Windup (CAW) and the Modified Anti-Windup (MAW) methods. A modified Genetic Algorithm search procedure with stochastic parameter encoding is implemented to obtain the optimal parameters of the CAW scheme. The advantages and drawbacks of the CAW and MAW techniques are discussed and recommendations are made for the choice of the IWP scheme, given some characteristics of the system.
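One widely used conventional anti-windup scheme is back-calculation, in which the difference between the saturated and unsaturated actuator commands is fed back to bleed the integrator. The sketch below illustrates the idea on a generic first-order plant; it is not the specific CAW or MAW formulation of the study, and all gains are illustrative.

```python
def simulate(kaw, kp=1.0, ki=1.0, u_max=1.0, dt=0.01, steps=6000):
    """PI control of dx/dt = -x + u with actuator saturation.
    kaw is the back-calculation anti-windup gain (kaw = 0 disables it)."""
    x, integ, log = 0.0, 0.0, []
    for k in range(steps):
        r = 2.0 if k * dt < 10.0 else 0.5         # unreachable setpoint, then a step down
        e = r - x
        u_unsat = kp * e + ki * integ
        u = min(max(u_unsat, -u_max), u_max)      # actuator limit
        integ += dt * (e + kaw * (u - u_unsat))   # back-calculation bleeds the integrator
        x += dt * (-x + u)                        # plant update
        log.append(x)
    return log

for kaw in (0.0, 2.0):
    x = simulate(kaw)
    settle = next((i for i, v in enumerate(x[1000:]) if v < 0.6), None)
    msg = "not within the window" if settle is None else f"{settle * 0.01:.1f} s"
    print(f"kaw = {kaw}: recovery time after the setpoint step: {msg}")
```

With kaw = 0 the integrator winds up while the actuator is saturated and the response to the later setpoint change is badly delayed, which is exactly the behavior anti-windup protection is meant to remove.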
Development of an adaptive hp-version finite element method for computational optimal control
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Warner, Michael S.
1994-01-01
In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.
Chang, Wen-Pin; Davies, Patricia L; Gavin, William J
2010-10-01
Recent studies have investigated the relationships between psychological symptoms, personality traits, and error monitoring as measured by the error-related negativity (ERN) and error positivity (Pe) event-related potential (ERP) components, yet there remains a paucity of studies examining the collective simultaneous effects of psychological symptoms and personality traits on error monitoring. The present study therefore examined whether measures of hyperactivity-impulsivity, depression, anxiety and antisocial personality characteristics could collectively account for significant interindividual variability of both ERN and Pe amplitudes, in 29 healthy adults with no known disorders, ages 18-30 years. The bivariate zero-order correlation analyses found that only the anxiety measure was significantly related to both ERN and Pe amplitudes. However, multiple regression analyses that included all four characteristic measures while controlling for the number of segments in the ERP average revealed that both depression and antisocial personality characteristics were significant predictors of the ERN amplitude, whereas antisocial personality was the only significant predictor of the Pe amplitude. These findings suggest that psychological symptoms and personality traits are associated with individual variations in error monitoring in healthy adults, and future studies should consider these variables when comparing group differences in error monitoring between adults with and without disabilities. © 2010 The Authors. European Journal of Neuroscience © 2010 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Mismeasurement and the resonance of strong confounders: uncorrelated errors.
Marshall, J R; Hastrup, J L
1996-05-15
Greenland first documented (Am J Epidemiol 1980; 112:564-9) that error in the measurement of a confounder could resonate--that it could bias estimates of other study variables, and that the bias could persist even with statistical adjustment for the confounder as measured. An important question is raised by this finding: can such bias be more than trivial within the bounds of realistic data configurations? The authors examine several situations involving dichotomous and continuous data in which a confounder and a null variable are measured with error, and they assess the extent of resultant bias in estimates of the effect of the null variable. They show that, with continuous variables, measurement error amounting to 40% of observed variance in the confounder could cause the observed impact of the null study variable to appear to alter risk by as much as 30%. Similarly, they show, with dichotomous independent variables, that 15% measurement error in the form of misclassification could lead the null study variable to appear to alter risk by as much as 50%. Such bias would result only from strong confounding. Measurement error would obscure the evidence that strong confounding is a likely problem. These results support the need for every epidemiologic inquiry to include evaluations of measurement error in each variable considered.
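The continuous-variable case can be reproduced directly. In the sketch below (illustrative numbers, not the authors' configurations), a null study variable is correlated with a strong confounder; adjusting for an error-laden version of the confounder, with roughly 40% of its observed variance due to error, leaves a clearly nonzero apparent effect of the null variable.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 50000
confounder = rng.normal(size=n)
null_var = 0.7 * confounder + rng.normal(scale=np.sqrt(1 - 0.49), size=n)  # no true effect on outcome
outcome = 1.0 * confounder + rng.normal(size=n)
observed_conf = confounder + rng.normal(scale=0.8, size=n)  # ~39% of observed variance is error

for label, c in [("true confounder", confounder), ("mismeasured confounder", observed_conf)]:
    design = sm.add_constant(np.column_stack([null_var, c]))
    fit = sm.OLS(outcome, design).fit()
    print(f"adjusting for {label}: apparent effect of the null variable = {fit.params[1]:+.3f}")
```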
Goodworth, Adam D; Paquette, Caroline; Jones, Geoffrey Melvill; Block, Edward W; Fletcher, William A; Hu, Bin; Horak, Fay B
2012-05-01
Linear and angular control of trunk and leg motion during curvilinear navigation was investigated in subjects with cerebellar ataxia and age-matched control subjects. Subjects walked with eyes open around a 1.2-m circle. The relationship of linear to angular motion was quantified by determining the ratios of trunk linear velocity to trunk angular velocity and foot linear position to foot angular position. Errors in walking radius (the ratio of linear to angular motion) also were quantified continuously during the circular walk. Relative variability of linear and angular measures was compared using coefficients of variation (CoV). Patterns of variability were compared using power spectral analysis for the trunk and auto-covariance analysis for the feet. Errors in radius were significantly increased in patients with cerebellar damage as compared to controls. Cerebellar subjects had significantly larger CoV of feet and trunk in angular, but not linear, motion. Control subjects also showed larger CoV in angular compared to linear motion of the feet and trunk. Angular and linear components of stepping differed in that angular, but not linear, foot placement had a negative correlation from one stride to the next. Thus, walking in a circle was associated with more, and a different type of, variability in angular compared to linear motion. Results are consistent with increased difficulty of, and role of the cerebellum in, control of angular trunk and foot motion for curvilinear locomotion.
Learning from adaptive neural dynamic surface control of strict-feedback systems.
Wang, Min; Wang, Cong
2015-06-01
Learning plays an essential role in autonomous control systems. However, how to achieve learning in a nonstationary environment for nonlinear systems is a challenging problem. In this paper, we present a learning method for a class of nth-order strict-feedback systems based on adaptive dynamic surface control (DSC) technology, which achieves the human-like ability of learning by doing and doing with learned knowledge. To achieve the learning, this paper first proposes a stable adaptive DSC with auxiliary first-order filters, which ensures the boundedness of all the signals in the closed-loop system and the convergence of tracking errors in a finite time. With the help of DSC, the derivative of the filter output variable is used as the neural network (NN) input instead of traditional intermediate variables. As a result, the proposed adaptive DSC method greatly reduces the dimension of the NN inputs, especially for high-order systems. After the stable DSC design, we decompose the stable closed-loop system into a series of linear time-varying perturbed subsystems. Using a recursive design, the recurrent property of the NN input variables is easily verified since the complexity is overcome using DSC. Subsequently, the partial persistent excitation condition of the radial basis function NN is satisfied. By combining a state transformation, accurate approximations of the closed-loop system dynamics are recursively achieved in a local region along recurrent orbits. Then, a learning control method using the learned knowledge is proposed to achieve closed-loop stability and improved control performance. Simulation studies are performed to demonstrate that the proposed scheme can not only reuse the learned knowledge to achieve better control performance, with a faster tracking convergence rate and a smaller tracking error, but also greatly alleviate the computational burden by reducing the number and complexity of the NN input variables.
Sources of variability and systematic error in mouse timing behavior.
Gallistel, C R; King, Adam; McDonald, Robert
2004-01-01
In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.
Dopamine Modulates Adaptive Prediction Error Coding in the Human Midbrain and Striatum.
Diederen, Kelly M J; Ziauddeen, Hisham; Vestergaard, Martin D; Spencer, Tom; Schultz, Wolfram; Fletcher, Paul C
2017-02-15
Learning to optimally predict rewards requires agents to account for fluctuations in reward value. Recent work suggests that individuals can efficiently learn about variable rewards through adaptation of the learning rate, and coding of prediction errors relative to reward variability. Such adaptive coding has been linked to midbrain dopamine neurons in nonhuman primates, and evidence in support for a similar role of the dopaminergic system in humans is emerging from fMRI data. Here, we sought to investigate the effect of dopaminergic perturbations on adaptive prediction error coding in humans, using a between-subject, placebo-controlled pharmacological fMRI study with a dopaminergic agonist (bromocriptine) and antagonist (sulpiride). Participants performed a previously validated task in which they predicted the magnitude of upcoming rewards drawn from distributions with varying SDs. After each prediction, participants received a reward, yielding trial-by-trial prediction errors. Under placebo, we replicated previous observations of adaptive coding in the midbrain and ventral striatum. Treatment with sulpiride attenuated adaptive coding in both midbrain and ventral striatum, and was associated with a decrease in performance, whereas bromocriptine did not have a significant impact. Although we observed no differential effect of SD on performance between the groups, computational modeling suggested decreased behavioral adaptation in the sulpiride group. These results suggest that normal dopaminergic function is critical for adaptive prediction error coding, a key property of the brain thought to facilitate efficient learning in variable environments. Crucially, these results also offer potential insights for understanding the impact of disrupted dopamine function in mental illness. SIGNIFICANCE STATEMENT To choose optimally, we have to learn what to expect. Humans dampen learning when there is a great deal of variability in reward outcome, and two brain regions that are modulated by the brain chemical dopamine are sensitive to reward variability. Here, we aimed to directly relate dopamine to learning about variable rewards, and the neural encoding of associated teaching signals. We perturbed dopamine in healthy individuals using dopaminergic medication and asked them to predict variable rewards while we made brain scans. Dopamine perturbations impaired learning and the neural encoding of reward variability, thus establishing a direct link between dopamine and adaptation to reward variability. These results aid our understanding of clinical conditions associated with dopaminergic dysfunction, such as psychosis. Copyright © 2017 Diederen et al.
NASA Astrophysics Data System (ADS)
Vuorinen, I.; Hänninen, J.; Kornilovs, G.
2003-12-01
Time series of freshwater runoff, seawater salinity, temperature and oxygen were used in transfer functions (TF) to model changes in mesozooplankton taxa in the Baltic Sea from the 1960s to the 1990s. The models were then compared with long-term zooplankton monitoring data from the same period. The TF models for all taxa over the whole Baltic proper and at different depth layers showed statistically significant estimates in t-tests. TF models were further compared using parsimony as a criterion. We present models showing 1) r2 > 0.4, 2) the smallest residual standard error with the combination of explanatory variables, 3) the lowest number of parameters and 4) the highest proportional decrease in the error term when the TF model residual standard error was compared with that of the univariate ARIMA model of the same response variable. Most often (7 taxa out of a total of 8), zooplankton taxa were dependent on freshwater runoff and/or seawater salinity. Cladocerans and estuarine copepods were more conveniently modelled through the inclusion of seawater temperature and oxygen data as independent variables. Our modelling, however, explains neither the overall increase in zooplankton abundance nor a simultaneous decrease found in the neritic copepod, Temora longicornis. Therefore, biotic controlling agents (e.g. nutrients, primary production and planktivore diets) are suggested as independent variables for further TF modelling. TF modelling enabled us to put the controlling factors in a time frame. It was then possible, despite the inherent multiple correlation among the parameters studied, to deduce a chain of events from the environmental controls and biotic feedback mechanisms to changes in zooplankton species. We suggest that the documented long-term changes in zooplankton could have been driven by climatic regulation only. The control by climate could be mediated to zooplankton through marine chemical and physical factors, as well as biotic factors, if all of these were responding to the same external control, such as changes in the freshwater runoff. Increased runoff would explain both the increasing eutrophication, causing the overall increase of zooplankton, and the changes in selective predation, contributing to the decline of Temora.
Variable structure control of nonlinear systems through simplified uncertain models
NASA Technical Reports Server (NTRS)
Sira-Ramirez, Hebertt
1986-01-01
A variable structure control approach is presented for the robust stabilization of feedback equivalent nonlinear systems whose proposed model lies in the same structural orbit of a linear system in Brunovsky's canonical form. An attempt to linearize exactly the nonlinear plant on the basis of the feedback control law derived for the available model results in a nonlinearly perturbed canonical system for the expanded class of possible equivalent control functions. Conservatism tends to grow as modeling errors become larger. In order to preserve the internal controllability structure of the plant, it is proposed that model simplification be carried out on the open-loop-transformed system. As an example, a controller is developed for a single link manipulator with an elastic joint.
Spatio-temporal error growth in the multi-scale Lorenz'96 model
NASA Astrophysics Data System (ADS)
Herrera, S.; Fernández, J.; Rodríguez, M. A.; Gutiérrez, J. M.
2010-07-01
The influence of multiple spatio-temporal scales on the error growth and predictability of atmospheric flows is analyzed throughout the paper. To this aim, we consider the two-scale Lorenz'96 model and study the interplay of the slow and fast variables on the error growth dynamics. It is shown that when the coupling between slow and fast variables is weak the slow variables dominate the evolution of fluctuations whereas in the case of strong coupling the fast variables impose a non-trivial complex error growth pattern on the slow variables with two different regimes, before and after saturation of fast variables. This complex behavior is analyzed using the recently introduced Mean-Variance Logarithmic (MVL) diagram.
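A sketch of the standard two-scale Lorenz'96 equations and a twin-run error-growth experiment is given below. The parameter choices (K = 8 slow variables, J = 32 fast variables per slow variable, F = 10, h = 1, c = b = 10) are common textbook values and are not necessarily those used in the paper.

```python
import numpy as np

K, J = 8, 32
F, h, c, b = 10.0, 1.0, 10.0, 10.0

def tendencies(X, Y):
    # slow variables, damped and forced, coupled to the sum of their fast variables
    dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2)) - X + F
          - (h * c / b) * Y.reshape(K, J).sum(axis=1))
    # fast variables, coupled back to their parent slow variable
    dY = (c * b * np.roll(Y, -1) * (np.roll(Y, 1) - np.roll(Y, -2)) - c * Y
          + (h * c / b) * np.repeat(X, J))
    return dX, dY

def rk4_step(X, Y, dt):
    k1x, k1y = tendencies(X, Y)
    k2x, k2y = tendencies(X + 0.5 * dt * k1x, Y + 0.5 * dt * k1y)
    k3x, k3y = tendencies(X + 0.5 * dt * k2x, Y + 0.5 * dt * k2y)
    k4x, k4y = tendencies(X + dt * k3x, Y + dt * k3y)
    return (X + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
            Y + dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6)

rng = np.random.default_rng(4)
X, Y, dt = F * rng.random(K), 0.1 * rng.random(K * J), 0.001
for _ in range(5000):                       # spin-up onto the attractor
    X, Y = rk4_step(X, Y, dt)

Xp, Yp = X + 1e-6 * rng.standard_normal(K), Y.copy()   # perturb the slow variables
for step in range(1, 4001):
    X, Y = rk4_step(X, Y, dt)
    Xp, Yp = rk4_step(Xp, Yp, dt)
    if step % 800 == 0:
        err = np.sqrt(np.mean((X - Xp) ** 2))
        print(f"t = {step * dt:4.1f}  RMS error in the slow variables = {err:.3e}")
```

Repeating the experiment with a weaker coupling constant h shows the slow variables dominating the fluctuation growth, as described above.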
Error-related brain activity predicts cocaine use after treatment at 3-month follow-up.
Marhe, Reshmi; van de Wetering, Ben J M; Franken, Ingmar H A
2013-04-15
Relapse after treatment is one of the most important problems in drug dependency. Several studies suggest that lack of cognitive control is one of the causes of relapse. In this study, a relatively new electrophysiologic index of cognitive control, the error-related negativity, is investigated to examine its suitability as a predictor of relapse. The error-related negativity was measured in 57 cocaine-dependent patients during their first week in detoxification treatment. Data from 49 participants were used to predict cocaine use at 3-month follow-up. Cocaine use at follow-up was measured by means of self-reported days of cocaine use in the last month verified by urine screening. A multiple hierarchical regression model was used to examine the predictive value of the error-related negativity while controlling for addiction severity and self-reported craving in the week before treatment. The error-related negativity was the only significant predictor in the model and added 7.4% of explained variance to the control variables, resulting in a total of 33.4% explained variance in the prediction of days of cocaine use at follow-up. A reduced error-related negativity measured during the first week of treatment was associated with more days of cocaine use at 3-month follow-up. Moreover, the error-related negativity was a stronger predictor of recent cocaine use than addiction severity and craving. These results suggest that underactive error-related brain activity might help to identify patients who are at risk of relapse as early as in the first week of detoxification treatment. Copyright © 2013 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
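The hierarchical-regression logic described above can be summarized in a few lines: enter the control variables first, add the error-related negativity in a second step, and examine the increment in explained variance. All variable names and data below are illustrative placeholders, not the study data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 49
severity = rng.normal(size=n)                 # addiction severity (illustrative)
craving = rng.normal(size=n)                  # self-reported craving (illustrative)
ern = rng.normal(size=n)                      # error-related negativity amplitude (illustrative)
days_use = 2 + 0.3 * severity + 0.2 * craving - 0.6 * ern + rng.normal(size=n)

step1 = sm.OLS(days_use, sm.add_constant(np.column_stack([severity, craving]))).fit()
step2 = sm.OLS(days_use, sm.add_constant(np.column_stack([severity, craving, ern]))).fit()
print(f"R2, controls only:         {step1.rsquared:.3f}")
print(f"R2, controls + ERN:        {step2.rsquared:.3f}")
print(f"Delta R2 added by the ERN: {step2.rsquared - step1.rsquared:.3f}")
```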
Speed-constrained three-axes attitude control using kinematic steering
NASA Astrophysics Data System (ADS)
Schaub, Hanspeter; Piggott, Scott
2018-06-01
Spacecraft attitude control solutions typically are torque-level algorithms that simultaneously control both the attitude and angular velocity tracking errors. In contrast, robotic control solutions are kinematic steering commands where rates are treated as the control variable, and a servo-tracking control subsystem is present to achieve the desired control rates. In this paper kinematic attitude steering controls are developed where an outer control loop establishes a desired angular response history to a tracking error, and an inner control loop tracks the commanded body angular rates. The overall stability relies on the separation principle of the inner and outer control loops which must have sufficiently different response time scales. The benefit is that the outer steering law response can be readily shaped to a desired behavior, such as limiting the approach angular velocity when a large tracking error is corrected. A Modified Rodrigues Parameters implementation is presented that smoothly saturates the speed response. A robust nonlinear body rate servo loop is developed which includes integral feedback. This approach provides a convenient modular framework that makes it simple to interchange outer and inner control loops to readily setup new control implementations. Numerical simulations illustrate the expected performance for an aggressive reorientation maneuver subject to an unknown external torque.
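The outer loop of such a scheme maps the attitude tracking error into a commanded body rate that saturates smoothly at a prescribed maximum. The sketch below uses an arctangent shaping of a cubic error feedback on the Modified Rodrigues Parameter (MRP) error; the specific gains and rate limit are illustrative assumptions, and the inner rate-servo loop is omitted.

```python
import numpy as np

def mrp_steering_rate(sigma_err, K1=0.5, K3=0.1, omega_max=np.radians(2.0)):
    """Map an MRP attitude tracking error to a commanded body rate (rad/s)
    that is nearly linear for small errors and saturates smoothly at omega_max."""
    sigma_err = np.asarray(sigma_err, dtype=float)
    shaped = K1 * sigma_err + K3 * sigma_err ** 3
    return -(2.0 * omega_max / np.pi) * np.arctan(np.pi * shaped / (2.0 * omega_max))

# small errors give an almost linear response; large errors approach the rate limit
for s in (0.01, 0.1, 0.5, 1.0):
    w = mrp_steering_rate(np.array([s, 0.0, 0.0]))
    print(f"|sigma_err| = {s:4.2f} -> commanded rate = {np.degrees(w[0]):+6.3f} deg/s")
```

A separate rate-tracking servo, with integral feedback as described above, would then be responsible for realizing these commanded rates.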
Benchmarking observational uncertainties for hydrology (Invited)
NASA Astrophysics Data System (ADS)
McMillan, H. K.; Krueger, T.; Freer, J. E.; Westerberg, I.
2013-12-01
There is a pressing need for authoritative and concise information on the expected error distributions and magnitudes in hydrological data, to understand its information content. Many studies have discussed how to incorporate uncertainty information into model calibration and implementation, and shown how model results can be biased if uncertainty is not appropriately characterised. However, it is not always possible (for example due to financial or time constraints) to make detailed studies of uncertainty for every research study. Instead, we propose that the hydrological community could benefit greatly from sharing information on likely uncertainty characteristics and the main factors that control the resulting magnitude. In this presentation, we review the current knowledge of uncertainty for a number of key hydrological variables: rainfall, flow and water quality (suspended solids, nitrogen, phosphorus). We collated information on the specifics of the data measurement (data type, temporal and spatial resolution), error characteristics measured (e.g. standard error, confidence bounds) and error magnitude. Our results were primarily split by data type. Rainfall uncertainty was controlled most strongly by spatial scale, flow uncertainty was controlled by flow state (low, high) and gauging method. Water quality presented a more complex picture with many component errors. For all variables, it was easy to find examples where relative error magnitude exceeded 40%. We discuss some of the recent developments in hydrology which increase the need for guidance on typical error magnitudes, in particular when doing comparative/regionalisation and multi-objective analysis. Increased sharing of data, comparisons between multiple catchments, and storage in national/international databases can mean that data-users are far removed from data collection, but require good uncertainty information to reduce bias in comparisons or catchment regionalisation studies. Recently it has become more common for hydrologists to use multiple data types and sources within a single study. This may be driven by complex water management questions which integrate water quantity, quality and ecology; or by recognition of the value of auxiliary data to understand hydrological processes. We discuss briefly the impact of data uncertainty on the increasingly popular use of diagnostic signatures for hydrological process understanding and model development.
Fisher, Moria E; Huang, Felix C; Wright, Zachary A; Patton, James L
2014-01-01
Manipulation of error feedback has been of great interest to recent studies in motor control and rehabilitation. Typically, motor adaptation is shown as a change in performance with a single scalar metric for each trial, yet such an approach might overlook details about how error evolves through the movement. We believe that statistical distributions of movement error through the extent of the trajectory can reveal unique patterns of adaptation and possibly offer clues to how the motor system processes information about error. This paper describes different possible ordinate domains, focusing on representations in time and state-space, used to quantify reaching errors. We hypothesized that the domain with the lowest amount of variability would lead to a predictive model of reaching error with the highest accuracy. Here we showed that errors represented in the time domain demonstrate the least variance and allow for the most accurate predictive model of reaching errors. These predictive models will give rise to more specialized methods of robotic feedback and improve previous techniques of error augmentation.
Model and experiments to optimize co-adaptation in a simplified myoelectric control system.
Couraud, M; Cattaert, D; Paclet, F; Oudeyer, P Y; de Rugy, A
2018-04-01
To compensate for a limb lost in an amputation, myoelectric prostheses use surface electromyography (EMG) from the remaining muscles to control the prosthesis. Despite considerable progress, myoelectric controls remain markedly different from the way we normally control movements, and require intense user adaptation. To overcome this, our goal is to explore concurrent machine co-adaptation techniques that are developed in the field of brain-machine interfaces, and that are beginning to be used in myoelectric controls. We combined a simplified myoelectric control with a perturbation for which human adaptation is well characterized and modeled, in order to explore co-adaptation settings in a principled manner. First, we reproduced results obtained in a classical visuomotor rotation paradigm in our simplified myoelectric context, where we rotate the muscle pulling vectors used to reconstruct wrist force from EMG. Then, a model of human adaptation in response to directional error was used to simulate various co-adaptation settings, where perturbations and machine co-adaptation are both applied to the muscle pulling vectors. These simulations established that a relatively low gain of machine co-adaptation that minimizes final errors generates slow and incomplete adaptation, while higher gains increase adaptation rate but also errors by amplifying noise. After experimental verification on real subjects, we tested a variable gain that combines the advantages of both, and implemented it with directionally tuned neurons similar to those used to model human adaptation. This enables machine co-adaptation to locally improve myoelectric control, and to absorb more challenging perturbations. The simplified context used here enabled us to explore co-adaptation settings in both simulations and experiments, and to raise important considerations such as the need for a variable gain encoded locally. The benefits and limits of extending this approach to more complex and functional myoelectric contexts are discussed.
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
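The simple linear case can be illustrated in a few lines: regressing runoff on rainfall that is observed with error attenuates the fitted slope and inflates prediction error. The numbers below are illustrative, not the Turtle Creek data.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
true_rain = rng.gamma(shape=2.0, scale=10.0, size=n)       # storm rainfall (mm), error-free
runoff = 0.5 * true_rain + rng.normal(scale=2.0, size=n)   # simple linear rainfall-runoff relation
obs_rain = true_rain + rng.normal(scale=6.0, size=n)       # rainfall measured with error

slope_true = np.polyfit(true_rain, runoff, 1)[0]
slope_obs = np.polyfit(obs_rain, runoff, 1)[0]
print(f"fitted slope using error-free rainfall: {slope_true:.3f}")
print(f"fitted slope using erroneous rainfall:  {slope_obs:.3f}  (biased toward zero)")
```

Calibrating a model by least squares against runoff predicted from such erroneous inputs inherits this bias in the fitted parameters, which is the effect examined above.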
Sustained attention performance during sleep deprivation: evidence of state instability
NASA Technical Reports Server (NTRS)
Doran, S. M.; Van Dongen, H. P.; Dinges, D. F.
2001-01-01
Nathaniel Kleitman was the first to observe that sleep deprivation in humans did not eliminate the ability to perform neurobehavioral functions, but it did make it difficult to maintain stable performance for more than a few minutes. To investigate variability in performance as a function of sleep deprivation, n = 13 subjects were tested every 2 hours on a 10-minute, sustained-attention, psychomotor vigilance task (PVT) throughout 88 hours of total sleep deprivation (TSD condition), and compared to a control group of n = 15 subjects who were permitted a 2-hour nap every 12 hours (NAP condition) throughout the 88-hour period. PVT reaction time means and standard deviations increased markedly among subjects and within each individual subject in the TSD condition relative to the NAP condition. TSD subjects also had increasingly greater performance variability as a function of time on task after 18 hours of wakefulness. During sleep deprivation, variability in PVT performance reflected a combination of normal timely responses, errors of omission (i.e., lapses), and errors of commission (i.e., responding when no stimulus was present). Errors of omission and errors of commission were highly intercorrelated across deprivation in the TSD condition (r = 0.85, p = 0.0001), suggesting that performance instability is more likely to include compensatory effort than a lack of motivation. The marked increases in PVT performance variability as sleep loss continued supports the "state instability" hypothesis, which posits that performance during sleep deprivation is increasingly variable due to the influence of sleep initiating mechanisms on the endogenous capacity to maintain attention and alertness, thereby creating an unstable state that fluctuates within seconds and that cannot be characterized as either fully awake or asleep.
Gunter, Lacey; Zhu, Ji; Murphy, Susan
2012-01-01
For many years, subset analysis has been a popular topic for the biostatistics and clinical trials literature. In more recent years, the discussion has focused on finding subsets of genomes which play a role in the effect of treatment, often referred to as stratified or personalized medicine. Though highly sought after, methods for detecting subsets with altering treatment effects are limited and lacking in power. In this article we discuss variable selection for qualitative interactions with the aim to discover these critical patient subsets. We propose a new technique designed specifically to find these interaction variables among a large set of variables while still controlling for the number of false discoveries. We compare this new method against standard qualitative interaction tests using simulations and give an example of its use on data from a randomized controlled trial for the treatment of depression. PMID:22023676
Patterned wafer geometry grouping for improved overlay control
NASA Astrophysics Data System (ADS)
Lee, Honggoo; Han, Sangjun; Woo, Jaeson; Park, Junbeom; Song, Changrock; Anis, Fatima; Vukkadala, Pradeep; Jeon, Sanghuck; Choi, DongSub; Huang, Kevin; Heo, Hoyoung; Smith, Mark D.; Robinson, John C.
2017-03-01
Process-induced overlay errors from outside the litho cell have become a significant contributor to the overlay error budget including non-uniform wafer stress. Previous studies have shown the correlation between process-induced stress and overlay and the opportunity for improvement in process control, including the use of patterned wafer geometry (PWG) metrology to reduce stress-induced overlay signatures. Key challenges of volume semiconductor manufacturing are how to improve not only the magnitude of these signatures, but also the wafer to wafer variability. This work involves a novel technique of using PWG metrology to provide improved litho-control by wafer-level grouping based on incoming process induced overlay, relevant for both 3D NAND and DRAM. Examples shown in this study are from 19 nm DRAM manufacturing.
ERIC Educational Resources Information Center
Spinelli, Simona; Vasa, Roma A.; Joel, Suresh; Nelson, Tess E.; Pekar, James J.; Mostofsky, Stewart H.
2011-01-01
Background: Error processing is reflected, behaviorally, by slower reaction times (RT) on trials immediately following an error (post-error). Children with attention-deficit hyperactivity disorder (ADHD) fail to show RT slowing and demonstrate increased intra-subject variability (ISV) on post-error trials. The neural correlates of these behavioral…
Local position control: A new concept for control of manipulators
NASA Technical Reports Server (NTRS)
Kelly, Frederick A.
1988-01-01
Resolved motion rate control is currently one of the most frequently used methods of manipulator control. It is currently used in the Space Shuttle remote manipulator system (RMS) and in prosthetic devices. Position control is predominately used in locating the end-effector of an industrial manipulator along a path with prescribed timing. In industrial applications, resolved motion rate control is inappropriate since position error accumulates. This is due to velocity being the control variable. In some applications this property is an advantage rather than a disadvantage. It may be more important for motion to end as soon as the input command is removed rather than reduce the position error to zero. Local position control is a new concept for manipulator control which retains the important properties of resolved motion rate control, but reduces the drift. Local position control can be considered to be a generalization of resolved position and resolved rate control. It places both control schemes on a common mathematical basis.
2013-01-01
Background: Vibration is known to alter proprioceptive afferents and create a tonic vibration reflex. The control of force and its variability are often considered determinants of motor performance and neuromuscular control. However, the effect of vibration on paraspinal muscle control and force production remains to be determined. Methods: Twenty-one healthy adults were asked to perform isometric trunk flexion and extension torque at 60% of their maximal voluntary isometric contraction, under three different vibration conditions: no vibration, and vibration frequencies of 30 Hz and 80 Hz. Eighteen isometric contractions were performed under each condition without any feedback. Mechanical vibrations were applied bilaterally over the lumbar erector spinae muscles while participants were in a neutral standing position. Time to peak torque (TPT), variable error (VE), constant error (CE), and absolute error (AE) in peak torque were calculated and compared between conditions. Results: The main finding suggests that erector spinae muscle vibration significantly decreases accuracy in a trunk extension isometric force reproduction task. There was no difference between the two vibration frequencies with regard to force production parameters. Antagonist muscles do not seem to be directly affected by vibration stimulation when performing a trunk isometric task. Conclusions: The results suggest that acute erector spinae muscle vibration interferes with the torque generation sequence of the trunk by distorting proprioceptive information in healthy participants. PMID:23919578
NASA Technical Reports Server (NTRS)
Miller, N. J.; Chuss, D. T.; Marriage, T. A.; Wollack, E. J.; Appel, J. W.; Bennett, C. L.; Eimer, J.; Essinger-Hileman, T.; Fixsen, D. J.; Harrington, K.;
2016-01-01
Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guides experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls as described, front-end VPM modulation can be very powerful for mitigating 1/ f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limit the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r= 0.01. Indeed, r less than 0.01 is achievable with commensurately improved characterizations and controls.
Modular Battery Charge Controller
NASA Technical Reports Server (NTRS)
Button, Robert; Gonzalez, Marcelo
2009-01-01
A new approach to masterless, distributed, digital-charge control for batteries requiring charge control has been developed and implemented. This approach is required in battery chemistries that need cell-level charge control for safety and is characterized by the use of one controller per cell, resulting in redundant sensors for critical components, such as voltage, temperature, and current. The charge controllers in a given battery interact in a masterless fashion for the purpose of cell balancing, charge control, and state-of-charge estimation. This makes the battery system invariably fault-tolerant. The solution to the single-fault failure, due to the use of a single charge controller (CC), was solved by implementing one CC per cell and linking them via an isolated communication bus [e.g., controller area network (CAN)] in a masterless fashion so that the failure of one or more CCs will not impact the remaining functional CCs. Each micro-controller-based CC digitizes the cell voltage (V(sub cell)), two cell temperatures, and the voltage across the switch (V); the latter variable is used in conjunction with V(sub cell) to estimate the bypass current for a given bypass resistor. Furthermore, CC1 digitizes the battery current (I1) and battery voltage (V(sub batt) and CC5 digitizes a second battery current (I2). As a result, redundant readings are taken for temperature, battery current, and battery voltage through the summation of the individual cell voltages given that each CC knows the voltage of the other cells. For the purpose of cell balancing, each CC periodically and independently transmits its cell voltage and stores the received cell voltage of the other cells in an array. The position in the array depends on the identifier (ID) of the transmitting CC. After eight cell voltage receptions, the array is checked to see if one or more cells did not transmit. If one or more transmissions are missing, the missing cell(s) is (are) eliminated from cell-balancing calculations. The cell-balancing algorithm is based on the error between the cell s voltage and the other cells and is categorized into four zones of operation. The algorithm is executed every second and, if cell balancing is activated, the error variable is set to a negative low value. The largest error between the cell and the other cells is found and the zone of operation determined. If the error is zero or negative, then the cell is at the lowest voltage and no balancing action is needed. If the error is less than a predetermined negative value, a Cell Bad Flag is set. If the error is positive, then cell balancing is needed, but a hysteretic zone is added to prevent the bypass circuit from triggering repeatedly near zero error. This approach keeps the cells within a predetermined voltage range.
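The zone logic described above can be sketched as follows for a single controller acting on its own cell. The thresholds and voltages are placeholders, and the bypass-current estimation, CAN messaging, and missing-transmission handling are omitted.

```python
# Hypothetical per-cell balancing decision, run once per second after cell
# voltages have been exchanged over the bus; all thresholds are illustrative.
BAD_CELL_LIMIT = -0.20   # V: own cell this far below the others -> set the Cell Bad Flag
BYPASS_ON = 0.05         # V: start bypassing when this far above the lowest cell
BYPASS_OFF = 0.02        # V: stop bypassing below this (hysteresis band)

def balance_step(own_v, other_vs, bypass_active):
    """Return (bypass_active, cell_bad) for one balancing update."""
    error = own_v - min(other_vs)        # largest error between this cell and the others
    cell_bad = error < BAD_CELL_LIMIT
    if error <= 0.0:
        bypass_active = False            # this is (one of) the lowest cells: no action
    elif bypass_active:
        bypass_active = error > BYPASS_OFF   # keep bypassing until well inside the band
    else:
        bypass_active = error > BYPASS_ON    # only start bypassing above the upper threshold
    return bypass_active, cell_bad

state = False
for own in (3.70, 3.66, 3.63, 3.62):     # this cell slowly discharging through its bypass
    state, bad = balance_step(own, [3.61, 3.62, 3.60], state)
    print(f"V_cell = {own:.2f} V  bypass = {state}  cell_bad = {bad}")
```

The hysteresis band plays the role of the hysteretic zone mentioned above, preventing the bypass circuit from chattering when the error hovers near zero.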
Second order sliding mode control for a quadrotor UAV.
Zheng, En-Hui; Xiong, Jing-Jing; Luo, Ji-Liang
2014-07-01
A method based on second order sliding mode control (2-SMC) is proposed to design controllers for a small quadrotor UAV. For the switching sliding manifold design, the selection of the coefficients of the switching sliding manifold is in general a sophisticated issue because the coefficients are nonlinear. In this work, in order to perform the position and attitude tracking control of the quadrotor perfectly, the dynamical model of the quadrotor is divided into two subsystems, i.e., a fully actuated subsystem and an underactuated subsystem. For the former, a sliding manifold is defined by combining the position and velocity tracking errors of one state variable, i.e., the sliding manifold has two coefficients. For the latter, a sliding manifold is constructed via a linear combination of position and velocity tracking errors of two state variables, i.e., the sliding manifold has four coefficients. In order to further obtain the nonlinear coefficients of the sliding manifold, Hurwitz stability analysis is used in the solving process. In addition, the flight controllers are derived by using Lyapunov theory, which guarantees that all system state trajectories reach and stay on the sliding surfaces. Extensive simulation results are given to illustrate the effectiveness of the proposed control method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Integrating models that depend on variable data
NASA Astrophysics Data System (ADS)
Banks, A. T.; Hill, M. C.
2016-12-01
Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log-transformations can be a black box for typical users. Placing the log-transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MATLAB. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log-transformation. Greater consistency is obtained by imposing smaller (by up to a factor of 1/35) weights on the smaller dependent-variable values. From an error-based perspective, the small weights are consistent with large standard deviations. This work considers the consequences of these two common ways of addressing variable data.
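A minimal sketch of the error-based weighting side of this comparison, using weighted least squares with weights derived from an assumed constant coefficient of variation; the data, the 20% CV, and the coefficients are illustrative, not the stream-nitrogen example. The log-transformation alternative would instead regress the log of the dependent variable when a multiplicative model is appropriate.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300
x1 = rng.uniform(0.01, 10.0, n)
x2 = rng.uniform(0.01, 10.0, n)
y_true = 0.05 + 0.2 * x1 + 3.0 * x2          # response spans a wide range of values
cv = 0.2                                      # assumed constant coefficient of variation
y = y_true * (1.0 + cv * rng.standard_normal(n))

X = sm.add_constant(np.column_stack([x1, x2]))
ols = sm.OLS(y, X).fit()                                  # ignores the heteroscedastic errors
wls = sm.WLS(y, X, weights=1.0 / (cv * y) ** 2).fit()     # error-based (CV-derived) weighting

print("coef       true      OLS (unweighted)        WLS (CV-weighted)")
for name, truth, b_o, se_o, b_w, se_w in zip(
        ["const", "x1", "x2"], [0.05, 0.2, 3.0],
        ols.params, ols.bse, wls.params, wls.bse):
    print(f"{name:6s} {truth:8.3f}   {b_o:8.3f} +/- {se_o:.3f}   {b_w:8.3f} +/- {se_w:.3f}")
```

The weighted fit estimates the small intercept far more precisely, while the unweighted fit is dominated by the largest response values, which is the trade-off discussed above.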
Modeling, Analysis, and Control of Demand Response Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mathieu, Johanna L.
2012-05-01
While the traditional goal of an electric power system has been to control supply to fulfill demand, the demand-side can play an active role in power systems via Demand Response (DR), defined by the Department of Energy (DOE) as “a tariff or program established to motivate changes in electric use by end-use customers in response to changes in the price of electricity over time, or to give incentive payments designed to induce lower electricity use at times of high market prices or when grid reliability is jeopardized” [29]. DR can provide a variety of benefits including reducing peak electric loads when the power system is stressed and fast timescale energy balancing. Therefore, DR can improve grid reliability and reduce wholesale energy prices and their volatility. This dissertation focuses on analyzing both recent and emerging DR paradigms. Recent DR programs have focused on peak load reduction in commercial buildings and industrial facilities (C&I facilities). We present methods for using 15-minute-interval electric load data, commonly available from C&I facilities, to help building managers understand building energy consumption and ‘ask the right questions’ to discover opportunities for DR. Additionally, we present a regression-based model of whole building electric load, i.e., a baseline model, which allows us to quantify DR performance. We use this baseline model to understand the performance of 38 C&I facilities participating in an automated dynamic pricing DR program in California. In this program, facilities are expected to exhibit the same response each DR event. We find that baseline model error makes it difficult to precisely quantify changes in electricity consumption and understand if C&I facilities exhibit event-to-event variability in their response to DR signals. Therefore, we present a method to compute baseline model error and a metric to determine how much observed DR variability results from baseline model error rather than real variability in response. We find that, in general, baseline model error is large. Though some facilities exhibit real DR variability, most observed variability results from baseline model error. In some cases, however, aggregations of C&I facilities exhibit real DR variability, which could create challenges for power system operation. These results have implications for DR program design and deployment. Emerging DR paradigms focus on faster timescale DR. Here, we investigate methods to coordinate aggregations of residential thermostatically controlled loads (TCLs), including air conditioners and refrigerators, to manage frequency and energy imbalances in power systems. We focus on opportunities to centrally control loads with high accuracy but low requirements for sensing and communications infrastructure. Specifically, we compare cases when measured load state information (e.g., power consumption and temperature) is 1) available in real time; 2) available, but not in real time; and 3) not available. We develop Markov Chain models to describe the temperature state evolution of heterogeneous populations of TCLs, and use Kalman filtering for both state and joint parameter/state estimation. We present a look-ahead proportional controller to broadcast control signals to all TCLs, which always remain in their temperature dead-band. Simulations indicate that it is possible to achieve power tracking RMS errors in the range of 0.26–9.3% of steady state aggregated power consumption.
Results depend upon the information available for system identification, state estimation, and control. We find that, depending upon the performance required, TCLs may not need to provide state information to the central controller in real time or at all. We also estimate the size of the TCL potential resource; potential revenue from participation in markets; and break-even costs associated with deploying DR-enabling technologies. We find that current TCL energy storage capacity in California is 8–11 GWh, with refrigerators contributing the most. Annual revenues from participation in regulation vary from $10 to $220 per TCL per year depending upon the type of TCL and climate zone, while load following and arbitrage revenues are more modest at $2 to $35 per TCL per year. These results lead to a number of policy recommendations that will make it easier to engage residential loads in fast timescale DR.
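The bin-based population model described above can be illustrated with a toy simulation. The sketch below is a minimal, illustrative reconstruction, not the dissertation's code: it builds an assumed dead-band Markov chain for a homogeneous TCL population, evolves the state distribution, and applies a simple proportional broadcast controller that switches a fraction of loads to track a power reference. All parameter values (bin count, rated power, gain) are assumptions.

```python
# Minimal, illustrative sketch (not the dissertation's code) of a bin-based
# Markov chain model for a population of thermostatically controlled loads
# (TCLs) with a proportional broadcast controller. All parameters are assumed.
import numpy as np

n_bins = 20                      # temperature bins across the dead-band
n_states = 2 * n_bins            # 0..n_bins-1 = OFF bins, n_bins..2*n_bins-1 = ON bins
n_tcl, p_rated = 1000, 2.5       # number of loads, kW drawn per load while ON (assumed)

# Column-stochastic transition matrix A[to, from]: OFF loads warm one bin per
# step, ON loads cool one bin per step, and loads switch at the dead-band edges.
A = np.zeros((n_states, n_states))
for i in range(n_bins - 1):
    A[i + 1, i] = 1.0                          # OFF: drift toward the upper limit
    A[n_bins + i, n_bins + i + 1] = 1.0        # ON: drift toward the lower limit
A[2 * n_bins - 1, n_bins - 1] = 1.0            # hottest OFF bin switches ON
A[0, n_bins] = 1.0                             # coldest ON bin switches OFF

x = np.ones(n_states) / n_states               # population distribution over states

def aggregate_power(x):
    return n_tcl * p_rated * x[n_bins:].sum()  # kW

k_p = 0.5                                      # proportional broadcast gain (assumed)
reference = aggregate_power(x) * (1 + 0.05 * np.sin(np.linspace(0, 2 * np.pi, 200)))
track_err = []
for ref in reference:
    x = A @ x                                  # free thermostatic evolution
    u = k_p * (ref - aggregate_power(x)) / (n_tcl * p_rated)
    if u > 0:                                  # need more power: switch OFF loads ON
        frac = min(1.0, u / max(x[:n_bins].sum(), 1e-12))
        moved = frac * x[:n_bins]
        x[:n_bins] -= moved
        x[n_bins:] += moved                    # same temperature bin, now ON
    elif u < 0:                                # need less power: switch ON loads OFF
        frac = min(1.0, -u / max(x[n_bins:].sum(), 1e-12))
        moved = frac * x[n_bins:]
        x[n_bins:] -= moved
        x[:n_bins] += moved
    track_err.append(ref - aggregate_power(x))

print(f"power tracking RMS error = {np.sqrt(np.mean(np.square(track_err))):.1f} kW")
```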
Sirgo, Gonzalo; Esteban, Federico; Gómez, Josep; Moreno, Gerard; Rodríguez, Alejandro; Blanch, Lluis; Guardiola, Juan José; Gracia, Rafael; De Haro, Lluis; Bodí, María
2018-04-01
Big data analytics promise insights into healthcare processes and management, improving outcomes while reducing costs. However, data quality is a major challenge for reliable results. Business process discovery techniques and an associated data model were used to develop a data management tool, ICU-DaMa, for extracting variables essential for overseeing the quality of care in the intensive care unit (ICU). The objective was to determine the feasibility of using ICU-DaMa to automatically extract variables for the minimum dataset and ICU quality indicators from the clinical information system (CIS). The Wilcoxon signed-rank test and Fisher's exact test were used to compare the values extracted from the CIS with ICU-DaMa for 25 variables from all patients attended in a polyvalent ICU during a two-month period against the gold standard of values manually extracted by two trained physicians. Discrepancies with the gold standard were classified into plausibility, conformance, and completeness errors. Data from 149 patients were included. Although there were no significant differences between the automatic method and the manual method, we detected differences in values for five variables, comprising one plausibility error, two conformance errors, and two completeness errors. Plausibility: 1) Sex, ICU-DaMa incorrectly classified one male patient as female (error generated by the Hospital's Admissions Department). Conformance: 2) Reason for isolation, ICU-DaMa failed to detect a human error in which a professional misclassified a patient's isolation. 3) Brain death, ICU-DaMa failed to detect another human error in which a professional likely entered two mutually exclusive values related to the death of the patient (brain death and controlled donation after circulatory death). Completeness: 4) Destination at ICU discharge, ICU-DaMa incorrectly classified two patients due to a professional failing to fill out the patient discharge form when the patients died. 5) Length of continuous renal replacement therapy, data were missing for one patient because the CRRT device was not connected to the CIS. Automatic generation of minimum dataset and ICU quality indicators using ICU-DaMa is feasible. The discrepancies were identified and can be corrected by improving CIS ergonomics, training healthcare professionals in the culture of the quality of information, and using tools for detecting and correcting data errors. Copyright © 2018 Elsevier B.V. All rights reserved.
Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error
NASA Astrophysics Data System (ADS)
Miller, Austin
In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
Are there meaningful individual differences in temporal inconsistency in self-reported personality?
Soubelet, Andrea; Salthouse, Timothy A; Oishi, Shigehiro
2014-11-01
The current project had three goals. The first was to examine whether it is meaningful to refer to across-time variability in self-reported personality as an individual differences characteristic. The second was to investigate whether negative affect was associated with variability in self-reported personality, while controlling for mean levels, and correcting for measurement error. The third goal was to examine whether variability in self-reported personality would be larger among young adults than among older adults, and whether the relation of variability with negative affect would be stronger at older ages than at younger ages. Two moderately large samples of participants completed the International Personality Item Pool questionnaire assessing the Big Five personality dimensions either twice or thrice, in addition to several measures of negative affect. Results were consistent with the hypothesis that within-person variability in self-reported personality is a meaningful individual difference characteristic. Some people exhibited greater across-time variability than others after removing measurement error, and people who showed temporal instability in one trait also exhibited temporal instability across the other four traits. However, temporal variability was not related to negative affect, and there was no evidence that either temporal variability or its association with negative affect varied with age.
Flouri, Eirini; Panourgia, Constantina
2011-06-01
The aim of this study was to test for gender differences in how negative cognitive errors (overgeneralizing, catastrophizing, selective abstraction, and personalizing) mediate the association between adverse life events and adolescents' emotional and behavioural problems (measured with the Strengths and Difficulties Questionnaire). The sample consisted of 202 boys and 227 girls (aged 11-15 years) from three state secondary schools in disadvantaged areas in one county in the South East of England. Control variables were age, ethnicity, special educational needs, exclusion history, family structure, family socio-economic disadvantage, and verbal cognitive ability. Adverse life events were measured with Tiet et al.'s (1998) Adverse Life Events Scale. For both genders, we assumed a pathway from adverse life events to emotional and behavioural problems via cognitive errors. We found no gender differences in life adversity, cognitive errors, total difficulties, peer problems, or hyperactivity. In both boys and girls, even after adjustment for controls, cognitive errors were related to total difficulties and emotional symptoms, and life adversity was related to total difficulties and conduct problems. The life adversity/conduct problems association was not explained by negative cognitive errors in either gender. However, we found gender differences in how adversity and cognitive errors produced hyperactivity and internalizing problems. In particular, life adversity was not related, after adjustment for controls, to hyperactivity in girls and to peer problems and emotional symptoms in boys. Cognitive errors fully mediated the effect of life adversity on hyperactivity in boys and on peer and emotional problems in girls.
Verification of models for ballistic movement time and endpoint variability.
Lin, Ray F; Drury, Colin G
2013-01-01
A hand control movement is composed of several ballistic movements. The time required in performing a ballistic movement and its endpoint variability are two important properties in developing movement models. The purpose of this study was to test potential models for predicting these two properties. Twelve participants conducted ballistic movements of specific amplitudes using a drawing tablet. The measured data of movement time and endpoint variability were then used to verify the models. Hoffmann and Gan's movement time model (Hoffmann, 1981; Gan and Hoffmann, 1988) was successful, predicting more than 90.7% of the data variance for 84 individual measurements. A new, theoretically developed ballistic movement variability model proved to be better than Howarth, Beggs, and Bowden's (1971) model, predicting on average 84.8% of stopping-variable errors and 88.3% of aiming-variable errors. These two validated models will help build solid theoretical movement models and evaluate input devices. This article provides better models for predicting endpoint accuracy and movement time of ballistic movements that are desirable in rapid aiming tasks, such as keying in numbers on a smart phone. The models allow better design of aiming tasks, for example button sizes on mobile phones for different user populations.
Examining Impulse-Variability in Kicking.
Chappell, Andrew; Molina, Sergio L; McKibben, Jonathon; Stodden, David F
2016-07-01
This study examined variability in kicking speed and spatial accuracy to test the impulse-variability theory prediction of an inverted-U function and the speed-accuracy trade-off. Twenty-eight 18- to 25-year-old adults kicked a playground ball at various percentages (50-100%) of their maximum speed at a wall target. Speed variability and spatial error were analyzed using repeated-measures ANOVA with built-in polynomial contrasts. Results indicated a significant inverse linear trajectory for speed variability (p < .001, η² = .345) where 50% and 60% maximum speed had significantly higher variability than the 100% condition. A significant quadratic fit was found for spatial error scores of mean radial error (p < .0001, η² = .474) and subject-centroid radial error (p < .0001, η² = .453). Findings suggest variability and accuracy of multijoint, ballistic skill performance may not follow the general principles of impulse-variability theory or the speed-accuracy trade-off.
Neuromotor Noise Is Malleable by Amplifying Perceived Errors
Zhang, Zhaoran; Abe, Masaki O.; Sternad, Dagmar
2016-01-01
Variability in motor performance results from the interplay of error correction and neuromotor noise. This study examined whether visual amplification of error, previously shown to improve performance, affects not only error correction, but also neuromotor noise, typically regarded as inaccessible to intervention. Seven groups of healthy individuals, with six participants in each group, practiced a virtual throwing task for three days until reaching a performance plateau. Over three more days of practice, six of the groups received different magnitudes of visual error amplification; three of these groups also had noise added. An additional control group was not subjected to any manipulations for all six practice days. The results showed that the control group did not improve further after the first three practice days, but the error amplification groups continued to decrease their error under the manipulations. Analysis of the temporal structure of participants’ corrective actions based on stochastic learning models revealed that these performance gains were attained by reducing neuromotor noise and, to a considerably lesser degree, by increasing the size of corrective actions. Based on these results, error amplification presents a promising intervention to improve motor function by decreasing neuromotor noise after performance has reached an asymptote. These results are relevant for patients with neurological disorders and the elderly. More fundamentally, these results suggest that neuromotor noise may be accessible to practice interventions. PMID:27490197
Writing executable assertions to test flight software
NASA Technical Reports Server (NTRS)
Mahmood, A.; Andrews, D. M.; Mccluskey, E. J.
1984-01-01
An executable assertion is a logical statement about the variables or a block of code. If there is no error during execution, the assertion statement results in a true value. Executable assertions can be used for dynamic testing of software. They can be employed for validation during the design phase, and for exception handling and error detection during the operation phase. The present investigation is concerned with the problem of writing executable assertions, taking into account the use of assertions for testing flight software. The digital flight control system and the flight control software are discussed. The considered system provides autopilot and flight director modes of operation for automatic and manual control of the aircraft during all phases of flight. Attention is given to techniques for writing and using assertions to test flight software, an experimental setup to test flight software, and language features to support efficient use of assertions.
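As a concrete illustration of the idea, the sketch below shows a range-style executable assertion guarding a flight-control variable. The variable names, limits, and recovery action are hypothetical, and Python is used only for readability; the report's assertions were written in the flight software's own language.

```python
# Hypothetical example of a range-style executable assertion guarding a
# flight-control variable; limits, names, and the recovery action are
# illustrative, not taken from the report.
def assert_in_range(name, value, low, high, log):
    """Executable assertion: returns True when no error is detected."""
    ok = low <= value <= high
    if not ok:
        log.append(f"ASSERTION FAILED: {name}={value} outside [{low}, {high}]")
    return ok

def update_pitch_command(pitch_cmd_deg, pitch_rate_dps, dt, log):
    # Assertions on the inputs (validation / error detection).
    assert_in_range("pitch_cmd_deg", pitch_cmd_deg, -15.0, 15.0, log)
    assert_in_range("pitch_rate_dps", pitch_rate_dps, -5.0, 5.0, log)
    new_cmd = pitch_cmd_deg + pitch_rate_dps * dt
    # Assertion on the result, used here as an exception-handling hook.
    if not assert_in_range("new_cmd", new_cmd, -15.0, 15.0, log):
        new_cmd = max(-15.0, min(15.0, new_cmd))   # recover to a safe value
    return new_cmd

log = []
print(update_pitch_command(14.0, 4.0, 1.0, log))   # drives the command out of range
print(log)
```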
Robust preview control for a class of uncertain discrete-time systems with time-varying delay.
Li, Li; Liao, Fucheng
2018-02-01
This paper proposes a concept of robust preview tracking control for uncertain discrete-time systems with time-varying delay. Firstly, a model transformation is employed for an uncertain discrete system with time-varying delay. Then, the auxiliary variables related to the system state and input are introduced to derive an augmented error system that includes future information on the reference signal. This leads to the tracking problem being transformed into a regulator problem. Finally, for the augmented error system, a sufficient condition of asymptotic stability is derived and the preview controller design method is proposed based on the scaled small gain theorem and linear matrix inequality (LMI) technique. The method proposed in this paper not only resolves the difficulty of applying the difference operator to time-varying matrices but also simplifies the structure of the augmented error system. The numerical simulation example also illustrates the effectiveness of the results presented in the paper. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
The perils of the imperfect expectation of the perfect baby.
Chervenak, Frank A; McCullough, Laurence B; Brent, Robert L
2010-08-01
Advances in modern medicine invite the assumption that medicine can control human biology. There is a perilous logic that leads from expectations of medicine's control over reproductive biology to the expectation of having a perfect baby. This article proposes that obstetricians should take a preventive ethics approach to the care of pregnant women with expectations for a perfect baby. We use Nathaniel Hawthorne's classic short story, "The Birthmark," to illustrate the perils of the logic of control and perfection through science and then identify possible contemporary sources of the expectation of the perfect baby. We propose that the informed consent process should be used as a preventive ethics tool throughout the course of pregnancy to educate pregnant women about the inherent errors of human reproduction, the highly variable clinical outcomes of these errors, the limited capacity of medicine to detect these errors, and the even more limited capacity to correct them. Copyright (c) 2010 Mosby, Inc. All rights reserved.
Understanding human management of automation errors
McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.
2013-01-01
Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042
Zouari, Farouk; Ibeas, Asier; Boulkroune, Abdesselem; Cao, Jinde; Mehdi Arefi, Mohammad
2018-06-01
This study addresses the issue of the adaptive output tracking control for a category of uncertain nonstrict-feedback delayed incommensurate fractional-order systems in the presence of nonaffine structures, unmeasured pseudo-states, unknown control directions, unknown actuator nonlinearities and output constraints. Firstly, the mean value theorem and the Gaussian error function are introduced to eliminate the difficulties that arise from the nonaffine structures and the unknown actuator nonlinearities, respectively. Secondly, the immeasurable tracking error variables are suitably estimated by constructing a fractional-order linear observer. Thirdly, the neural network, the Razumikhin Lemma, the variable separation approach, and the smooth Nussbaum-type function are used to deal with the uncertain nonlinear dynamics, the unknown time-varying delays, the nonstrict feedback and the unknown control directions, respectively. Fourthly, asymmetric barrier Lyapunov functions are employed to overcome the violation of the output constraints and to tune online the parameters of the adaptive neural controller. Through rigorous analysis, it is proved that the boundedness of all variables in the closed-loop system and semi-global asymptotic tracking are ensured without transgression of the constraints. The principal contributions of this study can be summarized as follows: (1) based on Caputo's definitions and new lemmas, methods concerning the controllability, observability and stability analysis of integer-order systems are extended to fractional-order ones, (2) the output tracking objective for a relatively large class of uncertain systems is achieved with a simple controller and fewer tuning parameters. Finally, computer-simulation studies from the robotic field are given to demonstrate the effectiveness of the proposed controller. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Steger, Stefan; Brenning, Alexander; Bell, Rainer; Glade, Thomas
2016-12-01
There is unanimous agreement that a precise spatial representation of past landslide occurrences is a prerequisite to produce high quality statistical landslide susceptibility models. Even though perfectly accurate landslide inventories rarely exist, investigations of how landslide inventory-based errors propagate into subsequent statistical landslide susceptibility models are scarce. The main objective of this research was to systematically examine whether and how inventory-based positional inaccuracies of different magnitudes influence modelled relationships, validation results, variable importance and the visual appearance of landslide susceptibility maps. The study was conducted for a landslide-prone site located in the districts of Amstetten and Waidhofen an der Ybbs, eastern Austria, where an earth-slide point inventory was available. The methodological approach comprised an artificial introduction of inventory-based positional errors into the present landslide data set and an in-depth evaluation of subsequent modelling results. Positional errors were introduced by artificially changing the original landslide position by a mean distance of 5, 10, 20, 50 and 120 m. The resulting differently precise response variables were separately used to train logistic regression models. Odds ratios of predictor variables provided insights into modelled relationships. Cross-validation and spatial cross-validation enabled an assessment of predictive performances and permutation-based variable importance. All analyses were additionally carried out with synthetically generated data sets to further verify the findings under rather controlled conditions. The results revealed that an increasing positional inventory-based error was generally related to increasing distortions of modelling and validation results. However, the findings also highlighted that interdependencies between inventory-based spatial inaccuracies and statistical landslide susceptibility models are complex. The systematic comparisons of 12 models provided valuable evidence that the respective error-propagation was not only determined by the degree of positional inaccuracy inherent in the landslide data, but also by the spatial representation of landslides and the environment, landslide magnitude, the characteristics of the study area, the selected classification method and an interplay of predictors within multiple variable models. Based on the results, we deduced that a direct propagation of minor to moderate inventory-based positional errors into modelling results can be partly counteracted by adapting the modelling design (e.g. generalization of input data, opting for strongly generalizing classifiers). Since positional errors within landslide inventories are common and subsequent modelling and validation results are likely to be distorted, the potential existence of inventory-based positional inaccuracies should always be considered when assessing landslide susceptibility by means of empirical models.
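A minimal, synthetic reconstruction of the error-propagation experiment might look like the sketch below: a susceptibility surface and point inventory are simulated, landslide positions are displaced by increasing amounts, and a logistic regression is refitted each time to show how the odds ratios and discrimination degrade. The raster size, predictors, and coefficients are assumptions, not the study's data.

```python
# Synthetic reconstruction (assumed data, not the study's) of how positional
# error in a landslide point inventory distorts a logistic-regression
# susceptibility model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
size = 200                                            # 200 x 200 raster, 10 m cells (assumed)
xx, yy = np.meshgrid(np.arange(size), np.arange(size))
slope = 30 * np.exp(-((xx - 100) ** 2 + (yy - 100) ** 2) / 4000.0)    # predictor 1
wetness = rng.normal(size=(size, size))                               # predictor 2

# "True" susceptibility and a synthetic landslide inventory drawn from it.
p_true = 1 / (1 + np.exp(-(-4 + 0.15 * slope + 0.5 * wetness)))
landslide = rng.random((size, size)) < p_true
X = np.column_stack([slope.ravel(), wetness.ravel()])

def fit_with_positional_error(shift_cells):
    pts = np.argwhere(landslide)
    noisy = pts + rng.normal(scale=shift_cells, size=pts.shape)       # displace positions
    noisy = np.clip(np.round(noisy).astype(int), 0, size - 1)
    y = np.zeros((size, size), dtype=int)
    y[noisy[:, 0], noisy[:, 1]] = 1
    model = LogisticRegression().fit(X, y.ravel())
    auc = roc_auc_score(landslide.ravel(), model.predict_proba(X)[:, 1])
    return np.exp(model.coef_[0, 0]), auc

for shift in (0, 0.5, 1, 2, 5, 12):                   # roughly 0-120 m at 10 m cell size
    odds_ratio, auc = fit_with_positional_error(shift)
    print(f"shift={shift:>4} cells  OR(slope)={odds_ratio:.2f}  AUC vs true inventory={auc:.3f}")
```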
Molina, Sergio L; Stodden, David F
2018-04-01
This study examined variability in throwing speed and spatial error to test the prediction of an inverted-U function (i.e., impulse-variability [IV] theory) and the speed-accuracy trade-off. Forty-five 9- to 11-year-old children were instructed to throw at a specified percentage of maximum speed (45%, 65%, 85%, and 100%) and hit the wall target. Results indicated no statistically significant differences in variable error across the target conditions (p = .72), failing to support the inverted-U hypothesis. Spatial accuracy results indicated no statistically significant differences with mean radial error (p = .18), centroid radial error (p = .13), and bivariate variable error (p = .08) also failing to support the speed-accuracy trade-off in overarm throwing. As neither throwing performance variability nor accuracy changed across percentages of maximum speed in this sample of children as well as in a previous adult sample, current policy and practices of practitioners may need to be reevaluated.
Reporting the accuracy of biochemical measurements for epidemiologic and nutrition studies.
McShane, L M; Clark, L C; Combs, G F; Turnbull, B W
1991-06-01
Procedures for reporting and monitoring the accuracy of biochemical measurements are presented. They are proposed as standard reporting procedures for laboratory assays for epidemiologic and clinical-nutrition studies. The recommended procedures require identification and estimation of all major sources of variability and explanations of laboratory quality control procedures employed. Variance-components techniques are used to model the total variability and calculate a maximum percent error that provides an easily understandable measure of laboratory precision accounting for all sources of variability. This avoids ambiguities encountered when reporting an SD that may take into account only a few of the potential sources of variability. Other proposed uses of the total-variability model include estimating precision of laboratory methods for various replication schemes and developing effective quality control-checking schemes. These procedures are demonstrated with an example of the analysis of alpha-tocopherol in human plasma by using high-performance liquid chromatography.
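The variance-components idea can be sketched as follows for a balanced one-way (between-run/within-run) design. The replicate values and the use of twice the total coefficient of variation as the "maximum percent error" are illustrative assumptions, not the paper's exact data or formulation.

```python
# Illustrative one-way variance-components calculation (between-run and
# within-run) and a total-variability summary for a laboratory assay. The
# replicate values and the "2 x total CV" form of the maximum percent error
# are assumptions, not the paper's exact data or formulation.
import numpy as np

# Replicate QC measurements of alpha-tocopherol (umol/L), grouped by analytical run.
runs = [[22.1, 22.4, 21.9],
        [23.0, 23.3, 22.8],
        [21.5, 21.8, 21.6],
        [22.6, 22.9, 22.7]]
n = len(runs[0])                                            # replicates per run (balanced)
grand_mean = np.mean([x for run in runs for x in run])

ms_within = np.mean([np.var(run, ddof=1) for run in runs])  # within-run mean square
ms_between = n * np.var([np.mean(run) for run in runs], ddof=1)
var_within = ms_within
var_between = max(0.0, (ms_between - ms_within) / n)        # method-of-moments estimate
cv_total = np.sqrt(var_within + var_between) / grand_mean

max_percent_error = 2 * 100 * cv_total                      # ~95% bound on relative error
print(f"total CV = {100 * cv_total:.1f}%, maximum percent error ≈ {max_percent_error:.1f}%")
```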
Adjusting for radiotelemetry error to improve estimates of habitat use.
Scott L. Findholt; Bruce K. Johnson; Lyman L. McDonald; John W. Kern; Alan Ager; Rosemary J. Stussy; Larry D. Bryant
2002-01-01
Animal locations estimated from radiotelemetry have traditionally been treated as error-free when analyzed in relation to habitat variables. Location error lowers the power of statistical tests of habitat selection. We describe a method that incorporates the error surrounding point estimates into measures of environmental variables determined from a geographic...
Albrecht, Bjoern; Brandeis, Daniel; Uebel, Henrik; Heinrich, Hartmut; Mueller, Ueli C.; Hasselhorn, Marcus; Steinhausen, Hans-Christoph; Rothenberger, Aribert; Banaschewski, Tobias
2008-01-01
Background Attention deficit/hyperactivity disorder is a very common and highly heritable child psychiatric disorder associated with dysfunctions in fronto-striatal networks that control attention and response organisation. Aim of this study was to investigate whether features of action monitoring related to dopaminergic functions represent endophenotypes which are brain functions on the pathway from genes and environmental risk factors to behaviour. Methods Action monitoring and error processing as indicated by behavioural and electrophysiological parameters during a flanker task were examined in boys with ADHD combined type according to DSM-IV (N=68), their nonaffected siblings (N=18) and healthy controls with no known family history of ADHD (N=22). Results Boys with ADHD displayed slower and more variable reaction-times. Error negativity (Ne) was smaller in boys with ADHD compared to healthy controls, while nonaffected siblings displayed intermediate amplitudes following a linear model predicted by genetic concordance. The three groups did not differ on error positivity (Pe). N2 amplitude enhancement due to conflict (incongruent flankers) was reduced in the ADHD group. Nonaffected siblings also displayed intermediate N2 enhancement. Conclusions Converging evidence from behavioural and ERP findings suggests that action monitoring and initial error processing, both related to dopaminergically modulated functions of anterior cingulate cortex, might be an endophenotype related to ADHD. PMID:18339358
Strength training improves the tri-digit finger-pinch force control of older adults.
Keogh, Justin W; Morrison, Steve; Barrett, Rod
2007-08-01
To investigate the effect of unilateral upper-limb strength training on the finger-pinch force control of older men. Pretest and post-test 6-week intervention study. Exercise science research laboratory. Eleven neurologically fit older men (age range, 70-80y). The strength training group (n=7) trained twice a week for 6 weeks, performing dumbbell bicep curls, wrist flexions, and wrist extensions, while the control group subjects (n=4) maintained their normal activities. Changes in force variability, targeting error, peak power frequency, proportional power, sample entropy, digit force sharing, and coupling relations were assessed during a series of finger-pinch tasks. These tasks involved maintaining a constant or sinusoidal force output at 20% and 40% of each subject's maximum voluntary contraction. All participants performed the finger-pinch tasks with both the preferred and nonpreferred limbs. Analysis of covariance for between-group change scores indicated that the strength training group (trained limb) experienced significantly greater reductions in finger-pinch force variability and targeting error, as well as significantly greater increases in finger-pinch force, sample entropy, bicep curl, and wrist flexion strength than did the control group. A nonspecific upper-limb strength-training program may improve the finger-pinch force control of older men.
On the primary variable switching technique for simulating unsaturated-saturated flows
NASA Astrophysics Data System (ADS)
Diersch, H.-J. G.; Perrochet, P.
Primary variable switching appears as a promising numerical technique for variably saturated flows. While the standard pressure-based form of the Richards equation can suffer from poor mass balance accuracy, the mixed form with its improved conservative properties can possess convergence difficulties for dry initial conditions. On the other hand, variable switching can overcome most of the stated numerical problems. The paper deals with variable switching for finite elements in two and three dimensions. The technique is incorporated in both an adaptive error-controlled predictor-corrector one-step Newton (PCOSN) iteration strategy and a target-based full Newton (TBFN) iteration scheme. Both schemes provide different behaviors with respect to accuracy and solution effort. Additionally, a simplified upstream weighting technique is used. Compared with conventional approaches the primary variable switching technique represents a fast and robust strategy for unsaturated problems with dry initial conditions. The impact of the primary variable switching technique is studied over a wide range of mostly 2D and partly difficult-to-solve problems (infiltration, drainage, perched water table, capillary barrier), where comparable results are available. It is shown that the TBFN iteration is an effective but error-prone procedure. TBFN sacrifices temporal accuracy in favor of accelerated convergence if aggressive time step sizes are chosen.
Variability Analysis based on POSS1/POSS2 Photometry
NASA Astrophysics Data System (ADS)
Mickaelian, Areg M.; Sarkissian, Alain; Sinamyan, Parandzem K.
2012-04-01
We introduce accurate magnitudes as combined calculations from catalogues based on accurate measurements of POSS1- and POSS2-epoch plates. The photometric accuracy of various catalogues was established, and statistical weights for each of them have been calculated. To achieve the best possible magnitudes, we used weighted averaging of data from APM, MAPS, USNO-A2.0, USNO-B1.0 (for POSS1-epoch), and USNO-B1.0 and GSC 2.3.2 (for POSS2-epoch) catalogues. The r.m.s. accuracy of magnitudes achieved for POSS1 is 0.184 in B and 0.173 mag in R, or 0.138 in B and 0.128 in R for POSS2. By adopting those new magnitudes we examined the First Byurakan Survey (FBS) of blue stellar objects for variability, and uncovered 336 probable and possible variables among 1103 objects with POSS2-POSS1 >= 3σ of the errors, including 161 highly probable variables. We have developed methods to control and exclude accidental errors for any survey. We compared and combined our results with those given in Northern Sky Variability Survey (NSVS) database, and obtained firm candidates for variability. By such an approach it will be possible to conduct investigations of variability for large numbers of objects.
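The weighted-averaging step can be illustrated with a short sketch; the per-catalogue magnitude errors below are assumed placeholders, not the statistical weights derived in the paper.

```python
# Sketch of inverse-variance weighted averaging of plate magnitudes from
# several catalogues and a 3-sigma variability test between epochs; the
# per-catalogue errors below are assumed placeholders, not the weights
# derived in the paper.
import numpy as np

def weighted_mag(mags, sigmas):
    w = 1.0 / np.asarray(sigmas) ** 2
    return np.sum(w * np.asarray(mags)) / np.sum(w), np.sqrt(1.0 / np.sum(w))

# One object, B band: POSS1-epoch and POSS2-epoch catalogue measurements.
b1, e1 = weighted_mag([17.42, 17.51, 17.38], [0.25, 0.30, 0.22])   # e.g. APM, USNO-A2.0, USNO-B1.0
b2, e2 = weighted_mag([17.95, 18.02], [0.18, 0.20])                # e.g. USNO-B1.0, GSC 2.3.2

delta, sigma = b2 - b1, np.hypot(e1, e2)
verdict = "variability candidate" if abs(delta) >= 3 * sigma else "not significant"
print(f"POSS2-POSS1 = {delta:+.2f} ± {sigma:.2f} mag -> {verdict}")
```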
Verifying and Postprocessing the Ensemble Spread-Error Relationship
NASA Astrophysics Data System (ADS)
Hopson, Tom; Knievel, Jason; Liu, Yubao; Roux, Gregory; Wu, Wanli
2013-04-01
With the increased utilization of ensemble forecasts in weather and hydrologic applications, there is a need to verify their benefit over less expensive deterministic forecasts. One such potential benefit of ensemble systems is their capacity to forecast their own forecast error through the ensemble spread-error relationship. The paper begins by revisiting the limitations of the Pearson correlation alone in assessing this relationship. Next, we introduce two new metrics to consider in assessing the utility of an ensemble's varying dispersion. We argue there are two aspects of an ensemble's dispersion that should be assessed. First, and perhaps more fundamentally: is there enough variability in the ensemble's dispersion to justify the maintenance of an expensive ensemble prediction system (EPS), irrespective of whether the EPS is well-calibrated or not? To diagnose this, the factor that controls the theoretical upper limit of the spread-error correlation can be useful. Secondly, does the variable dispersion of an ensemble relate to variable expectation of forecast error? Representing the spread-error correlation in relation to its theoretical limit can provide a simple diagnostic of this attribute. A context for these concepts is provided by assessing two operational ensembles: 30-member Western US temperature forecasts for the U.S. Army Test and Evaluation Command and 51-member Brahmaputra River flow forecasts of the Climate Forecast and Applications Project for Bangladesh. Both of these systems utilize a postprocessing technique based on quantile regression (QR) under a step-wise forward selection framework leading to ensemble forecasts with both good reliability and sharpness. In addition, the methodology utilizes the ensemble's ability to self-diagnose forecast instability to produce calibrated forecasts with informative skill-spread relationships. We will describe both ensemble systems briefly, review the steps used to calibrate the ensemble forecast, and present verification statistics using error-spread metrics, along with figures from operational ensemble forecasts before and after calibration.
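A toy calculation of the two dispersion diagnostics discussed above, using synthetic forecasts, is sketched below; the spread distribution, sample sizes, and the simulation-based estimate of the theoretical upper limit are assumptions for illustration.

```python
# Toy illustration (synthetic forecasts, assumed distributions) of the two
# dispersion diagnostics discussed above: the spread-|error| correlation and
# its theoretical upper limit for an ensemble with the same spread variability.
import numpy as np

rng = np.random.default_rng(1)
spread = rng.gamma(shape=4.0, scale=0.5, size=2000)        # varying ensemble spread

# "Observed" errors for a calibrated system: error std equals the spread.
obs_corr = np.corrcoef(spread, np.abs(rng.normal(scale=spread)))[0, 1]

# Upper limit: average correlation over synthetic errors that are perfectly
# consistent with the spreads by construction.
upper = np.mean([np.corrcoef(spread, np.abs(rng.normal(scale=spread)))[0, 1]
                 for _ in range(200)])

print(f"spread-error correlation = {obs_corr:.2f}")
print(f"theoretical upper limit  = {upper:.2f}")
print(f"correlation / upper limit = {obs_corr / upper:.2f}")
```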
An unusual kind of complex synchronizations and its applications in secure communications
NASA Astrophysics Data System (ADS)
Mahmoud, Emad E.
2017-11-01
In this paper, we define complex anti-synchronization (CAS) of hyperchaotic nonlinear systems with complex variables and uncertain parameters. This type of synchronization can be analyzed only for complex nonlinear systems. CAS combines two kinds of synchronization (complete synchronization and anti-synchronization). In CAS, the attractors of the master and slave systems move opposite or orthogonal to each other with a similar form, a phenomenon not previously reported in the literature. Based on a Lyapunov function and an adaptive control strategy, a scheme is designed to achieve CAS of two identical hyperchaotic attractors of these systems. The effectiveness of the obtained results is demonstrated by a simulation example. Numerical results are plotted for the state variables, synchronization errors, module errors, and phase errors of the hyperchaotic attractors after synchronization to confirm that CAS is achieved. These results provide a possible foundation for secure communication applications. CAS of hyperchaotic complex systems, in which a state variable of the master system synchronizes with a different state variable of the slave system, is a promising kind of synchronization because it contributes excellent security in secure communications. During such secure communication, synchronization between the transmitter and the receiver is established and the message signals are recovered. The encryption and recovery of the signals are simulated numerically.
Fuzzy Rule Suram for Wood Drying
NASA Astrophysics Data System (ADS)
Situmorang, Zakarias
2017-12-01
Implementing fuzzy rules requires a look-up table for the defuzzification step; the look-up table drives the actuator plant with the defuzzified values. The suram rule, based on fuzzy logic with ambient temperature and ambient humidity as weather variables, is implemented for the wood drying process. The membership functions of the state variables are represented by the error value and the change of error, using triangular and trapezoidal maps. The analysis yields four fuzzy rules covering 81 conditions for controlling the output system, which can be constructed for a range of weather and air conditions. The rules are used to minimize the electric energy consumed by the heater. One cycle of the drying schedule is a series of chamber conditions appropriate to the wood species being processed.
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Gupta, N. K.; Hansen, R. S.
1978-01-01
An integrated approach to rotorcraft system identification is described. This approach consists of sequential application of (1) data filtering to estimate states of the system and sensor errors, (2) model structure estimation to isolate significant model effects, and (3) parameter identification to quantify the coefficient of the model. An input design algorithm is described which can be used to design control inputs which maximize parameter estimation accuracy. Details of each aspect of the rotorcraft identification approach are given. Examples of both simulated and actual flight data processing are given to illustrate each phase of processing. The procedure is shown to provide means of calibrating sensor errors in flight data, quantifying high order state variable models from the flight data, and consequently computing related stability and control design models.
NASA Astrophysics Data System (ADS)
Castillo, Carlos; Pérez, Rafael
2017-04-01
The assessment of gully erosion volumes is essential for the quantification of soil losses derived from this relevant degradation process. Traditionally, 2D and 3D approaches have been applied for this purpose (Casalí et al., 2006). Although innovative 3D approaches have recently been proposed for gully volume quantification, a renewed interest can be found in the literature regarding the useful information that cross-section analysis still provides in gully erosion research. Moreover, the application of methods based on 2D approaches can be the most cost-effective approach in many situations such as preliminary studies with low accuracy requirements or surveys under time or budget constraints. The main aim of this work is to examine the key factors controlling volume error variability in 2D gully assessment by means of a stochastic experiment involving a Monte Carlo analysis over synthetic gully profiles in order to 1) contribute to a better understanding of the drivers and magnitude of gully erosion 2D-surveys uncertainty and 2) provide guidelines for optimal survey designs. Owing to the stochastic properties of error generation in 2D volume assessment, a statistical approach was followed to generate a large and significant set of gully reach configurations to evaluate quantitatively the influence of the main factors controlling the uncertainty of the volume assessment. For this purpose, a simulation algorithm in Matlab® code was written, involving the following stages: - Generation of synthetic gully area profiles with different degrees of complexity (characterized by the cross-section variability) - Simulation of field measurements characterised by a survey intensity and the precision of the measurement method - Quantification of the volume error uncertainty as a function of the key factors In this communication we will present the relationships between volume error and the studied factors and propose guidelines for 2D field surveys based on the minimal survey densities required to achieve a certain accuracy given the cross-sectional variability of a gully and the measurement method applied. References Casalí, J., Loizu, J., Campo, M.A., De Santisteban, L.M., Alvarez-Mozos, J., 2006. Accuracy of methods for field assessment of rill and ephemeral gully erosion. Catena 67, 128-138. doi:10.1016/j.catena.2006.03.005
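A compact version of such a stochastic experiment is sketched below (the original simulation was written in Matlab; Python is used here only for illustration): synthetic cross-sectional area profiles of varying complexity are sampled at different survey intensities with multiplicative measurement noise, and the resulting 2D volume errors are summarized over Monte Carlo repetitions. Profile shapes, noise levels, and section counts are assumptions.

```python
# Compact sketch (synthetic profiles, assumed parameters) of a Monte Carlo
# experiment relating 2D gully-volume error to cross-sectional variability,
# survey intensity, and measurement precision.
import numpy as np

rng = np.random.default_rng(42)
length = 200.0                                      # reach length (m)
x_fine = np.linspace(0.0, length, 2001)

def trapz(y, x):                                    # simple trapezoidal integration
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def synthetic_area_profile(variability):
    """Cross-sectional area (m^2) along the reach; larger variability = more complex gully."""
    area = 1.5 + 0.5 * np.sin(2 * np.pi * x_fine / length)
    for i, b in enumerate(variability * rng.normal(size=8)):
        area += b * np.exp(-((x_fine - (i + 0.5) * length / 8) ** 2) / 50.0)
    return np.clip(area, 0.05, None)

def volume_error(area, n_sections, rel_precision):
    true_vol = trapz(area, x_fine)
    xs = np.linspace(0.0, length, n_sections)       # surveyed cross-section positions
    measured = np.interp(xs, x_fine, area) * (1 + rng.normal(scale=rel_precision, size=n_sections))
    return 100.0 * (trapz(measured, xs) - true_vol) / true_vol

for variability in (0.1, 0.5):
    for n_sections in (5, 10, 20, 40):
        errs = [volume_error(synthetic_area_profile(variability), n_sections, 0.05)
                for _ in range(500)]
        print(f"variability={variability}  sections={n_sections:>2}  "
              f"95th pct |error| = {np.percentile(np.abs(errs), 95):5.1f}%")
```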
Mismeasurement and the resonance of strong confounders: correlated errors.
Marshall, J R; Hastrup, J L; Ross, J S
1999-07-01
Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.
Lobach, Iryna; Mallick, Bani; Carroll, Raymond J
2011-01-01
Case-control studies are widely used to detect gene-environment interactions in the etiology of complex diseases. Many variables that are of interest to biomedical researchers are difficult to measure on an individual level, e.g. nutrient intake, cigarette smoking exposure, long-term toxic exposure. Measurement error causes bias in parameter estimates, thus masking key features of data and leading to loss of power and spurious/masked associations. We develop a Bayesian methodology for analysis of case-control studies for the case when measurement error is present in an environmental covariate and the genetic variable has missing data. This approach offers several advantages. It allows prior information to enter the model to make estimation and inference more precise. The environmental covariates measured exactly are modeled completely nonparametrically. Further, information about the probability of disease can be incorporated in the estimation procedure to improve the quality of parameter estimates, which cannot be done in conventional case-control studies. A unique feature of the procedure under investigation is that the analysis is based on a pseudo-likelihood function; therefore, conventional Bayesian techniques may not be technically correct. We propose an approach using Markov Chain Monte Carlo sampling as well as a computationally simple method based on an asymptotic posterior distribution. Simulation experiments demonstrated that our method produced parameter estimates that are nearly unbiased even for small sample sizes. An application of our method is illustrated using a population-based case-control study of the association between calcium intake and the risk of colorectal adenoma development.
Hromadka, T.V.; Guymon, G.L.
1985-01-01
An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.
Robust Control Analysis of Hydraulic Turbine Speed
NASA Astrophysics Data System (ADS)
Jekan, P.; Subramani, C.
2018-04-01
An effective control strategy for the hydro-turbine governor in a real-time scenario is the objective of this paper. Considering the complex dynamic characteristics and the uncertainty of the hydro-turbine governor model, and taking the static and dynamic performance of the governing system as the ultimate goal, the designed logic combines classical PID control theory with artificial intelligence to obtain the desired output. The controller uses a variable control technique; therefore, its parameters can be adaptively adjusted according to information about the control error signal.
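A rough sketch of the idea, using a heavily simplified turbine/generator model and an assumed error-based gain adaptation (not the paper's controller), is given below; all plant constants and the gain schedule are illustrative.

```python
# Toy sketch of a speed governor whose PID gains are adapted from the error
# signal, applied to a heavily simplified turbine/generator model. The plant
# constants, gain schedule, and load step are all assumed for illustration.
import numpy as np

dt, t_end = 0.01, 20.0
Tw, Ta = 1.0, 6.0                       # water/power lag and machine inertia times (assumed)
speed = gate = p_mech = 0.0             # per-unit deviations from the operating point
integral = prev_err = 0.0
load_step = 0.10                        # 10% load increase applied at t = 1 s

history = []
for k in range(int(t_end / dt)):
    t = k * dt
    err = -speed                        # speed setpoint deviation is zero

    # Simple adaptation: stronger proportional action for large errors,
    # stronger integral action once the error is small.
    kp = 2.0 + 6.0 * min(abs(err) / 0.05, 1.0)
    ki = 0.8 if abs(err) < 0.02 else 0.3
    kd = 1.5

    integral += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    gate_cmd = float(np.clip(kp * err + ki * integral + kd * deriv, -1.0, 1.0))

    # Crude plant: gate servo -> mechanical power lag -> rotor acceleration.
    gate += (gate_cmd - gate) * dt / 0.5
    p_mech += (gate - p_mech) * dt / Tw          # ignores non-minimum-phase water hammer
    p_load = load_step if t >= 1.0 else 0.0
    speed += (p_mech - p_load) * dt / Ta
    history.append(speed)

print(f"largest speed dip = {min(history):.4f} pu, final deviation = {history[-1]:.4f} pu")
```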
Reinforcement learning state estimator.
Morimoto, Jun; Doya, Kenji
2007-03-01
In this study, we propose a novel use of reinforcement learning for estimating hidden variables and parameters of nonlinear dynamical systems. A critical issue in hidden-state estimation is that we cannot directly observe estimation errors. However, by defining errors of observable variables as a delayed penalty, we can apply a reinforcement learning framework to state estimation problems. Specifically, we derive a method to construct a nonlinear state estimator by finding an appropriate feedback input gain using the policy gradient method. We tested the proposed method on single-pendulum dynamics and show that the joint angle variable could be successfully estimated by observing only the angular velocity, and vice versa. In addition, we show that we could acquire a state estimator for the pendulum swing-up task in which a swing-up controller is also acquired by reinforcement learning simultaneously. Furthermore, we demonstrate that it is possible to estimate the dynamics of the pendulum itself while the hidden variables are estimated in the pendulum swing-up task. Application of the proposed method to a two-linked biped model is also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, N. J.; Marriage, T. A.; Appel, J. W.
2016-02-20
Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guides experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls as described, front-end VPM modulation can be very powerful for mitigating 1/f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limit the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r = 0.01. Indeed, r < 0.01 is achievable with commensurately improved characterizations and controls.
Liquid Medication Dosing Errors by Hispanic Parents: Role of Health Literacy and English Proficiency
Harris, Leslie M.; Dreyer, Benard; Mendelsohn, Alan; Bailey, Stacy C.; Sanders, Lee M.; Wolf, Michael S.; Parker, Ruth M.; Patel, Deesha A.; Kim, Kwang Youn A.; Jimenez, Jessica J.; Jacobson, Kara; Smith, Michelle; Yin, H. Shonna
2016-01-01
Objective Hispanic parents in the US are disproportionately affected by low health literacy and limited English proficiency (LEP). We examined associations between health literacy, LEP, and liquid medication dosing errors in Hispanic parents. Methods Cross-sectional analysis of data from a multisite randomized controlled experiment to identify best practices for the labeling/dosing of pediatric liquid medications (SAFE Rx for Kids study); 3 urban pediatric clinics. Analyses were limited to Hispanic parents of children <8 years, with health literacy and LEP data (n=1126). Parents were randomized to 5 groups that varied by pairing of units of measurement on the label/dosing tool. Each parent measured 9 doses [3 amounts (2.5,5,7.5 mL) using 3 tools (2 syringes (0.2,0.5 mL increment), 1 cup)] in random order. Dependent variable: Dosing error=>20% dose deviation. Predictor variables: health literacy (Newest Vital Sign) [limited=0–3; adequate=4–6], LEP (speaks English less than “very well”). Results 83.1% made dosing errors (mean(SD) errors/parent=2.2(1.9)). Parents with limited health literacy and LEP had the greatest odds of making a dosing error compared to parents with adequate health literacy who were English proficient (% trials with errors/parent=28.8 vs. 12.9%; AOR=2.2[1.7–2.8]). Parents with limited health literacy who were English proficient were also more likely to make errors (% trials with errors/parent=18.8%; AOR=1.4[1.1–1.9]). Conclusion Dosing errors are common among Hispanic parents; those with both LEP and limited health literacy are at particular risk. Further study is needed to examine how the redesign of medication labels and dosing tools could reduce literacy and language-associated disparities in dosing errors. PMID:28477800
Testing Multiple Outcomes in Repeated Measures Designs
ERIC Educational Resources Information Center
Lix, Lisa M.; Sajobi, Tolulope
2010-01-01
This study investigates procedures for controlling the familywise error rate (FWR) when testing hypotheses about multiple, correlated outcome variables in repeated measures (RM) designs. A content analysis of RM research articles published in 4 psychology journals revealed that 3 quarters of studies tested hypotheses about 2 or more outcome…
Identifying Behavioral Measures of Stress in Individuals with Aphasia
ERIC Educational Resources Information Center
Laures-Gore, Jacqueline S.; DuBay, Michaela F.; Duff, Melissa C.; Buchanan, Tony W.
2010-01-01
Purpose: To develop valid indicators of stress in individuals with aphasia (IWA) by examining the relationship between certain language variables (error frequency [EF] and word productivity [WP]) and cortisol reactivity. Method: Fourteen IWA and 10 controls participated in a speaking task. Salivary cortisol was collected pre- and posttask. WP and…
Fitts, Douglas A
2017-09-21
The variable criteria sequential stopping rule (vcSSR) is an efficient way to add sample size to planned ANOVA tests while holding the observed rate of Type I errors, α_o, constant. The only difference from regular null hypothesis testing is that criteria for stopping the experiment are obtained from a table based on the desired power, rate of Type I errors, and beginning sample size. The vcSSR was developed using between-subjects ANOVAs, but it should work with p values from any type of F test. In the present study, α_o remained constant at the nominal level when using the previously published table of criteria with repeated measures designs with various numbers of treatments per subject, Type I error rates, values of ρ, and four different sample size models. New power curves allow researchers to select the optimal sample size model for a repeated measures experiment. The criteria held α_o constant either when used with a multiple correlation that varied the sample size model and the number of predictor variables, or when used with MANOVA with multiple groups and two levels of a within-subject variable at various levels of ρ. Although not recommended for use with χ² tests such as the Friedman rank ANOVA test, the vcSSR produces predictable results based on the relation between F and χ². Together, the data confirm the view that the vcSSR can be used to control Type I errors during sequential sampling with any t- or F-statistic rather than being restricted to certain ANOVA designs.
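Schematically, the procedure can be sketched as below for a paired within-subject comparison. The lower and upper stopping criteria and the sample-size increments shown are placeholders, not values from the published vcSSR tables, which depend on the desired power, Type I error rate, and beginning sample size.

```python
# Schematic sketch of a variable-criteria sequential stopping rule applied to
# the p value of a within-subject (paired) test. The stopping criteria and
# sample-size increments below are placeholders, not values from the published
# vcSSR tables, which depend on alpha, power, and the starting sample size.
import numpy as np
from scipy import stats

LOWER, UPPER = 0.01, 0.36          # placeholder criteria (look these up in the vcSSR table)
N_START, N_STEP, N_MAX = 8, 4, 32  # placeholder sample-size model

def vcssr(draw_subject, rng):
    """Add subjects until p < LOWER (reject), p > UPPER (stop), or N_MAX is reached."""
    data = [draw_subject(rng) for _ in range(N_START)]
    while True:
        arr = np.vstack(data)                          # subjects x within-subject conditions
        p = stats.ttest_rel(arr[:, 0], arr[:, 1]).pvalue
        if p < LOWER:
            return "reject H0", len(data), p
        if p > UPPER or len(data) >= N_MAX:
            return "stop, retain H0", len(data), p
        data.extend(draw_subject(rng) for _ in range(N_STEP))

rng = np.random.default_rng(3)
null_subject = lambda rng: rng.normal(size=2)                            # no true effect
effect_subject = lambda rng: rng.normal(size=2) + np.array([0.0, 0.8])   # within-subject effect
print(vcssr(null_subject, rng))
print(vcssr(effect_subject, rng))
```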
Implementation of smart phone video plethysmography and dependence on lighting parameters.
Fletcher, Richard Ribón; Chamberlain, Daniel; Paggi, Nicholas; Deng, Xinyue
2015-08-01
The remote measurement of heart rate (HR) and heart rate variability (HRV) via a digital camera (video plethysmography) has emerged as an area of great interest for biomedical and health applications. While a few implementations of video plethysmography have been demonstrated on smart phones under controlled lighting conditions, it has been challenging to create a general scalable solution due to the large variability in smart phone hardware performance, software architecture, and the variable response to lighting parameters. In this context, we present a self-contained smart phone implementation of video plethysmography for Android OS, which employs both stochastic and deterministic algorithms, and we use this to study the effect of lighting parameters (illuminance, color spectrum) on the accuracy of the remote HR measurement. Using two different phone models, we present the median HR error for five different video plethysmography algorithms under three different types of lighting (natural sunlight, compact fluorescent, and halogen incandescent) and variations in brightness. For most algorithms, we found the optimum light brightness to be in the range 1000-4000 lux and the optimum lighting types to be compact fluorescent and natural light. Moderate errors were found for most algorithms with some devices under conditions of low brightness (<500 lux) and high brightness (>4000 lux). Our analysis also identified camera frame rate jitter as a major source of variability and error across different phone models, but this can be largely corrected through non-linear resampling. Based on testing with six human subjects, our real-time Android implementation successfully predicted the measured HR with a median error of -0.31 bpm, and an inter-quartile range of 2.1 bpm.
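The jitter-correction step mentioned above amounts to resampling the plethysmographic trace onto a uniform time base before spectral HR estimation. The sketch below illustrates this with simple linear interpolation (the paper reports non-linear resampling); the synthetic pulse signal, jitter level, and band limits are assumptions.

```python
# Sketch of the jitter-correction step: a pulse trace sampled at jittered frame
# times is resampled onto a uniform time base before spectral HR estimation.
# Linear interpolation is used here for brevity (the paper reports non-linear
# resampling); the signal, jitter level, and band limits are assumptions.
import numpy as np

rng = np.random.default_rng(0)
fs = 30.0                                                  # nominal frame rate (Hz)
n = 600
frame_times = np.cumsum(rng.normal(1 / fs, 0.004, n))      # jittered frame timestamps
true_hr_hz = 72 / 60.0
signal = np.sin(2 * np.pi * true_hr_hz * frame_times) + 0.2 * rng.normal(size=n)

# Resample onto a uniform grid using the recorded timestamps.
t_uniform = np.arange(frame_times[0], frame_times[-1], 1 / fs)
resampled = np.interp(t_uniform, frame_times, signal)

# Spectral HR estimate from the uniformly resampled trace.
freqs = np.fft.rfftfreq(t_uniform.size, 1 / fs)
spectrum = np.abs(np.fft.rfft(resampled - resampled.mean()))
band = (freqs > 0.7) & (freqs < 3.0)                       # 42-180 bpm search band
print(f"estimated HR = {60 * freqs[band][np.argmax(spectrum[band])]:.1f} bpm")
```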
He, Jianbo; Li, Jijie; Huang, Zhongwen; Zhao, Tuanjie; Xing, Guangnan; Gai, Junyi; Guan, Rongzhan
2015-01-01
Experimental error control is very important in quantitative trait locus (QTL) mapping. Although numerous statistical methods have been developed for QTL mapping, a QTL detection model based on an appropriate experimental design that emphasizes error control has not been developed. Lattice design is very suitable for experiments with large sample sizes, which is usually required for accurate mapping of quantitative traits. However, the lack of a QTL mapping method based on lattice design dictates that the arithmetic mean or adjusted mean of each line of observations in the lattice design had to be used as a response variable, resulting in low QTL detection power. As an improvement, we developed a QTL mapping method termed composite interval mapping based on lattice design (CIMLD). In the lattice design, experimental errors are decomposed into random errors and block-within-replication errors. Four levels of block-within-replication errors were simulated to show the power of QTL detection under different error controls. The simulation results showed that the arithmetic mean method, which is equivalent to a method under random complete block design (RCBD), was very sensitive to the size of the block variance and with the increase of block variance, the power of QTL detection decreased from 51.3% to 9.4%. In contrast to the RCBD method, the power of CIMLD and the adjusted mean method did not change for different block variances. The CIMLD method showed 1.2- to 7.6-fold higher power of QTL detection than the arithmetic or adjusted mean methods. Our proposed method was applied to real soybean (Glycine max) data as an example and 10 QTLs for biomass were identified that explained 65.87% of the phenotypic variation, while only three and two QTLs were identified by arithmetic and adjusted mean methods, respectively.
Chow, Gary C C; Yam, Timothy T T; Chung, Joanne W Y; Fong, Shirley S M
2017-02-01
This single-blinded, three-armed randomized controlled trial aimed to compare the effects of postexercise ice-water immersion (IWI), room-temperature water immersion (RWI), and no water immersion on the balance performance and knee joint proprioception of amateur rugby players. Fifty-three eligible amateur rugby players (mean age ± standard deviation: 21.6 ± 2.9 years) were randomly assigned to the IWI group (5.3 °C), RWI group (25.0 °C), or the no immersion control group. The participants in each group underwent the same fatigue protocol followed by their allocated recovery intervention, which lasted for 1 minute. Measurements were taken before and after the fatigue-recovery intervention. The primary outcomes were the sensory organization test (SOT) composite equilibrium score (ES) and the condition-specific ES, which were measured using a computerized dynamic posturography machine. The secondary outcome was the knee joint repositioning error. Two-way repeated measures analysis of variance was used to test the effect of water immersion on each outcome variable. There were no significant within- and between-group differences in the SOT composite ESs or the condition-specific ESs. However, there was a group-by-time interaction effect on the knee joint repositioning error. It seems that participants in the RWI group had lower errors over time, but those in the IWI and control groups had increased errors over time. The RWI group had a significantly lower error score than the IWI group at postintervention. One minute of postexercise IWI or RWI did not impair rugby players' sensory organization of balance control. RWI had a less detrimental effect on knee joint proprioception than IWI at postintervention.
Computation of Standard Errors
Dowd, Bryan E; Greene, William H; Norton, Edward C
2014-01-01
Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
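The three computational approaches described above can be sketched compactly. The example below uses Python with statsmodels rather than the Stata/LIMDEP code discussed in the paper, and the data and function of interest (a predicted probability from a logit model) are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(0.5 + 1.0 * x)))).astype(int)
X = sm.add_constant(x)

fit = sm.Logit(y, X).fit(disp=0)
beta, V = fit.params, fit.cov_params()

def f(b, x0=1.0):
    """Function of interest: predicted probability at x = 1."""
    return 1 / (1 + np.exp(-(b[0] + b[1] * x0)))

# (1) Delta method: numerical gradient g, then se = sqrt(g' V g)
eps = 1e-6
g = np.array([(f(beta + eps * np.eye(2)[k]) - f(beta - eps * np.eye(2)[k])) / (2 * eps)
              for k in range(2)])
se_delta = np.sqrt(g @ V @ g)

# (2) Krinsky-Robb: draw parameters from their asymptotic normal distribution
draws = rng.multivariate_normal(beta, V, size=5000)
se_kr = np.std([f(b) for b in draws])

# (3) Nonparametric bootstrap: resample observations and refit
boot = []
for _ in range(500):
    idx = rng.integers(0, n, n)
    boot.append(f(sm.Logit(y[idx], X[idx]).fit(disp=0).params))
se_boot = np.std(boot)

print(se_delta, se_kr, se_boot)   # the three estimates should be close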
Statistical Quality Control of Moisture Data in GEOS DAS
NASA Technical Reports Server (NTRS)
Dee, D. P.; Rukhovets, L.; Todling, R.
1999-01-01
A new statistical quality control algorithm was recently implemented in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The final step in the algorithm consists of an adaptive buddy check that either accepts or rejects outlier observations based on a local statistical analysis of nearby data. A basic assumption in any such test is that the observed field is spatially coherent, in the sense that nearby data can be expected to confirm each other. However, the buddy check resulted in excessive rejection of moisture data, especially during the Northern Hemisphere summer. The analysis moisture variable in GEOS DAS is water vapor mixing ratio. Observational evidence shows that the distribution of mixing ratio errors is far from normal. Furthermore, spatial correlations among mixing ratio errors are highly anisotropic and difficult to identify. Both factors contribute to the poor performance of the statistical quality control algorithm. To alleviate the problem, we applied the buddy check to relative humidity data instead. This variable explicitly depends on temperature and therefore exhibits a much greater spatial coherence. As a result, reject rates of moisture data are much more reasonable and homogeneous in time and space.
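The core idea of a buddy check can be sketched simply: compare each observation's departure with those of its spatial neighbours and reject it when it disagrees beyond a tolerance scaled by the local spread. The sketch below is a simplified, non-adaptive illustration; the actual GEOS DAS algorithm adapts its tolerances and is not reproduced here.

```python
import numpy as np

def buddy_check(lat, lon, departure, radius_deg=2.0, tol=3.0):
    """Flag observations whose departures disagree with nearby ('buddy')
    departures by more than tol times the local spread (simplified check)."""
    lat, lon, departure = map(np.asarray, (lat, lon, departure))
    reject = np.zeros(departure.size, dtype=bool)
    for i in range(departure.size):
        near = (np.abs(lat - lat[i]) < radius_deg) & (np.abs(lon - lon[i]) < radius_deg)
        near[i] = False                     # exclude the observation itself
        if near.sum() < 3:                  # too few buddies to decide
            continue
        buddy_mean = departure[near].mean()
        buddy_std = max(departure[near].std(ddof=1), 1e-6)
        reject[i] = abs(departure[i] - buddy_mean) > tol * buddy_std
    return reject

# Example: one grossly erroneous relative-humidity departure among coherent neighbours
lat = np.array([40.0, 40.5, 41.0, 40.2, 40.8])
lon = np.array([-75.0, -74.5, -75.2, -74.8, -75.1])
dep = np.array([0.02, 0.03, 0.01, 0.45, 0.02])
print(buddy_check(lat, lon, dep))   # only the 0.45 value is flagged
```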
NASA Astrophysics Data System (ADS)
Fathalli, Bilel; Pohl, Benjamin; Castel, Thierry; Safi, Mohamed Jomâa
2018-02-01
Temporal and spatial variability of rainfall over Tunisia (at 12 km spatial resolution) is analyzed in a multi-year (1992-2011) ten-member ensemble simulation performed using the WRF model, and a sample of regional climate hindcast simulations from Euro-CORDEX. RCM errors and skills are evaluated against a dense network of local rain gauges. Uncertainties arising, on the one hand, from the different model configurations and, on the other hand, from internal variability are furthermore quantified and ranked at different timescales using simple spread metrics. Overall, the WRF simulation shows good skill for simulating spatial patterns of rainfall amounts over Tunisia, marked by strong altitudinal and latitudinal gradients, as well as the rainfall interannual variability, in spite of systematic errors. Mean rainfall biases are wet in both DJF and JJA seasons for the WRF ensemble, while they are dry in winter and wet in summer for most of the used Euro-CORDEX models. The sign of mean annual rainfall biases over Tunisia can also change from one member of the WRF ensemble to another. Skills in regionalizing precipitation over Tunisia are season dependent, with better correlations and weaker biases in winter. Larger inter-member spreads are observed in summer, likely because of (1) an attenuated large-scale control on Mediterranean and Tunisian climate, and (2) a larger contribution of local convective rainfall to the seasonal amounts. Inter-model uncertainties are globally stronger than those attributed to model's internal variability. However, inter-member spreads can be of the same magnitude in summer, emphasizing the important stochastic nature of the summertime rainfall variability over Tunisia.
Optimal Variable-Structure Control Tracking of Spacecraft Maneuvers
NASA Technical Reports Server (NTRS)
Crassidis, John L.; Vadali, Srinivas R.; Markley, F. Landis
1999-01-01
An optimal control approach using variable-structure (sliding-mode) tracking for large angle spacecraft maneuvers is presented. The approach expands upon a previously derived regulation result using a quaternion parameterization for the kinematic equations of motion. This parameterization is used since it is free of singularities. The main contribution of this paper is the utilization of a simple term in the control law that produces a maneuver to the reference attitude trajectory in the shortest distance. Also, a multiplicative error quaternion between the desired and actual attitude is used to derive the control law. Sliding-mode switching surfaces are derived using an optimal-control analysis. Control laws are given using either external torque commands or reaction wheel commands. Global asymptotic stability is shown for both cases using a Lyapunov analysis. Simulation results are shown which use the new control strategy to stabilize the motion of the Microwave Anisotropy Probe spacecraft.
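The multiplicative error quaternion between desired and actual attitude can be written down in a few lines. The sketch below assumes a scalar-last quaternion convention and a simple sign rule to select the shortest rotation; the paper's exact convention and control law are not reproduced.

```python
import numpy as np

def quat_mult(q1, q2):
    """Hamilton product of quaternions stored as [x, y, z, w]."""
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return np.array([
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    ])

def error_quaternion(q_desired, q_actual):
    """Multiplicative attitude error dq = q_desired^{-1} (x) q_actual,
    with the scalar part kept non-negative (shortest-distance maneuver)."""
    q_des_inv = np.array([-q_desired[0], -q_desired[1], -q_desired[2], q_desired[3]])
    dq = quat_mult(q_des_inv, q_actual)
    return dq if dq[3] >= 0 else -dq

q_des = np.array([0.0, 0.0, np.sin(np.pi / 8), np.cos(np.pi / 8)])  # 45 deg about z
q_act = np.array([0.0, 0.0, 0.0, 1.0])                               # identity attitude
print(error_quaternion(q_des, q_act))   # ~45 deg rotation about z remains to be tracked
```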
Evaluation of tactual displays for flight control
NASA Technical Reports Server (NTRS)
Levison, W. H.; Tanner, R. B.; Triggs, T. J.
1973-01-01
Manual tracking experiments were conducted to determine the suitability of tactual displays for presenting flight-control information in multitask situations. Although tracking error scores are considerably greater than scores obtained with a continuous visual display, preliminary results indicate that inter-task interference effects are substantially less with the tactual display in situations that impose high visual scanning workloads. The single-task performance degradation found with the tactual display appears to be a result of the coding scheme rather than the use of the tactual sensory mode per se. Analysis with the state-variable pilot/vehicle model shows that reliable predictions of tracking errors can be obtained for wide-band tracking systems once the pilot-related model parameters have been adjusted to reflect the pilot-display interaction.
Quality Control Methodology Of A Surface Wind Observational Database In North Eastern North America
NASA Astrophysics Data System (ADS)
Lucio-Eceiza, Etor E.; Fidel González-Rouco, J.; Navarro, Jorge; Conte, Jorge; Beltrami, Hugo
2016-04-01
This work summarizes the design and application of a Quality Control (QC) procedure for an observational surface wind database located in North Eastern North America. The database consists of 526 sites (486 land stations and 40 buoys) with varying resolutions of hourly, 3 hourly and 6 hourly data, compiled from three different source institutions with uneven measurement units and changing measuring procedures, instrumentation and heights. The records span from 1953 to 2010. The QC process is composed of different phases focused either on problems related with the providing source institutions or on measurement errors. The first phases deal with problems often related with data recording and management: (1) a compilation stage dealing with the detection of typographical errors, decoding problems, site displacements and unification of institutional practices; (2) detection of erroneous data sequence duplications within a station or among different ones; (3) detection of errors related with physically unrealistic data measurements. The last phases are focused on instrumental errors: (4) problems related with low variability, placing particular emphasis on the detection of unrealistically low wind speed records with the help of regional references; (5) high-variability related erroneous records; (6) standardization of wind speed record biases due to changing measurement heights, detection of wind speed biases on week to monthly timescales, and homogenization of wind direction records. As a result, around 1.7% of wind speed records and 0.4% of wind direction records have been deleted, making a combined total of 1.9% of removed records. Additionally, around 15.9% of wind speed records and 2.4% of wind direction data have also been corrected.
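Two of the checks listed above, physically unrealistic values (phase 3) and low-variability periods (phase 4), can be sketched with pandas. The column names, thresholds and window length below are illustrative assumptions, not those of the actual database or QC procedure.

```python
import numpy as np
import pandas as pd

def qc_wind(df, speed_col="wspd", dir_col="wdir", max_speed=75.0, window=24):
    """Flag physically unrealistic wind records and low-variability (stuck-value)
    periods; max_speed (m/s) and the window length are illustrative choices."""
    flags = pd.DataFrame(index=df.index)
    # phase 3: physically unrealistic measurements
    flags["bad_range"] = (df[speed_col] < 0) | (df[speed_col] > max_speed) \
                         | (df[dir_col] < 0) | (df[dir_col] > 360)
    # phase 4: a full window of identical wind speeds is suspicious
    flags["low_variability"] = df[speed_col].rolling(window).std().fillna(np.inf) == 0.0
    return flags

hours = pd.date_range("2000-01-01", periods=48, freq="h")
wspd = np.r_[np.full(24, 3.2), np.random.default_rng(2).uniform(0, 20, 24)]
df = pd.DataFrame({"wspd": wspd,
                   "wdir": np.random.default_rng(3).uniform(0, 360, 48)},
                  index=hours)
print(qc_wind(df).sum())   # counts of flagged records per check
```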
Model and experiments to optimize co-adaptation in a simplified myoelectric control system
NASA Astrophysics Data System (ADS)
Couraud, M.; Cattaert, D.; Paclet, F.; Oudeyer, P. Y.; de Rugy, A.
2018-04-01
Objective. To compensate for a limb lost in an amputation, myoelectric prostheses use surface electromyography (EMG) from the remaining muscles to control the prosthesis. Despite considerable progress, myoelectric controls remain markedly different from the way we normally control movements, and require intense user adaptation. To overcome this, our goal is to explore concurrent machine co-adaptation techniques that were developed in the field of brain-machine interfaces and that are beginning to be used in myoelectric controls. Approach. We combined a simplified myoelectric control with a perturbation for which human adaptation is well characterized and modeled, in order to explore co-adaptation settings in a principled manner. Results. First, we reproduced results obtained in a classical visuomotor rotation paradigm in our simplified myoelectric context, where we rotate the muscle pulling vectors used to reconstruct wrist force from EMG. Then, a model of human adaptation in response to directional error was used to simulate various co-adaptation settings, where perturbations and machine co-adaptation are both applied on muscle pulling vectors. These simulations established that a relatively low gain of machine co-adaptation that minimizes final errors generates slow and incomplete adaptation, while higher gains increase adaptation rate but also errors by amplifying noise. After experimental verification on real subjects, we tested a variable gain that combines the advantages of both, and implemented it with directionally tuned neurons similar to those used to model human adaptation. This enables machine co-adaptation to locally improve myoelectric control, and to absorb more challenging perturbations. Significance. The simplified context used here enabled us to explore co-adaptation settings in both simulations and experiments, and to raise important considerations such as the need for a variable gain encoded locally. The benefits and limits of extending this approach to more complex and functional myoelectric contexts are discussed.
Consensus of satellite cluster flight using an energy-matching optimal control method
NASA Astrophysics Data System (ADS)
Luo, Jianjun; Zhou, Liang; Zhang, Bo
2017-11-01
This paper presents an optimal control method for consensus of satellite cluster flight under a kind of energy-matching condition. Firstly, the relation between energy matching and satellite periodically bounded relative motion is analyzed, and the satellite energy-matching principle is applied to configure the initial conditions. Then, period-delayed errors are adopted as state variables to establish the period-delayed error dynamics models of a single satellite and of the cluster. Next, a novel satellite cluster feedback control protocol with coupling gain is designed, so that the satellite cluster periodically bounded relative motion consensus problem (the period-delayed error state consensus problem) is transformed into the stability of a set of matrices with the same low dimension. Based on consensus region theory from research on multi-agent system consensus, the coupling gain can be obtained to satisfy the requirement of the consensus region and to decouple the satellite cluster information topology from the feedback control gain matrix, which can be determined by the linear quadratic regulator (LQR) optimal method. This method can realize the consensus of satellite cluster period-delayed errors, leading to consistency of the semi-major axes (SMA) and energy matching of the satellite cluster. The satellites can then exhibit globally coordinated cluster behavior. Finally, the feasibility and effectiveness of the proposed energy-matching optimal consensus method for satellite cluster flight are verified through numerical simulations.
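The LQR step referred to above can be illustrated for a generic linear error model: solve the algebraic Riccati equation and form the state-feedback gain. The A, B, Q and R matrices below are illustrative placeholders, not the satellite cluster dynamics of the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator error dynamics: xdot = A x + B u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # state weighting
R = np.array([[1.0]])      # control weighting

# LQR: solve the continuous algebraic Riccati equation, then K = R^{-1} B' P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

print("gain K =", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))  # all have negative real parts
```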
COMPLEX VARIABLE BOUNDARY ELEMENT METHOD: APPLICATIONS.
Hromadka, T.V.; Yen, C.C.; Guymon, G.L.
1985-01-01
The complex variable boundary element method (CVBEM) is used to approximate several potential problems where analytical solutions are known. A modeling result produced from the CVBEM is a measure of relative error in matching the known boundary condition values of the problem. A CVBEM error-reduction algorithm is used to reduce the relative error of the approximation by adding nodal points in boundary regions where error is large. From the test problems, overall error is reduced significantly by utilizing the adaptive integration algorithm.
Method for controlling a vehicle with two or more independently steered wheels
Reister, D.B.; Unseren, M.A.
1995-03-28
A method is described for independently controlling each steerable drive wheel of a vehicle with two or more such wheels. An instantaneous center of rotation target and a tangential velocity target are inputs to a wheel target system which sends the velocity target and a steering angle target for each drive wheel to a pseudo-velocity target system. The pseudo-velocity target system determines a pseudo-velocity target which is compared to a current pseudo-velocity to determine a pseudo-velocity error. The steering angle targets and the steering angles are inputs to a steering angle control system which outputs to the steering angle encoders, which measure the steering angles. The pseudo-velocity error, the rate of change of the pseudo-velocity error, and the wheel slip between each pair of drive wheels are used to calculate intermediate control variables which, along with the steering angle targets are used to calculate the torque to be applied at each wheel. The current distance traveled for each wheel is then calculated. The current wheel velocities and steering angle targets are used to calculate the cumulative and instantaneous wheel slip and the current pseudo-velocity. 6 figures.
A vision-based end-point control for a two-link flexible manipulator. M.S. Thesis
NASA Technical Reports Server (NTRS)
Obergfell, Klaus
1991-01-01
The measurement and control of the end-effector position of a large two-link flexible manipulator are investigated. The system implementation is described and an initial algorithm for static end-point positioning is discussed. Most existing robots are controlled through independent joint controllers, while the end-effector position is estimated from the joint positions using a kinematic relation. End-point position feedback can be used to compensate for uncertainty and structural deflections. Such feedback is especially important for flexible robots. Computer vision is utilized to obtain end-point position measurements. A look-and-move control structure alleviates the disadvantages of the slow and variable computer vision sampling frequency. This control structure consists of an inner joint-based loop and an outer vision-based loop. A static positioning algorithm was implemented and experimentally verified. This algorithm utilizes the manipulator Jacobian to transform a tip position error to a joint error. The joint error is then used to give a new reference input to the joint controller. The convergence of the algorithm is demonstrated experimentally under payload variation. A Landmark Tracking System (Dickerson, et al 1990) is used for vision-based end-point measurements. This system was modified and tested. A real-time control system was implemented on a PC and interfaced with the vision system and the robot.
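The static positioning step described above, mapping a measured tip-position error into a joint correction through the manipulator Jacobian, can be sketched for a generic planar two-link arm. The link lengths, target and the assumption of perfect joint servoing are illustrative; this is not the thesis's implementation.

```python
import numpy as np

L1, L2 = 1.0, 0.8   # illustrative link lengths (m)

def forward_kinematics(q):
    """Planar two-link tip position for joint angles q = [q1, q2]."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def static_positioning(q, tip_target, measure_tip, iters=10):
    """Repeatedly convert the (vision-measured) tip error into a joint-space
    correction via the Jacobian pseudo-inverse and pass it to the joint
    controller, here idealized as perfect joint servoing."""
    for _ in range(iters):
        tip_error = tip_target - measure_tip(q)
        q = q + np.linalg.pinv(jacobian(q)) @ tip_error   # new joint reference
    return q

q0 = np.array([0.3, 0.6])
target = np.array([1.2, 0.9])
q_final = static_positioning(q0, target, forward_kinematics)
print(q_final, forward_kinematics(q_final))   # tip converges to the target
```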
The Challenges of Measuring Glycemic Variability
Rodbard, David
2012-01-01
This commentary reviews several of the challenges encountered when attempting to quantify glycemic variability and correlate it with risk of diabetes complications. These challenges include (1) immaturity of the field, including problems of data accuracy, precision, reliability, cost, and availability; (2) larger relative error in the estimates of glycemic variability than in the estimates of the mean glucose; (3) high correlation between glycemic variability and mean glucose level; (4) multiplicity of measures; (5) correlation of the multiple measures; (6) duplication or reinvention of methods; (7) confusion of measures of glycemic variability with measures of quality of glycemic control; (8) the problem of multiple comparisons when assessing relationships among multiple measures of variability and multiple clinical end points; and (9) differing needs for routine clinical practice and clinical research applications. PMID:22768904
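Challenge (3), the high correlation between glycemic variability and mean glucose, is easy to demonstrate numerically. The synthetic CGM traces below are an assumption made for illustration only; no real glucose data are involved.

```python
import numpy as np

rng = np.random.default_rng(4)

def cgm_profile(mean_glucose, n=288):
    """Synthetic one-day CGM trace (5-min samples); variability is made to
    scale with the mean, as is typical of real glucose data."""
    t = np.arange(n) * 5 / 60.0
    swings = 0.3 * mean_glucose * np.sin(2 * np.pi * t / 6.0)
    noise = 0.05 * mean_glucose * rng.normal(size=n)
    return mean_glucose + swings + noise

means, sds, cvs = [], [], []
for mg in rng.uniform(110, 220, 200):            # 200 synthetic subjects
    g = cgm_profile(mg)
    means.append(g.mean())
    sds.append(g.std(ddof=1))
    cvs.append(100 * g.std(ddof=1) / g.mean())   # %CV partially decouples SD from mean

print("corr(mean, SD)  =", round(np.corrcoef(means, sds)[0, 1], 2))   # near 1
print("corr(mean, %CV) =", round(np.corrcoef(means, cvs)[0, 1], 2))   # near 0
```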
Economic optimization of operations for hybrid energy systems under variable markets
Chen, Jen; Garcia, Humberto E.
2016-05-21
We propose a hybrid energy system (HES) as an important element to enable increasing penetration of clean energy. Our paper investigates the operations flexibility of HES and develops a methodology for operations optimization to maximize economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. In order to compensate for prediction error, a control strategy is accordingly designed to operate a standby energy storage element (ESE) to avoid energy imbalance within the HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulation results of two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under the proposed operations optimizer, suggesting the diversion of energy for alternative energy output while participating in the ancillary service market. Economic advantages of such an operations optimizer and the associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. Sensitivity analyses with respect to market variability and prediction error are also performed.
A comparative analysis of errors in long-term econometric forecasts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tepel, R.
1986-04-01
The growing body of literature that documents forecast accuracy falls generally into two parts. The first is prescriptive and is carried out by modelers who use simulation analysis as a tool for model improvement. These studies are ex post, that is, they make use of known values for exogenous variables and generate an error measure wholly attributable to the model. The second type of analysis is descriptive and seeks to measure errors, identify patterns among errors and variables, and compare forecasts from different sources. Most descriptive studies use an ex ante approach, that is, they evaluate model outputs based on estimated (or forecasted) exogenous variables. In this case, it is the forecasting process, rather than the model, that is under scrutiny. This paper uses an ex ante approach to measure errors in forecast series prepared by Data Resources Incorporated (DRI), Wharton Econometric Forecasting Associates (Wharton), and Chase Econometrics (Chase) and to determine if systematic patterns of errors can be discerned between services, types of variables (by degree of aggregation), length of forecast, and time at which the forecast is made. Errors are measured as the percent difference between actual and forecasted values for the historical period of 1971 to 1983.
Pyo, Geunyeong; Elble, Rodger J; Ala, Thomas; Markwell, Stephen J
2006-01-01
The performances of the uncertain/mild cognitive impairment (MCI) patients on the Alzheimer Disease Assessment Scale-Cognitive (ADAS-Cog) subscale were compared with those of normal controls, Alzheimer disease patients with CDR 0.5, and Alzheimer disease patients with CDR 1.0. The Uncertain/MCI group was significantly different from normal controls and Alzheimer disease CDR 0.5 or 1.0 groups on the ADAS-Cog except on a few non-memory subtests. Age was significantly correlated with total error score in the normal group, but there was no significant correlation between age and ADAS-Cog scores in the patient groups. Education was not significantly correlated with the ADAS-Cog scores in any group. Regardless of age and educational level, there were clear differences between the normal group and the Uncertain/MCI group, especially on the total error scores. We found that the total error score of the ADAS-Cog was the most reliable variable in detecting patients with mild cognitive impairment. The present study demonstrated that the ADAS-Cog is a promising tool for detecting and studying patients with mild cognitive impairment. The results also indicated that demographic variables such as age and education do not play a significant role in the diagnosis of mild cognitive impaired patients based on the ADAS-Cog scores.
Feng, Jianyuan; Turksoy, Kamuran; Samadi, Sediqeh; Hajizadeh, Iman; Littlejohn, Elizabeth; Cinar, Ali
2017-12-01
Supervision and control systems rely on signals from sensors to receive information to monitor the operation of a system and adjust manipulated variables to achieve the control objective. However, sensor performance is often limited by their working conditions and sensors may also be subjected to interference by other devices. Many different types of sensor errors such as outliers, missing values, drifts and corruption with noise may occur during process operation. A hybrid online sensor error detection and functional redundancy system is developed to detect errors in online signals, and replace erroneous or missing values detected with model-based estimates. The proposed hybrid system relies on two techniques, an outlier-robust Kalman filter (ORKF) and a locally-weighted partial least squares (LW-PLS) regression model, which leverage the advantages of automatic measurement error elimination with ORKF and data-driven prediction with LW-PLS. The system includes a nominal angle analysis (NAA) method to distinguish between signal faults and large changes in sensor values caused by real dynamic changes in process operation. The performance of the system is illustrated with clinical data from continuous glucose monitoring (CGM) sensors worn by people with type 1 diabetes. More than 50,000 CGM sensor errors were added to the original CGM signals from 25 clinical experiments, and the performance of the error detection and functional redundancy algorithms was then analyzed. The results indicate that the proposed system can successfully detect most of the erroneous signals and substitute them with reasonable estimated values computed by the functional redundancy system.
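The ORKF component can be approximated by a much simpler idea: a Kalman filter that skips updates whose innovation is implausibly large and holds the model estimate in place of the faulty sample. The sketch below is a crude scalar stand-in for illustration, with arbitrary noise parameters and synthetic glucose data; it is not the authors' algorithm.

```python
import numpy as np

def robust_kf(z, q=0.5, r=4.0, gate=3.0):
    """Scalar random-walk Kalman filter that rejects measurements whose
    innovation exceeds `gate` standard deviations, substituting the model
    prediction for the flagged sample."""
    x, p = z[0], r
    est, flagged = [], []
    for zk in z:
        p = p + q                      # predict (random-walk model)
        nu, s = zk - x, p + r          # innovation and its variance
        if abs(nu) > gate * np.sqrt(s):
            flagged.append(True)       # treat as sensor error: skip the update
        else:
            flagged.append(False)
            k = p / s
            x, p = x + k * nu, (1 - k) * p
        est.append(x)
    return np.array(est), np.array(flagged)

rng = np.random.default_rng(5)
true_glucose = 120 + np.cumsum(rng.normal(0, 0.5, 120))
cgm = true_glucose + rng.normal(0, 2, 120)
cgm[60] = 400.0                        # injected spike error
est, flagged = robust_kf(cgm)
print("flagged samples:", np.where(flagged)[0])   # the spike is caught; an
# occasional legitimate sample may also be flagged by this crude gate
```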
Complacency and Automation Bias in the Use of Imperfect Automation.
Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L
2015-08-01
We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.
On the internal target model in a tracking task
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Baron, S.
1981-01-01
An optimal control model for predicting an operator's dynamic responses and errors in target tracking ability is summarized. The model, which predicts asymmetry in the tracking data, is dependent on target maneuvers and trajectories. The gunner's perception, decision making, control, and estimation of target positions and velocities related to crossover intervals are discussed. The model provides estimates for means, standard deviations, and variances for the variables investigated and for operator estimates of future target positions and velocities.
Adaptive control of theophylline therapy: importance of blood sampling times.
D'Argenio, D Z; Khakmahd, K
1983-10-01
A two-observation protocol for estimating theophylline clearance during a constant-rate intravenous infusion is used to examine the importance of blood sampling schedules with regard to the information content of resulting concentration data. Guided by a theory for calculating maximally informative sample times, population simulations are used to assess the effect of specific sampling times on the precision of resulting clearance estimates and subsequent predictions of theophylline plasma concentrations. The simulations incorporated noise terms for intersubject variability, dosing errors, sample collection errors, and assay error. Clearance was estimated using Chiou's method, least squares, and a Bayesian estimation procedure. The results of these simulations suggest that clinically significant estimation and prediction errors may result when using the above two-point protocol for estimating theophylline clearance if the time separating the two blood samples is less than one population mean elimination half-life.
Some Insights of Spectral Optimization in Ocean Color Inversion
NASA Technical Reports Server (NTRS)
Lee, Zhongping; Franz, Bryan; Shang, Shaoling; Dong, Qiang; Arnone, Robert
2011-01-01
In the past decades various algorithms have been developed for the retrieval of water constituents from measurements of ocean color radiometry, and one of these approaches is spectral optimization. This approach defines an error target (or error function) between the input remote sensing reflectance and the output remote sensing reflectance, with the latter modeled with a few variables that represent the optically active properties (such as the absorption coefficient of phytoplankton and the backscattering coefficient of particles). The values of the variables when the error reaches a minimum (optimization is achieved) are considered the properties that form the input remote sensing reflectance; in other words, the equations are solved numerically. Applications of this approach implicitly assume that the error is a monotonic function of the various variables. Here, with data from numerical simulation and field measurements, we show the shape of the error surface, in order to justify the possibility of finding a solution for the various variables. In addition, because the spectral properties could be modeled differently, the impacts of such differences on the error surface as well as on the retrievals are also presented.
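The essence of spectral optimization, modelling the reflectance from a few variables and minimizing a spectral error function against the measurement, can be sketched as below. The forward model, wavelengths and coefficients are deliberately simplified, illustrative assumptions and are not the operational semi-analytical model used in ocean color processing.

```python
import numpy as np
from scipy.optimize import minimize

wl = np.array([412., 443., 490., 510., 555.])                 # wavelengths (nm)
a_w = np.array([0.0046, 0.0071, 0.015, 0.0325, 0.0596])        # water absorption (illustrative)
bb_w = np.array([0.0033, 0.0024, 0.0016, 0.0014, 0.0010])      # water backscatter (illustrative)

def rrs_model(p):
    """Toy reflectance model: Rrs ~ g * bb / (a + bb), with absorption and
    backscatter amplitudes (aph440, bbp440) as the two retrieved variables."""
    aph440, bbp440 = p
    a = a_w + aph440 * np.exp(-0.015 * (wl - 440.0))
    bb = bb_w + bbp440 * (440.0 / wl)
    return 0.089 * bb / (a + bb)

def error_fn(p, rrs_meas):
    """Spectral error target between modeled and measured reflectance."""
    return np.sum((rrs_model(p) - rrs_meas) ** 2)

rng = np.random.default_rng(6)
rrs_meas = rrs_model([0.05, 0.004]) * (1 + 0.01 * rng.normal(size=wl.size))
res = minimize(error_fn, x0=[0.01, 0.001], args=(rrs_meas,),
               bounds=[(1e-4, 1.0), (1e-5, 0.1)], method="L-BFGS-B")
print(res.x)   # should recover values near the true [0.05, 0.004]
```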
NASA Astrophysics Data System (ADS)
James, M. R.; Robson, S.; d'Oleire-Oltmanns, S.; Niethammer, U.
2017-03-01
Structure-from-motion (SfM) algorithms greatly facilitate the production of detailed topographic models from photographs collected using unmanned aerial vehicles (UAVs). However, the survey quality achieved in published geomorphological studies is highly variable, and sufficient processing details are never provided to understand fully the causes of variability. To address this, we show how survey quality and consistency can be improved through a deeper consideration of the underlying photogrammetric methods. We demonstrate the sensitivity of digital elevation models (DEMs) to processing settings that have not been discussed in the geomorphological literature, yet are a critical part of survey georeferencing, and are responsible for balancing the contributions of tie and control points. We provide a Monte Carlo approach to enable geomorphologists to (1) carefully consider sources of survey error and hence increase the accuracy of SfM-based DEMs and (2) minimise the associated field effort by robust determination of suitable lower-density deployments of ground control. By identifying appropriate processing settings and highlighting photogrammetric issues such as over-parameterisation during camera self-calibration, processing artefacts are reduced and the spatial variability of error minimised. We demonstrate such DEM improvements with a commonly-used SfM-based software (PhotoScan), which we augment with semi-automated and automated identification of ground control points (GCPs) in images, and apply to two contrasting case studies - an erosion gully survey (Taroudant, Morocco) and an active landslide survey (Super-Sauze, France). In the gully survey, refined processing settings eliminated step-like artefacts of up to 50 mm in amplitude, and overall DEM variability with GCP selection improved from 37 to 16 mm. In the much more challenging landslide case study, our processing halved planimetric error to 0.1 m, effectively doubling the frequency at which changes in landslide velocity could be detected. In both case studies, the Monte Carlo approach provided a robust demonstration that field effort could be substantially reduced by deploying only approximately half the number of GCPs, with minimal effect on the survey quality. To reduce processing artefacts and promote confidence in SfM-based geomorphological surveys, published results should report processing details, including the image residuals for both tie points and GCPs, and ensure that these are considered appropriately within the workflow.
Measurement Model Specification Error in LISREL Structural Equation Models.
ERIC Educational Resources Information Center
Baldwin, Beatrice; Lomax, Richard
This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…
Baigzadehnoe, Barmak; Rahmani, Zahra; Khosravi, Alireza; Rezaie, Behrooz
2017-09-01
In this paper, the position and force tracking control problem of cooperative robot manipulator system handling a common rigid object with unknown dynamical models and unknown external disturbances is investigated. The universal approximation properties of fuzzy logic systems are employed to estimate the unknown system dynamics. On the other hand, by defining new state variables based on the integral and differential of position and orientation errors of the grasped object, the error system of coordinated robot manipulators is constructed. Subsequently by defining the appropriate change of coordinates and using the backstepping design strategy, an adaptive fuzzy backstepping position tracking control scheme is proposed for multi-robot manipulator systems. By utilizing the properties of internal forces, extra terms are also added to the control signals to consider the force tracking problem. Moreover, it is shown that the proposed adaptive fuzzy backstepping position/force control approach ensures all the signals of the closed loop system uniformly ultimately bounded and tracking errors of both positions and forces can converge to small desired values by proper selection of the design parameters. Finally, the theoretic achievements are tested on the two three-link planar robot manipulators cooperatively handling a common object to illustrate the effectiveness of the proposed approach. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus
2015-10-01
In neuropsychological research, single-cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMM). We first show that a t-test for the significance of a dummy coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension to this idea, we then generalized the modified t-test to repeated measures data by using LMMs to compare the performance difference in two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power were tested based on Monte-Carlo simulations. We found that starting with about 15-20 participants in the control sample Type I error rates were close to the nominal Type I error rate using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single-case and a control sample. Copyright © 2015 Elsevier Ltd. All rights reserved.
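The equivalence stated above, that the modified t-test of Crawford and colleagues is identical to the t-test on a dummy-coded predictor in a linear regression, can be verified directly. The control scores and case score below are invented illustration data; the LMM extension itself (e.g. via a mixed-model package) is not shown.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

def crawford_t(case, controls):
    """Crawford & Howell modified t-test: is a single case's score
    abnormally different from a small control sample?"""
    controls = np.asarray(controls, float)
    n = controls.size
    t = (case - controls.mean()) / (controls.std(ddof=1) * np.sqrt(1 + 1 / n))
    p = 2 * stats.t.sf(abs(t), df=n - 1)
    return t, p

controls = np.array([12.1, 10.8, 11.5, 13.0, 12.4, 11.1, 12.9, 10.5,
                     11.8, 12.2, 13.1, 11.4, 12.7, 10.9, 11.6])
case = 7.4
t, p = crawford_t(case, controls)
print(round(t, 2), round(p, 4))

# Regression equivalence: dummy = 1 for the case, 0 for controls; the t statistic
# of the dummy coefficient in OLS equals the modified t above.
y = np.r_[controls, case]
X = sm.add_constant(np.r_[np.zeros(controls.size), 1.0])
print(sm.OLS(y, X).fit().tvalues[1])
```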
1996-2007 Interannual Spatio-Temporal Variability in Snowmelt in Two Montane Watersheds
NASA Astrophysics Data System (ADS)
Jepsen, S. M.; Molotch, N. P.; Williams, M. W.; Rittger, K. E.; Sickman, J. O.
2010-12-01
Snowmelt is a primary water resource for urban/agricultural centers and ecosystems near mountain regions. Stream chemistry from montane catchments is controlled by the flowpaths of water from snowmelt and the timing and duration of snow coverage. A process level understanding of the variability in these processes requires an understanding of the effect of changing climate and anthropogenic loading on spatio-temporal snowmelt patterns. With this as our objective, we applied a snow reconstruction model (SRM) to two well-studied montane watersheds, Tokopah Basin (TOK), California and Green Lake 4 Valley (GLV), Colorado, to examine interannual variability in the timing and location of snowmelt in response to variable climate conditions during the period from 1996 to 2007. The reconstruction model back solves for snowmelt by combining surface energy fluxes, inferred from meteorological data, with sequences of melt season snow images derived from satellite data (i.e., snowmelt depletion curves). The SRM explained 84% of the observed interannual variability in maximum watershed SWE in TOK, with errors ranging from -23 to +27% for the different years. For GLV4, the SRM explained 61% of the interannual variability, with errors ranging from -37 to +34%. In GLV4, interannual variability in snowmelt timing is a factor of four greater than the variability in streamflow timing, unlike in TOK where the ratio is nearly 1:1. We attribute this difference primarily to differences in the magnitude of the turbulent fluxes and the hydrogeology of the two study areas.
The impact of 14nm photomask variability and uncertainty on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Buck, Peter D.; Schulze, Steffen; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-09-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. Many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine via simulation the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to be accurately represented in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, while changes in the other variables are speculative, highlighting the need for improved metrology and communication between mask and OPC model experts. The simulations are done by ignoring the wafer photoresist model, and show the sensitivity of predictions to various model inputs associated with the mask. It is shown that the wafer simulations are very dependent upon the 1D/2D representation of the mask and, for 3D, that the mask sidewall angle is a very sensitive factor influencing simulated wafer CD results.
Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E
2011-06-22
Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate direction and magnitude of the effects of error over a range of error types.
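The distinction between classical-type and Berkson-type multiplicative error can be illustrated with a small simulation in the spirit of the approach described (error added on the log scale, Poisson log-linear health model). All parameter values and the data-generating setup below are illustrative assumptions, not the Atlanta study data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n, beta_true, sigma = 2000, 0.4, 0.3

def fit_beta(counts, exposure):
    """Slope from a Poisson log-linear model of daily counts on exposure."""
    X = sm.add_constant(exposure)
    return sm.GLM(counts, X, family=sm.families.Poisson()).fit().params[1]

# Classical-type multiplicative error: the measurement scatters around the truth
x_true = np.exp(rng.normal(0.0, 0.5, n))
y = rng.poisson(np.exp(1.0 + beta_true * x_true))
z_classical = x_true * np.exp(rng.normal(0, sigma, n))

# Berkson-type multiplicative error: the truth scatters around the assigned value
z_assigned = np.exp(rng.normal(0.0, 0.5, n))
x_true_b = z_assigned * np.exp(rng.normal(0, sigma, n))
y_b = rng.poisson(np.exp(1.0 + beta_true * x_true_b))

print("true slope:", beta_true)
print("classical-error estimate:", round(fit_beta(y, z_classical), 3))  # attenuated toward 0
print("Berkson-error estimate:  ", round(fit_beta(y_b, z_assigned), 3)) # much less attenuated
```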
NASA Astrophysics Data System (ADS)
Kunii, M.; Ito, K.; Wada, A.
2015-12-01
An ensemble Kalman filter (EnKF) using a regional mesoscale atmosphere-ocean coupled model was developed to represent the uncertainties of sea surface temperature (SST) in ensemble data assimilation strategies. The system was evaluated through data assimilation cycle experiments over a one-month period from July to August 2014, during which a tropical cyclone as well as severe rainfall events occurred. The results showed that the data assimilation cycle with the coupled model could reproduce SST distributions realistically even without updating SST and salinity during the data assimilation cycle. Therefore, atmospheric variables and radiation applied as a forcing to ocean models can control oceanic variables to some extent in the current data assimilation configuration. However, investigations of the forecast error covariance estimated in EnKF revealed that the correlation between atmospheric and oceanic variables could possibly lead to less flow-dependent error covariance for atmospheric variables owing to the difference in the time scales between atmospheric and oceanic variables. A verification of the analyses showed positive impacts of applying the ocean model to EnKF on precipitation forecasts. The use of EnKF with the coupled model system captured intensity changes of a tropical cyclone better than it did with an uncoupled atmosphere model, even though the impact on the track forecast was negligibly small.
Total ozone trend significance from space time variability of daily Dobson data
NASA Technical Reports Server (NTRS)
Wilcox, R. W.
1981-01-01
Estimates of standard errors of total ozone time and area means, as derived from ozone's natural temporal and spatial variability and autocorrelation in middle latitudes determined from daily Dobson data are presented. Assessing the significance of apparent total ozone trends is equivalent to assessing the standard error of the means. Standard errors of time averages depend on the temporal variability and correlation of the averaged parameter. Trend detectability is discussed, both for the present network and for satellite measurements.
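The dependence of the standard error of a time average on both variability and autocorrelation can be made concrete with the common first-order (AR(1)) effective-sample-size approximation; the synthetic daily anomaly series below is purely illustrative.

```python
import numpy as np

def se_of_mean_ar1(x):
    """Standard error of the mean of a series with lag-1 autocorrelation r,
    using the effective sample size n_eff = n * (1 - r) / (1 + r)."""
    x = np.asarray(x, float)
    n = x.size
    r = np.corrcoef(x[:-1], x[1:])[0, 1]     # lag-1 autocorrelation
    n_eff = n * (1 - r) / (1 + r)
    return x.std(ddof=1) / np.sqrt(n_eff), r, n_eff

# Example: AR(1) "daily total ozone anomalies" with strong persistence
rng = np.random.default_rng(8)
phi, n = 0.8, 365
x = np.zeros(n)
eps = rng.normal(0, 1, n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

se, r, n_eff = se_of_mean_ar1(x)
print("naive SE:", round(x.std(ddof=1) / np.sqrt(n), 3))
print("autocorrelation-adjusted SE:", round(se, 3), "(effective n ~", round(n_eff), ")")
```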
A modern control theory based algorithm for control of the NASA/JPL 70-meter antenna axis servos
NASA Technical Reports Server (NTRS)
Hill, R. E.
1987-01-01
A digital computer-based state variable controller was designed and applied to the 70-m antenna axis servos. The general equations and structure of the algorithm, and provisions for alternate position error feedback modes to accommodate intertarget slew, encoder-referenced tracking, and precision tracking modes, are described. Development of the discrete time domain control model and computation of estimator and control gain parameters based on closed-loop pole placement criteria are discussed. The new algorithm was successfully implemented and tested in the 70-m antenna at Deep Space Network station 63 in Spain.
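The pole-placement computation of controller and estimator gains can be illustrated for a generic discrete-time servo model; the matrices, sample period and pole locations below are illustrative assumptions, not the 70-m antenna model.

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative discrete-time servo model (position, rate), sample period T
T = 0.02
A = np.array([[1.0, T],
              [0.0, 1.0]])
B = np.array([[0.5 * T ** 2],
              [T]])
C = np.array([[1.0, 0.0]])          # only position is measured (encoder)

# State-feedback gain K: place the closed-loop poles of A - B K inside the unit circle
K = place_poles(A, B, [0.90, 0.85]).gain_matrix

# Estimator gain L: place the poles of A - L C using duality (design on A', C')
L = place_poles(A.T, C.T, [0.60, 0.50]).gain_matrix.T

print("control gain K:", K)
print("estimator gain L:", L.ravel())
print("controller poles:", np.linalg.eigvals(A - B @ K))
print("estimator poles:", np.linalg.eigvals(A - L @ C))
```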
Increased instrument intelligence--can it reduce laboratory error?
Jekelis, Albert W
2005-01-01
Recent literature has focused on the reduction of laboratory errors and the potential impact on patient management. This study assessed the intelligent, automated preanalytical process-control abilities of newer-generation analyzers as compared with older analyzers and the impact on error reduction. Three generations of immuno-chemistry analyzers were challenged with pooled human serum samples for a 3-week period. One of the three analyzers had an intelligent process of fluidics checks, including bubble detection. Bubbles can cause erroneous results due to incomplete sample aspiration. This variable was chosen because it is the most easily controlled sample defect that can be introduced. Traditionally, lab technicians have had to visually inspect each sample for the presence of bubbles. This is time consuming and introduces the possibility of human error. Instruments with bubble detection may be able to eliminate the human factor and reduce errors associated with the presence of bubbles. Specific samples were vortexed daily to introduce a visible quantity of bubbles, then immediately placed in the daily run. Errors were defined as a reported result greater than three standard deviations below the mean and associated with incomplete sample aspiration of the analyte on the individual analyzer. Three standard deviations represented the target limits of proficiency testing. The results of the assays were examined for accuracy and precision. Efficiency, measured as process throughput, was also assessed to associate a cost factor and the potential impact of error detection on the overall process. Analyzer performance stratified according to the level of internal process control. The older analyzers without bubble detection reported 23 erroneous results. The newest analyzer with bubble detection reported one specimen incorrectly. The precision and accuracy of the nonvortexed specimens were excellent and acceptable for all three analyzers. No errors were found in the nonvortexed specimens. There were no significant differences in overall process time for any of the analyzers when tests were arranged in an optimal configuration. The analyzer with advanced fluidic intelligence demonstrated the greatest ability to appropriately handle an incomplete aspiration by not processing and reporting a result for the sample. This study suggests that preanalytical process-control capabilities could reduce errors. By association, it implies that similar intelligent process controls could favorably impact the error rate and, in the case of this instrument, do so without negatively impacting process throughput. Other improvements may be realized as a result of having an intelligent error-detection process, including further reduction in misreported results, fewer repeats, less operator intervention, and less reagent waste.
Measurement variability error for estimates of volume change
James A. Westfall; Paul L. Patterson
2007-01-01
Using quality assurance data, measurement variability distributions were developed for attributes that affect tree volume prediction. Random deviations from the measurement variability distributions were applied to 19381 remeasured sample trees in Maine. The additional error due to measurement variation and measurement bias was estimated via a simulation study for...
Co-optimization of lithographic and patterning processes for improved EPE performance
NASA Astrophysics Data System (ADS)
Maslow, Mark J.; Timoshkov, Vadim; Kiers, Ton; Jee, Tae Kwon; de Loijer, Peter; Morikita, Shinya; Demand, Marc; Metz, Andrew W.; Okada, Soichiro; Kumar, Kaushik A.; Biesemans, Serge; Yaegashi, Hidetami; Di Lorenzo, Paolo; Bekaert, Joost P.; Mao, Ming; Beral, Christophe; Larivière, Stephane
2017-03-01
Complementary lithography is already being used for advanced logic patterns. The tight pitches for 1D Metal layers are expected to be created using spacer based multiple patterning ArF-i exposures and the more complex cut/block patterns are made using EUV exposures. At the same time, control requirements of CDU, pattern shift and pitch-walk are approaching sub-nanometer levels to meet edge placement error (EPE) requirements. Local variability sources, such as Line Edge Roughness (LER), Local CDU, and Local Placement Error (LPE), are dominant factors in the total Edge Placement Error budget. In the lithography process, improving the imaging contrast when printing the core pattern has been shown to improve the local variability. In the etch process, it has been shown that the fusion of atomic level etching and deposition can also improve these local variations. Co-optimization of lithography and etch processing is expected to further improve the performance over individual optimizations alone. To meet the scaling requirements and keep process complexity to a minimum, EUV is increasingly seen as the platform for delivering the exposures for both the grating and the cut/block patterns beyond N7. In this work, we evaluated the overlay and pattern fidelity of an EUV block printed in a negative tone resist on an ArF-i SAQP grating. High-order overlay modeling and corrections during the exposure can reduce overlay error after development, a significant component of the total EPE. During etch, additional degrees of freedom are available to improve the pattern placement error in single layer processes. Process control of advanced pitch nanoscale multi-patterning techniques as described above is exceedingly complicated in a high volume manufacturing environment. Incorporating potential patterning optimizations into both design and HVM controls for the lithography process is expected to bring a combined benefit over individual optimizations. In this work we will show the EPE performance improvement for a 32nm pitch SAQP + block patterned Metal 2 layer by co-optimizing the lithography and etch processes. Recommendations for further improvements and alternative processes will be given.
Greenland ice sheet albedo variability and feedback: 2000-2015
NASA Astrophysics Data System (ADS)
Box, J. E.; van As, D.; Fausto, R. S.; Mottram, R.; Langen, P. P.; Steffen, K.
2015-12-01
Absorbed solar irradiance represents the dominant source of surface melt energy for Greenland ice. Surface melting has increased as part of a positive feedback amplifier due to surface darkening. The 16 most recent summers of observations from the NASA MODIS sensor indicate a darkening exceeding 6% in July when most melting occurs. Without the darkening, the increase in surface melting would be roughly half as large. A minority of the albedo decline signal may be from sensor degradation. So, in this study, MOD10A1 and MCD43 albedo products from MODIS are evaluated for sensor degradation and anisotropic reflectance errors. Errors are minimized through calibration to GC-Net and PROMICE Greenland snow and ice ground control data. The seasonal and spatial variability in Greenland snow and ice albedo over a 16 year period is presented, including quantifying changing absorbed solar irradiance and melt enhancement due to albedo feedback using the DMI HIRHAM5 5 km model.
Reproducibility of 3D kinematics and surface electromyography measurements of mastication.
Remijn, Lianne; Groen, Brenda E; Speyer, Renée; van Limbeek, Jacques; Nijhuis-van der Sanden, Maria W G
2016-03-01
The aim of this study was to determine the measurement reproducibility of a procedure for evaluating the mastication process and to estimate the smallest detectable differences of 3D kinematic and surface electromyography (sEMG) variables. Kinematics of mandible movements and sEMG activity of the masticatory muscles were obtained over two sessions with four conditions: two food textures (biscuit and bread) of two sizes (small and large). Twelve healthy adults (mean age 29.1 years) completed the study. The second to the fifth chewing cycle of 5 bites were used for analyses. The reproducibility of each outcome variable was calculated with an intraclass correlation coefficient (ICC), and a Bland-Altman analysis was applied to determine the standard error of measurement, relative error of measurement, and smallest detectable differences of all variables. ICCs ranged from 0.71 to 0.98 for all outcome variables. The outcome variables consisted of four bite and fourteen chewing cycle variables. The relative standard error of measurement of the bite variables was up to 17.3% for 'time-to-swallow', 'time-to-transport' and 'number of chewing cycles', but ranged from 31.5% to 57.0% for 'change of chewing side'. The relative standard error of measurement ranged from 4.1% to 24.7% for chewing cycle variables and was smaller for kinematic variables than for sEMG variables. In general, 3D kinematics and sEMG are reproducible techniques for assessing the mastication process. The duration of the chewing cycle and the frequency of chewing were the most reproducible measurements. Change of chewing side could not be reproduced. The published measurement error and smallest detectable differences will aid the interpretation of the results of future clinical studies using the same study variables. Copyright © 2015 Elsevier Inc. All rights reserved.
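The Bland-Altman-based quantities mentioned above follow from a short calculation: the standard error of measurement (SEM) from the within-subject spread of session differences, and the smallest detectable difference (SDD) as 1.96 x sqrt(2) x SEM. The example data below are invented for illustration and are not the study's measurements.

```python
import numpy as np

def reproducibility_stats(session1, session2):
    """SEM and smallest detectable difference (SDD) from two repeated sessions,
    using the standard deviation of the paired differences."""
    d = np.asarray(session2, float) - np.asarray(session1, float)
    sem = d.std(ddof=1) / np.sqrt(2)          # standard error of measurement
    sdd = 1.96 * np.sqrt(2) * sem             # 95% smallest detectable difference
    mean = np.mean(np.r_[session1, session2])
    return sem, 100 * sem / mean, sdd         # SEM, relative SEM (%), SDD

# Illustrative chewing-cycle durations (s) for 12 participants, two sessions
s1 = np.array([0.72, 0.68, 0.81, 0.75, 0.69, 0.77, 0.70, 0.74, 0.79, 0.66, 0.73, 0.71])
s2 = np.array([0.70, 0.71, 0.79, 0.77, 0.67, 0.75, 0.72, 0.73, 0.81, 0.68, 0.72, 0.70])
sem, rel_sem, sdd = reproducibility_stats(s1, s2)
print(f"SEM = {sem:.3f} s ({rel_sem:.1f}%), SDD = {sdd:.3f} s")
```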
Chow, John W; Stokic, Dobrivoje S
2018-03-01
We examined changes in variability, accuracy, frequency composition, and temporal regularity of force signal from vision-guided to memory-guided force-matching tasks in 17 subacute stroke and 17 age-matched healthy subjects. Subjects performed a unilateral isometric knee extension at 10, 30, and 50% of peak torque [maximum voluntary contraction (MVC)] for 10 s (3 trials each). Visual feedback was removed at the 5-s mark in the first two trials (feedback withdrawal), and 30 s after the second trial the subjects were asked to produce the target force without visual feedback (force recall). The coefficient of variation and constant error were used to quantify force variability and accuracy. Force structure was assessed by the median frequency, relative spectral power in the 0-3-Hz band, and sample entropy of the force signal. At 10% MVC, the force signal in subacute stroke subjects became steadier, more broadband, and temporally more irregular after the withdrawal of visual feedback, with progressively larger error at higher contraction levels. Also, the lack of modulation in the spectral frequency at higher force levels with visual feedback persisted in both the withdrawal and recall conditions. In terms of changes from the visual feedback condition, the feedback withdrawal produced a greater difference between the paretic, nonparetic, and control legs than the force recall. The overall results suggest improvements in force variability and structure from vision- to memory-guided force control in subacute stroke despite decreased accuracy. Different sensory-motor memory retrieval mechanisms seem to be involved in the feedback withdrawal and force recall conditions, which deserves further study. NEW & NOTEWORTHY We demonstrate that in the subacute phase of stroke, force signals during a low-level isometric knee extension become steadier, more broadband in spectral power, and more complex after removal of visual feedback. Larger force errors are produced when recalling target forces than immediately after withdrawing visual feedback. Although visual feedback offers better accuracy, it worsens force variability and structure in subacute stroke. The feedback withdrawal and force recall conditions seem to involve different memory retrieval mechanisms.
Partially Key Distribution with Public Key Cryptosystem Based on Error Control Codes
NASA Astrophysics Data System (ADS)
Tavallaei, Saeed Ebadi; Falahati, Abolfazl
Due to the low level of security in public key cryptosystems based on number theory, fundamental difficulties such as "key escrow" in Public Key Infrastructure (PKI) and a secure channel in ID-based cryptography, a new key distribution cryptosystem based on Error Control Codes (ECC) is proposed . This idea is done by some modification on McEliece cryptosystem. The security of ECC cryptosystem obtains from the NP-Completeness of block codes decoding. The capability of generating public keys with variable lengths which is suitable for different applications will be provided by using ECC. It seems that usage of these cryptosystems because of decreasing in the security of cryptosystems based on number theory and increasing the lengths of their keys would be unavoidable in future.
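To make the McEliece principle that the abstract builds on concrete, here is a toy sketch using a tiny (7,4) Hamming code, a secret permutation, and a single intentional bit error. The scrambling matrix of the full scheme is omitted for brevity and the parameters are far too small to be secure; this is an illustration of the idea, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Systematic (7,4) Hamming code: G = [I | A], H = [A^T | I], so G @ H^T = 0 mod 2
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), A])            # 4x7 generator matrix
H = np.hstack([A.T, np.eye(3, dtype=int)])          # 3x7 parity-check matrix

P = np.eye(7, dtype=int)[rng.permutation(7)]        # secret permutation matrix
G_pub = (G @ P) % 2                                 # public key (scrambler omitted)

def encrypt(msg, G_pub):
    e = np.zeros(7, dtype=int)
    e[rng.integers(7)] = 1                          # one intentional error bit
    return (msg @ G_pub + e) % 2

def decrypt(cipher, P, H):
    y = (cipher @ P.T) % 2                          # undo permutation (P^-1 = P^T)
    s = (H @ y) % 2                                 # syndrome of the single error
    if s.any():
        # the syndrome equals the column of H at the error position
        pos = np.flatnonzero((H.T == s).all(axis=1))[0]
        y[pos] ^= 1
    return y[:4]                                    # systematic code: message = first 4 bits

msg = np.array([1, 0, 1, 1])
assert np.array_equal(decrypt(encrypt(msg, G_pub), P, H), msg)
```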
The Neural-fuzzy Thermal Error Compensation Controller on CNC Machining Center
NASA Astrophysics Data System (ADS)
Tseng, Pai-Chung; Chen, Shen-Len
Geometric errors and structural thermal deformation are factors that influence the machining accuracy of Computer Numerical Control (CNC) machining centers. Therefore, researchers pay attention to thermal error compensation technologies on CNC machine tools. Some real-time error compensation techniques have been successfully demonstrated in both laboratories and industrial sites, but the compensation results still need to be enhanced. In this research, neural-fuzzy theory is used to derive a thermal prediction model. An IC-type thermometer detects the temperature variation of the heat sources, and the thermal drifts are measured online by a touch-triggered probe with a standard bar. A thermal prediction model is then derived by neural-fuzzy theory from the temperature variation and the thermal drifts. A graphical user interface (GUI) is also built with Inprise C++ Builder to provide a user-friendly operation interface. The experimental results show that the thermal prediction model developed by the neural-fuzzy methodology can improve machining accuracy from 80 µm to 3 µm. Compared with multi-variable linear regression analysis, the compensation accuracy is improved from ±10 µm to ±3 µm.
ERIC Educational Resources Information Center
Song, Xin-Yuan; Lee, Sik-Yum
2006-01-01
Structural equation models are widely appreciated in social-psychological research and other behavioral research to model relations between latent constructs and manifest variables and to control for measurement error. Most applications of SEMs are based on fully observed continuous normal data and models with a linear structural equation.…
Using the Graded Response Model to Control Spurious Interactions in Moderated Multiple Regression
ERIC Educational Resources Information Center
Morse, Brendan J.; Johanson, George A.; Griffeth, Rodger W.
2012-01-01
Recent simulation research has demonstrated that using simple raw score to operationalize a latent construct can result in inflated Type I error rates for the interaction term of a moderated statistical model when the interaction (or lack thereof) is proposed at the latent variable level. Rescaling the scores using an appropriate item response…
ERIC Educational Resources Information Center
Cho, Sun-Joo; Preacher, Kristopher J.
2016-01-01
Multilevel modeling (MLM) is frequently used to detect cluster-level group differences in cluster randomized trial and observational studies. Group differences on the outcomes (posttest scores) are detected by controlling for the covariate (pretest scores) as a proxy variable for unobserved factors that predict future attributes. The pretest and…
Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard
2011-01-01
In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables regression (EIV) allows for correction of bias caused by measurement error in predictor variables, based on the knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models vs. those obtained by the standard procedures. Results of the simulations show that Ordinary Least Squares (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of OLS estimates. In conclusion, EIV is a better alternative than OLS to estimate the effect of bone lead when measured by KXRF. Copyright © 2010 Elsevier Inc. All rights reserved.
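A minimal simulation of the attenuation problem and the reliability-based correction discussed above. The reliability coefficient here is taken directly from the simulated error variance rather than from instrument-reported KXRF uncertainties, so it only illustrates the principle, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
beta_true = 0.5

x = rng.normal(10, 3, n)                    # true exposure (e.g., bone lead, arbitrary units)
u = rng.normal(0, 3, n)                     # measurement error (KXRF-like noise)
w = x + u                                   # observed, error-prone exposure
y = beta_true * x + rng.normal(0, 1, n)     # outcome generated from the true exposure

# Naive OLS on the error-prone exposure: the slope is attenuated toward zero
beta_ols = np.cov(w, y)[0, 1] / np.var(w, ddof=1)

# Reliability coefficient lambda = Var(X) / (Var(X) + Var(U));
# here computed from the known simulation variances (in practice it would come
# from instrument-reported uncertainties or replicate measurements)
lam = np.var(x, ddof=1) / np.var(w, ddof=1)
beta_eiv = beta_ols / lam                   # errors-in-variables (attenuation) correction

print(f"true {beta_true:.2f}  naive {beta_ols:.2f}  corrected {beta_eiv:.2f}")
```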
NASA Astrophysics Data System (ADS)
James, Mike R.; Robson, Stuart; d'Oleire-Oltmanns, Sebastian; Niethammer, Uwe
2016-04-01
Structure-from-motion (SfM) algorithms are greatly facilitating the production of detailed topographic models based on images collected by unmanned aerial vehicles (UAVs). However, SfM-based software does not generally provide the rigorous photogrammetric analysis required to fully understand survey quality. Consequently, error related to problems in control point data or the distribution of control points can remain undiscovered. Even if these errors are not large in magnitude, they can be systematic, and thus have strong implications for the use of products such as digital elevation models (DEMs) and orthophotos. Here, we develop a Monte Carlo approach to (1) improve the accuracy of products when SfM-based processing is used and (2) reduce the associated field effort by identifying suitable lower density deployments of ground control points. The method highlights over-parameterisation during camera self-calibration and provides enhanced insight into control point performance when rigorous error metrics are not available. Processing was implemented using commonly-used SfM-based software (Agisoft PhotoScan), which we augment with semi-automated and automated GCP image measurement. We apply the Monte Carlo method to two contrasting case studies - an erosion gully survey (Taurodont, Morocco) carried out with a fixed-wing UAV, and an active landslide survey (Super-Sauze, France), acquired using a manually controlled quadcopter. The results highlight the differences in the control requirements for the two sites, and we explore the implications for future surveys. We illustrate DEM sensitivity to critical processing parameters and show how the use of appropriate parameter values increases DEM repeatability and reduces the spatial variability of error due to processing artefacts.
Error propagation of partial least squares for parameters optimization in NIR modeling.
Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng
2018-03-05
A novel methodology is proposed to determine the error propagation of partial least-squares (PLS) for parameters optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. The error propagation of modeling parameters for water quantity in corn and geniposide quantity in Gardenia was presented in terms of both type I and type II error. For example, when variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrated how and to what extent the different modeling parameters affect error propagation of PLS for parameters optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials completed a powerful process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, it could provide significant guidance for the selection of modeling parameters of other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.
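As a schematic illustration of how one modeling parameter, the number of latent variables, is typically tuned in PLS calibration, the sketch below cross-validates PLS components on synthetic spectra with scikit-learn. The datasets, preprocessing options, and error-weight analysis of the paper itself are not reproduced; all values here are invented.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 80, 200
latent = rng.normal(size=(n_samples, 3))                    # 3 underlying spectral factors
loadings = rng.normal(size=(3, n_wavelengths))
X = latent @ loadings + 0.05 * rng.normal(size=(n_samples, n_wavelengths))
y = 2.0 * latent[:, 0] + 0.1 * rng.normal(size=n_samples)   # analyte tied to factor 1

# choose the number of latent variables by cross-validated RMSE
for n_comp in range(1, 8):
    pls = PLSRegression(n_components=n_comp)
    rmse = -cross_val_score(pls, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{n_comp} latent variables: CV RMSE = {rmse:.3f}")
```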
Wang, Ching-Yun; Cullings, Harry; Song, Xiao; Kopecky, Kenneth J.
2017-01-01
Observational epidemiological studies often confront the problem of estimating exposure-disease relationships when the exposure is not measured exactly. In the paper, we investigate exposure measurement error in excess relative risk regression, which is a widely used model in radiation exposure effect research. In the study cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies a generalized version of the classical additive measurement error model, but it may or may not have repeated measurements. In addition, an instrumental variable is available for individuals in a subset of the whole cohort. We develop a nonparametric correction (NPC) estimator using data from the subcohort, and further propose a joint nonparametric correction (JNPC) estimator using all observed data to adjust for exposure measurement error. An optimal linear combination estimator of JNPC and NPC is further developed. The proposed estimators are nonparametric, which are consistent without imposing a covariate or error distribution, and are robust to heteroscedastic errors. Finite sample performance is examined via a simulation study. We apply the developed methods to data from the Radiation Effects Research Foundation, in which chromosome aberration is used to adjust for the effects of radiation dose measurement error on the estimation of radiation dose responses. PMID:29354018
SIVA/DIVA- INITIAL VALUE ORDINARY DIFFERENTIAL EQUATION SOLUTION VIA A VARIABLE ORDER ADAMS METHOD
NASA Technical Reports Server (NTRS)
Krogh, F. T.
1994-01-01
The SIVA/DIVA package is a collection of subroutines for the solution of ordinary differential equations. There are versions for single precision and double precision arithmetic. These solutions are applicable to stiff or nonstiff differential equations of first or second order. SIVA/DIVA requires fewer evaluations of derivatives than other variable order Adams predictor-corrector methods. There is an option for the direct integration of second order equations which can make integration of trajectory problems significantly more efficient. Other capabilities of SIVA/DIVA include: monitoring a user supplied function which can be separate from the derivative; dynamically controlling the step size; displaying or not displaying output at initial, final, and step size change points; saving the estimated local error; and reverse communication where subroutines return to the user for output or computation of derivatives instead of automatically performing calculations. The user must supply SIVA/DIVA with: 1) the number of equations; 2) initial values for the dependent and independent variables, integration stepsize, error tolerance, etc.; and 3) the driver program and operational parameters necessary for subroutine execution. SIVA/DIVA contains an extensive diagnostic message library should errors occur during execution. SIVA/DIVA is written in FORTRAN 77 for batch execution and is machine independent. It has a central memory requirement of approximately 120K of 8 bit bytes. This program was developed in 1983 and last updated in 1987.
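A stripped-down illustration of the predictor-corrector idea behind a variable-order Adams package such as SIVA/DIVA: a fixed-order, fixed-step Adams-Bashforth-Moulton PECE step whose predictor-corrector difference serves as a local error estimate. The real package adds variable order, variable step size, second-order equations, and reverse communication, none of which are shown here.

```python
import numpy as np

def adams_pece(f, t0, y0, h, n_steps):
    """2nd-order Adams-Bashforth predictor + trapezoidal corrector (PECE).
    Returns times, solutions, and per-step local error estimates."""
    t, y = [t0], [np.asarray(y0, float)]
    # one explicit midpoint (RK2) step to obtain the second starting value
    k1 = f(t0, y[0])
    y1 = y[0] + h * f(t0 + h / 2, y[0] + h / 2 * k1)
    t.append(t0 + h); y.append(y1)
    err = [0.0, 0.0]
    for n in range(1, n_steps):
        fn, fnm1 = f(t[n], y[n]), f(t[n - 1], y[n - 1])
        y_pred = y[n] + h / 2 * (3 * fn - fnm1)                 # Predict (AB2)
        f_pred = f(t[n] + h, y_pred)                            # Evaluate
        y_corr = y[n] + h / 2 * (fn + f_pred)                   # Correct (trapezoid)
        err.append(float(np.max(np.abs(y_corr - y_pred))))      # saved local error estimate
        t.append(t[n] + h); y.append(y_corr)                    # (second Evaluate deferred)
    return np.array(t), np.array(y), np.array(err)

# usage: y' = -y, y(0) = 1
ts, ys, errs = adams_pece(lambda t, y: -y, 0.0, [1.0], 0.1, 50)
print(ys[-1], np.exp(-ts[-1]))   # numerical vs exact solution
```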
NASA Astrophysics Data System (ADS)
Lippert, Ross A.; Predescu, Cristian; Ierardi, Douglas J.; Mackenzie, Kenneth M.; Eastwood, Michael P.; Dror, Ron O.; Shaw, David E.
2013-10-01
In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity.
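A toy sketch of the scheme's central idea: interleaving many Newtonian (velocity Verlet) steps with an occasional thermostat update, here a simple velocity-rescaling thermostat on independent harmonic oscillators. The framework described in the abstract uses proper thermostat and barostat dynamics on parallel hardware; this is only a conceptual illustration with made-up parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
N, k, m, kB, T_target = 1000, 1.0, 1.0, 1.0, 1.5
dt, n_steps, thermo_interval = 0.01, 5000, 50     # thermostat applied every 50 steps

x = rng.normal(0, 1, N)
v = rng.normal(0, 1, N)

def force(x):
    return -k * x                                  # independent harmonic oscillators

f = force(x)
for step in range(n_steps):
    # Newtonian particle update (velocity Verlet)
    v += 0.5 * dt * f / m
    x += dt * v
    f = force(x)
    v += 0.5 * dt * f / m
    # infrequent thermostat update: rescale velocities toward the target temperature
    if step % thermo_interval == 0:
        T_inst = m * np.mean(v ** 2) / kB          # instantaneous temperature (1D)
        v *= np.sqrt(T_target / T_inst)

print("final temperature:", m * np.mean(v ** 2) / kB)
```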
Hammer, Eva M.; Kaufmann, Tobias; Kleih, Sonja C.; Blankertz, Benjamin; Kübler, Andrea
2014-01-01
Modulation of sensorimotor rhythms (SMR) was suggested as a control signal for brain-computer interfaces (BCI). Yet, there is a population of users, estimated at between 10 and 50%, not able to achieve reliable control, and only about 20% of users achieve high (80–100%) performance. Predicting performance prior to BCI use would facilitate selection of the most feasible system for an individual, thus constituting a practical benefit for the user, and increase our knowledge about the correlates of BCI control. In a recent study, we predicted SMR-BCI performance from psychological variables that were assessed prior to the BCI sessions, and BCI control was supported with machine-learning techniques. We described two significant psychological predictors, namely the visuo-motor coordination ability and the ability to concentrate on the task. The purpose of the current study was to replicate these results, thereby validating these predictors within a neurofeedback-based SMR-BCI that involved no machine learning. Thirty-three healthy BCI novices participated in a calibration session and three further neurofeedback training sessions. Two variables were related to mean SMR-BCI performance: (1) a measure of the accuracy of fine motor skills, i.e., an index of a person's visuo-motor control ability; and (2) subjects' "attentional impulsivity". In a linear regression they accounted for almost 20% of the variance in SMR-BCI performance, but predictor (1) failed significance. Nevertheless, on the basis of our prior regression model for sensorimotor control ability we could predict current SMR-BCI performance with an average prediction error of M = 12.07%. In more than 50% of the participants, the prediction error was smaller than 10%. Hence, psychological variables played a moderate role in predicting SMR-BCI performance in a neurofeedback approach that involved no machine learning. Future studies are needed to further consolidate (or reject) the present predictors. PMID:25147518
ERIC Educational Resources Information Center
Cole, Russell; Haimson, Joshua; Perez-Johnson, Irma; May, Henry
2011-01-01
State assessments are increasingly used as outcome measures for education evaluations. The scaling of state assessments produces variability in measurement error, with the conditional standard error of measurement increasing as average student ability moves toward the tails of the achievement distribution. This report examines the variability in…
Anticipatory synergy adjustments reflect individual performance of feedforward force control.
Togo, Shunta; Imamizu, Hiroshi
2016-10-06
We grasp and dexterously manipulate an object through multi-digit synergy. In the framework of the uncontrolled manifold (UCM) hypothesis, multi-digit synergy is defined as the coordinated control mechanism of fingers to stabilize variables important for task success, e.g., total force. Previous studies reported anticipatory synergy adjustments (ASAs) that correspond to a drop of the synergy index before a quick change of the total force. The present study compared ASA properties with individual performance of feedforward force control to investigate the relationship between them. Subjects performed a total finger force production task that consisted of a phase in which subjects tracked a target line with visual information and a phase in which subjects produced a total force pulse without visual information. We quantified their multi-digit synergy through UCM analysis and observed significant ASAs before the production of the total force pulse. The time of the ASA initiation and the magnitude of the drop of the synergy index were significantly correlated with the error of the force pulse, but not with the tracking error. Almost all subjects showed a significant increase of the variance that affected the total force. Our study directly showed that ASA reflects the individual performance of feedforward force control independently of target-tracking performance and suggests that the multi-digit synergy was weakened to adjust the multi-digit movements based on a prediction error so as to reduce the future error. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
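For orientation, the sketch below shows the core UCM computation for a total-force task: trial-to-trial finger-force deviations are split into a component that changes the total force (orthogonal to the UCM) and a component that does not (within the UCM), giving the synergy index whose pre-pulse drop defines an ASA. The data are synthetic; the study's actual time windows and normalization are not reproduced.

```python
import numpy as np

def synergy_index(finger_forces):
    """finger_forces: trials x n_fingers matrix for one time sample.
    Task variable = total force, so the Jacobian is a row of ones."""
    F = np.asarray(finger_forces, float)
    n = F.shape[1]
    dev = F - F.mean(axis=0)                       # demeaned finger forces
    e_ort = np.ones(n) / np.sqrt(n)                # direction that changes total force
    proj = dev @ e_ort
    v_ort = np.var(proj, ddof=1)                   # variance per DOF, 1-D orthogonal space
    v_tot = np.sum(np.var(dev, axis=0, ddof=1))
    v_ucm = (v_tot - v_ort) / (n - 1)              # variance per DOF within the UCM
    return (v_ucm - v_ort) / (v_tot / n)           # synergy index (delta V)

rng = np.random.default_rng(7)
shared = rng.normal(0, 1.0, (40, 1))               # compensated (UCM) variability
trials = 5 + shared * np.array([1, -1, 1, -1]) + rng.normal(0, 0.2, (40, 4))
print(synergy_index(trials))                       # > 0 indicates a force-stabilizing synergy
```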
Xia, Yongqiu; Weller, Donald E; Williams, Meghan N; Jordan, Thomas E; Yan, Xiaoyuan
2016-11-15
Export coefficient models (ECMs) are often used to predict nutrient sources and sinks in watersheds because ECMs can flexibly incorporate processes and have minimal data requirements. However, ECMs do not quantify uncertainties in model structure, parameters, or predictions; nor do they account for spatial and temporal variability in land characteristics, weather, and management practices. We applied Bayesian hierarchical methods to address these problems in ECMs used to predict nitrate concentration in streams. We compared four model formulations, a basic ECM and three models with additional terms to represent competing hypotheses about the sources of error in ECMs and about spatial and temporal variability of coefficients: an ADditive Error Model (ADEM), a SpatioTemporal Parameter Model (STPM), and a Dynamic Parameter Model (DPM). The DPM incorporates a first-order random walk to represent spatial correlation among parameters and a dynamic linear model to accommodate temporal correlation. We tested the modeling approach in a proof of concept using watershed characteristics and nitrate export measurements from watersheds in the Coastal Plain physiographic province of the Chesapeake Bay drainage. Among the four models, the DPM was the best: it had the lowest mean error, explained the most variability (R² = 0.99), had the narrowest prediction intervals, and provided the most effective tradeoff between fit and complexity (its deviance information criterion, DIC, was 45.6 units lower than any other model, indicating overwhelming support for the DPM). The superiority of the DPM supports its underlying hypothesis that the main source of error in ECMs is their failure to account for parameter variability rather than structural error. Analysis of the fitted DPM coefficients for cropland export and instream retention revealed some of the factors controlling nitrate concentration: cropland nitrate exports were positively related to stream flow and watershed average slope, while instream nitrate retention was positively correlated with nitrate concentration. By quantifying spatial and temporal variability in sources and sinks, the DPM provides new information to better target management actions to the most effective times and places. Given the wide use of ECMs as research and management tools, our approach can be broadly applied in other watersheds and to other materials. Copyright © 2016 Elsevier Ltd. All rights reserved.
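A schematic of the dynamic-parameter idea: export coefficients evolve as a random walk over years (temporal correlation) while the basic ECM structure converts land-use areas into a stream nitrate flux. This is a forward simulation only; the paper's Bayesian hierarchical fitting and its spatial random-walk component are not shown, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)
years, n_watersheds = 20, 15

# hypothetical land-use areas (ha): columns = cropland, forest, developed
area = rng.uniform(50, 500, (n_watersheds, 3))
base_coef = np.array([12.0, 1.0, 4.0])             # export coefficients, kg N / ha / yr

nitrate_flux = np.empty((years, n_watersheds))
coef = base_coef.copy()
for t in range(years):
    # dynamic parameters: first-order random walk in time
    coef = coef + rng.normal(0, 0.3, 3)
    retention = 0.3                                 # fraction of the load removed in-stream
    nitrate_flux[t] = (1 - retention) * area @ coef + rng.normal(0, 50, n_watersheds)

# a basic (static) ECM fits one coefficient vector to all years at once,
# so the year-to-year drift above ends up in the unexplained error term
coef_static, *_ = np.linalg.lstsq(
    np.tile(area, (years, 1)), nitrate_flux.ravel(), rcond=None)
print(coef_static)
```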
Nibali, Maria L; Tombleson, Tom; Brady, Philip H; Wagner, Phillip
2015-10-01
Understanding typical variation of vertical jump (VJ) performance and confounding sources of its typical variability (i.e., familiarization and competitive level) is pertinent in the routine monitoring of athletes. We evaluated the presence of systematic error (learning effect) and nonuniformity of error (heteroscedasticity) across VJ performances of athletes that differ in competitive level and quantified the reliability of VJ kinetic and kinematic variables relative to the smallest worthwhile change (SWC). One hundred thirteen high school athletes, 30 college athletes, and 35 professional athletes completed repeat VJ trials. Average eccentric rate of force development (RFD), average concentric (CON) force, CON impulse, and jump height measurements were obtained from vertical ground reaction force (VGRF) data. Systematic error was assessed by evaluating changes in the mean of repeat trials. Heteroscedasticity was evaluated by plotting the difference score (trial 2 - trial 1) against the mean of the trials. Variability of jump variables was calculated as the typical error (TE) and coefficient of variation (%CV). No substantial systematic error (effect size range: -0.07 to 0.11) or heteroscedasticity was present for any of the VJ variables. Vertical jump can be performed without the need for familiarization trials, and the variability can be conveyed as either the raw TE or the %CV. Assessment of VGRF variables is an effective and reliable means of assessing VJ performance. Average CON force and CON impulse are highly reliable (%CV: 2.7% ×/÷ 1.10), although jump height was the only variable to display a %CV ≤SWC. Eccentric RFD is highly variable yet should not be discounted from VJ assessments on this factor alone because it may be sensitive to changes in response to training or fatigue that exceed the TE.
Ribic, C.A.; Miller, T.W.
1998-01-01
We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory variable conditions: equal importance and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within tree-structured methods, with a strong relationship and equally important explanatory variables, the one-standard-error rule was more likely to choose the correct model than were the other tree-selection rules 1) with weaker relationships and equally important explanatory variables; and 2) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.
Performance evaluation of the microINR® point-of-care INR-testing system.
Joubert, J; van Zyl, M C; Raubenheimer, J
2018-04-01
Point-of-care International Normalised Ratio (INR) testing is used frequently. We evaluated the microINR® POC system for accuracy, precision and measurement repeatability, and investigated instrument and test chip variability and error rates. Venous blood INRs of 210 patients on warfarin were obtained with Thromborel® S on the Sysmex CS-2100i® analyser and compared with capillary blood microINR® values. Precision was assessed using control materials. Measurement repeatability was calculated on 51 duplicate finger-prick INRs. Triplicate finger-prick INRs using three different instruments (30 patients) and three different test chip lots (29 patients) were used to evaluate instrument and test chip variability. Linear regression analysis of microINR® and Sysmex CS2100i® values showed a correlation coefficient of 0.96 (P < .0001) and a positive proportional bias of 4.4%. Dosage concordance was 93.8% and clinical agreement 95.7%. All acceptance criteria based on ISO standard 17593:2007 system accuracy requirements were met. Control material coefficients of variation (CV) varied from 6.2% to 16.7%. The capillary blood measurement repeatability CV was 7.5%. No significant instrument (P = .93) or test chip (P = .81) variability was found, and the error rate was low (2.8%). The microINR® instrument is accurate and precise for monitoring warfarin therapy. © 2017 John Wiley & Sons Ltd.
Whittle, Rebecca; Peat, George; Belcher, John; Collins, Gary S; Riley, Richard D
2018-05-18
Measurement error in predictor variables may threaten the validity of clinical prediction models. We sought to evaluate the possible extent of the problem. A secondary objective was to examine whether predictors are measured at the intended moment of model use. A systematic search of Medline was used to identify a sample of articles reporting the development of a clinical prediction model published in 2015. After screening according to predefined inclusion criteria, information on predictors, strategies to control for measurement error and intended moment of model use were extracted. Susceptibility to measurement error for each predictor was classified into low and high risk. Thirty-three studies were reviewed, including 151 different predictors in the final prediction models. Fifty-one (33.7%) predictors were categorised as high risk of error; however, this was not accounted for in the model development. Only 8 (24.2%) studies explicitly stated the intended moment of model use and when the predictors were measured. Reporting of measurement error and intended moment of model use is poor in prediction model studies. There is a need to identify circumstances where ignoring measurement error in prediction models is consequential and whether accounting for the error will improve the predictions. Copyright © 2018. Published by Elsevier Inc.
The Relationship Between Work Commitment, Dynamic, and Medication Error.
Rezaiamin, Abdoolkarim; Pazokian, Marzieh; Zagheri Tafreshi, Mansoureh; Nasiri, Malihe
2017-05-01
Incidence of medication errors in the intensive care unit (ICU) can cause irreparable damage to ICU patients. Therefore, it seems necessary to find the causes of medication errors in this setting. Work commitment and work dynamic might affect the incidence of medication errors in the ICU. To assess this hypothesis, we performed a descriptive-analytical study of 117 nurses working in ICUs of educational hospitals in Tehran. The Minick et al., Salyer et al., and Wakefield et al. scales were used for data gathering on work commitment, work dynamic, and medication errors, respectively. Findings of the current study revealed that high work commitment in ICU nurses led to a low number of medication errors, both intravenous and nonintravenous. We controlled for the effects of confounding variables in detecting this relationship. In contrast, no significant association was found between work dynamic and the different types of medication errors. Although the study did not observe any relationship between work dynamic and the rate of medication errors, training nurses or nursing students to create a dynamic environment in hospitals can increase their interest in the profession and increase their job satisfaction. They must also have sufficient ability to cope with work dynamics so that they do not become confused and distracted by frequent changes of orders, care plans, and procedures.
A variable acceleration calibration system
NASA Astrophysics Data System (ADS)
Johnson, Thomas H.
2011-12-01
A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three component calibration experiments with an approximate applied load error on the order of 1% of the full scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to current methods. The production quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long term research objectives include a demonstration of a six degree of freedom calibration, and a large capacity balance calibration.
Maaoui-Ben Hassine, Ikram; Naouar, Mohamed Wissem; Mrabet-Bellaaj, Najiba
2016-05-01
In this paper, Model Predictive Control and Dead-beat predictive control strategies are proposed for the control of a PMSG based wind energy system. The proposed MPC considers the model of the converter-based system to forecast the possible future behavior of the controlled variables. It allows selecting the voltage vector to be applied that leads to a minimum error by minimizing a predefined cost function. The main features of the MPC are low current THD and robustness against parameter variations. The Dead-beat predictive control is based on the system model to compute the optimum voltage vector that ensures zero steady-state error. The optimum voltage vector is then applied through the Space Vector Modulation (SVM) technique. The main advantages of the Dead-beat predictive control are low current THD and constant switching frequency. The proposed control techniques are presented and detailed for the control of the back-to-back converter in a wind turbine system based on a PMSG. Simulation results (under the Matlab-Simulink software environment) and experimental results (on a developed prototyping platform) are presented in order to show the performances of the considered control strategies. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
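To make the finite-control-set MPC idea concrete, the sketch below predicts the load current one step ahead for each of the eight switching states of a two-level converter and applies the one minimizing a current-error cost. The RL-load model, parameter values, and reference are placeholders; the paper's actual PMSG/back-to-back converter model and cost terms are not reproduced.

```python
import numpy as np

# two-level converter: 8 switching states -> voltage vectors in the alpha-beta frame
Vdc = 600.0
states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
a_op = np.exp(2j * np.pi / 3)
v_vectors = [2 / 3 * Vdc * (sa + sb * a_op + sc * a_op**2) for sa, sb, sc in states]

# discrete-time RL load (placeholder parameters): i[k+1] = (1 - R*Ts/L)*i[k] + Ts/L*(v - e)
R, L, Ts = 0.5, 10e-3, 50e-6
e_back = 100.0 + 0j                    # back-EMF, assumed constant over one step
i_meas = 5.0 + 2.0j                    # measured current (alpha + j*beta)
i_ref = 8.0 + 0.0j                     # current reference from the outer loop

def predict(i, v):
    return (1 - R * Ts / L) * i + Ts / L * (v - e_back)

# evaluate the cost for every admissible voltage vector and pick the best one
costs = [abs(i_ref - predict(i_meas, v)) for v in v_vectors]
best = int(np.argmin(costs))
print("apply switching state", states[best], "predicted current error", costs[best])
```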
Marquardt, Lynn; Eichele, Heike; Lundervold, Astri J.; Haavik, Jan; Eichele, Tom
2018-01-01
Introduction: Attention-deficit hyperactivity disorder (ADHD) is one of the most frequent neurodevelopmental disorders in children and tends to persist into adulthood. Evidence from neuropsychological, neuroimaging, and electrophysiological studies indicates that alterations of error processing are core symptoms in children and adolescents with ADHD. To test whether adults with ADHD show persisting deficits and compensatory processes, we investigated performance monitoring during stimulus-evaluation and response-selection, with a focus on errors, as well as within-group correlations with symptom scores. Methods: Fifty-five participants (27 ADHD and 28 controls) aged 19–55 years performed a modified flanker task during EEG recording with 64 electrodes, and the ADHD and control groups were compared on measures of behavioral task performance, event-related potentials of performance monitoring (N2, P3), and error processing (ERN, Pe). Adult ADHD Self-Report Scale (ASRS) was used to assess ADHD symptom load. Results: Adults with ADHD showed higher error rates in incompatible trials, and these error rates correlated positively with the ASRS scores. Also, we observed lower P3 amplitudes in incompatible trials, which were inversely correlated with symptom load in the ADHD group. Adults with ADHD also displayed reduced error-related ERN and Pe amplitudes. There were no significant differences in reaction time (RT) and RT variability between the two groups. Conclusion: Our findings show deviations of electrophysiological measures, suggesting reduced effortful engagement of attentional and error-monitoring processes in adults with ADHD. Associations between ADHD symptom scores, event-related potential amplitudes, and poorer task performance in the ADHD group further support this notion. PMID:29706908
Huang, Chien-Ting; Hwang, Ing-Shiou
2012-01-01
Visual feedback and non-visual information play different roles in tracking of an external target. This study explored the respective roles of the visual and non-visual information in eleven healthy volunteers who coupled the manual cursor to a rhythmically moving target of 0.5 Hz under three sensorimotor conditions: eye-alone tracking (EA), eye-hand tracking with visual feedback of manual outputs (EH tracking), and the same tracking without such feedback (EHM tracking). Tracking error, kinematic variables, and movement intermittency (saccade and speed pulse) were contrasted among tracking conditions. The results showed that EHM tracking exhibited larger pursuit gain, less tracking error, and less movement intermittency for the ocular plant than EA tracking. With the vision of manual cursor, EH tracking achieved superior tracking congruency of the ocular and manual effectors with smaller movement intermittency than EHM tracking, except that the rate precision of manual action was similar for both types of tracking. The present study demonstrated that visibility of manual consequences altered mutual relationships between movement intermittency and tracking error. The speed pulse metrics of manual output were linked to ocular tracking error, and saccade events were time-locked to the positional error of manual tracking during EH tracking. In conclusion, peripheral non-visual information is critical to smooth pursuit characteristics and rate control of rhythmic manual tracking. Visual information adds to eye-hand synchrony, underlying improved amplitude control and elaborate error interpretation during oculo-manual tracking. PMID:23236498
Measurement uncertainty evaluation of conicity error inspected on CMM
NASA Astrophysics Data System (ADS)
Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang
2016-01-01
The cone is widely used in mechanical design for rotation, centering and fixing. Whether the conicity error can be measured and evaluated accurately will directly influence its assembly accuracy and working performance. According to the new generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are first generated using quasi-random sequences and two kinds of affinities are calculated. Then, each antibody clone is generated and the clones are self-adaptively mutated so as to maintain diversity. Similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the Expression of Uncertainty in Measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. The cone part was machined on a CK6140 lathe and measured on a Miracle NC 454 coordinate measuring machine (CMM). The experimental results confirm that the proposed method not only can search for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also can evaluate measurement uncertainty and give control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% less than those computed by the NC454 CMM software and the evaluation accuracy improves significantly.
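A simplified version of the adaptive Monte Carlo idea from GUM Supplement 1 that the abstract refers to: trials are added in batches until the Monte Carlo estimates of the measurand and its standard uncertainty are numerically stable within a stated tolerance. The measurement model used here is a generic placeholder, not the conicity-error model of the paper, and the tolerances and uncertainties are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

def model(d1, d2, length):
    """Placeholder measurement model: a cone half-angle from two diameters and a length."""
    return np.arctan((d1 - d2) / (2 * length))

def adaptive_mc(batch=10_000, tol=1e-6, max_batches=100):
    batch_means, batch_stds = [], []
    for _ in range(max_batches):
        # input quantities with assumed standard uncertainties (mm)
        d1 = rng.normal(50.000, 0.002, batch)
        d2 = rng.normal(40.000, 0.002, batch)
        length = rng.normal(100.000, 0.005, batch)
        y = model(d1, d2, length)
        batch_means.append(y.mean())
        batch_stds.append(y.std(ddof=1))
        if len(batch_means) >= 2:
            # stop when the batch-to-batch scatter of both estimates is below tolerance
            if (np.std(batch_means, ddof=1) < tol and
                    np.std(batch_stds, ddof=1) < tol):
                break
    return np.mean(batch_means), np.mean(batch_stds), len(batch_means) * batch

y_hat, u_y, n_trials = adaptive_mc()
print(f"measurand {y_hat:.6f} rad, standard uncertainty {u_y:.2e}, trials {n_trials}")
```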
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
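The sketch below estimates first-order Sobol indices for a toy model by binning each forcing error and computing the variance of the conditional means, i.e., Var(E[Y|X_i])/Var(Y), which is the quantity a Sobol analysis targets. Production analyses such as the one in the abstract use dedicated estimators (e.g., Saltelli sampling) and a physically based snow model; the model and coefficients here are placeholders chosen only so that the precipitation term dominates.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# toy "forcing errors": precipitation bias, temperature bias, radiation random error
p_bias = rng.normal(0, 0.3, n)
t_bias = rng.normal(0, 1.0, n)
r_err = rng.normal(0, 20.0, n)

# placeholder model output, e.g. a peak SWE anomaly driven mainly by precipitation bias
y = 500 * p_bias - 30 * t_bias + 0.5 * r_err + rng.normal(0, 10, n)

def first_order_sobol(x, y, n_bins=50):
    """Var(E[Y | X]) / Var(Y), estimated by binning X into equal-probability bins."""
    bins = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(x, bins[1:-1]), 0, n_bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    return np.var(cond_means) / np.var(y)

for name, x in [("precip bias", p_bias), ("temp bias", t_bias), ("radiation error", r_err)]:
    print(name, round(first_order_sobol(x, y), 3))
```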
Measurement effects of seasonal and monthly variability on pedometer-determined data.
Kang, Minsoo; Bassett, David R; Barreira, Tiago V; Tudor-Locke, Catrine; Ainsworth, Barbara E
2012-03-01
The seasonal and monthly variability of pedometer-determined physical activity and its effects on accurate measurement have not been examined. The purpose of the study was to reduce measurement error in step-count data by controlling a) the length of the measurement period and b) the season or month of the year in which sampling was conducted. Twenty-three middle-aged adults were instructed to wear a Yamax SW-200 pedometer over 365 consecutive days. The step-count measurement periods of various lengths (eg, 2, 3, 4, 5, 6, 7 days, etc.) were randomly selected 10 times for each season and month. To determine accurate estimates of yearly step-count measurement, mean absolute percentage error (MAPE) and bias were calculated. The year-round average was considered as a criterion measure. A smaller MAPE and bias represent a better estimate. Differences in MAPE and bias among seasons were trivial; however, they varied among different months. The months in which seasonal changes occur presented the highest MAPE and bias. Targeting the data collection during certain months (eg, May) may reduce pedometer measurement error and provide more accurate estimates of year-round averages.
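A small illustration of the error metrics used: random measurement windows of a given length are drawn from a year of synthetic daily step counts and compared against the year-round mean. The window lengths, step-count values, and seasonal pattern are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(9)
days = np.arange(365)
# synthetic daily steps with a seasonal cycle plus day-to-day noise
steps = 8000 + 1500 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 1200, 365)
criterion = steps.mean()                              # year-round average

def mape_and_bias(window_len, n_draws=1000):
    errs = []
    for _ in range(n_draws):
        start = rng.integers(0, 365 - window_len)     # random measurement period
        estimate = steps[start:start + window_len].mean()
        errs.append(estimate - criterion)
    errs = np.array(errs)
    mape = 100 * np.mean(np.abs(errs)) / criterion    # mean absolute percentage error
    bias = errs.mean()
    return mape, bias

for length in (2, 7, 14, 30):
    print(length, "days:", mape_and_bias(length))
```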
Perceived Cost and Intrinsic Motor Variability Modulate the Speed-Accuracy Trade-Off
Bertucco, Matteo; Bhanpuri, Nasir H.; Sanger, Terence D.
2015-01-01
Fitts’ Law describes the speed-accuracy trade-off of human movements, and it is an elegant strategy that compensates for random and uncontrollable noise in the motor system. The control strategy during targeted movements may also take into account the rewards or costs of any outcomes that may occur. The aim of this study was to test the hypothesis that movement time in Fitts’ Law emerges not only from the accuracy constraints of the task, but also depends on the perceived cost of error for missing the targets. Subjects were asked to touch targets on an iPad® screen with different costs for missed targets. We manipulated the probability of error by comparing children with dystonia (who are characterized by increased intrinsic motor variability) to typically developing children. The results show a strong effect of the cost of error on the Fitts’ Law relationship characterized by an increase in movement time as cost increased. In addition, we observed a greater sensitivity to increased cost for children with dystonia, and this behavior appears to minimize the average cost. The findings support a proposed mathematical model that explains how movement time in a Fitts-like task is related to perceived risk. PMID:26447874
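For reference, the classical Fitts' Law relationship underlying the task relates movement time MT to target distance D and width W; the original formulation and the common Shannon variant are shown below (which variant the study itself used is not specified here):

```latex
% Original Fitts formulation            % Shannon formulation
MT = a + b \log_2\!\left(\frac{2D}{W}\right)
\qquad\text{or}\qquad
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```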
Are your covariates under control? How normalization can re-introduce covariate effects.
Pain, Oliver; Dudbridge, Frank; Ronald, Angelica
2018-04-30
Many statistical tests rely on the assumption that the residuals of a model are normally distributed. Rank-based inverse normal transformation (INT) of the dependent variable is one of the most popular approaches to satisfy the normality assumption. When covariates are included in the analysis, a common approach is to first adjust for the covariates and then normalize the residuals. This study investigated the effect of regressing covariates against the dependent variable and then applying rank-based INT to the residuals. The correlation between the dependent variable and covariates at each stage of processing was assessed. An alternative approach was tested in which rank-based INT was applied to the dependent variable before regressing covariates. Analyses based on both simulated and real data examples demonstrated that applying rank-based INT to the dependent variable residuals after regressing out covariates re-introduces a linear correlation between the dependent variable and covariates, increasing type-I errors and reducing power. On the other hand, when rank-based INT was applied prior to controlling for covariate effects, residuals were normally distributed and linearly uncorrelated with covariates. This latter approach is therefore recommended in situations where normality of the dependent variable is required.
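A compact illustration of the transformation and of the ordering recommended in the abstract: the dependent variable is rank-INT transformed first, and the covariate is then kept as a regressor in the analysis model. The offset c = 0.5 (rankit) is one common convention; variable names and data are illustrative only.

```python
import numpy as np
from scipy.stats import norm, rankdata

def rank_int(x, c=0.5):
    """Rank-based inverse normal transformation."""
    x = np.asarray(x, float)
    ranks = rankdata(x)
    return norm.ppf((ranks - c) / (len(x) - 2 * c + 1))

rng = np.random.default_rng(1)
n = 2000
covariate = rng.normal(size=n)
y = 0.5 * covariate + rng.exponential(size=n)        # skewed dependent variable

# recommended ordering: transform y first, then include the covariate in the model
# (rather than residualizing on the raw scale and transforming the residuals)
y_int = rank_int(y)
X = np.column_stack([np.ones(n), covariate])
beta, *_ = np.linalg.lstsq(X, y_int, rcond=None)

skew = lambda v: float(((v - v.mean()) ** 3).mean() / v.std() ** 3)
print("skewness before INT:", round(skew(y), 2))
print("skewness after INT: ", round(skew(y_int), 2))
print("covariate effect on INT(y):", round(float(beta[1]), 3))
```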
DSMC Simulation and Experimental Validation of Shock Interaction in Hypersonic Low Density Flow
2014-01-01
Direct simulation Monte Carlo (DSMC) of shock interaction in hypersonic low density flow is developed. Three collision molecular models, including hard sphere (HS), variable hard sphere (VHS), and variable soft sphere (VSS), are employed in the DSMC study. The simulations of double-cone and Edney's type IV hypersonic shock interactions in low density flow are performed. Comparisons between DSMC and experimental data are conducted. Investigation of the double-cone hypersonic flow shows that the three collision molecular models can predict the trend of the pressure coefficient and the Stanton number. The HS model shows the best agreement between DSMC simulation and experiment among the three collision molecular models. Also, the agreement between DSMC and experiment is generally good for the HS and VHS models in Edney's type IV shock interaction; however, it fails for the VSS model. Both double-cone and Edney's type IV shock interaction simulations show that the DSMC errors depend on the Knudsen number and the models employed for intermolecular interaction. With an increase in the Knudsen number, the DSMC error decreases. The error is smallest for HS compared with the VHS and VSS models. When the Knudsen number is on the order of 10⁻⁴, the DSMC errors for the pressure coefficient, the Stanton number, and the scale of the interaction region are controlled within 10%. PMID:24672360
Endpoint Accuracy in Manual Control of a Steerable Needle.
van de Berg, Nick J; Dankelman, Jenny; van den Dobbelsteen, John J
2017-02-01
To study the ability of a human operator to manually correct for errors in the needle insertion path without partial withdrawal of the needle by means of an active, tip-articulated steerable needle. The needle is composed of a 1.32-mm outer-diameter cannula, with a flexure joint near the tip, and a retractable stylet. The bending stiffness of the needle resembles that of a 20-gauge hypodermic needle. The needle functionality was evaluated in manual insertions by steering to predefined targets and a lateral displacement of 20 mm from the straight insertion line. Steering tasks were conducted in 5 directions and 2 tissue simulants under image guidance from a camera. The repeatability in instrument actuations was assessed during 100 mm deep automated insertions with a linear motor. In addition to tip position, tip angles were tracked during the insertions. The targeting error (mean absolute error ± standard deviation) during manual steering to 5 different targets in stiff tissue was 0.5 mm ± 1.1. This variability in manual tip placement (1.1 mm) was less than the variability among automated insertions (1.4 mm) in the same tissue type. An increased tissue stiffness resulted in an increased lateral tip displacement. The tip angle was directly controlled by the user interface, and remained unaffected by the tissue stiffness. This study demonstrates the ability to manually steer needles to predefined target locations under image guidance. Copyright © 2016 SIR. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Pack, Robert C.; Standiford, Keith; Lukanc, Todd; Ning, Guo Xiang; Verma, Piyush; Batarseh, Fadi; Chua, Gek Soon; Fujimura, Akira; Pang, Linyong
2014-10-01
A methodology is described wherein a calibrated model-based 'Virtual' Variable Shaped Beam (VSB) mask writer process simulator is used to accurately verify complex Optical Proximity Correction (OPC) and Inverse Lithography Technology (ILT) mask designs prior to Mask Data Preparation (MDP) and mask fabrication. This type of verification addresses physical effects which occur in mask writing that may impact lithographic printing fidelity and variability. The work described here is motivated by requirements for extreme accuracy and control of variations for today's most demanding IC products. These extreme demands necessitate careful and detailed analysis of all potential sources of uncompensated error or variation and extreme control of these at each stage of the integrated OPC/ MDP/ Mask/ silicon lithography flow. The important potential sources of variation we focus on here originate on the basis of VSB mask writer physics and other errors inherent in the mask writing process. The deposited electron beam dose distribution may be examined in a manner similar to optical lithography aerial image analysis and image edge log-slope analysis. This approach enables one to catch, grade, and mitigate problems early and thus reduce the likelihood for costly long-loop iterations between OPC, MDP, and wafer fabrication flows. It moreover describes how to detect regions of a layout or mask where hotspots may occur or where the robustness to intrinsic variations may be improved by modification to the OPC, choice of mask technology, or by judicious design of VSB shots and dose assignment.
Ronald E. McRoberts; Veronica C. Lessard
2001-01-01
Uncertainty in diameter growth predictions is attributed to three general sources: measurement error or sampling variability in predictor variables, parameter covariances, and residual or unexplained variation around model expectations. Using measurement error and sampling variability distributions obtained from the literature and Monte Carlo simulation methods, the...
NASA Technical Reports Server (NTRS)
Hamer, H. A.; Johnson, K. G.; Young, J. W.
1985-01-01
An analysis is performed to compare decoupled and linear quadratic regulator (LQR) procedures for the control of a large, flexible space antenna. Control objectives involve: (1) commanding changes in the rigid-body modes, (2) nulling initial disturbances in the rigid-body modes, or (3) nulling initial disturbances in the first three flexible modes. Control is achieved with two three-axis control-moment gyros located on the antenna column. Results are presented to illustrate various effects on control requirements for the two procedures. These effects include errors in the initial estimates of state variables, variations in the type, number, and location of sensors, and deletions of state-variable estimates for certain flexible modes after control activation. The advantages of incorporating a time lag in the control feedback are also illustrated. In addition, the effects of inoperative-control situations are analyzed with regard to control requirements and resultant modal responses. Comparisons are included which show the effects of perfect state feedback with no residual modes (ideal case). Time-history responses are presented to illustrate the various effects on the control procedures.
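As a generic illustration of the LQR procedure that the comparison is based on, the sketch below computes a steady-state LQR gain for a small two-mass flexible structure using SciPy's Riccati solver. The antenna model, control-moment-gyro actuators, decoupled controller, and state-estimation details of the study are not represented; the plant and weights are placeholders.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# two masses coupled by a spring: a minimal "rigid-body + flexible mode" analogue
m1 = m2 = 1.0
k = 4.0
A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [-k / m1, k / m1, 0, 0],
              [k / m2, -k / m2, 0, 0]])
B = np.array([[0.0], [0.0], [1.0 / m1], [0.0]])     # force actuator on mass 1

Q = np.diag([10.0, 10.0, 1.0, 1.0])                 # state weighting
R = np.array([[0.1]])                               # control-effort weighting

P = solve_continuous_are(A, B, Q, R)                # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)                     # LQR gain: u = -K x

# closed-loop eigenvalues should all have negative real parts
print(np.linalg.eigvals(A - B @ K))
```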
Pratte, Michael S.; Park, Young Eun; Rademaker, Rosanne L.; Tong, Frank
2016-01-01
If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced “oblique effect”, with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. PMID:28004957
Multiple indicators, multiple causes measurement error models
Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; ...
2014-06-25
Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this study are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. Finally, as a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure.
Multiple Indicators, Multiple Causes Measurement Error Models
Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; Carroll, Raymond J.
2014-01-01
Multiple Indicators, Multiple Causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are: (1) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (2) to develop likelihood-based estimation methods for the MIMIC ME model; and (3) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. PMID:24962535
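The error structure underlying the MIMIC ME model lends itself to a small simulation. The sketch below is only an illustration of that structure, not the authors' likelihood-based estimator: it generates a latent variable driven by an unobserved cause, records that cause with classical and Berkson error, emits three indicators, and shows the attenuation that motivates correcting for measurement error. All variable names and parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Unobserved "true" cause (e.g., radiation dose); distribution is illustrative
x_true = rng.gamma(shape=2.0, scale=1.0, size=n)

# Classical error: the recorded dose scatters around the true dose
w_classical = x_true + rng.normal(0.0, 0.5, size=n)

# Berkson error: the true dose scatters around an assigned/estimated dose
w_assigned = rng.gamma(shape=2.0, scale=1.0, size=n)
x_berkson = w_assigned + rng.normal(0.0, 0.5, size=n)

# Latent variable (e.g., underlying dyslipidemia severity) driven by the true cause
eta = 0.8 * x_true + rng.normal(0.0, 1.0, size=n)

# Multiple observed indicators loading on the latent variable
loadings = np.array([1.0, 0.7, 0.5])
indicators = eta[:, None] * loadings + rng.normal(0.0, 0.3, size=(n, 3))

# Naive regression of an indicator on the error-prone cause is attenuated
# relative to the regression on the true cause, which is what a MIMIC ME
# model is designed to correct for.
slope_true = np.polyfit(x_true, indicators[:, 0], 1)[0]
slope_naive = np.polyfit(w_classical, indicators[:, 0], 1)[0]
print("slope vs true cause:", round(slope_true, 2),
      "| vs error-prone cause:", round(slope_naive, 2))
```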
Trajectory Tracking of a Planar Parallel Manipulator by Using Computed Force Control Method
NASA Astrophysics Data System (ADS)
Bayram, Atilla
2017-03-01
Despite their small workspace, parallel manipulators have some advantages over their serial counterparts in terms of higher speed, acceleration, rigidity, accuracy, manufacturing cost and payload. Accordingly, this type of manipulator can be used in many applications such as high-speed machine tools, tuning machines for feeding, sensitive cutting, assembly and packaging. This paper presents a special type of planar parallel manipulator with three degrees of freedom. It is constructed as a variable geometry truss, generally known as a planar Stewart platform. The reachable and orientation workspaces are obtained for this manipulator. The inverse kinematic analysis is solved for trajectory tracking, accounting for redundancy and joint limit avoidance. Then, the dynamic model of the manipulator is established by using the virtual work method. Simulations are performed to follow the given planar trajectories by using the dynamic equations of the variable geometry truss manipulator and the computed force control method. In the computed force control method, the feedback gain matrices for PD control are tuned with fixed matrices obtained by trial and error and with variable ones obtained by optimization with a genetic algorithm.
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
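The practical difference between the two error models can be seen with a few lines of synthetic data. The sketch below is a toy illustration only (the rain-rate distribution, bias, and noise level are invented), but it demonstrates the heteroscedasticity issue noted above: under a multiplicative error process, additive residuals grow with rain rate while log-space residuals stay roughly constant.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.gamma(shape=0.8, scale=10.0, size=5000)        # "gauge" daily precipitation, mm

# Simulate a satellite estimate whose error scales with rain rate (multiplicative process)
sat = truth * np.exp(rng.normal(-0.1, 0.4, size=truth.size))

# Additive model:        sat = truth + e
# Multiplicative model:  sat = alpha * truth * eps  ->  log(sat) = log(alpha) + log(truth) + log(eps)
add_resid = sat - truth
mult_resid = np.log(sat) - np.log(truth)

low, high = truth < np.median(truth), truth >= np.median(truth)
print("additive residual std (light/heavy rain):      ",
      round(add_resid[low].std(), 2), round(add_resid[high].std(), 2))
print("multiplicative residual std (light/heavy rain):",
      round(mult_resid[low].std(), 2), round(mult_resid[high].std(), 2))
```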
Schiffinger, Michael; Latzke, Markus; Steyrer, Johannes
2016-01-01
Safety climate (SC) and more recently patient engagement (PE) have been identified as potential determinants of patient safety, but conceptual and empirical studies combining both are lacking. On the basis of extant theories and concepts in safety research, this study investigates the effect of PE in conjunction with SC on perceived error occurrence (pEO) in hospitals, controlling for various staff-, patient-, and hospital-related variables as well as the amount of stress and (lack of) organizational support experienced by staff. Besides the main effects of PE and SC on error occurrence, their interaction is examined, too. In 66 hospital units, 4,345 patients assessed the degree of PE, and 811 staff assessed SC and pEO. PE was measured with a new instrument, capturing its core elements according to a recent literature review: Information Provision (both active and passive) and Activation and Collaboration. SC and pEO were measured with validated German-language questionnaires. Besides standard regression and correlational analyses, partial least squares analysis was employed to model the main and interaction effects of PE and SC on pEO, also controlling for stress and (lack of) support perceived by staff, various staff and patient attributes, and potential single-source bias. Both PE and SC are associated with lower pEO, to a similar extent. The joint effect of these predictors suggests a substitution rather than mutually reinforcing interaction. Accounting for control variables and/or potential single-source bias slightly attenuates some effects without altering the results. Ignoring PE potentially amounts to forgoing a potential source of additional safety. On the other hand, despite the abovementioned substitution effect and conjectures of SC being inert, PE should not be considered as a replacement for SC.
NASA Technical Reports Server (NTRS)
Bayless, E. O.; Lawless, K. G.; Kurgan, C.; Nunes, A. C.; Graham, B. F.; Hoffman, D.; Jones, C. S.; Shepard, R.
1993-01-01
Fully automated variable-polarity plasma arc (VPPA) welding system developed at Marshall Space Flight Center. System eliminates defects caused by human error. Integrates many sensors with mathematical model of the weld and computer-controlled welding equipment. Sensors provide real-time information on geometry of weld bead, location of weld joint, and wire-feed entry. Mathematical model relates geometry of weld to critical parameters of welding process.
ERIC Educational Resources Information Center
Nugent, William Robert; Moore, Matthew; Story, Erin
2015-01-01
The standardized mean difference (SMD) is perhaps the most important meta-analytic effect size. It is typically used to represent the difference between treatment and control population means in treatment efficacy research. It is also used to represent differences between populations with different characteristics, such as persons who are…
Injecting Artificial Memory Errors Into a Running Computer Program
NASA Technical Reports Server (NTRS)
Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.
2008-01-01
Single-event upsets (SEUs) or bitflips are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program's source code. Further, if access to the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of effect of the SEU on a floating-point value or other program variable.
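The core operation, flipping one bit of a variable's in-memory representation, can be illustrated in a few lines. This sketch only mimics the effect of a single SEU on one floating-point value; it is not part of BITFLIPS itself, which injects faults into a separately running process through Valgrind.

```python
import random
import struct

def flip_random_bit(x: float) -> float:
    """Flip one randomly chosen bit in the IEEE-754 encoding of a 64-bit float."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    bits ^= 1 << random.randrange(64)
    (flipped,) = struct.unpack("<d", struct.pack("<Q", bits))
    return flipped

random.seed(7)
value = 3.141592653589793
print(value, "->", flip_random_bit(value))   # the effect ranges from negligible to catastrophic
```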
Simulation evaluation of TIMER, a time-based, terminal air traffic, flow-management concept
NASA Technical Reports Server (NTRS)
Credeur, Leonard; Capron, William R.
1989-01-01
A description of a time-based, extended terminal area ATC concept called Traffic Intelligence for the Management of Efficient Runway scheduling (TIMER) and the results of a fast-time evaluation are presented. The TIMER concept is intended to bridge the gap between today's ATC system and a future automated time-based ATC system. The TIMER concept integrates en route metering, fuel-efficient cruise and profile descents, terminal time-based sequencing and spacing together with computer-generated controller aids, to improve delivery precision for fuller use of runway capacity. Simulation results identify and show the effects and interactions of such key variables as horizon of control location, delivery time error at both the metering fix and runway threshold, aircraft separation requirements, delay discounting, wind, aircraft heading and speed errors, and knowledge of final approach speed.
NASA Technical Reports Server (NTRS)
Alag, Gurbux S.; Gilyard, Glenn B.
1990-01-01
To develop advanced control systems for optimizing aircraft engine performance, unmeasurable output variables must be estimated. The estimation has to be done in an uncertain environment and be adaptable to varying degrees of modeling errors and other variations in engine behavior over its operational life cycle. This paper presents an approach to estimate unmeasured output variables by explicitly modeling the effects of off-nominal engine behavior as biases on the measurable output variables. A state variable model accommodating off-nominal behavior is developed for the engine, and Kalman filter concepts are used to estimate the required variables. Results are presented from nonlinear engine simulation studies as well as the application of the estimation algorithm on actual flight data. The formulation presented has a wide range of application since it is not restricted or tailored to the particular application described.
NASA Astrophysics Data System (ADS)
Ransom, K.; Nolan, B. T.; Faunt, C. C.; Bell, A.; Gronberg, J.; Traum, J.; Wheeler, D. C.; Rosecrans, C.; Belitz, K.; Eberts, S.; Harter, T.
2016-12-01
A hybrid, non-linear, machine learning statistical model was developed within a statistical learning framework to predict nitrate contamination of groundwater to depths of approximately 500 m below ground surface in the Central Valley, California. A database of 213 predictor variables representing well characteristics, historical and current field and county scale nitrogen mass balance, historical and current landuse, oxidation/reduction conditions, groundwater flow, climate, soil characteristics, depth to groundwater, and groundwater age was assigned to over 6,000 private supply and public supply wells measured previously for nitrate and located throughout the study area. The machine learning method, gradient boosting machine (GBM), was used to screen predictor variables and rank them in order of importance in relation to the groundwater nitrate measurements. The most important predictor variables included oxidation/reduction characteristics, historical field scale nitrogen mass balance, climate, and depth to 60-year-old water. Twenty-two variables were selected for the final model, and final model errors for log-transformed hold-out data were R squared of 0.45 and root mean square error (RMSE) of 1.124. Modeled mean groundwater age was tested separately for error improvement in the model and, when included, decreased model RMSE by 0.5% compared to the same model without age and by 0.20% compared to the model with all 213 variables. 1D and 2D partial plots were examined to determine how variables behave individually and interact in the model. Some variables behaved as expected: log nitrate decreased with increasing probability of anoxic conditions and depth to 60-year-old water, generally decreased with increasing natural landuse surrounding wells and increasing mean groundwater age, and generally increased with increased minimum depth to high water table and with increased base flow index value. Other variables exhibited much more erratic or noisy behavior in the model, making them more difficult to interpret but highlighting the usefulness of the non-linear machine learning method. 2D interaction plots show that the probability of anoxic groundwater conditions largely controls estimated nitrate concentrations compared to the other predictors.
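A minimal version of this workflow, gradient boosting on well predictors with a hold-out evaluation and an importance ranking, can be sketched with scikit-learn. The data below are synthetic stand-ins (the study used 213 predictors for over 6,000 wells); predictor dimensions, coefficients, and noise levels are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_wells, n_predictors = 6000, 20
X = rng.normal(size=(n_wells, n_predictors))                 # stand-in predictor matrix
log_no3 = 1.5 - 0.8 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(0, 1.0, size=n_wells)

X_tr, X_te, y_tr, y_te = train_test_split(X, log_no3, test_size=0.25, random_state=0)
gbm = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05, max_depth=3)
gbm.fit(X_tr, y_tr)

pred = gbm.predict(X_te)
print("hold-out R^2: ", round(r2_score(y_te, pred), 2))
print("hold-out RMSE:", round(mean_squared_error(y_te, pred) ** 0.5, 3))
print("predictors ranked by importance:", np.argsort(gbm.feature_importances_)[::-1][:5])
```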
NASA Astrophysics Data System (ADS)
Pohl, Benjamin; Douville, Hervé
2011-10-01
The CNRM atmospheric general circulation model Arpege-Climat is relaxed towards atmospheric reanalyses outside the 10°S-32°N 30°W-50°E domain in order to disentangle the regional versus large-scale sources of climatological biases and interannual variability of the West African monsoon (WAM). On the one hand, the main climatological features of the monsoon, including the spatial distribution of summer precipitation, are only weakly improved by the nudging, thereby suggesting the regional origin of the Arpege-Climat biases. On the other hand, the nudging technique is relatively efficient to control the interannual variability of the WAM dynamics, though the impact on rainfall variability is less clear. Additional sensitivity experiments focusing on the strong 1994 summer monsoon suggest that the weak sensitivity of the model biases is not an artifact of the nudging design, but the evidence that regional physical processes are the main limiting factors for a realistic simulation of monsoon circulation and precipitation in the Arpege-Climat model. Sensitivity experiments to soil moisture boundary conditions are also conducted and highlight the relevance of land-atmosphere coupling for the amplification of precipitation biases. Nevertheless, the land surface hydrology is not the main explanation for the model errors that are rather due to deficiencies in the atmospheric physics. The intraseasonal timescale and the model internal variability are discussed in a companion paper.
Heterogeneity of spontaneous DNA replication errors in single isogenic Escherichia coli cells
2018-01-01
Despite extensive knowledge of the molecular mechanisms that control mutagenesis, it is not known how spontaneous mutations are produced in cells with fully operative mutation-prevention systems. By using a mutation assay that allows visualization of DNA replication errors and stress response transcriptional reporters, we examined populations of isogenic Escherichia coli cells growing under optimal conditions without exogenous stress. We found that spontaneous DNA replication errors in proliferating cells arose more frequently in subpopulations experiencing endogenous stresses, such as problems with proteostasis, genome maintenance, and reactive oxidative species production. The presence of these subpopulations of phenotypic mutators is not expected to affect the average mutation frequency or to reduce the mean population fitness in a stable environment. However, these subpopulations can contribute to overall population adaptability in fluctuating environments by serving as a reservoir of increased genetic variability.
NASA Astrophysics Data System (ADS)
Zakeri, Zeinab; Azadi, Majid; Ghader, Sarmad
2018-01-01
Satellite radiances and in-situ observations are assimilated through Weather Research and Forecasting Data Assimilation (WRFDA) system into Advanced Research WRF (ARW) model over Iran and its neighboring area. Domain specific background error based on x and y components of wind speed (UV) control variables is calculated for WRFDA system and some sensitivity experiments are carried out to compare the impact of global background error and the domain specific background errors, both on the precipitation and 2-m temperature forecasts over Iran. Three precipitation events that occurred over the country during January, September and October 2014 are simulated in three different experiments and the results for precipitation and 2-m temperature are verified against the verifying surface observations. Results show that using domain specific background error improves 2-m temperature and 24-h accumulated precipitation forecasts consistently, while global background error may even degrade the forecasts compared to the experiments without data assimilation. The improvement in 2-m temperature is more evident during the first forecast hours and decreases significantly as the forecast length increases.
Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data
Zhao, Shanshan
2014-01-01
Summary Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling design. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469
Covariate measurement error correction methods in mediation analysis with failure time data.
Zhao, Shanshan; Prentice, Ross L
2014-12-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.
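The regression-calibration idea, replacing the error-prone mediator with an estimate of its conditional expectation given the observed measurements, can be sketched from replicate measurements alone. The sketch below is a generic classical-error calibration on invented data, not the paper's mean-variance or follow-up time calibration for the Cox partial likelihood; the calibrated values would then be substituted for the observed mediator in the hazard regression.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(0.0, 1.0, size=n)         # unobserved "true" mediator (e.g., a serum hormone level)
w1 = x + rng.normal(0.0, 0.6, size=n)    # two replicate measurements with technical error
w2 = x + rng.normal(0.0, 0.6, size=n)

w_bar = (w1 + w2) / 2.0
sigma_u2 = np.var(w1 - w2, ddof=1) / 2.0                      # error variance from replicate differences
reliability = (np.var(w_bar, ddof=1) - sigma_u2 / 2.0) / np.var(w_bar, ddof=1)

# Calibrated mediator approximates E[X | replicate mean]; it replaces w_bar in the outcome model
x_calibrated = w_bar.mean() + reliability * (w_bar - w_bar.mean())
print("estimated reliability of the replicate mean:", round(reliability, 3))
```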
Glofcheskie, Grace O; Brown, Stephen H M
2017-04-01
Trunk motor control is essential for athletic performance, and inadequate trunk motor control has been linked to an increased risk of developing low back and lower limb injury in athletes. Research is limited in comparing relationships between trunk neuromuscular control, postural control, and trunk proprioception in athletes from different sporting backgrounds. To test for these relationships, collegiate level long distance runners and golfers, along with non-athletic controls were recruited. Trunk postural control was investigated using a seated balance task. Neuromuscular control in response to sudden trunk loading perturbations was measured using electromyography and kinematics. Proprioceptive ability was examined using active trunk repositioning tasks. Both athlete groups demonstrated greater trunk postural control (less centre of pressure movement) during the seated task compared to controls. Athletes further demonstrated faster trunk muscle activation onsets, higher muscle activation amplitudes, and less lumbar spine angular displacement in response to sudden trunk loading perturbations when compared to controls. Golfers demonstrated less absolute error and variable error in trunk repositioning tasks compared to both runners and controls, suggestive of greater proprioceptive ability. This suggests an interactive relationship between neuromuscular control, postural control, and proprioception in athletes, and that differences exist between athletes of various training backgrounds. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Ludtke, Oliver; Marsh, Herbert W.; Robitzsch, Alexander; Trautwein, Ulrich
2011-01-01
In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data…
Stochastic goal-oriented error estimation with memory
NASA Astrophysics Data System (ADS)
Ackmann, Jan; Marotzke, Jochem; Korn, Peter
2017-11-01
We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in the local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
Spatial patterns of throughfall isotopic composition at the event and seasonal timescales
NASA Astrophysics Data System (ADS)
Allen, Scott T.; Keim, Richard F.; McDonnell, Jeffrey J.
2015-03-01
Spatial variability of throughfall isotopic composition in forests is indicative of complex processes occurring in the canopy and remains insufficiently understood to properly characterize precipitation inputs to the catchment water balance. Here we investigate variability of throughfall isotopic composition with the objectives: (1) to quantify the spatial variability in event-scale samples, (2) to determine if there are persistent controls over the variability and how these affect variability of seasonally accumulated throughfall, and (3) to analyze the distribution of measured throughfall isotopic composition associated with varying sampling regimes. We measured throughfall over two, three-month periods in western Oregon, USA under a Douglas-fir canopy. The mean spatial range of δ18O for each event was 1.6‰ and 1.2‰ through Fall 2009 (11 events) and Spring 2010 (7 events), respectively. However, the spatial pattern of isotopic composition was not temporally stable causing season-total throughfall to be less variable than event throughfall (1.0‰; range of cumulative δ18O for Fall 2009). Isotopic composition was not spatially autocorrelated and not explained by location relative to tree stems. Sampling error analysis for both field measurements and Monte-Carlo simulated datasets representing different sampling schemes revealed the standard deviation of differences from the true mean as high as 0.45‰ (δ18O) and 1.29‰ (d-excess). The magnitude of this isotopic variation suggests that small sample sizes are a source of substantial experimental error.
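The sampling-error analysis can be reproduced in spirit with a short Monte Carlo resampling exercise. The collector values below are synthetic (a Gaussian stand-in for one event's spatial δ18O field), so the numbers will not match the study; the point is only to show how the error of a small-network mean is estimated by repeated subsampling.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for one event's throughfall d18O at many collectors (per mil)
d18o = rng.normal(-8.0, 0.8, size=200)
true_mean = d18o.mean()

for n_collectors in (3, 6, 12, 24):
    sample_means = np.array([rng.choice(d18o, size=n_collectors, replace=False).mean()
                             for _ in range(5000)])
    sd_error = np.std(sample_means - true_mean)
    print(f"{n_collectors:>2} collectors -> sd of sampling error: {sd_error:.3f} per mil")
```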
The CESM Large Ensemble Project: Inspiring New Ideas and Understanding
NASA Astrophysics Data System (ADS)
Kay, J. E.; Deser, C.
2016-12-01
While internal climate variability is known to affect climate projections, its influence is often underappreciated and confused with model error. Why? In general, modeling centers contribute a small number of realizations to international climate model assessments [e.g., phase 5 of the Coupled Model Intercomparison Project (CMIP5)]. As a result, model error and internal climate variability are difficult, and at times impossible, to disentangle. In response, the Community Earth System Model (CESM) community designed the CESM Large Ensemble (CESM-LE) with the explicit goal of enabling assessment of climate change in the presence of internal climate variability. All CESM-LE simulations use a single CMIP5 model (CESM with the Community Atmosphere Model, version 5). The core simulations replay the twentieth through twenty-first centuries (1920-2100) 40+ times under historical and representative concentration pathway 8.5 external forcing with small initial condition differences. Two companion 2000+-yr-long preindustrial control simulations (fully coupled, prognostic atmosphere and land only) allow assessment of internal climate variability in the absence of climate change. Comprehensive outputs, including many daily fields, are available as single-variable time series on the Earth System Grid for anyone to use. Examples of scientists and stakeholders who are using the CESM-LE outputs to help interpret the observational record, to understand projection spread, and to plan for a range of possible futures influenced by both internal climate variability and forced climate change will be highlighted in the presentation.
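The basic analysis pattern the ensemble enables, separating the forced response (ensemble mean) from internal variability (spread about that mean), is simple to express. The sketch below uses a synthetic 40-member ensemble with an invented linear trend; it is not CESM-LE output, just the bookkeeping one would apply to it.

```python
import numpy as np

rng = np.random.default_rng(11)
n_members, n_years = 40, 181                      # 40 members, 1920-2100

years = np.arange(1920, 1920 + n_years)
forced_truth = 0.01 * (years - 1920)              # invented forced warming (deg C)
ensemble = forced_truth + rng.normal(0.0, 0.15, size=(n_members, n_years))

forced_estimate = ensemble.mean(axis=0)           # ensemble mean isolates the forced response
internal = ensemble - forced_estimate             # residuals reflect internal variability
print("estimated forced trend (deg C / decade):",
      round(10 * np.polyfit(years, forced_estimate, 1)[0], 3))
print("typical internal variability (deg C):   ", round(internal.std(), 3))
```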
Eaton, Catherine Torrington
2015-11-01
This article explores the theoretical and empirical relationships between cognitive factors and residual speech errors (RSEs). Definitions of relevant cognitive domains are provided, as well as examples of formal and informal tasks that may be appropriate in assessment. Although studies to date have been limited in number and scope, basic research suggests that cognitive flexibility, short- and long-term memory, and self-monitoring may be areas of weakness in this population. Preliminary evidence has not supported a relationship between inhibitory control, attention, and RSEs; however, further studies that control variables such as language ability and temperament are warranted. Previous translational research has examined the effects of self-monitoring training on residual speech errors. Although results have been mixed, some findings suggest that children with RSEs may benefit from the inclusion of this training. The article closes with a discussion of clinical frameworks that target cognitive skills, including self-monitoring and attention, as a means of facilitating speech sound change. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
Cognitive Load Differentially Impacts Response Control in Girls and Boys with ADHD
Mostofsky, Stewart H.; Rosch, Keri S.
2015-01-01
Children with attention-deficit hyperactivity disorder (ADHD) consistently show impaired response control, including deficits in response inhibition and increased intrasubject variability (ISV) compared to typically-developing (TD) children. However, significantly less research has examined factors that may influence response control in individuals with ADHD, such as task or participant characteristics. The current study extends the literature by examining the impact of increasing cognitive demands on response control in a large sample of 81 children with ADHD (40 girls) and 100 TD children (47 girls), ages 8–12 years. Participants completed a simple Go/No-Go (GNG) task with minimal cognitive demands, and a complex GNG task with increased cognitive load. Results showed that increasing cognitive load differentially impacted response control (commission error rate and tau, an ex-Gaussian measure of ISV) for girls, but not boys, with ADHD compared to same-sex TD children. Specifically, a sexually dimorphic pattern emerged such that boys with ADHD demonstrated higher commission error rate and tau on both the simple and complex GNG tasks as compared to TD boys, whereas girls with ADHD did not differ from TD girls on the simple GNG task, but showed higher commission error rate and tau on the complex GNG task. These findings suggest that task complexity influences response control in children with ADHD in a sexually dimorphic manner. The findings have substantive implications for the pathophysiology of ADHD in boys versus girls with ADHD. PMID:25624066
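Tau, the exponential component of an ex-Gaussian fit to reaction times, can be estimated with scipy's exponentially modified normal distribution. The reaction-time sample below is simulated; note the parameter mapping, since scipy's shape parameter K equals tau/sigma, so tau is recovered as K times the fitted scale.

```python
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(5)
# Simulated reaction times (s): Gaussian component plus an exponential tail
rt = rng.normal(0.45, 0.05, size=800) + rng.exponential(0.12, size=800)

K, mu, sigma = exponnorm.fit(rt)
tau = K * sigma                      # exponential (tail) component of the ex-Gaussian
print("mu:", round(mu, 3), "sigma:", round(sigma, 3), "tau:", round(tau, 3))
```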
A Robust Parameterization of Human Gait Patterns Across Phase-Shifting Perturbations
Villarreal, Dario J.; Poonawala, Hasan A.; Gregg, Robert D.
2016-01-01
The phase of human gait is difficult to quantify accurately in the presence of disturbances. In contrast, recent bipedal robots use time-independent controllers relying on a mechanical phase variable to synchronize joint patterns through the gait cycle. This concept has inspired studies to determine if human joint patterns can also be parameterized by a mechanical variable. Although many phase variable candidates have been proposed, it remains unclear which, if any, provide a robust representation of phase for human gait analysis or control. In this paper we analytically derive an ideal phase variable (the hip phase angle) that is provably monotonic and bounded throughout the gait cycle. To examine the robustness of this phase variable, ten able-bodied human subjects walked over a platform that randomly applied phase-shifting perturbations to the stance leg. A statistical analysis found the correlations between nominal and perturbed joint trajectories to be significantly greater when parameterized by the hip phase angle (0.95+) than by time or a different phase variable. The hip phase angle also best parameterized the transient errors about the nominal periodic orbit. Finally, interlimb phasing was best explained by local (ipsilateral) hip phase angles that are synchronized during the double-support period. PMID:27187967
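A generic phase-portrait construction gives the flavor of such a phase variable. The sketch below computes a phase angle from a joint angle and its time derivative, normalized so the orbit is roughly circular; the paper derives its hip phase angle analytically with a specific scaling and shift, so treat the normalization here as an illustrative assumption rather than the published definition.

```python
import numpy as np

def phase_angle(theta, dt):
    """Phase angle of a roughly periodic joint trajectory from its normalized phase portrait."""
    theta_c = theta - theta.mean()
    dtheta_c = np.gradient(theta, dt) - np.gradient(theta, dt).mean()
    x = theta_c / np.abs(theta_c).max()           # normalized angle
    y = dtheta_c / np.abs(dtheta_c).max()         # normalized angular velocity
    return np.unwrap(np.arctan2(-y, x))           # monotonically advancing phase estimate

dt = 0.01
t = np.arange(0.0, 2.0, dt)
hip = 0.3 * np.sin(2 * np.pi * 1.0 * t)           # toy hip trajectory: two 1 Hz "strides"
phi = phase_angle(hip, dt)
print("phase advance over two cycles (rad):", round(phi[-1] - phi[0], 2))   # approximately 4*pi
```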
Improving Interference Control in ADHD Patients with Transcranial Direct Current Stimulation (tDCS)
Breitling, Carolin; Zaehle, Tino; Dannhauer, Moritz; Bonath, Björn; Tegelbeckers, Jana; Flechtner, Hans-Henning; Krauel, Kerstin
2016-01-01
The use of transcranial direct current stimulation (tDCS) in patients with attention deficit hyperactivity disorder (ADHD) has been suggested as a promising alternative to psychopharmacological treatment approaches due to its local and network effects on brain activation. In the current study, we investigated the impact of tDCS over the right inferior frontal gyrus (rIFG) on interference control in 21 male adolescents with ADHD and 21 age matched healthy controls aged 13–17 years, who underwent three separate sessions of tDCS (anodal, cathodal, and sham) while completing a Flanker task. Even though anodal stimulation appeared to diminish commission errors in the ADHD group, the overall analysis revealed no significant effect of tDCS. Since participants showed a considerable learning effect from the first to the second session, performance in the first session was separately analyzed. ADHD patients receiving sham stimulation in the first session showed impaired interference control compared to healthy control participants whereas ADHD patients who were exposed to anodal stimulation, showed comparable performance levels (commission errors, reaction time variability) to the control group. These results suggest that anodal tDCS of the right inferior frontal gyrus could improve interference control in patients with ADHD. PMID:27147964
EKF-Based Enhanced Performance Controller Design for Nonlinear Stochastic Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yuyang; Zhang, Qichun; Wang, Hong
In this paper, a novel control algorithm is presented to enhance the tracking performance of a class of non-linear dynamic stochastic systems with unmeasurable variables. To minimize the entropy of the tracking errors without changing the existing closed loop with a PI controller, the enhanced performance loop is constructed based on state estimation by an extended Kalman filter, and the new controller is designed by full state feedback following the presented control algorithm. In addition, conditions are obtained for stability analysis in the mean-square sense. Finally, comparative simulation results are given to illustrate the effectiveness of the proposed control algorithm.
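The extended-Kalman-filter backbone of such a scheme fits in a short function. The sketch below is a generic EKF predict/update step applied to a toy scalar nonlinear system, not the paper's plant or its entropy-based design; the state estimate it returns is what would feed the full-state-feedback law in the enhanced performance loop. All system functions and noise covariances are invented for the example.

```python
import numpy as np

def ekf_step(x_est, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One predict/update cycle of an extended Kalman filter."""
    x_pred = f(x_est, u)                              # predict state
    F = F_jac(x_est, u)
    P_pred = F @ P @ F.T + Q                          # predict covariance
    H = H_jac(x_pred)
    innovation = z - h(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)               # Kalman gain
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_new, P_new

# Toy nonlinear system: x_{k+1} = 0.9*sin(x_k) + u_k, measurement z_k = x_k^2 + noise
f = lambda x, u: np.array([0.9 * np.sin(x[0]) + u])
F_jac = lambda x, u: np.array([[0.9 * np.cos(x[0])]])
h = lambda x: np.array([x[0] ** 2])
H_jac = lambda x: np.array([[2.0 * x[0]]])

x_est, P = np.array([0.5]), np.eye(1)
x_est, P = ekf_step(x_est, P, u=0.1, z=np.array([0.3]),
                    f=f, F_jac=F_jac, h=h, H_jac=H_jac,
                    Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
print("updated state estimate:", x_est, "covariance:", P.ravel())
```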
NASA Astrophysics Data System (ADS)
Xia, Huanxiong; Xiang, Dong; Yang, Wang; Mou, Peng
2014-12-01
Low-temperature plasma processing is one of the critical techniques in IC manufacturing, used in steps such as etching and thin-film deposition, and plasma uniformity greatly affects process quality, so designing for plasma uniformity control is important but difficult. It is hard to regulate the spatial distribution of the plasma in the chamber finely and flexibly by controlling the discharge parameters or modifying the structure in zero-dimensional space; such adjustments can only shift the overall level of the process factors. In view of this problem, a segmented non-uniform dielectric module design is proposed for regulating the plasma profile in a CCP chamber. The solution achieves refined and flexible regulation of the radial plasma profile by configuring the relative permittivity and the width of each segment. To solve this design problem, a novel simulation-based auto-design approach is proposed, which automatically designs the positional sequences of multiple independent variables so that the output target profile of the parameterized simulation model approximates the one that the user presets. The approach borrows the idea of a quasi-closed-loop control system and works iteratively: it starts from initial values of the design variable sequences, predicts better sequences from the feedback of the error between the output profile and the expected one, and stops only when the profile error falls within the preset tolerance.
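The quasi-closed-loop idea, iterating the design variables with profile-error feedback until the error falls within tolerance, can be sketched independently of any plasma physics. The "simulation" below is a deliberately crude linear surrogate in which each segment value mainly influences its own radial zone; the influence matrix, gain, and tolerance are all illustrative assumptions, and in practice the forward model would be the parameterized plasma simulation.

```python
import numpy as np

rng = np.random.default_rng(4)
n_segments = 6

# Crude surrogate forward model: profile = A @ design, with mostly local influence per segment
A = np.eye(n_segments) + 0.05 * rng.random((n_segments, n_segments))

target_profile = np.full(n_segments, 1.0)        # e.g., a flat (uniform) radial profile
design = np.full(n_segments, 0.5)                # initial design variable sequence
gain, tol = 0.5, 1e-3

for iteration in range(200):
    profile = A @ design                          # stand-in for running the simulation
    error = target_profile - profile
    if np.max(np.abs(error)) < tol:               # stop once within the preset tolerance
        break
    design = design + gain * error                # predict a better sequence from the error feedback

print("stopped after", iteration, "iterations; max profile error =",
      round(float(np.max(np.abs(error))), 5))
```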
Development of kinesthetic-motor and auditory-motor representations in school-aged children.
Kagerer, Florian A; Clark, Jane E
2015-07-01
In two experiments using a center-out task, we investigated kinesthetic-motor and auditory-motor integrations in 5- to 12-year-old children and young adults. In experiment 1, participants moved a pen on a digitizing tablet from a starting position to one of three targets (visuo-motor condition), and then to one of four targets without visual feedback of the movement. In both conditions, we found that with increasing age, the children moved faster and straighter, and became less variable in their feedforward control. Higher control demands for movements toward the contralateral side were reflected in longer movement times and decreased spatial accuracy across all age groups. When feedforward control relies predominantly on kinesthesia, 7- to 10-year-old children were more variable, indicating difficulties in switching between feedforward and feedback control efficiently during that age. An inverse age progression was found for directional endpoint error; larger errors increasing with age likely reflect stronger functional lateralization for the dominant hand. In experiment 2, the same visuo-motor condition was followed by an auditory-motor condition in which participants had to move to acoustic targets (either white band or one-third octave noise). Since in the latter directional cues come exclusively from transcallosally mediated interaural time differences, we hypothesized that auditory-motor representations would show age effects. The results did not show a clear age effect, suggesting that corpus callosum functionality is sufficient in children to allow them to form accurate auditory-motor maps already at a young age.
Relationships of Measurement Error and Prediction Error in Observed-Score Regression
ERIC Educational Resources Information Center
Moses, Tim
2012-01-01
The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…
Limitations of the paraxial Debye approximation.
Sheppard, Colin J R
2013-04-01
In the paraxial form of the Debye integral for focusing, higher order defocus terms are ignored, which can result in errors in dealing with aberrations, even for low numerical aperture. These errors can be avoided by using a different integration variable. The aberrations of a glass slab, such as a coverslip, are expanded in terms of the new variable, and expressed in terms of Zernike polynomials to assist with aberration balancing. Tube length error is also discussed.
NASA Technical Reports Server (NTRS)
Sadoff, Melvin
1958-01-01
The results of a fixed-base simulator study of the effects of variable longitudinal control-system dynamics on pilot opinion are presented and compared with flight-test data. The control-system variables considered in this investigation included stick force per g, time constant, and dead-band, or stabilizer breakout force. In general, the fairly good correlation between flight and simulator results for two pilots demonstrates the validity of fixed-base simulator studies which are designed to complement and supplement flight studies and serve as a guide in control-system preliminary design. However, in the investigation of certain problem areas (e.g., sensitive control-system configurations associated with pilot-induced oscillations in flight), fixed-base simulator results did not predict the occurrence of an instability, although the pilots noted the system was extremely sensitive and unsatisfactory. If it is desired to predict pilot-induced-oscillation tendencies, tests in moving-base simulators may be required. It was found possible to represent the human pilot by a linear pilot analog for the tracking task assumed in the present study. The criterion used to adjust the pilot analog was the root-mean-square tracking error of one of the human pilots on the fixed-base simulator. Matching the tracking error of the pilot analog to that of the human pilot gave an approximation to the variation of human-pilot behavior over a range of control-system dynamics. Results of the pilot-analog study indicated that both for optimized control-system dynamics (for poor airplane dynamics) and for a region of good airplane dynamics, the pilot response characteristics are approximately the same.
Errors in causal inference: an organizational schema for systematic error and random error.
Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji
2016-11-01
Our aim is to provide an organizational schema for systematic error and random error in estimating causal measures, thereby clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.
Helle, Samuli
2018-03-01
Revealing causal effects from correlative data is very challenging and a contemporary problem in human life history research owing to the lack of an experimental approach. Problems with causal inference arising from measurement error in independent variables, whether related to inaccurate measurement technique or to the validity of measurements, seem not to be well known in this field. The aim of this study is to show how structural equation modeling (SEM) with latent variables can be applied to account for measurement error in independent variables when the researcher has recorded several indicators of a hypothesized latent construct. As a simple example of this approach, measurement error in lifetime allocation of resources to reproduction in Finnish preindustrial women is modelled in the context of the survival cost of reproduction. In humans, lifetime energetic resources allocated to reproduction are almost impossible to quantify with precision and, thus, typically used measures of lifetime reproductive effort (e.g., lifetime reproductive success and parity) are likely to be plagued by measurement error. These results are contrasted with those obtained from a traditional regression approach where the single best proxy of lifetime reproductive effort available in the data is used for inference. As expected, the inability to account for measurement error in women's lifetime reproductive effort resulted in the underestimation of its underlying effect size on post-reproductive survival. This article emphasizes the advantages that the SEM framework can provide in handling measurement error via multiple-indicator latent variables in human life history studies. © 2017 Wiley Periodicals, Inc.
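The attenuation problem the article describes is easy to demonstrate numerically. The simulation below (all effect sizes and reliabilities invented) compares a regression on a single error-prone proxy of reproductive effort with one on the average of several indicators, a crude stand-in for the latent-variable idea; a full SEM would estimate the latent effect directly from the indicator covariance structure.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 5000
effort = rng.normal(0.0, 1.0, size=n)                      # latent lifetime reproductive effort
survival = -0.30 * effort + rng.normal(0.0, 1.0, size=n)   # true cost of reproduction = -0.30

# Several noisy indicators of the latent effort (e.g., parity, offspring counts, birth spacing)
indicators = effort[:, None] + rng.normal(0.0, 1.0, size=(n, 4))

slope_single = np.polyfit(indicators[:, 0], survival, 1)[0]            # one "best proxy"
slope_composite = np.polyfit(indicators.mean(axis=1), survival, 1)[0]  # average of indicators
print("true effect: -0.30 | single proxy:", round(slope_single, 2),
      "| indicator average:", round(slope_composite, 2))
```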
Digital Photon Correlation Data Processing Techniques
1976-07-01
…velocimeter signals. During the conduct of the contract, a complementary theoretical effort with the NASA Langley Research Center was in progress (NAS1-13140). Section 6.3.2, Variability Error: in an earlier, very brief contract with NASA Langley (NAS1-13140), a simplified variability error analysis was performed.
Factors Controlling Sediment Load in The Central Anatolia Region of Turkey: Ankara River Basin.
Duru, Umit; Wohl, Ellen; Ahmadi, Mehdi
2017-05-01
Better understanding of the factors controlling sediment load at a catchment scale can facilitate estimation of soil erosion and sediment transport rates. The research summarized here enhances understanding of the correlations between potential control variables and suspended sediment loads. The Soil and Water Assessment Tool was used to simulate flow and sediment in the Ankara River basin. Multivariable regression analysis and principal component analysis were then performed between sediment load and the controlling variables. The physical variables were either derived directly from a Digital Elevation Model or from field maps, or computed using established equations. Mean observed sediment rate at the gage is 6697 ton/year and mean sediment yield is 21 ton/y/km². The Soil and Water Assessment Tool satisfactorily simulated observed sediment load, with Nash-Sutcliffe efficiency, relative error, and coefficient of determination (R²) values of 0.81, -1.55, and 0.93, respectively, in the catchment. Therefore, parameter values from the physically based model were applied to the multivariable regression analysis as well as the principal component analysis. The results indicate that stream flow, drainage area, and channel width explain most of the variability in sediment load among the catchments. The implication of these results is that siltation management practices in the catchment should focus on stream flow, drainage area, and channel width.
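The statistical part of this workflow, standardized multivariable regression plus a principal component analysis of the candidate controls, is shown below on synthetic catchment data. Variable names, distributions, and coefficients are invented stand-ins for the SWAT-derived quantities, so only the analysis pattern carries over.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 60                                              # sub-catchments / records

flow = rng.lognormal(2.0, 0.5, size=n)              # stream flow
area = rng.lognormal(4.0, 0.7, size=n)              # drainage area
width = rng.lognormal(1.5, 0.3, size=n)             # channel width
slope = rng.normal(0.05, 0.01, size=n)              # channel slope
log_load = (0.9 * np.log(flow) + 0.5 * np.log(area)
            + 0.3 * np.log(width) + rng.normal(0, 0.2, size=n))

X = StandardScaler().fit_transform(
    np.column_stack([np.log(flow), np.log(area), np.log(width), slope]))

reg = LinearRegression().fit(X, log_load)
print("standardized coefficients (flow, area, width, slope):", reg.coef_.round(2))

pca = PCA(n_components=2).fit(X)
print("explained variance ratio:", pca.explained_variance_ratio_.round(2))
print("component loadings:\n", pca.components_.round(2))
```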
Model predictive controller design for boost DC-DC converter using T-S fuzzy cost function
NASA Astrophysics Data System (ADS)
Seo, Sang-Wha; Kim, Yong; Choi, Han Ho
2017-11-01
This paper proposes a Takagi-Sugeno (T-S) fuzzy method to select cost function weights of finite control set model predictive DC-DC converter control algorithms. The proposed method updates the cost function weights at every sample time by using T-S type fuzzy rules derived from the common optimal control engineering knowledge that a state or input variable with an excessively large magnitude can be penalised by increasing the weight corresponding to the variable. The best control input is determined via the online optimisation of the T-S fuzzy cost function for all the possible control input sequences. This paper implements the proposed model predictive control algorithm in real time on a Texas Instruments TMS320F28335 floating-point Digital Signal Processor (DSP). Some experimental results are given to illustrate the practicality and effectiveness of the proposed control system under several operating conditions. The results verify that our method can yield not only good transient and steady-state responses (fast recovery time, small overshoot, zero steady-state error, etc.) but also insensitivity to abrupt load or input voltage variations.
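The finite-control-set part of the algorithm, enumerating the admissible switch states, predicting one step ahead, and picking the state that minimizes a weighted cost, can be sketched for an idealized boost converter. Everything below is an assumption-laden toy: component values are invented, the inductor-current reference comes from an ideal power balance rather than the paper's formulation, and the "fuzzy" weight scheduling is only a crude membership-style stand-in for T-S rules.

```python
import numpy as np

# Idealized boost converter (illustrative values)
Vin, L, C, R = 12.0, 1e-3, 470e-6, 20.0
Ts, v_ref = 20e-6, 24.0
i_ref = v_ref ** 2 / (R * Vin)                    # current reference from ideal power balance

def predict(iL, vC, u):
    """One-step Euler prediction of inductor current and output voltage for switch state u."""
    iL_next = iL + Ts * (Vin - (1 - u) * vC) / L
    vC_next = vC + Ts * ((1 - u) * iL - vC / R) / C
    return iL_next, vC_next

def fuzzy_weights(v_err, i_err):
    """Crude membership-style scheduling: weight a term more when its error is large."""
    mu_v = min(abs(v_err) / v_ref, 1.0)
    mu_i = min(abs(i_err) / i_ref, 1.0)
    return 0.02 + 0.05 * mu_v, 1.0 + 1.0 * mu_i    # (voltage weight, current weight)

iL, vC = 0.0, 0.0
for _ in range(5000):                              # 100 ms of simulated switching
    w_v, w_i = fuzzy_weights(v_ref - vC, i_ref - iL)
    costs = []
    for u in (0, 1):                               # finite control set of the single switch
        iL_p, vC_p = predict(iL, vC, u)
        costs.append(w_i * (i_ref - iL_p) ** 2 + w_v * (v_ref - vC_p) ** 2)
    u_best = int(np.argmin(costs))
    iL, vC = predict(iL, vC, u_best)

print("after 100 ms: vC =", round(vC, 2), "V (reference 24 V), iL =", round(iL, 2), "A")
```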
Decentralized control experiments on NASA's flexible grid
NASA Technical Reports Server (NTRS)
Ozguner, U.; Yurkowich, S.; Martin, J., III; Al-Abbass, F.
1986-01-01
Methods arising from the area of decentralized control are emerging for analysis and control synthesis for large flexible structures. In this paper the control strategy involves a decentralized model reference adaptive approach using a variable structure control. Local models are formulated based on desired damping and response time in a model-following scheme for various modal configurations. Variable structure controllers are then designed employing co-located angular rate and position feedback. In this scheme local control forces the system to move on a local sliding mode in some local error space. An important feature of this approach is that the local subsystem is made insensitive to dynamical interactions with other subsystems once the sliding surface is reached. Experiments based on the above have been performed for NASA's flexible grid experimental apparatus. The grid is designed to admit appreciable low-frequency structural dynamics, and allows for implementation of distributed computing components, inertial sensors, and actuation devices. A finite-element analysis of the grid provides the model for control system design and simulation; results of several simulations are reported on here, and a discussion of application experiments on the apparatus is presented.
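A single-mode version of such a variable-structure law is easy to write down. The sketch below regulates one lightly damped oscillator (a stand-in for a local grid mode) with co-located position and rate feedback; the sliding surface s = qdot + lambda*q defines the local error dynamics, and a smoothed sign function limits chattering. Gains, frequencies, and the boundary-layer width are illustrative assumptions, not values from the experiment.

```python
import numpy as np

# One lightly damped structural mode:  qdd + 2*zeta*wn*qd + wn^2*q = u
wn, zeta = 2.0 * np.pi * 0.5, 0.005
lam, K, eps = 3.0, 5.0, 0.01          # surface slope, switching gain, boundary-layer width
dt, steps = 1e-3, 10000               # 10 s of simulation

q, qd = 0.02, 0.0                     # initial modal displacement and rate (error coordinates)
for _ in range(steps):
    s = qd + lam * q                  # sliding surface in the local error space
    u = -K * np.tanh(s / eps)         # smoothed switching control
    qdd = u - 2.0 * zeta * wn * qd - wn ** 2 * q
    q, qd = q + dt * qd, qd + dt * qdd

print("final |q| =", abs(q), " final |qd| =", abs(qd))   # mode driven to (near) zero
```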
System for detecting operating errors in a variable valve timing engine using pressure sensors
Wiles, Matthew A.; Marriot, Craig D
2013-07-02
A method and control module includes a pressure sensor data comparison module that compares measured pressure volume signal segments to ideal pressure volume segments. A valve actuation hardware remedy module performs a hardware remedy in response to comparing the measured pressure volume signal segments to the ideal pressure volume segments when a valve actuation hardware failure is detected.
Anisotropic mesh adaptation for marine ice-sheet modelling
NASA Astrophysics Data System (ADS)
Gillet-Chaulet, Fabien; Tavard, Laure; Merino, Nacho; Peyaud, Vincent; Brondex, Julien; Durand, Gael; Gagliardini, Olivier
2017-04-01
Improving forecasts of the ice-sheet contribution to sea-level rise requires, among other things, correctly modelling the dynamics of the grounding line (GL), i.e. the line where the ice detaches from its underlying bed and goes afloat on the ocean. Many numerical studies, including the intercomparison exercises MISMIP and MISMIP3D, have shown that grid refinement in the GL vicinity is a key component for obtaining reliable results. Improving model accuracy while keeping the computational cost affordable has therefore been an important target for the development of marine ice-sheet models. Adaptive mesh refinement (AMR) is a method where the accuracy of the solution is controlled by spatially adapting the mesh size. It has become popular in models using the finite element method as they naturally deal with unstructured meshes, but block-structured AMR has also been successfully applied to model GL dynamics. The main difficulty with AMR is to find efficient and reliable estimators of the numerical error to control the mesh size. Here, we use the estimator proposed by Frey and Alauzet (2015). Based on the interpolation error, it has been found effective in practice to control the numerical error, and has some flexibility, such as its ability to combine metrics for different variables, that makes it attractive. Routines to compute the anisotropic metric defining the mesh size have been implemented in the finite element ice flow model Elmer/Ice (Gagliardini et al., 2013). The mesh adaptation is performed using the freely available library MMG (Dapogny et al., 2014) called from Elmer/Ice. Using a setup based on the inter-comparison exercise MISMIP+ (Asay-Davis et al., 2016), we study the accuracy of the solution when the mesh is adapted using various variables (ice thickness, velocity, basal drag, …). We show that combining these variables allows the number of mesh nodes to be reduced by more than one order of magnitude, for the same numerical accuracy, compared to uniform mesh refinement. For transient solutions where the GL is moving, we have implemented an algorithm in which the computation is reiterated, allowing the GL displacement to be anticipated and the mesh to be adapted to the transient solution. We discuss the performance and robustness of this algorithm.
Kitayama, Shinobu
2014-01-01
The fundamentally social nature of humans is revealed in their exquisitely high sensitivity to potentially negative evaluations held by others. At present, however, little is known about neurocortical correlates of the response to such social-evaluative threat. Here, we addressed this issue by showing that mere exposure to an image of a watching face is sufficient to automatically evoke a social-evaluative threat for those who are relatively high in interdependent self-construal. Both European American and Asian participants performed a flanker task while primed with a face (vs control) image. The relative increase of the error-related negativity (ERN) in the face (vs control) priming condition became more pronounced as a function of interdependent (vs independent) self-construal. Relative to European Americans, Asians were more interdependent and, as predicted, they showed a reliably stronger ERN in the face (vs control) priming condition. Our findings suggest that the ERN can serve as a robust empirical marker of self-threat that is closely modulated by socio-cultural variables. PMID:23160814
Nievas-Cazorla, Francisco; Soriano-Ferrer, Manuel; Sánchez-López, Pilar
2016-01-01
The aim of this study was to compare the reaction times and errors of Spanish children with developmental dyslexia to the reaction times and errors of readers without dyslexia on a masked lexical decision task with identity or repetition priming. A priming paradigm was used to study the role of the lexical deficit in dyslexic children, manipulating the frequency and length of the words, with a short Stimulus Onset Asynchrony (SOA = 150 ms) and degraded stimuli. The sample consisted of 80 participants from 9 to 14 years old, divided equally into a group with a developmental dyslexia diagnosis and a control group without dyslexia. Results show that identity priming is higher in control children (133 ms) than in dyslexic children (55 ms). Thus, the "frequency" and "word length" variables are not the source or origin of this reduction in identity priming reaction times in children with developmental dyslexia compared to control children.
Causal inference with measurement error in outcomes: Bias analysis and estimation methods.
Shu, Di; Yi, Grace Y
2017-01-01
Inverse probability weighting estimation has been widely used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis which ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in closed form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimation of the average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
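A simplified sketch of the kind of correction discussed above, assuming the misclassification probabilities of a binary outcome (sensitivity and specificity) are known, for example from validation data; the authors' efficient and doubly robust estimators are not reproduced here, and the data, propensity model, and probabilities below are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate_corrected(X, treat, y_star, sens=0.9, spec=0.95):
    """IPW estimate of the ATE when the binary outcome y_star is misclassified.

    sens/spec are the (assumed known) sensitivity and specificity of the
    surrogate outcome.  Naive weighted means of Y* are corrected using
    E[Y*] = sens * p + (1 - spec) * (1 - p).
    """
    ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
    w1, w0 = treat / ps, (1 - treat) / (1 - ps)
    naive1 = np.sum(w1 * y_star) / np.sum(w1)
    naive0 = np.sum(w0 * y_star) / np.sum(w0)
    correct = lambda m: (m - (1 - spec)) / (sens + spec - 1)
    return correct(naive1) - correct(naive0)

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * treat + X[:, 1]))))
keep = rng.random(2000) < np.where(y == 1, 0.9, 0.95)   # sens 0.9, spec 0.95
y_star = np.where(keep, y, 1 - y)
print(ipw_ate_corrected(X, treat, y_star))
```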
MRI-guided prostate focal laser ablation therapy using a mechatronic needle guidance system
NASA Astrophysics Data System (ADS)
Cepek, Jeremy; Lindner, Uri; Ghai, Sangeet; Davidson, Sean R. H.; Trachtenberg, John; Fenster, Aaron
2014-03-01
Focal therapy of localized prostate cancer is receiving increased attention due to its potential for providing effective cancer control in select patients with minimal treatment-related side effects. Magnetic resonance imaging (MRI)-guided focal laser ablation (FLA) therapy is an attractive modality for such an approach. In FLA therapy, accurate placement of laser fibers is critical to ensuring that the full target volume is ablated. In practice, error in needle placement is invariably present due to pre- to intra-procedure image registration error, needle deflection, prostate motion, and variability in interventionalist skill. In addition, some of these sources of error are difficult to control, since the available workspace and patient positions are restricted within a clinical MRI bore. In an attempt to take full advantage of the utility of intraprocedure MRI, while minimizing error in needle placement, we developed an MRI-compatible mechatronic system for guiding needles to the prostate for FLA therapy. The system has been used to place interstitial catheters for MRI-guided FLA therapy in eight subjects in an ongoing Phase I/II clinical trial. Data from these cases has provided quantification of the level of uncertainty in needle placement error. To relate needle placement error to clinical outcome, we developed a model for predicting the probability of achieving complete focal target ablation for a family of parameterized treatment plans. Results from this work have enabled the specification of evidence-based selection criteria for the maximum target size that can be confidently ablated using this technique, and quantify the benefit that may be gained with improvements in needle placement accuracy.
Altered motor control patterns in whiplash and chronic neck pain.
Woodhouse, Astrid; Vasseljen, Ottar
2008-06-20
Persistent whiplash associated disorders (WAD) have been associated with alterations in kinesthetic sense and motor control. The evidence is however inconclusive, particularly for differences between WAD patients and patients with chronic non-traumatic neck pain. The aim of this study was to investigate motor control deficits in WAD compared to chronic non-traumatic neck pain and healthy controls in relation to cervical range of motion (ROM), conjunct motion, joint position error and ROM-variability. Participants (n = 173) were recruited to three groups: 59 patients with persistent WAD, 57 patients with chronic non-traumatic neck pain and 57 asymptomatic volunteers. A 3D motion tracking system (Fastrak) was used to record maximal range of motion in the three cardinal planes of the cervical spine (sagittal, frontal and horizontal), and concurrent motion in the two associated cardinal planes relative to each primary plane were used to express conjunct motion. Joint position error was registered as the difference in head positions before and after cervical rotations. Reduced conjunct motion was found for WAD and chronic neck pain patients compared to asymptomatic subjects. This was most evident during cervical rotation. Reduced conjunct motion was not explained by current pain or by range of motion in the primary plane. Total conjunct motion during primary rotation was 13.9 degrees (95% CI; 12.2-15.6) for the WAD group, 17.9 degrees (95% CI; 16.1-19.6) for the chronic neck pain group and 25.9 degrees (95% CI; 23.7-28.1) for the asymptomatic group. As expected, maximal cervical range of motion was significantly reduced among the WAD patients compared to both control groups. No group differences were found in maximal ROM-variability or joint position error. Altered movement patterns in the cervical spine were found for both pain groups, indicating changes in motor control strategies. The changes were not related to a history of neck trauma, nor to current pain, but more likely due to long-lasting pain. No group differences were found for kinaesthetic sense.
Gender nonconformity, intelligence, and sexual orientation.
Rahman, Qazi; Bhanot, Suraj; Emrith-Small, Hanna; Ghafoor, Shilan; Roberts, Steven
2012-06-01
The present study explored whether there were relationships among gender nonconformity, intelligence, and sexual orientation. A total of 106 heterosexual men, 115 heterosexual women, and 103 gay men completed measures of demographic variables, recalled childhood gender nonconformity (CGN), and the National Adult Reading Test (NART). NART error scores were used to estimate Wechsler Adult Intelligence Scale (WAIS) Full-Scale IQ (FSIQ) and Verbal IQ (VIQ) scores. Gay men had significantly fewer NART errors than heterosexual men and women (controlling for years of education). In heterosexual men, correlational analysis revealed significant associations between CGN, NART, and FSIQ scores (elevated boyhood femininity correlated with higher IQ scores). In heterosexual women, the direction of the correlations between CGN and all IQ scores was reversed (elevated girlhood femininity correlating with lower IQ scores). There were no significant correlations among these variables in gay men. These data may indicate a "sexuality-specific" effect on general cognitive ability but with limitations. They also support growing evidence that quantitative measures of sex-atypicality are useful in the study of trait sexual orientation.
ANCOVA Versus CHANGE From Baseline in Nonrandomized Studies: The Difference.
van Breukelen, Gerard J P
2013-11-01
The pretest-posttest control group design can be analyzed with the posttest as dependent variable and the pretest as covariate (ANCOVA) or with the difference between posttest and pretest as dependent variable (CHANGE). These 2 methods can give contradictory results if groups differ at pretest, a phenomenon that is known as Lord's paradox. Literature claims that ANCOVA is preferable if treatment assignment is based on randomization or on the pretest and questionable for preexisting groups. Some literature suggests that Lord's paradox has to do with measurement error in the pretest. This article shows two new things: First, the claims are confirmed by proving the mathematical equivalence of ANCOVA to a repeated measures model without group effect at pretest. Second, correction for measurement error in the pretest is shown to lead back to ANCOVA or to CHANGE, depending on the assumed absence or presence of a true group difference at pretest. These two new theoretical results are illustrated with multilevel (mixed) regression and structural equation modeling of data from two studies.
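A minimal illustration of the two analyses on simulated nonrandomized data with an error-prone pretest (all variable names and values here are made up); with a true group difference at pretest and no treatment effect, ANCOVA and CHANGE typically disagree, as described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
group = rng.integers(0, 2, n)                  # preexisting (nonrandomized) groups
true_pre = rng.normal(50 + 5 * group, 10, n)   # groups differ at pretest
pre = true_pre + rng.normal(0, 4, n)           # pretest measured with error
post = true_pre + 2.0 + rng.normal(0, 4, n)    # no true treatment effect
d = pd.DataFrame({"group": group, "pre": pre, "post": post, "change": post - pre})

ancova = smf.ols("post ~ pre + group", data=d).fit()   # posttest with pretest as covariate
change = smf.ols("change ~ group", data=d).fit()       # difference score as outcome
print("ANCOVA group effect:", round(ancova.params["group"], 2))
print("CHANGE group effect:", round(change.params["group"], 2))
```

Because the pretest is measured with error and the groups differ at pretest, the ANCOVA group coefficient is biased away from zero while the CHANGE estimate is not, which is one way Lord's paradox manifests in nonrandomized designs.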
A comparison of exact tests for trend with binary endpoints using Bartholomew's statistic.
Consiglio, J D; Shan, G; Wilding, G E
2014-01-01
Tests for trend are important in a number of scientific fields when trends associated with binary variables are of interest. Implementing the standard Cochran-Armitage trend test requires an arbitrary choice of scores assigned to represent the grouping variable. Bartholomew proposed a test for qualitatively ordered samples using asymptotic critical values, but type I error control can be problematic in finite samples. To our knowledge, use of the exact probability distribution has not been explored, and we study its use in the present paper. Specifically we consider an approach based on conditioning on both sets of marginal totals and three unconditional approaches where only the marginal totals corresponding to the group sample sizes are treated as fixed. While slightly conservative, all four tests are guaranteed to have actual type I error rates below the nominal level. The unconditional tests are found to exhibit far less conservatism than the conditional test and thereby gain a power advantage.
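As a hedged illustration (not the paper's exact conditional or unconditional procedures), the sketch below computes a Cochran-Armitage-type trend statistic with equally spaced scores and obtains a Monte Carlo permutation p-value by reshuffling the pooled binary outcomes across the ordered groups.

```python
import numpy as np

def ca_trend_stat(successes, totals, scores):
    """Cochran-Armitage trend statistic (numerator form) for grouped binary data."""
    p = successes.sum() / totals.sum()
    return np.sum(scores * (successes - totals * p))

def permutation_trend_test(successes, totals, scores=None, n_perm=20000, seed=0):
    """Monte Carlo permutation p-value for an increasing trend."""
    rng = np.random.default_rng(seed)
    successes, totals = np.asarray(successes), np.asarray(totals)
    if scores is None:
        scores = np.arange(len(totals), dtype=float)   # arbitrary equally spaced scores
    obs = ca_trend_stat(successes, totals, scores)
    # Pool all Bernoulli outcomes and reshuffle them across the ordered groups
    pooled = np.repeat([1, 0], [successes.sum(), totals.sum() - successes.sum()])
    edges = np.cumsum(totals)[:-1]
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        s = np.array([g.sum() for g in np.split(perm, edges)])
        if ca_trend_stat(s, totals, scores) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)

print(permutation_trend_test([2, 5, 9, 14], [30, 30, 30, 30]))
```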
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental Systems Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and to provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model: errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
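A generic sketch of Latin Hypercube based error propagation for a single raster cell and one model coefficient, with a crude variance-contribution split obtained by freezing one input at a time; the toy model, error magnitudes, and decomposition are assumptions and do not reflect REPTool's actual implementation.

```python
import numpy as np
from scipy.stats import qmc, norm

def lhs_normal(n, mean, sd, seed=0):
    """Latin Hypercube samples from independent normal error distributions."""
    sampler = qmc.LatinHypercube(d=len(mean), seed=seed)
    u = sampler.random(n)
    return norm.ppf(u) * np.asarray(sd) + np.asarray(mean)

def model(recharge, coeff):
    """Toy geospatial model for one raster cell: prediction = coeff * recharge."""
    return coeff * recharge

# Error specified for one input raster value and one model coefficient
samples = lhs_normal(n=1000, mean=[120.0, 0.35], sd=[15.0, 0.05])
pred = model(samples[:, 0], samples[:, 1])
print("prediction uncertainty (sd):", pred.std())

# Crude relative variance contribution: vary one component, freeze the other at its mean
v_total = pred.var()
print("share from input raster:", model(samples[:, 0], 0.35).var() / v_total)
print("share from coefficient :", model(120.0, samples[:, 1]).var() / v_total)
```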
Orbit-related sea level errors for TOPEX altimetry at seasonal to decadal timescales
NASA Astrophysics Data System (ADS)
Esselborn, Saskia; Rudenko, Sergei; Schöne, Tilo
2018-03-01
Interannual to decadal sea level trends are indicators of climate variability and change. A major source of global and regional sea level data is satellite radar altimetry, which relies on precise knowledge of the satellite's orbit. Here, we assess the error budget of the radial orbit component for the TOPEX/Poseidon mission for the period 1993 to 2004 from a set of different orbit solutions. The errors for seasonal, interannual (5-year), and decadal periods are estimated on global and regional scales based on radial orbit differences from three state-of-the-art orbit solutions provided by different research teams: the German Research Centre for Geosciences (GFZ), the Groupe de Recherche de Géodésie Spatiale (GRGS), and the Goddard Space Flight Center (GSFC). The global mean sea level error related to orbit uncertainties is of the order of 1 mm (8 % of the global mean sea level variability) with negligible contributions on the annual and decadal timescales. In contrast, the orbit-related error of the interannual trend is 0.1 mm yr-1 (27 % of the corresponding sea level variability) and might hamper the estimation of an acceleration of the global mean sea level rise. For regional scales, the gridded orbit-related error is up to 11 mm, and for about half the ocean the orbit error accounts for at least 10 % of the observed sea level variability. The seasonal orbit error amounts to 10 % of the observed seasonal sea level signal in the Southern Ocean. At interannual and decadal timescales, the orbit-related trend uncertainties reach regionally more than 1 mm yr-1. The interannual trend errors account for 10 % of the observed sea level signal in the tropical Atlantic and the south-eastern Pacific. For decadal scales, the orbit-related trend errors are prominent in several regions, including the South Atlantic, western North Atlantic, central Pacific, South Australian Basin, and the Mediterranean Sea. Based on a set of test orbits calculated at GFZ, the sources of the observed orbit-related errors are further investigated. The main contributors on all timescales are uncertainties in Earth's time-variable gravity field models and, on annual to interannual timescales, discrepancies of the tracking station subnetworks, i.e. satellite laser ranging (SLR) and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS).
Kiloampere, Variable-Temperature, Critical-Current Measurements of High-Field Superconductors
Goodrich, LF; Cheggour, N; Stauffer, TC; Filla, BJ; Lu, XF
2013-01-01
We review variable-temperature, transport critical-current (Ic) measurements made on commercial superconductors over a range of critical currents from less than 0.1 A to about 1 kA. We have developed and used a number of systems to make these measurements over the last 15 years. Two exemplary variable-temperature systems with coil sample geometries will be described: a probe that is only variable-temperature and a probe that is variable-temperature and variable-strain. The most significant challenge for these measurements is temperature stability, since large amounts of heat can be generated by the flow of high current through the resistive sample fixture. Therefore, a significant portion of this review is focused on the reduction of temperature errors to less than ±0.05 K in such measurements. A key feature of our system is a pre-regulator that converts a flow of liquid helium to gas and heats the gas to a temperature close to the target sample temperature. The pre-regulator is not in close proximity to the sample and it is controlled independently of the sample temperature. This allows us to independently control the total cooling power, and thereby fine tune the sample cooling power at any sample temperature. The same general temperature-control philosophy is used in all of our variable-temperature systems, but the addition of another variable, such as strain, forces compromises in design and results in some differences in operation and protocol. These aspects are analyzed to assess the extent to which the protocols for our systems might be generalized to other systems at other laboratories. Our approach to variable-temperature measurements is also placed in the general context of measurement-system design, and the perceived advantages and disadvantages of design choices are presented. To verify the accuracy of the variable-temperature measurements, we compared critical-current values obtained on a specimen immersed in liquid helium (“liquid” or Ic liq) at 5 K to those measured on the same specimen in flowing helium gas (“gas” or Ic gas) at the same temperature. These comparisons indicate the temperature control is effective over the superconducting wire length between the voltage taps, and this condition is valid for all types of sample investigated, including Nb-Ti, Nb3Sn, and MgB2 wires. The liquid/gas comparisons are used to study the variable-temperature measurement protocol that was necessary to obtain the “correct” critical current, which was assumed to be the Ic liq. We also calibrated the magnetoresistance effect of resistive thermometers for temperatures from 4 K to 35 K and magnetic fields from 0 T to 16 T. This calibration reduces systematic errors in the variable-temperature data, but it does not affect the liquid/gas comparison since the same thermometers are used in both cases. PMID:26401435
Impedance modulation and feedback corrections in tracking targets of variable size and frequency.
Selen, Luc P J; van Dieën, Jaap H; Beek, Peter J
2006-11-01
Humans are able to adjust the accuracy of their movements to the demands posed by the task at hand. The variability in task execution caused by the inherent noisiness of the neuromuscular system can be tuned to task demands by both feedforward (e.g., impedance modulation) and feedback mechanisms. In this experiment, we studied both mechanisms, using mechanical perturbations to estimate stiffness and damping as indices of impedance modulation and submovement scaling as an index of feedback driven corrections. Eight subjects tracked three differently sized targets (0.0135, 0.0270, and 0.0405 rad) moving at three different frequencies (0.20, 0.25, and 0.33 Hz). Movement variability decreased with both decreasing target size and movement frequency, whereas stiffness and damping increased with decreasing target size, independent of movement frequency. These results are consistent with the theory that mechanical impedance acts as a filter of noisy neuromuscular signals but challenge stochastic theories of motor control that do not account for impedance modulation and only partially for feedback control. Submovements during unperturbed cycles were quantified in terms of their gain, i.e., the slope between their duration and amplitude in the speed profile. Submovement gain decreased with decreasing movement frequency and increasing target size. The results were interpreted to imply that submovement gain is related to observed tracking errors and that those tracking errors are expressed in units of target size. We conclude that impedance and submovement gain modulation contribute additively to tracking accuracy.
A system dynamics approach to analyze laboratory test errors.
Guo, Shijing; Roudsari, Abdul; Garcez, Artur d'Avila
2015-01-01
Although much research has been carried out to analyze laboratory test errors during the last decade, a systemic view is still lacking, especially one that traces errors through the test process and evaluates potential interventions. This study applies system dynamics modeling to laboratory errors in order to trace laboratory error flows and to simulate system behavior while changing internal variable values. A change in a variable may reflect a change in demand or a proposed intervention. A review of the literature on laboratory test errors was conducted and provided the main data source for the system dynamics model. Three "what if" scenarios were selected for testing the model. System behaviors were observed and compared under different scenarios over a period of time. The results suggest that system dynamics modeling has potential effectiveness for helping to understand laboratory errors, observing model behaviours, and providing risk-free simulation experiments for possible strategies.
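A minimal stock-and-flow sketch in the spirit of the approach described above: two error stocks are integrated over time and a "what if" change to a detection rate is compared against a baseline. The structure, rates, and time horizon are illustrative assumptions, not the study's model.

```python
import numpy as np

def simulate(weeks=52, samples_per_week=1000.0, error_rate=0.02,
             detection_rate=0.6, correction_delay_weeks=2.0):
    """Euler integration of a toy stock-and-flow model of laboratory errors."""
    undetected = 0.0            # stock: errors not yet caught
    reported = 0.0              # stock: errors detected and awaiting correction
    dt = 1.0
    history = []
    for _ in range(weeks):
        new_errors = samples_per_week * error_rate       # inflow of new errors
        detected = undetected * detection_rate           # flow to the 'reported' stock
        corrected = reported / correction_delay_weeks    # outflow once corrected
        undetected += dt * (new_errors - detected)
        reported += dt * (detected - corrected)
        history.append(undetected + reported)
    return np.array(history)

baseline = simulate()
intervention = simulate(detection_rate=0.8)   # "what if" scenario: better detection
print("errors in system, final week:", baseline[-1], "vs", intervention[-1])
```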
File Assignment in a Central Server Computer Network.
1979-01-01
somewhat artificial for many applications. Sometimes important variables must be known in advance when they are more appropriately decision variables... intelligently, we must have some notion of the errors that may be introduced. We must account for two types of errors. The first is the error
NASA Technical Reports Server (NTRS)
Simpson, C. A.
1985-01-01
In the present study of the responses of pairs of pilots to aircraft warning classification tasks using an isolated word, speaker-dependent speech recognition system, the induced stress was manipulated by means of different scoring procedures for the classification task and by the inclusion of a competitive manual control task. Both speech patterns and recognition accuracy were analyzed, and recognition errors were recorded by type for an isolated word speaker-dependent system and by an offline technique for a connected word speaker-dependent system. While errors increased with task loading for the isolated word system, there was no such effect for task loading in the case of the connected word system.
Logic design for dynamic and interactive recovery.
NASA Technical Reports Server (NTRS)
Carter, W. C.; Jessep, D. C.; Wadia, A. B.; Schneider, P. R.; Bouricius, W. G.
1971-01-01
Recovery in a fault-tolerant computer means the continuation of system operation with data integrity after an error occurs. This paper delineates two parallel concepts embodied in the hardware and software functions required for recovery: detection, diagnosis, and reconfiguration for the hardware; data integrity, checkpointing, and restart for the software. The hardware relies on the recovery variable set, checking circuits, and diagnostics, and the software relies on the recovery information set, audit, and reconstruct routines, to characterize the system state and assist in recovery when required. Of particular utility is a hardware unit, the recovery control unit, which serves as an interface between error detection and software recovery programs in the supervisor and provides dynamic interactive recovery.
Improved Quality in Aerospace Testing Through the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, R.
2000-01-01
This paper illustrates how, in the presence of systematic error, the quality of an experimental result can be influenced by the order in which the independent variables are set. It is suggested that in typical experimental circumstances in which systematic errors are significant, the common practice of organizing the set point order of independent variables to maximize data acquisition rate results in a test matrix that fails to produce the highest quality research result. With some care to match the volume of data required to satisfy inference error risk tolerances, it is possible to accept a lower rate of data acquisition and still produce results of higher technical quality (lower experimental error) with less cost and in less time than conventional test procedures, simply by optimizing the sequence in which independent variable levels are set.
Teng, C-C; Chai, H; Lai, D-M; Wang, S-F
2007-02-01
Previous research has shown that there is no significant relationship between the degree of structural degeneration of the cervical spine and neck pain. We therefore sought to investigate the potential role of sensory dysfunction in chronic neck pain. Cervicocephalic kinesthetic sensibility, expressed by how accurately an individual can reposition the head, was studied in three groups of individuals, a control group of 20 asymptomatic young adults and two groups of middle-aged adults (20 subjects in each group) with or without a history of mild neck pain. An ultrasound-based three-dimensional coordinate measuring system was used to measure the position of the head and to test the accuracy of repositioning. Constant error (indicating that the subject overshot or undershot the intended position) and root mean square errors (representing total errors of accuracy and variability) were measured during repositioning of the head to the neutral head position (Head-to-NHP) and repositioning of the head to the target (Head-to-Target) in three cardinal planes (sagittal, transverse, and frontal). Analysis of covariance (ANCOVA) was used to test the group effect, with age used as a covariate. The constant errors during repositioning from a flexed position and from an extended position to the NHP were significantly greater in the middle-aged subjects than in the control group (beta=0.30 and beta=0.60, respectively; P<0.05 for both). In addition, the root mean square errors during repositioning from a flexed or extended position to the NHP were greater in the middle-aged subjects than in the control group (beta=0.27 and beta=0.49, respectively; P<0.05 for both). The root mean square errors also increased during Head-to-Target in left rotation (beta=0.24;P<0.05), but there was no difference in the constant errors or root mean square errors during Head-to-NHP repositioning from other target positions (P>0.05). The results indicate that, after controlling for age as a covariate, there was no group effect. Thus, age appears to have a profound effect on an individual's ability to accurately reposition the head toward the neutral position in the sagittal plane and repositioning the head toward left rotation. A history of mild chronic neck pain alone had no significant effect on cervicocephalic kinesthetic sensibility.
Selvaraj, P; Sakthivel, R; Kwon, O M
2018-06-07
This paper addresses the problem of finite-time synchronization of stochastic coupled neural networks (SCNNs) subject to Markovian switching, mixed time delay, and actuator saturation. In addition, coupling strengths of the SCNNs are characterized by mutually independent random variables. By utilizing a simple linear transformation, the problem of stochastic finite-time synchronization of SCNNs is converted into a mean-square finite-time stabilization problem of an error system. By choosing a suitable mode dependent switched Lyapunov-Krasovskii functional, a new set of sufficient conditions is derived to guarantee the finite-time stability of the error system. Subsequently, with the help of anti-windup control scheme, the actuator saturation risks could be mitigated. Moreover, the derived conditions help to optimize estimation of the domain of attraction by enlarging the contractively invariant set. Furthermore, simulations are conducted to exhibit the efficiency of proposed control scheme. Copyright © 2018 Elsevier Ltd. All rights reserved.
Emmetropisation and the aetiology of refractive errors
Flitcroft, D I
2014-01-01
The distribution of human refractive errors displays features that are not commonly seen in other biological variables. Compared with the more typical Gaussian distribution, adult refraction within a population typically has a negative skew and increased kurtosis (ie is leptokurtic). This distribution arises from two apparently conflicting tendencies, first, the existence of a mechanism to control eye growth during infancy so as to bring refraction towards emmetropia/low hyperopia (ie emmetropisation) and second, the tendency of many human populations to develop myopia during later childhood and into adulthood. The distribution of refraction therefore changes significantly with age. Analysis of the processes involved in shaping refractive development allows for the creation of a life course model of refractive development. Monte Carlo simulations based on such a model can recreate the variation of refractive distributions seen from birth to adulthood and the impact of increasing myopia prevalence on refractive error distributions in Asia. PMID:24406411
Observation of non-classical correlations in sequential measurements of photon polarization
NASA Astrophysics Data System (ADS)
Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F.
2016-10-01
A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength.
Keefer, Patricia; Kidwell, Kelley; Lengyel, Candice; Warrier, Kavita; Wagner, Deborah
2017-01-01
Voluntary medication error reporting is an imperfect resource used to improve the quality of medication administration. It requires judgment by front-line staff to determine how to report enough to identify opportunities to improve patients' safety but not jeopardize that safety by creating a culture of "report fatigue." This study aims to provide information on the interpretability of medication errors and the variability between subgroups of caregivers in the hospital setting. Survey participants included nurses, physicians (trainees and graduates), patients/families, and pharmacists across a large academic health system, including an attached free-standing pediatric hospital. Demographics and survey questions were collected and analyzed using Fisher's exact test with SAS v9.3. Statistically significant variability existed between the four groups for a majority of the questions. This included all cases designated as administration errors and many, but not all, cases of prescribing events. Commentary provided in the free-text portion of the survey was sub-analyzed and found to be associated with medication allergy reporting and lack of education surrounding report characteristics. There is significant variability in the threshold to report specific medication errors in the hospital setting. More work needs to be done to further improve the education surrounding error reporting in hospitals for all noted subgroups. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Intraindividual variability in inhibitory function in adults with ADHD--an ex-Gaussian approach.
Gmehlin, Dennis; Fuermaier, Anselm B M; Walther, Stephan; Debelak, Rudolf; Rentrop, Mirjam; Westermann, Celina; Sharma, Anuradha; Tucha, Lara; Koerts, Janneke; Tucha, Oliver; Weisbrod, Matthias; Aschenbrenner, Steffen
2014-01-01
Attention deficit hyperactivity disorder (ADHD) is commonly associated with inhibitory dysfunction contributing to typical behavioral symptoms like impulsivity or hyperactivity. However, some studies analyzing intraindividual variability (IIV) of reaction times in children with ADHD (cADHD) question a predominance of inhibitory deficits. IIV is a measure of the stability of information processing and provides evidence that longer reaction times (RT) in inhibitory tasks in cADHD are due to only a few prolonged responses, which may indicate deficits in sustained attention rather than inhibitory dysfunction. We wanted to find out whether a slowing in inhibitory functioning in adults with ADHD (aADHD) is due to isolated slow responses. Computing classical RT measures (mean RT, SD), ex-Gaussian parameters of IIV (which allow a better separation of reaction time (mu), variability (sigma) and abnormally slow responses (tau) than classical measures) as well as errors of omission and commission, we examined response inhibition in a well-established GoNogo task in a sample of aADHD subjects without medication and healthy controls matched for age, gender and education. We did not find higher numbers of commission errors in aADHD, while the number of omissions was significantly increased compared with controls. In contrast to increased mean RT, the distributional parameter mu did not document a significant slowing in aADHD. However, subjects with aADHD were characterized by increased IIV throughout the entire RT distribution, as indicated by the parameters sigma and tau as well as the SD of reaction time. Moreover, we found a significant correlation between tau and the number of omission errors. Our findings question a primacy of inhibitory deficits in aADHD and provide evidence for attentional dysfunction. The present findings may have theoretical implications for etiological models of ADHD as well as more practical implications for neuropsychological testing in aADHD.
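A small sketch of ex-Gaussian fitting of reaction times using SciPy's exponentially modified normal distribution, which is parameterised by a shape K = tau/sigma, loc = mu, and scale = sigma; the simulated RT values below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
mu, sigma, tau = 0.45, 0.05, 0.15        # seconds: Gaussian part plus exponential tail
rt = rng.normal(mu, sigma, 500) + rng.exponential(tau, 500)   # simulated reaction times

# scipy's exponnorm uses shape K = tau/sigma, loc = mu, scale = sigma
K, loc, scale = stats.exponnorm.fit(rt)
print("mu ~", round(loc, 3), " sigma ~", round(scale, 3), " tau ~", round(K * scale, 3))
```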
USDA-ARS's Scientific Manuscript database
The internal consistency of the Test of Variables of Attention (TOVA) was examined in a cohort of 6- to 12-year-old children (N = 63) strictly diagnosed with ADHD. The internal consistency of errors of omission (OMM), errors of commission (COM), response time (RT), and response time variability (RTV...
ECG compression using non-recursive wavelet transform with quality control
NASA Astrophysics Data System (ADS)
Liu, Je-Hung; Hung, King-Chu; Wu, Tsung-Ching
2016-09-01
While wavelet-based electrocardiogram (ECG) data compression using scalar quantisation (SQ) yields excellent compression performance, a wavelet SQ scheme must select a set of multilevel quantisers for each quantisation process. Because of the properties of multiple-to-one mapping, this scheme is not conducive to reconstruction error control. In order to address this problem, this paper presents a single-variable control SQ scheme able to guarantee the reconstruction quality of wavelet-based ECG data compression. Based on the reversible round-off non-recursive discrete periodised wavelet transform (RRO-NRDPWT), the SQ scheme is derived with a three-stage design process: the first stage uses a genetic algorithm (GA) for high compression ratio (CR), the second applies quadratic curve fitting for linear distortion control, and the third uses fuzzy decision-making to minimise the data dependency effect and select the optimal SQ. Two databases, the Physikalisch-Technische Bundesanstalt (PTB) and the Massachusetts Institute of Technology (MIT) arrhythmia databases, are used to evaluate quality control performance. Experimental results show that the design method guarantees a high-compression-performance SQ scheme with statistically linear distortion. This property can be independent of training data and can facilitate rapid error control.
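A heavily simplified sketch of single-variable distortion control for wavelet-based ECG compression: a single quantisation step size is bisected until a target percentage root-mean-square difference (PRD) is met. It uses PyWavelets' standard DWT as a stand-in for the RRO-NRDPWT and omits the GA, curve-fitting, and fuzzy stages; the wavelet choice, decomposition level, and target PRD are assumptions.

```python
import numpy as np
import pywt

def compress(ecg, step, wavelet="db4", level=5):
    """Quantise all wavelet coefficients with a single step size; return PRD and nonzeros."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    q = [np.round(c / step) for c in coeffs]                  # scalar quantisation
    rec = pywt.waverec([c * step for c in q], wavelet)[: len(ecg)]
    prd = 100 * np.linalg.norm(ecg - rec) / np.linalg.norm(ecg - ecg.mean())
    nonzero = sum(int(np.count_nonzero(c)) for c in q)
    return prd, nonzero

def step_for_target_prd(ecg, target_prd=4.0):
    """Single control variable: bisect the quantisation step to hit the target PRD."""
    lo, hi = 1e-4, 1.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        prd, _ = compress(ecg, mid)
        lo, hi = (mid, hi) if prd < target_prd else (lo, mid)
    return 0.5 * (lo + hi)

ecg = np.sin(np.linspace(0, 20 * np.pi, 4096)) + 0.05 * np.random.default_rng(0).normal(size=4096)
print(step_for_target_prd(ecg))
```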
Interleukin-6 predicts recurrence and survival among head and neck cancer patients.
Duffy, Sonia A; Taylor, Jeremy M G; Terrell, Jeffrey E; Islam, Mozaffarul; Li, Yun; Fowler, Karen E; Wolf, Gregory T; Teknos, Theodoros N
2008-08-15
Increased pretreatment serum interleukin (IL)-6 levels among patients with head and neck squamous cell carcinoma (HNSCC) have been shown to correlate with poor prognosis, but sample sizes in prior studies have been small and thus unable to control for other known prognostic variables. A longitudinal, prospective cohort study determined the correlation between pretreatment serum IL-6 levels, and tumor recurrence and all-cause survival in a large population (N = 444) of previously untreated HNSCC patients. Control variables included age, sex, smoking, cancer site and stage, and comorbidities. Kaplan-Meier plots and univariate and multivariate Cox proportional hazards models were used to study the association between IL-6 levels, control variables, and time to recurrence and survival. The median serum IL-6 level was 13 pg/mL (range, 0-453). The 2-year recurrence rate was 35.2% (standard error, 2.67%). The 2-year death rate was 26.5% (standard error, 2.26%). Multivariate analyses showed that serum IL-6 levels independently predicted recurrence at significant levels [hazard ratio (HR) = 1.32; 95% confidence interval (CI), 1.11 to 1.58; P = .002] as did cancer site (oral/sinus). Serum IL-6 level was also a significant independent predictor of poor survival (HR = 1.22; 95% CI, 1.02 to 1.46; P = .03), as were older age, smoking, cancer site (oral/sinus), higher cancer stage, and comorbidities. Pretreatment serum IL-6 could be a valuable biomarker for predicting recurrence and overall survival among HNSCC patients. Using IL-6 as a biomarker for recurrence and survival may allow for earlier identification and treatment of disease relapse. 2008 American Cancer Society
Energy Storage Sizing Taking Into Account Forecast Uncertainties and Receding Horizon Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Hug, Gabriela; Li, Xin
Energy storage systems (ESS) have the potential to be very beneficial for applications such as reducing the ramping of generators, peak shaving, and balancing not only the variability introduced by renewable energy sources, but also the uncertainty introduced by errors in their forecasts. Optimal usage of storage may result in reduced generation costs and an increased use of renewable energy. However, optimally sizing these devices is a challenging problem. This paper aims to provide the tools to optimally size an ESS under the assumption that it will be operated under a model predictive control scheme and that the forecast of the renewable energy resources include prediction errors. A two-stage stochastic model predictive control is formulated and solved, where the optimal usage of the storage is simultaneously determined along with the optimal generation outputs and size of the storage. Wind forecast errors are taken into account in the optimization problem via probabilistic constraints for which an analytical form is derived. This allows for the stochastic optimization problem to be solved directly, without using sampling-based approaches, and sizing the storage to account not only for a wide range of potential scenarios, but also for a wide range of potential forecast errors. In the proposed formulation, we account for the fact that errors in the forecast affect how the device is operated later in the horizon and that a receding horizon scheme is used in operation to optimally use the available storage.
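The abstract does not give the analytical form of the probabilistic constraints; under a common Gaussian assumption on the wind forecast error, a chance constraint of this type admits the standard deterministic equivalent sketched below (the notation is illustrative, not the paper's).

```latex
% Chance constraint on serving net load d_t with generation g_t, storage power s_t,
% and wind forecast w_t subject to a forecast error e_t ~ N(0, sigma_t^2):
\[
  \Pr\bigl(g_t + s_t + w_t + e_t \ge d_t\bigr) \ge 1 - \varepsilon
  \quad\Longleftrightarrow\quad
  g_t + s_t + w_t \ge d_t + \sigma_t\,\Phi^{-1}(1 - \varepsilon),
\]
% where \Phi^{-1} is the standard normal quantile function.  The reformulated
% constraint is deterministic, so the problem can be solved without sampling.
```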
Creel, Scott; Creel, Michael
2009-11-01
1. Sampling error in annual estimates of population size creates two widely recognized problems for the analysis of population growth. First, if sampling error is mistakenly treated as process error, one obtains inflated estimates of the variation in true population trajectories (Staples, Taper & Dennis 2004). Second, treating sampling error as process error is thought to overestimate the importance of density dependence in population growth (Viljugrein et al. 2005; Dennis et al. 2006). 2. In ecology, state-space models are used to account for sampling error when estimating the effects of density and other variables on population growth (Staples et al. 2004; Dennis et al. 2006). In econometrics, regression with instrumental variables is a well-established method that addresses the problem of correlation between regressors and the error term, but requires fewer assumptions than state-space models (Davidson & MacKinnon 1993; Cameron & Trivedi 2005). 3. We used instrumental variables to account for sampling error and fit a generalized linear model to 472 annual observations of population size for 35 Elk Management Units in Montana, from 1928 to 2004. We compared this model with state-space models fit with the likelihood function of Dennis et al. (2006). We discuss the general advantages and disadvantages of each method. Briefly, regression with instrumental variables is valid with fewer distributional assumptions, but state-space models are more efficient when their distributional assumptions are met. 4. Both methods found that population growth was negatively related to population density and winter snow accumulation. Summer rainfall and wolf (Canis lupus) presence had much weaker effects on elk (Cervus elaphus) dynamics [though limitation by wolves is strong in some elk populations with well-established wolf populations (Creel et al. 2007; Creel & Christianson 2008)]. 5. Coupled with predictions for Montana from global and regional climate models, our results predict a substantial reduction in the limiting effect of snow accumulation on Montana elk populations in the coming decades. If other limiting factors do not operate with greater force, population growth rates would increase substantially.
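A hedged sketch of the instrumental-variables idea for growth regressions with sampling error: the observed log count appears in both the growth rate and the regressor, so an earlier observed count is used as an instrument in a basic two-stage least-squares fit. The simulated dynamics, instrument choice, and error magnitudes are assumptions, not the study's specification.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """Basic 2SLS: instrument X with Z (both include an intercept column)."""
    Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage: project X on Z
    return np.linalg.lstsq(Xhat, y, rcond=None)[0]    # second stage

rng = np.random.default_rng(3)
T = 300
true_n = np.empty(T); true_n[0] = 7.0                 # log abundance
for t in range(1, T):                                 # Gompertz-type density dependence
    true_n[t] = true_n[t - 1] + 0.5 - 0.07 * true_n[t - 1] + rng.normal(0, 0.1)
obs = true_n + rng.normal(0, 0.2, T)                  # observed log counts with sampling error

growth = obs[2:] - obs[1:-1]                          # observed growth rate
X = np.column_stack([np.ones(T - 2), obs[1:-1]])      # regressor: current observed count
Z = np.column_stack([np.ones(T - 2), obs[:-2]])       # instrument: earlier observed count
print("2SLS density dependence:", two_stage_least_squares(growth, X, Z)[1])
print("OLS  density dependence:", np.linalg.lstsq(X, growth, rcond=None)[0][1])
```

The OLS slope is inflated toward stronger apparent density dependence because the same sampling error enters both sides of the regression, while the instrumented estimate stays close to the true value.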
Generalized Autobalanced Ramsey Spectroscopy of Clock Transitions
NASA Astrophysics Data System (ADS)
Yudin, V. I.; Taichenachev, A. V.; Basalaev, M. Yu.; Zanon-Willette, T.; Pollock, J. W.; Shuker, M.; Donley, E. A.; Kitching, J.
2018-05-01
When performing precision measurements, the quantity being measured is often perturbed by the measurement process itself. Such measurements include precision frequency measurements for atomic clock applications carried out with Ramsey spectroscopy. With the aim of eliminating probe-induced perturbations, a method of generalized autobalanced Ramsey spectroscopy (GABRS) is presented and rigorously substantiated. The usual local-oscillator frequency control loop is augmented with a second control loop derived from secondary Ramsey sequences interspersed with the primary sequences and with a different Ramsey period. This second loop feeds back to a secondary clock variable and ultimately compensates for the perturbation of the clock frequency caused by the measurements in the first loop. We show that such a two-loop scheme can lead to perfect compensation for measurement-induced light shifts and does not suffer from the effects of relaxation, time-dependent pulse fluctuations and phase-jump modulation errors that are typical of other hyper-Ramsey schemes. Several variants of GABRS are explored based on different secondary variables including added relative phase shifts between Ramsey pulses, external frequency-step compensation, and variable second-pulse duration. We demonstrate that a universal antisymmetric error signal, and hence perfect compensation at a finite modulation amplitude, is generated only if an additional frequency step applied during both Ramsey pulses is used as the concomitant variable parameter. This universal technique can be applied to the fields of atomic clocks, high-resolution molecular spectroscopy, magnetically induced and two-photon probing schemes, Ramsey-type mass spectrometry, and the field of precision measurements. Some variants of GABRS can also be applied for rf atomic clocks using coherent-population-trapping-based Ramsey spectroscopy of the two-photon dark resonance.
QUANTIFYING UNCERTAINTY IN NET PRIMARY PRODUCTION MEASUREMENTS
Net primary production (NPP, e.g., g m-2 yr-1), a key ecosystem attribute, is estimated from a combination of other variables, e.g. standing crop biomass at several points in time, each of which is subject to errors in their measurement. These errors propagate as the variables a...
GY SAMPLING THEORY AND GEOSTATISTICS: ALTERNATE MODELS OF VARIABILITY IN CONTINUOUS MEDIA
In the sampling theory developed by Pierre Gy, sample variability is modeled as the sum of a set of seven discrete error components. The variogram used in geostatistics provides an alternate model in which several of Gy's error components are combined in a continuous mode...
Sample Size Limits for Estimating Upper Level Mediation Models Using Multilevel SEM
ERIC Educational Resources Information Center
Li, Xin; Beretvas, S. Natasha
2013-01-01
This simulation study investigated use of the multilevel structural equation model (MLSEM) for handling measurement error in both mediator and outcome variables ("M" and "Y") in an upper level multilevel mediation model. Mediation and outcome variable indicators were generated with measurement error. Parameter and standard…
Regression-assisted deconvolution.
McIntyre, Julie; Stefanski, Leonard A
2011-06-30
We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.
Timing variability of reach trajectories in left versus right hemisphere stroke.
Freitas, Sandra Maria Sbeghen Ferreira; Gera, Geetanjali; Scholz, John Peter
2011-10-24
This study investigated trajectory timing variability in right and left stroke survivors and healthy controls when reaching to a centrally located target under a fixed target condition or when the target could suddenly change position after reach onset. Trajectory timing variability was investigated with a novel method based on dynamic programming that identifies the steps required to time warp one trial's acceleration time series to match that of a reference trial. Greater trajectory timing variability of both hand and joint motions was found for the paretic arm of stroke survivors compared to their non-paretic arm or either arm of controls. Overall, the non-paretic left arm of the LCVA group and the left arm of controls had higher timing variability than the non-paretic right arm of the RCVA group and right arm of controls. The shoulder and elbow joint warping costs were consistent predictors of the hand's warping cost for both left and right arms only in the LCVA group, whereas the relationship between joint and hand warping costs was relatively weak in control subjects and less consistent across arms in the RCVA group. These results suggest that the left hemisphere may be more involved in trajectory timing, although the results may be confounded by skill differences between the arms in these right hand dominant participants. On the other hand, arm differences did not appear to be related to differences in targeting error. The paretic left arm of the RCVA exhibited greater trajectory timing variability than the paretic right arm of the LCVA group. This difference was highly correlated with the level of impairment of the arms. Generally, the effect of target uncertainty resulted in slightly greater trajectory timing variability for all participants. The results are discussed in light of previous studies of hemispheric differences in the control of reaching, in particular, left hemisphere specialization for temporal control of reaching movements. Copyright © 2011 Elsevier B.V. All rights reserved.
Effects of aging on control of timing and force of finger tapping.
Sasaki, Hirokazu; Masumoto, Junya; Inui, Nobuyuki
2011-04-01
The present study examined whether the elderly produced a hastened or delayed tap with a negative or positive constant intertap interval error more frequently in self-paced tapping than in the stimulus-synchronized tapping for the 2 N target force at 2 or 4 Hz frequency. The analysis showed that, at both frequencies, the percentage of the delayed tap was larger in the self-paced tapping than in the stimulus-synchronized tapping, whereas the hastened tap showed the opposite result. At the 4 Hz frequency, all age groups had more variable intertap intervals during the self-paced tapping than during the stimulus-synchronized tapping, and the variability of the intertap intervals increased with age. Thus, although the increase in the frequency of delayed taps and variable intertap intervals in the self-paced tapping perhaps resulted from a dysfunction of movement timing in the basal ganglia with age, the decline in timing accuracy was somewhat improved by an auditory cue. The force variability of tapping at 4 Hz further increased with age, indicating an effect of aging on the control of force.
Iampietro, Mary; Giovannetti, Tania; Drabick, Deborah A. G.; Kessler, Rachel K.
2013-01-01
Executive function (EF) deficits in schizophrenia (SZ) are well documented, although much less is known about patterns of EF deficits and their association to differential impairments in everyday functioning. The present study empirically defined SZ groups based on measures of various EF abilities and then compared these EF groups on everyday action errors. Participants (n=45) completed various subtests from the Delis–Kaplan Executive Function System (D-KEFS) and the Naturalistic Action Test (NAT), a performance-based measure of everyday action that yields scores reflecting total errors and a range of different error types (e.g., omission, perseveration). Results of a latent class analysis revealed three distinct EF groups, characterized by (a) multiple EF deficits, (b) relatively spared EF, and (c) perseverative responding. Follow-up analyses revealed that the classes differed significantly on NAT total errors, total commission errors, and total perseveration errors; the two classes with EF impairment performed comparably on the NAT but performed worse than the class with relatively spared EF. In sum, people with SZ demonstrate variable patterns of EF deficits, and distinct aspects of these EF deficit patterns (i.e., poor mental control abilities) may be associated with everyday functioning capabilities. PMID:23035705
Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.
Falk, Carl F; Biesanz, Jeremy C
2011-11-30
Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, & Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; and (d) 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
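As a concrete, hedged illustration of the percentile bootstrap for an indirect effect, the sketch below uses observed variables (X → M → Y) rather than the latent-variable models simulated in the study; the sample size, path values, and seed are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy observed-variable mediation data, X -> M -> Y, with true a = b = 0.39
n = 200
x = rng.normal(size=n)
m = 0.39 * x + rng.normal(size=n)
y = 0.39 * m + rng.normal(size=n)

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                          # slope of M on X
    b = np.linalg.lstsq(np.column_stack([m, x, np.ones(len(x))]),
                        y, rcond=None)[0][0]            # slope of Y on M, adjusting for X
    return a * b

boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)                         # resample cases with replacement
    boot[i] = indirect(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(x, m, y):.3f}, "
      f"95% percentile CI = [{lo:.3f}, {hi:.3f}]")
```

The percentile interval simply takes the 2.5th and 97.5th percentiles of the bootstrap distribution of a*b; the BC and BCa variants adjust these cut points, which is where the inflated Type I error reported in the study arises.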
Comparison of structural and least-squares lines for estimating geologic relations
Williams, G.P.; Troutman, B.M.
1990-01-01
Two different goals in fitting straight lines to data are to estimate a "true" linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating "true" straight-line relations. Number of data points, slope and intercept of the true relation, and variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as error in X becomes small relative to error in Y. Regarding the second goal, predicting the dependent variable, OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples. © 1990 International Association for Mathematical Geology.
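The contrast between OLS and a structural (errors-in-both-variables) fit can be illustrated with a Deming-type slope that assumes a known error-variance ratio; this is only a sketch with invented data and variances, and the SA procedure in the paper may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
true_slope, true_intercept = 2.0, 1.0
xi = rng.uniform(0, 10, n)                      # true (error-free) X
x = xi + rng.normal(0, 1.0, n)                  # X observed with error
y = true_intercept + true_slope * xi + rng.normal(0, 1.0, n)

# OLS slope: attenuated toward zero when X carries error
b_ols = np.polyfit(x, y, 1)[0]

# Structural (Deming) slope, assuming error-variance ratio lam = var_y / var_x = 1
lam = 1.0
sxx = np.var(x, ddof=1)
syy = np.var(y, ddof=1)
sxy = np.cov(x, y, ddof=1)[0, 1]
b_sa = (syy - lam * sxx
        + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)

print(f"true slope = {true_slope}, OLS = {b_ols:.2f}, structural = {b_sa:.2f}")
```

With comparable error in X and Y, the OLS slope is biased low while the structural slope stays near the "true" relation, matching the paper's first conclusion; for pure prediction of Y from the observed X, the attenuated OLS slope is nevertheless the better choice.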
Metric Selection for Evaluation of Human Supervisory Control Systems
2009-12-01
finding a significant effect when there is none becomes more likely. The inflation of type I error due to multiple dependent variables can be handled...with multivariate analysis techniques, such as Multivariate Analysis of Variance (MANOVA) (Johnson & Wichern, 2002). However, it should be noted that...the few significant differences among many insignificant ones. The best way to avoid failure to identify significant differences is to design an
Obligation towards medical errors disclosure at a tertiary care hospital in Dubai, UAE
Zaghloul, Ashraf Ahmad; Rahman, Syed Azizur; Abou El-Enein, Nagwa Younes
2016-01-01
OBJECTIVE: The study aimed to identify healthcare providers’ obligation towards medical errors disclosure as well as to study the association between the severity of the medical error and the intention to disclose the error to the patients and their families. DESIGN: A cross-sectional study design was followed to identify the magnitude of disclosure among healthcare providers in different departments at a randomly selected tertiary care hospital in Dubai. SETTING AND PARTICIPANTS: The total sample size accounted for 106 respondents. Data were collected using a questionnaire composed of two sections namely; demographic variables of the respondents and a section which included variables relevant to medical error disclosure. RESULTS: Statistical analysis yielded significant association between the obligation to disclose medical errors with male healthcare providers (X2 = 5.1), and being a physician (X2 = 19.3). Obligation towards medical errors disclosure was significantly associated with those healthcare providers who had not committed any medical errors during the past year (X2 = 9.8), and any type of medical error regardless the cause, extent of harm (X2 = 8.7). Variables included in the binary logistic regression model were; status (Exp β (Physician) = 0.39, 95% CI 0.16–0.97), gender (Exp β (Male) = 4.81, 95% CI 1.84–12.54), and medical errors during the last year (Exp β (None) = 2.11, 95% CI 0.6–2.3). CONCLUSION: Education and training of physicians about disclosure conversations needs to start as early as medical school. Like the training in other competencies required of physicians, education in communicating about medical errors could help reduce physicians’ apprehension and make them more comfortable with disclosure conversations. PMID:27567766
Data Analysis & Statistical Methods for Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors, the number of files radiated, the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these variables. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
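The mission data are not reproduced here, so the sketch below only illustrates the general shape of such a model: a multiple linear regression of weekly error rates on workload and novelty scores, with all variable names and values hypothetical rather than taken from the JPL dataset.

```python
import numpy as np

rng = np.random.default_rng(42)
weeks = 60

# Hypothetical weekly operations data (names are illustrative, not from the paper)
files_radiated = rng.poisson(20, weeks)
workload = rng.uniform(1, 5, weeks)          # subjective workload rating
novelty = rng.uniform(0, 1, weeks)           # operational novelty score
rate = 0.002 + 0.0015 * workload + 0.004 * novelty
errors = rng.poisson(rate * files_radiated)

# Multiple linear regression of the observed error rate on the candidate drivers
y = errors / np.maximum(files_radiated, 1)
X = np.column_stack([np.ones(weeks), workload, novelty])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["intercept", "workload", "novelty"], np.round(coef, 4))))

# Expected error count for a planned week, given the fitted model
planned = np.array([1.0, 4.0, 0.8])          # intercept term, workload, novelty
print("expected errors for 25 files:", round(25 * planned @ coef, 2))
```

A fitted model of this kind supports the management use described in the abstract: comparing an observed weekly error rate against the rate the model predicts for that week's workload and novelty.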
Clark, S; Rose, D J
2001-04-01
To establish reliability estimates of the 75% Limits of Stability Test (75% LOS test) when administered to community-dwelling older adults with a history of falls. Generalizability theory was used to estimate both the relative contribution of identified error sources to the total measurement error and generalizability coefficients. A random effects repeated-measures analysis of variance (ANOVA) was used to assess consistency of LOS test movement variables across both days and targets. A motor control research laboratory in a university setting. Fifty community-dwelling older adults with 2 or more falls in the previous year. Spatial and temporal measures of dynamic balance derived from the 75% LOS test included average movement velocity, maximum center of gravity (COG) excursion, end-point COG excursion, and directional control. Estimated generalizability coefficients for 2 testing days ranged from .58 to .87. Total variance in LOS test measures attributable to inconsistencies in day-to-day test performance (Day and Subject x Day facets) ranged from 2.5% to 8.4%. The ANOVA results indicated that no significant differences were observed in the LOS test variables across the 2 testing days. The 75% LOS test administered to older adult fallers on 2 consecutive days provides consistent and reliable measures of dynamic balance.
Design of an antagonistic shape memory alloy actuator for flap type control surfaces
NASA Astrophysics Data System (ADS)
Dönmez, Burcu; Özkan, Bülent
2011-03-01
This paper deals with the flap control of unmanned aerial vehicles (UAVs) using shape memory alloy (SMA) actuators in an antagonistic configuration. The use of SMA actuators has the advantage of significant weight and cost reduction over the conventional actuation of the UAV flaps by electric motors or hydraulic actuators. In the antagonistic configuration, two SMA actuators are used: one to rotate the flap clockwise and the other to rotate the flap counterclockwise. In this context, mathematical modeling of strain and power dissipation of the SMA wire is obtained through characterization tests. Afterwards, the model of the antagonistic flap mechanism is derived. Then, based on these models, both the flap angle and the power dissipation of the SMA wire are controlled in two different loops employing proportional-integral type and neural network based control schemes. The angle commands are converted to power commands by the outer-loop controller, and these power commands are updated using the error in the flap angle induced by the indirect control and external effects. In this study, power consumption of the wire is introduced as a new internal feedback variable. The constructed simulation models are run and the performance specifications of the proposed control systems are investigated. Consequently, it is shown that the proposed controllers perform well in terms of achieving small tracking errors.
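The paper's SMA constitutive and thermal models are much richer than anything shown here; the sketch below only illustrates the outer angle-to-power proportional-integral loop on a first-order stand-in plant, with the time constant, gains, and power saturation all assumed for illustration.

```python
import numpy as np

# First-order stand-in plant: flap angle responds to heating power with a lag.
dt, T = 0.01, 10.0
tau, gain = 1.5, 0.8            # assumed plant time constant (s) and static gain (deg/W)
kp, ki = 4.0, 2.0               # assumed PI gains

angle, integ = 0.0, 0.0
target = 10.0                   # commanded flap angle, deg
history = []
for _ in range(int(T / dt)):
    error = target - angle
    integ += error * dt
    power = max(kp * error + ki * integ, 0.0)   # heating power cannot be negative
    angle += dt * (gain * power - angle) / tau  # first-order flap response
    history.append(angle)

print(f"final angle = {history[-1]:.2f} deg, "
      f"tracking error = {target - history[-1]:.3f} deg")
```

The integral term drives the steady-state tracking error toward zero even though the flap is controlled only indirectly through the heating power, which is the role the abstract assigns to the outer-loop power command update.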
Kalman Filter Estimation of Spinning Spacecraft Attitude using Markley Variables
NASA Technical Reports Server (NTRS)
Sedlak, Joseph E.; Harman, Richard
2004-01-01
There are several different ways to represent spacecraft attitude and its time rate of change. For spinning or momentum-biased spacecraft, one particular representation has been put forward as a superior parameterization for numerical integration. Markley has demonstrated that these new variables have fewer rapidly varying elements for spinning spacecraft than other commonly used representations and provide advantages when integrating the equations of motion. The current work demonstrates how a Kalman filter can be devised to estimate the attitude using these new variables. The seven Markley variables are subject to one constraint condition, making the error covariance matrix singular. The filter design presented here explicitly accounts for this constraint by using a six-component error state in the filter update step. The reduced dimension error state is unconstrained and its covariance matrix is nonsingular.
Makeyev, Oleksandr; Joe, Cody; Lee, Colin; Besio, Walter G
2017-07-01
Concentric ring electrodes have shown promise in non-invasive electrophysiological measurement demonstrating their superiority to conventional disc electrodes, in particular, in accuracy of Laplacian estimation. Recently, we have proposed novel variable inter-ring distances concentric ring electrodes. Analytic and finite element method modeling results for linearly increasing distances electrode configurations suggested they may decrease the truncation error resulting in more accurate Laplacian estimates compared to currently used constant inter-ring distances configurations. This study assesses statistical significance of Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes. Full factorial design of analysis of variance was used with one categorical and two numerical factors: the inter-ring distances, the electrode diameter, and the number of concentric rings in the electrode. The response variables were the Relative Error and the Maximum Error of Laplacian estimation computed using a finite element method model for each of the combinations of levels of three factors. Effects of the main factors and their interactions on Relative Error and Maximum Error were assessed and the obtained results suggest that all three factors have statistically significant effects in the model confirming the potential of using inter-ring distances as a means of improving accuracy of Laplacian estimation.
Jones-Jordan, Lisa A.; Sinnott, Loraine T.; Graham, Nicholas D.; Cotter, Susan A.; Kleinstein, Robert N.; Manny, Ruth E.; Mutti, Donald O.; Twelker, J. Daniel; Zadnik, Karla
2014-01-01
Purpose. We determined the correlation between sibling refractive errors adjusted for shared and unique environmental factors using data from the Collaborative Longitudinal Evaluation of Ethnicity and Refractive Error (CLEERE) Study. Methods. Refractive error from subjects' last study visits was used to estimate the intraclass correlation coefficient (ICC) between siblings. The correlation models used environmental factors (diopter-hours and outdoor/sports activity) assessed annually from parents by survey to adjust for shared and unique environmental exposures when estimating the heritability of refractive error (2*ICC). Results. Data from 700 families contributed to the between-sibling correlation for spherical equivalent refractive error. The mean age of the children at the last visit was 13.3 ± 0.90 years. Siblings engaged in similar amounts of near and outdoor activities (correlations ranged from 0.40–0.76). The ICC for spherical equivalent, controlling for age, sex, ethnicity, and site was 0.367 (95% confidence interval [CI] = 0.304, 0.420), with an estimated heritability of no more than 0.733. After controlling for these variables, and near and outdoor/sports activities, the resulting ICC was 0.364 (95% CI = 0.304, 0.420; estimated heritability no more than 0.728, 95% CI = 0.608, 0.850). The ICCs did not differ significantly between male–female and single sex pairs. Conclusions. Adjusting for shared family and unique, child-specific environmental factors only reduced the estimate of refractive error correlation between siblings by 0.5%. Consistent with a lack of association between myopia progression and either near work or outdoor/sports activity, substantial common environmental exposures had little effect on this correlation. Genetic effects appear to have the major role in determining the similarity of refractive error between siblings. PMID:25205866
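The heritability bound reported above is twice the intraclass correlation between siblings; the sketch below computes an ICC from a one-way random-effects ANOVA on simulated sibling pairs, with invented variance components and none of the covariate adjustment used in the CLEERE models.

```python
import numpy as np

rng = np.random.default_rng(7)
n_families = 300
family_effect = rng.normal(0, 1.0, n_families)              # shared between siblings
pairs = family_effect[:, None] + rng.normal(0, 1.3, (n_families, 2))

# One-way random-effects ANOVA ICC for k = 2 siblings per family
k = 2
grand = pairs.mean()
msb = k * np.sum((pairs.mean(axis=1) - grand) ** 2) / (n_families - 1)
msw = np.sum((pairs - pairs.mean(axis=1, keepdims=True)) ** 2) / (n_families * (k - 1))
icc = (msb - msw) / (msb + (k - 1) * msw)
print(f"ICC = {icc:.3f}, heritability upper bound 2*ICC = {2 * icc:.3f}")
```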
ERIC Educational Resources Information Center
Haley, Katarina L.; Jacks, Adam; Cunningham, Kevin T.
2013-01-01
Purpose: This study was conducted to evaluate the clinical utility of error variability for differentiating between apraxia of speech (AOS) and aphasia with phonemic paraphasia. Method: Participants were 32 individuals with aphasia after left cerebral injury. Diagnostic groups were formed on the basis of operationalized measures of recognized…
Quantum error correction of continuous-variable states against Gaussian noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ralph, T. C.
2011-08-15
We describe a continuous-variable error correction protocol that can correct the Gaussian noise induced by linear loss on Gaussian states. The protocol can be implemented using linear optics and photon counting. We explore the theoretical bounds of the protocol as well as the expected performance given current knowledge and technology.
Impulse-variability theory: implications for ballistic, multijoint motor skill performance.
Urbin, M A; Stodden, David F; Fischman, Mark G; Weimar, Wendi H
2011-01-01
Impulse-variability theory (R. A. Schmidt, H. N. Zelaznik, B. Hawkins, J. S. Frank, & J. T. Quinn, 1979) accounts for the curvilinear relationship between the magnitude and resulting variability of the muscular forces that influence the success of goal-directed limb movements. The historical roots of impulse-variability theory are reviewed in the 1st part of this article, including the relationship between movement speed and spatial error. The authors then address the relevance of impulse-variability theory for the control of ballistic, multijoint skills, such as throwing, striking, and kicking. These types of skills provide a stark contrast to the relatively simple, minimal degrees of freedom movements that characterized early research. However, the inherent demand for ballistic force generation is a strong parallel between these simple laboratory tasks and multijoint motor skills. Therefore, the authors conclude by recommending experimental procedures for evaluating the adequacy of impulse variability as a theoretical model within the context of ballistic, multijoint motor skill performance. Copyright © Taylor & Francis Group, LLC
Chenausky, Karen; Kernbach, Julius; Norton, Andrea; Schlaug, Gottfried
2017-01-01
We investigated the relationship between imaging variables for two language/speech-motor tracts and speech fluency variables in 10 minimally verbal (MV) children with autism. Specifically, we tested whether measures of white matter integrity, namely fractional anisotropy (FA) of the arcuate fasciculus (AF) and frontal aslant tract (FAT), were related to change in percent syllable-initial consonants correct, percent items responded to, and percent syllable-insertion errors (from best baseline to post 25 treatment sessions). Twenty-three MV children with autism spectrum disorder (ASD) received Auditory-Motor Mapping Training (AMMT), an intonation-based treatment to improve fluency in spoken output, and we report on seven who received a matched control treatment. Ten of the AMMT participants were able to undergo a magnetic resonance imaging study at baseline; their performance on baseline speech production measures is compared to that of the other two groups. No baseline differences were found between groups. A canonical correlation analysis (CCA) relating FA values for left- and right-hemisphere AF and FAT to speech production measures showed that FA of the left AF and right FAT were the largest contributors to the synthetic independent imaging-related variable. Change in percent syllable-initial consonants correct and percent syllable-insertion errors were the largest contributors to the synthetic dependent fluency-related variable. Regression analyses showed that FA values in left AF significantly predicted change in percent syllable-initial consonants correct, no FA variables significantly predicted change in percent items responded to, and FA of right FAT significantly predicted change in percent syllable-insertion errors. Results are consistent with previously identified roles for the AF in mediating bidirectional mapping between articulation and acoustics, and the FAT in its relationship to speech initiation and fluency. They further suggest a division of labor between the hemispheres, implicating the left hemisphere in accuracy of speech production and the right hemisphere in fluency in this population. Changes in response rate are interpreted as stemming from factors other than the integrity of these two fiber tracts. This study is the first to document the existence of a subgroup of MV children who experience increases in syllable-insertion errors as their speech develops in response to therapy.
Ruberu, S R; Langlois, G W; Masuda, M; Perera, S Kusum
2012-01-01
The receptor-binding assay (RBA) method for determining saxitoxin (STX) and its numerous analogues, which cause paralytic shellfish poisoning (PSP) in humans, was evaluated in a single laboratory study. Each step of the assay preparation procedure, including the performance of the multi-detector TopCount® instrument, was evaluated for its contribution to method variability. The overall inherent RBA variability was determined to be 17%. Variability within the 12 detectors was observed; however, there was no reproducible pattern in detector performance. This observed variability among detectors could be attributed to other factors, such as pipetting errors. In an attempt to reduce the number of plates rejected due to excessive variability in the method's quality control parameters, a statistical approach was evaluated using either Grubbs' test or the Student's t-test for rejecting outliers in the measurement of triplicate wells. This approach improved the ratio of accepted versus rejected plates, saving cost and time for rerunning the assay. However, the potential reduction in accuracy and the lack of improvement in precision suggest caution when using this approach. The current study has recommended an alternate quality control procedure for accepting or rejecting plates in place of the criteria currently used in the published assay, or the alternative of outlier testing. The recommended procedure involves the development of control charts to monitor the critical parameters identified in the published method (QC sample, EC₅₀, slope of calibration curve), with the addition of a fourth critical parameter which is the top value (100% binding) of the calibration curve.
Improved accuracy of intraocular lens power calculation with the Zeiss IOLMaster.
Olsen, Thomas
2007-02-01
This study aimed to demonstrate how the level of accuracy in intraocular lens (IOL) power calculation can be improved with optical biometry using partial optical coherence interferometry (PCI) (Zeiss IOLMaster) and current anterior chamber depth (ACD) prediction algorithms. Intraocular lens power in 461 consecutive cataract operations was calculated using both PCI and ultrasound, and the accuracy of the results of each technique was compared. To illustrate the importance of ACD prediction per se, predictions were calculated using both a recently published 5-variable method and the Haigis 2-variable method and the results compared. All calculations were optimized in retrospect to account for systematic errors, including IOL constants and other off-set errors. The average absolute IOL prediction error (observed minus expected refraction) was 0.65 D with ultrasound and 0.43 D with PCI using the 5-variable ACD prediction method (p < 0.00001). The number of predictions within +/- 0.5 D, +/- 1.0 D and +/- 2.0 D of the expected outcome was 62.5%, 92.4% and 99.9% with PCI, compared with 45.5%, 77.3% and 98.4% with ultrasound, respectively (p < 0.00001). The 2-variable ACD method resulted in an average error in PCI predictions of 0.46 D, which was significantly higher than the error in the 5-variable method (p < 0.001). The accuracy of IOL power calculation can be significantly improved using calibrated axial length readings obtained with PCI and modern IOL power calculation formulas incorporating the latest generation ACD prediction algorithms.
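The summary statistics reported above (mean absolute error and percent of eyes within fixed dioptric bands) are straightforward to compute; the sketch below shows the computation on simulated prediction errors whose spreads are only loosely inspired by, and not equal to, the paper's results.

```python
import numpy as np

# Hypothetical refraction prediction errors (observed minus expected, in D)
# for two biometry methods; the paper's raw per-eye data are not reproduced here.
rng = np.random.default_rng(3)
err_pci = rng.normal(0, 0.55, 461)
err_us = rng.normal(0, 0.82, 461)

def summarize(err, label):
    mae = np.mean(np.abs(err))
    within = [np.mean(np.abs(err) <= d) * 100 for d in (0.5, 1.0, 2.0)]
    print(f"{label}: mean abs error = {mae:.2f} D, within ±0.5/1.0/2.0 D = "
          f"{within[0]:.1f}/{within[1]:.1f}/{within[2]:.1f} %")

summarize(err_pci, "PCI")
summarize(err_us, "ultrasound")
```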
Taylor, Diane M; Chow, Fotini K; Delkash, Madjid; Imhoff, Paul T
2018-03-01
The short-term temporal variability of landfill methane emissions is not well understood due to uncertainty in measurement methods. Significant variability is seen over short-term measurement campaigns with the tracer dilution method (TDM), but this variability may be due in part to measurement error rather than fluctuations in the actual landfill emissions. In this study, landfill methane emissions and TDM-measured emissions are simulated over a real landfill in Delaware, USA using the Weather Research and Forecasting model (WRF) for two emissions scenarios. In the steady emissions scenario, a constant landfill emissions rate is prescribed at each model grid point on the surface of the landfill. In the unsteady emissions scenario, emissions are calculated at each time step as a function of the local surface wind speed, resulting in variable emissions over each 1.5-h measurement period. The simulation output is used to assess the standard deviation and percent error of the TDM-measured emissions. Eight measurement periods are simulated over two different days to look at different conditions. Results show that standard deviation of the TDM-measured emissions does not increase significantly from the steady emissions simulations to the unsteady emissions scenarios, indicating that the TDM may have inherent errors in its prediction of emissions fluctuations. Results also show that TDM error does not increase significantly from the steady to the unsteady emissions simulations. This indicates that introducing variability to the landfill emissions does not increase errors in the TDM at this site. Across all simulations, TDM errors range from -15% to 43%, consistent with the range of errors seen in previous TDM studies. Simulations indicate diurnal variations of methane emissions when wind effects are significant, which may be important when developing daily and annual emissions estimates from limited field data. Copyright © 2017 Elsevier Ltd. All rights reserved.
Landing Technique and Performance in Youth Athletes After a Single Injury-Prevention Program Session
Root, Hayley; Trojian, Thomas; Martinez, Jessica; Kraemer, William; DiStefano, Lindsay J.
2015-01-01
Context: Injury-prevention programs (IPPs) performed as season-long warm-ups improve injury rates, performance outcomes, and jump-landing technique. However, concerns regarding program adoption exist. Identifying the acute benefits of using an IPP compared with other warm-ups may encourage IPP adoption. Objective: To examine the immediate effects of 3 warm-up protocols (IPP, static warm-up [SWU], or dynamic warm-up [DWU]) on jump-landing technique and performance measures in youth athletes. Design: Randomized controlled clinical trial. Setting: Gymnasiums. Patients or Other Participants: Sixty male and 29 female athletes (age = 13 ± 2 years, height = 162.8 ± 12.6 cm, mass = 37.1 ± 13.5 kg) volunteered to participate in a single session. Intervention(s): Participants were stratified by age, sex, and sport and then were randomized into 1 protocol: IPP, SWU, or DWU. The IPP consisted of dynamic flexibility, strengthening, plyometric, and balance exercises and emphasized proper technique. The SWU consisted of jogging and lower extremity static stretching. The DWU consisted of dynamic lower extremity flexibility exercises. Participants were assessed for landing technique and performance measures immediately before (PRE) and after (POST) completing their warm-ups. Main Outcome Measure(s): One rater graded each jump-landing trial using the Landing Error Scoring System. Participants performed a vertical jump, long jump, shuttle run, and jump-landing task in randomized order. The averages of all jump-landing trials and performance variables were used to calculate 1 composite score for each variable at PRE and POST. Change scores were calculated (POST − PRE) for all measures. Separate 1-way (group) analyses of variance were conducted for each dependent variable (α < .05). Results: No differences were observed among groups for any performance measures (P > .05). The Landing Error Scoring System scores improved after the IPP (change = −0.40 ± 1.24 errors) compared with the DWU (0.27 ± 1.09 errors) and SWU (0.43 ± 1.35 errors; P = .04). Conclusions: An IPP did not impair sport performance and may have reduced injury risk, which supports the use of these programs before sport activity. PMID:26523663
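A minimal sketch of the change-score analysis described above, using SciPy's one-way ANOVA on simulated group data; the simulated scores only loosely follow the reported group means and standard deviations and are not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Hypothetical Landing Error Scoring System change scores (POST - PRE) per group
ipp = rng.normal(-0.40, 1.24, 30)
dwu = rng.normal(0.27, 1.09, 30)
swu = rng.normal(0.43, 1.35, 29)

f, p = stats.f_oneway(ipp, dwu, swu)
print(f"one-way ANOVA on change scores: F = {f:.2f}, p = {p:.3f}")
```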
Instrumental variables vs. grouping approach for reducing bias due to measurement error.
Batistatou, Evridiki; McNamee, Roseanne
2008-01-01
Attenuation of the exposure-response relationship due to exposure measurement error is often encountered in epidemiology. Given that error cannot be totally eliminated, bias correction methods of analysis are needed. Many methods require more than one exposure measurement per person to be made, but the "group mean OLS method," in which subjects are grouped into several a priori defined groups followed by ordinary least squares (OLS) regression on the group means, can be applied with one measurement. An alternative approach is to use an instrumental variable (IV) method in which both the single error-prone measure and an IV are used in IV analysis. In this paper we show that the "group mean OLS" estimator is equal to an IV estimator with the group mean used as IV, but that the variance estimators for the two methods are different. We derive a simple expression for the bias in the common estimator which is a simple function of group size, reliability and contrast of exposure between groups, and show that the bias can be very small when group size is large. We compare this method with a new proposal (group mean ranking method), also applicable with a single exposure measurement, in which the IV is the rank of the group means. When there are two independent exposure measurements per subject, we propose a new IV method (EVROS IV) and compare it with Carroll and Stefanski's (CS IV) proposal in which the second measure is used as an IV; the new IV estimator combines aspects of the "group mean" and "CS" strategies. All methods are evaluated in terms of bias, precision and root mean square error via simulations and a dataset from occupational epidemiology. The "group mean ranking method" does not offer much improvement over the "group mean method." Compared with the "CS" method, the "EVROS" method is less affected by low reliability of exposure. We conclude that the group IV methods we propose may provide a useful way to handle mismeasured exposures in epidemiology with or without replicate measurements. Our finding may also have implications for the use of aggregate variables in epidemiology to control for unmeasured confounding.
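A small simulation makes the central equivalence concrete: with a priori groups of equal size, OLS on the group means and an IV estimator that uses each subject's group mean of the error-prone exposure as the instrument give the same slope. The group structure, error variances, and group sizes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n_groups, per_group = 10, 50
true_beta = 0.5

# True exposure differs between a priori groups; the measurement carries error
group_level = np.repeat(np.linspace(0, 9, n_groups), per_group)
true_x = group_level + rng.normal(0, 1, n_groups * per_group)
x = true_x + rng.normal(0, 2, n_groups * per_group)       # error-prone measurement
y = true_beta * true_x + rng.normal(0, 1, n_groups * per_group)
groups = np.repeat(np.arange(n_groups), per_group)

# Naive OLS on individual measurements: attenuated toward zero
b_ols = np.polyfit(x, y, 1)[0]

# Group-mean OLS: regress group means of y on group means of x
gx = np.array([x[groups == g].mean() for g in range(n_groups)])
gy = np.array([y[groups == g].mean() for g in range(n_groups)])
b_group = np.polyfit(gx, gy, 1)[0]

# IV estimator with each subject's group mean of x as the instrument
z = gx[groups]
b_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"true = {true_beta}, OLS = {b_ols:.3f}, "
      f"group-mean OLS = {b_group:.3f}, group-mean IV = {b_iv:.3f}")
```

The last two estimates agree to floating-point precision here, consistent with the paper's result, while the naive OLS slope is visibly attenuated.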
Analysis of the new polarimeter for the Marshall Space Flight Center vector magnetograph
NASA Technical Reports Server (NTRS)
West, E. A.
1985-01-01
The magnetograph was upgraded in both electronic control of the magnetograph hardware and in the polarization optics. The problems associated with the original polarimeter were: (1) field-of-view errors associated with the natural birefringence of the KD*P crystals; (2) KD*P electrode failure due to the halfwave dc voltage required in one of the operational sequences; and (3) breakdown of the retardation properties of some KD*Ps when exposed to a zero to halfwave modulation (DC) scheme. The new polarimeter gives up the flexibility provided by two variable waveplates to adjust the retardances of the optics for a particular polarization measurement, but solves the problems associated with the original polarimeter. With the addition of the quartz quarterwave plates, a new optical alignment was developed to allow the remaining KD*P to correct for errors in the waveplates. The new optical alignment of the polarimeter is prescribed. The various sources of error, and how those errors are minimized so that the magnetograph can observe the transverse field in real time, are discussed.
Application of square-root filtering for spacecraft attitude control
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Schmidt, S. F.; Goka, T.
1978-01-01
Suitable digital algorithms are developed and tested for providing on-board precision attitude estimation and pointing control for potential use in the Landsat-D spacecraft. These algorithms provide pointing accuracy of better than 0.01 deg. To obtain the necessary precision with efficient software, a six state-variable square-root Kalman filter combines two star tracker measurements to update attitude estimates obtained from processing three gyro outputs. The validity of the estimation and control algorithms is established, and the sensitivity of their performance to various error sources and software parameters is investigated by detailed digital simulation. Spacecraft computer memory, cycle time, and accuracy requirements are estimated.
Sensorless Load Torque Estimation and Passivity Based Control of Buck Converter Fed DC Motor
Kumar, S. Ganesh; Thilagar, S. Hosimin
2015-01-01
Passivity based control of DC motor in sensorless configuration is proposed in this paper. Exact tracking error dynamics passive output feedback control is used for stabilizing the speed of Buck converter fed DC motor under various load torques such as constant type, fan type, propeller type, and unknown load torques. Under load conditions, sensorless online algebraic approach is proposed, and it is compared with sensorless reduced order observer approach. The former produces better response in estimating the load torque. Sensitivity analysis is also performed to select the appropriate control variables. Simulation and experimental results fully confirm the superiority of the proposed approach suggested in this paper. PMID:25893208
The Houdini Transformation: True, but Illusory.
Bentler, Peter M; Molenaar, Peter C M
2012-01-01
Molenaar (2003, 2011) showed that a common factor model could be transformed into an equivalent model without factors, involving only observed variables and residual errors. He called this invertible transformation the Houdini transformation. His derivation involved concepts from time series and state space theory. This paper verifies the Houdini transformation on a general latent variable model using algebraic methods. The results show that the Houdini transformation is illusory, in the sense that the Houdini transformed model remains a latent variable model. Contrary to common knowledge, a model that is a path model with only observed variables and residual errors may, in fact, be a latent variable model.
Elfering, A; Semmer, N K; Grebner, S
This study investigates the link between workplace stress and the 'non-singularity' of patient safety-related incidents in the hospital setting. Over a period of 2 working weeks 23 young nurses from 19 hospitals in Switzerland documented 314 daily stressful events using a self-observation method (pocket diaries); 62 events were related to patient safety. Familiarity of safety-related events and probability of recurrence, as indicators of non-singularity, were the dependent variables in multilevel regression analyses. Predictor variables were both situational (self-reported situational control, safety compliance) and chronic variables (job stressors such as time pressure, or concentration demands and job control). Chronic work characteristics were rated by trained observers. The most frequent safety-related stressful events included incomplete or incorrect documentation (40.3%), medication errors (near misses 21%), delays in delivery of patient care (9.7%), and violent patients (9.7%). Familiarity of events and probability of recurrence were significantly predicted by chronic job stressors and low job control in multilevel regression analyses. Job stressors and low job control were shown to be risk factors for patient safety. The results suggest that job redesign to enhance job control and decrease job stressors may be an important intervention to increase patient safety.
Alcohol consumption, beverage prices and measurement error.
Young, Douglas J; Bielinska-Kwapisz, Agnieszka
2003-03-01
Alcohol price data collected by the American Chamber of Commerce Researchers Association (ACCRA) have been widely used in studies of alcohol consumption and related behaviors. A number of problems with these data suggest that they contain substantial measurement error, which biases conventional statistical estimators toward a finding of little or no effect of prices on behavior. We test for measurement error, assess the magnitude of the bias and provide an alternative estimator that is likely to be superior. The study utilizes data on per capita alcohol consumption across U.S. states and the years 1982-1997. State and federal alcohol taxes are used as instrumental variables for prices. Formal tests strongly confirm the hypothesis of measurement error. Instrumental variable estimates of the price elasticity of demand range from -0.53 to -1.24. These estimates are substantially larger in absolute value than ordinary least squares estimates, which sometimes are not significantly different from zero or even positive. The ACCRA price data are substantially contaminated with measurement error, but using state and federal taxes as instrumental variables mitigates the problem.
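The attenuation-and-correction logic can be shown with a toy Wald/IV estimator in which a tax variable instruments a mismeasured price; every number below is invented, and the paper's panel estimators are considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 800

# Hypothetical state-year observations: taxes shift true prices; the recorded
# price (as with the ACCRA series) carries measurement error.
tax = rng.uniform(0.1, 2.0, n)
true_price = 1.0 + 1.5 * tax + rng.normal(0, 0.2, n)
price = true_price + rng.normal(0, 0.8, n)            # error-prone price measure
consumption = 2.0 - 0.8 * true_price + rng.normal(0, 0.1, n)

# OLS on the mismeasured price: attenuated toward zero
b_ols = np.polyfit(price, consumption, 1)[0]

# IV (Wald estimator with the tax as a single instrument)
b_iv = np.cov(tax, consumption)[0, 1] / np.cov(tax, price)[0, 1]
print(f"OLS = {b_ols:.2f}, IV = {b_iv:.2f}, true price effect = -0.80")
```

Because the measurement noise is unrelated to the tax, the IV ratio recovers the true price effect while the OLS slope is pulled toward zero, which is the pattern the study reports.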
Evaluation of a Teleform-based data collection system: a multi-center obesity research case study.
Jenkins, Todd M; Wilson Boyce, Tawny; Akers, Rachel; Andringa, Jennifer; Liu, Yanhong; Miller, Rosemary; Powers, Carolyn; Ralph Buncher, C
2014-06-01
Utilizing electronic data capture (EDC) systems in data collection and management allows automated validation programs to preemptively identify and correct data errors. For our multi-center, prospective study we chose to use TeleForm, paper-based data-capture software that uses recognition technology to create case report forms (CRFs) with similar functionality to EDC, including custom scripts to identify entry errors. We quantified the accuracy of the optimized system through a data audit of CRFs and the study database, examining selected critical variables for all subjects in the study, as well as an audit of all variables for 25 randomly selected subjects. Overall we found 6.7 errors per 10,000 fields, with similar estimates for critical (6.9/10,000) and non-critical (6.5/10,000) variables; these values fall below the acceptable quality threshold of 50 errors per 10,000 established by the Society for Clinical Data Management. However, error rates were found to vary widely by type of data field, with the highest rate observed with open text fields. Copyright © 2014 Elsevier Ltd. All rights reserved.
Five-wave-packet quantum error correction based on continuous-variable cluster entanglement
Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi
2015-01-01
Quantum error correction protects the quantum state against noise and decoherence in quantum communication and quantum computation, which enables one to perform fault-tolerant quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error, the original theoretical model of which was first proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous variable cluster entangled state of light are used for five encoding channels. In particular, in our encoding scheme the information of the input state is distributed over only three of the five channels, and thus any error appearing in the remaining two channels never affects the output state, i.e. the output quantum state is immune to errors in those two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states and the achieved fidelities of the output states are beyond the corresponding classical limit. PMID:26498395
Operator- and software-related post-experimental variability and source of error in 2-DE analysis.
Millioni, Renato; Puricelli, Lucia; Sbrignadello, Stefano; Iori, Elisabetta; Murphy, Ellen; Tessari, Paolo
2012-05-01
In the field of proteomics, several approaches have been developed for separating proteins and analyzing their differential relative abundance. One of the oldest, yet still widely used, is 2-DE. Despite the continuous advance of new methods, which are less demanding from a technical standpoint, 2-DE is still compelling and has a lot of potential for improvement. The overall variability which affects 2-DE includes biological, experimental, and post-experimental (software-related) variance. It is important to highlight how much of the total variability of this technique is due to post-experimental variability, which, so far, has been largely neglected. In this short review, we have focused on this topic and explained that post-experimental variability and source of error can be further divided into those which are software-dependent and those which are operator-dependent. We discuss these issues in detail, offering suggestions for reducing errors that may affect the quality of results, summarizing the advantages and drawbacks of each approach.
Fully anisotropic goal-oriented mesh adaptation for 3D steady Euler equations
NASA Astrophysics Data System (ADS)
Loseille, A.; Dervieux, A.; Alauzet, F.
2010-04-01
This paper studies the coupling between anisotropic mesh adaptation and goal-oriented error estimation. The former is very well suited to the control of the interpolation error. It is generally interpreted as a local geometric error estimate. On the contrary, the latter is preferred when studying approximation errors for PDEs. It generally involves non-local error contributions. Consequently, a full and strong coupling between both is hard to achieve due to this apparent incompatibility. This paper shows how to achieve this coupling in three steps. First, a new a priori error estimate is proved in a formal framework adapted to goal-oriented mesh adaptation for output functionals. This estimate is based on a careful analysis of the contributions of the implicit error and of the interpolation error. Second, the error estimate is applied to the set of steady compressible Euler equations which are solved by a stabilized Galerkin finite element discretization. A goal-oriented error estimation is derived. It involves the interpolation error of the Euler fluxes weighted by the gradient of the adjoint state associated with the observed functional. Third, rewritten in the continuous mesh framework, the previous estimate is minimized on the set of continuous meshes thanks to a calculus of variations. The optimal continuous mesh is then derived analytically. Thus, it can be used as a metric tensor field to drive the mesh adaptation. From a numerical point of view, this method is completely automatic, intrinsically anisotropic, and does not depend on any a priori choice of variables to perform the adaptation. 3D examples of steady flows around supersonic and transonic jets are presented to validate the current approach and to demonstrate its efficiency.
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
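The full approach uses DREAM-ZS sampling and nonparametric kernel error models, which are well beyond a short example; the toy sketch below uses a plain random-walk Metropolis sampler and a single assumed error-model basis function simply to show the idea of calibrating a physical parameter jointly with a structural-error term and a noise level. All functions, names, and values are illustrative, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(13)

# Toy "truth": response to a pumping stress depends on a conductivity-like
# parameter k plus a structural term the calibration model omits.
def true_system(k, pumping):
    return pumping / k + 0.3 * np.sqrt(pumping)

def simplified_model(k, pumping):
    return pumping / k                                   # structurally deficient model

pumping = np.linspace(1, 10, 20)
obs = true_system(2.0, pumping) + rng.normal(0, 0.05, pumping.size)

def log_post(theta):
    k, bias, log_sigma = theta
    if k <= 0:
        return -np.inf
    sigma = np.exp(log_sigma)
    # Data-driven error model: an assumed sqrt(pumping) basis with coefficient `bias`
    resid = obs - (simplified_model(k, pumping) + bias * np.sqrt(pumping))
    return -0.5 * np.sum(resid ** 2) / sigma ** 2 - obs.size * log_sigma

# Plain random-walk Metropolis (DREAM-ZS in the paper is far more efficient)
theta = np.array([1.0, 0.0, np.log(0.1)])
lp = log_post(theta)
chain = []
for _ in range(20000):
    proposal = theta + rng.normal(0, [0.05, 0.02, 0.05])
    lp_prop = log_post(proposal)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = proposal, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])                           # discard burn-in

print("posterior means (k, bias coefficient, sigma):",
      np.round([chain[:, 0].mean(), chain[:, 1].mean(),
                np.exp(chain[:, 2]).mean()], 3))
```

Including the error-model term keeps the structural deficit from being absorbed into the parameter k, which is the bias-reduction effect the abstract describes.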
Stannard, David L.; Rosenberry, Donald O.; Winter, Thomas C.; Parkhurst, Renee S.
2004-01-01
Micrometeorological measurements of evapotranspiration (ET) often are affected to some degree by errors arising from limited fetch. A recently developed model was used to estimate fetch-induced errors in Bowen-ratio energy-budget measurements of ET made at a small wetland with fetch-to-height ratios ranging from 34 to 49. Estimated errors were small, averaging −1.90%±0.59%. The small errors are attributed primarily to the near-zero lower sensor height, and the negative bias reflects the greater Bowen ratios of the drier surrounding upland. Some of the variables and parameters affecting the error were not measured, but instead are estimated. A sensitivity analysis indicates that the uncertainty arising from these estimates is small. In general, fetch-induced error in measured wetland ET increases with decreasing fetch-to-height ratio, with increasing aridity and with increasing atmospheric stability over the wetland. Occurrence of standing water at a site is likely to increase the appropriate time step of data integration, for a given level of accuracy. Occurrence of extensive open water can increase accuracy or decrease the required fetch by allowing the lower sensor to be placed at the water surface. If fetch is highly variable and fetch-induced errors are significant, the variables affecting fetch (e.g., wind direction, water level) need to be measured. Fetch-induced error during the non-growing season may be greater or smaller than during the growing season, depending on how seasonal changes affect both the wetland and upland at a site.
[Compatible biomass models of natural spruce (Picea asperata)].
Wang, Jin Chi; Deng, Hua Feng; Huang, Guo Sheng; Wang, Xue Jun; Zhang, Lu
2017-10-01
By using the nonlinear measurement error method, compatible tree volume and aboveground biomass equations were established based on the volume and biomass data of 150 sample trees of natural spruce (Picea asperata). Two approaches, controlling directly under total aboveground biomass and controlling jointly from level to level, were used to design the compatible system for the total aboveground biomass and the biomass of four components (stem, bark, branch and foliage), and the total aboveground biomass could be estimated independently or simultaneously in the system. The results showed that the R² values of the one-variable and bivariate compatible tree volume and aboveground biomass equations were all above 0.85, and the maximum value reached 0.99. The prediction performance of the volume equations improved significantly when tree height was included as a predictor, while the improvement was not significant in biomass estimation. For the compatible biomass systems, the one-variable model based on controlling jointly from level to level was better than the model controlling directly under total aboveground biomass, but the bivariate models of the two methods were similar. Comparing the fitting performance of the one-variable and bivariate compatible biomass models, the results showed that adding explanatory variables could significantly improve the fit for branch and foliage biomass, but had little effect on the other components. In addition, there was almost no difference between the two estimation methods based on this comparison.
Chen, David D; Pei, Laura; Chan, John S Y; Yan, Jin H
2012-10-01
Recent research using deliberate amplification of spatial errors to increase motor learning leads to the question of whether amplifying temporal errors may also facilitate learning. We investigated transfer effects caused by manipulating temporal constraints on learning a two-choice reaction time (CRT) task with varying degrees of stimulus-response compatibility. Thirty-four participants were randomly assigned to one of three groups and completed 120 trials during acquisition. For every fourth trial, one group was instructed to decrease CRT by 50 msec. relative to the previous trial and a second group was instructed to increase CRT by 50 msec. The third group (the control) was told not to change their responses. After a 5-min. break, participants completed a 40-trial no-feedback transfer test. A 40-trial delayed transfer test was administered 24 hours later. During acquisition, the Decreased Reaction Time group responded faster than the two other groups, but this group also made more errors than the other two groups. In the 5-min. delayed test (immediate transfer), the Decreased Reaction Time group had faster reaction times than the other two groups, while for the 24-hr. delayed test (delayed transfer), both the Decreased Reaction Time group and the Increased Reaction Time group had significantly faster reaction times than the control group. Analyses of error scores in the transfer tests revealed no significant group differences. Results were discussed with regard to the notion of practice variability and goal-setting benefits.
Wetherbee, Gregory A.; Latysh, Natalie E.; Burke, Kevin P.
2005-01-01
Six external quality-assurance programs were operated by the U.S. Geological Survey (USGS) External Quality-Assurance (QA) Project for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) from 2002 through 2003. Each program measured specific components of the overall error inherent in NADP/NTN wet-deposition measurements. The intersite-comparison program assessed the variability and bias of pH and specific conductance determinations made by NADP/NTN site operators twice per year with respect to accuracy goals. The percentage of site operators that met the pH accuracy goals decreased from 92.0 percent in spring 2002 to 86.3 percent in spring 2003. In these same four intersite-comparison studies, the percentage of site operators that met the accuracy goals for specific conductance ranged from 94.4 to 97.5 percent. The blind-audit program and the sample-handling evaluation (SHE) program evaluated the effects of routine sample handling, processing, and shipping on the chemistry of weekly NADP/NTN samples. The blind-audit program data indicated that the variability introduced by sample handling might be environmentally significant to data users for sodium, potassium, chloride, and hydrogen ion concentrations during 2002. In 2003, the blind-audit program was modified and replaced by the SHE program. The SHE program was designed to control the effects of laboratory-analysis variability. The 2003 SHE data had less overall variability than the 2002 blind-audit data. The SHE data indicated that sample handling buffers the pH of the precipitation samples and, in turn, results in slightly lower conductivity. Otherwise, the SHE data provided error estimates that were not environmentally significant to data users. The field-audit program was designed to evaluate the effects of onsite exposure, sample handling, and shipping on the chemistry of NADP/NTN precipitation samples. Field-audit results indicated that exposure of NADP/NTN wet-deposition samples to onsite conditions tended to neutralize the acidity of the samples by less than 1.0 microequivalent per liter. Onsite exposure of the sampling bucket appeared to slightly increase the concentration of most of the analytes but not to an extent that was environmentally significant to NADP data users. An interlaboratory-comparison program was used to estimate the analytical variability and bias of the NADP Central Analytical Laboratory (CAL) during 2002-03. Bias was identified in the CAL data for calcium, magnesium, sodium, potassium, ammonium, chloride, nitrate, sulfate, hydrogen ion, and specific conductance, but the absolute value of the bias was less than analytical minimum detection limits for all constituents except magnesium, nitrate, sulfate, and specific conductance. Control charts showed that CAL results were within statistical control approximately 90 percent of the time. Data for the analysis of ultrapure deionized-water samples indicated that CAL did not have problems with laboratory contamination. During 2002-03, the overall variability of data from the NADP/NTN precipitation-monitoring system was estimated using data from three collocated monitoring sites. Measurement differences of constituent concentration and deposition for paired samples from the collocated samplers were evaluated to compute error terms. The medians of the absolute percentage errors (MAEs) for the paired samples generally were larger for cations (approximately 8 to 50 percent) than for anions (approximately 3 to 33 percent). 
MAEs were approximately 16 to 30 percent for hydrogen-ion concentration, less than 10 percent for specific conductance, less than 5 percent for sample volume, and less than 8 percent for precipitation depth. The variability attributed to each component of the sample-collection and analysis processes, as estimated by USGS quality-assurance programs, varied among analytes. Laboratory analysis variability accounted for approximately 2 percent of the
End-to-end Coronagraphic Modeling Including a Low-order Wavefront Sensor
NASA Technical Reports Server (NTRS)
Krist, John E.; Trauger, John T.; Unwin, Stephen C.; Traub, Wesley A.
2012-01-01
To evaluate space-based coronagraphic techniques, end-to-end modeling is necessary to simulate realistic fields containing speckles caused by wavefront errors. Real systems will suffer from pointing errors and thermal and motion-induced mechanical stresses that introduce time-variable wavefront aberrations that can reduce the field contrast. A low-order wavefront sensor (LOWFS) is needed to measure these changes at a sufficiently high rate to maintain the contrast level during observations. We implement here a LOWFS and corresponding low-order wavefront control subsystem (LOWFCS) in end-to-end models of a space-based coronagraph. Our goal is to be able to accurately duplicate the effect of the LOWFS+LOWFCS without explicitly evaluating the end-to-end model at numerous time steps.
Sinusoidal visuomotor tracking: intermittent servo-control or coupled oscillations?
Russell, D M; Sternad, D
2001-12-01
In visuomotor tasks that involve accuracy demands, small directional changes in the trajectories have been taken as evidence of feedback-based error corrections. In the present study variability, or intermittency, in visuomanual tracking of sinusoidal targets was investigated. Two lines of analyses were pursued: First, the hypothesis that humans fundamentally act as intermittent servo-controllers was re-examined, probing the question of whether discontinuities in the movement trajectory directly imply intermittent control. Second, an alternative hypothesis was evaluated: that rhythmic tracking movements are generated by entrainment between the oscillations of the target and the actor, such that intermittency expresses the degree of stability. In 2 experiments, participants (N = 6 in each experiment) swung 1 of 2 different hand-held pendulums, tracking a rhythmic target that oscillated at different frequencies with a constant amplitude. In 1 line of analyses, the authors tested the intermittency hypothesis by using the typical kinematic error measures and spectral analysis. In a 2nd line, they examined relative phase and its variability, following analyses of rhythmic interlimb coordination. The results showed that visually guided corrective processes play a role, especially for slow movements. Intermittency, assessed as frequency and power components of the movement trajectory, was found to change as a function of both target frequency and the manipulandum's inertia. Support for entrainment was found in conditions in which task frequency was identical to or higher than the effector's eigenfrequency. The results suggest that it is the symmetry between task and effector that determines which behavioral regime is dominant.
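Relative phase and its variability, as used in the second line of analysis, can be computed from the analytic signal of each time series; the sketch below applies SciPy's Hilbert transform to synthetic sinusoids with an assumed lag and phase jitter, not to the pendulum data from the study.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(2)

# Synthetic 1 Hz target and a tracking movement with a small lag and phase jitter
fs, dur, f = 200, 20.0, 1.0
t = np.arange(0, dur, 1 / fs)
jitter = np.cumsum(rng.normal(0, 0.002, t.size))          # slow phase wandering
target = np.sin(2 * np.pi * f * t)
movement = np.sin(2 * np.pi * f * t - 0.4 + jitter)

# Continuous relative phase from the analytic signal of each series
phi_target = np.unwrap(np.angle(hilbert(target)))
phi_movement = np.unwrap(np.angle(hilbert(movement)))
rel_phase = np.degrees(phi_movement - phi_target)

print(f"mean relative phase = {rel_phase.mean():.1f} deg, "
      f"SD (coordination variability) = {rel_phase.std():.1f} deg")
```

A stable mean relative phase with low variability is the signature of entrainment, whereas frequent directional corrections would instead show up as structure in the trajectory's higher-frequency components.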
Williams, Camille K; Grierson, Lawrence E M; Carnahan, Heather
2011-08-01
A link between affect and action has been supported by the discovery that threat information is prioritized through an action-centred pathway--the dorsal visual stream. Magnocellular afferents, which originate from the retina and project to dorsal stream structures, are suppressed by exposure to diffuse red light, which diminishes humans' perception of threat-based images. In order to explore the role of colour in the relationship between affect and action, participants donned different pairs of coloured glasses (red, yellow, green, blue and clear) and completed Positive and Negative Affect Scale questionnaires as well as a series of target-directed aiming movements. Analyses of affect scores revealed a significant main effect for affect valence and a significant interaction between colour and valence: perceived positive affect was significantly smaller for the red condition. Kinematic analyses of variable error in the primary movement direction and Pearson correlation analyses between the displacements travelled prior to and following peak velocity indicated reduced accuracy and application of online control processes while wearing red glasses. Variable error of aiming was also positively and significantly correlated with negative affect scores under the red condition. These results suggest that only red light modulates the affect-action link by suppressing magnocellular activity, which disrupts visual processing for movement control. Furthermore, previous research examining the effect of the colour red on psychomotor tasks and perceptual acceleration of threat-based imagery suggest that stimulus-driven motor performance tasks requiring online control may be particularly susceptible to this effect.
NASA Technical Reports Server (NTRS)
Groves, Curtis Edward
2014-01-01
Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions. The method accounts for all uncertainty terms from both numerical and input variables. Objective three is to compile a table of uncertainty parameters that could be used to estimate the error in a Computational Fluid Dynamics model of the Environmental Control System/spacecraft system. Previous studies have looked at the uncertainty in a Computational Fluid Dynamics model for a single output variable at a single point, for example the re-attachment length of a backward facing step. For the flow regime being analyzed (turbulent, three-dimensional, incompressible), the error at a single point can propagate into the solution both via flow physics and numerical methods. Calculating the uncertainty in using Computational Fluid Dynamics to accurately predict airflow speeds around encapsulated spacecraft is imperative to the success of future missions.
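As a concrete illustration of how three systematically refined grids can be turned into an error band, the sketch below applies Richardson extrapolation and a grid convergence index (GCI); this is a generic, hedged example rather than the cited methodology itself, and the solution values, refinement ratio, and safety factor are assumed.

```python
import math

def gci_from_three_grids(f_fine, f_medium, f_coarse, r, safety_factor=1.25):
    """Observed order of accuracy and grid-convergence-index error band
    from solutions on three systematically refined grids (refinement ratio r)."""
    # Observed order of accuracy via Richardson extrapolation
    p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
    # Relative difference between the two finest grids
    e_fine = abs((f_medium - f_fine) / f_fine)
    # GCI: an uncertainty band around the fine-grid solution
    gci_fine = safety_factor * e_fine / (r ** p - 1.0)
    return p, gci_fine

# Hypothetical peak airflow speeds (m/s) on fine, medium, and coarse grids
p, gci = gci_from_three_grids(9.8, 10.1, 10.9, r=2.0)
print(f"observed order ~ {p:.2f}, fine-grid uncertainty ~ {100 * gci:.1f}%")
```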
NASA Technical Reports Server (NTRS)
Groves, Curtis E.
2013-01-01
Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This proposal describes an approach to validate the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft. The research described here is absolutely cutting edge. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. The proposed research project includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions. The method accounts for all uncertainty terms from both numerical and input variables. Objective three is to compile a table of uncertainty parameters that could be used to estimate the error in a Computational Fluid Dynamics model of the Environmental Control System/spacecraft system. Previous studies have looked at the uncertainty in a Computational Fluid Dynamics model for a single output variable at a single point, for example the re-attachment length of a backward facing step. To date, the author is the only person to look at the uncertainty in the entire computational domain. For the flow regime being analyzed (turbulent, three-dimensional, incompressible), the error at a single point can propagate into the solution both via flow physics and numerical methods. Calculating the uncertainty in using Computational Fluid Dynamics to accurately predict airflow speeds around encapsulated spacecraft is imperative to the success of future missions.
Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE
NASA Astrophysics Data System (ADS)
Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.
2015-12-01
Groundwater change is difficult to monitor over large scales. One of the most successful approaches is in the remote sensing of time-variable gravity using NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of snow water, surface water and soil moisture storage states used to isolate a groundwater component. A major weakness in these approaches is inconsistency: different studies have used different sources of primary and ancillary data, and may achieve different results based on alternative choices in these approximations. In this study, we present two cases of groundwater change estimation in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater variations should demonstrate less variability than the overlying soil moisture state does, as groundwater has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time variable errors vs. time constant errors, across-scale errors v. across-model errors, and error spectral content (across scales and across model). More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing for more fair use in management applications and for better integration of GRACE-based measurements with observations from other sources.
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar
This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The difficulty is that most of the samples, given that the Gaussian variable is zero mean (which is de facto the case), tend to have small magnitudes and are easily disturbed by noise. Transmission over longer distances increases the losses, corresponding to a lower effective Signal-to-Noise Ratio (SNR) and exacerbating the problem. Quantization over higher dimensions is advantageous since it allows for fractional bit-per-sample accuracy, which may be needed at very low SNR conditions whereby the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use Permutation Modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples, and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in Reverse Reconciliation (RR) over the public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward Error Correction (FEC) coding can be used by Bob on each subset of samples with similar sign error probability to aid Alice in error correction. This can be done for different subsets of samples with similar sign error probabilities, leading to an Unequal Error Protection (UEP) coding paradigm.
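A minimal sketch of the magnitude-dependent sign-error behaviour described above, assuming Bob observes Alice's Gaussian samples through an additive white Gaussian noise channel; the SNR, sample count, and quartile binning are illustrative assumptions, and permutation modulation itself is not implemented here.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)

snr_db = -3.0                                  # low-SNR regime typical of long links
snr = 10 ** (snr_db / 10)
sigma_n = 1.0 / sqrt(snr)                      # unit-variance signal, noise scaled to SNR

x = rng.normal(0.0, 1.0, 100_000)              # Alice's Gaussian samples
y = x + rng.normal(0.0, sigma_n, x.size)       # Bob's noisy observations

def sign_error_prob(mag):
    """P(sign flip) for a sample of magnitude mag: Q(mag / sigma_n)."""
    return 0.5 * erfc(mag / (sigma_n * sqrt(2.0)))

# Bin samples by magnitude; samples in one bin share a similar sign-error
# probability and could be protected with a common FEC rate (the UEP idea).
edges = np.quantile(np.abs(x), [0.0, 0.25, 0.5, 0.75, 1.0])
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (np.abs(x) >= lo) & (np.abs(x) <= hi)
    empirical = np.mean(np.sign(y[sel]) != np.sign(x[sel]))
    theory = np.mean([sign_error_prob(m) for m in np.abs(x[sel])])
    print(f"|x| in [{lo:.2f}, {hi:.2f}]: empirical {empirical:.3f}, theory {theory:.3f}")
```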
An improved VSS NLMS algorithm for active noise cancellation
NASA Astrophysics Data System (ADS)
Sun, Yunzhuo; Wang, Mingjiang; Han, Yufei; Zhang, Congyan
2017-08-01
In this paper, an improved variable step size NLMS algorithm is proposed. NLMS has a fast convergence rate and low steady-state error compared to other traditional adaptive filtering algorithms, but there is a trade-off between convergence speed and steady-state error that limits the performance of the NLMS algorithm. We propose a new variable step size NLMS algorithm that dynamically changes the step size according to the current error and the iteration count. The proposed algorithm has a simple formulation and easily tuned parameters, and effectively resolves this trade-off in NLMS. The simulation results show that the proposed algorithm has good tracking ability, fast convergence rate and low steady-state error simultaneously.
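The abstract does not give the step-size update rule itself; the sketch below shows a generic normalized LMS filter whose step size shrinks as the smoothed error power decays, purely to illustrate the convergence-speed versus steady-state-error trade-off being discussed. The step-size law and all constants are assumptions, not the authors' algorithm.

```python
import numpy as np

def vss_nlms(x, d, num_taps=16, mu_max=1.0, mu_min=0.05, eps=1e-6):
    """Normalized LMS with an illustrative variable step size.

    x : reference (noise) signal; d : desired signal (noise passed through an unknown path).
    Returns the error signal e, which is the cleaned output in active noise cancellation.
    """
    w = np.zeros(num_taps)
    e = np.zeros(len(x))
    err_power = 1.0                          # running estimate of the error power
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]  # x[n], x[n-1], ..., x[n-num_taps+1]
        e[n] = d[n] - w @ u
        err_power = 0.99 * err_power + 0.01 * e[n] ** 2
        # Large steps while the error power is high (fast convergence),
        # small steps once it has decayed (low steady-state error).
        mu = mu_min + (mu_max - mu_min) * err_power / (err_power + 1.0)
        w += mu * e[n] * u / (eps + u @ u)
    return e, w

# Toy setup: d is the reference noise filtered by a short unknown path plus a weak signal.
rng = np.random.default_rng(1)
x = rng.normal(size=20_000)
d = np.convolve(x, [0.5, -0.3, 0.1])[:x.size] + 0.01 * rng.normal(size=x.size)
e, w = vss_nlms(x, d)
print("residual power:", float(np.mean(e[-2000:] ** 2)))
```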
The Influence of Item Calibration Error on Variable-Length Computerized Adaptive Testing
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2013-01-01
Variable-length computerized adaptive testing (VL-CAT) allows both items and test length to be "tailored" to examinees, thereby achieving the measurement goal (e.g., scoring precision or classification) with as few items as possible. Several popular test termination rules depend on the standard error of the ability estimate, which in turn depends…
Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters
ERIC Educational Resources Information Center
Hoshino, Takahiro; Shigemasu, Kazuo
2008-01-01
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
Research of digital controlled DC/DC converter based on STC12C5410AD
NASA Astrophysics Data System (ADS)
Chen, Dan-Jiang; Jin, Xin; Xiao, Zhi-Hong
2010-02-01
In order to study the application of digital control technology to DC/DC converters, the principle of the increment-mode PID control algorithm was analyzed in this paper. Then, a single-chip microcontroller (STC12C5410AD) was introduced with its internal resources and characteristics; the PID control algorithm can be implemented easily on it. The output of the PID control was used to change the value of a variable equal to 255 times the duty cycle, which reduced the calculation error. The validity of the presented algorithm was verified by an experiment on a buck DC/DC converter. The experimental results indicated that the output voltage of the buck converter is stable with low ripple.
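The increment-mode (velocity-form) PID law can be written directly from three consecutive errors; the sketch below is a generic illustration in which the gains, the 0-255 duty-cycle scaling, and the toy plant are assumed values, not those of the cited converter.

```python
class IncrementalPID:
    """Increment-mode PID: only the change in the output is computed each cycle."""
    def __init__(self, kp, ki, kd, out_min=0.0, out_max=255.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0        # error one step back
        self.e2 = 0.0        # error two steps back
        self.out = 0.0       # accumulated output on a 0..255 scale (255 x duty cycle)
        self.out_min, self.out_max = out_min, out_max

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        # Velocity form: du = Kp*(e - e1) + Ki*e + Kd*(e - 2*e1 + e2)
        du = (self.kp * (e - self.e1) + self.ki * e
              + self.kd * (e - 2.0 * self.e1 + self.e2))
        self.out = min(self.out_max, max(self.out_min, self.out + du))
        self.e2, self.e1 = self.e1, e
        return self.out

# Toy usage: regulate a crude first-order buck-converter model toward 5.0 V.
pid = IncrementalPID(kp=8.0, ki=2.0, kd=1.0)
v = 0.0
for _ in range(200):
    duty = pid.update(5.0, v) / 255.0      # convert the scaled variable back to 0..1
    v += 0.2 * (12.0 * duty - v)           # 12 V input, first-order response
print(f"steady-state output ~ {v:.2f} V")
```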
What to use to express the variability of data: Standard deviation or standard error of mean?
Barde, Mohini P; Barde, Prajakt J
2012-07-01
Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and the Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. SEM quantifies uncertainty in the estimate of the mean, whereas SD indicates dispersion of the data from the mean. As readers are generally interested in knowing the variability within the sample, descriptive data should be summarized with the SD. Use of the SEM should be limited to computing confidence intervals, which measure the precision of the population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
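A short numerical illustration of the distinction drawn above (the sample values are made up): the SD describes the spread of individual observations, while the SEM = SD/sqrt(n) describes the precision of the sample mean and shrinks as n grows.

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.normal(loc=100.0, scale=15.0, size=50)   # hypothetical biomarker values

n = sample.size
mean = sample.mean()
sd = sample.std(ddof=1)          # describes variability of individual subjects
sem = sd / np.sqrt(n)            # describes precision of the estimated mean
ci95 = (mean - 1.96 * sem, mean + 1.96 * sem)

print(f"mean = {mean:.1f}, SD = {sd:.1f}  (use to summarize the sample)")
print(f"SEM = {sem:.2f}, 95% CI = ({ci95[0]:.1f}, {ci95[1]:.1f})  (use for the mean)")
```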
Implementation of a self-controlling heater
NASA Technical Reports Server (NTRS)
Strange, M. G.
1973-01-01
Temperature control of radiation sensors, targets, and other critical components is a common requirement in modern scientific instruments. Conventional control systems use a heater and a temperature sensor mounted on the body to be controlled. For proportional control, the sensor provides feedback to circuitry which drives the heater with an amount of power proportional to the temperature error. It is impractical or undesirable to mount both a heater and a sensor on certain components such as ultra-small parts or thin filaments. In principle, a variable current through the element is used for heating, and the change in voltage drop due to the element's temperature coefficient is separated and used to monitor or control its own temperature. Since there are no thermal propagation delays between heater and sensor, such control systems are exceptionally stable.
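A small numerical illustration of the self-sensing principle described above, with all constants assumed rather than taken from the report: the element's resistance, inferred from the drive current and the measured voltage drop, yields its temperature through the temperature coefficient of resistance.

```python
def element_temperature(v_drop, i_drive, r0, alpha, t0=25.0):
    """Infer a heater element's own temperature from its voltage drop.

    r0    : resistance (ohms) at reference temperature t0 (deg C)
    alpha : temperature coefficient of resistance (1/deg C)
    """
    r = v_drop / i_drive                    # present resistance from Ohm's law
    return t0 + (r / r0 - 1.0) / alpha      # invert R = R0 * (1 + alpha * (T - t0))

# Hypothetical platinum-like filament: 100 ohm at 25 C, alpha = 0.0039 per deg C
print(round(element_temperature(v_drop=2.34, i_drive=0.020, r0=100.0, alpha=0.0039), 1))
```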
The development and evaluation of accident predictive models
NASA Astrophysics Data System (ADS)
Maleck, T. L.
1980-12-01
A mathematical model is developed that predicts the incremental change in the dependent variables (accident types) resulting from changes in the independent variables. The end product is a tool for estimating the expected number and type of accidents for a given highway segment. The data segments (accidents) are separated into exclusive groups via a branching process, and variance is further reduced using stepwise multiple regression. The standard error of the estimate is calculated for each model. The dependent variables are the frequency, density, and rate of 18 types of accidents. The independent variables include district, county, highway geometry, land use, type of zone, speed limit, signal code, type of intersection, number of intersection legs, number of turn lanes, left-turn control, all-red interval, average daily traffic, and outlier code. Models for nonintersectional accidents did not fit or validate as well as models for intersectional accidents.
Multiple regression for physiological data analysis: the problem of multicollinearity.
Slinker, B K; Glantz, S A
1985-07-01
Multiple linear regression, in which several predictor variables are related to a response variable, is a powerful statistical tool for gaining quantitative insight into complex in vivo physiological systems. For these insights to be correct, all predictor variables must be uncorrelated. However, in many physiological experiments the predictor variables cannot be precisely controlled and thus change in parallel (i.e., they are highly correlated). There is a redundancy of information about the response, a situation called multicollinearity, that leads to numerical problems in estimating the parameters in regression equations; the parameters are often of incorrect magnitude or sign or have large standard errors. Although multicollinearity can be avoided with good experimental design, not all interesting physiological questions can be studied without encountering multicollinearity. In these cases various ad hoc procedures have been proposed to mitigate multicollinearity. Although many of these procedures are controversial, they can be helpful in applying multiple linear regression to some physiological problems.
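One common way to quantify the redundancy described above is the variance inflation factor, VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing predictor j on the remaining predictors; the sketch below uses simulated data purely for illustration.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of the predictor matrix X."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.1 * rng.normal(size=200)    # changes in parallel with x1 (nearly collinear)
x3 = rng.normal(size=200)               # independent predictor
X = np.column_stack([x1, x2, x3])
print("VIF:", np.round(vif(X), 1))      # large values for x1 and x2 flag multicollinearity
```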
The importance of environmental variability and management control error to optimal harvest policies
Hunter, C.M.; Runge, M.C.
2004-01-01
State-dependent strategies (SDSs) are the most general form of harvest policy because they allow the harvest rate to depend, without constraint, on the state of the system. State-dependent strategies that provide an optimal harvest rate for any system state can be calculated, and stochasticity can be appropriately accommodated in this optimization. Stochasticity poses 2 challenges to harvest policies: (1) the population will never be at the equilibrium state; and (2) stochasticity induces uncertainty about future states. We investigated the effects of 2 types of stochasticity, environmental variability and management control error, on SDS harvest policies for a white-tailed deer (Odocoileus virginianus) model, and contrasted these with a harvest policy based on maximum sustainable yield (MSY). Increasing stochasticity resulted in more conservative SDSs; that is, higher population densities were required to support the same harvest rate, but these effects were generally small. As stochastic effects increased, SDSs performed much better than MSY. Both deterministic and stochastic SDSs maintained maximum mean annual harvest yield (AHY) and optimal equilibrium population size (Neq) in a stochastic environment, whereas an MSY policy could not. We suggest 3 rules of thumb for harvest management of long-lived vertebrates in stochastic systems: (1) an SDS is advantageous over an MSY policy, (2) using an SDS rather than an MSY is more important than whether a deterministic or stochastic SDS is used, and (3) for SDSs, rankings of the variability in management outcomes (e.g., harvest yield) resulting from parameter stochasticity can be predicted by rankings of the deterministic elasticities.
Modeling the effects of high-G stress on pilots in a tracking task
NASA Technical Reports Server (NTRS)
Korn, J.; Kleinman, D. L.
1978-01-01
Air-to-air tracking experiments were conducted at the Aerospace Medical Research Laboratories using both fixed- and moving-base dynamic environment simulators. The data obtained, which include the longitudinal error of a simulated air-to-air tracking task as well as other auxiliary variables, were analyzed using an ensemble averaging method. In conjunction with these experiments, the optimal control model is applied to model a human operator under high-G stress.
Wang, Minlin; Ren, Xuemei; Chen, Qiang
2018-01-01
The multi-motor servomechanism (MMS) is a multi-variable, high coupling and nonlinear system, which makes the controller design challenging. In this paper, an adaptive robust H-infinity control scheme is proposed to achieve both the load tracking and multi-motor synchronization of MMS. This control scheme consists of two parts: a robust tracking controller and a distributed synchronization controller. The robust tracking controller is constructed by incorporating a neural network (NN) K-filter observer into the dynamic surface control, while the distributed synchronization controller is designed by combining the mean deviation coupling control strategy with the distributed technique. The proposed control scheme has several merits: 1) by using the mean deviation coupling synchronization control strategy, the tracking controller and the synchronization controller can be designed individually without any coupling problem; 2) the immeasurable states and unknown nonlinearities are handled by a NN K-filter observer, where the number of NN weights is largely reduced by using the minimal learning parameter technique; 3) the H-infinity performances of tracking error and synchronization error are guaranteed by introducing a robust term into the tracking controller and the synchronization controller, respectively. The stabilities of the tracking and synchronization control systems are analyzed by the Lyapunov theory. Simulation and experimental results based on a four-motor servomechanism are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Laforest, Martin
Quantum information processing has been the subject of countless discoveries since the early 1990s. It is believed to be the way of the future for computation: using quantum systems permits one to perform computation exponentially faster than on a regular classical computer. Unfortunately, quantum systems that are not isolated do not behave well. They tend to lose their quantum nature due to the presence of the environment. If key information is known about the noise present in the system, methods such as quantum error correction have been developed in order to reduce the errors introduced by the environment during a given quantum computation. In order to harness the quantum world and implement the theoretical ideas of quantum information processing and quantum error correction, it is imperative to understand and quantify the noise present in the quantum processor and benchmark the quality of the control over the qubits. Usual techniques to estimate the noise or the control are based on quantum process tomography (QPT), which, unfortunately, demands an exponential amount of resources. This thesis presents work towards the characterization of noisy processes in an efficient manner. The protocols are developed from a purely abstract setting with no system-dependent variables. To circumvent the exponential nature of quantum process tomography, three different efficient protocols are proposed and experimentally verified. The first protocol uses the idea of quantum error correction to extract relevant parameters about a given noise model, namely the correlation between the dephasing of two qubits. Following that is a protocol using randomization and symmetrization to extract the probability that a given number of qubits are simultaneously corrupted in a quantum memory, regardless of the specifics of the error and which qubits are affected. Finally, a last protocol, still using randomization ideas, is developed to estimate the average fidelity per computational gate for single- and multi-qubit systems. Even though liquid state NMR is argued to be unsuitable for scalable quantum information processing, it remains the best test-bed system to experimentally implement, verify and develop protocols aimed at increasing the control over general quantum information processors. For this reason, all the protocols described in this thesis have been implemented in liquid state NMR, which then led to further development of control and analysis techniques.
Flanagan, Emma C; Wong, Stephanie; Dutt, Aparna; Tu, Sicong; Bertoux, Maxime; Irish, Muireann; Piguet, Olivier; Rao, Sulakshana; Hodges, John R; Ghosh, Amitabha; Hornberger, Michael
2016-01-01
Episodic memory recall processes in Alzheimer's disease (AD) and behavioral variant frontotemporal dementia (bvFTD) can be similarly impaired, whereas recognition performance is more variable. A potential reason for this variability could be false-positive errors made on recognition trials and whether these errors are due to amnesia per se or a general over-endorsement of recognition items regardless of memory. The current study addressed this issue by analysing recognition performance on the Rey Auditory Verbal Learning Test (RAVLT) in 39 bvFTD, 77 AD and 61 control participants from two centers (India, Australia), as well as disinhibition assessed using the Hayling test. Whereas both AD and bvFTD patients were comparably impaired on delayed recall, bvFTD patients showed intact recognition performance in terms of the number of correct hits. However, both patient groups endorsed significantly more false-positives than controls, and bvFTD and AD patients scored equally poorly on a sensitivity index (correct hits-false-positives). Furthermore, measures of disinhibition were significantly associated with false positives in both groups, with a stronger relationship with false-positives in bvFTD. Voxel-based morphometry analyses revealed similar neural correlates of false positive endorsement across bvFTD and AD, with both patient groups showing involvement of prefrontal and Papez circuitry regions, such as medial temporal and thalamic regions, and a DTI analysis detected an emerging but non-significant trend between false positives and decreased fornix integrity in bvFTD only. These findings suggest that false-positive errors on recognition tests relate to similar mechanisms in bvFTD and AD, reflecting deficits in episodic memory processes and disinhibition. These findings highlight that current memory tests are not sufficient to accurately distinguish between bvFTD and AD patients.
Ni, Hsing-Chang; Hwang Gu, Shoou-Lian; Lin, Hsiang-Yuan; Lin, Yu-Ju; Yang, Li-Kuang; Huang, Hui-Chun; Gau, Susan Shur-Fen
2016-05-01
Intra-individual variability in reaction time (IIV-RT) is common in individuals with attention-deficit/hyperactivity disorder (ADHD). It can be improved by stimulants. However, the effects of atomoxetine on IIV-RT are inconclusive. We aimed to investigate the effects of atomoxetine on IIV-RT, and directly compared its efficacy with methylphenidate in adults with ADHD. An 8-10 week, open-label, head-to-head, randomized clinical trial was conducted in 52 drug-naïve adults with ADHD, who were randomly assigned to two treatment groups: immediate-release methylphenidate (n=26) thrice daily (10-20 mg per dose) and atomoxetine once daily (n=26) (0.5-1.2 mg/kg/day). IIV-RT, derived from the Conners' continuous performance test (CCPT), was represented by the Gaussian (reaction time standard error, RTSE) and ex-Gaussian models (sigma and tau). Other neuropsychological functions, including response errors and mean of reaction time, were also measured. Participants received CCPT assessments at baseline and week 8-10 (60.4±6.3 days). We found comparable improvements in performances of CCPT between the immediate-release methylphenidate- and atomoxetine-treated groups. Both medications significantly improved IIV-RT in terms of reducing tau values with comparable efficacy. In addition, both medications significantly improved inhibitory control by reducing commission errors. Our results provide evidence to support that atomoxetine could improve IIV-RT and inhibitory control, of comparable efficacy with immediate-release methylphenidate, in drug-naïve adults with ADHD. Shared and unique mechanisms underpinning these medication effects on IIV-RT awaits further investigation. © The Author(s) 2016.
Wang, Ching-Yun; Song, Xiao
2017-01-01
Biomedical researchers are often interested in estimating the effect of an environmental exposure in relation to a chronic disease endpoint. However, the exposure variable of interest may be measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies an additive measurement error model, but it may not have repeated measurements. The subset in which the surrogate variables are available is called a calibration sample. In addition to the surrogate variables that are available among the subjects in the calibration sample, we consider the situation when there is an instrumental variable available for all study subjects. An instrumental variable is correlated with the unobserved true exposure variable, and hence can be useful in the estimation of the regression coefficients. In this paper, we propose a nonparametric method for Cox regression using the observed data from the whole cohort. The nonparametric estimator is the best linear combination of a nonparametric correction estimator from the calibration sample and the difference of the naive estimators from the calibration sample and the whole cohort. The asymptotic distribution is derived, and the finite sample performance of the proposed estimator is examined via intensive simulation studies. The methods are applied to the Nutritional Biomarkers Study of the Women’s Health Initiative. PMID:27546625
The Impact of Variable Wind Shear Coefficients on Risk Reduction of Wind Energy Projects
Thomson, Allan; Yoonesi, Behrang; McNutt, Josiah
2016-01-01
Estimation of wind speed at proposed hub heights is typically achieved using a wind shear exponent or wind shear coefficient (WSC), which describes the variation in wind speed as a function of height. The WSC is subject to temporal variation at low and high frequencies, ranging from diurnal and seasonal variations to disturbance caused by weather patterns; however, in many cases it is assumed that the WSC remains constant. This assumption creates significant error in resource assessment, increasing uncertainty in projects and potentially significantly impacting the ability to control grid-connected wind generators. This paper contributes to the body of knowledge relating to the evaluation and assessment of wind speed, with particular emphasis on the development of techniques to improve the accuracy of estimated wind speed above measurement height. It presents an evaluation of the use of a variable wind shear coefficient (VWSC) methodology based on a distribution of wind shear coefficients which have been implemented in real time. The results indicate that a VWSC provides a more accurate estimate of wind at hub height, with a 41% to 4% reduction in root mean squared error (RMSE) between predicted and actual wind speeds when using a variable wind shear coefficient at heights ranging from 33% to 100% above the highest actual wind measurement. PMID:27872898
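The extrapolation being evaluated above uses the standard power-law profile v2 = v1 * (h2/h1)^alpha; the sketch below contrasts a fixed exponent with a per-record exponent computed from two measurement heights, using made-up numbers only to show the mechanics.

```python
import numpy as np

def shear_exponent(v_low, v_high, h_low, h_high):
    """Wind shear exponent alpha from speeds measured at two heights."""
    return np.log(v_high / v_low) / np.log(h_high / h_low)

def extrapolate(v_ref, h_ref, h_hub, alpha):
    """Power-law extrapolation of wind speed to hub height."""
    return v_ref * (h_hub / h_ref) ** alpha

# Hypothetical 10-minute records measured at 40 m and 60 m, hub height 100 m
rng = np.random.default_rng(3)
v40 = rng.uniform(4.0, 9.0, size=6)
v60 = v40 * (60 / 40) ** rng.uniform(0.10, 0.35, size=6)   # true shear varies in time

alpha_fixed = 1 / 7                                 # commonly assumed constant WSC
alpha_var = shear_exponent(v40, v60, 40.0, 60.0)    # per-record (variable) WSC

print("fixed-alpha hub speeds:   ", np.round(extrapolate(v60, 60, 100, alpha_fixed), 2))
print("variable-alpha hub speeds:", np.round(extrapolate(v60, 60, 100, alpha_var), 2))
```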
Kinnamon, Daniel D; Lipsitz, Stuart R; Ludwig, David A; Lipshultz, Steven E; Miller, Tracie L
2010-04-01
The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not.
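The paper's estimator is not reproduced here; the sketch below only illustrates, on simulated data with additive technical errors, why a mean-of-ratios estimate of the hydration fraction can drift while a generic instrumental-variables slope estimate does not. The instrument, error sizes, and true HF value are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5_000
hf_true = 0.732                                   # assumed "true" hydration fraction

ffm_true = rng.normal(50.0, 8.0, n)               # true fat-free mass (kg)
tbw_true = hf_true * ffm_true                     # true total body water (kg)

z = ffm_true + rng.normal(0.0, 4.0, n)            # hypothetical instrument: correlated with
                                                  # true FFM, independent of technical errors
ffm_meas = ffm_true + rng.normal(0.0, 5.0, n)     # additive technical error in FFM
tbw_meas = tbw_true + rng.normal(0.0, 1.5, n)     # additive technical error in TBW

naive = np.mean(tbw_meas / ffm_meas)              # mean of individual TBW/FFM ratios
iv = np.sum(z * tbw_meas) / np.sum(z * ffm_meas)  # instrumental-variables estimator

print(f"true HF = {hf_true}, mean-of-ratios = {naive:.3f}, IV estimate = {iv:.3f}")
```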
The predicted CLARREO sampling error of the inter-annual SW variability
NASA Astrophysics Data System (ADS)
Doelling, D. R.; Keyes, D. F.; Nguyen, C.; Macdonnell, D.; Young, D. F.
2009-12-01
The NRC Decadal Survey has called for SI traceability of long-term hyper-spectral flux measurements in order to monitor climate variability. This mission is called the Climate Absolute Radiance and Refractivity Observatory (CLARREO) and is currently defining its mission requirements. The requirements are focused on the ability to measure decadal change of key climate variables at very high accuracy. The accuracy goals are set using anticipated climate change magnitudes, but the accuracy achieved for any given climate variable must take into account the temporal and spatial sampling errors based on satellite orbits and calibration accuracy. The time period to detect a significant trend in the CLARREO record depends on the magnitude of the sampling calibration errors relative to the current inter-annual variability. The largest uncertainty in climate feedbacks remains the effect of changing clouds on planetary energy balance. Some regions on earth have strong diurnal cycles, such as maritime stratus and afternoon land convection; other regions have strong seasonal cycles, such as the monsoon. However, when monitoring inter-annual variability these cycles are only important if the strength of these cycles vary on decadal time scales. This study will attempt to determine the best satellite constellations to reduce sampling error and to compare the error with the current inter-annual variability signal to ensure the viability of the mission. The study will incorporate Clouds and the Earth's Radiant Energy System (CERES) (Monthly TOA/Surface Averages) SRBAVG product TOA LW and SW climate quality fluxes. The fluxes are derived by combining Terra (10:30 local equator crossing time) CERES fluxes with 3-hourly 5-geostationary satellite estimated broadband fluxes, which are normalized using the CERES fluxes, to complete the diurnal cycle. These fluxes were saved hourly during processing and considered the truth dataset. 90°, 83° and 74° inclination precessionary orbits as well as sun-synchronous orbits will be evaluated. This study will focus on the SW radiance, since these low earth orbits are only in daylight for half the orbit. The precessionary orbits were designed to cycle through all solar zenith angles over the course of a year. The inter-annual variability sampling error will be stratified globally/zonally and annually/seasonally and compared with the corresponding truth anomalies.
NASA Astrophysics Data System (ADS)
Penn, C. A.; Clow, D. W.; Sexstone, G. A.
2017-12-01
Water supply forecasts are an important tool for water resource managers in areas where surface water is relied on for irrigating agricultural lands and for municipal water supplies. Forecast errors, which correspond to inaccurate predictions of total surface water volume, can lead to mis-allocated water and productivity loss, thus costing stakeholders millions of dollars. The objective of this investigation is to provide water resource managers with an improved understanding of factors contributing to forecast error, and to help increase the accuracy of future forecasts. In many watersheds of the western United States, snowmelt contributes 50-75% of annual surface water flow and controls both the timing and volume of peak flow. Water supply forecasts from the Natural Resources Conservation Service (NRCS), National Weather Service, and similar cooperators use precipitation and snowpack measurements to provide water resource managers with an estimate of seasonal runoff volume. The accuracy of these forecasts can be limited by available snowpack and meteorological data. In the headwaters of the Rio Grande, NRCS produces January through June monthly Water Supply Outlook Reports. This study evaluates the accuracy of these forecasts since 1990, and examines what factors may contribute to forecast error. The Rio Grande headwaters has experienced recent changes in land cover from bark beetle infestation and a large wildfire, which can affect hydrological processes within the watershed. To investigate trends and possible contributing factors in forecast error, a semi-distributed hydrological model was calibrated and run to simulate daily streamflow for the period 1990-2015. Annual and seasonal watershed and sub-watershed water balance properties were compared with seasonal water supply forecasts. Gridded meteorological datasets were used to assess changes in the timing and volume of spring precipitation events that may contribute to forecast error. Additionally, a spatially-distributed physics-based snow model was used to assess possible effects of land cover change on snowpack properties. Trends in forecasted error are variable while baseline model results show a consistent under-prediction in the recent decade, highlighting possible compounding effects of climate and land cover changes.
Quality control of 3D Geological Models using an Attention Model based on Gaze
NASA Astrophysics Data System (ADS)
Busschers, Freek S.; van Maanen, Peter-Paul; Brouwer, Anne-Marie
2014-05-01
The Geological Survey of the Netherlands (GSN) produces 3D stochastic geological models of the upper 50 meters of the Dutch subsurface. The voxel models are regarded essential in answering subsurface questions on, for example, aggregate resources, groundwater flow, land subsidence studies and the planning of large-scale infrastructural works such as tunnels. GeoTOP is the most recent and detailed generation of 3D voxel models. This model describes 3D lithological variability up to a depth of 50 m using voxels of 100*100*0.5m. Due to the expected increase in data-flow, model output and user demands, the development of (semi-)automated quality control systems is getting more important in the near future. Besides numerical control systems, capturing model errors as seen from the expert geologist viewpoint is of increasing interest. We envision the use of eye gaze to support and speed up detection of errors in the geological voxel models. As a first step in this direction we explore gaze behavior of 12 geological experts from the GSN during quality control of part of the GeoTOP 3D geological model using an eye-tracker. Gaze is used as input of an attention model that results in 'attended areas' for each individual examined image of the GeoTOP model and each individual expert. We compared these attended areas to errors as marked by the experts using a mouse. Results show that: 1) attended areas as determined from experts' gaze data largely match with GeoTOP errors as indicated by the experts using a mouse, and 2) a substantial part of the match can be reached using only gaze data from the first few seconds of the time geologists spend to search for errors. These results open up the possibility of faster GeoTOP model control using gaze if geologists accept a small decrease of error detection accuracy. Attention data may also be used to make independent comparisons between different geologists varying in focus and expertise. This would facilitate a more effective use of experts in specific different projects or areas. Part of such a procedure could be to confront geological experts with their own results, allowing possible training steps in order to improve their geological expertise and eventually improve the GeoTop model. Besides the directions as indicated above, future research should focus on concrete implementation of facilitating and optimizing error detection in present and future 3D voxel models that are commonly characterized by very large amounts of data.
NASA Astrophysics Data System (ADS)
Carter, Jeffrey R.; Simon, Wayne E.
1990-08-01
Neural networks are trained using Recursive Error Minimization (REM) equations to perform statistical classification. Using REM equations with continuous input variables reduces the required number of training experiences by factors of one to two orders of magnitude over standard back propagation. Replacing the continuous input variables with discrete binary representations reduces the number of connections by a factor proportional to the number of variables, reducing the required number of experiences by another order of magnitude. Undesirable effects of using recurrent experience to train neural networks for statistical classification problems are demonstrated, and nonrecurrent experience is used to avoid these undesirable effects. 1. THE I-4I PROBLEM The statistical classification problem which we address is that of assigning points in d-dimensional space to one of two classes. The first class has a covariance matrix of I (the identity matrix); the covariance matrix of the second class is 4I. For this reason the problem is known as the I-4I problem. Both classes have equal probability of occurrence and samples from both classes may appear anywhere throughout the d-dimensional space. Most samples near the origin of the coordinate system will be from the first class while most samples away from the origin will be from the second class. Since the two classes completely overlap it is impossible to have a classifier with zero error. The minimum possible error is known as the Bayes error and
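The abstract truncates just as it introduces the Bayes error; for two equiprobable zero-mean Gaussian classes with covariances I and 4I, the optimal rule thresholds the squared radius ||x||^2 at (4d/3)*ln 4, and the sketch below estimates the resulting Bayes error by Monte Carlo (the dimension and sample counts are arbitrary choices, not values from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 200_000                          # dimension and samples per class (arbitrary)

x1 = rng.normal(0.0, 1.0, (n, d))          # class 1: covariance I
x2 = rng.normal(0.0, 2.0, (n, d))          # class 2: covariance 4I (std dev 2)

# Optimal (Bayes) rule for equal priors: choose class 1 when ||x||^2 < (4d/3) * ln 4
threshold = (4.0 * d / 3.0) * np.log(4.0)

err1 = np.mean(np.sum(x1**2, axis=1) >= threshold)   # class-1 points called class 2
err2 = np.mean(np.sum(x2**2, axis=1) < threshold)    # class-2 points called class 1
print(f"estimated Bayes error in d={d}: {0.5 * (err1 + err2):.3f}")
```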
Temporal Lobe Epilepsy Alters Auditory-motor Integration For Voice Control
Li, Weifeng; Chen, Ziyi; Yan, Nan; Jones, Jeffery A.; Guo, Zhiqiang; Huang, Xiyan; Chen, Shaozhen; Liu, Peng; Liu, Hanjun
2016-01-01
Temporal lobe epilepsy (TLE) is the most common drug-refractory focal epilepsy in adults. Previous research has shown that patients with TLE exhibit decreased performance in listening to speech sounds and deficits in the cortical processing of auditory information. Whether TLE compromises auditory-motor integration for voice control, however, remains largely unknown. To address this question, event-related potentials (ERPs) and vocal responses to vocal pitch errors (1/2 or 2 semitones upward) heard in auditory feedback were compared across 28 patients with TLE and 28 healthy controls. Patients with TLE produced significantly larger vocal responses but smaller P2 responses than healthy controls. Moreover, patients with TLE exhibited a positive correlation between vocal response magnitude and baseline voice variability and a negative correlation between P2 amplitude and disease duration. Graphical network analyses revealed a disrupted neuronal network for patients with TLE with a significant increase of clustering coefficients and path lengths as compared to healthy controls. These findings provide strong evidence that TLE is associated with an atypical integration of the auditory and motor systems for vocal pitch regulation, and that the functional networks that support the auditory-motor processing of pitch feedback errors differ between patients with TLE and healthy controls. PMID:27356768
Alterations in Neural Control of Constant Isometric Contraction with the Size of Error Feedback
Hwang, Ing-Shiou; Lin, Yen-Ting; Huang, Wei-Min; Yang, Zong-Ru; Hu, Chia-Ling; Chen, Yi-Ching
2017-01-01
Discharge patterns from a population of motor units (MUs) were estimated with multi-channel surface electromyogram and signal processing techniques to investigate parametric differences in low-frequency force fluctuations, MU discharges, and force-discharge relation during static force-tracking with varying sizes of execution error presented via visual feedback. Fourteen healthy adults produced isometric force at 10% of maximal voluntary contraction through index abduction under three visual conditions that scaled execution errors with different amplification factors. Error-augmentation feedback that used a high amplification factor (HAF) to potentiate visualized error size resulted in higher sample entropy, mean frequency, ratio of high-frequency components, and spectral dispersion of force fluctuations than those of error-reducing feedback using a low amplification factor (LAF). In the HAF condition, MUs with relatively high recruitment thresholds in the dorsal interosseous muscle exhibited a larger coefficient of variation for inter-spike intervals and a greater spectral peak of the pooled MU coherence at 13–35 Hz than did those in the LAF condition. Manipulation of the size of error feedback altered the force-discharge relation, which was characterized with non-linear approaches such as mutual information and cross sample entropy. The association of force fluctuations and global discharge trace decreased with increasing error amplification factor. Our findings provide direct neurophysiological evidence that favors motor training using error-augmentation feedback. Amplification of the visualized error size of visual feedback could enrich force gradation strategies during static force-tracking, pertaining to selective increases in the discharge variability of higher-threshold MUs that receive greater common oscillatory inputs in the β-band. PMID:28125658
Strand, Matthew; Sillau, Stefan; Grunwald, Gary K; Rabinovitch, Nathan
2014-02-10
Regression calibration provides a way to obtain unbiased estimators of fixed effects in regression models when one or more predictors are measured with error. Recent development of measurement error methods has focused on models that include interaction terms between measured-with-error predictors, and separately, methods for estimation in models that account for correlated data. In this work, we derive explicit and novel forms of regression calibration estimators and associated asymptotic variances for longitudinal models that include interaction terms, when data from instrumental and unbiased surrogate variables are available but not the actual predictors of interest. The longitudinal data are fit using linear mixed models that contain random intercepts and account for serial correlation and unequally spaced observations. The motivating application involves a longitudinal study of exposure to two pollutants (predictors) - outdoor fine particulate matter and cigarette smoke - and their association in interactive form with levels of a biomarker of inflammation, leukotriene E4 (LTE 4 , outcome) in asthmatic children. Because the exposure concentrations could not be directly observed, we used measurements from a fixed outdoor monitor and urinary cotinine concentrations as instrumental variables, and we used concentrations of fine ambient particulate matter and cigarette smoke measured with error by personal monitors as unbiased surrogate variables. We applied the derived regression calibration methods to estimate coefficients of the unobserved predictors and their interaction, allowing for direct comparison of toxicity of the different pollutants. We used simulations to verify accuracy of inferential methods based on asymptotic theory. Copyright © 2013 John Wiley & Sons, Ltd.
Error Propagation Made Easy--Or at Least Easier
ERIC Educational Resources Information Center
Gardenier, George H.; Gui, Feng; Demas, James N.
2011-01-01
Complex error propagation is reduced to formula and data entry into a Mathcad worksheet or an Excel spreadsheet. The Mathcad routine uses both symbolic calculus analysis and Monte Carlo methods to propagate errors in a formula of up to four variables. Graphical output is used to clarify the contributions to the final error of each of the…
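As a plain-Python counterpart to the worksheet approach described above, the sketch below propagates uncertainty through an example formula by Monte Carlo and compares it with first-order (partial-derivative) propagation; the formula f = a*b/c and the stated uncertainties are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Example formula f(a, b, c) = a * b / c, each variable measured as mean +/- SD
a = rng.normal(10.0, 0.2, N)
b = rng.normal(3.0, 0.1, N)
c = rng.normal(2.0, 0.05, N)

f = a * b / c
print(f"f = {f.mean():.3f} +/- {f.std(ddof=1):.3f}  (Monte Carlo)")

# First-order propagation for comparison: df/da = b/c, df/db = a/c, df/dc = -a*b/c^2
fa, fb, fc = 3.0 / 2.0, 10.0 / 2.0, -10.0 * 3.0 / 2.0**2
sigma_f = np.sqrt((fa * 0.2)**2 + (fb * 0.1)**2 + (fc * 0.05)**2)
print(f"f = {10.0 * 3.0 / 2.0:.3f} +/- {sigma_f:.3f}  (linearized formula)")
```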
ERIC Educational Resources Information Center
Shear, Benjamin R.; Zumbo, Bruno D.
2013-01-01
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
ERIC Educational Resources Information Center
Nicewander, W. Alan
2018-01-01
Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either-or-both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient which translates into "increasing the reliability of…
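The correction referred to above is the classical formula r_corrected = r_xy / sqrt(r_xx * r_yy), where r_xx and r_yy are the reliabilities of the two variables; a minimal numerical illustration with made-up values follows.

```python
import math

def disattenuate(r_xy, rel_x, rel_y):
    """Spearman's correction: observed correlation divided by the square root
    of the product of the two reliabilities."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical observed correlation of 0.42 between scores with reliabilities 0.80 and 0.70
print(round(disattenuate(0.42, 0.80, 0.70), 3))   # estimated error-free correlation ~0.561
```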
Francq, Bernard G; Govaerts, Bernadette
2016-06-30
Two main methodologies for assessing equivalence in method-comparison studies are presented separately in the literature. The first one is the well-known and widely applied Bland-Altman approach with its agreement intervals, where two methods are considered interchangeable if their differences are not clinically significant. The second approach is based on errors-in-variables regression in a classical (X,Y) plot and focuses on confidence intervals, whereby two methods are considered equivalent when providing similar measures notwithstanding the random measurement errors. This paper reconciles these two methodologies and shows their similarities and differences using both real data and simulations. A new consistent correlated-errors-in-variables regression is introduced as the errors are shown to be correlated in the Bland-Altman plot. Indeed, the coverage probabilities collapse and the biases soar when this correlation is ignored. Novel tolerance intervals are compared with agreement intervals with or without replicated data, and novel predictive intervals are introduced to predict a single measure in an (X,Y) plot or in a Bland-Altman plot with excellent coverage probabilities. We conclude that the (correlated)-errors-in-variables regressions should not be avoided in method comparison studies, although the Bland-Altman approach is usually applied to avert their complexity. We argue that tolerance or predictive intervals are better alternatives than agreement intervals, and we provide guidelines for practitioners regarding method comparison studies. Copyright © 2016 John Wiley & Sons, Ltd.
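For reference, the Bland-Altman agreement interval mentioned above is simply the mean of the paired differences plus or minus 1.96 times their standard deviation; the sketch below computes it for simulated paired measurements (all values hypothetical). Note that the differences and the means both contain the errors of both methods, which is the correlation the paper addresses; the errors-in-variables regressions themselves are left to dedicated implementations.

```python
import numpy as np

rng = np.random.default_rng(5)
true = rng.normal(100.0, 20.0, 80)                   # hypothetical true analyte values
method_x = true + rng.normal(0.0, 4.0, 80)           # method X with its own random error
method_y = true + 2.0 + rng.normal(0.0, 4.0, 80)     # method Y with a small constant bias

diff = method_y - method_x
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"bias = {bias:.2f}, 95% limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f})")
```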
Schnitzer, Mireille E; Lok, Judith J; Gruber, Susan
2016-05-01
This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010 [27]) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios.
A nonlinear Kalman filtering approach to embedded control of turbocharged diesel engines
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Siano, Pierluigi; Arsie, Ivan
2014-10-01
The development of efficient embedded control for turbocharged Diesel engines requires the programming of elaborate nonlinear control and filtering methods. To this end, in this paper nonlinear control for turbocharged Diesel engines is developed with the use of differential flatness theory and the Derivative-free nonlinear Kalman Filter. It is shown that the dynamic model of the turbocharged Diesel engine is differentially flat and admits dynamic feedback linearization. It is also shown that the dynamic model can be written in the linear Brunovsky canonical form, for which a state feedback controller can be easily designed. To compensate for modeling errors and external disturbances, the Derivative-free nonlinear Kalman Filter is used and redesigned as a disturbance observer. The filter consists of the Kalman Filter recursion applied to the linearized equivalent of the Diesel engine model and of an inverse transformation based on differential flatness theory, which makes it possible to obtain estimates of the state variables of the initial nonlinear model. Once the disturbance variables are identified, they can be compensated for by including an additional control term in the feedback loop. The efficiency of the proposed control method is tested through simulation experiments.
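The disturbance-observer idea in this abstract can be illustrated with a minimal sketch, assuming a simple chain-of-integrators (Brunovsky-form) plant with an additive lumped disturbance rather than the paper's Diesel-engine model; all gains, noise levels, and the constant disturbance value below are illustrative assumptions.

```python
# Minimal sketch (not the paper's Diesel-engine model): a Kalman filter on a
# chain-of-integrators (Brunovsky-form) model, augmented with a lumped
# disturbance state that is estimated and cancelled in the feedback law.
import numpy as np

dt = 0.01
# Augmented state [x1, x2, d]: x1' = x2, x2' = u + d, d ~ slow random walk.
A = np.array([[1, dt, 0],
              [0, 1, dt],
              [0, 0, 1]])
B = np.array([[0.0], [dt], [0.0]])
H = np.array([[1.0, 0.0, 0.0]])      # only the flat output x1 is measured
Q = np.diag([1e-8, 1e-8, 1e-4])      # process noise (lets the disturbance drift)
R = np.array([[1e-4]])               # measurement noise
K_fb = np.array([20.0, 9.0])         # assumed state-feedback gains

x = np.array([1.0, 0.0, 0.5])        # true state; x[2] is the unknown disturbance
x_hat = np.zeros(3)
P = np.eye(3)

for k in range(2000):
    # control: stabilize [x1, x2] and cancel the estimated disturbance
    u = -K_fb @ x_hat[:2] - x_hat[2]
    # true plant propagation
    x = A @ x + B.flatten() * u
    z = H @ x + np.sqrt(R[0, 0]) * np.random.randn(1)
    # Kalman filter: predict / update on the linear equivalent model
    x_hat = A @ x_hat + B.flatten() * u
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R
    Kk = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + (Kk @ (z - H @ x_hat)).flatten()
    P = (np.eye(3) - Kk @ H) @ P

print("estimated disturbance:", x_hat[2])   # should settle near the true value 0.5
```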
Fuzzy logic based robotic controller
NASA Technical Reports Server (NTRS)
Attia, F.; Upadhyaya, M.
1994-01-01
Existing Proportional-Integral-Derivative (PID) robotic controllers rely on an inverse kinematic model to convert user-specified cartesian trajectory coordinates to joint variables. These joints experience friction, stiction, and gear backlash effects. Due to lack of proper linearization of these effects, modern control theory based on state space methods cannot provide adequate control for robotic systems. In the presence of loads, the dynamic behavior of robotic systems is complex and nonlinear, especially where mathematical modeling must be evaluated in real-time operation. Fuzzy Logic Control is a fast emerging alternative to conventional control systems in situations where it may not be feasible to formulate an analytical model of the complex system. Fuzzy logic techniques track a user-defined trajectory without requiring the host computer to explicitly solve the nonlinear inverse kinematic equations. The goal is to provide a rule-based approach, which is closer to human reasoning. The approach used expresses end-point error, location of manipulator joints, and proximity to obstacles as fuzzy variables. The resulting decisions are based upon linguistic and non-numerical information. This paper presents an alternative to the conventional robot controller that is independent of computationally intensive kinematic equations. Computer simulation results of this approach as obtained from software implementation are also discussed.
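A minimal sketch of the rule-based idea, not the paper's controller: one fuzzy input (end-point error) passes through triangular membership functions and a three-rule base to produce a joint-velocity command, defuzzified as a weighted average of rule outputs. The universes, rule outputs, and gains are assumed for illustration.

```python
# Illustrative single-input fuzzy controller: fuzzify the end-point error,
# fire three linguistic rules, and defuzzify to a joint-velocity command.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_joint_command(error):
    # fuzzification: degrees of membership in NEGATIVE / ZERO / POSITIVE
    mu_neg = tri(error, -1.0, -0.5, 0.0)
    mu_zero = tri(error, -0.5, 0.0, 0.5)
    mu_pos = tri(error, 0.0, 0.5, 1.0)
    # rule base: NEGATIVE -> move minus, ZERO -> hold, POSITIVE -> move plus
    outputs = {-0.3: mu_neg, 0.0: mu_zero, 0.3: mu_pos}   # rad/s output singletons
    num = sum(v * mu for v, mu in outputs.items())
    den = sum(outputs.values()) or 1.0
    return num / den          # weighted-average (centroid-style) defuzzification

for e in (-0.8, -0.2, 0.0, 0.4):
    print(e, round(fuzzy_joint_command(e), 3))
```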
Inventory implications of using sampling variances in estimation of growth model coefficients
Albert R. Stage; William R. Wykoff
2000-01-01
Variables based on stand densities or stocking have sampling errors that depend on the relation of tree size to plot size and on the spatial structure of the population. Ignoring the sampling errors of such variables, which include most measures of competition used in both distance-dependent and distance-independent growth models, can bias the predictions obtained from...
C-fuzzy variable-branch decision tree with storage and classification error rate constraints
NASA Astrophysics Data System (ADS)
Yang, Shiueng-Bien
2009-10-01
The C-fuzzy decision tree (CFDT), which is based on the fuzzy C-means algorithm, has recently been proposed. The CFDT is grown by selecting the nodes to be split according to its classification error rate. However, the CFDT design does not consider the classification time taken to classify the input vector. Thus, the CFDT can be improved. We propose a new C-fuzzy variable-branch decision tree (CFVBDT) with storage and classification error rate constraints. The design of the CFVBDT consists of two phases: growing and pruning. The CFVBDT is grown by selecting the nodes to be split according to the classification error rate and the classification time in the decision tree. Additionally, the pruning method selects the nodes to prune based on the storage requirement and the classification time of the CFVBDT. Furthermore, the number of branches of each internal node is variable in the CFVBDT. Experimental results indicate that the proposed CFVBDT outperforms the CFDT and other methods.
Mühlberger, A; Jekel, K; Probst, T; Schecklmann, M; Conzelmann, A; Andreatta, M; Rizzo, A A; Pauli, P; Romanos, M
2016-05-13
This study compares the performance in a continuous performance test within a virtual reality classroom (CPT-VRC) between medicated children with ADHD, unmedicated children with ADHD, and healthy children. N = 94 children with ADHD (n = 26 of them received methylphenidate and n = 68 were unmedicated) and n = 34 healthy children performed the CPT-VRC. Omission errors, reaction time/variability, commission errors, and body movements were assessed. Furthermore, ADHD questionnaires were administered and compared with the CPT-VRC measures. The unmedicated ADHD group exhibited more omission errors and showed slower reaction times than the healthy group. Reaction time variability was higher in the unmedicated ADHD group compared with both the healthy and the medicated ADHD group. Omission errors and reaction time variability were associated with inattentiveness ratings of experimenters. Head movements were correlated with hyperactivity ratings of parents and experimenters. Virtual reality is a promising technology to assess ADHD symptoms in an ecologically valid environment. © The Author(s) 2016.
A Study of Vicon System Positioning Performance.
Merriaux, Pierre; Dupuis, Yohan; Boutteau, Rémi; Vasseur, Pascal; Savatier, Xavier
2017-07-07
Motion capture setups are used in numerous fields. Studies based on motion capture data can be found in biomechanical, sport or animal science. Clinical science studies include gait analysis as well as balance, posture and motor control. Robotic applications encompass object tracking. Everyday applications include entertainment and augmented reality. Still, few studies investigate the positioning performance of motion capture setups. In this paper, we study the positioning performance of one of the main players in marker-based optoelectronic motion capture: the Vicon system. Our protocol includes evaluations of static and dynamic performances. Mean error as well as positioning variabilities are studied with calibrated ground truth setups that are not based on other motion capture modalities. We introduce a new setup that enables directly estimating the absolute positioning accuracy for dynamic experiments, contrary to state-of-the-art works that rely on inter-marker distances. The system performs well on static experiments with a mean absolute error of 0.15 mm and a variability lower than 0.025 mm. Our dynamic experiments were carried out at speeds found in real applications. Our work suggests that the system error is less than 2 mm. We also found that marker size and Vicon sampling rate must be carefully chosen with respect to the speed encountered in the application in order to reach optimal positioning performance, which can be as good as 0.3 mm in our dynamic study.
A predictability study of Lorenz's 28-variable model as a dynamical system
NASA Technical Reports Server (NTRS)
Krishnamurthy, V.
1993-01-01
The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.
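The error-growth experiment described here can be sketched on a much smaller chaotic system; the snippet below uses the 3-variable Lorenz-63 equations as a stand-in for the paper's 28-variable two-layer quasi-geostrophic model and estimates the largest Lyapunov exponent by repeatedly renormalizing a small perturbation, mirroring the exponential growth phase discussed above. The integrator, step size, and run length are illustrative.

```python
# Twin-run error-growth experiment on Lorenz-63 (an illustrative stand-in).
# The largest Lyapunov exponent is the average logarithmic growth rate of a
# small perturbation that is renormalized at every step.
import numpy as np

def lorenz_step(x, dt, s=10.0, r=28.0, b=8.0 / 3.0):
    """One forward-Euler step of Lorenz-63 (kept deliberately simple)."""
    dx = np.array([s * (x[1] - x[0]),
                   x[0] * (r - x[2]) - x[1],
                   x[0] * x[1] - b * x[2]])
    return x + dt * dx

dt, n_steps, eps = 0.002, 50000, 1e-8
x = np.array([1.0, 1.0, 1.0])
for _ in range(5000):                      # spin-up onto the attractor
    x = lorenz_step(x, dt)

xp = x + np.array([eps, 0.0, 0.0])         # perturbed twin
log_growth = 0.0
for _ in range(n_steps):
    x, xp = lorenz_step(x, dt), lorenz_step(xp, dt)
    d = np.linalg.norm(xp - x)
    log_growth += np.log(d / eps)
    xp = x + (xp - x) * (eps / d)          # renormalize to keep the error small

# roughly 0.9 for the standard Lorenz-63 parameters
print("largest Lyapunov exponent estimate:", log_growth / (n_steps * dt))
```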
Measurement error is often neglected in medical literature: a systematic review.
Brakenhoff, Timo B; Mitroiu, Marian; Keogh, Ruth H; Moons, Karel G M; Groenwold, Rolf H H; van Smeden, Maarten
2018-06-01
In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of general medicine and epidemiology literature. Original research published in 2016 in 12 high impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and methods to investigate or correct for it were quantified and characterized. Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error. 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error. Consequently, it is difficult for readers to judge the robustness of presented results to the existence of measurement error in the majority of publications in high impact journals. Our systematic review highlights the need for increased awareness about the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wei, Xianggeng; Li, Jiang; He, Guoqiang
2017-04-01
The vortex valve solid variable thrust motor is a new solid motor that can achieve vehicle system trajectory optimization and motor energy management. Numerical calculation was performed to investigate the influence of the vortex chamber diameter, vortex chamber shape, and vortex chamber height of the vortex valve solid variable thrust motor on modulation performance. The test results verified that the calculation results are consistent with laboratory results, with a maximum error of 9.5%. The research drew the following major conclusions: the optimal modulation performance was achieved in a cylindrical vortex chamber; increasing the vortex chamber diameter improved the modulation performance of the vortex valve solid variable thrust motor; optimal modulation performance could be achieved when the height of the vortex chamber is half of the vortex chamber outlet diameter; and the hot gas control flow could result in an enhancement of modulation performance. The results can provide the basis for establishing the design method of the vortex valve solid variable thrust motor.
Iqbal, Muhammad; Rehan, Muhammad; Khaliq, Abdul; Saeed-ur-Rehman; Hong, Keum-Shik
2014-01-01
This paper investigates the chaotic behavior and synchronization of two different coupled chaotic FitzHugh-Nagumo (FHN) neurons with unknown parameters under external electrical stimulation (EES). The coupled FHN neurons of different parameters admit unidirectional and bidirectional gap junctions in the medium between them. Dynamical properties, such as the increase in synchronization error as a consequence of the deviation of neuronal parameters for unlike neurons, the effect of difference in coupling strengths caused by the unidirectional gap junctions, and the impact of large time-delay due to separation of neurons, are studied in exploring the behavior of the coupled system. A novel integral-based nonlinear adaptive control scheme, which copes with the infeasibility of measuring the recovery variable, is derived for synchronization of two coupled delayed chaotic FHN neurons with different and unknown parameters under uncertain EES. Further, to guarantee robust synchronization of different neurons against disturbances, the proposed control methodology is modified to achieve uniformly ultimately bounded synchronization. The parametric estimation errors can be reduced by selecting suitable control parameters. The effectiveness of the proposed control scheme is illustrated via numerical simulations.
Positive sliding mode control for blood glucose regulation
NASA Astrophysics Data System (ADS)
Menani, Karima; Mohammadridha, Taghreed; Magdelaine, Nicolas; Abdelaziz, Mourad; Moog, Claude H.
2017-11-01
Biological systems involving positive variables as concentrations are some examples of so-called positive systems. This is the case of the glycemia-insulinemia system considered in this paper. To cope with these physical constraints, it is shown that a positive sliding mode control (SMC) can be designed for glycemia regulation. The largest positive invariant set (PIS) is obtained for the insulinemia subsystem in open and closed loop. The existence of a positive SMC for glycemia regulation is shown here for the first time. Necessary conditions to design the sliding surface and the discontinuity gain are derived to guarantee a positive SMC for the insulin dynamics. SMC is designed to be positive everywhere in the largest closed-loop PIS of the plasma insulin system. A two-stage SMC is employed; the last-stage SMC2 block uses the glycemia error to design the desired insulin trajectory. Then the plasma insulin state is forced to track the reference via SMC1. The resulting desired insulin trajectory is the required virtual control input of the glycemia system to eliminate the blood glucose (BG) error. The positive control is tested in silico on a type 1 diabetic patient model derived from real-life clinical data.
Lim, Jun-Seok; Pang, Hee-Suk
2016-01-01
In this paper an ℓ1-regularized recursive total least squares (RTLS) algorithm is considered for sparse system identification. Although recursive least squares (RLS) has been successfully applied in sparse system identification, the estimation performance of RLS-based algorithms becomes worse when both input and output are contaminated by noise (the error-in-variables problem). We propose an algorithm to handle the error-in-variables problem. The proposed ℓ1-RTLS algorithm is an RLS-like iteration using ℓ1 regularization. The proposed algorithm not only gives excellent performance but also reduces the required complexity through effective handling of the matrix inversion. Simulations demonstrate the superiority of the proposed ℓ1-regularized RTLS for the sparse system identification setting.
Automatic-repeat-request error control schemes
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.; Miller, M. J.
1983-01-01
Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
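A toy stop-and-wait ARQ over a simulated noisy channel illustrates the basic mechanism surveyed here: a CRC-32 checksum detects corrupted frames, which are then retransmitted. The frame format, channel bit error rate, and retry limit are assumptions for the sketch, not any of the specific schemes reviewed.

```python
# Minimal stop-and-wait ARQ with CRC-32 error detection over a bit-flipping channel.
import random
import zlib

def make_frame(seq, payload: bytes) -> bytes:
    body = bytes([seq]) + payload
    return body + zlib.crc32(body).to_bytes(4, "big")

def frame_ok(frame: bytes) -> bool:
    body, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    return zlib.crc32(body) == crc

def noisy_channel(frame: bytes, ber=1e-3) -> bytes:
    out = bytearray(frame)
    for i in range(len(out)):
        for bit in range(8):
            if random.random() < ber:     # flip each bit independently
                out[i] ^= 1 << bit
    return bytes(out)

def send_stop_and_wait(messages, ber=1e-3, max_tries=20):
    delivered, retransmissions = [], 0
    for seq, payload in enumerate(messages):
        for _ in range(max_tries):
            received = noisy_channel(make_frame(seq % 256, payload), ber)
            if frame_ok(received):        # receiver ACKs a clean frame
                delivered.append(received[1:-4])
                break
            retransmissions += 1          # detected error: resend
    return delivered, retransmissions

msgs = [b"hello", b"automatic", b"repeat", b"request"]
out, retx = send_stop_and_wait(msgs)
print("all delivered correctly:", out == msgs, "retransmissions:", retx)
```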
Cognition-Action Trade-Offs Reflect Organization of Attention in Infancy.
Berger, Sarah E; Harbourne, Regina T; Horger, Melissa N
2018-01-01
This chapter discusses what cognition-action trade-offs in infancy reveal about the organization and developmental trajectory of attention. We focus on internal attention because this aspect is most relevant to the immediate concerns of infancy, such as fluctuating levels of expertise, balancing multiple taxing skills simultaneously, learning how to control attention under variable conditions, and coordinating distinct psychological domains. Cognition-action trade-offs observed across the life span include perseveration during skill emergence, errors and inefficient strategies during decision making, and the allocation of resources when attention is taxed. An embodied cognitive-load account interprets these behavioral patterns as a result of limited attentional resources allocated across simultaneous, taxing task demands. For populations where motor errors could be costly, like infants and the elderly, attention is typically devoted to motor demands with errors occurring in the cognitive domain. In contrast, healthy young adults tend to preserve their cognitive performance by modifying their actions. © 2018 Elsevier Inc. All rights reserved.
System analysis of vehicle active safety problem
NASA Astrophysics Data System (ADS)
Buznikov, S. E.
2018-02-01
The problem of road transport safety affects the vital interests of most of the population and is characterized by a global level of significance. A system analysis of the problem of creating competitive active vehicle safety systems is presented as an interrelated complex of tasks of multi-criterion optimization and dynamic stabilization of the state variables of a controlled object. Solving them requires generation of all possible variants of technical solutions within the software and hardware domains and synthesis of a control law that is close to optimum. The Zwicky “morphological box” method is used to implement the system analysis. Creation of comprehensive active safety systems involves solving the problem of preventing typical collisions. To address it, a structured set of collisions is introduced, with its elements also generated using the Zwicky “morphological box” method. The obstacle speed, the longitudinal acceleration of the controlled object, the unpredictable changes in its movement direction due to certain faults, the road surface condition and the control errors are taken as structure variables that characterize the conditions of collisions. The conditions for preventing typical collisions are presented as inequalities for the physical variables that define the state vector of the object and its dynamic limits.
Analysis of real-time numerical integration methods applied to dynamic clamp experiments.
Butera, Robert J; McCarthy, Maeve L
2004-12-01
Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
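The comparison of integration schemes can be reproduced in miniature: the sketch below integrates a single gating variable dm/dt = (m_inf(V) - m)/tau(V) under a time-varying clamped voltage with forward Euler and with exponential Euler (EE), comparing both against a fine-step reference. The particular m_inf, tau, and voltage waveform are illustrative assumptions rather than the functions used in the paper.

```python
# Forward Euler vs exponential Euler for one voltage-dependent gating variable.
import math

def m_inf(V):
    return 1.0 / (1.0 + math.exp(-(V + 40.0) / 9.0))        # assumed steady-state curve

def tau(V):
    return 1.0 + 4.0 * math.exp(-((V + 40.0) / 30.0) ** 2)  # assumed time constant (ms)

def V_of(t):
    return -40.0 + 30.0 * math.sin(2 * math.pi * t / 5.0)   # time-varying voltage (mV)

T = 20.0  # ms

def integrate(dt, method):
    m, t = 0.05, 0.0
    while t < T - 1e-12:
        V = V_of(t)
        if method == "euler":
            m += dt * (m_inf(V) - m) / tau(V)                     # forward Euler
        else:
            m = m_inf(V) + (m - m_inf(V)) * math.exp(-dt / tau(V))  # exponential Euler
        t += dt
    return m

ref = integrate(1e-4, "euler")       # fine-step reference solution
for dt in (0.01, 0.1, 0.5, 1.0):
    print(f"dt={dt:4}: Euler err={abs(integrate(dt, 'euler') - ref):.2e}, "
          f"EE err={abs(integrate(dt, 'ee') - ref):.2e}")
```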
Rong, Hao; Tian, Jin; Zhao, Tingdi
2016-01-01
In traditional approaches to human reliability assessment (HRA), the definition of the error producing conditions (EPCs) and the supporting guidance are such that some of the conditions (especially organizational or managerial conditions) can hardly be included, and thus the analysis is incomplete and does not reflect the temporal trend of human reliability. A method based on system dynamics (SD), which highlights interrelationships among technical and organizational aspects that may contribute to human errors, is presented to facilitate quantitatively estimating the human error probability (HEP) and its related variables as they change over a long period of time. Taking the Minuteman III missile accident in 2008 as a case, the proposed HRA method is applied to assess HEP during missile operations over 50 years by analyzing the interactions among the variables involved in human-related risks; the critical factors are also determined in terms of the impact that the variables have on risks in different time periods. It is indicated that both technical and organizational aspects should be focused on to minimize human errors in the long run. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
The Trojan Lifetime Champions Health Survey: development, validity, and reliability.
Sorenson, Shawn C; Romano, Russell; Scholefield, Robin M; Schroeder, E Todd; Azen, Stanley P; Salem, George J
2015-04-01
Self-report questionnaires are an important method of evaluating lifespan health, exercise, and health-related quality of life (HRQL) outcomes among elite, competitive athletes. Few instruments, however, have undergone formal characterization of their psychometric properties within this population. To evaluate the validity and reliability of a novel health and exercise questionnaire, the Trojan Lifetime Champions (TLC) Health Survey. Descriptive laboratory study. A large National Collegiate Athletic Association Division I university. A total of 63 university alumni (age range, 24 to 84 years), including former varsity collegiate athletes and a control group of nonathletes. Participants completed the TLC Health Survey twice at a mean interval of 23 days with randomization to the paper or electronic version of the instrument. Content validity, feasibility of administration, test-retest reliability, parallel-form reliability between paper and electronic forms, and estimates of systematic and typical error versus differences of clinical interest were assessed across a broad range of health, exercise, and HRQL measures. Correlation coefficients, including intraclass correlation coefficients (ICCs) for continuous variables and κ agreement statistics for ordinal variables, for test-retest reliability averaged 0.86, 0.90, 0.80, and 0.74 for HRQL, lifetime health, recent health, and exercise variables, respectively. Correlation coefficients, again ICCs and κ, for parallel-form reliability (ie, equivalence) between paper and electronic versions averaged 0.90, 0.85, 0.85, and 0.81 for HRQL, lifetime health, recent health, and exercise variables, respectively. Typical measurement error was less than the a priori thresholds of clinical interest, and we found minimal evidence of systematic test-retest error. We found strong evidence of content validity, convergent construct validity with the Short-Form 12 Version 2 HRQL instrument, and feasibility of administration in an elite, competitive athletic population. These data suggest that the TLC Health Survey is a valid and reliable instrument for assessing lifetime and recent health, exercise, and HRQL, among elite competitive athletes. Generalizability of the instrument may be enhanced by additional, larger-scale studies in diverse populations.
Convergence analysis of sliding mode trajectories in multi-objective neural networks learning.
Costa, Marcelo Azevedo; Braga, Antonio Padua; de Menezes, Benjamin Rodrigues
2012-09-01
The Pareto-optimality concept is used in this paper in order to represent a constrained set of solutions that are able to trade-off the two main objective functions involved in neural networks supervised learning: data-set error and network complexity. The neural network is described as a dynamic system having error and complexity as its state variables and learning is presented as a process of controlling a learning trajectory in the resulting state space. In order to control the trajectories, sliding mode dynamics is imposed to the network. It is shown that arbitrary learning trajectories can be achieved by maintaining the sliding mode gains within their convergence intervals. Formal proofs of convergence conditions are therefore presented. The concept of trajectory learning presented in this paper goes further beyond the selection of a final state in the Pareto set, since it can be reached through different trajectories and states in the trajectory can be assessed individually against an additional objective function. Copyright © 2012 Elsevier Ltd. All rights reserved.
A silicon avalanche photodiode detector circuit for Nd:YAG laser scattering
NASA Astrophysics Data System (ADS)
Hsieh, C.-L.; Haskovec, J.; Carlstrom, T. N.; Deboo, J. C.; Greenfield, C. M.; Snider, R. T.; Trost, P.
1990-06-01
A silicon avalanche photodiode with an internal gain of about 50 to 100 is used in a temperature controlled environment to measure the Nd:YAG laser Thomson scattered spectrum in the wavelength range from 700 to 1150 nm. A charge sensitive preamplifier was developed for minimizing the noise contribution from the detector electronics. Signal levels as low as 20 photoelectrons (S/N = 1) can be detected. Measurements show that both the signal and the variance of the signal vary linearly with the input light level over the range of interest, indicating Poisson statistics. The signal is processed using a 100 ns delay line and a differential amplifier which subtracts the low frequency background light component. The background signal is amplified with a computer controlled variable gain amplifier and is used for an estimate of the measurement error, calibration, and Z sub eff measurements of the plasma. The signal processing was analyzed using a theoretical model to aid the system design and establish the procedure for data error analysis.
Parental age and the origin of trisomy 21. A study of 302 families.
Dagna Bricarelli, F; Pierluigi, M; Landucci, M; Arslanian, A; Coviello, D A; Ferro, M A; Strigini, P
1989-04-01
Several studies have attempted to define the role of parental age in determining the prevalence of 47, +21 according to the origin of nondisjunction. This report analyzes the original data of 197 informative families from Italy and reviews the available literature (96 families from Denmark and 201 from other countries). Mothers whose gametes showed nondisjunction are treated as cases, and those with normal meiosis as controls within each study. To utilize the data fully, maternal age at birth of a 47, +21 individual is treated as a continuous variable in a nonparametric comparison. The combined evidence indicates that nondisjunction in the female is associated with a significant age difference between cases and controls which is mostly due to errors in the second meiotic division. It may be inferred that in the general population, aging enhances nondisjunction at both first and second division in the female, while aging in the male is presumably associated mostly (or only) with first division errors. Implications and alternative models are discussed.
Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2012-01-01
This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.
Goldmann tonometer error correcting prism: clinical evaluation.
McCafferty, Sean; Lim, Garrett; Duncan, William; Enikov, Eniko T; Schwiegerling, Jim; Levine, Jason; Kew, Corin
2017-01-01
Clinically evaluate a modified applanating surface Goldmann tonometer prism designed to substantially negate errors due to patient variability in biomechanics. A modified Goldmann prism with a correcting applanation tonometry surface (CATS) was mathematically optimized to minimize the intraocular pressure (IOP) measurement error due to patient variability in corneal thickness, stiffness, curvature, and tear film adhesion force. A comparative clinical study of 109 eyes measured IOP with CATS and Goldmann prisms. The IOP measurement differences between the CATS and Goldmann prisms were correlated to corneal thickness, hysteresis, and curvature. The CATS tonometer prism in correcting for Goldmann central corneal thickness (CCT) error demonstrated a reduction to <±2 mmHg in 97% of a standard CCT population. This compares to only 54% with CCT error <±2 mmHg using the Goldmann prism. Equal reductions of ~50% in errors due to corneal rigidity and curvature were also demonstrated. The results validate the CATS prism's improved accuracy and expected reduced sensitivity to Goldmann errors without IOP bias as predicted by mathematical modeling. The CATS replacement for the Goldmann prism does not change Goldmann measurement technique or interpretation.
Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy
Cohen, E. A. K.; Ober, R. J.
2014-01-01
We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise; a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs this is an errors-in-variable problem and linear least squares is inappropriate; the correct method being generalized least squares. To allow for point dependent errors the equivalence of a generalized maximum likelihood and heteroscedastic generalized least squares model is achieved allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity) we provide closed form solutions to estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE) believed to be useful, especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distribution for the TRE and LRE are themselves Gaussian and the parameterized distributions are derived. Results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data. PMID:24634573
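A simplified numerical illustration of heteroscedastic control-point registration follows: an affine transform is fit by weighted least squares, with each control point weighted by the inverse of its localization variance (taken here to scale inversely with photon count). Because it treats one point set as exact, this is a simplification of the paper's errors-in-variables generalized least squares estimator; the transform, photon counts, and variance model are assumptions.

```python
# Weighted least-squares affine registration with per-control-point variances.
import numpy as np

rng = np.random.default_rng(0)
n_cp = 12
A_true = np.array([[1.02, 0.01], [-0.02, 0.98]])
t_true = np.array([5.0, -3.0])

x = rng.uniform(0, 100, size=(n_cp, 2))            # control points in image 1
photons = rng.integers(200, 5000, size=n_cp)       # per-CP photon counts
sigma2 = 100.0 / photons                           # localization variance ~ 1/photons
y = x @ A_true.T + t_true + rng.normal(0, np.sqrt(sigma2)[:, None], (n_cp, 2))

# Weighted least squares for [A | t]: minimize sum_i w_i ||y_i - A x_i - t||^2
W = np.diag(1.0 / sigma2)
X = np.hstack([x, np.ones((n_cp, 1))])             # design matrix with intercept column
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # shape (3, 2): rows of A^T, then t
A_hat, t_hat = beta[:2].T, beta[2]
print("A_hat:\n", A_hat, "\nt_hat:", t_hat)
```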
A model of the human supervisor
NASA Technical Reports Server (NTRS)
Kok, J. J.; Vanwijk, R. A.
1977-01-01
A general model of the human supervisor's behavior is given. Submechanisms of the model include the observer/reconstructor, decision-making, and the controller. A set of hypotheses is postulated for the relations between the task variables and the parameters of the different submechanisms of the model. Verification of the model hypotheses is considered using variations in the task variables. An approach is suggested for the identification of the model parameters which makes use of a multidimensional error criterion. Each of the elements of this multidimensional criterion corresponds to a certain aspect of the supervisor's behavior, and is directly related to a particular part of the model and its parameters. This approach offers good possibilities for an efficient parameter adjustment procedure.
Relaxing the rule of ten events per variable in logistic and Cox regression.
Vittinghoff, Eric; McCulloch, Charles E
2007-03-15
The rule of thumb that logistic and Cox models should be used with a minimum of 10 outcome events per predictor variable (EPV), based on two simulation studies, may be too conservative. The authors conducted a large simulation study of other influences on confidence interval coverage, type I error, relative bias, and other model performance measures. They found a range of circumstances in which coverage and bias were within acceptable levels despite less than 10 EPV, as well as other factors that were as influential as or more influential than EPV. They conclude that this rule can be relaxed, in particular for sensitivity analyses undertaken to demonstrate adequate control of confounding.
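A small simulation in the spirit of such events-per-variable (EPV) studies can be sketched as follows: logistic-regression data are generated at a chosen EPV level and 95% confidence-interval coverage for one coefficient is tallied. The number of predictors, effect size, and baseline event rate are illustrative assumptions.

```python
# Coverage check for one logistic-regression coefficient at a chosen EPV level.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_pred, epv, beta_true = 5, 5, 0.5           # 5 predictors, target EPV of 5
n_reps = 500
covered = valid = 0

for _ in range(n_reps):
    n = int(epv * n_pred / 0.2)               # ~20% baseline event rate -> expected events ~ epv*n_pred
    X = rng.normal(size=(n, n_pred))
    lin = -1.4 + beta_true * X[:, 0]           # only the first predictor has an effect
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))
    try:
        fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    except Exception:                          # separation or non-convergence at low EPV
        continue
    lo = fit.params[1] - 1.96 * fit.bse[1]
    hi = fit.params[1] + 1.96 * fit.bse[1]
    valid += 1
    covered += lo <= beta_true <= hi

print(f"approximate coverage at EPV={epv}: {covered / max(valid, 1):.2f}")
```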
A manual control theory analysis of vertical situation displays for STOL aircraft
NASA Technical Reports Server (NTRS)
Baron, S.; Levison, W. H.
1973-01-01
Pilot-vehicle-display systems theory is applied to the analysis of proposed vertical situation displays for manual control in approach-to-landing of a STOL aircraft. The effects of display variables on pilot workload and on total closed-loop system performance were calculated using an optimal-control model for the human operator. The steep approach of an augmentor wing jet STOL aircraft was analyzed. Both random turbulence and mean-wind shears were considered. Linearized perturbation equations were used to describe longitudinal and lateral dynamics of the aircraft. The basic display configuration was one that abstracted the essential status information (including glide-slope and localizer errors) of an EADI display. Proposed flight director displays for both longitudinal and lateral control were also investigated.
Pojić, Milica; Rakić, Dušan; Lazić, Zivorad
2015-01-01
A chemometric approach was applied to optimize the robustness of the NIRS method for wheat quality control. Due to the high number of experimental variables (n=6) and response variables (n=7) to be studied, the optimization experiment was divided into two stages: a screening stage to evaluate which of the considered variables were significant, and an optimization stage to optimize the identified factors in the previously selected experimental domain. The significant variables were identified using a fractional factorial experimental design, whilst a Box-Wilson rotatable central composite design (CCRD) was run to obtain the optimal values for the significant variables. The measured responses included moisture, protein and wet gluten content, Zeleny sedimentation value and deformation energy. In order to achieve the minimal variation in responses, the optimal factor settings were found by minimizing the propagation of error (POE). The simultaneous optimization of factors was conducted by a desirability function. The highest desirability of 87.63% was accomplished by setting the experimental conditions as follows: 19.9°C for sample temperature, 19.3°C for ambient temperature and 240V for instrument voltage. Copyright © 2014 Elsevier B.V. All rights reserved.
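The desirability-function step can be sketched with hypothetical fitted response models: each response is mapped to a Derringer-Suich-style desirability and the factor settings maximizing the geometric-mean overall desirability are located by a grid search. The response models, targets, and factor ranges below are assumptions, not the paper's fitted surfaces.

```python
# Grid search over factor settings using target-is-best desirability functions.
import numpy as np

def d_target(y, low, target, high):
    """Desirability for a target-is-best response; 0 outside [low, high]."""
    if y < low or y > high:
        return 0.0
    return (y - low) / (target - low) if y <= target else (high - y) / (high - target)

def overall_desirability(ys, specs):
    ds = [d_target(y, *spec) for y, spec in zip(ys, specs)]
    return 0.0 if min(ds) == 0 else float(np.prod(ds)) ** (1.0 / len(ds))

def responses(ts, ta, v):
    # hypothetical fitted response-surface models for two responses
    moisture = 12.0 + 0.02 * (ts - 20) - 0.01 * (ta - 20)
    protein = 13.5 - 0.015 * (ts - 20) + 0.001 * (v - 230)
    return [moisture, protein]

specs = [(11.5, 12.0, 12.5),   # (low, target, high) for moisture
         (13.0, 13.5, 14.0)]   # (low, target, high) for protein

best = max(((ts, ta, v) for ts in np.linspace(18, 22, 21)
                        for ta in np.linspace(18, 22, 21)
                        for v in np.linspace(220, 240, 11)),
           key=lambda f: overall_desirability(responses(*f), specs))
print("best factor settings (sample T, ambient T, voltage):", best)
```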
Fault-tolerant measurement-based quantum computing with continuous-variable cluster states.
Menicucci, Nicolas C
2014-03-28
A long-standing open question about Gaussian continuous-variable cluster states is whether they enable fault-tolerant measurement-based quantum computation. The answer is yes. Initial squeezing in the cluster above a threshold value of 20.5 dB ensures that errors from finite squeezing acting on encoded qubits are below the fault-tolerance threshold of known qubit-based error-correcting codes. By concatenating with one of these codes and using ancilla-based error correction, fault-tolerant measurement-based quantum computation of theoretically indefinite length is possible with finitely squeezed cluster states.
Advanced Flight Simulator: Utilization in A-10 Conversion and Air-to-Surface Attack Training.
1981-01-01
ASPT to the A-10. Finally, the objectivity of the criteria (parameters of aircraft control, bomb-drop circular error, and percentage of rounds through a ... low angle strafe task. Table 4 presents a listing of these tasks and their related release parameters. ... [Table 1. A/S Weapons Delivery] ... Scoring. Two dependent variables, specific to the phase of student training, were used. In the conversion training phase, specific parameters for ...
Determining relative error bounds for the CVBEM
Hromadka, T.V.
1985-01-01
The Complex Variable Boundary Element Method provides a measure of relative error which can be utilized to subsequently reduce the error or provide information for further modeling analysis. By maximizing the relative error norm on each boundary element, a bound on the total relative error for each boundary element can be evaluated. This bound can be utilized to test CVBEM convergence, to analyze the effects of additional boundary nodal points in reducing the modeling error, and to evaluate the sensitivity of the resulting modeling error within a boundary element to the error produced in another boundary element as a function of geometric distance. © 1985.
NASA Technical Reports Server (NTRS)
Draper, John V.; Herndon, Joseph N.; Moore, Wendy E.
1987-01-01
Previous research on teleoperator force feedback is reviewed and results of a testing program which assessed the impact of force reflection on teleoperator task performance are reported. Force reflection is a type of force feedback in which the forces acting on the remote portion of the teleoperator are displayed to the operator by back-driving the master controller. The testing program compared three force reflection levels: 4 to 1 (four units of force on the slave produce one unit of force at the master controller), 1 to 1, and infinity to 1 (no force reflection). Time required to complete tasks, rate of occurrence of errors, the maximum force applied to task components, and variability in forces applied to components during completion of representative remote handling tasks were used as dependent variables. Operators exhibited lower error rates, lower peak forces, and more consistent application of forces using force reflection than they did without it. These data support the hypothesis that force reflection provides useful information for teleoperator users. The earlier literature and the results of the experiment are discussed in terms of their implications for space-based teleoperator systems. The discussion describes the impact of force reflection on task completion performance and task strategies, as suggested by the literature. It is important to understand the trade-offs involved in using telerobotic systems with and without force reflection.
SOL - SIZING AND OPTIMIZATION LANGUAGE COMPILER
NASA Technical Reports Server (NTRS)
Scotti, S. J.
1994-01-01
SOL is a computer language which is geared to solving design problems. SOL includes the mathematical modeling and logical capabilities of a computer language like FORTRAN but also includes the additional power of non-linear mathematical programming methods (i.e. numerical optimization) at the language level (as opposed to the subroutine level). The language-level use of optimization has several advantages over the traditional, subroutine-calling method of using an optimizer: first, the optimization problem is described in a concise and clear manner which closely parallels the mathematical description of optimization; second, a seamless interface is automatically established between the optimizer subroutines and the mathematical model of the system being optimized; third, the results of an optimization (objective, design variables, constraints, termination criteria, and some or all of the optimization history) are output in a form directly related to the optimization description; and finally, automatic error checking and recovery from an ill-defined system model or optimization description is facilitated by the language-level specification of the optimization problem. Thus, SOL enables rapid generation of models and solutions for optimum design problems with greater confidence that the problem is posed correctly. The SOL compiler takes SOL-language statements and generates the equivalent FORTRAN code and system calls. Because of this approach, the modeling capabilities of SOL are extended by the ability to incorporate existing FORTRAN code into a SOL program. In addition, SOL has a powerful MACRO capability. The MACRO capability of the SOL compiler effectively gives the user the ability to extend the SOL language and can be used to develop easy-to-use shorthand methods of generating complex models and solution strategies. The SOL compiler provides syntactic and semantic error-checking, error recovery, and detailed reports containing cross-references to show where each variable was used. The listings summarize all optimizations, listing the objective functions, design variables, and constraints. The compiler offers error-checking specific to optimization problems, so that simple mistakes will not cost hours of debugging time. The optimization engine used by and included with the SOL compiler is a version of Vanderplatt's ADS system (Version 1.1) modified specifically to work with the SOL compiler. SOL allows the use of the over 100 ADS optimization choices such as Sequential Quadratic Programming, Modified Feasible Directions, interior and exterior penalty function and variable metric methods. Default choices of the many control parameters of ADS are made for the user, however, the user can override any of the ADS control parameters desired for each individual optimization. The SOL language and compiler were developed with an advanced compiler-generation system to ensure correctness and simplify program maintenance. Thus, SOL's syntax was defined precisely by a LALR(1) grammar and the SOL compiler's parser was generated automatically from the LALR(1) grammar with a parser-generator. Hence unlike ad hoc, manually coded interfaces, the SOL compiler's lexical analysis insures that the SOL compiler recognizes all legal SOL programs, can recover from and correct for many errors and report the location of errors to the user. This version of the SOL compiler has been implemented on VAX/VMS computer systems and requires 204 KB of virtual memory to execute. 
Since the SOL compiler produces FORTRAN code, it requires the VAX FORTRAN compiler to produce an executable program. The SOL compiler consists of 13,000 lines of Pascal code. It was developed in 1986 and last updated in 1988. The ADS and other utility subroutines amount to 14,000 lines of FORTRAN code and were also updated in 1988.
Packet-Based Protocol Efficiency for Aeronautical and Satellite Communications
NASA Technical Reports Server (NTRS)
Carek, David A.
2005-01-01
This paper examines the relation between bit error ratios and the effective link efficiency when transporting data with a packet-based protocol. Relations are developed to quantify the impact of a protocol's packet size and header size relative to the bit error ratio of the underlying link. These relations are examined in the context of radio transmissions that exhibit variable error conditions, such as those used in satellite, aeronautical, and other wireless networks. A comparison of two packet sizing methodologies is presented. From these relations, the true ability of a link to deliver user data, or information, is determined. Relations are developed to calculate the optimal protocol packet size for given link error characteristics. These relations could be useful in future research for developing an adaptive protocol layer. They can also be used for sizing protocols in the design of static links, where bit error ratios have small variability.
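The packet-size trade-off can be sketched directly: longer packets amortize the header, but the probability that the whole packet arrives error-free falls as (1 - BER) raised to the packet length in bits. The header size and the assumption of independent bit errors are illustrative; the snippet simply searches for the payload size that maximizes this efficiency measure.

```python
# Effective efficiency of a packet protocol on a link with random bit errors,
# and the payload size that maximizes it for a few bit error ratios (BER).
def efficiency(payload_bytes, header_bytes, ber):
    total_bits = 8 * (payload_bytes + header_bytes)
    # payload fraction times probability the entire packet is error-free
    return (payload_bytes / (payload_bytes + header_bytes)) * (1.0 - ber) ** total_bits

header = 40   # bytes of protocol overhead (assumed)
for ber in (1e-7, 1e-6, 1e-5, 1e-4):
    best = max(range(1, 20000), key=lambda p: efficiency(p, header, ber))
    print(f"BER={ber:g}: optimal payload ~{best} bytes, "
          f"efficiency={efficiency(best, header, ber):.3f}")
```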
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be applied later, while methodological advances are needed in the multi-pollutant setting.
Do motivational incentives reduce the inhibition deficit in ADHD?
Shanahan, Michelle A; Pennington, Bruce F; Willcutt, Erik W
2008-01-01
The primary goal of this study was to test three competing theories of ADHD: the inhibition theory, the motivational theory, and a dual deficit theory. Previous studies have produced conflicting findings about the effects of incentives on executive processes in ADHD. In the present study of 25 children with ADHD and 30 typically developing controls, motivation was manipulated within the Stop Task. Stop signal reaction time was examined, as well as reaction time, its variability, and the number of errors in the primary choice reaction time task. Overall, the pattern of results supported the inhibition theory over the motivational or dual deficit hypotheses, as main effects of group were found for most key variables (ADHD group was worse), whereas the group by reward interaction predicted by the motivational and dual deficit accounts was not found. Hence, as predicted by the inhibition theory, children with ADHD performed worse than controls irrespective of incentives.
Wang, Ching-Yun; Song, Xiao
2016-11-01
Biomedical researchers are often interested in estimating the effect of an environmental exposure in relation to a chronic disease endpoint. However, the exposure variable of interest may be measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies an additive measurement error model, but it may not have repeated measurements. The subset in which the surrogate variables are available is called a calibration sample. In addition to the surrogate variables that are available among the subjects in the calibration sample, we consider the situation when there is an instrumental variable available for all study subjects. An instrumental variable is correlated with the unobserved true exposure variable, and hence can be useful in the estimation of the regression coefficients. In this paper, we propose a nonparametric method for Cox regression using the observed data from the whole cohort. The nonparametric estimator is the best linear combination of a nonparametric correction estimator from the calibration sample and the difference of the naive estimators from the calibration sample and the whole cohort. The asymptotic distribution is derived, and the finite sample performance of the proposed estimator is examined via intensive simulation studies. The methods are applied to the Nutritional Biomarkers Study of the Women's Health Initiative. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Actuator stiction compensation via variable amplitude pulses.
Arifin, B M S; Munaro, C J; Angarita, O F B; Cypriano, M V G; Shah, S L
2018-02-01
A novel model free stiction compensation scheme is developed which eliminates the oscillations and also reduces valve movement, allowing good setpoint tracking and disturbance rejection. Pulses with varying amplitude are added to the controller output to overcome stiction and when the error becomes smaller than a specified limit, the compensation ceases and remains in a standby mode. The compensation re-starts as soon as the error exceeds the user specified threshold. The ability to cope with uncertainty in friction is a feature achieved by the use of pulses of varying amplitude. The algorithm has been evaluated via simulation and by application on an industrial DCS system interfaced to a pilot scale process with features identical to those found in industry including a valve positioner. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
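A schematic sketch of the described logic, not the authors' exact algorithm: while the control error exceeds a user threshold, short pulses of decaying amplitude are added to the controller output, and once the error falls below the threshold the compensator returns to standby. Pulse width, decay factor, and threshold values are assumptions.

```python
# Variable-amplitude pulse compensator added on top of an existing controller output.
class StictionCompensator:
    def __init__(self, threshold=0.02, pulse_amp=10.0, pulse_width=3, decay=0.8):
        self.threshold = threshold        # error magnitude that (re)activates compensation
        self.pulse_amp = pulse_amp        # initial pulse amplitude (% of valve span)
        self.pulse_width = pulse_width    # samples per pulse
        self.decay = decay                # amplitude reduction applied to each new pulse
        self._amp = pulse_amp
        self._count = 0
        self._sign = 1.0

    def update(self, error, controller_output):
        if abs(error) <= self.threshold:
            self._amp = self.pulse_amp    # standby: reset amplitude for the next episode
            self._count = 0
            return controller_output
        if self._count == 0:              # start a new pulse in the direction of the error
            self._sign = 1.0 if error > 0 else -1.0
            self._amp *= self.decay       # each successive pulse is smaller
        self._count = (self._count + 1) % self.pulse_width
        return controller_output + self._sign * self._amp

# usage each sample time: comp = StictionCompensator(); mv = comp.update(sp - pv, pid_output)
```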
Novel user interface design for medication reconciliation: an evaluation of Twinlist.
Plaisant, Catherine; Wu, Johnny; Hettinger, A Zach; Powsner, Seth; Shneiderman, Ben
2015-03-01
The primary objective was to evaluate time, number of interface actions, and accuracy on medication reconciliation tasks using a novel user interface (Twinlist, which lays out the medications in five columns based on similarity and uses animation to introduce the grouping - www.cs.umd.edu/hcil/sharp/twinlist) compared to a Control interface (where medications are presented side by side in two columns). A secondary objective was to assess participant agreement with statements regarding clarity and utility and to elicit comparisons. A 1 × 2 within-subjects experimental design was used with interface (Twinlist or Control) as an independent variable; time, number of clicks, scrolls, and errors were used as dependent variables. Participants were practicing medical providers with experience performing medication reconciliation but no experience with Twinlist. They reconciled two cases in each interface (in a counterbalanced order), then provided feedback on the design of the interface. Twenty medical providers participated in the study for a total of 80 trials. The trials using Twinlist were statistically significantly faster (18%), with fewer clicks (40%) and scrolls (60%). Serious errors were noted 12 and 31 times in Twinlist and Control trials, respectively. Trials using Twinlist were faster and more accurate. Subjectively, participants rated Twinlist more favorably than Control. They valued the novel layout of the drugs, but indicated that the included animation would be valuable for novices, but not necessarily for advanced users. Additional feedback from participants provides guidance for further development and clinical implementations. Cognitive support of medication reconciliation through interface design can significantly improve performance and safety. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Liquid Medication Dosing Errors in Children: Role of Provider Counseling Strategies
Yin, H. Shonna; Dreyer, Benard P.; Moreira, Hannah A.; van Schaick, Linda; Rodriguez, Luis; Boettger, Susanne; Mendelsohn, Alan L.
2014-01-01
Objective To examine the degree to which recommended provider counseling strategies, including advanced communication techniques and dosing instrument provision, are associated with reductions in parent liquid medication dosing errors. Methods Cross-sectional analysis of baseline data on provider communication and dosing instrument provision from a study of a health literacy intervention to reduce medication errors. Parents whose children (<9 years) were seen in two urban public hospital pediatric emergency departments (EDs) and were prescribed daily dose liquid medications self-reported whether they received counseling about their child’s medication, including advanced strategies (teachback, drawings/pictures, demonstration, showback) and receipt of a dosing instrument. Primary dependent variable: observed dosing error (>20% deviation from prescribed). Multivariate logistic regression analyses performed, controlling for: parent age, language, country, ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease status; site. Results Of 287 parents, 41.1% made dosing errors. Advanced counseling and instrument provision in the ED were reported by 33.1% and 19.2%, respectively; 15.0% reported both. Advanced counseling and instrument provision in the ED were associated with decreased errors (30.5 vs. 46.4%, p=0.01; 21.8 vs. 45.7%, p=0.001). In adjusted analyses, ED advanced counseling in combination with instrument provision was associated with a decreased odds of error compared to receiving neither (AOR 0.3; 95% CI 0.1–0.7); advanced counseling alone and instrument alone were not significantly associated with odds of error. Conclusion Provider use of advanced counseling strategies and dosing instrument provision may be especially effective in reducing errors when used together. PMID:24767779
A robust approach to chance constrained optimal power flow with renewable generation
Lubin, Miles; Dvorkin, Yury; Backhaus, Scott N.
2016-09-01
Optimal Power Flow (OPF) dispatches controllable generation at minimum cost subject to operational constraints on generation and transmission assets. The uncertainty and variability of intermittent renewable generation are challenging current deterministic OPF approaches. Recent formulations of OPF use chance constraints to limit the risk from renewable generation uncertainty; however, these new approaches typically assume that the probability distributions which characterize the uncertainty and variability are known exactly. We formulate a robust chance constrained (RCC) OPF that accounts for uncertainty in the parameters of these probability distributions by allowing them to be within an uncertainty set. The RCC OPF is solved using a cutting-plane algorithm that scales to large power systems. We demonstrate the RCC OPF on a modified model of the Bonneville Power Administration network, which includes 2209 buses and 176 controllable generators. In conclusion, deterministic, chance constrained (CC), and RCC OPF formulations are compared using several metrics including cost of generation, area control error, ramping of controllable generators, and occurrence of transmission line overloads, as well as the respective computational performance.
Sensitivity of Space Station alpha joint robust controller to structural modal parameter variations
NASA Technical Reports Server (NTRS)
Kumar, Renjith R.; Cooper, Paul A.; Lim, Tae W.
1991-01-01
The photovoltaic array sun tracking control system of Space Station Freedom is described. A synthesis procedure for determining optimized values of the design variables of the control system is developed using a constrained optimization technique. The synthesis is performed to provide a given level of stability margin, to achieve the most responsive tracking performance, and to meet other design requirements. Performance of the baseline design, which is synthesized using predicted structural characteristics, is discussed and the sensitivity of the stability margin is examined for variations of the frequencies, mode shapes and damping ratios of dominant structural modes. The design provides enough robustness to tolerate a sizeable error in the predicted modal parameters. A study was made of the sensitivity of performance indicators as the modal parameters of the dominant modes vary. The design variables are resynthesized for varying modal parameters in order to achieve the most responsive tracking performance while satisfying the design requirements. This procedure of reoptimizing the design parameters would be useful in improving the control system performance if accurate model data are provided.
Spacecraft attitude and velocity control system
NASA Technical Reports Server (NTRS)
Paluszek, Michael A. (Inventor); Piper, Jr., George E. (Inventor)
1992-01-01
A spacecraft attitude and/or velocity control system includes a controller which responds to at least attitude errors to produce command signals representing a force vector F and a torque vector T, each having three orthogonal components, which represent the forces and torques which are to be generated by the thrusters. The thrusters may include magnetic torquers or reaction wheels. Six difference equations are generated, three having the form [equation omitted], where a_j is the maximum torque which the j-th thruster can produce, b_j is the maximum force which the j-th thruster can produce, and α_j is a variable representing the throttling factor of the j-th thruster, which may range from zero to unity. The six equations are summed to produce a single scalar equation relating the variables α_j to a performance index Z [equation omitted]. Those values of α which maximize the value of Z are determined by a method for solving linear equations, such as a linear programming method; the Simplex method may be used. The values of α_j are applied to control the corresponding thrusters.
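To make the allocation idea above concrete, here is a minimal, hypothetical Python sketch, not the patent's exact formulation or equations: throttling factors α_j constrained to [0, 1] are chosen by a linear-programming solver so as to maximize a simple linear performance index built from the commanded torque and force. The thruster contribution matrices, commanded vectors, and the choice of objective are all illustrative assumptions.

import numpy as np
from scipy.optimize import linprog

# Hypothetical per-thruster torque and force contributions along three axes
# (columns = 6 assumed thrusters); a real system would use its geometry here.
rng = np.random.default_rng(0)
A_torque = rng.normal(size=(3, 6))
A_force = rng.normal(size=(3, 6))
T_cmd = np.array([0.2, -0.1, 0.05])   # commanded torque components
F_cmd = np.array([0.0, 0.3, -0.2])    # commanded force components

# Linear performance index Z = sum_j alpha_j * (projection of thruster j onto
# the commanded torque and force); linprog minimizes, so negate to maximize Z.
c = -(A_torque.T @ T_cmd + A_force.T @ F_cmd)

# Throttling factors are bounded between zero and unity, as in the abstract.
res = linprog(c, bounds=[(0.0, 1.0)] * 6, method="highs")
print("throttling factors alpha_j:", np.round(res.x, 3))

With no coupling constraints the optimum is trivially bang-bang (full or zero throttle per thruster); a practical allocator would add constraints tying the achieved force and torque to the commanded vectors.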
Functional, structural, and emotional correlates of impaired insight in cocaine addiction
Moeller, Scott J.; Konova, Anna B.; Parvaz, Muhammad A.; Tomasi, Dardo; Lane, Richard D.; Fort, Carolyn; Goldstein, Rita Z.
2014-01-01
Context Individuals with cocaine use disorder (CUD) have difficulty monitoring ongoing behavior, possibly stemming from dysfunction of brain regions subserving insight and self-awareness [e.g., anterior cingulate cortex (ACC)]. Objective To test the hypothesis that CUD with impaired insight (iCUD) would show abnormal (A) ACC activity during error processing, assessed with functional magnetic resonance imaging during a classic inhibitory control task; (B) ACC gray matter integrity assessed with voxel-based morphometry; and (C) awareness of one’s own emotional experiences, assessed with the Levels of Emotional Awareness Scale (LEAS). Using a previously validated probabilistic choice task, we grouped 33 CUD according to insight [iCUD: N=15; unimpaired insight CUD: N=18]; we also studied 20 healthy controls, all with unimpaired insight. Design Multimodal imaging design. Setting Clinical Research Center at Brookhaven National Laboratory. Participants Thirty-three CUD and 20 healthy controls. Main Outcome Measure Functional magnetic resonance imaging, voxel-based morphometry, LEAS, and drug use variables. Results Compared with the other two study groups, iCUD showed lower (A) error-induced rostral ACC (rACC) activity as associated with more frequent cocaine use; (B) gray matter within the rACC; and (C) LEAS scores. Conclusions These results point to rACC functional and structural abnormalities, and diminished emotional awareness, in a subpopulation of CUD characterized by impaired insight. Because the rACC has been implicated in appraising the affective/motivational significance of errors and other types of self-referential processing, functional and structural abnormalities in this region could result in lessened concern (frequently ascribed to minimization and denial) about behavioral outcomes that could potentially culminate in increased drug use. Treatments targeting this CUD subgroup could focus on enhancing the salience of errors (e.g., lapses). PMID:24258223
Recent Theoretical Advances in Analysis of AIRS/AMSU Sounding Data
NASA Technical Reports Server (NTRS)
Susskind, Joel
2007-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. This paper describes the AIRS Science Team Version 5.0 retrieval algorithm. Starting in early 2007, the Goddard DAAC will use this algorithm to analyze near real time AIRS/AMSU observations. These products are then made available to the scientific community for research purposes. The products include twice daily measurements of the Earth's three dimensional global temperature, water vapor, and ozone distribution as well as cloud cover. In addition, accurate twice daily measurements of the earth's land and ocean temperatures are derived and reported. Scientists use this important set of observations for two major applications. They provide important information for climate studies of global and regional variability and trends of different aspects of the earth's atmosphere. They also provide information for researchers to improve the skill of weather forecasting. A very important new product of the AIRS Version 5 algorithm is accurate case-by-case error estimates of the retrieved products. This heightens their utility for use in both weather and climate applications. These error estimates are also used directly for quality control of the retrieved products. Version 5 also allows for accurate quality controlled AIRS only retrievals, called "Version 5 AO retrievals" which can be used as a backup methodology if AMSU fails. Examples of the accuracy of error estimates and quality controlled retrieval products of the AIRS/AMSU Version 5 and Version 5 AO algorithms are given, and shown to be significantly better than the previously used Version 4 algorithm. Assimilation of Version 5 retrievals is also shown to significantly improve forecast skill, especially when the case-by-case error estimates are utilized in the data assimilation process.
Robust model predictive control for optimal continuous drug administration.
Sopasakis, Pantelis; Patrinos, Panagiotis; Sarimveis, Haralambos
2014-10-01
In this paper the model predictive control (MPC) technology is used for tackling the optimal drug administration problem. The important advantage of MPC compared to other control technologies is that it explicitly takes into account the constraints of the system. In particular, for drug treatments of living organisms, MPC can guarantee satisfaction of the minimum toxic concentration (MTC) constraints. A whole-body physiologically-based pharmacokinetic (PBPK) model serves as the dynamic prediction model of the system after it is formulated as a discrete-time state-space model. Only plasma measurements are assumed to be measured on-line. The rest of the states (drug concentrations in other organs and tissues) are estimated in real time by designing an artificial observer. The complete system (observer and MPC controller) is able to drive the drug concentration to the desired levels at the organs of interest, while satisfying the imposed constraints, even in the presence of modelling errors, disturbances and noise. A case study on a PBPK model with 7 compartments, constraints on 5 tissues and a variable drug concentration set-point illustrates the efficiency of the methodology in drug dosing control applications. The proposed methodology is also tested in an uncertain setting and proves successful in presence of modelling errors and inaccurate measurements. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
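As a hedged illustration of the control scheme described above, and not the authors' PBPK formulation, the following sketch sets up a small constrained linear MPC problem with the cvxpy package: a two-state discrete-time system is driven toward a concentration set-point while an upper bound on one state stands in for a minimum-toxic-concentration constraint and the input (dosing rate) is bounded. All matrices, bounds and the horizon are assumptions chosen only for illustration.

import numpy as np
import cvxpy as cp

A = np.array([[0.9, 0.1], [0.0, 0.8]])  # assumed discrete-time dynamics
B = np.array([[0.0], [0.5]])            # assumed input (infusion) matrix
x0 = np.zeros(2)                        # initial concentrations
target = np.array([1.0, 0.6])           # desired concentration levels
N = 10                                  # prediction horizon (steps)

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost = 0
constr = [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:, k + 1] - target) + 0.01 * cp.sum_squares(u[:, k])
    constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
               u[:, k] >= 0.0, u[:, k] <= 2.0,   # admissible dosing rates
               x[0, k + 1] <= 1.5]               # stand-in toxicity bound
cp.Problem(cp.Minimize(cost), constr).solve()
print("first dose to apply (receding horizon):", float(u.value[0, 0]))

In a receding-horizon loop only the first input is applied, the state is re-estimated (the paper uses an observer driven by plasma measurements), and the problem is re-solved at the next step.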
NASA Astrophysics Data System (ADS)
Enescu (Balaş), M. L.; Alexandru, C.
2016-08-01
The paper deals with the optimal design of the control system for a 6-DOF robot used in thin-layer deposition. The optimization is based on a parametric technique: the design objective is modelled as a numerical function, and the optimal values of the design variables are then established so as to minimize the objective function. The robotic system is a mechatronic product which integrates the mechanical device and the controlled operating device. The mechanical device of the robot was designed in the CAD (Computer Aided Design) software CATIA, the 3D model being then transferred to the MBS (Multi-Body Systems) environment ADAMS/View. The control system was developed in the concurrent engineering concept, through integration with the MBS mechanical model, by using the DFC (Design for Control) software solution EASY5. The necessary angular motions in the six joints of the robot, required to obtain the imposed trajectory of the end-effector, were established by performing the inverse kinematic analysis. The positioning error in each joint of the robot is used as the design objective, the optimization goal being to minimize its root mean square during simulation, which is a measure of the magnitude of this time-varying error.
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume; Koster, Randal D. (Editor)
2014-01-01
An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.; Rienecker, Michele; Kovach, Robin M.; Vernieres, Guillaume
2014-01-01
An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
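The FAST idea lends itself to a compact sketch. Under the assumed simplification that the state is a small vector and the window length is fixed, an ensemble of anomalies is drawn from a moving window along a single trajectory and its sample covariance serves as the flow-dependent background error covariance:

import numpy as np

def fast_covariance(trajectory, window):
    """trajectory: (n_times, n_state) states from a single model integration."""
    members = trajectory[-window:]                  # states in the moving window
    anomalies = members - members.mean(axis=0)      # remove the window mean
    return anomalies.T @ anomalies / (window - 1)   # sample covariance estimate

# Illustrative use with a synthetic 3-variable trajectory of 200 steps.
rng = np.random.default_rng(1)
traj = np.cumsum(rng.normal(size=(200, 3)), axis=0)
B = fast_covariance(traj, window=30)
print(B.shape)   # (3, 3) background error covariance for the current time

A real ocean state is far too large for an explicit covariance matrix, so operational systems work with localized or factored forms; the sketch is only conceptual.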
Tsukeoka, Tadashi; Tsuneizumi, Yoshikazu; Yoshino, Kensuke; Suzuki, Mashiko
2018-05-01
The aim of this study was to determine factors that contribute to bone cutting errors of conventional instrumentation for tibial resection in total knee arthroplasty (TKA) as assessed by an image-free navigation system. The hypothesis is that preoperative varus alignment is a significant contributory factor to tibial bone cutting errors. This was a prospective study of a consecutive series of 72 TKAs. The amount of the tibial first-cut errors with reference to the planned cutting plane in both coronal and sagittal planes was measured by an image-free computer navigation system. Multiple regression models were developed with the amount of tibial cutting error in the coronal and sagittal planes as dependent variables and sex, age, disease, height, body mass index, preoperative alignment, patellar height (Insall-Salvati ratio) and preoperative flexion angle as independent variables. Multiple regression analysis showed that sex (male gender) (R = 0.25 p = 0.047) and preoperative varus alignment (R = 0.42, p = 0.001) were positively associated with varus tibial cutting errors in the coronal plane. In the sagittal plane, none of the independent variables was significant. When performing TKA in varus deformity, careful confirmation of the bone cutting surface should be performed to avoid varus alignment. The results of this study suggest technical considerations that can help a surgeon achieve more accurate component placement. IV.
Durand, Casey P
2013-01-01
Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. Routinely elevating Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
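The simulation design described above can be sketched in a few lines. The sketch below uses an illustrative effect size, sample size and simulation count (not the study's settings) and estimates power for a continuous-by-continuous interaction at a conventional and at an elevated Type 1 error rate:

import numpy as np
import statsmodels.api as sm

def interaction_power(n=100, beta_int=0.2, alphas=(0.05, 0.10), n_sim=2000, seed=0):
    rng = np.random.default_rng(seed)
    hits = {a: 0 for a in alphas}
    for _ in range(n_sim):
        x1, x2 = rng.normal(size=n), rng.normal(size=n)
        y = 0.3 * x1 + 0.3 * x2 + beta_int * x1 * x2 + rng.normal(size=n)
        X = sm.add_constant(np.column_stack([x1, x2, x1 * x2]))
        p_int = sm.OLS(y, X).fit().pvalues[3]       # p-value of the interaction term
        for a in alphas:
            hits[a] += p_int < a
    return {a: hits[a] / n_sim for a in alphas}

# Power at alpha = 0.05 versus the elevated alpha = 0.10; setting beta_int = 0
# instead shows how the elevated rate admits spurious interactions.
print(interaction_power())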
Das, Saptarshi; Pan, Indranil; Das, Shantanu
2013-07-01
Fuzzy logic based PID controllers have been studied in this paper, considering several combinations of hybrid controllers by grouping the proportional, integral and derivative actions with fuzzy inferencing in different forms. Fractional order (FO) rate of error signal and FO integral of control signal have been used in the design of a family of decomposed hybrid FO fuzzy PID controllers. The input and output scaling factors (SF) along with the integro-differential operators are tuned with real coded genetic algorithm (GA) to produce optimum closed loop performance by simultaneous consideration of the control loop error index and the control signal. Three different classes of fractional order oscillatory processes with various levels of relative dominance between time constant and time delay have been used to test the comparative merits of the proposed family of hybrid fractional order fuzzy PID controllers. Performance comparison of the different FO fuzzy PID controller structures has been done in terms of optimal set-point tracking, load disturbance rejection and minimal variation of manipulated variable or smaller actuator requirement etc. In addition, multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) has been used to study the Pareto optimal trade-offs between the set point tracking and control signal, and the set point tracking and load disturbance performance for each of the controller structure to handle the three different types of processes. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Tzetzis, George; Votsis, Evandros; Kourtessis, Thomas
2008-01-01
This experiment investigated the effects of three corrective feedback methods, using different combinations of correction cues, error cues and positive feedback, for learning two badminton skills of different difficulty (forehand clear - low difficulty, backhand clear - high difficulty). Outcome and self-confidence scores were used as dependent variables. The 48 participants were randomly assigned into four groups. Group A received correction cues and positive feedback. Group B received cues on errors of execution. Group C received positive feedback, correction cues and error cues. Group D was the control group. A pre-test, post-test and retention test were conducted. A three-way analysis of variance (ANOVA; 4 groups X 2 task difficulties X 3 measures) with repeated measures on the last factor revealed significant interactions for each dependent variable. All the corrective feedback groups increased their outcome scores over time for the easy skill, but only groups A and C did so for the difficult skill. Groups A and B had significantly better outcome scores than group C and the control group for the easy skill on the retention test. However, for the difficult skill, group C was better than groups A, B and D. The self-confidence scores of groups A and C improved over time for the easy skill, but those of groups B and D did not. Again, for the difficult skill, only group C improved over time. Finally, a regression analysis showed that the improvement in performance predicted a proportion of the improvement in self-confidence for both the easy and the difficult skill. It was concluded that when young athletes are taught skills of different difficulty, different types of instruction might be more appropriate in order to improve outcome and self-confidence; a more integrated approach to teaching will help coaches and physical education teachers be more efficient and effective. Key points: The type of the skill is a critical factor in determining the effectiveness of the feedback types. Different instructional methods of corrective feedback can have beneficial effects on the outcome and self-confidence of young athletes. Instructions focusing on the correct cues or errors increase performance of easy skills. Positive feedback or correction cues increase self-confidence for easy skills, but only the combination of error and correction cues increases self-confidence and outcome scores for difficult skills. PMID:24149905
NASA Astrophysics Data System (ADS)
Lamouroux, Julien; Testut, Charles-Emmanuel; Lellouche, Jean-Michel; Perruche, Coralie; Paul, Julien
2017-04-01
The operational production of data-assimilated biogeochemical state of the ocean is one of the challenging core projects of the Copernicus Marine Environment Monitoring Service. In that framework - and with the April 2018 CMEMS V4 release as a target - Mercator Ocean is in charge of improving the realism of its global ¼° BIOMER coupled physical-biogeochemical (NEMO/PISCES) simulations, analyses and re-analyses, and to develop an effective capacity to routinely estimate the biogeochemical state of the ocean, through the implementation of biogeochemical data assimilation. Primary objectives are to enhance the time representation of the seasonal cycle in the real time and reanalysis systems, and to provide a better control of the production in the equatorial regions. The assimilation of BGC data will rely on a simplified version of the SEEK filter, where the error statistics do not evolve with the model dynamics. The associated forecast error covariances are based on the statistics of a collection of 3D ocean state anomalies. The anomalies are computed from a multi-year numerical experiment (free run without assimilation) with respect to a running mean in order to estimate the 7-day scale error on the ocean state at a given period of the year. These forecast error covariances rely thus on a fixed-basis seasonally variable ensemble of anomalies. This methodology, which is currently implemented in the "blue" component of the CMEMS operational forecast system, is now under adaptation to be applied to the biogeochemical part of the operational system. Regarding observations - and as a first step - the system shall rely on the CMEMS GlobColour Global Ocean surface chlorophyll concentration products, delivered in NRT. The objective of this poster is to provide a detailed overview of the implementation of the aforementioned data assimilation methodology in the CMEMS BIOMER forecasting system. Focus shall be put on (1) the assessment of the capabilities of this data assimilation methodology to provide satisfying statistics of the model variability errors (through space-time analysis of dedicated representers of satellite surface Chla observations), (2) the dedicated features of the data assimilation configuration that have been implemented so far (e.g. log-transformation of the analysis state, multivariate Chlorophyll-Nutrient control vector, etc.) and (3) the assessment of the performances of this future operational data assimilation configuration.
Error Estimation of Pathfinder Version 5.3 SST Level 3C Using Three-way Error Analysis
NASA Astrophysics Data System (ADS)
Saha, K.; Dash, P.; Zhao, X.; Zhang, H. M.
2017-12-01
One of the essential climate variables for monitoring as well as detecting and attributing climate change is Sea Surface Temperature (SST). A long-term record of global SSTs is available, with observations ranging from ships in the early days to more modern in-situ and space-based sensors (satellite/aircraft). There are inaccuracies associated with satellite-derived SSTs which can be attributed to errors in spacecraft navigation, sensor calibration, sensor noise, retrieval algorithms, and leakage due to residual clouds. Thus it is important to estimate accurate errors in satellite-derived SST products to obtain the desired results in their applications. Generally, for validation purposes, satellite-derived SST products are compared against in-situ SSTs, which have their own inaccuracies as well as spatio-temporal inhomogeneity relative to the satellite measurements. The standard deviation of their difference fields therefore has contributions from both the satellite and the in-situ measurements. A real validation of any geophysical variable would require knowledge of the "true" value of that variable. Therefore a one-to-one comparison of satellite-based SST with in-situ data does not truly provide the real error in the satellite SST; there will be ambiguity due to errors in the in-situ measurements and their collocation differences. A triple collocation (TC) or three-way error analysis, using three mutually independent error-prone measurements, can be used to estimate the root-mean-square error (RMSE) associated with each of the measurements with a high level of accuracy, without treating any one system as a perfectly observed "truth". In this study we estimate the absolute random errors associated with the Pathfinder Version 5.3 Level-3C SST Climate Data Record. Along with the in-situ SST data, the third dataset used for this analysis is the AATSR Reprocessing for Climate (ARC) dataset for the corresponding period. All three SST observations are collocated, and statistics of the difference between each pair are estimated. Instead of a traditional TC analysis, we have implemented the Extended Triple Collocation (ETC) approach to estimate the correlation coefficient of each measurement system with respect to the unknown target variable along with its RMSE.
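For readers unfamiliar with the technique, the classical covariance form of the triple-collocation estimator can be sketched as below; the three synthetic series stand in for the satellite, in-situ and ARC SSTs, the noise levels are invented for illustration, and the ETC extension (which additionally returns correlation coefficients) is omitted.

import numpy as np

def triple_collocation_rmse(x, y, z):
    """Random-error standard deviations of three collocated, independent series."""
    var_x = np.cov(x - y, x - z)[0, 1]   # estimated error variance of x
    var_y = np.cov(y - x, y - z)[0, 1]   # estimated error variance of y
    var_z = np.cov(z - x, z - y)[0, 1]   # estimated error variance of z
    return np.sqrt(np.maximum([var_x, var_y, var_z], 0.0))

rng = np.random.default_rng(2)
truth = rng.normal(20.0, 2.0, size=5000)                # unknown "true" SST
sat = truth + rng.normal(0, 0.4, size=truth.size)       # satellite-like series
insitu = truth + rng.normal(0, 0.2, size=truth.size)    # in-situ-like series
arc = truth + rng.normal(0, 0.3, size=truth.size)       # third independent series
print(triple_collocation_rmse(sat, insitu, arc))        # approx. [0.4, 0.2, 0.3]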
Poon, Cynthia; Coombes, Stephen A.; Corcos, Daniel M.; Christou, Evangelos A.
2013-01-01
When subjects perform a learned motor task with increased visual gain, error and variability are reduced. Neuroimaging studies have identified a corresponding increase in activity in parietal cortex, premotor cortex, primary motor cortex, and extrastriate visual cortex. Much less is understood about the neural processes that underlie the immediate transition from low to high visual gain within a trial. This study used 128-channel electroencephalography to measure cortical activity during a visually guided precision grip task, in which the gain of the visual display was changed during the task. Force variability during the transition from low to high visual gain was characterized by an inverted U-shape, whereas force error decreased from low to high gain. Source analysis identified cortical activity in the same structures previously identified using functional magnetic resonance imaging. Source analysis also identified a time-varying shift in the strongest source activity. Superior regions of the motor and parietal cortex had stronger source activity from 300 to 600 ms after the transition, whereas inferior regions of the extrastriate visual cortex had stronger source activity from 500 to 700 ms after the transition. Force variability and electrical activity were linearly related, with a positive relation in the parietal cortex and a negative relation in the frontal cortex. Force error was nonlinearly related to electrical activity in the parietal cortex and frontal cortex by a quadratic function. This is the first evidence that force variability and force error are systematically related to a time-varying shift in cortical activity in frontal and parietal cortex in response to enhanced visual gain. PMID:23365186
Cheng, Sen; Sabes, Philip N
2007-04-01
The sensorimotor calibration of visually guided reaching changes on a trial-to-trial basis in response to random shifts in the visual feedback of the hand. We show that a simple linear dynamical system is sufficient to model the dynamics of this adaptive process. In this model, an internal variable represents the current state of sensorimotor calibration. Changes in this state are driven by error feedback signals, which consist of the visually perceived reach error, the artificial shift in visual feedback, or both. Subjects correct for ≥20% of the error observed on each movement, despite being unaware of the visual shift. The state of adaptation is also driven by internal dynamics, consisting of a decay back to a baseline state and a "state noise" process. State noise includes any source of variability that directly affects the state of adaptation, such as variability in sensory feedback processing, the computations that drive learning, or the maintenance of the state. This noise is accumulated in the state across trials, creating temporal correlations in the sequence of reach errors. These correlations allow us to distinguish state noise from sensorimotor performance noise, which arises independently on each trial from random fluctuations in the sensorimotor pathway. We show that these two noise sources contribute comparably to the overall magnitude of movement variability. Finally, the dynamics of adaptation measured with random feedback shifts generalizes to the case of constant feedback shifts, allowing for a direct comparison of our results with more traditional blocked-exposure experiments.
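The model described above is compact enough to simulate directly. The sketch below, whose parameter values are assumptions rather than the paper's fitted estimates, propagates a scalar calibration state with error-driven learning, decay toward baseline and accumulated state noise, and prints the trial-to-trial correlation of reach errors that such state dynamics induce:

import numpy as np

rng = np.random.default_rng(3)
n_trials = 500
shift = rng.normal(0.0, 1.0, size=n_trials)   # random visual feedback shifts

a, b = 0.98, 0.25                  # assumed retention (decay) and learning rate
sigma_state, sigma_perf = 0.1, 0.5 # state noise vs. independent performance noise
state = 0.0
errors = np.empty(n_trials)
for t in range(n_trials):
    # Observed reach error: uncompensated shift plus independent performance noise.
    errors[t] = shift[t] - state + rng.normal(0.0, sigma_perf)
    # State update: decay toward baseline, correction by a fraction b of the
    # observed error, and accumulated state noise.
    state = a * state + b * errors[t] + rng.normal(0.0, sigma_state)

print("lag-1 autocorrelation of reach errors:",
      np.corrcoef(errors[:-1], errors[1:])[0, 1])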
Machine learning enhanced optical distance sensor
NASA Astrophysics Data System (ADS)
Amin, M. Junaid; Riza, N. A.
2018-01-01
Presented for the first time is a machine learning enhanced optical distance sensor. The distance sensor is based on our previously demonstrated distance measurement technique that uses an Electronically Controlled Variable Focus Lens (ECVFL) with a laser source to illuminate a target plane with a controlled optical beam spot. This spot with varying spot sizes is viewed by an off-axis camera and the spot size data is processed to compute the distance. In particular, proposed and demonstrated in this paper is the use of a regularized polynomial regression based supervised machine learning algorithm to enhance the accuracy of the operational sensor. The algorithm uses the acquired features and corresponding labels that are the actual target distance values to train a machine learning model. The optimized training model is trained over a 1000 mm (or 1 m) experimental target distance range. Using the machine learning algorithm produces a training set and testing set distance measurement errors of <0.8 mm and <2.2 mm, respectively. The test measurement error is at least a factor of 4 improvement over our prior sensor demonstration without the use of machine learning. Applications for the proposed sensor include industrial scenario distance sensing where target material specific training models can be generated to realize low <1% measurement error distance measurements.
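As a hedged sketch of the calibration step described above, with synthetic data, an invented spot-size-to-distance relationship and an assumed polynomial degree rather than the paper's trained model, regularized polynomial regression can be set up with scikit-learn as follows:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
distance_mm = rng.uniform(0, 1000, size=400)                 # labels over a 1 m range
spot_size = 0.5 + 0.003 * distance_mm + 1e-6 * distance_mm**2 \
            + rng.normal(0, 0.05, size=distance_mm.size)     # synthetic spot-size feature

X_tr, X_te, y_tr, y_te = train_test_split(
    spot_size.reshape(-1, 1), distance_mm, test_size=0.25, random_state=0)

# Regularized (ridge) polynomial regression from feature to distance label.
model = make_pipeline(PolynomialFeatures(degree=3), Ridge(alpha=1e-3))
model.fit(X_tr, y_tr)
test_err = np.abs(model.predict(X_te) - y_te)
print("max test distance error (mm):", round(float(test_err.max()), 2))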
Studies in automatic speech recognition and its application in aerospace
NASA Astrophysics Data System (ADS)
Taylor, Michael Robinson
Human communication is characterized in terms of the spectral and temporal dimensions of speech waveforms. Electronic speech recognition strategies based on Dynamic Time Warping and Markov Model algorithms are described and typical digit recognition error rates are tabulated. The application of Direct Voice Input (DVI) as an interface between man and machine is explored within the context of civil and military aerospace programmes. Sources of physical and emotional stress affecting speech production within military high performance aircraft are identified. Experimental results are reported which quantify fundamental frequency and coarse temporal dimensions of male speech as a function of the vibration, linear acceleration and noise levels typical of aerospace environments; preliminary indications of acoustic phonetic variability reported by other researchers are summarized. Connected whole-word pattern recognition error rates are presented for digits spoken under controlled Gz sinusoidal whole-body vibration. Correlations are made between significant increases in recognition error rate and resonance of the abdomen-thorax and head subsystems of the body. The phenomenon of vibrato style speech produced under low frequency whole-body Gz vibration is also examined. Interactive DVI system architectures and avionic data bus integration concepts are outlined together with design procedures for the efficient development of pilot-vehicle command and control protocols.
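For illustration of the template-matching strategy named above, a minimal Dynamic Time Warping cost can be computed as below; real recognizers align frame-wise spectral feature vectors rather than the 1-D toy sequences used here, and Markov-model recognizers replace this distance with state-sequence likelihoods.

import numpy as np

def dtw_distance(a, b):
    """Cumulative alignment cost between two 1-D feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = np.sin(np.linspace(0, 3, 40))     # stored reference for one word
utterance = np.sin(np.linspace(0, 3, 55))    # same word spoken more slowly
print(dtw_distance(template, utterance))     # small cost despite time warping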
Spatial regression test for ensuring temperature data quality in southern Spain
NASA Astrophysics Data System (ADS)
Estévez, J.; Gavilán, P.; García-Marín, A. P.
2018-01-01
Quality assurance of meteorological data is crucial for ensuring the reliability of applications and models that use such data as input variables, especially in the field of environmental sciences. Spatial validation of meteorological data is based on the application of quality control procedures using data from neighbouring stations to assess the validity of data from a candidate station (the station of interest). These kinds of tests, which are referred to in the literature as spatial consistency tests, take data from neighbouring stations in order to estimate the corresponding measurement at the candidate station. These estimations can be made by weighting values according to the distance between the stations or to the coefficient of correlation, among other methods. The test applied in this study relies on statistical decision-making and uses a weighting based on the standard error of the estimate. This paper summarizes the results of the application of this test to maximum, minimum and mean temperature data from the Agroclimatic Information Network of Andalusia (southern Spain). This quality control procedure includes a decision based on a factor f, the fraction of potential outliers for each station across the region. Using GIS techniques, the geographic distribution of the errors detected has been also analysed. Finally, the performance of the test was assessed by evaluating its effectiveness in detecting known errors.
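A simplified sketch of such a spatial consistency check is given below; the weighting by the inverse squared standard error of each neighbour regression and the width of the acceptance band follow the general idea described above, but the exact formulation, the factor f and the synthetic data are assumptions, not the paper's implementation.

import numpy as np

def spatial_consistency_flag(candidate, neighbours, day, f=3.0):
    """candidate: 1-D daily series; neighbours: 2-D array (stations x days)."""
    estimates, weights = [], []
    x = np.delete(candidate, day)                  # exclude the day under test
    for nb in neighbours:
        z = np.delete(nb, day)
        slope, intercept = np.polyfit(z, x, 1)     # neighbour -> candidate regression
        resid = x - (slope * z + intercept)
        se = resid.std(ddof=2)                     # standard error of the estimate
        estimates.append(slope * nb[day] + intercept)
        weights.append(1.0 / se**2)
    est = np.average(estimates, weights=weights)   # weighted spatial estimate
    band = f * np.sqrt(1.0 / np.sum(weights))      # acceptance band half-width
    return abs(candidate[day] - est) > band        # True -> potential outlier

rng = np.random.default_rng(5)
base = 20 + 5 * np.sin(np.linspace(0, 6, 365))
neigh = np.stack([base + rng.normal(0, 0.5, 365) for _ in range(4)])
cand = base + rng.normal(0, 0.5, 365)
cand[100] += 8.0                                   # inject a known error
print(spatial_consistency_flag(cand, neigh, day=100))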
NASA Astrophysics Data System (ADS)
He, Bin; Frey, Eric C.
2010-06-01
Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were linear in the shift for both the QSPECT and QPlanar methods. QPlanar was less sensitive to object definition perturbations than QSPECT, especially for dilation and erosion cases. Up to 1 voxel misregistration or misdefinition resulted in up to 8% error in organ activity estimates, with the largest errors for small or low uptake organs. Both types of VOI definition errors produced larger errors in activity estimates for small and low uptake organs (i.e. -7.5% to 5.3% for the left kidney) than for a large and high uptake organ (i.e. -2.9% to 2.1% for the liver). We observed that misregistration generally had larger effects than misdefinition, with errors ranging from -7.2% to 8.4%. The different imaging methods evaluated responded differently to the errors from misregistration and misdefinition. We found that QSPECT was more sensitive to misdefinition errors, but less sensitive to misregistration errors, as compared to the QPlanar method. Thus, sensitivity to VOI definition errors should be an important criterion in evaluating quantitative imaging methods.
The Public Understanding of Error in Educational Assessment
ERIC Educational Resources Information Center
Gardner, John
2013-01-01
Evidence from recent research suggests that in the UK the public perception of errors in national examinations is that they are simply mistakes; events that are preventable. This perception predominates over the more sophisticated technical view that errors arise from many sources and create an inevitable variability in assessment outcomes. The…
A non-linear data mining parameter selection algorithm for continuous variables
Razavi, Marianne; Brady, Sean
2017-01-01
In this article, we propose a new data mining algorithm, by which one can both capture the non-linearity in data and also find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that would capture complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method that we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. This algorithm introduces interpretable parameters by transforming the original inputs and also provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least squares regression framework. This new automatic variable transformation and model selection method could offer an optimal and stable model that minimizes the mean square error and variability, while combining all possible subset selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829
Inferior frontal cortex activity is modulated by reward sensitivity and performance variability.
Fuentes-Claramonte, Paola; Ávila, César; Rodríguez-Pujadas, Aina; Costumero, Víctor; Ventura-Campos, Noelia; Bustamante, Juan Carlos; Rosell-Negre, Patricia; Barrós-Loscertales, Alfonso
2016-02-01
High reward sensitivity has been linked with motivational and cognitive disorders related with prefrontal and striatal brain function during inhibitory control. However, few studies have analyzed the interaction among reward sensitivity, task performance and neural activity. Participants (N=57) underwent fMRI while performing a Go/No-go task with Frequent-go (77.5%), Infrequent-go (11.25%) and No-go (11.25%) stimuli. Task-associated activity was found in inhibition-related brain regions, with different activity patterns for right and left inferior frontal gyri (IFG): right IFG responded more strongly to No-go stimuli, while left IFG responded similarly to all infrequent stimuli. Reward sensitivity correlated with omission errors in Go trials and reaction time (RT) variability, and with increased activity in right and left IFG for No-go and Infrequent-go stimuli compared with Frequent-go. Bilateral IFG activity was associated with RT variability, with reward sensitivity mediating this association. These results suggest that reward sensitivity modulates behavior and brain function during executive control. Copyright © 2016 Elsevier B.V. All rights reserved.
Microscopic saw mark analysis: an empirical approach.
Love, Jennifer C; Derrick, Sharon M; Wiersema, Jason M; Peters, Charles
2015-01-01
Microscopic saw mark analysis is a well published and generally accepted qualitative analytical method. However, little research has focused on identifying and mitigating potential sources of error associated with the method. The presented study proposes the use of classification trees and random forest classifiers as an optimal, statistically sound approach to mitigate the potential for error of variability and outcome error in microscopic saw mark analysis. The statistical model was applied to 58 experimental saw marks created with four types of saws. The saw marks were made in fresh human femurs obtained through anatomical gift and were analyzed using a Keyence digital microscope. The statistical approach weighed the variables based on discriminatory value and produced decision trees with an associated outcome error rate of 8.62-17.82%. © 2014 American Academy of Forensic Sciences.
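A brief sketch of the statistical approach named above, using scikit-learn with placeholder features and labels (the study's actual variables are coded saw-mark characteristics, not the random numbers used here):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_marks = 58                                   # matches the experimental sample size
X = rng.normal(size=(n_marks, 5))              # placeholder saw-mark features
y = rng.integers(0, 4, size=n_marks)           # four saw types

clf = RandomForestClassifier(n_estimators=500, random_state=0)
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print("estimated outcome error rate:", round(1.0 - accuracy, 3))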
Interpreting SBUV Smoothing Errors: an Example Using the Quasi-biennial Oscillation
NASA Technical Reports Server (NTRS)
Kramarova, N. A.; Bhartia, Pawan K.; Frith, S. M.; McPeters, R. D.; Stolarski, R. S.
2013-01-01
The Solar Backscattered Ultraviolet (SBUV) observing system consists of a series of instruments that have been measuring both total ozone and the ozone profile since 1970. SBUV measures the profile in the upper stratosphere with a resolution that is adequate to resolve most of the important features of that region. In the lower stratosphere the limited vertical resolution of the SBUV system means that there are components of the profile variability that SBUV cannot measure. The smoothing error, as defined in the optimal estimation retrieval method, describes the components of the profile variability that the SBUV observing system cannot measure. In this paper we provide a simple visual interpretation of the SBUV smoothing error by comparing SBUV ozone anomalies in the lower tropical stratosphere associated with the quasi-biennial oscillation (QBO) to anomalies obtained from the Aura Microwave Limb Sounder (MLS). We describe a methodology for estimating the SBUV smoothing error for monthly zonal mean (mzm) profiles. We construct covariance matrices that describe the statistics of the inter-annual ozone variability using a 6 yr record of Aura MLS and ozonesonde data. We find that the smoothing error is of the order of 1 percent between 10 and 1 hPa, increasing up to 15-20 percent in the troposphere and up to 5 percent in the mesosphere. The smoothing error for total ozone columns is small, mostly less than 0.5 percent. We demonstrate that by merging the partial ozone columns from several layers in the lower stratosphere/troposphere into one thick layer, we can minimize the smoothing error. We recommend using the following layer combinations to reduce the smoothing error to about 1 percent: surface to 25 hPa (16 hPa) outside (inside) of the narrow equatorial zone 20° S-20° N.
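The quantity being interpreted above has a standard closed form in optimal estimation: with averaging-kernel matrix A and a covariance S_a describing the natural (inter-annual) profile variability, the smoothing-error covariance is (A - I) S_a (A - I)^T. The toy matrices below are assumptions chosen only to show the computation and the effect of merging layers:

import numpy as np

n_layers = 8
rng = np.random.default_rng(7)
S_a = np.diag(rng.uniform(1.0, 4.0, n_layers))   # assumed ozone variability covariance
A = np.eye(n_layers) * 0.7 + 0.3 / n_layers      # crude, broad averaging kernel

I = np.eye(n_layers)
S_smooth = (A - I) @ S_a @ (A - I).T
print("per-layer smoothing error (1-sigma):", np.sqrt(np.diag(S_smooth)))

# Merging several lower layers into one thick layer (summing partial columns)
# reduces the smoothing error, as recommended in the abstract.
w = np.zeros(n_layers)
w[:3] = 1.0
print("merged-layer smoothing error:", float(np.sqrt(w @ S_smooth @ w)))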
Asymmetric Fuzzy Control of a Positive and Negative Pneumatic Pressure Servo System
NASA Astrophysics Data System (ADS)
Yang, Gang; Du, Jing-Min; Fu, Xiao-Yun; Li, Bao-Ren
2017-11-01
The pneumatic pressure control systems have been used in some fields. However, the researches on pneumatic pressure control mainly focus on constant pressure regulation. Poor dynamic characteristics and strong nonlinearity of such systems limit its application in the field of pressure tracking control. In order to meet the demand of generating dynamic pressure signal in the application of the hardware-in-the-loop simulation of aerospace engineering, a positive and negative pneumatic pressure servo system is provided to implement dynamic adjustment of sealed chamber pressure. A mathematical model is established with simulation and experiment being implemented afterwards to discuss the characteristics of the system, which shows serious asymmetry in the process of charging and discharging. Based on the analysis of the system dynamics, a fuzzy proportional integral derivative (PID) controller with asymmetric fuzzy compensator is proposed. Different from conventional adjusting mechanisms employing the error and change in error of the controlled variable as input parameters, the current chamber pressure and charging or discharging state are chosen as inputs of the compensator, which improves adaptability. To verify the effectiveness and performance of the proposed controller, the comparison experiments tracking sinusoidal and square wave commands are conducted. Experimental results show that the proposed controller can obtain better dynamic performance and relatively consistent control performance across the scope of work (2-140 kPa). The research proposes a fuzzy control method to overcome asymmetry and enhance adaptability for the positive and negative pneumatic pressure servo system.
Daoud, Salima; Chakroun-Feki, Nozha; Sellami, Afifa; Ammar-Keskes, Leila; Rebai, Tarek
2016-01-01
Semen analysis is a key part of male infertility investigation. The necessity of quality management implementation in the andrology laboratory has been recognized in order to ensure the reliability of its results. The aim of this study was to evaluate intra- and inter-individual variability in the assessment of semen parameters in our laboratory through a quality control programme. Four participants from the laboratory with different experience levels have participated in this study. Semen samples of varying quality were assessed for sperm motility, concentration and morphology and the results were used to evaluate inter-participant variability. In addition, replicates of each semen sample were analyzed to determine intra-individual variability for semen parameters analysis. The average values of inter-participant coefficients of variation for sperm motility, concentration and morphology were 12.8%, 19.8% and 48.9% respectively. The mean intra-participant coefficients of variation were, respectively, 6.9%, 12.3% and 42.7% for sperm motility, concentration and morphology. Despite some random errors of under- or overestimation, the overall results remained within the limits of acceptability for all participants. Sperm morphology assessment was particularly influenced by the participant's level of experience. The present data emphasize the need for appropriate training of the laboratory staff and for regular participation in internal quality control programmes in order to improve the reliability of laboratory results.
Mumford, Jeanette A.
2017-01-01
Even after thorough preprocessing and a careful time series analysis of functional magnetic resonance imaging (fMRI) data, artifact and other issues can lead to violations of the assumption that the variance is constant across subjects in the group level model. This is especially concerning when modeling a continuous covariate at the group level, as the slope is easily biased by outliers. Various models have been proposed to deal with outliers including models that use the first level variance or that use the group level residual magnitude to differentially weight subjects. The most typically used robust regression, implementing a robust estimator of the regression slope, has been previously studied in the context of fMRI studies and was found to perform well in some scenarios, but a loss of Type I error control can occur for some outlier settings. A second type of robust regression using a heteroscedastic autocorrelation consistent (HAC) estimator, which produces robust slope and variance estimates has been shown to perform well, with better Type I error control, but with large sample sizes (500–1000 subjects). The Type I error control with smaller sample sizes has not been studied in this model and has not been compared to other modeling approaches that handle outliers such as FSL’s Flame 1 and FSL’s outlier de-weighting. Focusing on group level inference with a continuous covariate over a range of sample sizes and degree of heteroscedasticity, which can be driven either by the within- or between-subject variability, both styles of robust regression are compared to ordinary least squares (OLS), FSL’s Flame 1, Flame 1 with outlier de-weighting algorithm and Kendall’s Tau. Additionally, subject omission using the Cook’s Distance measure with OLS and nonparametric inference with the OLS statistic are studied. Pros and cons of these models as well as general strategies for detecting outliers in data and taking precaution to avoid inflated Type I error rates are discussed. PMID:28030782
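To make the comparison above concrete, a minimal statsmodels sketch contrasts three of the discussed options on synthetic group-level data with a continuous covariate and a few outlying subjects: plain OLS, OLS with a heteroscedasticity-robust (sandwich) covariance, and an M-estimation robust slope. This is only an illustration of the modelling choices, not the paper's simulation code; FSL's Flame 1 and outlier de-weighting are not reproduced here.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n_subj = 30
covariate = rng.normal(size=n_subj)              # e.g. age or a symptom score
beta_hat = rng.normal(size=n_subj)               # per-subject contrast estimates (null)
beta_hat[:3] += 5.0                              # a few outlying subjects
X = sm.add_constant(covariate)

fits = {
    "OLS": sm.OLS(beta_hat, X).fit(),
    "OLS + sandwich (HC3)": sm.OLS(beta_hat, X).fit(cov_type="HC3"),
    "Robust slope (Huber)": sm.RLM(beta_hat, X, M=sm.robust.norms.HuberT()).fit(),
}
for name, fit in fits.items():
    print(name, "slope p-value:", float(fit.pvalues[1]))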
NASA Astrophysics Data System (ADS)
Sahu, Neelesh Kumar; Andhare, Atul B.; Andhale, Sandip; Raju Abraham, Roja
2018-04-01
Present work deals with prediction of surface roughness using cutting parameters along with in-process measured cutting force and tool vibration (acceleration) during turning of Ti-6Al-4V with cubic boron nitride (CBN) inserts. Full factorial design is used for design of experiments using cutting speed, feed rate and depth of cut as design variables. Prediction model for surface roughness is developed using response surface methodology with cutting speed, feed rate, depth of cut, resultant cutting force and acceleration as control variables. Analysis of variance (ANOVA) is performed to find out significant terms in the model. Insignificant terms are removed after performing statistical test using backward elimination approach. Effect of each control variables on surface roughness is also studied. Correlation coefficient (R2 pred) of 99.4% shows that model correctly explains the experiment results and it behaves well even when adjustment is made in factors or new factors are added or eliminated. Validation of model is done with five fresh experiments and measured forces and acceleration values. Average absolute error between RSM model and experimental measured surface roughness is found to be 10.2%. Additionally, an artificial neural network model is also developed for prediction of surface roughness. The prediction results of modified regression model are compared with ANN. It is found that RSM model and ANN (average absolute error 7.5%) are predicting roughness with more than 90% accuracy. From the results obtained it is found that including cutting force and vibration for prediction of surface roughness gives better prediction than considering only cutting parameters. Also, ANN gives better prediction over RSM models.
Bonaretti, Serena; Vilayphiou, Nicolas; Chan, Caroline Mai; Yu, Andrew; Nishiyama, Kyle; Liu, Danmei; Boutroy, Stephanie; Ghasem-Zadeh, Ali; Boyd, Steven K.; Chapurlat, Roland; McKay, Heather; Shane, Elizabeth; Bouxsein, Mary L.; Black, Dennis M.; Majumdar, Sharmila; Orwoll, Eric S.; Lang, Thomas F.; Khosla, Sundeep; Burghardt, Andrew J.
2017-01-01
Introduction HR-pQCT is increasingly used to assess bone quality, fracture risk and anti-fracture interventions. The contribution of the operator has not been adequately accounted in measurement precision. Operators acquire a 2D projection (“scout view image”) and define the region to be scanned by positioning a “reference line” on a standard anatomical landmark. In this study, we (i) evaluated the contribution of positioning variability to in vivo measurement precision, (ii) measured intra- and inter-operator positioning variability, and (iii) tested if custom training software led to superior reproducibility in new operators compared to experienced operators. Methods To evaluate the operator in vivo measurement precision we compared precision errors calculated in 64 co-registered and non-co-registered scan-rescan images. To quantify operator variability, we developed software that simulates the positioning process of the scanner’s software. Eight experienced operators positioned reference lines on scout view images designed to test intra- and inter-operator reproducibility. Finally, we developed modules for training and evaluation of reference line positioning. We enrolled 6 new operators to participate in a common training, followed by the same reproducibility experiments performed by the experienced group. Results In vivo precision errors were up to three-fold greater (Tt.BMD and Ct.Th) when variability in scan positioning was included. Inter-operator precision errors were significantly greater than short-term intra-operator precision (p<0.001). New trained operators achieved comparable intra-operator reproducibility to experienced operators, and lower inter-operator reproducibility (p<0.001). Precision errors were significantly greater for the radius than for the tibia. Conclusion Operator reference line positioning contributes significantly to in vivo measurement precision and is significantly greater for multi-operator datasets. Inter-operator variability can be significantly reduced using a systematic training platform, now available online (http://webapps.radiology.ucsf.edu/refline/). PMID:27475931
Amiri Arimi, Somayeh; Ghamkhar, Leila; Kahlaee, Amir H
2018-01-02
Impairment in the cervical proprioception and deep flexor muscle function and morphology have been regarded to be associated with chronic neck pain (CNP). The aim of the study is to assess the relationship between proprioception and flexor endurance capacity and size and clinical CNP characteristics. This was an observational, cross-sectional study. Rehabilitation hospital laboratory. Sixty subjects with or without CNP participated in the study. Joint position error, clinical deep flexor endurance test score, longus colli/capitis and sternocleidomastoid muscle size, pain intensity, neck pain-related disability, and fear of movement were assessed. Multivariate analysis of variance and Pearson correlation tests were used to compare the groups and quantify the strength of the associations among variables, respectively. Logistic regression analysis was performed to test the predictive value of the dependent variables for the development of neck pain. CNP patients showed lower flexor endurance (P = 0.01) and smaller longus colli size (P < 0.01). The joint position error was not statistically different between the groups. Longus colli size was correlated with local flexor endurance in both CNP (P = 0.01) and control (P = 0.04) groups. Among clinical CNP characteristics, kinesiophobia showed fair correlation with joint position error (r = 0.39, P = 0.03). Left rotation error and local flexor endurance were significant predictors of CNP development (β = 1.22, P = 0.02, and β = 0.97, P = 0.02, respectively). The results indicated that cervical proprioception was associated neither with deep flexor muscle structure/function nor with clinical CNP characteristics. Left rotation error and local flexor endurance were found relevant to neck pain development. © 2017 American Academy of Pain Medicine. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Method for controlling a vehicle with two or more independently steered wheels
Reister, David B.; Unseren, Michael A.
1995-01-01
A method (10) for independently controlling each steerable drive wheel (W_i) of a vehicle with two or more such wheels (W_i). An instantaneous center of rotation target (ICR) and a tangential velocity target (v^G) are inputs to a wheel target system (30) which sends the velocity target (v_i^G) and a steering angle target (θ_i^G) for each drive wheel (W_i) to a pseudovelocity target system (32). The pseudovelocity target system (32) determines a pseudovelocity target (v_P^G) which is compared to a current pseudovelocity (v_P^m) to determine a pseudovelocity error (ε). The steering angle targets (θ^G) and the steering angles (θ^m) are inputs to a steering angle control system (34) which outputs to the steering angle encoders (36), which measure the steering angles (θ^m). The pseudovelocity error (ε), the rate of change of the pseudovelocity error (ε̇), and the wheel slip between each pair of drive wheels (W_i) are used to calculate intermediate control variables which, along with the steering angle targets (θ^G), are used to calculate the torque to be applied at each wheel (W_i). The current distance traveled for each wheel (W_i) is then calculated. The current wheel velocities (v^m) and steering angle targets (θ^G) are used to calculate the cumulative and instantaneous wheel slip (e, ė) and the current pseudovelocity (v_P^m).
The Impact of Soil Sampling Errors on Variable Rate Fertilization
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. L. Hoskinson; R C. Rope; L G. Blackwood
2004-07-01
Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil's characteristics. Most often, spatial variability in the soil's fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on the existing soil fertility at the site, the predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil's fertility at the sampling points. The variability could come from laboratory analytical errors or from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag fertilizer recipe recommended application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre, and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences accounted for almost 87% of the cost difference. The sum of these differences could result in a $34 per acre cost difference for the fertilization. Because of these differences, better analytical methods, better sampling methods, or more samples may be needed to ensure that the soil measurements are truly representative of the field's spatial variability.
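A hedged sketch of how laboratory measurement error at the sampling points can propagate through spatial interpolation to every point in the field; the inverse-distance interpolator, field values, and 10% error level are assumptions, and the sketch is not DSS4Ag:

```python
# Sketch: perturb point soil samples with assumed lab measurement error and
# observe how much a simple inverse-distance-weighted fertility surface
# (and hence any recipe based on it) shifts. Values are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def idw(sample_xy, sample_vals, grid_xy, power=2.0):
    """Inverse-distance-weighted interpolation of point samples onto a grid."""
    d = np.linalg.norm(grid_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w * sample_vals).sum(axis=1) / w.sum(axis=1)

# Synthetic field: 20 sample points, 400 grid cells, a "true" fertility level.
sample_xy = rng.uniform(0, 100, size=(20, 2))
true_vals = 50 + 10 * np.sin(sample_xy[:, 0] / 20.0)      # e.g. ppm of a nutrient
grid_xy = np.stack(np.meshgrid(np.linspace(0, 100, 20),
                               np.linspace(0, 100, 20)), -1).reshape(-1, 2)

baseline = idw(sample_xy, true_vals, grid_xy)

# Monte Carlo: add lab error (assumed 10% coefficient of variation) to the
# point samples and look at the spread of the interpolated surface.
runs = np.array([idw(sample_xy, true_vals * rng.normal(1.0, 0.10, 20), grid_xy)
                 for _ in range(200)])
print("mean absolute shift per grid cell:",
      np.abs(runs - baseline).mean().round(2), "ppm")
print("worst-case shift:", np.abs(runs - baseline).max().round(2), "ppm")
```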
Rapid Detection of Volatile Oil in Mentha haplocalyx by Near-Infrared Spectroscopy and Chemometrics.
Yan, Hui; Guo, Cheng; Shao, Yang; Ouyang, Zhen
2017-01-01
Near-infrared spectroscopy combined with partial least squares regression (PLSR) and support vector machine (SVM) modeling was applied for the rapid determination of the volatile oil content in Mentha haplocalyx. The effects of data pre-processing methods on the accuracy of the PLSR calibration models were investigated. The performance of the final model was evaluated according to the correlation coefficient (R) and the root mean square error of prediction (RMSEP). For the PLSR model, the best pre-processing combination was first-order derivative, standard normal variate transformation (SNV), and mean centering, which gave a calibration correlation coefficient of 0.8805, a prediction correlation coefficient of 0.8719, an RMSEC of 0.091, and an RMSEP of 0.097. The wavenumber variables linked to volatile oil lie between 5500 and 4000 cm-1, as identified by analyzing the loading weights and variable importance in projection (VIP) scores. For the SVM model, six LVs (fewer than the seven LVs of the PLSR model) were adopted, and the result was better than that of the PLSR model: the calibration and prediction correlation coefficients were 0.9232 and 0.9202, respectively, with an RMSEC of 0.084 and an RMSEP of 0.082, indicating that the predicted values were accurate and reliable. This work demonstrated that near-infrared reflectance spectroscopy with chemometrics can be used to rapidly determine the main volatile oil content in M. haplocalyx. Because the quality of a medicine is directly linked to its clinical efficacy, such rapid quality control of Mentha haplocalyx is important. Abbreviations used: 1st der: first-order derivative; 2nd der: second-order derivative; LOO: leave-one-out; LVs: latent variables; MC: mean centering; NIR: near-infrared; NIRS: near-infrared spectroscopy; PCR: principal component regression; PLSR: partial least squares regression; RBF: radial basis function; RMSECV: root mean square error of cross validation; RMSEC: root mean square error of calibration; RMSEP: root mean square error of prediction; SNV: standard normal variate transformation; SVM: support vector machine; VIP: variable importance in projection.
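A hypothetical sketch of the pre-processing and PLSR workflow described above (first-order derivative, SNV, mean centering, then evaluation by R and RMSEP), using synthetic spectra; the wavenumber grid, band shape, train/test split, and component count are assumptions:

```python
# Sketch: synthetic NIR spectra -> first derivative -> SNV -> PLSR with 7 LVs,
# evaluated by the prediction correlation coefficient and RMSEP.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
wavenumbers = np.linspace(4000, 10000, 600)
n_samples = 80
oil_content = rng.uniform(0.2, 1.2, n_samples)              # % volatile oil (synthetic)
# Synthetic spectra: one band whose depth scales with oil content, plus noise.
band = np.exp(-0.5 * ((wavenumbers - 5200) / 150.0) ** 2)
spectra = 1.0 + oil_content[:, None] * band + rng.normal(0, 0.01, (n_samples, 600))

def snv(x):
    """Standard normal variate: center and scale each spectrum individually."""
    return (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

# First-order derivative (Savitzky-Golay), then SNV; PLSRegression mean-centers
# X and y internally, which covers the mean-centering step.
X = snv(savgol_filter(spectra, window_length=11, polyorder=2, deriv=1, axis=1))

X_cal, X_val, y_cal, y_val = train_test_split(X, oil_content, test_size=0.25,
                                              random_state=0)
pls = PLSRegression(n_components=7).fit(X_cal, y_cal)       # 7 LVs assumed
y_pred = pls.predict(X_val).ravel()
rmsep = np.sqrt(np.mean((y_val - y_pred) ** 2))
r = np.corrcoef(y_val, y_pred)[0, 1]
print(f"Rp = {r:.4f}, RMSEP = {rmsep:.4f}")
```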
Li, Shanzhi; Wang, Haoping; Tian, Yang; Aitouch, Abdel; Klein, John
2016-09-01
This paper presents an intelligent proportional-integral sliding mode control (iPISMC) approach for direct power control of a variable-speed, constant-frequency wind turbine system. The approach addresses optimal power production (in the maximum power point tracking sense) under disturbance factors such as turbulent wind. The controller comprises two sub-components: (i) an intelligent proportional-integral module for online disturbance compensation and (ii) a sliding mode module for circumventing disturbance estimation errors. The iPISMC method was tested on a FAST/Simulink platform of a 5 MW wind turbine system. The results demonstrate that the proposed iPISMC method outperforms both classical PI and intelligent proportional-integral (iPI) control in terms of active power and response time. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
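A heavily simplified sketch of an intelligent-PI plus sliding-mode control law of the kind described above, applied to an illustrative first-order plant; the ultra-local model, gains, plant, and disturbance are assumptions, not the authors' design:

```python
# Sketch (not the authors' implementation): an ultra-local model
# dy/dt ~= F + alpha*u is assumed, F is re-estimated online from recent
# measurements (the "intelligent" PI part), and a tanh-smoothed sliding-mode
# term absorbs the residual estimation error.
import numpy as np

dt, T = 0.001, 5.0
alpha, Kp, Ki, k_smc, phi = 1.0, 5.0, 2.0, 0.3, 0.05
y, u_prev, y_prev, e_int = 0.0, 0.0, 0.0, 0.0
y_ref = 1.0                                   # power set-point (normalized)

for k in range(int(T / dt)):
    # Online estimate of the lumped unknown dynamics F from the last step.
    F_hat = (y - y_prev) / dt - alpha * u_prev if k > 0 else 0.0
    e = y - y_ref
    e_int += e * dt
    s = e                                     # sliding variable (simple choice)
    # iPI part compensates F_hat; the smoothed sliding-mode term handles the
    # remaining estimation error without chattering.
    u = (-F_hat - Kp * e - Ki * e_int) / alpha - k_smc * np.tanh(s / phi)
    # Illustrative plant with a sinusoidal disturbance (e.g. turbulence).
    d = 0.5 * np.sin(2.0 * np.pi * 0.5 * k * dt)
    y_prev, u_prev = y, u
    y = y + dt * (-y + u + d)

print(f"final output {y:.3f} (set-point {y_ref})")
```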
Yu, Zhaoxu; Li, Shugang; Yu, Zhaosheng; Li, Fangfei
2018-04-01
This paper investigates the problem of output feedback adaptive stabilization for a class of nonstrict-feedback stochastic nonlinear systems with both unknown backlash-like hysteresis and unknown control directions. A new linear state transformation is applied to the original system, so that control design for the transformed system becomes feasible. By combining neural network (NN) parameterization, a variable separation technique, and the Nussbaum gain function method, an input-driven observer-based adaptive NN control scheme, which involves only one parameter to be updated, is developed for such systems. All closed-loop signals are bounded in probability, and the error signals remain semiglobally bounded in the fourth moment (or mean square). Finally, the effectiveness and applicability of the proposed control design are verified by two simulation examples.
NASA Technical Reports Server (NTRS)
Pauwels, V. R. N.; DeLannoy, G. J. M.; Hendricks Franssen, H.-J.; Vereecken, H.
2013-01-01
In this paper, we present a two-stage hybrid Kalman filter to estimate both observation and forecast bias in hydrologic models, in addition to state variables. The biases are estimated using the discrete Kalman filter, and the state variables using the ensemble Kalman filter. A key issue in this multi-component assimilation scheme is the exact partitioning of the difference between observations and forecasts into state, forecast-bias, and observation-bias updates. Here, the error covariances of the forecast bias and the unbiased states are calculated as constant fractions of the biased state error covariance, and the observation bias error covariance is a function of the observation prediction error covariance. In a series of synthetic experiments focusing on the assimilation of discharge into a rainfall-runoff model, it is shown that both static and dynamic observation and forecast biases can be successfully estimated. The results indicate a strong improvement in the estimation of the state variables and the resulting discharge compared with a bias-unaware ensemble Kalman filter. Furthermore, only minimal code modification of existing data assimilation software is needed to implement the method. The results suggest that data assimilation methods should perform better if both forecast and observation biases are taken into account.
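A highly simplified scalar sketch of bias-aware ensemble assimilation in the spirit of the scheme above: an ensemble Kalman filter update for the state plus a scalar Kalman update for the forecast bias whose error variance is set to a constant fraction of the state error variance; the model, the fraction gamma, and all numbers are assumptions, and observation bias is omitted:

```python
# Sketch (not the authors' exact two-stage filter): estimate a constant
# forecast bias alongside a scalar state using an ensemble of size n_ens.
import numpy as np

rng = np.random.default_rng(0)
n_ens, n_steps = 30, 50
true_x, true_bias = 10.0, 1.5          # truth and a constant forecast bias
obs_var, gamma = 0.5, 0.3              # obs error variance; bias-variance fraction

ens = rng.normal(8.0, 1.0, n_ens)      # initial state ensemble
bias = 0.0                             # initial forecast-bias estimate

for t in range(n_steps):
    # Forecast: persistence-like model that is biased by true_bias.
    true_x = 0.98 * true_x + 0.2
    ens = 0.98 * ens + 0.2 + true_bias + rng.normal(0, 0.1, n_ens)
    obs = true_x + rng.normal(0, np.sqrt(obs_var))

    # Stage 1: forecast-bias update; its error variance is gamma * P.
    P = np.var(ens, ddof=1)
    Kb = gamma * P / (gamma * P + obs_var)
    bias = bias + Kb * ((np.mean(ens) - bias) - obs)

    # Stage 2: EnKF update of the bias-corrected ensemble (perturbed obs).
    ens_c = ens - bias
    Pc = np.var(ens_c, ddof=1)
    K = Pc / (Pc + obs_var)
    ens = ens_c + K * (obs + rng.normal(0, np.sqrt(obs_var), n_ens) - ens_c)

print(f"estimated forecast bias {bias:.2f}  (truth {true_bias})")
print(f"analysis mean           {np.mean(ens):.2f}  (truth {true_x:.2f})")
```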
Intelligent Engine Systems: Adaptive Control
NASA Technical Reports Server (NTRS)
Gibson, Nathan
2008-01-01
We have studied the application of the baseline Model Predictive Control (MPC) algorithm to the control of main fuel flow rate (WF36), variable bleed valve (AE24), and variable stator vanes (STP25) on a simulated high-bypass turbofan engine. Using reference trajectories for thrust and turbine inlet temperature (T41) generated by a simulated new engine, we have examined MPC for tracking these two reference outputs while controlling a deteriorated engine. We have examined the results of MPC control for six different transients: two idle-to-takeoff transients at sea level static (SLS) conditions, one takeoff-to-idle transient at SLS, a Bode power command and reverse Bode power command at 20,000 ft/Mach 0.5, and a reverse Bode transient at 35,000 ft/Mach 0.84. For all cases, our primary focus was on the computational effort required by MPC for varying MPC update rates, control horizons, and prediction horizons. We have also considered the effects of these MPC parameters on the performance of the control, with special emphasis on the thrust tracking error, the peak T41, and the size of any constraint violations, primarily of the booster stall margin limit, which for most cases is the only constraint violated with any frequency.
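A bare-bones receding-horizon (MPC-style) tracking sketch that makes the roles of the prediction horizon, control horizon, and update rate concrete; the two-state linear plant, weights, and horizons are illustrative stand-ins, not the turbofan simulation, and constraints are omitted:

```python
# Sketch: unconstrained receding-horizon tracking for a small LTI model.
# At each update, the controller solves a least-squares problem over the
# control horizon, applies only the first move, and re-solves at the next step.
import numpy as np

# Illustrative discrete-time plant: x_{k+1} = A x_k + B u_k, y_k = C x_k.
A = np.array([[0.95, 0.10], [0.00, 0.90]])
B = np.array([[0.00], [0.05]])
C = np.array([[1.0, 0.0]])

Np, Nc = 20, 5            # prediction horizon, control horizon
qy, ru = 1.0, 0.01        # output-tracking weight, control-effort weight

# Lifted prediction matrices: Y = F x0 + G U, with U the first Nc moves and
# moves beyond the control horizon held at the last move.
F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(Np)])
G = np.zeros((Np, Nc))
for i in range(Np):
    for j in range(min(i + 1, Nc)):
        if j < Nc - 1:
            G[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()
        else:
            G[i, j] = sum((C @ np.linalg.matrix_power(A, i - k) @ B).item()
                          for k in range(j, i + 1))

x = np.zeros((2, 1))
y_ref = 1.0               # e.g. a normalized thrust reference
for step in range(100):   # receding-horizon loop (one solve per update)
    r = np.full((Np, 1), y_ref)
    H = qy * (G.T @ G) + ru * np.eye(Nc)
    U = np.linalg.solve(H, qy * G.T @ (r - F @ x))
    u = U[0:1]            # apply only the first move
    x = A @ x + B @ u
print(f"output after 100 steps: {(C @ x).item():.3f} (reference {y_ref})")
```

Larger Np and Nc increase the per-update computation (the solve grows with Nc), which is the trade-off against tracking performance examined in the study above.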
Multiple Cognitive Control Effects of Error Likelihood and Conflict
Brown, Joshua W.
2010-01-01
Recent work on cognitive control has suggested a variety of performance-monitoring functions of the anterior cingulate cortex, such as the monitoring of errors, conflict, error likelihood, and other signals. Given this variety of monitoring effects, a corresponding variety of control effects on behavior might be expected. This paper explores whether conflict and error likelihood produce distinct cognitive control effects on behavior, as measured by response time. A change signal task (Brown & Braver, 2005) was modified to include conditions of likely errors due to tardy as well as premature responses, in conditions with and without conflict. The results discriminate between competing hypotheses of independent vs. interacting conflict and error-likelihood control effects. Specifically, the results suggest that the likelihood of premature vs. tardy response errors can lead to multiple distinct control effects, which are independent of cognitive control effects driven by response conflict. As a whole, the results point to the existence of multiple distinct cognitive control mechanisms and challenge existing models of cognitive control that incorporate only a single control signal. PMID:19030873
New methodology to reconstruct in 2-D the cuspal enamel of modern human lower molars.
Modesto-Mata, Mario; García-Campos, Cecilia; Martín-Francés, Laura; Martínez de Pinillos, Marina; García-González, Rebeca; Quintino, Yuliet; Canals, Antoni; Lozano, Marina; Dean, M Christopher; Martinón-Torres, María; Bermúdez de Castro, José María
2017-08-01
In recent years, different methodologies have been developed to reconstruct worn teeth. In this article, we propose a new 2-D methodology to reconstruct the worn enamel of lower molars. Our main goals are to reconstruct molars with a high level of accuracy when measuring relevant histological variables and to validate the methodology by calculating the errors associated with the measurements. The methodology is based on polynomial regression equations and has been validated using two dental variables: cuspal enamel thickness and crown height of the protoconid. To perform the validation, simulated worn modern human molars were employed. The associated measurement errors were also estimated by applying methodologies previously proposed by other authors. The mean percentage error estimated in reconstructed molars for these two variables, compared with their real values, is -2.17% for the cuspal enamel thickness of the protoconid and -3.18% for the crown height of the protoconid. This significantly improves on the results of other methodologies, both in interobserver error and in the accuracy of the measurements. The new methodology based on polynomial regressions can be confidently applied to the reconstruction of the cuspal enamel of lower molars, as it improves the accuracy of the measurements and reduces interobserver error. The present study shows that it is important to validate all methodologies in order to know their associated errors. The new methodology can easily be exported to other modern human populations, the human fossil record, and forensic sciences. © 2017 Wiley Periodicals, Inc.
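A small sketch of the validation idea under simplifying assumptions: fit a polynomial to the preserved part of a simulated 2-D enamel profile, extrapolate it over a simulated worn region, and compute the percentage error of a reconstructed variable against its known value; the profile, wear region, and polynomial degree are illustrative, not the authors' regression equations:

```python
# Sketch: reconstruct a simulated worn cusp by polynomial regression on the
# preserved flanks, then compare the reconstructed crown height with the
# known (pre-wear) value for one simulated molar.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)                       # position along the crown (mm)
true_profile = 8.0 - 0.05 * (x - 5.0) ** 2 + rng.normal(0, 0.02, x.size)

worn = (x > 3.5) & (x < 6.5)                          # cusp tip region lost to wear

# Polynomial regression on the preserved portion only, then extrapolation.
coeffs = np.polyfit(x[~worn], true_profile[~worn], deg=2)
reconstructed = np.polyval(coeffs, x)

true_height = true_profile.max()
recon_height = reconstructed[worn].max()
pct_error = 100.0 * (recon_height - true_height) / true_height
print(f"crown height: true {true_height:.3f} mm, reconstructed {recon_height:.3f} mm")
print(f"percentage error for this simulated molar: {pct_error:+.2f}%")
```

Averaging this percentage error over a set of simulated worn molars gives the kind of mean percentage error reported above.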
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, L.; Hill, W.J.
A method is proposed to estimate the effect of long-term variations in total ozone on the error incurred in determining a trend in total ozone due to man-made effects. When this method is applied to data from Arosa, Switzerland over the years 1932-1980, a component of the standard error of the trend estimate equal to 0.6 percent per decade is obtained. If this estimate of long-term trend variability at Arosa is not too different from global long-term trend variability, then the threshold (±2 standard errors) for detecting an ozone trend in the 1970s that is outside of what could be expected from natural variation alone, and hence man-made, would range from 1.35% (Reinsel et al., 1981) to 1.8%. The latter value is obtained by combining the Reinsel et al. result with the result here, assuming that the error variations measured in the two studies are independent and additive. Estimates of long-term trend variation over other time periods are also derived. Simulations that measure the precision of the estimate of long-term variability are reported.
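A worked check of the error combination implied above, assuming the 1.35% Reinsel et al. figure corresponds to a ±2-standard-error threshold and that the two error components are independent and additive in variance:

```python
# Worked check: combine the implied Reinsel et al. standard error with the
# 0.6% per decade long-term variability component from the Arosa analysis.
import math

se_reinsel = 1.35 / 2          # implied standard error behind the 1.35% threshold
se_longterm = 0.6              # long-term variability component (percent per decade)

se_combined = math.sqrt(se_reinsel**2 + se_longterm**2)
threshold = 2 * se_combined    # +/- 2 standard errors
print(f"combined threshold: {threshold:.2f}%")   # ~1.8%, matching the abstract
```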