Sample records for initial condition error

  1. Ocean Data Impacts in Global HYCOM

    DTIC Science & Technology

    2014-08-01

    The purpose of assimilation is to reduce the model initial condition error. Improved initial conditions should lead to an improved forecast...the determination of locations where forecast errors are sensitive to the initial conditions is essential for improving the data assimilation system...longwave radiation, total (large scale plus convective) precipitation, ground/sea temperature, zonal and meridional wind velocities at 10 m, mean sea

  2. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.
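
    The two definitions of the Talagrand ratio are not reproduced in this record, but the general idea of bounding the model-error fraction from forecast errors at two lead times can be illustrated. The sketch below is a toy calculation, not the paper's framework: it assumes initial-condition and model errors add in quadrature and that the initial-condition part grows no faster than a prescribed doubling time; all numbers are hypothetical.

    ```python
    import numpy as np

    def tau_lower_bound(e1, e2, dt_hours, doubling_hours):
        """Toy lower bound on tau = (model error) / (total forecast error).

        Assumes error variances add in quadrature and the initial-condition
        part grows at most exponentially with the given doubling time.
        """
        growth = 2.0 ** (dt_hours / doubling_hours)
        ic_part_max = growth * e1                  # largest possible IC error at the later lead
        model_part_sq = max(e2 ** 2 - ic_part_max ** 2, 0.0)
        return np.sqrt(model_part_sq) / e2

    # hypothetical RMS forecast errors at 24 h and 48 h leads
    print(tau_lower_bound(e1=8.0, e2=15.0, dt_hours=24.0, doubling_hours=48.0))
    ```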

  3. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  4. Initializing a Mesoscale Boundary-Layer Model with Radiosonde Observations

    NASA Astrophysics Data System (ADS)

    Berri, Guillermo J.; Bertossa, Germán

    2018-01-01

    A mesoscale boundary-layer model is used to simulate low-level regional wind fields over the La Plata River of South America, a region characterized by a strong daily cycle of land-river surface-temperature contrast and low-level circulations of sea-land breeze type. The initial and boundary conditions are defined from a limited number of local observations, and the upper boundary condition is taken from the only radiosonde observations available in the region. The study considers 14 different upper boundary conditions defined from the radiosonde data at standard levels, significant levels, the level of the inversion base, and interpolated levels at fixed heights, all of them within the first 1500 m. The period of analysis is 1994-2008, during which eight daily observations from 13 weather stations of the region are used to validate the 24-h surface-wind forecast. The model errors are defined as the root-mean-square of the relative error in wind-direction frequency distribution and mean wind speed per wind sector. Wind-direction errors are greater than wind-speed errors and show significant dispersion among the different upper boundary conditions, not present in wind speed, revealing a sensitivity to the initialization method. The wind-direction errors show a well-defined daily cycle, not evident in wind speed, with the minimum at noon and the maximum at dusk, but no systematic deterioration with time. The errors grow with the height of the upper boundary-condition level, particularly for wind direction, reaching double the errors obtained when the upper boundary condition is defined from the lower levels. The conclusion is that defining the model upper boundary condition from radiosonde data closer to the ground minimizes the low-level wind-field errors throughout the region.

  5. A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models

    NASA Astrophysics Data System (ADS)

    Keller, J. D.; Bach, L.; Hense, A.

    2012-12-01

    The estimation of fast-growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades, three methods (and variations of these methods) have evolved for global numerical weather prediction models: the ensemble Kalman filter, singular vectors, and breeding of growing modes (or now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited-area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error, and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in the mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter, which does not necessarily focus on the large-scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research, we use a mesoscale model with an implemented nudging data assimilation scheme that does not support ensemble data assimilation at all. In preparation for an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast-growing error modes in mesoscale limited-area models. The so-called self-breeding is a development based on the breeding-of-growing-modes technique. Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates of the initial uncertainty structure (or local Lyapunov vectors) for a given norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the subspace spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of errors (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
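
    The self-breeding cycle described above (integrate briefly, rescale, re-add, orthogonalize, repeat) can be sketched on a toy system. The code below runs such a cycle on the Lorenz-63 model, using a QR decomposition as a simple stand-in for the ensemble-transform orthogonalization and the Euclidean norm as the rescaling norm; the model choice, amplitudes, and cycle length are illustrative assumptions, not the authors' configuration.

    ```python
    import numpy as np

    def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        return np.array([sigma * (x[1] - x[0]),
                         x[0] * (rho - x[2]) - x[1],
                         x[0] * x[1] - beta * x[2]])

    def rk4_step(x, dt=0.01):
        k1 = lorenz63(x)
        k2 = lorenz63(x + 0.5 * dt * k1)
        k3 = lorenz63(x + 0.5 * dt * k2)
        k4 = lorenz63(x + dt * k3)
        return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    def self_breed(x0, n_vec=2, amp=1e-2, cycle_steps=8, n_cycles=300, seed=0):
        """Breed n_vec perturbations: integrate, rescale, orthogonalize, repeat."""
        rng = np.random.default_rng(seed)
        x = x0.copy()
        pert = amp * rng.standard_normal((3, n_vec))
        for _ in range(n_cycles):
            ens = x[:, None] + pert                    # perturbed initial states
            for _ in range(cycle_steps):
                x = rk4_step(x)                        # control run
                for k in range(n_vec):
                    ens[:, k] = rk4_step(ens[:, k])    # perturbed runs
            pert = ens - x[:, None]                    # grown perturbations
            q, _ = np.linalg.qr(pert)                  # orthogonalize in ensemble subspace
            pert = amp * q                             # rescale to the initial amplitude
        return pert / amp                              # unit-norm local Lyapunov vector estimates

    # spin up onto the attractor, then breed
    x = np.array([1.0, 1.0, 20.0])
    for _ in range(2000):
        x = rk4_step(x)
    print(self_breed(x))
    ```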

  6. ENSO Predictions in an Intermediate Coupled Model Influenced by Removing Initial Condition Errors in Sensitive Areas: A Target Observation Perspective

    NASA Astrophysics Data System (ADS)

    Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua

    2018-07-01

    Previous studies indicate that ENSO predictions are particularly sensitive to the initial conditions in some key areas (so-called "sensitive areas"). Yet few studies have quantified improvements in prediction skill in the context of an optimal observing system. In this study, the impact on prediction skill is explored using an intermediate coupled model in which errors in the initial conditions used to make ENSO predictions are removed in certain areas. Based on ideal observing system simulation experiments, the importance of various observational networks for improving El Niño prediction skill is examined. The results indicate that the initial states in the central and eastern equatorial Pacific are important for improving El Niño prediction skill effectively. When removing the initial condition errors in the central equatorial Pacific, ENSO prediction errors can be reduced by 25%. Furthermore, combinations of various subregions are considered to demonstrate their efficiency in improving ENSO prediction skill. In particular, seasonally varying observational networks are suggested to improve the prediction skill more effectively. For example, in addition to observing in the central equatorial Pacific and the region to its north throughout the year, increasing observations in the eastern equatorial Pacific from April to October is crucially important, and can improve the prediction accuracy by 62%. These results also demonstrate the effectiveness of the conditional nonlinear optimal perturbation approach for detecting sensitive areas for target observations.

  7. Effects of Heterogeneity and Uncertainties in Sources and Initial and Boundary Conditions on Spatiotemporal Variations of Groundwater Levels

    NASA Astrophysics Data System (ADS)

    Zhang, Y. K.; Liang, X.

    2014-12-01

    Effects of aquifer heterogeneity and uncertainties in source/sink and initial and boundary conditions in a groundwater flow model on the spatiotemporal variations of groundwater level, h(x,t), were investigated. Analytical solutions for the variance and covariance of h(x,t) in an unconfined aquifer described by a linearized Boussinesq equation with a white-noise source/sink and a random transmissivity field were derived. It was found that in a typical aquifer the error in h(x,t) at early times is mainly caused by the random initial condition, and that this error decreases with time, approaching a constant level at later times. The period during which the effect of the random initial condition is significant may last a few hundred days in most aquifers. The constant error in groundwater level at later times is due to the combined effects of the uncertain source/sink and flux boundary: the closer to the flux boundary, the larger the error. The error caused by the uncertain head boundary is limited to a narrow zone near the boundary but remains more or less constant over time. The effect of the heterogeneity is to increase the variation of groundwater level, and the maximum effect occurs close to the constant-head boundary because of the linear mean hydraulic gradient. The correlation of groundwater level decreases with temporal interval and spatial distance. In addition, the heterogeneity enhances the correlation of groundwater level, especially at larger time intervals and small spatial distances.
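
    The qualitative behavior reported here (initial-condition error decaying to a constant level set by the noisy source/sink) can be reproduced with a zero-dimensional analogue. The Monte Carlo sketch below is not the paper's analytical solution of the linearized Boussinesq equation; it treats the head anomaly as a linear reservoir driven by white noise, and all parameter values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_real, n_steps, dt = 2000, 4000, 0.5      # realizations, steps, days per step
    t_c, sigma_q, sigma_h0 = 100.0, 0.05, 0.5  # relaxation time (d), forcing and IC spreads

    h = sigma_h0 * rng.standard_normal(n_real)  # ensemble of uncertain initial heads
    var = np.empty(n_steps)
    for i in range(n_steps):
        forcing = sigma_q * np.sqrt(dt) * rng.standard_normal(n_real)
        h = h + dt * (-h / t_c) + forcing       # Euler-Maruyama step
        var[i] = h.var()

    # early variance ~ sigma_h0^2 (initial condition dominates); late variance
    # approaches the forcing-controlled constant sigma_q^2 * t_c / 2
    print(var[0], var[-1], sigma_q ** 2 * t_c / 2)
    ```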

  8. Postural adjustment errors during lateral step initiation in older and younger adults

    PubMed Central

    Sparto, Patrick J.; Fuhrman, Susan I.; Redfern, Mark S.; Perera, Subashan; Jennings, J. Richard; Furman, Joseph M.

    2016-01-01

    The purpose was to examine age differences and varying levels of step response inhibition on the performance of a voluntary lateral step initiation task. Seventy older adults (70 – 94 y) and twenty younger adults (21 – 58 y) performed visually-cued step initiation conditions based on direction and spatial location of arrows, ranging from a simple choice reaction time task to a perceptual inhibition task that included incongruous cues about which direction to step (e.g. a left pointing arrow appearing on the right side of a monitor). Evidence of postural adjustment errors and step latencies were recorded from vertical ground reaction forces exerted by the stepping leg. Compared with younger adults, older adults demonstrated greater variability in step behavior, generated more postural adjustment errors during conditions requiring inhibition, and had greater step initiation latencies that increased more than younger adults as the inhibition requirements of the condition became greater. Step task performance was related to clinical balance test performance more than executive function task performance. PMID:25595953

  9. Sensitivity of Forecast Skill to Different Objective Analysis Schemes

    NASA Technical Reports Server (NTRS)

    Baker, W. E.

    1979-01-01

    Numerical weather forecasts are characterized by rapidly declining skill in the first 48 to 72 h. Recent estimates of the sources of forecast error indicate that the inaccurate specification of the initial conditions contributes substantially to this error. The sensitivity of the forecast skill to the initial conditions is examined by comparing a set of real-data experiments whose initial data were obtained with two different analysis schemes. Results are presented to emphasize the importance of the objective analysis techniques used in the assimilation of observational data.

  10. Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts

    NASA Astrophysics Data System (ADS)

    Gingrich, Mark

    Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that those mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth in the mesoscales remain largely unknown. Here, 100-member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalent, total liquid water, and 850 hPa temperatures, representing mesoscale features; and the sea level pressure field, representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.

  11. Error analysis of finite difference schemes applied to hyperbolic initial boundary value problems

    NASA Technical Reports Server (NTRS)

    Skollermo, G.

    1979-01-01

    Finite difference methods for the numerical solution of mixed initial boundary value problems for hyperbolic equations are studied. The objective of the reported investigation is to develop a technique for the total error analysis of a finite difference scheme, taking into account initial approximations, boundary conditions, and interior approximation. Attention is given to the Cauchy problem and the initial approximation, the homogeneous problem in an infinite strip with inhomogeneous boundary data, the reflection of errors at the boundaries, and two different boundary approximations for the leapfrog scheme with a fourth-order accurate difference operator in space.
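
    The leapfrog scheme with a fourth-order spatial operator mentioned above is easy to state concretely. The sketch below applies it to the model advection equation u_t + a u_x = 0 on a periodic domain, so the boundary approximations studied in the report are deliberately left out; the first step uses forward Euler, which plays the role of the initial approximation whose error the report analyzes. Grid sizes and the CFL number are illustrative choices.

    ```python
    import numpy as np

    def ddx4(u, dx):
        """Fourth-order centered first derivative on a periodic grid."""
        return (-np.roll(u, -2) + 8 * np.roll(u, -1)
                - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * dx)

    n, a = 200, 1.0
    dx = 1.0 / n
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    dt = 0.5 * dx / a                      # CFL number 0.5, within the leapfrog limit
    steps = int(round(1.0 / dt))           # integrate to t = 1 (one full revolution)

    u0 = np.exp(-200.0 * (x - 0.5) ** 2)
    u_old = u0.copy()
    u = u0 - dt * a * ddx4(u0, dx)         # initial approximation: one Euler step
    for _ in range(steps - 1):
        u_new = u_old - 2.0 * dt * a * ddx4(u, dx)   # leapfrog step
        u_old, u = u, u_new

    t_end = steps * dt
    exact = np.exp(-200.0 * (np.mod(x - a * t_end, 1.0) - 0.5) ** 2)
    print("max error:", np.abs(u - exact).max())
    ```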

  12. MUSIC: MUlti-Scale Initial Conditions

    NASA Astrophysics Data System (ADS)

    Hahn, Oliver; Abel, Tom

    2013-11-01

    MUSIC generates multi-scale initial conditions with multiple levels of refinement for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel, together with an adaptive multi-grid Poisson solver, to generate displacements and velocities following first-order (1LPT) or second-order (2LPT) Lagrangian perturbation theory. MUSIC achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier-space-induced interference ringing.
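
    The core idea, white noise shaped by a transfer function, can be shown in a few lines. The sketch below generates a Gaussian random field on a single-level periodic grid by filtering white noise in Fourier space; it is a drastically simplified cousin of MUSIC's adaptive real-space convolution and multi-grid machinery, with a made-up power-law spectrum.

    ```python
    import numpy as np

    def gaussian_field(n=128, boxsize=1.0, spectral_index=-2.0, seed=0):
        """White noise filtered by sqrt(P(k)) with P(k) ~ k**spectral_index."""
        rng = np.random.default_rng(seed)
        noise = rng.standard_normal((n, n))
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
        kx, ky = np.meshgrid(k, k, indexing="ij")
        kmag = np.hypot(kx, ky)
        kmag[0, 0] = np.inf                      # zeroes out the k = 0 (mean) mode below
        transfer = kmag ** (spectral_index / 2.0)
        return np.fft.ifft2(np.fft.fft2(noise) * transfer).real

    delta = gaussian_field()
    print("field rms:", delta.std())
    ```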

  13. 78 FR 15876 - Activation of Ice Protection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-13

    ... procedures in the Airplane Flight Manual for operating in icing conditions must be initiated. (2) Visual cues... procedures in the Airplane Flight Manual for operating in icing conditions must be initiated. (3) If the... operating rules for flight in icing conditions. This document corrects an error in the amendatory language...

  14. Errors in Postural Preparation Lead to Increased Choice Reaction Times for Step Initiation in Older Adults

    PubMed Central

    Nutt, John G.; Horak, Fay B.

    2011-01-01

    Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431

  15. Tropical forecasting - Predictability perspective

    NASA Technical Reports Server (NTRS)

    Shukla, J.

    1989-01-01

    Results are presented of classical predictability studies and forecast experiments with observed initial conditions to show the nature of initial error growth and final error equilibration for the tropics and midlatitudes, separately. It is found that the theoretical upper limit of tropical circulation predictability is far less than for midlatitudes. The error growth for a complete general circulation model is compared to a dry version of the same model in which there is no prognostic equation for moisture, and diabatic heat sources are prescribed. It is found that the growth rate of synoptic-scale errors for the dry model is significantly smaller than for the moist model, suggesting that the interactions between dynamics and moist processes are among the important causes of atmospheric flow predictability degradation. Results are then presented of numerical experiments showing that correct specification of the slowly varying boundary condition of SST produces significant improvement in the prediction of time-averaged circulation and rainfall over the tropics.

  16. A Generic Probabilistic Model and a Hierarchical Solution for Sensor Localization in Noisy and Restricted Conditions

    NASA Astrophysics Data System (ADS)

    Ji, S.; Yuan, X.

    2016-06-01

    A generic probabilistic model, based on the fundamental Bayes' rule and the Markov assumption, is introduced to integrate the process of mobile platform localization with optical sensors. Based on it, three relatively independent solutions, bundle adjustment, Kalman filtering and particle filtering, are deduced under different and additional restrictions. We want to prove that, first, Kalman filtering may be a better initial-value supplier for bundle adjustment than traditional relative orientation in irregular strips and networks or failed tie-point extraction. Second, in highly noisy conditions, particle filtering can act as a bridge for gap binding when a large number of gross errors fail a Kalman filtering or a bundle adjustment. Third, both filtering methods, which help reduce error propagation and eliminate gross errors, guarantee a global and static bundle adjustment, which requires the strictest initial values and control conditions. The main innovation is the integrated processing of stochastic errors and gross errors in sensor observations, and the integration of the three most used solutions, bundle adjustment, Kalman filtering and particle filtering, into a generic probabilistic localization model. Tests in noisy and restricted situations are designed and examined to validate these claims.
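
    The role of Kalman filtering as an initial-value supplier is easiest to see in a minimal linear example. The sketch below is a generic one-dimensional constant-velocity Kalman filter, not the sensor model of the paper; matrices and noise levels are hypothetical.

    ```python
    import numpy as np

    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
    H = np.array([[1.0, 0.0]])              # position-only observation
    Q = 0.01 * np.eye(2)                    # process noise covariance
    R = np.array([[0.5]])                   # observation noise covariance

    def kf_step(x, P, z):
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
        return x, P

    rng = np.random.default_rng(0)
    truth = np.array([0.0, 1.0])
    x, P = np.zeros(2), 10.0 * np.eye(2)    # vague prior
    for _ in range(20):
        truth = F @ truth
        z = H @ truth + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)
        x, P = kf_step(x, P, z)
    # x and P could now seed a bundle adjustment as its initial values
    print("estimate:", x, "truth:", truth)
    ```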

  17. Accessory stimulus modulates executive function during stepping task

    PubMed Central

    Watanabe, Tatsunori; Koyama, Soichiro; Tanabe, Shigeo

    2015-01-01

    When multiple sensory modalities are presented simultaneously, reaction time can be reduced while interference increases. The purpose of this research was to examine the effects of task-irrelevant acoustic accessory stimuli presented simultaneously with visual imperative stimuli on executive function during stepping. Executive functions were assessed by analyzing temporal events and errors in the initial weight transfer of the postural responses prior to a step (anticipatory postural adjustment errors). Eleven healthy young adults stepped forward in response to a visual stimulus. We applied a choice reaction time task and the Simon task, which consisted of congruent and incongruent conditions. Accessory stimuli were randomly presented with the visual stimuli. Compared with trials without accessory stimuli, the anticipatory postural adjustment error rates were higher in trials with accessory stimuli in the incongruent condition, and the reaction times were shorter in trials with accessory stimuli in all the task conditions. Analyses after division of trials according to whether an anticipatory postural adjustment error occurred or not revealed that the reaction times of trials with anticipatory postural adjustment errors were reduced more than those of trials without anticipatory postural adjustment errors in the incongruent condition. These results suggest that accessory stimuli modulate the initial motor programming of stepping by lowering the decision threshold and, exclusively under spatial incompatibility, facilitating automatic response activation. The present findings advance the knowledge of intersensory judgment processes during stepping and may aid in the development of intervention and evaluation tools for individuals at risk of falls. PMID:25925321

  18. Bayesian network models for error detection in radiotherapy plans

    NASA Astrophysics Data System (ADS)

    Kalet, Alan M.; Gennari, John H.; Ford, Eric C.; Phillips, Mark H.

    2015-04-01

    The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network's conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation-oncology-based clinical information database system. These data represent 4990 unique prescription cases over a 5-year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts' performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
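
    The flagging logic (propagate clinical information through conditional probability tables, flag low-probability parameter combinations) can be sketched without the Hugin software. The network below is a hypothetical two-node fragment with invented probabilities, far smaller than the learned networks described in the paper.

    ```python
    # Hypothetical two-node network: treatment site -> prescription dose.
    # All probabilities are made up for illustration; the real networks were
    # learned from clinical data with the Hugin Expert software.
    p_site = {"lung": 0.5, "brain": 0.3, "breast": 0.2}
    p_dose_given_site = {
        ("lung", 60.0): 0.70, ("lung", 45.0): 0.25, ("lung", 20.0): 0.05,
        ("brain", 60.0): 0.55, ("brain", 45.0): 0.05, ("brain", 20.0): 0.40,
        ("breast", 60.0): 0.10, ("breast", 45.0): 0.80, ("breast", 20.0): 0.10,
    }

    def plan_probability(site, dose):
        """Joint probability of the plan parameters under the network."""
        return p_site[site] * p_dose_given_site.get((site, dose), 0.0)

    def check_plan(site, dose, threshold=0.05):
        """Flag a plan whose propagated probability falls below the threshold."""
        p = plan_probability(site, dose)
        return ("FLAG" if p < threshold else "ok", p)

    print(check_plan("lung", 60.0))    # common combination -> ok
    print(check_plan("breast", 20.0))  # rare combination -> flagged
    ```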

  19. Sensitivity of physical parameterizations on prediction of tropical cyclone Nargis over the Bay of Bengal using WRF model

    NASA Astrophysics Data System (ADS)

    Raju, P. V. S.; Potty, Jayaraman; Mohanty, U. C.

    2011-09-01

    Comprehensive sensitivity analyses of the physical parameterization schemes of the Weather Research and Forecasting (WRF-ARW core) model have been carried out for the prediction of the track and intensity of tropical cyclones, taking the example of cyclone Nargis, which formed over the Bay of Bengal and hit Myanmar on 02 May 2008, causing widespread damage in terms of human and economic losses. The model performance is also evaluated with different initial conditions at 12-h intervals, starting from cyclogenesis to near the landfall time. The initial and boundary conditions for all the model simulations are drawn from the global operational analysis and forecast products of the National Centers for Environmental Prediction (NCEP-GFS), available to the public at 1° lon/lat resolution. The results of the sensitivity analyses indicate that a combination of the non-local parabolic-type exchange coefficient PBL scheme of Yonsei University (YSU), the deep and shallow convection scheme with a mass flux approach for cumulus parameterization (Kain-Fritsch), and the NCEP operational cloud microphysics scheme with diagnostic mixed-phase processes (Ferrier) predicts the track and intensity better, as compared against the Joint Typhoon Warning Center (JTWC) estimates. Further, the final choice of physical parameterization schemes selected from the above sensitivity experiments is used for model integration with different initial conditions. The results reveal that the cyclone track, intensity and time of landfall are well simulated by the model, with an average intensity error of about 8 hPa, a maximum wind error of 12 m s^-1, and a track error of 77 km. The simulations also show that the landfall time error and intensity error decrease with later initial conditions, suggesting that the model forecast is more dependable as the cyclone approaches the coast. The distribution and intensity of rainfall are also well simulated by the model and comparable with the TRMM estimates.

  1. Learning to Fail in Aphasia: An Investigation of Error Learning in Naming

    PubMed Central

    Middleton, Erica L.; Schwartz, Myrna F.

    2013-01-01

    Purpose To determine whether the naming impairment in aphasia is influenced by error learning and whether error learning is related to the type of retrieval strategy. Method Nine participants with aphasia and ten neurologically intact controls named familiar proper noun concepts. When experiencing tip-of-the-tongue naming failure (TOT) in an initial TOT-elicitation phase, participants were instructed to adopt phonological or semantic self-cued retrieval strategies. In the error learning manipulation, items evoking TOT states during TOT-elicitation were randomly assigned to a short or long time condition, in which participants were encouraged to continue to try to retrieve the name for either 20 seconds (short interval) or 60 seconds (long interval). The incidence of TOT on the same items was measured on a post test after 48 hours. Error learning was defined as a higher rate of recurrent TOTs (TOT at both TOT-elicitation and post test) for items assigned to the long (versus short) time condition. Results In the phonological condition, participants with aphasia showed error learning, whereas controls showed a pattern opposite to error learning. There was no evidence for error learning in the semantic condition for either group. Conclusion Error learning is operative in aphasia, but dependent on the type of strategy employed during naming failure. PMID:23816662

  2. Quantum subsystems: Exploring the complementarity of quantum privacy and error correction

    NASA Astrophysics Data System (ADS)

    Jochym-O'Connor, Tomas; Kribs, David W.; Laflamme, Raymond; Plosker, Sarah

    2014-09-01

    This paper addresses and expands on the contents of the recent Letter [Phys. Rev. Lett. 111, 030502 (2013), 10.1103/PhysRevLett.111.030502] discussing private quantum subsystems. Here we prove several previously presented results, including a condition for a given random unitary channel to not have a private subspace (although this does not mean that private communication cannot occur, as was previously demonstrated via private subsystems) and algebraic conditions that characterize when a general quantum subsystem or subspace code is private for a quantum channel. These conditions can be regarded as the private analog of the Knill-Laflamme conditions for quantum error correction, and we explore how the conditions simplify in some special cases. The bridge between quantum cryptography and quantum error correction provided by complementary quantum channels motivates the study of a new, more general definition of quantum error-correcting code, and we initiate this study here. We also consider the concept of complementarity for the general notion of a private quantum subsystem.
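
    The Knill-Laflamme conditions referenced above, P E_i† E_j P = λ_ij P for a code projector P and error set {E_i}, can be checked numerically. The sketch below verifies them for the standard three-qubit bit-flip code against single bit-flip errors; this is a textbook example, not the subsystem constructions of the paper.

    ```python
    import numpy as np

    I2 = np.eye(2)
    X = np.array([[0.0, 1.0], [1.0, 0.0]])

    def kron_chain(*ops):
        out = np.eye(1)
        for op in ops:
            out = np.kron(out, op)
        return out

    # three-qubit bit-flip code: code space spanned by |000> and |111>
    ket000 = np.zeros(8); ket000[0] = 1.0
    ket111 = np.zeros(8); ket111[7] = 1.0
    P = np.outer(ket000, ket000) + np.outer(ket111, ket111)

    # correctable error set: identity plus single bit flips
    errors = [kron_chain(I2, I2, I2), kron_chain(X, I2, I2),
              kron_chain(I2, X, I2), kron_chain(I2, I2, X)]

    ok = True
    for Ei in errors:
        for Ej in errors:
            M = P @ Ei.conj().T @ Ej @ P
            lam = np.trace(M) / np.trace(P)      # candidate scalar lambda_ij
            ok = ok and np.allclose(M, lam * P)
    print("Knill-Laflamme conditions satisfied:", ok)
    ```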

  3. Nonlinear grid error effects on numerical solution of partial differential equations

    NASA Technical Reports Server (NTRS)

    Dey, S. K.

    1980-01-01

    Finite difference solutions of nonlinear partial differential equations require discretizations and consequently grid errors are generated. These errors strongly affect stability and convergence properties of difference models. Previously such errors were analyzed by linearizing the difference equations for solutions. Properties of mappings of decadence were used to analyze nonlinear instabilities. Such an analysis is directly affected by initial/boundary conditions. An algorithm was developed, applied to nonlinear Burgers equations, and verified computationally. A preliminary test shows that Navier-Stokes equations may be treated similarly.
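
    A standard way to see grid error at work in a nonlinear problem is to integrate the inviscid Burgers equation with two discretizations. In the sketch below, a naive forward-Euler/centered-difference scheme is unstable and blows up, while first-order upwinding stays bounded; the grid, time step, and initial data are arbitrary illustrative choices, not those of the study.

    ```python
    import numpy as np

    n = 200
    dx = 1.0 / n
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    dt, steps = 0.2 * dx, 600

    u_centered = 1.5 + np.sin(2.0 * np.pi * x)   # keep u > 0 so simple upwinding applies
    u_upwind = u_centered.copy()

    for _ in range(steps):
        # centered differences: formally more accurate, but unstable here
        u_centered = u_centered - dt * u_centered * (
            np.roll(u_centered, -1) - np.roll(u_centered, 1)) / (2.0 * dx)
        # first-order upwind: dissipative but stable for u > 0
        u_upwind = u_upwind - dt * u_upwind * (u_upwind - np.roll(u_upwind, 1)) / dx

    print("centered max |u|:", np.abs(u_centered).max())  # blows up (overflow/NaN)
    print("upwind   max |u|:", np.abs(u_upwind).max())    # stays near its initial bound
    ```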

  4. Definition of boundary and initial conditions in the analysis of saturated ground-water flow systems - An introduction

    USGS Publications Warehouse

    Franke, O. Lehn; Reilly, Thomas E.; Bennett, Gordon D.

    1987-01-01

    Accurate definition of boundary and initial conditions is an essential part of conceptualizing and modeling ground-water flow systems. This report describes the properties of the seven most common boundary conditions encountered in ground-water systems and discusses major aspects of their application. It also discusses the significance and specification of initial conditions and evaluates some common errors in applying this concept to ground-water-system models. An appendix is included that discusses what the solution of a differential equation represents and how the solution relates to the boundary conditions defining the specific problem. This report considers only boundary conditions that apply to saturated ground-water systems.

  5. Predicting areas of sustainable error growth in quasigeostrophic flows using perturbation alignment properties

    NASA Astrophysics Data System (ADS)

    Rivière, G.; Hua, B. L.

    2004-10-01

    A new perturbation initialization method is used to quantify error growth due to inaccuracies of the forecast model initial conditions in a quasigeostrophic box ocean model describing a wind-driven double-gyre circulation. This method is based on recent analytical results on the Lagrangian alignment dynamics of the perturbation velocity vector in quasigeostrophic flows. More specifically, it consists in initializing a unique perturbation from the sole knowledge of the control flow properties at the initial time of the forecast, whose velocity vector orientation satisfies a Lagrangian equilibrium criterion. This Alignment-based Initialization method is hereafter denoted the AI method. In terms of spatial distribution of the errors, the AI error forecast compares favorably with the mean error obtained with a Monte-Carlo ensemble prediction. It is shown that the AI forecast is on average as efficient as the error forecast initialized with the leading singular vector for the palinstrophy norm, and significantly more efficient than that for total energy and enstrophy norms. Furthermore, a more precise examination shows that the AI forecast is systematically relevant for all control flows, whereas the palinstrophy singular vector forecast leads sometimes to very good scores and sometimes to very bad ones. A principal component analysis at the final time of the forecast shows that the AI mode spatial structure is comparable to that of the first eigenvector of the error covariance matrix for a "bred mode" ensemble. Furthermore, the kinetic energy of the AI mode grows at the same constant rate as that of the "bred modes" from the initial time to the final time of the forecast and is therefore characterized by a sustained phase of error growth. In this sense, the AI mode based on the Lagrangian dynamics of the perturbation velocity orientation provides a rationale for the "bred mode" behavior.

  6. Perturbations in the initial soil moisture conditions: Impacts on hydrologic simulation in a large river basin

    NASA Astrophysics Data System (ADS)

    Niroula, Sundar; Halder, Subhadeep; Ghosh, Subimal

    2018-06-01

    Real-time hydrologic forecasting requires near-accurate initial conditions of soil moisture; however, continuous monitoring of soil moisture is not operational in many regions, such as in the Ganga basin, which extends across Nepal, India and Bangladesh. Here, we examine the impacts of perturbations/errors in the initial soil moisture conditions on simulated soil moisture and streamflow in the Ganga basin, and their propagation during the summer monsoon season (June to September). This provides information regarding the minimum duration of model simulation required for attaining model stability. We use the Variable Infiltration Capacity model for hydrological simulations after validation. Multiple hydrologic simulations are performed, each of 21 days, initialized on every 5th day of the monsoon season for deficit, surplus and normal monsoon years. Each of these simulations is performed with the initial soil moisture condition obtained from long-term runs, along with positive and negative perturbations. The time required for the convergence of initial errors is obtained for all the cases. We find quick convergence for the year with high rainfall as well as for the wet spells within a season. We further find high spatial variation in the time required for convergence; regions with high precipitation, such as the Lower Ganga basin, attain convergence at a faster rate. Furthermore, deeper soil layers need more time for convergence. Our analysis is the first attempt at understanding the sensitivity of hydrological simulations of the Ganga basin to initial soil moisture conditions. The results obtained here may be useful in understanding the spin-up requirements for operational hydrologic forecasts.
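
    The convergence experiment has a simple analogue worth sketching: two runs of the same water-balance model started from different soil moisture states forget their initial difference at a model-determined rate. The linear single-store model below is only an illustration (in it the convergence rate is fixed by the drainage timescale, whereas in the nonlinear VIC simulations wet conditions speed up convergence); all parameters are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    days = 120
    rain = rng.exponential(5.0, days) * (rng.random(days) < 0.4)  # mm/day, intermittent

    def simulate(s0, tau=20.0):
        """Daily linear-reservoir water balance: storage gains rain, drains as S/tau."""
        series = np.empty(days)
        s = s0
        for t in range(days):
            s = s + rain[t] - s / tau
            series[t] = s
        return series

    control = simulate(50.0)
    perturbed = simulate(80.0)                 # +30 mm initial soil moisture error
    gap = np.abs(perturbed - control)
    first_day = int(np.argmax(gap < 1.0))      # first day the two runs agree within 1 mm
    print("initial gap of 30 mm converges below 1 mm on day", first_day)
    ```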

  7. Identifying sensitive areas of adaptive observations for prediction of the Kuroshio large meander using a shallow-water model

    NASA Astrophysics Data System (ADS)

    Zou, Guang'an; Wang, Qiang; Mu, Mu

    2016-09-01

    Sensitive areas for prediction of the Kuroshio large meander using a 1.5-layer, shallow-water ocean model were investigated using the conditional nonlinear optimal perturbation (CNOP) and first singular vector (FSV) methods. A series of sensitivity experiments were designed to test the sensitivity of sensitive areas within the numerical model. The following results were obtained: (1) the effect of initial CNOP and FSV patterns in their sensitive areas is greater than that of the same patterns in randomly selected areas, with the effect of the initial CNOP patterns in CNOP sensitive areas being the greatest; (2) both CNOP- and FSV-type initial errors grow more quickly than random errors; (3) the effect of random errors superimposed on the sensitive areas is greater than that of random errors introduced into randomly selected areas, and initial errors in the CNOP sensitive areas have greater effects on final forecasts. These results reveal that the sensitive areas determined using the CNOP are more sensitive than those of FSV and other randomly selected areas. In addition, ideal hindcasting experiments were conducted to examine the validity of the sensitive areas. The results indicate that reduction (or elimination) of CNOP-type errors in CNOP sensitive areas at the initial time has a greater forecast benefit than the reduction (or elimination) of FSV-type errors in FSV sensitive areas. These results suggest that the CNOP method is suitable for determining sensitive areas in the prediction of the Kuroshio large-meander path.
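
    CNOP itself is a constrained optimization: the initial perturbation of bounded norm that maximizes the nonlinear forecast error at the verification time. The sketch below computes a CNOP-like perturbation for the Lorenz-63 model with an off-the-shelf SLSQP optimizer and a multistart loop; the dynamical model, norm, constraint radius, and optimization window are illustrative stand-ins for the shallow-water setting of the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def lorenz63(x, s=10.0, r=28.0, b=8.0 / 3.0):
        return np.array([s * (x[1] - x[0]),
                         x[0] * (r - x[2]) - x[1],
                         x[0] * x[1] - b * x[2]])

    def forecast(x0, steps=100, dt=0.01):
        """RK4 integration of Lorenz-63 over the optimization window."""
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(steps):
            k1 = lorenz63(x); k2 = lorenz63(x + 0.5 * dt * k1)
            k3 = lorenz63(x + 0.5 * dt * k2); k4 = lorenz63(x + dt * k3)
            x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        return x

    x0 = np.array([1.0, 3.0, 15.0])
    base = forecast(x0)
    delta = 0.1                                  # norm bound on the initial error

    def neg_growth(p):                           # maximize final error = minimize negative
        return -np.linalg.norm(forecast(x0 + p) - base)

    cons = {"type": "ineq", "fun": lambda p: delta - np.linalg.norm(p)}
    best = None
    for seed in range(5):                        # multistart: the problem is non-convex
        p0 = np.random.default_rng(seed).normal(size=3)
        p0 *= delta / np.linalg.norm(p0)
        res = minimize(neg_growth, p0, method="SLSQP", constraints=[cons])
        if best is None or res.fun < best.fun:
            best = res
    print("CNOP estimate:", best.x, "final error:", -best.fun)
    ```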

  8. Differential transfer processes in incremental visuomotor adaptation.

    PubMed

    Seidler, Rachel D

    2005-01-01

    Visuomotor adaptive processes were examined by testing transfer of adaptation between similar conditions. Participants made manual aiming movements with a joystick to hit targets on a computer screen, with real-time feedback display of their movement. They adapted to three different rotations of the display in a sequential fashion, with a return to baseline display conditions between rotations. Adaptation was better when participants had prior adaptive experiences. When performance was assessed using direction error (calculated at the time of peak velocity) and initial endpoint error (error before any overt corrective actions), transfer was greater when the final rotation reflected an addition of previously experienced rotations (adaptation order 30 degrees rotation, 15 degrees, 45 degrees) than when it was a subtraction of previously experienced conditions (adaptation order 45 degrees rotation, 15 degrees, 30 degrees). Transfer was equal regardless of adaptation order when performance was assessed with final endpoint error (error following any discrete, corrective actions). These results imply the existence of multiple independent processes in visuomotor adaptation.

  9. Neural evidence for enhanced error detection in major depressive disorder.

    PubMed

    Chiu, Pearl H; Deldin, Patricia J

    2007-04-01

    Anomalies in error processing have been implicated in the etiology and maintenance of major depressive disorder. In particular, depressed individuals exhibit heightened sensitivity to error-related information and negative environmental cues, along with reduced responsivity to positive reinforcers. The authors examined the neural activation associated with error processing in individuals diagnosed with and without major depression and the sensitivity of these processes to modulation by monetary task contingencies. The error-related negativity and error-related positivity components of the event-related potential were used to characterize error monitoring in individuals with major depressive disorder and the degree to which these processes are sensitive to modulation by monetary reinforcement. Nondepressed comparison subjects (N=17) and depressed individuals (N=18) performed a flanker task under two external motivation conditions (i.e., monetary reward for correct responses and monetary loss for incorrect responses) and a nonmonetary condition. After each response, accuracy feedback was provided. The error-related negativity component assessed the degree of anomaly in initial error detection, and the error positivity component indexed recognition of errors. Across all conditions, the depressed participants exhibited greater amplitude of the error-related negativity component, relative to the comparison subjects, and equivalent error positivity amplitude. In addition, the two groups showed differential modulation by task incentives in both components. These data implicate exaggerated early error-detection processes in the etiology and maintenance of major depressive disorder. Such processes may then recruit excessive neural and cognitive resources that manifest as symptoms of depression.

  10. Extension of Liouville Formalism to Postinstability Dynamics

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2003-01-01

    A mathematical formalism has been developed for predicting the postinstability motions of a dynamic system governed by a system of nonlinear equations and subject to initial conditions. Previously, there was no general method for prediction and mathematical modeling of postinstability behaviors (e.g., chaos and turbulence) in such a system. The formalism of nonlinear dynamics does not afford means to discriminate between stable and unstable motions: an additional stability analysis is necessary for such discrimination. However, an additional stability analysis does not suggest any modifications of a mathematical model that would enable the model to describe postinstability motions efficiently. The most important type of instability that necessitates a postinstability description is associated with positive Lyapunov exponents. Such an instability leads to exponential growth of small errors in initial conditions or, equivalently, exponential divergence of neighboring trajectories. The development of the present formalism was undertaken in an effort to remove positive Lyapunov exponents. The means chosen to accomplish this is coupling of the governing dynamical equations with the corresponding Liouville equation that describes the evolution of the flow of error probability. The underlying idea is to suppress the divergences of different trajectories that correspond to different initial conditions, without affecting a target trajectory, which is one that starts with prescribed initial conditions.

  11. Model parameter-related optimal perturbations and their contributions to El Niño prediction errors

    NASA Astrophysics Data System (ADS)

    Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua

    2018-04-01

    Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found to be uniformly positive and restrained in a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for future improvements in numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.

  12. A simple analytical infiltration model for short-duration rainfall

    NASA Astrophysics Data System (ADS)

    Wang, Kaiwen; Yang, Xiaohua; Liu, Xiaomang; Liu, Changming

    2017-12-01

    Many infiltration models have been proposed to simulate the infiltration process. Different initial soil conditions and non-uniform initial water content can lead to infiltration simulation errors, especially for short-duration rainfall (SHR). Few infiltration models are specifically derived to eliminate the errors caused by complex initial soil conditions. We present a simple analytical infiltration model for SHR infiltration simulation, the Short-duration Infiltration Process (SHIP) model. The infiltration simulated by five models (SHIP (high), SHIP (middle), SHIP (low), Philip and Parlange) was compared based on numerical experiments and soil column experiments. In the numerical experiments, the SHIP (middle) and Parlange models had robust solutions for SHR infiltration simulation of 12 typical soils under different initial soil conditions: the absolute values of percent bias were less than 12% and the values of Nash-Sutcliffe efficiency were greater than 0.83. Additionally, in the soil column experiments, the infiltration rate fluctuated in a range because of non-uniform initial water content. The SHIP (high) and SHIP (low) models can simulate an infiltration range, which successfully covered the fluctuation range of the observed infiltration rate. Given the robustness of its solutions and the coverage of the fluctuation range of the infiltration rate, the SHIP model can be integrated into hydrologic models to simulate the SHR infiltration process and benefit flood forecasting.
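
    The record does not reproduce the SHIP equations, but the Philip model used as a benchmark is standard and easy to state. The sketch below implements the two-term Philip infiltration rate and cumulative infiltration for a hypothetical one-hour event; all parameter values are invented.

    ```python
    import numpy as np

    def philip_rate(t, sorptivity=1.2, a_coef=0.3):
        """Philip two-term infiltration rate i(t) = S / (2 sqrt(t)) + A.

        sorptivity S in cm h^-0.5; A (cm/h) is close to the saturated conductivity.
        """
        return 0.5 * sorptivity / np.sqrt(t) + a_coef

    def philip_cumulative(t, sorptivity=1.2, a_coef=0.3):
        """Cumulative infiltration I(t) = S sqrt(t) + A t."""
        return sorptivity * np.sqrt(t) + a_coef * t

    t = np.linspace(0.05, 1.0, 20)           # a one-hour short-duration event
    print("rate (cm/h):", philip_rate(t)[:3], "...")
    print("total infiltration (cm):", philip_cumulative(t[-1]))
    ```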

  13. A Comparison of the Forecast Skills among Three Numerical Models

    NASA Astrophysics Data System (ADS)

    Lu, D.; Reddy, S. R.; White, L. J.

    2003-12-01

    Three numerical weather forecast models, MM5, COAMPS and WRF, operated during summer 2003 through a joint effort of NOAA HU-NCAS and Jackson State University (JSU), have been chosen to study their forecast skill against observations. The models forecast over the same region with the same initialization, boundary conditions, forecast length and spatial resolution. The AVN global dataset has been ingested as initial conditions. A grid resolution of 27 km is chosen to represent current mesoscale models. Forecasts of 36-h length are performed, with output at 12-h intervals. The key parameters used to evaluate forecast skill include 12-h accumulated precipitation, sea level pressure, wind, surface temperature and dew point. Precipitation is evaluated statistically using the conventional skill scores, Threat Score (TS) and Bias Score (BS), for different threshold values based on 12-h rainfall observations, whereas other statistical measures such as Mean Error (ME), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) are applied to the other forecast parameters.
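
    The verification statistics named above have compact definitions: with hits a, false alarms b and misses c from a contingency table, TS = a/(a+b+c) and BS = (a+b)/(a+c). The sketch below computes them, along with ME, MAE and RMSE, on made-up forecast/observation pairs.

    ```python
    import numpy as np

    def skill_scores(fcst_mm, obs_mm, threshold):
        """Threat Score and Bias Score for one precipitation threshold."""
        f = np.asarray(fcst_mm) >= threshold
        o = np.asarray(obs_mm) >= threshold
        hits = np.sum(f & o)
        false_alarms = np.sum(f & ~o)
        misses = np.sum(~f & o)
        ts = hits / (hits + misses + false_alarms)
        bs = (hits + false_alarms) / (hits + misses)
        return ts, bs

    def continuous_scores(fcst, obs):
        """Mean Error, Mean Absolute Error, Root Mean Square Error."""
        err = np.asarray(fcst, float) - np.asarray(obs, float)
        return err.mean(), np.abs(err).mean(), np.sqrt((err ** 2).mean())

    fcst = [0.0, 3.2, 11.0, 25.4, 1.1, 7.9]   # hypothetical 12-h rainfall (mm)
    obs = [0.2, 5.0, 9.7, 18.0, 0.0, 12.3]
    print("TS, BS at 5 mm:", skill_scores(fcst, obs, 5.0))
    print("ME, MAE, RMSE:", continuous_scores(fcst, obs))
    ```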

  14. Simulated forecast error and climate drift resulting from the omission of the upper stratosphere in numerical models

    NASA Technical Reports Server (NTRS)

    Boville, Byron A.; Baumhefner, David P.

    1990-01-01

    Using the NCAR Community Climate Model, Version 1, the forecast error growth and the climate drift resulting from the omission of the upper stratosphere are investigated. In the experiment, the control simulation is a seasonal integration of a medium-horizontal-resolution general circulation model with 30 levels extending from the surface to the upper mesosphere, while the main experiment uses an identical model except that only the bottom 15 levels (below 10 mb) are retained. It is shown that both random and systematic errors develop rapidly in the lower stratosphere, with some local propagation into the troposphere in the 10-30-day time range. The random error growth rate in the troposphere in the case of the altered upper boundary was found to be slightly faster than that for initial-condition uncertainty alone. However, this is not likely to make a significant impact in operational forecast models, because the initial-condition uncertainty is very large.

  15. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation

    PubMed Central

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-01-01

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition. PMID:26999130
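
    The Kalman-versus-particle-filter contrast in this abstract comes from the handling of nonlinearity. The sketch below is a generic bootstrap particle filter on a scalar nonlinear model with a mildly nonlinear measurement, a regime where a linearized filter struggles; it does not implement the paper's NNEM or the MGWD sensor model, and all models and noise levels are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_steps, n_part = 50, 1000
    q, r = 0.1, 0.5                               # process / measurement noise std

    def f(x):
        return x + 0.5 * np.sin(x)                # nonlinear state transition

    def h(x):
        return 0.5 * x + x ** 2 / 20.0            # mildly nonlinear measurement

    # simulate a truth trajectory and noisy measurements
    x_true = np.empty(n_steps); z = np.empty(n_steps)
    x = 0.1
    for t in range(n_steps):
        x = f(x) + q * rng.standard_normal()
        x_true[t] = x
        z[t] = h(x) + r * rng.standard_normal()

    # bootstrap particle filter
    particles = rng.normal(0.0, 2.0, n_part)      # broad prior over the initial state
    estimate = np.empty(n_steps)
    for t in range(n_steps):
        particles = f(particles) + q * rng.standard_normal(n_part)   # propagate
        weights = np.exp(-0.5 * ((z[t] - h(particles)) / r) ** 2)    # likelihood weights
        weights /= weights.sum()
        particles = rng.choice(particles, size=n_part, p=weights)    # resample
        estimate[t] = particles.mean()
    print("filter RMSE:", np.sqrt(np.mean((estimate - x_true) ** 2)))
    ```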

  16. Possible sources of forecast errors generated by the global/regional assimilation and prediction system for landfalling tropical cyclones. Part I: Initial uncertainties

    NASA Astrophysics Data System (ADS)

    Zhou, Feifan; Yamaguchi, Munehiko; Qin, Xiaohao

    2016-07-01

    This paper investigates the possible sources of errors associated with tropical cyclone (TC) tracks forecasted using the Global/Regional Assimilation and Prediction System (GRAPES). The GRAPES forecasts were made for 16 landfalling TCs in the western North Pacific basin during the 2008 and 2009 seasons, with a forecast length of 72 hours, and using the default initial conditions ("initials", hereafter), which are from the NCEP-FNL dataset, as well as ECMWF initials. The forecasts are compared with ECMWF forecasts. The results show that in most TCs, the GRAPES forecasts are improved when using the ECMWF initials compared with the default initials. Compared with the ECMWF initials, the default initials produce lower intensity TCs and a lower intensity subtropical high, but a higher intensity South Asia high and monsoon trough, as well as a higher temperature but lower specific humidity at the TC center. Replacement of the geopotential height and wind fields with the ECMWF initials in and around the TC center at the initial time was found to be the most efficient way to improve the forecasts. In addition, TCs that showed the greatest improvement in forecast accuracy usually had the largest initial uncertainties in TC intensity and were usually in the intensifying phase. The results demonstrate the importance of the initial intensity for TC track forecasts made using GRAPES, and indicate the model is better in describing the intensifying phase than the decaying phase of TCs. Finally, the limit of the improvement indicates that the model error associated with GRAPES forecasts may be the main cause of poor forecasts of landfalling TCs. Thus, further examinations of the model errors are required.

  17. Daniel K. Inouye Solar Telescope: computational fluid dynamic analyses and evaluation of the air knife model

    NASA Astrophysics Data System (ADS)

    McQuillen, Isaac; Phelps, LeEllen; Warner, Mark; Hubbard, Robert

    2016-08-01

    Implementation of an air curtain at the thermal boundary between conditioned and ambient spaces allows for observation over wavelength ranges not practical when using optical glass as a window. The air knife model of the Daniel K. Inouye Solar Telescope (DKIST) project, a 4-meter solar observatory that will be built on Haleakalā, Hawai'i, deploys such an air curtain while also supplying ventilation through the ceiling of the coudé laboratory. The findings of computational fluid dynamics (CFD) analysis and subsequent changes to the air knife model are presented. Major design constraints include adherence to the Interface Control Document (ICD), separation of ambient and conditioned air, unidirectional outflow into the coudé laboratory, integration of a deployable glass window, and maintenance and accessibility requirements. The optimized design of the air knife successfully holds the full 12 Pa backpressure under temperature gradients of up to 20°C while maintaining unidirectional outflow. This is a significant improvement upon the 0.25 Pa pressure differential that the initial configuration, tested by Linden and Phelps, indicated the curtain could hold. CFD post-processing, developed by Vogiatzis, is validated against interferometry results of the initial air knife seeing evaluation, performed by Hubbard and Schoening. This is done by developing a CFD simulation of the initial experiment and using Vogiatzis' method to calculate the error introduced along the optical path. Seeing errors for both temperature differentials tested in the initial experiment match well with the seeing results obtained from the CFD analysis and thus validate the post-processing model. Application of this model to the realizable air knife assembly yields seeing errors that are well within the error budget under which the air knife interface falls, even with a temperature differential of 20°C between laboratory and ambient spaces. With the ambient temperature set to 0°C and the conditioned temperature set to 20°C, representing the worst-case temperature gradient, the spatial rms wavefront error is 0.178 waves (88.69 nm at λ = 500 nm).

  19. Attenuation-emission alignment in cardiac PET∕CT based on consistency conditions

    PubMed Central

    Alessio, Adam M.; Kinahan, Paul E.; Champley, Kyle M.; Caldwell, James H.

    2010-01-01

    Purpose: In cardiac PET and PET∕CT imaging, misaligned transmission and emission images are a common problem due to respiratory and cardiac motion. This misalignment leads to erroneous attenuation correction and can cause errors in perfusion mapping and quantification. This study develops and tests a method for automated alignment of attenuation and emission data. Methods: The CT-based attenuation map is iteratively transformed until the attenuation corrected emission data minimize an objective function based on the Radon consistency conditions. The alignment process is derived from previous work by Welch et al. [“Attenuation correction in PET using consistency information,” IEEE Trans. Nucl. Sci. 45, 3134–3141 (1998)] for stand-alone PET imaging. The process was evaluated with simulated data and measured patient data from multiple cardiac ammonia PET∕CT exams. The alignment procedure was applied to simulations of five different noise levels with three different initial attenuation maps. For the measured patient data, the alignment procedure was applied to eight attenuation-emission combinations with initially acceptable alignment and eight combinations with unacceptable alignment. The initially acceptable alignment studies were forced out of alignment by a known amount and quantitatively evaluated for alignment and perfusion accuracy. The initially unacceptable studies were compared to the proposed aligned images in a blinded side-by-side review. Results: The proposed automatic alignment procedure reduced errors in the simulated data and iteratively approached global minimum solutions with the patient data. In simulations, the alignment procedure reduced the root mean square error to less than 5 mm and reduced the axial translation error to less than 1 mm. In patient studies, the procedure reduced the translation error by >50% and resolved perfusion artifacts after a known misalignment for the eight initially acceptable patient combinations. The side-by-side review of the proposed aligned attenuation-emission maps and initially misaligned attenuation-emission maps revealed that reviewers preferred the proposed aligned maps in all cases, except one inconclusive case. Conclusions: The proposed alignment procedure offers an automatic method to reduce attenuation correction artifacts in cardiac PET∕CT and provides a viable supplement to subjective manual realignment tools. PMID:20384256

  20. Attitude guidance and tracking for spacecraft with two reaction wheels

    NASA Astrophysics Data System (ADS)

    Biggs, James D.; Bai, Yuliang; Henninger, Helen

    2018-04-01

    This paper addresses the guidance and tracking problem for a rigid spacecraft using two reaction wheels (RWs). The guidance problem is formulated as an optimal control problem on the special orthogonal group SO(3). The optimal motion is solved analytically as a function of time and is used to reduce the original guidance problem to one of computing the minimum of a nonlinear function. A tracking control using two RWs is developed that extends previous singular quaternion stabilisation controls to tracking controls on the rotation group. The controller is proved to locally asymptotically track the generated reference motions using Lyapunov's direct method. Simulations of a 3U CubeSat demonstrate that this tracking control is robust to initial rotation errors and angular velocity errors in the controlled axis. For initial angular velocity errors in the uncontrolled axis and under significant disturbances, the control fails to track. However, the singular tracking control combined with a nano-magnetic torquer, which simply damps the angular velocity in the uncontrolled axis, is shown to provide a practical control method for tracking in the presence of disturbances and initial condition errors.

  1. Neural Correlates of User-initiated Motor Success and Failure - A Brain-Computer Interface Perspective.

    PubMed

    Yazmir, Boris; Reiner, Miriam

    2018-05-15

    Any motor action is, by nature, potentially accompanied by human errors. In order to facilitate development of error-tailored Brain-Computer Interface (BCI) correction systems, we focused on internal, human-initiated errors, and investigated EEG correlates of user outcome successes and errors during a continuous 3D virtual tennis game against a computer player. We used a multisensory, 3D, highly immersive environment. Missing and repelling the tennis ball were considered as 'error' (miss) and 'success' (repel), respectively. Unlike most previous studies, where the environment "encouraged" the participant to make a mistake, here errors happened naturally, resulting from motor-perceptual-cognitive processes of incorrect estimation of the ball kinematics, and can be regarded as user internal, self-initiated errors. Results show distinct and well-defined Event-Related Potentials (ERPs), embedded in the ongoing EEG, that differ across conditions by waveforms, scalp signal distribution maps, source estimation results (sLORETA) and time-frequency patterns, establishing a series of typical features that allow valid discrimination between user internal outcome success and error. The significant delay in latency between positive peaks of error- and success-related ERPs suggests a cross-talk between top-down and bottom-up processing, represented by an outcome recognition process, in the context of the game world. Success-related ERPs had a central scalp distribution, while error-related ERPs were centro-parietal. The unique characteristics and sharp differences between EEG correlates of error/success provide the crucial components for an improved BCI system. The features of the EEG waveform can be used to detect user action outcome, to be fed into the BCI correction system. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  2. Error affect inoculation for a complex decision-making task.

    PubMed

    Tabernero, Carmen; Wood, Robert E

    2009-05-01

    Individuals bring knowledge, implicit theories, and goal orientations to group meetings. Group decisions arise out of the exchange of these orientations. This research explores how a trainee's exploratory and deliberate process (an incremental theory and learning goal orientation) impacts the effectiveness of individual and group decision-making processes. The effectiveness of this training program is compared with that of another program that included error affect inoculation (EAI). Subjects were 40 Spanish policemen in a training course. They were distributed across two training conditions for an individual and group decision-making task. In one condition, individuals received Self-Guided Exploration plus Deliberation Process instructions, which emphasised exploring the options and testing hypotheses. In the other condition, individuals also received instructions based on error affect inoculation, which emphasised positive affective reactions to errors and mistakes when making decisions. Results show that the quality of decisions increases when groups share their reasoning. The EAI intervention promotes sharing information, flexible initial viewpoints, and improved quality of group decisions. Implications and future directions are discussed.

  3. Error-growth dynamics and predictability of surface thermally induced atmospheric flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, X.; Pielke, R.A.

    1993-09-01

    Using the CSU Regional Atmospheric Modeling System (RAMS) in its nonhydrostatic and compressible configuration, over 200 two-dimensional simulations with Δx = 2 km and Δx = 100 m are performed to study in detail the initial adjustment process and the error-growth dynamics of surface thermally induced circulation, including the sensitivity to initial conditions, boundary conditions, and model parameters, and to study the predictability as a function of the size of surface heat patches under a calm mean wind. It is found that the error growth is not sensitive to the characteristics of the initial perturbations. The numerical smoothing has a strong impact on the initial adjustment process and on the error-growth dynamics. Regarding the predictability and flow structures, it is found that the vertical velocity field is strongly affected by the mean wind, and the flow structures are quite sensitive to the initial soil water content. The transition from organized flow to the situation in which fluxes are dominated by noncoherent turbulent eddies under a calm mean wind is quantitatively evaluated, and this transition is different for different variables. The relationship between the predictability of a realization and of an ensemble average is discussed. The predictability and the coherent circulations modulated by the surface inhomogeneities are also studied by computing the autocorrelations and the power spectra. Three-dimensional mesoscale and large-eddy simulations are performed to verify the above results. It is found that the two-dimensional mesoscale (or fine-resolution) simulation yields very close or similar results regarding predictability to those from the three-dimensional mesoscale (or large-eddy) simulation. The horizontally averaged quantities based on two-dimensional fine-resolution simulations are insensitive to initial perturbations and agree with those based on three-dimensional large-eddy simulations. 87 refs., 25 figs.

  4. Data assimilation method based on the constraints of confidence region

    NASA Astrophysics Data System (ADS)

    Li, Yong; Li, Siming; Sheng, Yao; Wang, Luheng

    2018-03-01

    The ensemble Kalman filter (EnKF) is a distinguished data assimilation method that is widely used and studied in various fields, including meteorology and oceanography. However, due to limited sample size or an imprecise dynamics model, the forecast error variance is often underestimated, which further leads to the phenomenon of filter divergence. Additionally, the assimilation results of the initial stage are poor if the initial condition settings differ greatly from the true initial state. To address these problems, a variance inflation procedure is usually adopted. In this paper, we propose a new method, called EnCR, based on the constraints of a confidence region constructed from the observations, to estimate the inflation parameter of the forecast error variance in the EnKF method. In the new method, the state estimate is more robust to both inaccurate forecast models and initial condition settings. The new method is compared with other adaptive data assimilation methods in the Lorenz-63 and Lorenz-96 models under various model parameter settings. The simulation results show that the new method performs better than the competing methods.
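    The abstract does not spell out the EnCR procedure, so the following is only a sketch of the underlying idea under stated assumptions: choose a multiplicative inflation factor for the forecast perturbations such that the normalized innovation statistic falls inside a chi-square confidence region built from the observations.

    ```python
    # Illustrative sketch only (not the paper's exact EnCR algorithm):
    # pick an inflation factor so the normalized innovation statistic lies
    # inside a chi-square confidence region constructed from the observations.
    import numpy as np
    from scipy.stats import chi2

    def inflate_to_confidence(Xf, y, H, R, alpha=0.05):
        """Xf: (n, N) forecast ensemble; y: (m,) obs; H: (m, n); R: (m, m)."""
        n, N = Xf.shape
        xbar = Xf.mean(axis=1, keepdims=True)
        A = (Xf - xbar) / np.sqrt(N - 1)        # perturbation matrix
        d = y - (H @ xbar).ravel()              # innovation vector
        m = len(y)
        lo, hi = chi2.ppf(alpha / 2, m), chi2.ppf(1 - alpha / 2, m)
        lam = 1.0
        for _ in range(50):                     # simple fixed-point search
            S = lam * (H @ A) @ (H @ A).T + R   # innovation covariance
            stat = d @ np.linalg.solve(S, d)    # normalized innovation statistic
            if lo <= stat <= hi:
                break
            lam *= 1.1 if stat > hi else 0.95   # inflate if innovations too large
        return np.sqrt(lam)                     # factor applied to perturbations
    ```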

  5. Potential, velocity, and density fields from sparse and noisy redshift-distance samples - Method

    NASA Technical Reports Server (NTRS)

    Dekel, Avishai; Bertschinger, Edmund; Faber, Sandra M.

    1990-01-01

    A method for recovering the three-dimensional potential, velocity, and density fields from large-scale redshift-distance samples is described. Galaxies are taken as tracers of the velocity field, not of the mass. The density field and the initial conditions are calculated using an iterative procedure that applies the no-vorticity assumption at an initial time and uses the Zel'dovich approximation to relate initial and final positions of particles on a grid. The method is tested using a cosmological N-body simulation 'observed' at the positions of real galaxies in a redshift-distance sample, taking into account their distance measurement errors. Malmquist bias and other systematic and statistical errors are extensively explored using both analytical techniques and Monte Carlo simulations.

  6. Estimation of chaotic coupled map lattices using symbolic vector dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Pei, Wenjiang; Cheung, Yiu-ming; Shen, Yi; He, Zhenya

    2010-01-01

    In [K. Wang, W.J. Pei, Z.Y. He, Y.M. Cheung, Phys. Lett. A 367 (2007) 316], a symbolic vector dynamics based method was proposed for initial condition estimation in an additive white Gaussian noise environment. The estimation precision of this method is determined by the symbolic errors of the symbolic vector sequence obtained by symbolizing the received signal. This Letter further develops the symbolic vector dynamical estimation method. We correct symbolic errors with the backward vector and the values estimated by using different symbols, and thus the estimation precision can be improved. Both theoretical and experimental results show that this algorithm enables us to recover the initial condition of a coupled map lattice exactly in both noisy and noise-free cases. We thereby provide novel analytical techniques for understanding turbulence in coupled map lattices.
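    For readers unfamiliar with the setting, here is a minimal coupled map lattice (logistic local map with diffusive coupling) together with the symbolization step that produces the symbolic vector sequence from a noisy received signal; the backward-vector correction itself is beyond a short sketch, and all parameter values are illustrative.

    ```python
    # A minimal coupled map lattice and the symbolization step the Letter
    # builds on: thresholding a noisy received signal into symbols. The
    # backward-vector error correction is omitted; this shows the setting.
    import numpy as np

    def cml_step(x, eps=0.1, r=4.0):
        f = r * x * (1.0 - x)                   # local logistic map
        left, right = np.roll(f, 1), np.roll(f, -1)
        return (1 - eps) * f + 0.5 * eps * (left + right)

    rng = np.random.default_rng(1)
    x = rng.random(16)                          # initial condition to estimate
    signal = []
    for _ in range(64):
        x = cml_step(x)
        signal.append(x.copy())
    signal = np.array(signal)

    received = signal + rng.normal(0, 0.05, signal.shape)  # AWGN channel
    symbols = (received > 0.5).astype(int)      # symbolic vector sequence
    # Symbol errors occur where noise pushes a sample across the threshold:
    true_symbols = (signal > 0.5).astype(int)
    print("symbol error rate:", np.mean(symbols != true_symbols))
    ```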

  7. Emotion blocks the path to learning under stereotype threat

    PubMed Central

    Good, Catherine; Whiteman, Ronald C.; Maniscalco, Brian; Dweck, Carol S.

    2012-01-01

    Gender-based stereotypes undermine females’ performance on challenging math tests, but how do they influence their ability to learn from the errors they make? Females under stereotype threat or non-threat were presented with accuracy feedback after each problem on a GRE-like math test, followed by an optional interactive tutorial that provided step-wise problem-solving instruction. Event-related potentials tracked the initial detection of the negative feedback following errors [feedback related negativity (FRN), P3a], as well as any subsequent sustained attention/arousal to that information [late positive potential (LPP)]. Learning was defined as success in applying tutorial information to correction of initial test errors on a surprise retest 24 h later. Under non-threat conditions, emotional responses to negative feedback did not curtail exploration of the tutor, and the amount of tutor exploration predicted learning success. In the stereotype threat condition, however, greater initial salience of the failure (FRN) predicted less exploration of the tutor, and sustained attention to the negative feedback (LPP) predicted poor learning from what was explored. Thus, under stereotype threat, emotional responses to negative feedback predicted both disengagement from learning and interference with learning attempts. We discuss the importance of emotion regulation in successful rebound from failure for stigmatized groups in stereotype-salient environments. PMID:21252312

  8. Emotion blocks the path to learning under stereotype threat.

    PubMed

    Mangels, Jennifer A; Good, Catherine; Whiteman, Ronald C; Maniscalco, Brian; Dweck, Carol S

    2012-02-01

    Gender-based stereotypes undermine females' performance on challenging math tests, but how do they influence their ability to learn from the errors they make? Females under stereotype threat or non-threat were presented with accuracy feedback after each problem on a GRE-like math test, followed by an optional interactive tutorial that provided step-wise problem-solving instruction. Event-related potentials tracked the initial detection of the negative feedback following errors [feedback related negativity (FRN), P3a], as well as any subsequent sustained attention/arousal to that information [late positive potential (LPP)]. Learning was defined as success in applying tutorial information to correction of initial test errors on a surprise retest 24 h later. Under non-threat conditions, emotional responses to negative feedback did not curtail exploration of the tutor, and the amount of tutor exploration predicted learning success. In the stereotype threat condition, however, greater initial salience of the failure (FRN) predicted less exploration of the tutor, and sustained attention to the negative feedback (LPP) predicted poor learning from what was explored. Thus, under stereotype threat, emotional responses to negative feedback predicted both disengagement from learning and interference with learning attempts. We discuss the importance of emotion regulation in successful rebound from failure for stigmatized groups in stereotype-salient environments.

  9. Rapid and accurate prediction of degradant formation rates in pharmaceutical formulations using high-performance liquid chromatography-mass spectrometry.

    PubMed

    Darrington, Richard T; Jiao, Jim

    2004-04-01

    Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at intended storage conditions or initial rate determination of degradants under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment because often <0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial rate method in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations often introduces large errors in shelf-life prediction. In this study, the shelf life prediction of a model pharmaceutical preparation utilizing sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition is proposed. This method was compared to traditional shelf life prediction approaches in terms of time required to predict shelf life and associated error in shelf life estimation. Results demonstrated that the proposed LC/MS method using initial rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimation compared to the other methods evaluated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association.
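    The initial-rates idea reduces to a simple regression: fit the degradant growth measured by LC/MS at the intended storage condition, then project when it crosses the specification limit. The numbers below are invented placeholders, not data from the study.

    ```python
    # Hedged illustration of the initial-rates approach: regress degradant
    # level against time at the intended storage condition and project the
    # crossing of the specification limit. All values are invented.
    import numpy as np

    t_days = np.array([0, 7, 14, 28, 56])                         # storage time
    degradant_pct = np.array([0.00, 0.011, 0.023, 0.044, 0.091])  # % area by LC/MS

    slope, intercept = np.polyfit(t_days, degradant_pct, 1)       # zero-order fit
    spec_limit = 0.5                                              # % degradant allowed
    shelf_life_days = (spec_limit - intercept) / slope
    print(f"zero-order rate: {slope:.4f} %/day; "
          f"projected shelf life: {shelf_life_days:.0f} days")
    ```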

  10. Numerical simulation of a low-lying barrier island's morphological response to Hurricane Katrina

    USGS Publications Warehouse

    Lindemer, C.A.; Plant, N.G.; Puleo, J.A.; Thompson, D.M.; Wamsley, T.V.

    2010-01-01

    Tropical cyclones that enter or form in the Gulf of Mexico generate storm surge and large waves that impact low-lying coastlines along the Gulf Coast. The Chandeleur Islands, located 161 km east of New Orleans, Louisiana, have endured numerous hurricanes that have passed nearby. Hurricane Katrina (landfall near Waveland, MS, 29 Aug 2005) caused dramatic changes to the island elevation and shape. In this paper the predictability of hurricane-induced barrier island erosion and accretion is evaluated using a coupled hydrodynamic and morphodynamic model known as XBeach. Pre- and post-storm island topography was surveyed with an airborne lidar system. Numerical simulations utilized realistic surge and wave conditions determined from larger-scale hydrodynamic models. Simulations included model sensitivity tests with varying grid size and temporal resolutions. Model-predicted bathymetry/topography and post-storm survey data both showed similar patterns of island erosion, such as increased dissection by channels. However, the model underpredicted the magnitude of erosion. Potential causes for the underprediction include (1) errors in the initial conditions (the initial bathymetry/topography was measured three years prior to Katrina), (2) errors in the forcing conditions (a result of our omission of storms prior to Katrina and/or errors in Katrina storm conditions), and/or (3) physical processes that were omitted from the model (e.g., inclusion of sediment variations and bio-physical processes).

  11. Decomposition of Sources of Errors in Seasonal Streamflow Forecasts in a Rainfall-Runoff Dominated Basin

    NASA Astrophysics Data System (ADS)

    Sinha, T.; Arumugam, S.

    2012-12-01

    Seasonal streamflow forecasts contingent on climate forecasts can be effectively utilized in updating water management plans and optimizing the generation of hydroelectric power. Streamflow in rainfall-runoff dominated basins depends critically on forecasted precipitation, in contrast to snow dominated basins, where initial hydrological conditions (IHCs) are more important. Since precipitation forecasts from Atmosphere-Ocean General Circulation Models are available at coarse scale (~2.8° by 2.8°), spatial and temporal downscaling of such forecasts is required to drive land surface models, which typically run on finer spatial and temporal scales. Consequently, errors are introduced from multiple sources at various stages in predicting seasonal streamflow. Therefore, in this study, we address the following science questions: 1) How do we attribute the errors in monthly streamflow forecasts to various sources: (i) model errors, (ii) spatio-temporal downscaling, (iii) imprecise initial conditions, (iv) no forecasts, and (v) imprecise forecasts? 2) How do monthly streamflow forecast errors propagate with lead time over various seasons? In this study, the Variable Infiltration Capacity (VIC) model is calibrated over the Apalachicola River at Chattahoochee, FL, in the southeastern US and implemented with observed 1/8° daily forcings to estimate reference streamflow during 1981 to 2010. The VIC model is then forced with different schemes under updated IHCs prior to the forecasting period to estimate the relative mean square errors due to: a) temporal disaggregation, b) spatial downscaling, c) reverse Ensemble Streamflow Prediction (imprecise IHCs), d) ESP (no forecasts), and e) ECHAM4.5 precipitation forecasts. Finally, error propagation under the different schemes is analyzed at different lead times over different seasons.
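    The attribution bookkeeping amounts to scoring each forecasting scheme by its mean square error relative to the reference simulation, so differences between schemes isolate individual error sources. A hedged sketch, with synthetic arrays standing in for the VIC monthly streamflow outputs:

    ```python
    # Sketch of the error-attribution bookkeeping described above. Synthetic
    # arrays replace the VIC outputs; scheme names follow the abstract.
    import numpy as np

    rng = np.random.default_rng(11)
    reference = rng.gamma(2.0, 50.0, size=(30, 12))   # 30 years x 12 months

    def with_noise(scale):
        # hypothetical scheme output: reference plus scheme-specific error
        return reference + rng.normal(0, scale, reference.shape)

    schemes = {
        "temporal_disaggregation": with_noise(5.0),
        "spatial_downscaling": with_noise(8.0),
        "reverse_ESP_imprecise_IHCs": with_noise(15.0),
        "ESP_no_forecast": with_noise(25.0),
        "ECHAM4.5_forecasts": with_noise(20.0),
    }

    def relative_mse(forecast):
        return np.mean((forecast - reference) ** 2) / np.var(reference)

    for name, flow in schemes.items():
        print(f"{name:27s} relative MSE = {relative_mse(flow):.3f}")
    ```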

  12. Quantifying Information Gain from Dynamic Downscaling Experiments

    NASA Astrophysics Data System (ADS)

    Tian, Y.; Peters-Lidard, C. D.

    2015-12-01

    Dynamic climate downscaling experiments are designed to produce information at higher spatial and temporal resolutions. Such additional information is generated from the low-resolution initial and boundary conditions via the predictive power of the physical laws. However, errors and uncertainties in the initial and boundary conditions can be propagated and even amplified in the downscaled simulations. Additionally, the limit of predictability in nonlinear dynamical systems will dampen the information gain, even if the initial and boundary conditions were error-free. Thus it is critical to quantitatively define and measure the amount of information gained from dynamic downscaling experiments, to better understand and appreciate their potential and limitations. We present a scheme to objectively measure the information gain from such experiments. The scheme is based on information theory, and we argue that if a downscaling experiment is to exhibit value, it has to produce more information than can be simply inferred from information sources already available. These information sources include the initial and boundary conditions, the coarse-resolution model in which the higher-resolution models are embedded, and the same set of physical laws. These existing information sources define an "information threshold" as a function of the spatial and temporal resolution, and this threshold serves as a benchmark for quantifying the information gain from the downscaling experiments, or any other approach. For a downscaling experiment to show any value, the information it produces has to be above this threshold. A recent NASA-supported downscaling experiment is used as an example to illustrate the application of this scheme.

  13. Optimal strategies for throwing accurately

    PubMed Central

    2017-01-01

    The accuracy of throwing in games and sports is governed by how errors in planning and initial conditions are propagated by the dynamics of the projectile. In the simplest setting, the projectile path is typically described by a deterministic parabolic trajectory which has the potential to amplify noisy launch conditions. By analysing how parabolic trajectories propagate errors, we show how to devise optimal strategies for a throwing task demanding accuracy. Our calculations explain observed speed–accuracy trade-offs, preferred throwing style of overarm versus underarm, and strategies for games such as dart throwing, despite having left out most biological complexities. As our criteria for optimal performance depend on the target location, shape and the level of uncertainty in planning, they also naturally suggest an iterative scheme to learn throwing strategies by trial and error. PMID:28484641

  14. Optimal strategies for throwing accurately

    NASA Astrophysics Data System (ADS)

    Venkadesan, M.; Mahadevan, L.

    2017-04-01

    The accuracy of throwing in games and sports is governed by how errors in planning and initial conditions are propagated by the dynamics of the projectile. In the simplest setting, the projectile path is typically described by a deterministic parabolic trajectory which has the potential to amplify noisy launch conditions. By analysing how parabolic trajectories propagate errors, we show how to devise optimal strategies for a throwing task demanding accuracy. Our calculations explain observed speed-accuracy trade-offs, preferred throwing style of overarm versus underarm, and strategies for games such as dart throwing, despite having left out most biological complexities. As our criteria for optimal performance depend on the target location, shape and the level of uncertainty in planning, they also naturally suggest an iterative scheme to learn throwing strategies by trial and error.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naughton, M.J.; Bourke, W.; Browning, G.L.

    The convergence of spectral model numerical solutions of the global shallow-water equations is examined as a function of the time step and the spectral truncation. The contributions to the errors due to the spatial and temporal discretizations are separately identified and compared. Numerical convergence experiments are performed with the inviscid equations from smooth (Rossby-Haurwitz wave) and observed (R45 atmospheric analysis) initial conditions, and also with the diffusive shallow-water equations. Results are compared with the forced inviscid shallow-water equations case studied by Browning et al. Reduction of the time discretization error by the removal of fast waves from the solution using initialization is shown. The effects of forcing and diffusion on the convergence are discussed. Time truncation errors are found to dominate when a feature is large scale and well resolved; spatial truncation errors dominate for small-scale features and also for large scales after the small scales have affected them. Possible implications of these results for global atmospheric modeling are discussed. 31 refs., 14 figs., 4 tabs.

  16. Motivational state controls the prediction error in Pavlovian appetitive-aversive interactions.

    PubMed

    Laurent, Vincent; Balleine, Bernard W; Westbrook, R Frederick

    2018-01-01

    Contemporary theories of learning emphasize the role of a prediction error signal in driving learning, but the nature of this signal remains hotly debated. Here, we used Pavlovian conditioning in rats to investigate whether primary motivational and emotional states interact to control prediction error. We initially generated cues that positively or negatively predicted an appetitive food outcome. We then assessed how these cues modulated aversive conditioning when a novel cue was paired with a foot shock. We found that a positive predictor of food enhances, whereas a negative predictor of that same food impairs, aversive conditioning. Critically, we also showed that the enhancement produced by the positive predictor is removed by reducing the value of its associated food. In contrast, the impairment triggered by the negative predictor remains insensitive to devaluation of its associated food. These findings provide compelling evidence that the motivational value attributed to a predicted food outcome can directly control appetitive-aversive interactions and, therefore, that motivational processes can modulate emotional processes to generate the final error term on which subsequent learning is based. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. More on Projectile Motion.

    ERIC Educational Resources Information Center

    Molina, M. I.

    2000-01-01

    Mathematically explains why the range of a projectile is most insensitive to aiming errors when the initial angle is close to 45 degrees, whereas other observables such as maximum height or flight time are most insensitive for near-vertical launching conditions. (WRM)
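    The claim follows from the standard flat-ground range formula; a one-line calculation (a textbook result, not reproduced from the article) makes it explicit:

    ```latex
    % Flat-ground projectile range and its sensitivity to launch angle:
    \begin{align*}
      R(\theta) = \frac{v_0^2}{g}\,\sin 2\theta,
      \qquad
      \frac{dR}{d\theta} = \frac{2 v_0^2}{g}\,\cos 2\theta ,
    \end{align*}
    % so dR/dtheta vanishes at theta = 45 degrees: first-order aiming
    % errors leave the range unchanged. By contrast, the maximum height
    % H = v_0^2 \sin^2\theta / (2g) and flight time T = 2 v_0 \sin\theta / g
    % are stationary with respect to theta only near theta = 90 degrees,
    % i.e. near-vertical launches.
    ```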

  18. Evaluation of Planetary Boundary Layer Scheme Sensitivities for the Purpose of Parameter Estimation

    EPA Science Inventory

    Meteorological model errors caused by imperfect parameterizations generally cannot be overcome simply by optimizing initial and boundary conditions. However, advanced data assimilation methods are capable of extracting significant information about parameterization behavior from ...

  19. [Degradation kinetics of ozone oxidation on high concentration of humic substances].

    PubMed

    Zheng, Ke; Zhou, Shao-Qi; Yang, Mei-Mei

    2012-03-01

    The kinetics of humic substance (HS) degradation by ozone oxidation were investigated. The effects of O3 dosage, initial pH, temperature and initial concentration of HS were studied. Under the conditions of 3.46 g x h(-1) ozone dosage, 1 000 mg x L(-1) initial HS, initial pH 8.0 and 303 K temperature, the removal efficiency of HS reached 89.04% at 30 min. The empirical kinetic equation for the ozonation degradation of landfill leachate under the conditions of 1.52-6.10 g x h(-1) ozone dosage, 250-1 000 mg x L(-1) initial HS, initial pH 2.0-10.0 and temperature of 283-323 K fitted the experimental data well (average relative error of 7.62%), with a low activation energy E(a) = 1.43 x 10(4) J x mol(-1).

  20. Adaptation of catch-up saccades during the initiation of smooth pursuit eye movements.

    PubMed

    Schütz, Alexander C; Souto, David

    2011-04-01

    Reduction of retinal speed and alignment of the line of sight are believed to be the respective primary functions of smooth pursuit and saccadic eye movements. As eye muscle strength can change in the short term, continuous adjustments of motor signals are required to achieve constant accuracy. While adaptation of saccade amplitude to systematic position errors has been extensively studied, we know less about the adaptive response to position errors during smooth pursuit initiation, when target motion has to be taken into account to program saccades, and when position errors at the saccade endpoint could also be corrected by increasing pursuit velocity. To study short-term adaptation (250 adaptation trials) of tracking eye movements, we introduced a position error during the first catch-up saccade made during the initiation of smooth pursuit, in a ramp-step-ramp paradigm. The target position was shifted either in the direction of the horizontally moving target (forward step), against it (backward step), or orthogonally to it (vertical step). Results indicate adaptation of catch-up saccade amplitude to backward and forward steps. With vertical steps, saccades became oblique, through an inflexion of the early or late saccade trajectory. With a similar time course, post-saccadic pursuit velocity increased in the step direction, adding further evidence that under some conditions pursuit and saccades can act synergistically to reduce position errors.

  1. Stochastic Forcing for High-Resolution Regional and Global Ocean and Atmosphere-Ocean Coupled Ensemble Forecast System

    NASA Astrophysics Data System (ADS)

    Rowley, C. D.; Hogan, P. J.; Martin, P.; Thoppil, P.; Wei, M.

    2017-12-01

    An extended-range ensemble forecast system is being developed under the US Navy Earth System Prediction Capability (ESPC), and a global ocean ensemble generation capability to represent uncertainty in the ocean initial conditions has been developed. At extended forecast times, uncertainty due to model error overtakes initial condition uncertainty as the primary source of forecast uncertainty. Recently, stochastic parameterization or stochastic forcing techniques have been applied to represent model error in research and operational atmospheric, ocean, and coupled ensemble forecasts. A simple stochastic forcing technique has been developed for application to US Navy high-resolution regional and global ocean models, for use in ocean-only and coupled atmosphere-ocean-ice-wave ensemble forecast systems. Perturbation forcing is added to the tendency equations for state variables, with the forcing defined by random 3- or 4-dimensional fields with horizontal, vertical, and temporal correlations specified to characterize different possible kinds of error. Here, we demonstrate the stochastic forcing in regional and global ensemble forecasts with varying perturbation amplitudes and length and time scales, and assess the change in ensemble skill as measured by a range of deterministic and probabilistic metrics.
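    A minimal sketch of one common construction of such forcing, assuming Gaussian-smoothed spatial correlations and an AR(1) temporal evolution; the grid, scales, and amplitude are placeholders rather than the Navy system's settings:

    ```python
    # Sketch: a 2D random forcing field with prescribed horizontal
    # correlation (Gaussian smoothing) evolved in time as an AR(1)
    # process, to be added to a state-variable tendency.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(7)
    nx, ny = 128, 96
    L = 8.0                 # horizontal correlation length [grid points]
    tau = 48                # decorrelation time [time steps]
    amp = 0.05              # perturbation amplitude [state units per step]
    phi = np.exp(-1.0 / tau)

    def correlated_noise():
        w = gaussian_filter(rng.standard_normal((ny, nx)), sigma=L)
        return w / w.std()  # unit variance after smoothing

    forcing = correlated_noise()
    for step in range(1000):
        # AR(1) update keeps the field temporally correlated.
        forcing = phi * forcing + np.sqrt(1 - phi**2) * correlated_noise()
        # tendency update of a state variable T (placeholder dynamics):
        # T += dt * (dynamics(T) + amp * forcing)

    print("forcing field std:", forcing.std())
    ```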

  2. Seasonal prediction of Indian summer monsoon rainfall in NCEP CFSv2: forecast and predictability error

    NASA Astrophysics Data System (ADS)

    Pokhrel, Samir; Saha, Subodh Kumar; Dhakate, Ashish; Rahman, Hasibur; Chaudhari, Hemantkumar S.; Salunke, Kiran; Hazra, Anupam; Sujith, K.; Sikka, D. R.

    2016-04-01

    A detailed analysis of sensitivity to the initial condition for the simulation of the Indian summer monsoon using retrospective forecasts by the latest version of the Climate Forecast System version-2 (CFSv2) is carried out. This study primarily focuses on the tropical region of the Indian and Pacific Ocean basins, with special emphasis on the Indian land region. The simulated seasonal mean and the inter-annual standard deviations of rainfall, upper- and lower-level atmospheric circulations and Sea Surface Temperature (SST) tend to be more skillful as the forecast lead time decreases (from a 5-month lead to a 0-month lead, i.e. L5-L0). In general, the spatial correlation (bias) increases (decreases) as the forecast lead time decreases. This is further substantiated by their averaged values over the selected study regions in the Indian and Pacific Ocean basins. The tendency of the model bias to increase (decrease) with increasing (decreasing) forecast lead time also indicates the dynamical drift of the model. The large-scale lower-level circulation (850 hPa) shows enhancement of anomalous westerlies (easterlies) over the tropical Indian Ocean (western Pacific Ocean), which indicates the enhancement of model error with decreasing lead time. In the upper-level circulation (200 hPa), biases in both the tropical easterly jet and the subtropical westerly jet tend to decrease as the lead time decreases. Despite the enhancement of prediction skill, the mean SST bias seems to be insensitive to the initialization. All these biases are significant, and together they make CFSv2 vulnerable to seasonal uncertainties at all lead times. Overall, the zeroth lead (L0) has the best skill; however, for Indian summer monsoon rainfall (ISMR), the 3-month lead forecast (L3) has the maximum prediction skill. This holds across different independent datasets, with maximum skill scores of 0.64, 0.42 and 0.57 with respect to the Global Precipitation Climatology Project, the CPC Merged Analysis of Precipitation and the India Meteorological Department precipitation datasets, respectively, for L3. Despite the significant El Niño-Southern Oscillation (ENSO) spring predictability barrier at L3, the ISMR skill score is highest at L3. Further, the large-scale zonal wind shear (Webster-Yang index) and SST over the Niño3.4 region are best at L1 and L0. This implies that the predictability of ISMR is controlled by factors other than ENSO and the Indian Ocean Dipole. Also, the model error (forecast error) outruns the error acquired through inadequacies in the initial conditions (predictability error). Thus model deficiency has more serious consequences for the seasonal forecast than initial condition error. All the model parameters show an increase in the predictability error as the lead decreases over the equatorial eastern Pacific basin, peaking at L2 and then decreasing. The dynamical consistency of both the forecast and the predictability errors among all the variables indicates that these biases are purely systematic in nature, and improvement of the physical processes in CFSv2 may enhance the overall predictability.

  3. The Forced Soft Spring Equation

    ERIC Educational Resources Information Center

    Fay, T. H.

    2006-01-01

    Through numerical investigations, this paper studies examples of the forced Duffing-type spring equation with ε negative. By performing trial-and-error numerical experiments, the existence of stability boundaries in the phase plane, indicating initial conditions that yield bounded solutions, is demonstrated. Subharmonic boundaries are…
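    An experiment in this spirit can be reproduced in a few lines: integrate the forced soft spring equation x'' + x + εx³ = F cos(ωt) with ε < 0 over a grid of initial conditions and mark which ones stay bounded. Parameter values here are illustrative, not the paper's:

    ```python
    # Trial-and-error numerical experiment: map which initial conditions
    # of the forced soft spring yield bounded solutions.
    import numpy as np
    from scipy.integrate import solve_ivp

    eps, F, omega = -0.1, 0.3, 1.2     # illustrative parameter values

    def rhs(t, y):
        x, v = y
        return [v, -x - eps * x**3 + F * np.cos(omega * t)]

    def bounded(x0, v0, t_max=200.0, blowup=50.0):
        escape = lambda t, y: abs(y[0]) - blowup
        escape.terminal = True          # stop integrating once |x| blows up
        sol = solve_ivp(rhs, (0, t_max), [x0, v0], events=escape, max_step=0.05)
        return sol.t[-1] >= t_max - 1e-6

    # Scan a grid of initial conditions to trace the stability boundary.
    for x0 in np.linspace(-3, 3, 7):
        row = ["B" if bounded(x0, v0) else "." for v0 in np.linspace(-3, 3, 7)]
        print(f"x0={x0:+.1f}: " + " ".join(row))
    ```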

  4. Movement initiation-locked activity of the anterior putamen predicts future movement instability in periodic bimanual movement.

    PubMed

    Aramaki, Yu; Haruno, Masahiko; Osu, Rieko; Sadato, Norihiro

    2011-07-06

    In periodic bimanual movements, anti-phase-coordinated patterns often change into in-phase patterns suddenly and involuntarily. Because behavior in the initial period of a sequence of cycles often does not show any obvious errors, it is difficult to predict subsequent movement errors in the later period of the cyclical sequence. Here, we evaluated performance in the later period of the cyclical sequence of bimanual periodic movements using human brain activity measured with functional magnetic resonance imaging as well as using initial movement features. Eighteen subjects performed a 30 s bimanual finger-tapping task. We calculated differences in initiation-locked transient brain activity between antiphase and in-phase tapping conditions. Correlation analysis revealed that the difference in anterior putamen activity during antiphase compared with in-phase tapping conditions was strongly correlated with future instability, as measured by the mean absolute deviation of the left-hand intertap interval during antiphase movements relative to in-phase movements (r = 0.81). Among the initial movement features we measured, only the number of taps needed to establish the antiphase movement pattern exhibited a significant correlation. However, the correlation coefficient of 0.60 was not high enough to predict the characteristics of subsequent movement. There was no significant correlation between putamen activity and initial movement features. It is likely that initiating unskilled difficult movements requires increased anterior putamen activity, and this activity increase may facilitate the initiation of movement via the basal ganglia-thalamocortical circuit. Our results suggest that initiation-locked transient activity of the anterior putamen can be used to predict future motor performance.

  5. Soil Moisture Initialization Error and Subgrid Variability of Precipitation in Seasonal Streamflow Forecasting

    NASA Technical Reports Server (NTRS)

    Koster, Randal D.; Walker, Gregory K.; Mahanama, Sarith P.; Reichle, Rolf H.

    2013-01-01

    Offline simulations over the conterminous United States (CONUS) with a land surface model are used to address two issues relevant to the forecasting of large-scale seasonal streamflow: (i) the extent to which errors in soil moisture initialization degrade streamflow forecasts, and (ii) the extent to which a realistic increase in the spatial resolution of forecasted precipitation would improve streamflow forecasts. The addition of error to a soil moisture initialization field is found to lead to a nearly proportional reduction in streamflow forecast skill. The linearity of the response allows the determination of a lower bound for the increase in streamflow forecast skill achievable through improved soil moisture estimation, e.g., through satellite-based soil moisture measurements. An increase in the resolution of precipitation is found to have an impact on large-scale streamflow forecasts only when evaporation variance is significant relative to the precipitation variance. This condition is met only in the western half of the CONUS domain. Taken together, the two studies demonstrate the utility of a continental-scale land surface modeling system as a tool for addressing the science of hydrological prediction.

  6. Simulations in site error estimation for direction finders

    NASA Astrophysics Data System (ADS)

    López, Raúl E.; Passi, Ranjit M.

    1991-08-01

    The performance of an algorithm for the recovery of site-specific errors of direction finder (DF) networks is tested under controlled simulated conditions. The simulations show that the algorithm has some inherent shortcomings for the recovery of site errors from the measured azimuth data. These limitations are fundamental to the problem of site error estimation using azimuth information. Several ways of resolving or ameliorating these basic complications are tested by means of simulations. From these it appears that, for effective implementation of the site error determination algorithm, one should design the networks with at least four DFs, improve the alignment of the antennas, and increase the gain of the DFs as much as is compatible with other operational requirements. The use of a nonzero initial estimate of the site errors when working with data from networks of four or more DFs also improves the accuracy of the site error recovery. Even for networks of three DFs, reasonable site error corrections can be obtained if the antennas are well aligned.

  7. Measuring Reading Performance Informally.

    ERIC Educational Resources Information Center

    Powell, William R.

    To improve the accuracy of the informal reading inventory (IRI), a differential set of criteria is necessary for both word recognition and comprehension scores at different levels and reading conditions. In initial evaluation, word recognition scores should reflect only errors of insertions, omissions, mispronunciations, substitutions, unknown…

  8. A Quadratic Spring Equation

    ERIC Educational Resources Information Center

    Fay, Temple H.

    2010-01-01

    Through numerical investigations, we study examples of the forced quadratic spring equation [image omitted]. By performing trial-and-error numerical experiments, we demonstrate the existence of stability boundaries in the phase plane indicating initial conditions yielding bounded solutions, and investigate the resonance boundary in the ω…

  9. Pre- and postprocessing techniques for determining goodness of computational meshes

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley; Westermann, T.; Bass, J. M.

    1993-01-01

    Research in error estimation, mesh conditioning, and solution enhancement for finite element, finite difference, and finite volume methods has been incorporated into AUDITOR, a modern, user-friendly code, which operates on 2D and 3D unstructured neutral files to improve the accuracy and reliability of computational results. Residual error estimation capabilities provide local and global estimates of solution error in the energy norm. Higher order results for derived quantities may be extracted from initial solutions. Within the X-MOTIF graphical user interface, extensive visualization capabilities support critical evaluation of results in linear elasticity, steady state heat transfer, and both compressible and incompressible fluid dynamics.

  10. On the conditions of exponential stability in active disturbance rejection control based on singular perturbation analysis

    NASA Astrophysics Data System (ADS)

    Shao, S.; Gao, Z.

    2017-10-01

    Stability of active disturbance rejection control (ADRC) is analysed in the presence of unknown, nonlinear, and time-varying dynamics. In the framework of singular perturbations, the closed-loop error dynamics are semi-decoupled into a relatively slow subsystem (the feedback loop) and a relatively fast subsystem (the extended state observer). It is shown, analytically and geometrically, that there exists a unique exponentially stable solution if the size of the initial observer error is sufficiently small, i.e., of the same order as the inverse of the observer bandwidth. The process of developing the uniformly asymptotic solution of the system reveals the condition for the stability of the ADRC and the relationship between the rate of change of the total disturbance and the size of the estimation error. The differentiability of the total disturbance is the only assumption made.
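    For concreteness, the fast subsystem referred to above is typically a linear extended state observer (ESO); a minimal discrete-time sketch for a second-order plant y'' = f(t, y, y') + b·u follows, using the common bandwidth parameterization of the gains (an assumption here, not taken from this paper):

    ```python
    # Minimal linear ESO: the total disturbance f is estimated as an
    # extended state z3 alongside the output and its derivative.
    import numpy as np

    class LinearESO:
        def __init__(self, wo, b0, dt):
            # Bandwidth parameterization: poles placed at -wo.
            self.l1, self.l2, self.l3 = 3 * wo, 3 * wo**2, wo**3
            self.b0, self.dt = b0, dt
            self.z = np.zeros(3)          # [y_hat, ydot_hat, f_hat]

        def update(self, y_meas, u):
            z1, z2, z3 = self.z
            e = y_meas - z1               # observer estimation error
            dz1 = z2 + self.l1 * e
            dz2 = z3 + self.b0 * u + self.l2 * e
            dz3 = self.l3 * e             # total-disturbance estimate dynamics
            self.z += self.dt * np.array([dz1, dz2, dz3])
            return self.z

    # Consistent with the analysis above: a larger observer bandwidth wo
    # speeds up disturbance estimation but shrinks the tolerated initial
    # observer error (~1/wo).
    eso = LinearESO(wo=50.0, b0=1.0, dt=0.001)
    ```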

  11. Tectonic predictions with mantle convection models

    NASA Astrophysics Data System (ADS)

    Coltice, Nicolas; Shephard, Grace E.

    2018-04-01

    Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. Indeed, the initial conditions and the rheological parameters can be good enough for an accurate prediction of instantaneous flow, but not for a prediction after 10 My of evolution. Therefore, inverse methods (sequential or data assimilation methods) using short-term fully dynamic evolution that predict surface kinematics are promising tools for a better understanding of the state of the Earth's mantle.

  12. Oceanic ensemble forecasting in the Gulf of Mexico: An application to the case of the Deep Water Horizon oil spill

    NASA Astrophysics Data System (ADS)

    Khade, Vikram; Kurian, Jaison; Chang, Ping; Szunyogh, Istvan; Thyng, Kristen; Montuoro, Raffaele

    2017-05-01

    This paper demonstrates the potential of ocean ensemble forecasting in the Gulf of Mexico (GoM). The Bred Vector (BV) technique with a one-week rescaling frequency is implemented on a 9 km resolution version of the Regional Ocean Modelling System (ROMS). Numerical experiments are carried out using the HYCOM analysis products to define the initial conditions and the lateral boundary conditions. The growth rates of the forecast uncertainty are estimated to be about 10% of the initial amplitude per week. By carrying out ensemble forecast experiments with and without perturbed surface forcing, it is demonstrated that in the coastal regions accounting for uncertainties in the atmospheric forcing is more important than accounting for uncertainties in the ocean initial conditions. In the Loop Current region, the initial condition uncertainties are the dominant source of the forecast uncertainty. The root-mean-square error of the Lagrangian track forecasts at the 15-day forecast lead time can be reduced by about 10-50 km by using the ensemble-mean Eulerian forecast of the oceanic flow for the computation of the tracks, instead of the single-initial-condition Eulerian forecast.
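    The BV cycle itself is model-agnostic; the sketch below runs it on the Lorenz-63 system as a stand-in for ROMS, differencing control and perturbed forecasts and rescaling the difference back to its initial amplitude once per cycle:

    ```python
    # Schematic bred-vector (BV) cycle on a toy model; one rescaling per
    # cycle mirrors the paper's one-week rescaling frequency.
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz63(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    def advance(state, T=1.0):
        return solve_ivp(lorenz63, (0, T), state, max_step=0.01).y[:, -1]

    rng = np.random.default_rng(3)
    control = advance(np.array([1.0, 1.0, 1.0]), T=20.0)   # spin-up
    amp0 = 0.1                                             # rescaling amplitude
    bv = rng.normal(0, amp0, 3)
    bv *= amp0 / np.linalg.norm(bv)                        # normalize seed

    for cycle in range(10):
        c_next = advance(control)
        p_next = advance(control + bv)
        diff = p_next - c_next
        growth = np.linalg.norm(diff) / amp0               # per-cycle growth
        bv = diff * (amp0 / np.linalg.norm(diff))          # rescale
        control = c_next
        print(f"cycle {cycle}: growth factor {growth:.2f}")
    ```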

  13. Improved forecasts of winter weather extremes over midlatitudes with extra Arctic observations

    NASA Astrophysics Data System (ADS)

    Sato, Kazutoshi; Inoue, Jun; Yamazaki, Akira; Kim, Joo-Hong; Maturilli, Marion; Dethloff, Klaus; Hudson, Stephen R.; Granskog, Mats A.

    2017-02-01

    Recent cold winter extremes over Eurasia and North America have been considered a consequence of a warming Arctic. More accurate weather forecasts are required to reduce the human and socioeconomic damages associated with severe winters. However, the sparse observing network over the Arctic introduces errors when initializing a weather prediction model, which might impact the accuracy of predictions at midlatitudes. Here we show that additional Arctic radiosonde observations from the Norwegian young sea ICE expedition (N-ICE2015) drifting ice camps and existing land stations during winter improved forecast skill and reduced uncertainties of weather extremes at midlatitudes of the Northern Hemisphere. For two winter storms over East Asia and North America in February 2015, ensemble forecast experiments were performed with initial conditions taken from an ensemble atmospheric reanalysis in which the observation data were assimilated. The observations reduced errors in the initial conditions in the upper troposphere over the Arctic region, yielding more precise predictions of the locations and strengths of upper troughs and surface synoptic disturbances. Errors and uncertainties in the predicted upper troughs at midlatitudes were carried southward from the observed Arctic region by upper-level high potential vorticity (PV), because the PV contained a "signal" of the additional Arctic observations as it moved along an isentropic surface. This suggests that a coordinated, sustainable Arctic observing network would be effective not only for regional weather services but also for reducing weather risks in locations distant from the Arctic.

  14. The Incorporation and Initialization of Cloud Water/ice in AN Operational Forecast Model

    NASA Astrophysics Data System (ADS)

    Zhao, Qingyun

    Quantitative precipitation forecasts have been one of the weakest aspects of numerical weather prediction models. Theoretical studies show that the errors in precipitation calculation can arise from three sources: errors in the large-scale forecasts of primary variables, errors in the crude treatment of condensation/evaporation and precipitation processes, and errors in the model initial conditions. A new precipitation parameterization scheme has been developed to investigate the forecast value of improved precipitation physics via the introduction of cloud water and cloud ice into a numerical prediction model. The main feature of this scheme is the explicit calculation of cloud water and cloud ice in both the convective and stratiform precipitation parameterization. This scheme has been applied to the eta model at the National Meteorological Center. Four extensive tests have been performed. The statistical results showed a significant improvement in the model precipitation forecasts. Diagnostic studies suggest that the inclusion of cloud ice is important in transferring water vapor to precipitation and in the enhancement of latent heat release; the latter subsequently affects the vertical motion field significantly. Since three-dimensional cloud data is absent from the analysis/assimilation system for most numerical models, a method has been proposed to incorporate observed precipitation and nephanalysis data into the data assimilation system to obtain the initial cloud field for the eta model. In this scheme, the initial moisture and vertical motion fields are also improved at the same time as cloud initialization. The physical initialization is performed in a dynamical initialization framework that uses the Newtonian dynamical relaxation method to nudge the model's wind and mass fields toward analyses during a 12-hour data assimilation period. Results from a case study showed that a realistic cloud field was produced by this method at the end of the data assimilation period. Precipitation forecasts have been significantly improved as a result of the improved initial cloud, moisture and vertical motion fields.
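    The Newtonian relaxation used in the dynamical initialization has the generic form below (a textbook statement, not the paper's exact formulation), where G_α is an inverse relaxation time:

    ```latex
    % Generic Newtonian relaxation (nudging) term:
    \begin{equation*}
      \frac{\partial \alpha}{\partial t}
        = F(\alpha, \mathbf{x}, t)
        + G_{\alpha}\,\bigl(\alpha_{\mathrm{obs}} - \alpha\bigr),
    \end{equation*}
    % where alpha is a nudged field (here the wind and mass fields), F is
    % the model forcing, alpha_obs the analysis, and G_alpha an inverse
    % relaxation time chosen so the model state relaxes toward the
    % analyses over the 12-hour assimilation window.
    ```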

  15. An improved simulation of the 2015 El Niño event by optimally correcting the initial conditions and model parameters in an intermediate coupled model

    NASA Astrophysics Data System (ADS)

    Zhang, Rong-Hua; Tao, Ling-Jiang; Gao, Chuan

    2017-09-01

    Large uncertainties exist in real-time predictions of the 2015 El Niño event, which have systematic intensity biases that are strongly model-dependent. It is critically important to characterize those model biases so they can be reduced appropriately. In this study, the conditional nonlinear optimal perturbation (CNOP)-based approach was applied to an intermediate coupled model (ICM) equipped with a four-dimensional variational data assimilation technique. The CNOP-based approach was used to quantify prediction errors that can be attributed to initial conditions (ICs) and model parameters (MPs). Two key MPs were considered in the ICM: one represents the intensity of the thermocline effect, and the other represents the relative coupling intensity between the ocean and atmosphere. Two experiments were performed to illustrate the effects of error corrections, one with a standard simulation and another with an optimized simulation in which errors in the ICs and MPs derived from the CNOP-based approach were optimally corrected. The results indicate that simulations of the 2015 El Niño event can be effectively improved by using CNOP-derived error correcting. In particular, the El Niño intensity in late 2015 was adequately captured when simulations were started from early 2015. Quantitatively, the Niño3.4 SST index simulated in Dec. 2015 increased to 2.8 °C in the optimized simulation, compared with only 1.5 °C in the standard simulation. The feasibility and effectiveness of using the CNOP-based technique to improve ENSO simulations are demonstrated in the context of the 2015 El Niño event. The limitations and further applications are also discussed.
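    For reference, the CNOP is conventionally defined (following Mu et al.) as the initial perturbation that maximizes the nonlinear forecast departure under a norm constraint; the paper applies the same idea jointly to initial conditions and the two model parameters:

    ```latex
    % Conventional CNOP definition, written for the initial-condition part:
    \begin{equation*}
      J(\delta x_0^{\ast}) \;=\; \max_{\|\delta x_0\| \le \beta}
        \bigl\| M_{\tau}(x_0 + \delta x_0) - M_{\tau}(x_0) \bigr\|,
    \end{equation*}
    % where M_tau is the nonlinear propagator of the ICM to lead time tau
    % and beta bounds the admissible initial error; the maximizing
    % perturbation delta x_0* identifies the error pattern to be corrected.
    ```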

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, H; Chen, Z; Nath, R

Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of being out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real-time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point’s corresponding features, such as tumor motion speed and 2D tracking error of previous time points, etc. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For the conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding the 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the tumor is within the margin or initialize motion compensation if it is out of the margin.
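    A hedged sketch of the classification-based uncertainty estimation described above, using scikit-learn to train logistic-regression and SVM classifiers that flag tracking errors exceeding a threshold. The synthetic features, coefficients, and the labeling rule are stand-ins for the paper's real-time features and patient data.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n = 2000
      # Hypothetical features: previous 2D tracking error, tumor speed,
      # cosine of trajectory/beam angle (all names assumed for illustration).
      X = rng.normal(size=(n, 3))
      # Synthetic rule: larger previous error and speed raise the 3D error.
      err_3d = 1.0 + 0.8 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(0, 0.3, n)
      y = (err_3d > 2.5).astype(int)  # label: exceeds a 2.5 mm threshold

      logreg = LogisticRegression().fit(X, y)
      svm = SVC(kernel="rbf").fit(X, y)
      print("logistic accuracy:", logreg.score(X, y))
      print("SVM accuracy:     ", svm.score(X, y))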

  17. Systematic sparse matrix error control for linear scaling electronic structure calculations.

    PubMed

    Rubensson, Emanuel H; Sałek, Paweł

    2005-11-30

Efficient truncation criteria used in multiatom blocked sparse matrix operations for ab initio calculations are proposed. As system size increases, so does the need to control errors while still achieving high performance. A variant of a blocked sparse matrix algebra that achieves strict error control with good performance is proposed. The central idea is that the condition to drop a certain submatrix should depend not only on the magnitude of that particular submatrix, but also on which other submatrices are dropped. The decision to remove a certain submatrix is based on the contribution the removal would make to the error in the chosen norm. We study the effect of an accumulated truncation error in iterative algorithms such as trace-correcting density matrix purification, and present one way to reduce the initial exponential growth of this error. The presented error control for a sparse blocked matrix toolbox allows optimal performance to be achieved by performing only the operations needed to maintain the requested level of accuracy. Copyright 2005 Wiley Periodicals, Inc.
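    The dropping rule described above can be sketched as follows: because squared Frobenius norms of disjoint blocks add, blocks are removed smallest-first while the accumulated truncation error stays within the requested tolerance. Block sizes, the grid, and the tolerance are illustrative assumptions, not the paper's setup.

      import numpy as np

      def truncate_blocks(blocks, tol):
          """Drop submatrix blocks so that the accumulated truncation
          error in the Frobenius norm stays below tol.

          blocks: dict mapping (row, col) block indices to dense arrays.
          The decision for one block depends on which others are dropped:
          squared Frobenius norms add, so drop the smallest blocks first
          while the accumulated error remains within tol.
          """
          norms = sorted((np.linalg.norm(b), key) for key, b in blocks.items())
          err_sq, dropped = 0.0, set()
          for nrm, key in norms:
              if err_sq + nrm**2 > tol**2:
                  break
              err_sq += nrm**2
              dropped.add(key)
          return {k: b for k, b in blocks.items() if k not in dropped}

      # Example: a 4x4 grid of 8x8 blocks with small off-diagonal blocks.
      rng = np.random.default_rng(1)
      blocks = {(i, j): (rng.normal(size=(8, 8)) if i == j
                         else 1e-4 * rng.normal(size=(8, 8)))
                for i in range(4) for j in range(4)}
      kept = truncate_blocks(blocks, tol=1e-3)
      print(len(blocks), "->", len(kept), "blocks")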

  18. Updating of aversive memories after temporal error detection is differentially modulated by mTOR across development

    PubMed Central

    Tallot, Lucille; Diaz-Mataix, Lorenzo; Perry, Rosemarie E.; Wood, Kira; LeDoux, Joseph E.; Mouly, Anne-Marie; Sullivan, Regina M.; Doyère, Valérie

    2017-01-01

    The updating of a memory is triggered whenever it is reactivated and a mismatch from what is expected (i.e., prediction error) is detected, a process that can be unraveled through the memory's sensitivity to protein synthesis inhibitors (i.e., reconsolidation). As noted in previous studies, in Pavlovian threat/aversive conditioning in adult rats, prediction error detection and its associated protein synthesis-dependent reconsolidation can be triggered by reactivating the memory with the conditioned stimulus (CS), but without the unconditioned stimulus (US), or by presenting a CS–US pairing with a different CS–US interval than during the initial learning. Whether similar mechanisms underlie memory updating in the young is not known. Using similar paradigms with rapamycin (an mTORC1 inhibitor), we show that preweaning rats (PN18–20) do form a long-term memory of the CS–US interval, and detect a 10-sec versus 30-sec temporal prediction error. However, the resulting updating/reconsolidation processes become adult-like after adolescence (PN30–40). Our results thus show that while temporal prediction error detection exists in preweaning rats, specific infant-type mechanisms are at play for associative learning and memory. PMID:28202715

  19. Effects of upstream-biased third-order space correction terms on multidimensional Crowley advection schemes

    NASA Technical Reports Server (NTRS)

    Schlesinger, R. E.

    1985-01-01

    The impact of upstream-biased corrections for third-order spatial truncation error on the stability and phase error of the two-dimensional Crowley combined advective scheme with the cross-space term included is analyzed, putting primary emphasis on phase error reduction. The various versions of the Crowley scheme are formally defined, and their stability and phase error characteristics are intercompared using a linear Fourier component analysis patterned after Fromm (1968, 1969). The performances of the schemes under prototype simulation conditions are tested using time-dependent numerical experiments which advect an initially cone-shaped passive scalar distribution in each of three steady nondivergent flows. One such flow is solid rotation, while the other two are diagonal uniform flow and a strongly deformational vortex.

  20. Constraining biosphere CO2 flux at regional scale with WRF-CO2 4DVar assimilation system

    NASA Astrophysics Data System (ADS)

    Zheng, T.

    2017-12-01

The WRF-CO2 4DVar assimilation system is updated to include (1) operators for tower-based observations, (2) chemistry initial and boundary conditions in the state vector, and (3) a mechanism for aggregation from the simulation model grid to state-vector space. The updated system is first tested with synthetic data to ensure its accuracy. The system is then used to test regional-scale CO2 inversion at the MCI (Midcontinental Intensive) sites, where CO2 mole fraction data were collected at multiple tall towers during 2007-2008. The model domain is centered on Iowa, includes 8 towers within its boundary, and has 12 km x 12 km horizontal grid spacing. First, the relative impacts of the initial and boundary conditions are assessed with the system's adjoint model, using 24-, 48-, and 72-hour time spans. Second, we assessed the impacts of transport error, including misrepresentation of the boundary layer and of cumulus activity. Third, we evaluated different aggregation approaches from the native model grid to the control variables (including scaling factors for flux, initial and boundary conditions). Fourth, we assessed the inversion performance using CO2 observations at different time intervals and from different tower levels. We also examined the appropriate treatment of the background and observation error covariances in relation to these varying observation data sets.
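    A minimal sketch of the 4DVar-style cost function underlying such an inversion, with a state vector of scaling factors for flux, initial condition, and boundary condition. The linear observation operator H, the covariances, and the observed anomalies are invented for illustration and do not represent WRF-CO2.

      import numpy as np
      from scipy.optimize import minimize

      # State vector s = [flux scale, initial-condition scale, boundary scale].
      B_inv = np.linalg.inv(np.diag([0.5, 0.2, 0.2]) ** 2)  # background errors
      R_inv = np.eye(4) / 1.0**2                            # tower obs errors (ppm)

      H = np.array([[2.0, 1.0, 0.5],
                    [1.5, 0.8, 0.7],
                    [1.0, 0.6, 0.9],
                    [0.5, 0.4, 1.1]])   # maps scalings to tower CO2 anomalies
      y = np.array([3.1, 2.4, 2.0, 1.6])  # "observed" anomalies (synthetic)
      s_b = np.ones(3)                    # prior: no correction

      def cost(s):
          """Background term plus observation misfit term."""
          db = s - s_b
          do = H @ s - y
          return 0.5 * db @ B_inv @ db + 0.5 * do @ R_inv @ do

      s_a = minimize(cost, s_b).x
      print("analysis scalings:", s_a)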

  1. Adjoint Sensitivity Analyses Of Sand And Dust Storms In East Asia

    NASA Astrophysics Data System (ADS)

    Kay, J.; Kim, H.

    2008-12-01

Sand and Dust Storms (SDS) in East Asia, the so-called Asian dust, are a seasonal meteorological phenomenon. Mostly in spring, dust particles blown into the atmosphere in the arid areas over the northern China desert and Manchuria are transported to East Asia by prevailing flows. Three SDS events in East Asia from 2005 to 2008 are chosen to investigate how sensitive the SDS forecasts are to initial-condition uncertainties and thereby to suggest sensitive regions for adaptive observations of SDS events. Adaptive observations are additional observations in sensitive regions where the observations may have the most impact on the forecast by decreasing the forecast error. The three SDS events are chosen to represent different transport paths from the dust source regions to the Korean peninsula. To investigate the sensitivities to the initial condition, adjoint sensitivities, which give the gradient of a forecast aspect (i.e., response function) with respect to the initial condition, are used. The forecast aspects relevant to SDS transport are the forecast error of the surface pressure, the surface pressure perturbation, and the steering vector of winds in the lower troposphere. Because the surface low pressure system usually plays an important role in SDS transport, the forecast error of the surface pressure and the surface pressure perturbation are chosen as response functions of the adjoint calculation. Another response function relevant to SDS transport is the steering flow over the downstream region (i.e., the Korean peninsula), because the direction and intensity of the prevailing winds usually determine the intensity and occurrence of SDS events at the destination. The results show that the sensitive regions for the forecast error of the surface pressure and the surface pressure perturbation are initially located in the vicinity of the trough and then propagate eastward as the low system moves eastward. The vertical structures of the adjoint sensitivities are upshear-tilted, which is typical of extratropical cyclones. The adjoint sensitivities for the lower-tropospheric steering flow are also located near the trough, which confirms that an accurate forecast of the location and movement of the trough is essential for better forecasts of Asian dust events. More comprehensive results and discussions of the adjoint sensitivity analyses for Asian dust events will be presented at the meeting.
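    The adjoint computation of such sensitivities can be illustrated with a linear toy model: the gradient of a response function with respect to the initial condition is obtained by integrating the transposed dynamics backward in time, and can be checked against finite differences. The matrices and the response function below are arbitrary stand-ins, not an atmospheric model.

      import numpy as np

      # Linear toy dynamics x_{k+1} = M x_k; response function J = c . x_N
      # (e.g., a surface-pressure forecast-error measure).
      M = np.array([[0.9, 0.2], [-0.1, 1.05]])
      c = np.array([1.0, 0.0])
      N = 10

      def forward(x0):
          x = x0.copy()
          for _ in range(N):
              x = M @ x
          return x

      # Adjoint sensitivity: run the transpose backward in time.
      lam = c.copy()
      for _ in range(N):
          lam = M.T @ lam        # accumulates dJ/dx0 through the adjoint
      print("adjoint gradient:  ", lam)

      # Verify against finite differences.
      x0, eps = np.array([1.0, -0.5]), 1e-6
      fd = np.array([(c @ forward(x0 + eps * e) - c @ forward(x0)) / eps
                     for e in np.eye(2)])
      print("finite differences:", fd)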

  2. An iterative truncation method for unbounded electromagnetic problems using varying order finite elements

    NASA Astrophysics Data System (ADS)

    Paul, Prakash

    2009-12-01

The finite element method (FEM) is used to solve three-dimensional electromagnetic scattering and radiation problems. Finite element (FE) solutions of this kind contain two main types of error: discretization error and boundary error. Discretization error depends on the number of free parameters used to model the problem, and on how effectively these parameters are distributed throughout the problem space. To reduce the discretization error, the polynomial order of the finite elements is increased, either uniformly over the problem domain or selectively in those areas with the poorest solution quality. Boundary error arises from the condition applied to the boundary that is used to truncate the computational domain. To reduce the boundary error, an iterative absorbing boundary condition (IABC) is implemented. The IABC starts with an inexpensive boundary condition and gradually improves the quality of the boundary condition as the iteration continues. An automatic error control (AEC) is implemented to balance the two types of error. With the AEC, the boundary condition is improved when the discretization error has fallen to a low enough level to make this worth doing. The AEC has these characteristics: (i) it uses a very inexpensive truncation method initially; (ii) it allows the truncation boundary to be very close to the scatterer/radiator; (iii) it puts more computational effort on the parts of the problem domain where it is most needed; and (iv) it can provide as accurate a solution as needed, depending on the computational price one is willing to pay. To further reduce the computational cost, disjoint scatterers and radiators that are relatively far from each other are bounded separately and solved using a multi-region method (MRM). A simple analytical way to decide whether the MRM or the single-region method will be computationally cheaper is also described. To validate the accuracy and the savings in computation time, differently shaped metallic and dielectric obstacles (spheres, ogives, a cube, a flat plate, a multi-layer slab, etc.) are used for the scattering problems. For the radiation problems, waveguide-excited antennas (horn antenna, waveguide with flange, microstrip patch antenna) are used. Using the AEC, the peak reduction in computation time during the iteration is typically a factor of 2 compared to the IABC using the same element orders throughout. In some cases, it can be as high as a factor of 4.

  3. Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability

    NASA Astrophysics Data System (ADS)

    Kar, Soummya; Moura, José M. F.

    2011-04-01

The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: (1) the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and (2) the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying as a random dynamical system the associated switched (random) Riccati equation, the switching being dictated by a non-stationary Markov chain on the network graph.
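    A simplified sketch in the spirit of the GIKF: each sensor runs a local Kalman filter on an unstable scalar system and occasionally swaps its filtering state (estimate and error variance) with a randomly chosen peer. Unlike the paper's setting, every sensor observes here and the exchange rule is simplified; the system and noise parameters are invented.

      import numpy as np

      rng = np.random.default_rng(2)
      a, q, r = 1.1, 0.1, 0.5      # unstable scalar dynamics, noise variances
      n_sensors, T = 5, 200
      x = 0.0                      # true state
      est = np.zeros(n_sensors)    # per-sensor estimates
      P = np.ones(n_sensors)       # per-sensor error variances

      for t in range(T):
          x = a * x + rng.normal(0, np.sqrt(q))
          for i in range(n_sensors):
              # Local Kalman predict + update with this sensor's observation.
              est[i], P[i] = a * est[i], a * a * P[i] + q
              z = x + rng.normal(0, np.sqrt(r))
              k = P[i] / (P[i] + r)
              est[i] += k * (z - est[i])
              P[i] *= (1 - k)
          # Gossip step: one random pair swaps its filtering state.
          i, j = rng.choice(n_sensors, size=2, replace=False)
          est[i], est[j] = est[j], est[i]
          P[i], P[j] = P[j], P[i]

      print("true:", round(x, 3), "estimates:", np.round(est, 3))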

  4. Chamber measurement of surface-atmosphere trace gas exchange: Numerical evaluation of dependence on soil, interfacial layer, and source/sink properties

    NASA Astrophysics Data System (ADS)

    Hutchinson, G. L.; Livingston, G. P.; Healy, R. W.; Striegl, R. G.

    2000-04-01

    We employed a three-dimensional finite difference gas diffusion model to simulate the performance of chambers used to measure surface-atmosphere trace gas exchange. We found that systematic errors often result from conventional chamber design and deployment protocols, as well as key assumptions behind the estimation of trace gas exchange rates from observed concentration data. Specifically, our simulations showed that (1) when a chamber significantly alters atmospheric mixing processes operating near the soil surface, it also nearly instantaneously enhances or suppresses the postdeployment gas exchange rate, (2) any change resulting in greater soil gas diffusivity, or greater partitioning of the diffusing gas to solid or liquid soil fractions, increases the potential for chamber-induced measurement error, and (3) all such errors are independent of the magnitude, kinetics, and/or distribution of trace gas sources, but greater for trace gas sinks with the same initial absolute flux. Finally, and most importantly, we found that our results apply to steady state as well as non-steady-state chambers, because the slow rate of gas diffusion in soil inhibits recovery of the former from their initial non-steady-state condition. Over a range of representative conditions, the error in steady state chamber estimates of the trace gas flux varied from -30 to +32%, while estimates computed by linear regression from non-steady-state chamber concentrations were 2 to 31% too small. Although such errors are relatively small in comparison to the temporal and spatial variability characteristic of trace gas exchange, they bias the summary statistics for each experiment as well as larger scale trace gas flux estimates based on them.

  5. Chamber measurement of surface-atmosphere trace gas exchange--Numerical evaluation of dependence on soil, interfacial layer, and source/sink properties

    USGS Publications Warehouse

    Hutchinson, G.L.; Livingston, G.P.; Healy, R.W.; Striegl, Robert G.

    2000-01-01

We employed a three-dimensional finite difference gas diffusion model to simulate the performance of chambers used to measure surface-atmosphere trace gas exchange. We found that systematic errors often result from conventional chamber design and deployment protocols, as well as key assumptions behind the estimation of trace gas exchange rates from observed concentration data. Specifically, our simulations showed that (1) when a chamber significantly alters atmospheric mixing processes operating near the soil surface, it also nearly instantaneously enhances or suppresses the postdeployment gas exchange rate, (2) any change resulting in greater soil gas diffusivity, or greater partitioning of the diffusing gas to solid or liquid soil fractions, increases the potential for chamber-induced measurement error, and (3) all such errors are independent of the magnitude, kinetics, and/or distribution of trace gas sources, but greater for trace gas sinks with the same initial absolute flux. Finally, and most importantly, we found that our results apply to steady state as well as non-steady-state chambers, because the slow rate of gas diffusion in soil inhibits recovery of the former from their initial non-steady-state condition. Over a range of representative conditions, the error in steady state chamber estimates of the trace gas flux varied from -30 to +32%, while estimates computed by linear regression from non-steady-state chamber concentrations were 2 to 31% too small. Although such errors are relatively small in comparison to the temporal and spatial variability characteristic of trace gas exchange, they bias the summary statistics for each experiment as well as larger scale trace gas flux estimates based on them.

  6. A computational intelligence approach to the Mars Precision Landing problem

    NASA Astrophysics Data System (ADS)

    Birge, Brian Kent, III

Various proposed Mars missions, such as the Mars Sample Return Mission (MRSR) and the Mars Smart Lander (MSL), require precise re-entry terminal position and velocity states. This is to achieve mission objectives including rendezvous with a previously landed mission, or reaching a particular geographic landmark. The current state-of-the-art footprint is on the order of kilometers. For this research a Mars precision landing is achieved with a landed footprint of no more than 100 meters, for a set of initial entry conditions representing worst-guess dispersions. Obstacles to reducing the landed footprint include trajectory dispersions due to initial atmospheric entry conditions (entry angle, parachute deployment height, etc.), environment (wind, atmospheric density, etc.), parachute deployment dynamics, unavoidable injection error (error propagated from launch on), etc. Weather and atmospheric models have been developed. Three descent scenarios have been examined. First, terminal re-entry is achieved via a ballistic parachute with concurrent thrusting events while on the parachute, followed by a gravity turn. Second, terminal re-entry is achieved via a ballistic parachute followed by a gravity turn to hover and then thrust vectoring to the desired location. Third, a guided parafoil approach followed by vectored thrusting to reach terminal velocity is examined. The guided parafoil is determined to be the best architecture. The purpose of this study is to examine the feasibility of using a computational intelligence strategy to facilitate precision planetary re-entry, specifically to take an approach that is somewhat more intuitive and less rigid, and see where it leads. The test problems used for all research are variations on proposed Mars landing mission scenarios developed by NASA. A relatively recent method of evolutionary computation is Particle Swarm Optimization (PSO), which can be considered to be in the same general class as Genetic Algorithms. An improvement over the regular PSO algorithm, allowing tracking of nonstationary error functions, is detailed. Continued refinement of PSO in the larger research community comes from attempts to understand human-human social interaction as well as analysis of the emergent behavior. Using PSO and the parafoil scenario, optimized reference trajectories are created for an initial condition set of 76 states, representing the convex hull of 2001 states from an early Monte Carlo analysis. The controls are a fixed series of bank angles followed by a fixed series of 3DOF thrust-vectoring commands. The reference trajectories are used to train an Artificial Neural Network Reference Trajectory Generator (ANNTraG), with the (marginal) ability to generalize a trajectory from initial conditions it has never been presented with. The controls here allow continuous change in bank angle as well as thrust vector. The optimized reference trajectories represent the best achievable trajectory given the initial condition. Steps toward a closed-loop neural controller with online learning updates are examined. The inner loop of the simulation employs the Program to Optimize Simulated Trajectories (POST) as the basic model, containing baseline dynamics and state generation. This is controlled from a MATLAB shell that directs the optimization, learning, and control strategy. Using mainly bank angle guidance coupled with CI strategies, 88% of the achievable reference trajectories are shown to land within 10 meters, a significant improvement in the state of the art.
Further, the automatic real-time generation of realistic reference trajectories in the presence of unknown initial conditions is shown to have promise. The closed loop CI guidance strategy is outlined. An unexpected advance came from the effort to optimize the optimization, where the PSO algorithm was improved with the capability for tracking a changing error environment.
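    For reference, a plain global-best PSO in Python; the dissertation's modified variant for tracking nonstationary error functions would add mechanisms beyond this standard form. The quadratic landing-error surrogate and all parameter values are purely illustrative.

      import numpy as np

      def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0)):
          """Plain global-best particle swarm optimization."""
          rng = np.random.default_rng(3)
          lo, hi = bounds
          x = rng.uniform(lo, hi, (n_particles, dim))   # positions
          v = np.zeros_like(x)                          # velocities
          pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
          gbest = pbest[np.argmin(pbest_f)]
          w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social weights
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              fx = np.apply_along_axis(f, 1, x)
              improved = fx < pbest_f
              pbest[improved], pbest_f[improved] = x[improved], fx[improved]
              gbest = pbest[np.argmin(pbest_f)]
          return gbest, f(gbest)

      # Example: landing-error surrogate = squared distance from a target.
      target = np.array([1.0, -2.0])
      print(pso(lambda p: np.sum((p - target) ** 2), dim=2))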

  7. Application and Evaluation of MODIS LAI, FPAR, and Albedo Products in the WRF/CMAQ System

    EPA Science Inventory

    MODIS vegetation and albedo products provide a more realistic representation of surface conditions for input to the WRF/CMAQ modeling system. However, the initial evaluation of ingesting MODIS data into the system showed mixed results, with increased bias and error for 2-m temper...

  8. Calibration of hyperspectral data aviation mode according with accompanying ground-based measurements of standard surfaces of observed scenes

    NASA Astrophysics Data System (ADS)

    Ostrikov, V. N.; Plakhotnikov, O. V.

    2014-12-01

Using extensive experimental material, we examine whether it is possible to recalculate the initial data of a hyperspectral aircraft survey into spectral radiance factors (SRF). The errors of external calibration are estimated for various observation conditions and different data-receiving instruments.

  9. Dynamic analysis of Apollo-Salyut/Soyuz docking

    NASA Technical Reports Server (NTRS)

    Schliesing, J. A.

    1972-01-01

    The use of a docking-system computer program in analyzing the dynamic environment produced by two impacting spacecraft and the attitude control systems is discussed. Performance studies were conducted to determine the mechanism load and capture sensitivity to parametric changes in the initial impact conditions. As indicated by the studies, capture latching is most sensitive to vehicle angular-alinement errors and is least sensitive to lateral-miss error. As proved by load-sensitivity studies, peak loads acting on the Apollo spacecraft are considerably lower than the Apollo design-limit loads.

  10. Verification of Advances in a Coupled Snow-runoff Modeling Framework for Operational Streamflow Forecasts

    NASA Astrophysics Data System (ADS)

    Barik, M. G.; Hogue, T. S.; Franz, K. J.; He, M.

    2011-12-01

The National Oceanic and Atmospheric Administration's (NOAA's) River Forecast Centers (RFCs) issue hydrologic forecasts related to flood events, reservoir operations for water supply, streamflow regulation, and recreation on the nation's streams and rivers. The RFCs use the National Weather Service River Forecast System (NWSRFS) for streamflow forecasting, which relies on a coupled snow model (i.e. SNOW17) and rainfall-runoff model (i.e. SAC-SMA) in snow-dominated regions of the US. Errors arise in various steps of the forecasting system from input data, model structure, model parameters, and initial states. The goal of the current study is to undertake verification of potential improvements in the SNOW17-SAC-SMA modeling framework developed for operational streamflow forecasts. We undertake verification for a range of parameter sets (i.e. RFC, DREAM (Differential Evolution Adaptive Metropolis)) as well as a data assimilation (DA) framework developed for the coupled models. Verification is also undertaken for various initial conditions to observe the influence of variability in initial conditions on the forecast. The study basin is the North Fork American River Basin (NFARB) located on the western side of the Sierra Nevada Mountains in northern California. Hindcasts are verified using both deterministic (i.e. Nash-Sutcliffe efficiency, root mean square error, and joint distribution) and probabilistic (i.e. reliability diagram, discrimination diagram, containing ratio, and quantile plots) statistics. Our presentation includes a comparison of the performance of the different optimized parameter sets and the DA framework, as well as an assessment of the impact associated with the initial conditions used for streamflow forecasts for the NFARB.

  11. User's manual for MacPASCO

    NASA Technical Reports Server (NTRS)

    Lucas, S. H.; Davis, R. C.

    1992-01-01

A user's manual is presented for MacPASCO, which is an interactive, graphic preprocessor for panel design. MacPASCO creates input for PASCO, an existing computer code for structural analysis and sizing of longitudinally stiffened composite panels. MacPASCO provides a graphical user interface which simplifies the specification of panel geometry and reduces user input errors. The user draws the initial structural geometry on the computer screen, then uses a combination of graphic and text inputs to: refine the structural geometry; specify information required for analysis, such as panel load and boundary conditions; and define design variables and constraints for minimum-mass optimization. Only the use of MacPASCO is described, since the use of PASCO has been documented elsewhere.

  12. A simplified procedure for correcting both errors and erasures of a Reed-Solomon code using the Euclidean algorithm

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.

    1987-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
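    In standard textbook notation (which may differ from the paper's exact symbols), the modified initialization can be summarized as follows for an RS code correcting t errors and s known erasures:

      \begin{align*}
        \Gamma(x) &= \prod_{k=1}^{s}\bigl(1 - \beta_k x\bigr)
          && \text{erasure locator for the known erasure locations } \beta_k, \\
        T(x) &\equiv S(x)\,\Gamma(x) \pmod{x^{2t}}
          && \text{Forney syndrome from the syndrome polynomial } S(x), \\
        \Lambda(x)\,T(x) &\equiv \Omega(x) \pmod{x^{2t}}
          && \text{key equation solved by the Euclidean algorithm.}
      \end{align*}

    Initializing the Euclidean algorithm with x^{2t} and T(x), carrying Γ(x) in place of the usual unit initial condition, yields the errata locator and errata evaluator polynomials in a single pass, which is the simplification the abstract describes.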

  13. Accurate initial conditions in mixed dark matter-baryon simulations

    NASA Astrophysics Data System (ADS)

    Valkenburg, Wessel; Villaescusa-Navarro, Francisco

    2017-06-01

We quantify the error in the results of mixed baryon-dark-matter hydrodynamic simulations, stemming from outdated approximations for the generation of initial conditions. The error at redshift 0 in contemporary large simulations is of the order of few to 10 per cent in the power spectra of baryons and dark matter, and their combined total-matter power spectrum. After describing how to properly assign initial displacements and peculiar velocities to multiple species, we review several approximations: (1) using the total-matter power spectrum to compute displacements and peculiar velocities of both fluids, (2) scaling the linear redshift-zero power spectrum back to the initial power spectrum using the Newtonian growth factor ignoring homogeneous radiation, (3) using a mix of general-relativistic gauges so as to approximate Newtonian gravity, namely longitudinal-gauge velocities with synchronous-gauge densities and (4) ignoring the phase-difference in the Fourier modes for the offset baryon grid, relative to the dark-matter grid. Three of these approximations do not take into account that dark matter and baryons experience a scale-dependent growth after photon decoupling, which results in directions of velocity that are not the same as their direction of displacement. We compare the outcome of hydrodynamic simulations with these four approximations to our reference simulation, all set up with the same random seed and simulated using gadget-III.

  14. Error Distribution Evaluation of the Third Vanishing Point Based on Random Statistical Simulation

    NASA Astrophysics Data System (ADS)

    Li, C.

    2012-07-01

POS, integrating GPS and INS (Inertial Navigation Systems), has allowed rapid and accurate determination of position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, INS not only has systematic error, it is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades where usually only two vanishing points can be detected; thus the traditional calibration approach based on three orthogonal vanishing points is being challenged. First, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Second, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY), and a way to set initial weights for the adjustment solution of single-image vanishing points is presented; the vanishing points and their error distributions are solved by an iterative method with variable weights, using the co-factor matrix and error ellipse theory. Third, given the error ellipses of the two vanishing points (VX, VY) and the triangular geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion; the Monte Carlo methods utilized for this random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.

  15. Inverted initial conditions: Exploring the growth of cosmic structure and voids

    DOE PAGES

    Pontzen, Andrew; Roth, Nina; Peiris, Hiranya V.; ...

    2016-05-18

We introduce and explore “paired” cosmological simulations. A pair consists of an A and B simulation with initial conditions related by the inversion δ_A(x, t_initial) = -δ_B(x, t_initial) (underdensities substituted for overdensities and vice versa). We argue that the technique is valuable for improving our understanding of cosmic structure formation. The A and B fields are by definition equally likely draws from ΛCDM initial conditions, and in the linear regime evolve identically up to the overall sign. As nonlinear evolution takes hold, a region that collapses to form a halo in simulation A will tend to expand to create a void in simulation B. Applications include (i) contrasting the growth of A-halos and B-voids to test excursion-set theories of structure formation, (ii) cross-correlating the density field of the A and B universes as a novel test for perturbation theory, and (iii) canceling error terms by averaging power spectra between the two boxes. Furthermore, generalizations of the method to more elaborate field transformations are suggested.
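    A minimal numpy sketch of generating such a pair: a Gaussian random field δ_A built from shaped white noise and its sign-inverted partner δ_B, which share a power spectrum by construction. The power-law spectrum is a stand-in for a ΛCDM transfer function, and the grid size is arbitrary.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 64  # grid cells per side

      # Shape white noise with an assumed power-law spectrum.
      noise = rng.normal(size=(n, n, n))
      kx, ky, kz = np.meshgrid(*3 * [np.fft.fftfreq(n)], indexing="ij")
      k = np.sqrt(kx**2 + ky**2 + kz**2)
      k[0, 0, 0] = 1.0  # avoid divide-by-zero at the k = 0 mode
      delta_A = np.fft.ifftn(np.fft.fftn(noise) * k**-1.5).real
      delta_A -= delta_A.mean()

      # The "inverted" partner: equally likely under Gaussian statistics.
      delta_B = -delta_A

      # Both fields share one power spectrum; only the phases are flipped.
      pk = lambda d: np.abs(np.fft.fftn(d)) ** 2
      print(np.allclose(pk(delta_A), pk(delta_B)))  # True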

  16. [A new method of processing quantitative PCR data].

    PubMed

    Ke, Bing-Shen; Li, Guang-Yun; Chen, Shi-Min; Huang, Xiang-Yan; Chen, Ying-Jian; Xu, Jun

    2003-05-01

Today standard PCR can no longer satisfy the needs of biotechnology development and clinical research. After extensive kinetic studies, PE company found a linear relation between the initial template number and the cycle at which the accumulating fluorescent product becomes detectable; on this basis they developed a quantitative PCR technique used in the PE7700 and PE5700. However, the error of this technique is too large for the needs of biotechnology development and clinical research, so a better quantitative PCR technique is needed. The mathematical model presented here draws on results from related sciences and is based on the PCR principle and a careful analysis of the molecular relationships among the main components of the PCR reaction system. The model describes the functional relation between product quantity (or fluorescence intensity) and the initial template number and other reaction conditions, and accurately reflects the accumulation of PCR product molecules. Accurate quantitative PCR analysis can be performed using this functional relation: the accumulated product quantity can be obtained from the initial template number. With this model, the analysis error depends only on the accuracy of the fluorescence intensity measurement, i.e., on the instrument used. For example, when the fluorescence intensity is accurate to 6 digits and the template size is between 100 and 1,000,000, the quantitative accuracy exceeds 99%. Under the same conditions and on the same instrument, different analysis methods give distinctly different errors; processing the data with this model yields results up to 80 times more accurate than the CT method.
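    A sketch of quantitative PCR fitting in this spirit, using a generic exponential amplification model F_c = F_0 E^c rather than the paper's specific functional relation; the efficiency E, the cycle range, and the noise level are assumptions for illustration.

      import numpy as np
      from scipy.optimize import curve_fit

      def amplification(c, f0, eff):
          """Generic per-cycle amplification: signal f0 grows by factor eff."""
          return f0 * eff**c

      cycles = np.arange(10, 26)
      true_f0, true_eff = 1e-6, 1.9
      fluor = amplification(cycles, true_f0, true_eff)
      fluor *= 1 + np.random.default_rng(5).normal(0, 0.01, cycles.size)

      # Recover the initial signal (proportional to template number).
      (f0_hat, eff_hat), _ = curve_fit(amplification, cycles, fluor,
                                       p0=(1e-5, 1.8))
      print(f"initial signal ~ {f0_hat:.2e}, efficiency ~ {eff_hat:.3f}")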

  17. Improving Air Quality Forecasts with AURA Observations

    NASA Technical Reports Server (NTRS)

    Newchurch, M. J.; Biazer, A.; Khan, M.; Koshak, W. J.; Nair, U.; Fuller, K.; Wang, L.; Parker, Y.; Williams, R.; Liu, X.

    2008-01-01

Past studies have identified model initial and boundary conditions as sources of reducible errors in air-quality simulations. In particular, improving the initial condition improves the accuracy of short-term forecasts, as it allows the impact of local emissions to be realized by the model, while improving boundary conditions improves long-range transport through the model domain, especially in recirculating anticyclones. For the August 2006 period, we use AURA/OMI ozone measurements along with MODIS and CALIPSO aerosol observations to improve the initial and boundary conditions of ozone and particulate matter. Assessment of the model, comparing the control run and the satellite-assimilation run against the IONS06 network of ozonesonde observations (the densest ozone-sounding campaign ever conducted in North America), against AURA/TES ozone profile measurements, and against the EPA ground network of ozone and PM measurements, will show significant improvement in the CMAQ calculations that use AURA initial and boundary conditions. Further analyses of lightning occurrences from ground and satellite observations and of AURA/OMI NO2 column abundances will identify the lightning NOx signal evident in OMI measurements and suggest pathways for incorporating the lightning and NO2 data into the CMAQ simulations.

  18. An effort to improve track and intensity prediction of tropical cyclones through vortex initialization in NCUM-global model

    NASA Astrophysics Data System (ADS)

    Singh, Vivek; Routray, A.; Mallick, Swapan; George, John P.; Rajagopal, E. N.

    2016-05-01

Tropical cyclones (TCs) have a strong impact on the socio-economic conditions of countries such as India, Bangladesh and Myanmar owing to their devastating power. This creates the need for a precise forecasting system that predicts TC tracks and intensities accurately and well in advance, which has been a great challenge for major operational meteorological centers over the years. The genesis of TCs over the data-sparse warm tropical ocean adds further difficulty. Weak and misplaced vortices at the initial time are among the prime sources of track and intensity errors in Numerical Weather Prediction (NWP) models. Many previous studies have reported improved track and intensity forecast skill due to the assimilation of satellite data along with vortex initialization (VI). With this in mind, an attempt has been made to investigate the impact of vortex initialization on TC simulation using the UK Met Office global model operational at NCMRWF (NCUM). The assessment is carried out for the extremely severe cyclonic storm "Chapala", which occurred over the Arabian Sea (AS) from 28 October to 3 November 2015. Two numerical experiments, viz. Vort-GTS (assimilation of GTS observations with VI) and Vort-RAD (same as Vort-GTS plus assimilation of satellite data), are carried out. This vortex initialization study in the NCUM model is the first of its type over the North Indian Ocean (NIO). For both experiments, the TC simulation is carried out from five different initial conditions through 24-hour cycles. The results indicate that vortex initialization with assimilation of satellite data has a positive impact on the track and intensity forecasts, landfall time and position error of the TC.

  19. Seasonal simulations using a coupled ocean-atmosphere model with data assimilation

    NASA Astrophysics Data System (ADS)

    Larow, Timothy Edward

    1997-10-01

A coupled ocean-atmosphere initialization scheme using Newtonian relaxation has been developed for the Florida State University coupled ocean-atmosphere global general circulation model. The coupled model is used for seasonal predictions of the boreal summers of 1987 and 1988. The atmosphere model is a modified version of the Florida State University global spectral model, with resolution of triangular truncation at 42 waves. The ocean general circulation model is a slightly modified version of the model developed by Latif (1987). Coupling is synchronous, with exchange of information every two model hours. Using daily analyses from ECMWF and observed monthly mean SSTs from NCEP, two one-year, time-dependent Newtonian relaxation runs were conducted with the coupled model prior to the seasonal forecasts. Relaxation was selectively applied to the atmospheric vorticity, divergence, temperature, and dew point depression equations, and to the ocean's surface temperature equation. The ocean's initial conditions are from a six-year ocean-only simulation forced by observed wind stresses and a relaxation towards observed SSTs. Coupled initialization was conducted from 1 June 1986 to 1 June 1987 for the 1987 boreal forecast and from 1 June 1987 to 1 June 1988 for the 1988 boreal forecast. Annual means of net heat flux, freshwater flux and wind stress obtained from the initialization show close agreement with the Oberhuber (1988) climatology and the Florida State University pseudo wind stress analysis. Sensitivity of the initialization/assimilation scheme was tested by conducting two ten-member ensemble integrations. Each member was integrated for 90 days (June-August) of the respective year. Initial conditions for the ensembles consisted of the same ocean state as used by the initialized forecasts, while the atmospheric initial conditions were from ECMWF analyses centered on 1 June of the respective year. Root mean square errors and anomaly correlations between observed and forecasted SSTs in the Nino 3 and Nino 4 regions show greater skill for the initialized forecasts than for the ensemble forecasts. It is hypothesized that differences in the specific humidity within the planetary boundary layer are responsible for the large SST errors noted with the ensembles.

  20. Dose-dependent effects of cannabis on the neural correlates of error monitoring in frequent cannabis users.

    PubMed

    Kowal, Mikael A; van Steenbergen, Henk; Colzato, Lorenza S; Hazekamp, Arno; van der Wee, Nic J A; Manai, Meriem; Durieux, Jeffrey; Hommel, Bernhard

    2015-11-01

    Cannabis has been suggested to impair the capacity to recognize discrepancies between expected and executed actions. However, there is a lack of conclusive evidence regarding the acute impact of cannabis on the neural correlates of error monitoring. In order to contribute to the available knowledge, we used a randomized, double-blind, between-groups design to investigate the impact of administration of a low (5.5 mg THC) or high (22 mg THC) dose of vaporized cannabis vs. placebo on the amplitudes of the error-related negativity (ERN) and error positivity (Pe) in the context of the Flanker task, in a group of frequent cannabis users (required to use cannabis minimally 4 times a week, for at least 2 years). Subjects in the high dose group (n=18) demonstrated a significantly diminished ERN in comparison to the placebo condition (n=19), whereas a reduced Pe amplitude was observed in both the high and low dose (n=18) conditions, as compared to placebo. The results suggest that a high dose of cannabis may affect the neural correlates of both the conscious (late), as well as the initial automatic processes involved in error monitoring, while a low dose of cannabis might impact only the conscious (late) processing of errors. Copyright © 2015 Elsevier B.V. and ECNP. All rights reserved.

  1. Consistent evaluation of an ultrasound-guided surgical navigation system by utilizing an active validation platform

    NASA Astrophysics Data System (ADS)

    Kim, Younsu; Kim, Sungmin; Boctor, Emad M.

    2017-03-01

Ultrasound image-guided needle tracking systems have been widely used due to their cost-effectiveness and nonionizing radiation properties. Various surgical navigation systems have been developed by utilizing state-of-the-art sensor technologies. However, the ultrasound transmission beam thickness causes unfair initial evaluation conditions due to inconsistent placement of the target with respect to the ultrasound probe. This inconsistency also brings high uncertainty and results in large standard deviations for each measurement when comparing accuracy with and without guidance. To resolve this problem, we designed a complete evaluation platform by utilizing our mid-plane detection and time-of-flight measurement systems. The evaluating system uses a PZT element target and an ultrasound-transmitting needle. In this paper, we evaluated an optical tracker-based surgical ultrasound-guided navigation system, whereby the optical tracker tracks marker frames attached to the ultrasound probe and the needle. We performed ten needle guidance trials with a mid-plane adjustment algorithm and with a B-mode segmentation method. With the mid-plane adjustment, the result showed a mean error of 1.62+/-0.72mm. The mean error increased to 3.58+/-2.07mm without the mid-plane adjustment. Our evaluation system can reduce the effect of the beam-thickness problem and measure ultrasound image-guided technologies consistently with a minimal standard deviation. Using our novel evaluation system, ultrasound image-guided technologies can be compared under equal initial conditions. Therefore, the error can be evaluated more accurately, and the system provides better analysis of error sources such as ultrasound beam thickness.

  2. Effects of Moist Convection on Hurricane Predictability

    NASA Technical Reports Server (NTRS)

    Zhang, Fuqing; Sippel, Jason A.

    2008-01-01

    This study exemplifies inherent uncertainties in deterministic prediction of hurricane formation and intensity. Such uncertainties could ultimately limit the predictability of hurricanes at all time scales. In particular, this study highlights the predictability limit due to the effects on moist convection of initial-condition errors with amplitudes far smaller than those of any observation or analysis system. Not only can small and arguably unobservable differences in the initial conditions result in different routes to tropical cyclogenesis, but they can also determine whether or not a tropical disturbance will significantly develop. The details of how the initial vortex is built can depend on chaotic interactions of mesoscale features, such as cold pools from moist convection, whose timing and placement may significantly vary with minute initial differences. Inherent uncertainties in hurricane forecasts illustrate the need for developing advanced ensemble prediction systems to provide event-dependent probabilistic forecasts and risk assessment.

  3. The Investigation of Ghost Fluid Method for Simulating the Compressible Two-Medium Flow

    NASA Astrophysics Data System (ADS)

    Lu, Hai Tian; Zhao, Ning; Wang, Donghong

    2016-06-01

In this paper, we investigate the conservation error of two-dimensional compressible two-medium flow simulated by the front tracking method. As improved versions of the original ghost fluid method, the modified ghost fluid method and the real ghost fluid method are selected to define the interface boundary conditions, respectively, to show their different effects on the conservation error. A Riemann problem is constructed along the normal direction of the interface in the front tracking method, with the goal of obtaining an efficient procedure to track the explicit sharp interface precisely. The corresponding Riemann solutions are also used directly in these improved ghost fluid methods. Extensive numerical examples including the Sod shock tube and the shock-bubble interaction are tested to calculate the conservation error. It is found that these two ghost fluid methods perform distinctly differently for different initial conditions of the flow field, and conclusions are drawn to suggest the best choice for the combination.

  4. Statistical Properties of Differences between Low and High Resolution CMAQ Runs with Matched Initial and Boundary Conditions

    EPA Science Inventory

    The difficulty in assessing errors in numerical models of air quality is a major obstacle to improving their ability to predict and retrospectively map air quality. In this paper, using simulation outputs from the Community Multi-scale Air Quality Model (CMAQ), the statistic...

  5. Error free physically unclonable function with programmed resistive random access memory using reliable resistance states by specific identification-generation method

    NASA Astrophysics Data System (ADS)

    Tseng, Po-Hao; Hsu, Kai-Chieh; Lin, Yu-Yu; Lee, Feng-Min; Lee, Ming-Hsiu; Lung, Hsiang-Lan; Hsieh, Kuang-Yeu; Chung Wang, Keh; Lu, Chih-Yuan

    2018-04-01

A high performance physically unclonable function (PUF) implemented with WO3 resistive random access memory (ReRAM) is presented in this paper. This robust ReRAM-PUF can eliminate the bit-flipping problem at very high temperature (up to 250 °C) owing to the ample read margin between the initial resistance state and the set resistance state. Ten-year retention is also promised at temperatures up to 210 °C. These two stable resistance states enable stable operation in automotive environments from -40 to 125 °C without the need for a temperature compensation circuit. High uniqueness of the PUF can be achieved by implementing the proposed identification (ID)-generation method. An optimized forming condition moves 50% of the cells to the low resistance state while the remaining 50% stay at the initial high resistance state. Inter- and intra-PUF evaluations with unlimited separation of Hamming distance (HD) are successfully demonstrated even under corner conditions. The number of reproductions was measured to exceed 10^7 with 0% bit error rate (BER) at read voltages from 0.4 to 0.7 V.
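    The inter-/intra-PUF evaluation reported above can be sketched as follows: for an idealized error-free ReRAM PUF, repeated reads of one device give an intra-device Hamming distance of zero, while response pairs from different devices cluster near half the bit length. Device count, bit length, and the 0% read-error assumption are illustrative, not the paper's measured silicon.

      import numpy as np

      rng = np.random.default_rng(6)
      n_devices, n_bits, n_reads = 20, 128, 50

      # Idealized PUF model: each device's response is a random bit pattern
      # (50% of cells formed to low resistance); stable states are assumed
      # to give error-free repeated reads.
      responses = rng.integers(0, 2, (n_devices, n_bits))
      reads = np.repeat(responses[:, None, :], n_reads, axis=1)  # 0% BER

      def hamming(a, b):
          return np.count_nonzero(a != b)

      # Intra-PUF HD: same device, repeated reads (0 under the 0% BER model).
      intra = [hamming(reads[d, 0], reads[d, r])
               for d in range(n_devices) for r in range(1, n_reads)]
      # Inter-PUF HD: different devices (clusters near n_bits / 2).
      inter = [hamming(responses[i], responses[j])
               for i in range(n_devices) for j in range(i + 1, n_devices)]

      print("intra-HD max:", max(intra), "| inter-HD mean:", np.mean(inter))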

  6. Disruption of State Estimation in the Human Lateral Cerebellum

    PubMed Central

    Miall, R. Chris; Christensen, Lars O. D; Cain, Owen; Stanley, James

    2007-01-01

    The cerebellum has been proposed to be a crucial component in the state estimation process that combines information from motor efferent and sensory afferent signals to produce a representation of the current state of the motor system. Such a state estimate of the moving human arm would be expected to be used when the arm is rapidly and skillfully reaching to a target. We now report the effects of transcranial magnetic stimulation (TMS) over the ipsilateral cerebellum as healthy humans were made to interrupt a slow voluntary movement to rapidly reach towards a visually defined target. Errors in the initial direction and in the final finger position of this reach-to-target movement were significantly higher for cerebellar stimulation than they were in control conditions. The average directional errors in the cerebellar TMS condition were consistent with the reaching movements being planned and initiated from an estimated hand position that was 138 ms out of date. We suggest that these results demonstrate that the cerebellum is responsible for estimating the hand position over this time interval and that TMS disrupts this state estimate. PMID:18044990

  7. Adaptive estimation of nonlinear parameters of a nonholonomic spherical robot using a modified fuzzy-based speed gradient algorithm

    NASA Astrophysics Data System (ADS)

    Roozegar, Mehdi; Mahjoob, Mohammad J.; Ayati, Moosa

    2017-05-01

    This paper deals with adaptive estimation of the unknown parameters and states of a pendulum-driven spherical robot (PDSR), which is a nonlinear in parameters (NLP) chaotic system with parametric uncertainties. Firstly, the mathematical model of the robot is deduced by applying the Newton-Euler methodology for a system of rigid bodies. Then, based on the speed gradient (SG) algorithm, the states and unknown parameters of the robot are estimated online for different step length gains and initial conditions. The estimated parameters are updated adaptively according to the error between estimated and true state values. Since the errors of the estimated states and parameters as well as the convergence rates depend significantly on the value of step length gain, this gain should be chosen optimally. Hence, a heuristic fuzzy logic controller is employed to adjust the gain adaptively. Simulation results indicate that the proposed approach is highly encouraging for identification of this NLP chaotic system even if the initial conditions change and the uncertainties increase; therefore, it is reliable to be implemented on a real robot.
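    A scalar sketch of a speed-gradient-style parameter estimator: the estimate moves against the gradient of a goal function built from the one-step prediction error. The first-order plant, the gain, and the excitation signal are stand-ins for the spherical-robot model and the paper's fuzzy gain adjustment.

      import numpy as np

      # Estimate the unknown parameter "a" in x' = -a x + u online.
      dt, gamma = 0.01, 200.0
      a_true, a_hat = 2.0, 0.0
      x = 1.0

      for k in range(5000):
          u = np.sin(0.5 * k * dt)                 # persistent excitation
          x_next = x + dt * (-a_true * x + u)      # true (unknown) plant
          x_pred = x + dt * (-a_hat * x + u)       # predictor with estimate
          e = x_pred - x_next                      # prediction error
          # Speed-gradient step: move a_hat against dQ/da_hat, Q = e**2 / 2,
          # where d(x_pred)/d(a_hat) = -dt * x.
          a_hat -= gamma * e * (-dt * x)
          x = x_next

      print(f"true a = {a_true}, estimated a = {a_hat:.3f}")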

  8. Does the cost function matter in Bayes decision rule?

    PubMed

Schlüter, Ralf; Nussbaum-Thom, Markus; Ney, Hermann

    2012-02-01

    In many tasks in pattern recognition, such as automatic speech recognition (ASR), optical character recognition (OCR), part-of-speech (POS) tagging, and other string recognition tasks, we are faced with a well-known inconsistency: The Bayes decision rule is usually used to minimize string (symbol sequence) error, whereas, in practice, we want to minimize symbol (word, character, tag, etc.) error. When comparing different recognition systems, we do indeed use symbol error rate as an evaluation measure. The topic of this work is to analyze the relation between string (i.e., 0-1) and symbol error (i.e., metric, integer valued) cost functions in the Bayes decision rule, for which fundamental analytic results are derived. Simple conditions are derived for which the Bayes decision rule with integer-valued metric cost function and with 0-1 cost gives the same decisions or leads to classes with limited cost. The corresponding conditions can be tested with complexity linear in the number of classes. The results obtained do not make any assumption w.r.t. the structure of the underlying distributions or the classification problem. Nevertheless, the general analytic results are analyzed via simulations of string recognition problems with Levenshtein (edit) distance cost function. The results support earlier findings that considerable improvements are to be expected when initial error rates are high.
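    The inconsistency analyzed above can be made concrete with a toy posterior over candidate strings: the 0-1 (string-error) cost picks the MAP string, while an integer-valued edit-distance cost can prefer a different string with lower expected symbol error. The posterior values are invented for illustration.

      def levenshtein(a, b):
          """Edit distance between two symbol sequences (one-row DP)."""
          dp = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              prev, dp[0] = dp[0], i
              for j, cb in enumerate(b, 1):
                  prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                           prev + (ca != cb))
          return dp[-1]

      # Toy posterior over candidate strings given some observation:
      # the MAP string is an "outlier" far from the other candidates.
      posterior = {"xxxx": 0.4, "abca": 0.3, "abcb": 0.3}

      # 0-1 cost: pick the MAP string (minimizes string error).
      map_decision = max(posterior, key=posterior.get)

      # Metric cost: pick the string minimizing expected edit distance.
      exp_cost = {w: sum(p * levenshtein(w, v) for v, p in posterior.items())
                  for w in posterior}
      bayes_decision = min(exp_cost, key=exp_cost.get)

      print("MAP (string-error) decision:   ", map_decision)      # xxxx
      print("min expected symbol-error pick:", bayes_decision)    # abca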

  9. A Comparative Study of Interval Management Control Law Capabilities

    NASA Technical Reports Server (NTRS)

    Barmore, Bryan E.; Smith, Colin L.; Palmer, Susan O.; Abbott, Terence S.

    2012-01-01

This paper presents a new tool designed to allow rapid development and testing of different control algorithms for airborne spacing. This tool, the Interval Management Modeling and Spacing Tool (IM MAST), is a fast-time, low-fidelity tool created to model the approach of aircraft to a runway, with a focus on their interactions with each other. Errors can be induced between pairs of aircraft by varying initial positions, winds, speed profiles, and altitude profiles. Results to date show that only a few of the algorithms tested had poor behavior in the arrival and approach environment. The majority of the algorithms showed only minimal variation in performance under the test conditions. Trajectory-based algorithms showed high susceptibility to wind-forecast errors, while performing marginally better than the other algorithms under other conditions. Trajectory-based algorithms have a sizable advantage, however, of being able to perform relative spacing operations between aircraft on different arrival routes and flight profiles without employing ghosting methods. This comes at the cost of substantially increased complexity, however. Additionally, it was shown that earlier initiation of relative spacing operations provided more time for corrections to be made without any significant problems in the spacing operation itself. Initiating spacing farther out, however, would require more of the aircraft to begin spacing before they merge onto a common route.

  10. EEG Theta Dynamics within Frontal and Parietal Cortices for Error Processing during Reaching Movements in a Prism Adaptation Study Altering Visuo-Motor Predictive Planning

    PubMed Central

    Bonfiglio, Luca; Minichilli, Fabrizio; Cantore, Nicoletta; Carboncini, Maria Chiara; Piccotti, Emily; Rossi, Bruno

    2016-01-01

Modulation of frontal midline theta (fmθ) is observed during error commission, but little is known about the role of theta oscillations in correcting motor behaviours. We investigate EEG activity of healthy participants executing a reaching task under variable degrees of prism-induced visuo-motor distortion and visual occlusion of the initial arm trajectory. This task introduces directional errors of different magnitudes. The discrepancy between predicted and actual movement directions (i.e. the error), at the time when visual feedback (hand appearance) became available, elicits a signal that triggers on-line movement correction. Analyses were performed on 25 EEG channels. For each participant, the median value of the angular error of all reaching trials was used to partition the EEG epochs into high- and low-error conditions. We computed event-related spectral perturbations (ERSP) time-locked either to visual feedback or to the onset of movement correction. ERSP time-locked to the onset of visual feedback showed that fmθ increased in the high- but not in the low-error condition with an approximate time lag of 200 ms. Moreover, when single epochs were sorted by the degree of motor error, fmθ started to increase when a certain level of error was exceeded and, then, scaled with error magnitude. When ERSP were time-locked to the onset of movement correction, the fmθ increase anticipated this event with an approximate time lead of 50 ms. During successive trials, an error reduction was observed which was associated with indices of adaptation (i.e., aftereffects), suggesting the need to explore whether theta oscillations may facilitate learning. To our knowledge this is the first study where the EEG signal recorded during reaching movements was time-locked to the onset of the error visual feedback. This allowed us to conclude that theta oscillations putatively generated by anterior cingulate cortex activation are implicated in error processing in semi-naturalistic motor behaviours. PMID:26963919

  11. EEG Theta Dynamics within Frontal and Parietal Cortices for Error Processing during Reaching Movements in a Prism Adaptation Study Altering Visuo-Motor Predictive Planning.

    PubMed

    Arrighi, Pieranna; Bonfiglio, Luca; Minichilli, Fabrizio; Cantore, Nicoletta; Carboncini, Maria Chiara; Piccotti, Emily; Rossi, Bruno; Andre, Paolo

    2016-01-01

    Modulation of frontal midline theta (fmθ) is observed during error commission, but little is known about the role of theta oscillations in correcting motor behaviours. We investigated the EEG activity of healthy participants executing a reaching task under variable degrees of prism-induced visuo-motor distortion and visual occlusion of the initial arm trajectory. This task introduces directional errors of different magnitudes. The discrepancy between predicted and actual movement directions (i.e. the error), at the time when visual feedback (hand appearance) became available, elicits a signal that triggers on-line movement correction. Analyses were performed on 25 EEG channels. For each participant, the median value of the angular error of all reaching trials was used to partition the EEG epochs into high- and low-error conditions. We computed event-related spectral perturbations (ERSP) time-locked either to visual feedback or to the onset of movement correction. ERSP time-locked to the onset of visual feedback showed that fmθ increased in the high- but not in the low-error condition with an approximate time lag of 200 ms. Moreover, when single epochs were sorted by the degree of motor error, fmθ started to increase when a certain level of error was exceeded and then scaled with error magnitude. When ERSP were time-locked to the onset of movement correction, the fmθ increase anticipated this event with an approximate time lead of 50 ms. During successive trials, an error reduction was observed which was associated with indices of adaptation (i.e., aftereffects), suggesting the need to explore whether theta oscillations may facilitate learning. To our knowledge, this is the first study in which the EEG signal recorded during reaching movements was time-locked to the onset of the error visual feedback. This allowed us to conclude that theta oscillations putatively generated by anterior cingulate cortex activation are implicated in error processing in semi-naturalistic motor behaviours.

  12. Biogenic isoprene emissions driven by regional weather predictions using different initialization methods: case studies during the SEAC4RS and DISCOVER-AQ airborne campaigns

    NASA Astrophysics Data System (ADS)

    Huang, Min; Carmichael, Gregory R.; Crawford, James H.; Wisthaler, Armin; Zhan, Xiwu; Hain, Christopher R.; Lee, Pius; Guenther, Alex B.

    2017-08-01

    Land and atmospheric initial conditions of the Weather Research and Forecasting (WRF) model are often interpolated from a different model output. We perform case studies during NASA's SEAC4RS and DISCOVER-AQ Houston airborne campaigns, demonstrating that using land initial conditions directly downscaled from a coarser resolution dataset led to significant positive biases in the coupled NASA-Unified WRF (NUWRF, version 7) surface and near-surface air temperature and planetary boundary layer height (PBLH) around the Missouri Ozarks and Houston, Texas, as well as poorly partitioned latent and sensible heat fluxes. Replacing the land initial conditions with the output from a long-term offline Land Information System (LIS) simulation can effectively reduce the positive biases in NUWRF surface air temperature by ~2 °C. We also show that the LIS land initialization can modify surface air temperature errors almost 10 times as effectively as applying a different atmospheric initialization method. The LIS-NUWRF-based isoprene emission calculations by the Model of Emissions of Gases and Aerosols from Nature (MEGAN, version 2.1) are at least 20 % lower than those computed using the coarser-resolution-data-initialized NUWRF run, and are closer to aircraft-observation-derived emissions. Higher resolution MEGAN calculations are prone to amplified discrepancies with aircraft-observation-derived emissions on small scales. This is possibly a result of some limitations of MEGAN's parameterization and uncertainty in its inputs on small scales, as well as the representation error and the neglect of horizontal transport in deriving emissions from aircraft data. This study emphasizes the importance of proper land initialization to coupled atmospheric weather modeling and the follow-on emission modeling. We anticipate that it is also critical to accurately representing other processes included in air quality modeling and chemical data assimilation. Having more confidence in the weather inputs is also beneficial for determining and quantifying the other sources of uncertainty (e.g., parameterization, other input data) in the models that they drive.

  13. Sources of uncertainty in the quantification of genetically modified oilseed rape contamination in seed lots.

    PubMed

    Begg, Graham S; Cullen, Danny W; Iannetta, Pietro P M; Squire, Geoff R

    2007-02-01

    Testing of seed and grain lots is essential in the enforcement of GM labelling legislation and needs reliable procedures for which associated errors have been identified and minimised. In this paper we consider the testing of oilseed rape seed lots obtained from the harvest of a non-GM crop known to be contaminated by volunteer plants from a GM herbicide-tolerant variety. The objective was to identify and quantify the error associated with the testing of these lots, from the initial sampling to completion of the real-time PCR assay with which the level of GM contamination was quantified. The results showed that, under the controlled conditions of a single laboratory, the error associated with the real-time PCR assay was negligible in comparison with sampling error, which was exacerbated by heterogeneity in the distribution of GM seeds, most notably at a small scale, i.e., 25 cm³. Sampling error was reduced by one- to two-thirds by applying appropriate homogenisation procedures.
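
    The effect of small-scale heterogeneity on sampling error can be illustrated with a short Monte Carlo sketch (hypothetical numbers, not the paper's data): the same 0.5 % GM fraction is estimated from a single contiguous scoop, drawn either from a well-mixed lot or from a lot where the GM seeds sit in contiguous pockets.

        import numpy as np

        # Monte Carlo sketch of lot sampling error (hypothetical numbers):
        # one contiguous 3000-seed scoop from a well-mixed lot versus a lot
        # where GM seeds sit in contiguous 500-seed pockets.
        rng = np.random.default_rng(1)
        lot, p, n_sample = 1_000_000, 0.005, 3000

        homogeneous = rng.random(lot) < p            # well-mixed lot
        clustered = np.zeros(lot, dtype=bool)        # pocketed lot, same p
        for s in rng.integers(0, lot - 500, int(lot * p / 500)):
            clustered[s:s + 500] = True

        for name, seeds in [("homogeneous", homogeneous),
                            ("clustered", clustered)]:
            starts = rng.integers(0, lot - n_sample, 2000)
            est = [seeds[s:s + n_sample].mean() for s in starts]
            print(f"{name}: mean {np.mean(est):.4f}, sd {np.std(est):.4f}")

        # Homogenisation moves the clustered case toward the homogeneous one,
        # consistent with the error reduction reported above.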

  14. Healing assessment of tile sets for error tolerance in DNA self-assembly.

    PubMed

    Hashempour, M; Mashreghian Arani, Z; Lombardi, F

    2008-12-01

    An assessment of the effectiveness of healing for error tolerance in DNA self-assembly tile sets for algorithmic/nano-manufacturing applications is presented. Initially, the conditions for correct binding of a tile to an existing aggregate are analysed using a Markovian approach; based on this analysis, it is proved that correct aggregation (as identified with a so-called ideal tile set) is not always met for the existing tile sets for nano-manufacturing. A metric for assessing tile sets for healing by utilising punctures is proposed. Tile sets are investigated and assessed with respect to features such as error (mismatched tile) movement, punctured area and bond types. Subsequently, it is shown that the proposed metric can comprehensively assess the healing effectiveness of a puncture type for a tile set and its capability to attain error tolerance for the desired pattern. Extensive simulation results are provided.

  15. Prediction error and trace dominance determine the fate of fear memories after post-training manipulations

    PubMed Central

    Alfei, Joaquín M.; Ferrer Monti, Roque I.; Molina, Victor A.; Bueno, Adrián M.

    2015-01-01

    Different mnemonic outcomes have been observed when associative memories are reactivated by CS exposure and followed by amnestics. These outcomes include mere retrieval, destabilization–reconsolidation, a transitional period (which is insensitive to amnestics), and extinction learning. However, little is known about the interaction between initial learning conditions and these outcomes during a reinforced or nonreinforced reactivation. Here we systematically combined temporally specific memories with different reactivation parameters to observe whether these four outcomes are determined by the conditions established during training. First, we validated two training regimens with different temporal expectations about US arrival. Then, using Midazolam (MDZ) as an amnestic agent, fear memories in both learning conditions were submitted to retraining under parameters either identical to or different from those of the original training. Destabilization (i.e., susceptibility to MDZ) occurred when reactivation was reinforced, provided that a temporal prediction error about US arrival occurred. In subsequent experiments, both treatments were systematically reactivated by nonreinforced context exposure of different lengths, which allowed us to explore the interaction between training and reactivation lengths. These results suggest that temporal prediction error and trace dominance determine the extent to which reactivation produces the different outcomes. PMID:26179232

  16. Method for solving the problem of nonlinear heating a cylindrical body with unknown initial temperature

    NASA Astrophysics Data System (ADS)

    Yaparova, N.

    2017-10-01

    We consider the problem of heating a cylindrical body with an internal thermal source when the main characteristics of the material, such as specific heat, thermal conductivity and material density, depend on the temperature at each point of the body. We can control the surface temperature and the heat flow from the surface into the cylinder, but it is impossible to measure the temperature on the axis and the initial temperature in the entire body. This problem is associated with the temperature measurement challenge and appears in non-destructive testing, in thermal monitoring of heat treatment and in technical diagnostics of operating equipment. The mathematical model of heating is represented as a nonlinear parabolic PDE with an unknown initial condition. In this problem, both the Dirichlet and Neumann boundary conditions are given, and it is required to calculate the temperature values at the internal points of the body. To solve this problem, we propose a numerical method based on finite-difference equations and a regularization technique. The computational scheme involves solving the problem at each spatial step. As a result, we obtain the temperature function at each internal point of the cylinder, beginning from the surface down to the axis. The application of the regularization technique ensures the stability of the scheme and allows us to significantly simplify the computational procedure. We investigate the stability of the computational scheme and prove the dependence of the stability on the discretization steps and the error level of the measurement results. To obtain experimental estimates of the temperature error, computational experiments were carried out. The computational results are consistent with the theoretical error estimates and confirm the efficiency and reliability of the proposed computational scheme.
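
    A minimal sketch of the space-marching idea, under simplifying assumptions the paper does not make (constant, temperature-independent properties): starting from the measured surface temperature and the gradient implied by the surface heat flux, each radial step inward uses the heat equation to supply the second radial derivative, with a crude moving-average mollifier standing in for the paper's regularization. All numbers are illustrative.

        import numpy as np

        # Space-marching sketch for a sideways heat problem. From the surface
        # temperature u and the surface gradient ur (from the heat flux),
        # march inward using u_t = alpha * (u_rr + u_r / r) to supply u_rr;
        # smoothing the time derivative stabilizes the marching.
        alpha, R, h = 1.0e-5, 0.05, 0.005     # diffusivity, radius, step [SI]
        t = np.linspace(0.0, 600.0, 601)
        dt = t[1] - t[0]
        u = 20.0 + 5.0 * np.sin(2.0 * np.pi * t / 300.0)  # surface temperature
        ur = np.full_like(t, -50.0)                       # gradient from flux

        def smooth(v, w=9):                               # crude mollifier
            return np.convolve(v, np.ones(w) / w, mode="same")

        r = R
        while r - h > 1e-3:                   # march inward, stop off-axis
            ut = np.gradient(smooth(u), dt)   # regularized time derivative
            urr = ut / alpha - ur / r         # u_rr from the heat equation
            u, ur = u - h * ur + 0.5 * h * h * urr, ur - h * urr
            r -= h

        print("near-axis temperature range:", u.min(), u.max())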

  17. A Comparison of Two Prompting Procedures for Teaching Basic Skills to Children with Autism

    ERIC Educational Resources Information Center

    Fentress, Genevieve M.; Lerman, Dorothea C.

    2012-01-01

    We compared two prompting techniques that are commonly used to teach individuals with autism. In the "most-to-least" (MTL) prompting condition, the therapist initially delivered the most intrusive prompt necessary to achieve a correct response. Prompts were gradually faded across subsequent trials, while errors resulted in the provision of…

  18. Introduction to CAUSES: Description of Weather and Climate Models and Their Near-Surface Temperature Errors in 5 day Hindcasts Near the Southern Great Plains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morcrette, C. J.; Van Weverberg, K.; Ma, H. -Y.

    The Clouds Above the United States and Errors at the Surface (CAUSES) project is aimed at gaining a better understanding of the physical processes that lead to the creation of warm screen-temperature biases over the American Midwest, which are seen in many numerical models. Here in Part 1, a series of 5-day hindcasts, each initialised from re-analyses and performed by 11 different models, are evaluated against screen-temperature observations. All the models have a warm bias over parts of the Midwest. Several ways of quantifying the impact of the initial conditions on the evolution of the simulations are presented, showing that within a day or so all models have produced a warm bias that is representative of their bias after 5 days, and not closely tied to the conditions at the initial time. Although the surface temperature biases sometimes coincide with locations where the re-analyses themselves have a bias, there are many regions in each of the models where biases grow over the course of 5 days or are larger than the biases present in the re-analyses. At the Southern Great Plains (SGP) site, the model biases are shown not to be confined to the surface, but to extend several kilometres into the atmosphere. In most of the models there is a strong diurnal cycle in the screen-temperature bias; in some models the biases are largest around midday, while in others they are largest during the night. The different physical processes contributing to a given model's screen-temperature error will be discussed in more detail in the companion papers (Parts 2 and 3). However, the spatial coherence in the phase of the diurnal cycle of the error across wide regions, and the numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with that at SGP, suggest that the detailed evaluations of the role of different processes in contributing to errors at SGP will be representative of errors that are prevalent over a much larger spatial scale.

  19. Introduction to CAUSES: Description of weather and climate models and their near-surface temperature errors in 5-day hindcasts near the Southern Great Plains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morcrette, Cyril J.; Van Weverberg, Kwinten; Ma, H

    2018-02-16

    The Clouds Above the United States and Errors at the Surface (CAUSES) project is aimed at gaining a better understanding of the physical processes that lead to the creation of warm screen-temperature biases over the American Midwest, which are seen in many numerical models. Here in Part 1, a series of 5-day hindcasts, each initialised from re-analyses and performed by 11 different models, are evaluated against screen-temperature observations. All the models have a warm bias over parts of the Midwest. Several ways of quantifying the impact of the initial conditions on the evolution of the simulations are presented, showing that within a day or so all models have produced a warm bias that is representative of their bias after 5 days, and not closely tied to the conditions at the initial time. Although the surface temperature biases sometimes coincide with locations where the re-analyses themselves have a bias, there are many regions in each of the models where biases grow over the course of 5 days or are larger than the biases present in the re-analyses. At the Southern Great Plains (SGP) site, the model biases are shown not to be confined to the surface, but to extend several kilometres into the atmosphere. In most of the models there is a strong diurnal cycle in the screen-temperature bias; in some models the biases are largest around midday, while in others they are largest during the night. The different physical processes contributing to a given model's screen-temperature error will be discussed in more detail in the companion papers (Parts 2 and 3). However, the spatial coherence in the phase of the diurnal cycle of the error across wide regions, and the numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with that at SGP, suggest that the detailed evaluations of the role of different processes in contributing to errors at SGP will be representative of errors that are prevalent over a much larger spatial scale.

  20. Errors in Aviation Decision Making: Bad Decisions or Bad Luck?

    NASA Technical Reports Server (NTRS)

    Orasanu, Judith; Martin, Lynne; Davison, Jeannie; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    Despite efforts to design systems and procedures to support 'correct' and safe operations in aviation, errors in human judgment still occur and contribute to accidents. In this paper we examine how an NDM (naturalistic decision making) approach might help us to understand the role of decision processes in negative outcomes. Our strategy was to examine a collection of identified decision errors through the lens of an aviation decision process model and to search for common patterns. The second, and more difficult, task was to determine what might account for those patterns. The corpus we analyzed consisted of tactical decision errors identified by the NTSB (National Transportation Safety Board) from a set of accidents in which crew behavior contributed to the accident. A common pattern emerged: about three quarters of the errors represented plan-continuation errors, that is, a decision to continue with the original plan despite cues that suggested changing the course of action. Features in the context that might contribute to these errors were identified: (a) ambiguous dynamic conditions and (b) organizational and socially-induced goal conflicts. We hypothesize that 'errors' are mediated by underestimation of risk and failure to analyze the potential consequences of continuing with the initial plan. Stressors may further contribute to these effects. Suggestions for improving performance in these error-inducing contexts are discussed.

  1. Barriers and facilitators to recovering from e-prescribing errors in community pharmacies.

    PubMed

    Odukoya, Olufunmilola K; Stone, Jamie A; Chui, Michelle A

    2015-01-01

    To explore barriers and facilitators to recovery from e-prescribing errors in community pharmacies and to explore practical solutions for work system redesign to ensure successful recovery from errors. Cross-sectional qualitative design using direct observations, interviews, and focus groups. Five community pharmacies in Wisconsin. 13 pharmacists and 14 pharmacy technicians. Observational field notes and transcribed interviews and focus groups were subjected to thematic analysis guided by the Systems Engineering Initiative for Patient Safety (SEIPS) work system and patient safety model. Barriers and facilitators to recovering from e-prescription errors in community pharmacies. Organizational factors, such as communication, training, teamwork, and staffing levels, play an important role in recovering from e-prescription errors. Other factors that could positively or negatively affect recovery of e-prescription errors include level of experience, knowledge of the pharmacy personnel, availability or usability of tools and technology, interruptions and time pressure when performing tasks, and noise in the physical environment. The SEIPS model sheds light on key factors that may influence recovery from e-prescribing errors in pharmacies, including the environment, teamwork, communication, technology, tasks, and other organizational variables. To be successful in recovering from e-prescribing errors, pharmacies must provide the appropriate working conditions that support recovery from errors.

  2. A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.

    1988-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.
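
    The core of the method is the extended Euclidean algorithm applied to the key equation. The sketch below shows that loop over the prime field GF(7) for readability (real RS decoders work over GF(2^m)); per the abstract, errors-and-erasures decoding reuses the same loop with the syndromes replaced by the Forney syndromes and the iteration seeded with the erasure locator polynomial.

        # Euclidean key-equation solver sketch over GF(7) (illustrative field;
        # real decoders use GF(2^m)). Polynomials are coefficient lists,
        # lowest power first.
        P = 7

        def deg(a):                     # degree, ignoring trailing zeros
            d = len(a) - 1
            while d >= 0 and a[d] % P == 0:
                d -= 1
            return d

        def sub(a, b):
            n = max(len(a), len(b))
            a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
            return [(x - y) % P for x, y in zip(a, b)]

        def mul(a, b):
            out = [0] * (len(a) + len(b) - 1)
            for i, x in enumerate(a):
                for j, y in enumerate(b):
                    out[i + j] = (out[i + j] + x * y) % P
            return out

        def divmod_poly(a, b):          # polynomial long division over GF(P)
            if deg(a) < deg(b):
                return [0], a[:]
            q, a = [0] * (deg(a) - deg(b) + 1), a[:]
            inv = pow(b[deg(b)], P - 2, P)
            while deg(a) >= deg(b):
                s, c = deg(a) - deg(b), a[deg(a)] * inv % P
                q[s] = c
                a = sub(a, mul([0] * s + [c], b))
            return q, a

        def key_equation(S, t, init=None):
            # Euclid on (x^{2t}, S(x)), stopping when deg(remainder) < t.
            # Per the paper, errors-and-erasures decoding reuses this loop
            # with S replaced by the Forney syndromes, init set to the
            # erasure locator polynomial, and the stopping degree adjusted.
            r_prev, r = [0] * (2 * t) + [1], S[:]
            b_prev, b = [0], (init or [1])[:]
            while deg(r) >= t:
                quot, rem = divmod_poly(r_prev, r)
                r_prev, r = r, rem
                b_prev, b = b, sub(b_prev, mul(quot, b))
            return b, r                 # locator Lambda(x), evaluator Omega(x)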

  3. Effect of thematic map misclassification on landscape multi-metric assessment.

    PubMed

    Kleindl, William J; Powell, Scott L; Hauer, F Richard

    2015-06-01

    Advancements in remote sensing and computational tools have increased our awareness of large-scale environmental problems, thereby creating a need for monitoring, assessment, and management at these scales. Over the last decade, several watershed and regional multi-metric indices have been developed to assist decision-makers with planning actions at these scales. However, these tools use remote-sensing products that are subject to land-cover misclassification, and these errors are rarely incorporated in the assessment results. Here, we examined the sensitivity of a landscape-scale multi-metric index (MMI) to error from thematic land-cover misclassification and the implications of this uncertainty for resource management decisions. Through a case study, we used a simplified floodplain MMI assessment tool, whose metrics were derived from Landsat thematic maps, to initially provide results that were naive to thematic misclassification error. Using a Monte Carlo simulation model, we then incorporated map misclassification error into our MMI, resulting in four important conclusions: (1) each metric had a different sensitivity to error; (2) within each metric, the bias between the error-naive metric scores and simulated scores that incorporate potential error varied in magnitude and direction depending on the underlying land cover at each assessment site; (3) collectively, when the metrics were combined into a multi-metric index, the effects were attenuated; and (4) the index bias indicated that our naive assessment model may overestimate the floodplain condition of sites with limited human impacts and, to a lesser extent, either over- or underestimate the floodplain condition of sites with mixed land use.
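
    The Monte Carlo step can be sketched as follows (hypothetical confusion matrix and metric, not the study's): each pixel's class is resampled according to its row of the confusion matrix, and the landscape metric is recomputed to build an error distribution around the naive score.

        import numpy as np

        # Monte Carlo sketch of propagating map misclassification into a
        # landscape metric (hypothetical confusion matrix and metric).
        rng = np.random.default_rng(0)
        confusion = np.array([[0.85, 0.10, 0.05],    # rows: mapped class
                              [0.12, 0.80, 0.08],    # cols: plausible truth
                              [0.05, 0.10, 0.85]])
        land_cover = rng.integers(0, 3, size=(100, 100))   # mapped classes

        def metric(lc):                 # stand-in metric: share of class 0
            return (lc == 0).mean()

        naive = metric(land_cover)
        cdf = confusion.cumsum(axis=1)[land_cover]   # per-pixel class CDF
        sims = []
        for _ in range(1000):
            u = rng.random(land_cover.shape)[..., None]
            sims.append(metric((u < cdf).argmax(axis=-1)))

        print(f"naive {naive:.3f}, simulated {np.mean(sims):.3f} "
              f"+/- {np.std(sims):.3f}")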

  4. A pharmacometric case study regarding the sensitivity of structural model parameter estimation to error in patient reported dosing times.

    PubMed

    Knights, Jonathan; Rohatagi, Shashank

    2015-12-01

    Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.
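
    A sketch of the simulation idea under the stated conditions (one-compartment kinetics, a dosing interval of half the terminal half-life; all numbers hypothetical, not the paper's design): superpose exponentially eliminated doses, jitter the reported dose times, and examine the spread of predicted concentrations at sparse sampling times.

        import numpy as np

        # One-compartment superposition with jittered reported dose times
        # (assumed setup, not the paper's code).
        rng = np.random.default_rng(0)
        dose, V, k = 100.0, 50.0, np.log(2) / 24.0   # t_half = 24 h
        tau, n_dose = 12.0, 14                       # q12h dosing for 7 days
        t_obs = np.array([150.0, 160.0])             # sparse sampling [h]

        def conc(t, dose_times):                     # superposition of doses
            dt = t[:, None] - dose_times[None, :]
            return (dose / V) * np.where(dt > 0, np.exp(-k * dt), 0.0).sum(1)

        nominal = conc(t_obs, tau * np.arange(n_dose))
        sims = np.array([conc(t_obs,
                              tau * np.arange(n_dose)
                              + rng.normal(0.0, 1.0, n_dose))  # 1 h SD error
                         for _ in range(2000)])
        print("relative SD from dosing-time error:", sims.std(0) / nominal)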

  5. Adaptive use of research aircraft data sets for hurricane forecasts

    NASA Astrophysics Data System (ADS)

    Biswas, M. K.; Krishnamurti, T. N.

    2008-02-01

    This study uses an adaptive observational strategy for hurricane forecasting. It shows the impacts of Lidar Atmospheric Sensing Experiment (LASE) and dropsonde data sets from Convection and Moisture Experiment (CAMEX) field campaigns on hurricane track and intensity forecasts. The following cases are used in this study: Bonnie, Danielle and Georges of 1998 and Erin, Gabrielle and Humberto of 2001. A single model run for each storm is carried out using the Florida State University Global Spectral Model (FSUGSM) with the European Center for Medium Range Weather Forecasts (ECMWF) analysis as initial conditions, in addition to 50 other model runs where the analysis is randomly perturbed for each storm. The centers of maximum variance of the deep-layer-mean (DLM) heights are located from the forecast error variance fields at the 84-hr forecast. Back correlations are then performed using the centers of these maximum variances and the fields at the 36-hr forecast. The regions having the highest correlations in the vicinity of the hurricanes are indicative of regions from which the error growth emanates, suggesting the need for additional observations. Data sets are next assimilated in those areas that contain high correlations. Forecasts are computed using the new initial conditions for the storm cases, and track and intensity skills are then examined with respect to the control forecast. The adaptive strategy is capable of identifying sensitive areas where additional observations can help in reducing the hurricane track forecast errors. A reduction of position error by approximately 52% for day 3 of the forecast (averaged over 7 storm cases) over the control runs is observed. The intensity forecast shows only a slight positive impact due to the model's coarse resolution.
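
    The back-correlation step can be sketched with synthetic ensemble fields (real use would take the 50 perturbed forecasts' DLM heights): correlate the 84-hr value at the maximum-variance point with the 36-hr field at every grid point; the high-correlation region marks where error growth emanates. A shared "error mode" is injected here so the sketch has something to find.

        import numpy as np

        # Ensemble back-correlation sketch with synthetic stand-in fields.
        rng = np.random.default_rng(0)
        n_ens, ny, nx = 51, 40, 60
        mode = rng.normal(size=n_ens)                    # shared error mode
        z36 = rng.normal(size=(n_ens, ny, nx))           # 36-h fields
        z36[:, 10:20, 5:15] += 2.0 * mode[:, None, None] # upstream source
        z84 = rng.normal(size=(n_ens, ny, nx))           # 84-h fields
        z84[:, 30, 50] += 3.0 * mode                     # downstream growth

        var84 = z84.var(axis=0)
        j, i = np.unravel_index(var84.argmax(), var84.shape)
        target = z84[:, j, i]                            # max-variance point

        a36 = z36 - z36.mean(0)
        at = target - target.mean()
        corr = (a36 * at[:, None, None]).mean(0) / (a36.std(0) * at.std())
        jj, ii = np.unravel_index(np.abs(corr).argmax(), corr.shape)
        print(f"max 84-h variance at {(j, i)}; "
              f"peak back correlation {corr[jj, ii]:.2f} at {(jj, ii)}")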

  6. Influence of precision of emission characteristic parameters on model prediction error of VOCs/formaldehyde from dry building material.

    PubMed

    Wei, Wenjuan; Xiong, Jianyin; Zhang, Yinping

    2013-01-01

    Mass transfer models are useful in predicting the emissions of volatile organic compounds (VOCs) and formaldehyde from building materials in indoor environments. They are also useful for human exposure evaluation and in sustainable building design. The measurement errors in the emission characteristic parameters in these mass transfer models, i.e., the initial emittable concentration (C 0), the diffusion coefficient (D), and the partition coefficient (K), can result in errors in predicting indoor VOC and formaldehyde concentrations. These errors have not yet been quantitatively well analyzed in the literature. This paper addresses this by using modelling to assess these errors for some typical building conditions. The error in C 0, as measured in environmental chambers and applied to a reference living room in Beijing, has the largest influence on the model prediction error in indoor VOC and formaldehyde concentration, while the error in K has the least effect. A correlation between the errors in D, K, and C 0 and the error in the indoor VOC and formaldehyde concentration prediction is then derived for engineering applications. In addition, the influence of temperature on the model prediction of emissions is investigated. It shows the impact of temperature fluctuations on the prediction errors in indoor VOC and formaldehyde concentrations to be less than 7% at 23±0.5°C and less than 30% at 23±2°C.

  7. Sustained attention performance during sleep deprivation: evidence of state instability

    NASA Technical Reports Server (NTRS)

    Doran, S. M.; Van Dongen, H. P.; Dinges, D. F.

    2001-01-01

    Nathaniel Kleitman was the first to observe that sleep deprivation in humans did not eliminate the ability to perform neurobehavioral functions, but it did make it difficult to maintain stable performance for more than a few minutes. To investigate variability in performance as a function of sleep deprivation, n = 13 subjects were tested every 2 hours on a 10-minute, sustained-attention, psychomotor vigilance task (PVT) throughout 88 hours of total sleep deprivation (TSD condition), and compared to a control group of n = 15 subjects who were permitted a 2-hour nap every 12 hours (NAP condition) throughout the 88-hour period. PVT reaction time means and standard deviations increased markedly among subjects and within each individual subject in the TSD condition relative to the NAP condition. TSD subjects also had increasingly greater performance variability as a function of time on task after 18 hours of wakefulness. During sleep deprivation, variability in PVT performance reflected a combination of normal timely responses, errors of omission (i.e., lapses), and errors of commission (i.e., responding when no stimulus was present). Errors of omission and errors of commission were highly intercorrelated across deprivation in the TSD condition (r = 0.85, p = 0.0001), suggesting that performance instability is more likely to include compensatory effort than a lack of motivation. The marked increases in PVT performance variability as sleep loss continued supports the "state instability" hypothesis, which posits that performance during sleep deprivation is increasingly variable due to the influence of sleep initiating mechanisms on the endogenous capacity to maintain attention and alertness, thereby creating an unstable state that fluctuates within seconds and that cannot be characterized as either fully awake or asleep.

  8. Application of Improved 5th-Cubature Kalman Filter in Initial Strapdown Inertial Navigation System Alignment for Large Misalignment Angles.

    PubMed

    Wang, Wei; Chen, Xiyuan

    2018-02-23

    In view of the fact that the accuracy of the third-degree Cubature Kalman Filter (CKF) used for initial alignment under large misalignment angle conditions is insufficient, an improved fifth-degree CKF algorithm is proposed in this paper. In order to make full use of the innovations in filtering, the innovation covariance matrix is calculated recursively from the innovation sequence with an exponential fading factor. Then a new adaptive error covariance matrix scaling algorithm is proposed. The Singular Value Decomposition (SVD) method is used to improve the numerical stability of the fifth-degree CKF. In order to avoid the overshoot caused by excessive scaling of the error covariance matrix during the convergence stage, the scaling scheme is terminated when the gradient of the azimuth reaches its maximum. The experimental results show that the improved algorithm has better alignment accuracy with large misalignment angles than the traditional algorithm.
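
    Two ingredients can be sketched compactly, with assumed textbook-style formulas rather than the paper's exact equations: a recursively faded innovation covariance, and an SVD square root of the error covariance used in place of a Cholesky factor when generating cubature points.

        import numpy as np

        # Assumed forms, not the paper's exact equations.
        def fading_innovation_cov(C_prev, innovation, rho=0.95):
            # Older innovations are discounted by rho, so recent data dominate.
            d = np.atleast_1d(innovation)[:, None]
            return (rho * C_prev + d @ d.T) / (1.0 + rho)

        def svd_sqrt(P):
            # SVD factor used instead of Cholesky for near-singular P;
            # returns S with S @ S.T ~= P. Cubature points are then
            # x +/- sqrt(n) * S[:, i] for each column i.
            U, s, _ = np.linalg.svd((P + P.T) / 2.0)     # symmetrize first
            return U @ np.diag(np.sqrt(s))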

  9. Robust Huber-based iterated divided difference filtering with application to cooperative localization of autonomous underwater vehicles.

    PubMed

    Gao, Wei; Liu, Yalong; Xu, Bo

    2014-12-19

    A new algorithm called Huber-based iterated divided difference filtering (HIDDF) is derived and applied to cooperative localization of autonomous underwater vehicles (AUVs) supported by a single surface leader. The position states are estimated using acoustic range measurements relative to the leader, in which some disadvantages such as weak observability, large initial error and measurements contaminated with outliers are inherent. By integrating the merits of both iterated divided difference filtering (IDDF) and Huber's M-estimation methodology, the new filtering method not only achieves more accurate estimation and faster convergence than standard divided difference filtering (DDF) under conditions of weak observability and large initial error, but also exhibits robustness with respect to outlier measurements, for which the standard IDDF would exhibit severe degradation in estimation accuracy. The correctness and validity of the algorithm are demonstrated through experimental results.
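
    Huber's M-estimation ingredient can be sketched in isolation (a robust location estimate via iteratively reweighted least squares, not the full filter): residuals within a threshold keep full weight, while outliers are down-weighted.

        import numpy as np

        # Huber M-estimation sketch: IRLS robust mean of range-like data.
        def huber_mean(y, k=1.345, iters=20):
            mu = np.median(y)                            # robust start
            for _ in range(iters):
                r = y - mu
                s = 1.4826 * np.median(np.abs(r)) + 1e-12   # MAD scale
                w = np.minimum(1.0, k * s / np.maximum(np.abs(r), 1e-12))
                mu = np.sum(w * y) / np.sum(w)
            return mu

        rng = np.random.default_rng(0)
        ranges = np.concatenate([rng.normal(100.0, 1.0, 50),   # good ranges
                                 [140.0, 150.0]])              # two outliers
        print("plain mean:", ranges.mean(), " Huber mean:", huber_mean(ranges))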

  10. Analysis of Choice Stepping with Visual Interference Can Detect Prolonged Postural Preparation in Older Adults with Mild Cognitive Impairment at High Risk of Falling.

    PubMed

    Uemura, Kazuki; Hasegawa, Takashi; Tougou, Hiroki; Shuhei, Takahashi; Uchiyama, Yasushi

    2015-01-01

    We aimed to clarify postural control deficits in older adults with mild cognitive impairment (MCI) at high risk of falling by addressing the inhibitory process. This study involved 376 community-dwelling older adults with MCI. Participants were instructed to execute forward stepping on the side indicated by the central arrow while ignoring the 2 flanking arrows on each side (→→→→→, congruent, or →→←→→, incongruent). Initial weight transfer direction errors [anticipatory postural adjustment (APA) errors], step execution times, and divided phases (reaction, APA, and swing phases) were measured from vertical force data. Participants were categorized as fallers (n = 37) and non-fallers (n = 339) based on fall experiences in the last 12 months. There were no differences in the step execution times, swing phases, step error rates, and APA error rates between groups, but fallers had a significantly longer APA phase relative to non-fallers in trials of the incongruent condition with APA errors (p = 0.005). Fallers also had a longer reaction phase in trials with the correct APA, regardless of the condition (p = 0.01). Analyses of choice stepping with visual interference can detect prolonged postural preparation as a specific falling-associated deficit in older adults with MCI. © 2015 S. Karger AG, Basel.

  11. Forecast errors in dust vertical distributions over Rome (Italy): Multiple particle size representation and cloud contributions

    NASA Astrophysics Data System (ADS)

    Kishcha, P.; Alpert, P.; Shtivelman, A.; Krichak, S. O.; Joseph, J. H.; Kallos, G.; Katsafados, P.; Spyrou, C.; Gobbi, G. P.; Barnaba, F.; Nickovic, S.; Pérez, C.; Baldasano, J. M.

    2007-08-01

    In this study, forecast errors in dust vertical distributions were analyzed. This was carried out by using quantitative comparisons between dust vertical profiles retrieved from lidar measurements over Rome, Italy, performed from 2001 to 2003, and those predicted by models. Three models were used: the four-particle-size Dust Regional Atmospheric Model (DREAM), the older one-particle-size version of the SKIRON model from the University of Athens (UOA), and the pre-2006 one-particle-size Tel Aviv University (TAU) model. SKIRON and DREAM are initialized on a daily basis using the dust concentration from the previous forecast cycle, while the TAU model initialization is based on the Total Ozone Mapping Spectrometer aerosol index (TOMS AI). The quantitative comparison shows that (1) the use of four-particle-size bins in the dust modeling instead of only one-particle-size bins improves dust forecasts; (2) cloud presence could contribute to noticeable dust forecast errors in SKIRON and DREAM; and (3) as far as the TAU model is concerned, its forecast errors were mainly caused by technical problems with TOMS measurements from the Earth Probe satellite. As a result, dust forecast errors in the TAU model could be significant even under cloudless conditions. The DREAM versus lidar quantitative comparisons at different altitudes show that the model predictions are more accurate in the middle part of dust layers than in the top and bottom parts of dust layers.

  12. Effect of initial phase on error in electron energy obtained using paraxial approximation for a focused laser pulse in vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Kunwar Pal; Arya, Rashmi

    2015-09-14

    We have investigated the effect of initial phase on the error in electron energy obtained using the paraxial approximation, in a study of electron acceleration by a focused laser pulse in vacuum using a three-dimensional test-particle simulation code. The error is obtained by comparing the energy of the electron for the paraxial approximation and for the seventh-order corrected description of the fields of a Gaussian laser. The paraxial approximation predicts the wrong laser divergence and the wrong electron escape time from the pulse, which leads to a prediction of higher energy. The error shows strong phase dependence for electrons lying along the axis of the laser for a linearly polarized laser pulse. The relative error may be significant for some specific values of initial phase even at moderate values of laser spot size. The error does not show initial phase dependence for a circularly polarized laser pulse.

  13. Two Topics in Seasonal Streamflow Forecasting: Soil Moisture Initialization Error and Precipitation Downscaling

    NASA Technical Reports Server (NTRS)

    Koster, Randal; Walker, Greg; Mahanama, Sarith; Reichle, Rolf

    2012-01-01

    Continental-scale offline simulations with a land surface model are used to address two important issues in the forecasting of large-scale seasonal streamflow: (i) the extent to which errors in soil moisture initialization degrade streamflow forecasts, and (ii) the extent to which the downscaling of seasonal precipitation forecasts, if it could be done accurately, would improve streamflow forecasts. The reduction in streamflow forecast skill (with forecasted streamflow measured against observations) associated with adding noise to a soil moisture field is found to be, to first order, proportional to the average reduction in the accuracy of the soil moisture field itself. This result has implications for streamflow forecast improvement under satellite-based soil moisture measurement programs. In the second and more idealized ("perfect model") analysis, precipitation downscaling is found to have an impact on large-scale streamflow forecasts only if two conditions are met: (i) evaporation variance is significant relative to the precipitation variance, and (ii) the subgrid spatial variance of precipitation is adequately large. In the large-scale continental region studied (the conterminous United States), these two conditions are met in only a somewhat limited area.

  14. The effects of window shape and reticle presence on performance in a vertical alignment task

    NASA Technical Reports Server (NTRS)

    Rosenberg, Erika L.; Haines, Richard F.; Jordan, Kevin

    1989-01-01

    This study was conducted to evaluate the effect of selected interior work-station orientational cuing upon the ability to align a target image with local vertical in the frontal plane. Angular error from gravitational vertical in an alignment task was measured for 20 observers viewing through two window shapes (square, round), two initial orientations of a computer-generated space shuttle image, and the presence or absence of a stabilized optical alignment reticle. In terms of overall accuracy, it was found that observer error was significantly smaller for the square window and reticle-present conditions than for the round window and reticle-absent conditions. Response bias data reflected an overall tendency to undershoot and greater variability of response in the round window/no reticle condition. These results suggest that environmental cuing information, such as that provided by square window frames and alignment reticles, may aid in subjective orientation and increase accuracy of response in a Space Station proximity operations alignment task.

  15. Development and validity of an instrumented handbike: initial results of propulsion kinetics.

    PubMed

    van Drongelen, Stefan; van den Berg, Jos; Arnet, Ursina; Veeger, Dirkjan H E J; van der Woude, Lucas H V

    2011-11-01

    To develop an instrumented handbike system to measure the forces applied to the handgrip during handbiking. A 6 degrees of freedom force sensor was built into the handgrip of an attach-unit handbike, together with two optical encoders to measure the orientation of the handgrip and crank in space. Linearity, precision, and percent error were determined for static and dynamic tests. High linearity was demonstrated for both the static and the dynamic condition (r=1.01). Precision was high under the static condition (standard deviation of 0.2N), however the precision decreased with higher loads during the dynamic condition. Percent error values were between 0.3 and 5.1%. This is the first instrumented handbike system that can register 3-dimensional forces. It can be concluded that the instrumented handbike system allows for an accurate force analysis based on forces registered at the handle bars. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.

  16. Estimation of Ecosystem Parameters of the Community Land Model with DREAM: Evaluation of the Potential for Upscaling Net Ecosystem Exchange

    NASA Astrophysics Data System (ADS)

    Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.

    2015-12-01

    Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters for sites with measurement data like NEE, and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series one year in length, or covering only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition on large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters reproduced the NEE measurement data better in the verification periods, including the annual NEE sum (23% improvement), the annual NEE cycle and the average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE data outperformed estimated parameters based on yearly data; iii) in addition, those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods, so that their potential for upscaling is demonstrated. However, the simulation results also indicate that the estimated parameters may mask other model errors, which would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.
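
    The conditioning step can be illustrated with a deliberately tiny stand-in: a random-walk Metropolis sampler (not DREAM(zs), which adds differential-evolution proposals and multiple chains) fitting two toy "ecosystem parameters" of a hypothetical light-response NEE model to a synthetic series.

        import numpy as np

        # Toy Bayesian conditioning of NEE-model parameters (stand-in for
        # DREAM(zs) + CLM; model, forcing and numbers are hypothetical).
        rng = np.random.default_rng(0)
        par = np.clip(1000.0 * np.sin(np.linspace(0, 20 * np.pi, 2000)),
                      0.0, None)                       # synthetic PAR forcing

        def nee(theta):                 # toy model: uptake plus respiration
            amax, resp = theta
            return -amax * par / (par + 400.0) + resp

        obs = nee(np.array([12.0, 3.0])) + rng.normal(0.0, 1.0, par.size)

        def log_post(theta):            # flat prior on theta > 0, sigma = 1
            if np.any(theta <= 0.0):
                return -np.inf
            return -0.5 * np.sum((obs - nee(theta)) ** 2)

        theta, chain = np.array([5.0, 1.0]), []        # poor starting point
        lp = log_post(theta)
        for _ in range(10000):
            prop = theta + rng.normal(0.0, 0.1, 2)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept
                theta, lp = prop, lp_prop
            chain.append(theta.copy())
        print("posterior mean (truth 12, 3):", np.mean(chain[5000:], axis=0))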

  17. Sensitivity of mesoscale-model forecast skill to some initial-data characteristics, data density, data position, analysis procedure and measurement error

    NASA Technical Reports Server (NTRS)

    Warner, Thomas T.; Key, Lawrence E.; Lario, Annette M.

    1989-01-01

    The effects of horizontal and vertical data resolution, data density, data location, different objective analysis algorithms, and measurement error on mesoscale-forecast accuracy are studied with observing-system simulation experiments. Domain-averaged errors are shown to generally decrease with time. It is found that the vertical distribution of error growth depends on the initial vertical distribution of the error itself. Larger gravity-inertia wave noise is produced in forecasts with coarser vertical data resolution. The use of a low vertical resolution observing system with three data levels leads to more forecast errors than moderate and high vertical resolution observing systems with 8 and 14 data levels. Also, with poor vertical resolution in soundings, the initial and forecast errors are not affected by the horizontal data resolution.

  18. Acquisition of an instrumental activity of daily living in patients with Korsakoff's syndrome: a comparison of trial and error and errorless learning.

    PubMed

    Oudman, Erik; Nijboer, Tanja C W; Postma, Albert; Wijnia, Jan W; Kerklaan, Sandra; Lindsen, Karen; Van der Stigchel, Stefan

    2013-01-01

    Patients with Korsakoff's syndrome show devastating amnesia and executive deficits. Consequently, the ability to perform instrumental activities such as making coffee is frequently diminished. It is currently unknown whether patients with Korsakoff's syndrome are able to (re)learn instrumental activities. A good candidate for an effective teaching technique in Korsakoff's syndrome is errorless learning as it is based on intact implicit memory functioning. Therefore, the aim of the current study was two-fold: to investigate whether patients with Korsakoff's syndrome are able to (re)learn instrumental activities, and to compare the effectiveness of errorless learning with trial and error learning in the acquisition and maintenance of an instrumental activity, namely using a washing machine to do the laundry. Whereas initial learning performance in the errorless learning condition was superior, both intervention techniques resulted in similar improvement over eight learning sessions. Moreover, performance in a different spatial layout showed a comparable improvement. Notably, in follow-up sessions starting after four weeks without practice, performance was still elevated in the errorless learning condition, but not in the trial and error condition. The current study demonstrates that (re)learning and maintenance of an instrumental activity is possible in patients with Korsakoff's syndrome.

  19. Analytical Evaluation of a Method of Midcourse Guidance for Rendezvous with Earth Satellites

    NASA Technical Reports Server (NTRS)

    Eggleston, John M.; Dunning, Robert S.

    1961-01-01

    A digital-computer simulation was made of the midcourse or ascent phase of a rendezvous between a ferry vehicle and a space station. The simulation involved a closed-loop guidance system in which both the relative position and relative velocity between ferry and station are measured (by simulated radar) and the relative-velocity corrections required to null the miss distance are computed and applied. The results are used to study the effectiveness of a particular set of guidance equations and to study the effects of errors in the launch conditions and errors in the navigation data. A number of trajectories were investigated over a variety of initial conditions for cases in which the space station was in a circular orbit and also in an elliptic orbit. Trajectories are described in terms of a rotating coordinate system fixed in the station. As a result of this study the following conclusions are drawn. Successful rendezvous can be achieved even with launch conditions which are substantially less accurate than those obtained with present-day techniques. The average total-velocity correction required during the midcourse phase is directly proportional to the radar accuracy but the miss distance is not. Errors in the time of booster burnout or in the position of the ferry at booster burnout are less important than errors in the ferry velocity at booster burnout. The use of dead bands to account for errors in the navigational (radar) equipment appears to depend upon a compromise between the magnitude of the velocity corrections to be made and the allowable miss distance at the termination of the midcourse phase of the rendezvous. When approximate guidance equations are used, there are limits on their accuracy which are dependent on the angular distance about the earth to the expected point of rendezvous.
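
    The velocity correction that nulls the miss distance can be sketched with the standard Clohessy-Wiltshire (Hill) solution for a circular target orbit; the report's own guidance equations may differ, and all numbers below are illustrative.

        import numpy as np

        # Clohessy-Wiltshire targeting: find the relative velocity that nulls
        # the miss distance at time T, then the correction is that velocity
        # minus the current one. x is radial, y along-track, z cross-track.
        def cw_correction(r0, v0, n, T):
            s, c = np.sin(n * T), np.cos(n * T)
            Prr = np.array([[4.0 - 3.0 * c, 0.0, 0.0],
                            [6.0 * (s - n * T), 1.0, 0.0],
                            [0.0, 0.0, c]])
            Prv = np.array([[s / n, 2.0 * (1.0 - c) / n, 0.0],
                            [2.0 * (c - 1.0) / n,
                             (4.0 * s - 3.0 * n * T) / n, 0.0],
                            [0.0, 0.0, s / n]])
            v_req = np.linalg.solve(Prv, -Prr @ r0)   # nulls r(T), the miss
            return v_req - v0                         # velocity correction

        n = 2.0 * np.pi / 5580.0                # ~93-minute orbit [rad/s]
        r0 = np.array([-20e3, 80e3, 5e3])       # relative position [m]
        v0 = np.array([10.0, -60.0, 0.0])       # relative velocity [m/s]
        print("required correction [m/s]:", cw_correction(r0, v0, n, 1800.0))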

  20. Image-Guided Intraoperative Cortical Deformation Recovery Using Game Theory: Application to Neocortical Epilepsy Surgery

    PubMed Central

    DeLorenzo, Christine; Papademetris, Xenophon; Staib, Lawrence H.; Vives, Kenneth P.; Spencer, Dennis D.; Duncan, James S.

    2010-01-01

    During neurosurgery, nonrigid brain deformation prevents preoperatively-acquired images from accurately depicting the intraoperative brain. Stereo vision systems can be used to track intraoperative cortical surface deformation and update preoperative brain images in conjunction with a biomechanical model. However, these stereo systems are often plagued with calibration error, which can corrupt the deformation estimation. In order to decouple the effects of camera calibration from the surface deformation estimation, a framework that can solve for disparate and often competing variables is needed. Game theory, which was developed to handle decision making in this type of competitive environment, has been applied to various fields from economics to biology. In this paper, game theory is applied to cortical surface tracking during neocortical epilepsy surgery and used to infer information about the physical processes of brain surface deformation and image acquisition. The method is successfully applied to eight in vivo cases, resulting in an 81% decrease in mean surface displacement error. This includes a case in which some of the initial camera calibration parameters had errors of 70%. Additionally, the advantages of using a game theoretic approach in neocortical epilepsy surgery are clearly demonstrated in its robustness to initial conditions. PMID:20129844

  1. Influence of Precision of Emission Characteristic Parameters on Model Prediction Error of VOCs/Formaldehyde from Dry Building Material

    PubMed Central

    Wei, Wenjuan; Xiong, Jianyin; Zhang, Yinping

    2013-01-01

    Mass transfer models are useful in predicting the emissions of volatile organic compounds (VOCs) and formaldehyde from building materials in indoor environments. They are also useful for human exposure evaluation and in sustainable building design. The measurement errors in the emission characteristic parameters in these mass transfer models, i.e., the initial emittable concentration (C 0), the diffusion coefficient (D), and the partition coefficient (K), can result in errors in predicting indoor VOC and formaldehyde concentrations. These errors have not yet been quantitatively well analyzed in the literature. This paper addresses this by using modelling to assess these errors for some typical building conditions. The error in C 0, as measured in environmental chambers and applied to a reference living room in Beijing, has the largest influence on the model prediction error in indoor VOC and formaldehyde concentration, while the error in K has the least effect. A correlation between the errors in D, K, and C 0 and the error in the indoor VOC and formaldehyde concentration prediction is then derived for engineering applications. In addition, the influence of temperature on the model prediction of emissions is investigated. It shows the impact of temperature fluctuations on the prediction errors in indoor VOC and formaldehyde concentrations to be less than 7% at 23±0.5°C and less than 30% at 23±2°C. PMID:24312497

  2. Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data

    PubMed Central

    Zhao, Shanshan

    2014-01-01

    Summary Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling design. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469

  3. Covariate measurement error correction methods in mediation analysis with failure time data.

    PubMed

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  4. Differences between appetitive and aversive reinforcement on reorientation in a spatial working memory task.

    PubMed

    Golob, Edward J; Taube, Jeffrey S

    2002-10-17

    Tasks using appetitive reinforcers show that following disorientation rats use the shape of an arena to reorient, and cannot distinguish two geometrically similar corners to obtain a reward, despite the presence of a prominent visual cue that provides information to differentiate the two corners. Other studies show that disorientation impairs performance on certain appetitive, but not aversive, tasks. This study evaluated whether rats would make similar geometric errors in a working memory task that used aversive reinforcement. We hypothesized that in a task that used aversive reinforcement rats that were initially disoriented would not reorient by arena shape and thus make similar geometric errors. Tests were performed in a rectangular arena having one polarizing cue. In the appetitive condition water consumption was the reward. The aversive condition was a water maze task with reinforcement provided by escape to a hidden platform. In the aversive condition rats returned to the reinforced corner significantly more often than in the dry condition, and did not favor the diagonally opposite corner. Results show that rats can use cues besides arena shape to reorient in an aversive reinforcement condition. These findings may also reflect different strategies, with an escape/homing strategy in the wet condition and a foraging strategy in the dry condition.

  5. Error tracking control for underactuated overhead cranes against arbitrary initial payload swing angles

    NASA Astrophysics Data System (ADS)

    Zhang, Menghua; Ma, Xin; Rong, Xuewen; Tian, Xincheng; Li, Yibin

    2017-02-01

    This paper exploits an error tracking control method for overhead crane systems for which the error trajectories for the trolley and the payload swing can be pre-specified. The proposed method does not require that the initial payload swing angle remain zero, whereas this requirement is usually assumed in conventional methods. The significant feature of the proposed method is its superior control performance as well as its strong robustness over different or uncertain rope lengths, payload masses, desired positions, initial payload swing angles, and external disturbances. Owing to the same attenuation behavior, the desired error trajectory for the trolley does not need to be reset for each traveling distance, which makes the method easy to implement in practical applications. By converting the error tracking overhead crane dynamics to the objective system, we obtain the error tracking control law for arbitrary initial payload swing angles. Lyapunov techniques and LaSalle's invariance theorem are utilized to prove the convergence and stability of the closed-loop system. Simulation and experimental results are illustrated to validate the superior performance of the proposed error tracking control method.

  6. Distributed synchronization of networked drive-response systems: A nonlinear fixed-time protocol.

    PubMed

    Zhao, Wen; Liu, Gang; Ma, Xi; He, Bing; Dong, Yunfeng

    2017-11-01

    The distributed synchronization of networked drive-response systems is investigated in this paper. A novel nonlinear protocol is proposed to ensure that the tracking errors converge to zero in a fixed time. By comparison with previous synchronization methods, the present method considers more practical conditions, and the synchronization time does not depend on the initial conditions but can be pre-assigned offline according to the task assignment. Finally, the feasibility and validity of the presented protocol have been illustrated by a numerical simulation. Copyright © 2017. Published by Elsevier Ltd.

  7. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    PubMed

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of the overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
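
    The worst-case search described in the abstract can be sketched for the simplest case of a one-armed two-stage z-test in which the naive fixed-sample critical value is kept after an unconstrained second-stage sample size modification. This illustrates the general phenomenon, not the multiarm Dunnett computation of the paper; the design values are assumptions.

    ```python
    import numpy as np
    from scipy.stats import norm

    alpha, n1 = 0.025, 50
    z_crit = norm.ppf(1 - alpha)
    n2_grid = np.arange(1, 2001)          # candidate second-stage sizes

    def conditional_error(z1, n2):
        # P(reject | Z1 = z1) when the naive statistic
        # Z = (sqrt(n1)*Z1 + sqrt(n2)*Z2)/sqrt(n1 + n2) is compared to z_crit.
        b = (z_crit * np.sqrt(n1 + n2) - np.sqrt(n1) * z1) / np.sqrt(n2)
        return norm.sf(b)

    # Worst case: after seeing z1, pick the n2 maximizing the conditional error.
    z1_grid = np.linspace(-4, 4, 801)
    worst = np.array([conditional_error(z1, n2_grid).max() for z1 in z1_grid])

    # Integrate the worst-case conditional error over the null density of Z1.
    dz = z1_grid[1] - z1_grid[0]
    inflated = np.sum(worst * norm.pdf(z1_grid)) * dz
    print(f"nominal alpha = {alpha}, worst-case type 1 error = {inflated:.4f}")
    ```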

  8. Ketamine Effects on Memory Reconsolidation Favor a Learning Model of Delusions

    PubMed Central

    Gardner, Jennifer M.; Piggot, Jennifer S.; Turner, Danielle C.; Everitt, Jessica C.; Arana, Fernando Sergio; Morgan, Hannah L.; Milton, Amy L.; Lee, Jonathan L.; Aitken, Michael R. F.; Dickinson, Anthony; Everitt, Barry J.; Absalom, Anthony R.; Adapa, Ram; Subramanian, Naresh; Taylor, Jane R.; Krystal, John H.; Fletcher, Paul C.

    2013-01-01

    Delusions are the persistent and often bizarre beliefs that characterise psychosis. Previous studies have suggested that their emergence may be explained by disturbances in prediction error-dependent learning. Here we set up complementary studies in order to examine whether such a disturbance also modulates memory reconsolidation and hence explains their remarkable persistence. First, we quantified individual brain responses to prediction error in a causal learning task in 18 human subjects (8 female). Next, a placebo-controlled within-subjects study of the impact of ketamine was set up on the same individuals. We determined the influence of this NMDA receptor antagonist (previously shown to induce aberrant prediction error signal and lead to transient alterations in perception and belief) on the evolution of a fear memory over a 72 hour period: they initially underwent Pavlovian fear conditioning; 24 hours later, during ketamine or placebo administration, the conditioned stimulus (CS) was presented once, without reinforcement; memory strength was then tested again 24 hours later. Re-presentation of the CS under ketamine led to a stronger subsequent memory than under placebo. Moreover, the degree of strengthening correlated with individual vulnerability to ketamine's psychotogenic effects and with prediction error brain signal. This finding was partially replicated in an independent sample with an appetitive learning procedure (in 8 human subjects, 4 female). These results suggest a link between altered prediction error, memory strength and psychosis. They point to a core disruption that may explain not only the emergence of delusional beliefs but also their persistence. PMID:23776445

  9. Medical Errors Reduction Initiative

    DTIC Science & Technology

    2005-05-01

    Award Number: W81XWH-04-1-0536. Title: Medical Errors Reduction Initiative. Principal Investigator: Michael L. Mutter. Contracting Organization: The Valley Hospital, Ridgewood, NJ 07450. Report Date: May 2005. Type of Report: Annual. Prepared for: U.S. Army Medical Research and Materiel Command. Subject terms: medical error, patient safety, personal data terminal, barcodes. The abstract fragment notes the initiative is working with great success to minimize error.

  10. Monthly mean simulation experiments with a coarse-mesh global atmospheric model

    NASA Technical Reports Server (NTRS)

    Spar, J.; Klugman, R.; Lutz, R. J.; Notario, J. J.

    1978-01-01

    Substitution of observed monthly mean sea-surface temperatures (SSTs) as lower boundary conditions, in place of climatological SSTs, failed to improve the model simulations. While the impact of SST anomalies on the model output is greater at sea level than at upper levels, the impact on the monthly mean simulations is not beneficial at any level. Shifts of one and two days in initialization time produced small, but non-trivial, changes in the model-generated monthly mean synoptic fields. No improvements in the mean simulations resulted from the use of either time-averaged initial data or re-initialization with time-averaged early model output. The noise level of the model, as determined from a multiple initial state perturbation experiment, was found to be generally low, but with a noisier response to initial state errors in high latitudes than in the tropics.

  11. The Impact of Ocean Data Assimilation on Seasonal-to-Interannual Forecasts: A Case Study of the 2006 El Nino Event

    NASA Technical Reports Server (NTRS)

    Yang, Shu-Chih; Rienecker, Michele; Keppenne, Christian

    2010-01-01

    This study investigates the impact of four different ocean analyses on coupled forecasts of the 2006 El Niño event. Forecasts initialized in June 2006 using ocean analyses from an assimilation that uses flow-dependent background error covariances are compared with those using static error covariances that are not flow dependent. The flow-dependent error covariances reflect the error structures related to the background ENSO instability and are generated by the coupled breeding method. The ocean analyses used in this study result from the assimilation of temperature and salinity, with the salinity data available from Argo floats. Of the analyses, the one using information from the coupled bred vectors (BV) replicates the observed equatorial long wave propagation best and exhibits more warming features leading to the 2006 El Niño event. The forecasts initialized from the BV-based analysis agree best with the observations in terms of the growth of the warm anomaly through two warming phases. This better performance is related to the impact of the salinity analysis on the state evolution in the equatorial thermocline. The early warming is traced back to salinity differences in the upper ocean of the equatorial central Pacific, while the second warming, corresponding to the mature phase, is associated with the effect of the salinity assimilation on the depth of the thermocline in the western equatorial Pacific. The series of forecast experiments conducted here show that the structure of the salinity in the initial conditions is important to the forecasts of the extension of the warm pool and the evolution of the 2006 El Niño event.
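
    The coupled breeding method mentioned above is simple to sketch: run a control and a perturbed forecast side by side and periodically rescale their difference to a fixed amplitude, so the perturbation aligns with the growing error modes of the flow. A toy illustration with the Lorenz-63 equations standing in for the coupled model (step size, cycle length, and amplitude are arbitrary choices):

    ```python
    import numpy as np

    def lorenz_step(v, dt=0.01, s=10.0, r=28.0, b=8.0/3.0):
        # One forward-Euler step of Lorenz-63, a toy stand-in for the model.
        x, y, z = v
        return v + dt * np.array([s*(y - x), x*(r - z) - y, x*y - b*z])

    rng = np.random.default_rng(1)
    size = 1e-3                              # fixed breeding amplitude
    control = np.array([1.0, 1.0, 20.0])
    perturbed = control + size * rng.normal(size=3)

    for cycle in range(50):
        for _ in range(8):                   # integrate both runs, then rescale
            control = lorenz_step(control)
            perturbed = lorenz_step(perturbed)
        diff = perturbed - control
        growth = np.linalg.norm(diff) / size
        perturbed = control + size * diff / np.linalg.norm(diff)

    bred_vector = diff / np.linalg.norm(diff)   # flow-dependent error structure
    print("bred vector:", bred_vector, " last-cycle growth:", round(growth, 2))
    ```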

  12. Novel prescribed performance neural control of a flexible air-breathing hypersonic vehicle with unknown initial errors.

    PubMed

    Bu, Xiangwei; Wu, Xiaoyan; Zhu, Fujing; Huang, Jiaqi; Ma, Zhen; Zhang, Rui

    2015-11-01

    A novel prescribed performance neural controller with unknown initial errors is addressed for the longitudinal dynamic model of a flexible air-breathing hypersonic vehicle (FAHV) subject to parametric uncertainties. Different from traditional prescribed performance control (PPC) requiring that the initial errors have to be known accurately, this paper investigates the tracking control without accurate initial errors via exploiting a new performance function. A combined neural back-stepping and minimal learning parameter (MLP) technology is employed for exploring a prescribed performance controller that provides robust tracking of velocity and altitude reference trajectories. The highlight is that the transient performance of velocity and altitude tracking errors is satisfactory and the computational load of neural approximation is low. Finally, numerical simulation results from a nonlinear FAHV model demonstrate the efficacy of the proposed strategy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Pitch Guidance Optimization for the Orion Abort Flight Tests

    NASA Technical Reports Server (NTRS)

    Stillwater, Ryan Allanque

    2010-01-01

    The National Aeronautics and Space Administration created the Constellation program to develop the next generation of manned space vehicles and launch vehicles. The Orion abort system is initiated in the event of an unsafe condition during launch. The system has a controller gains schedule that can be tuned to reduce the attitude errors between the simulated Orion abort trajectories and the guidance trajectory. A program was created that uses the method of steepest descent to tune the pitch gains schedule by an automated procedure. The gains schedule optimization was applied to three potential abort scenarios; each scenario tested using the optimized gains schedule resulted in reduced attitude errors when compared to the Orion production gains schedule.
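
    The method of steepest descent used for the tuning can be sketched generically: step the gains against a finite-difference gradient of a scalar attitude-error cost. The quadratic cost below is a hypothetical stand-in for an actual simulation run of the abort trajectory.

    ```python
    import numpy as np

    def attitude_error_cost(gains):
        # Hypothetical surrogate for a simulation returning the integrated
        # attitude error between the abort and guidance trajectories.
        target = np.array([2.0, -1.0, 0.5])
        return np.sum((gains - target) ** 2)

    def steepest_descent(cost, gains, step=0.1, fd=1e-4, iters=100):
        for _ in range(iters):
            grad = np.array([
                (cost(gains + fd * e) - cost(gains - fd * e)) / (2 * fd)
                for e in np.eye(len(gains))
            ])                               # central finite-difference gradient
            gains = gains - step * grad      # move against the gradient
        return gains

    print(steepest_descent(attitude_error_cost, np.zeros(3)))
    ```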

  14. A Comprehensive Quality Assurance Program for Personnel and Procedures in Radiation Oncology: Value of Voluntary Error Reporting and Checklists

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalapurakal, John A., E-mail: j-kalapurakal@northwestern.edu; Zafirovski, Aleksandar; Smith, Jeffery

    Purpose: This report describes the value of a voluntary error reporting system and the impact of a series of quality assurance (QA) measures including checklists and timeouts on reported error rates in patients receiving radiation therapy. Methods and Materials: A voluntary error reporting system was instituted with the goal of recording errors, analyzing their clinical impact, and guiding the implementation of targeted QA measures. In response to errors committed in relation to treatment of the wrong patient, wrong treatment site, and wrong dose, a novel initiative involving the use of checklists and timeouts for all staff was implemented. The impact of these and other QA initiatives was analyzed. Results: From 2001 to 2011, a total of 256 errors in 139 patients after 284,810 external radiation treatments (0.09% per treatment) were recorded in our voluntary error database. The incidence of errors related to patient/tumor site, treatment planning/data transfer, and patient setup/treatment delivery was 9%, 40.2%, and 50.8%, respectively. The compliance rate for the checklists and timeouts initiative was 97% (P<.001). These and other QA measures resulted in a significant reduction in many categories of errors. The introduction of checklists and timeouts has been successful in eliminating errors related to wrong patient, wrong site, and wrong dose. Conclusions: A comprehensive QA program that regularly monitors staff compliance together with a robust voluntary error reporting system can reduce or eliminate errors that could result in serious patient injury. We recommend the adoption of these relatively simple QA initiatives including the use of checklists and timeouts for all staff to improve the safety of patients undergoing radiation therapy in the modern era.

  15. Anomaly transform methods based on total energy and ocean heat content norms for generating ocean dynamic disturbances for ensemble climate forecasts

    NASA Astrophysics Data System (ADS)

    Romanova, Vanya; Hense, Andreas

    2017-08-01

    In our study we use the anomaly transform (AT), a special case of the ensemble transform method, in which a selected set of initial oceanic anomalies in space, time and variables are defined and orthogonalized. The resulting orthogonal perturbation patterns are designed such that they pick up typical balanced anomaly structures in space and time and between variables. The metric used to set up the eigen problem is taken either as the weighted total energy, with its zonal and meridional kinetic and available potential energy terms having equal contributions, or the weighted ocean heat content, in which a disturbance is applied only to the initial temperature fields. The choices of a reference state for defining the initial anomalies are such that perturbations on either seasonal or interannual timescales are constructed. These project a priori only onto the slow modes of the ocean physical processes, such that the disturbances grow mainly in the Western Boundary Currents, in the Antarctic Circumpolar Current and in the El Niño Southern Oscillation regions. An additional set of initial conditions is designed to fit, in a least squares sense, data from a global ocean reanalysis. Applying the AT-produced sets of disturbances to oceanic initial conditions of the observation-initialized MPIOM-ESM coupled model at T63L47/GR15 resolution, four ensemble experiments and one hindcast experiment were performed. The weighted total energy norm is used to monitor the amplitudes and rates of the fastest growing error modes. The results showed minor dependence of the instabilities or error growth on the selected metric but considerable change due to the magnitude of the scaling amplitudes of the perturbation patterns. In contrast to similar atmospheric applications, we find an energy conversion from kinetic to available potential energy, which suggests a different source of uncertainty generation in the ocean than in the atmosphere, mainly associated with changes in the density field.
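
    The orthogonalization at the core of the anomaly transform reduces to a small eigenproblem in the chosen metric. A minimal sketch, with random anomalies and diagonal weights standing in for a total-energy or ocean-heat-content norm:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_state, n_members = 500, 10

    A = rng.normal(size=(n_state, n_members))   # initial anomalies (columns)
    w = rng.uniform(0.5, 2.0, n_state)          # diagonal metric weights

    # Eigendecomposition of the small (members x members) Gram matrix in the
    # chosen metric; this is the heart of an anomaly/ensemble transform.
    G = A.T @ (w[:, None] * A)
    vals, vecs = np.linalg.eigh(G)

    # The transformed perturbations are orthonormal under <u, v>_W = u^T W v.
    P = A @ vecs / np.sqrt(vals)
    print(np.allclose(P.T @ (w[:, None] * P), np.eye(n_members)))
    ```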

  16. Error sensitivity analysis in 10-30-day extended range forecasting by using a nonlinear cross-prediction error model

    NASA Astrophysics Data System (ADS)

    Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan

    2017-06-01

    Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability in the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10^-6 to 10^-2), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is large (10^-1 to 10^2; i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10^-2 to 10^-1, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined by the factual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.

  17. Application of Improved 5th-Cubature Kalman Filter in Initial Strapdown Inertial Navigation System Alignment for Large Misalignment Angles

    PubMed Central

    Wang, Wei; Chen, Xiyuan

    2018-01-01

    In view of the fact that the accuracy of the third-degree Cubature Kalman Filter (CKF) used for initial alignment under large misalignment angle conditions is insufficient, an improved fifth-degree CKF algorithm is proposed in this paper. In order to make full use of the innovation in filtering, the innovation covariance matrix is calculated recursively from the innovation sequence with an exponential fading factor. Then a new adaptive error covariance matrix scaling algorithm is proposed. The Singular Value Decomposition (SVD) method is used to improve the numerical stability of the fifth-degree CKF. In order to avoid the overshoot caused by excessive scaling of the error covariance matrix during the convergence stage, the scaling scheme is terminated when the gradient of the azimuth reaches its maximum. The experimental results show that the improved algorithm has better alignment accuracy with large misalignment angles than the traditional algorithm. PMID:29473912
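
    The SVD-based square root mentioned in the abstract can be sketched for the third-degree cubature rule; the fifth-degree rule uses a larger point set but the same square-root idea. This is a generic illustration, not the paper's full adaptive filter.

    ```python
    import numpy as np

    def cubature_points_svd(mean, cov):
        # Third-degree spherical-radial cubature points, built from an
        # SVD-based matrix square root instead of a Cholesky factor for
        # numerical robustness with near-singular covariances.
        n = mean.size
        U, s, _ = np.linalg.svd(cov)              # cov = U diag(s) U^T
        sqrt_cov = U @ np.diag(np.sqrt(s))
        xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
        return mean[:, None] + sqrt_cov @ xi      # 2n equally weighted points

    mean = np.zeros(2)
    cov = np.array([[2.0, 0.3],
                    [0.3, 1.0]])
    pts = cubature_points_svd(mean, cov)
    print(pts.mean(axis=1))                       # recovers the mean
    print(np.cov(pts, bias=True))                 # recovers the covariance
    ```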

  18. A patient-initiated voluntary online survey of adverse medical events: the perspective of 696 injured patients and families

    PubMed Central

    Southwick, Frederick S; Cranley, Nicole M; Hallisy, Julia A

    2015-01-01

    Background Preventable medical errors continue to be a major cause of death in the USA and throughout the world. Many patients have written about their experiences on websites and in published books. Methods As patients and family members who have experienced medical harm, we have created a nationwide voluntary survey in order to more broadly and systematically capture the perspective of patients and patient families experiencing adverse medical events and have used quantitative and qualitative analysis to summarise the responses of 696 patients and their families. Results Harm was most commonly associated with diagnostic and therapeutic errors, followed by surgical or procedural complications, hospital-associated infections and medication errors, and our quantitative results match those of previous provider-initiated patient surveys. Qualitative analysis of 450 narratives revealed a lack of perceived provider and system accountability, deficient and disrespectful communication and a failure of providers to listen as major themes. The consequences of adverse events included death, post-traumatic stress, financial hardship and permanent disability. These conditions and consequences led to a loss of patients’ trust in both the health system and providers. Patients and family members offered suggestions for preventing future adverse events and emphasised the importance of shared decision-making. Conclusions This large voluntary survey of medical harm highlights the potential efficacy of patient-initiated surveys for providing meaningful feedback and for guiding improvements in patient care. PMID:26092166

  19. On the primary variable switching technique for simulating unsaturated-saturated flows

    NASA Astrophysics Data System (ADS)

    Diersch, H.-J. G.; Perrochet, P.

    Primary variable switching appears as a promising numerical technique for variably saturated flows. While the standard pressure-based form of the Richards equation can suffer from poor mass balance accuracy, the mixed form with its improved conservative properties can possess convergence difficulties for dry initial conditions. On the other hand, variable switching can overcome most of the stated numerical problems. The paper deals with variable switching for finite elements in two and three dimensions. The technique is incorporated in both an adaptive error-controlled predictor-corrector one-step Newton (PCOSN) iteration strategy and a target-based full Newton (TBFN) iteration scheme. The two schemes behave differently with respect to accuracy and solution effort. Additionally, a simplified upstream weighting technique is used. Compared with conventional approaches, the primary variable switching technique represents a fast and robust strategy for unsaturated problems with dry initial conditions. The impact of the primary variable switching technique is studied over a wide range of mostly 2D and partly difficult-to-solve problems (infiltration, drainage, perched water table, capillary barrier), for which comparable results are available. It is shown that the TBFN iteration is an effective but error-prone procedure. TBFN sacrifices temporal accuracy in favor of accelerated convergence if aggressive time step sizes are chosen.

  20. Consensus of satellite cluster flight using an energy-matching optimal control method

    NASA Astrophysics Data System (ADS)

    Luo, Jianjun; Zhou, Liang; Zhang, Bo

    2017-11-01

    This paper presents an optimal control method for consensus of satellite cluster flight under a kind of energy matching condition. Firstly, the relation between energy matching and satellite periodically bounded relative motion is analyzed, and the satellite energy matching principle is applied to configure the initial conditions. Then, period-delayed errors are adopted as state variables to establish the period-delayed error dynamics models of a single satellite and of the cluster. Next, a novel satellite cluster feedback control protocol with coupling gain is designed, so that the satellite cluster periodically bounded relative motion consensus problem (the period-delayed error state consensus problem) is transformed into the stability of a set of matrices with the same low dimension. Based on the consensus region theory used in the study of multi-agent consensus issues, the coupling gain can be obtained to satisfy the requirement of the consensus region and to decouple the satellite cluster information topology from the feedback control gain matrix, which can be determined by the linear quadratic regulator (LQR) optimal method. This method achieves consensus of the satellite cluster period-delayed errors, leading to consistent semi-major axes (SMA) and energy matching across the cluster, so that the satellites exhibit globally coordinated cluster behavior. Finally, the feasibility and effectiveness of the present energy-matching optimal consensus for satellite cluster flight are verified through numerical simulations.
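
    The LQR step at the end of this construction is standard. A minimal scipy sketch, with a double integrator as a hypothetical stand-in for the period-delayed error dynamics of a single satellite:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Double-integrator surrogate; the study's actual plant comes from the
    # period-delayed relative-motion error dynamics.
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([10.0, 1.0])          # state weighting
    R = np.array([[1.0]])             # control weighting

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)   # optimal feedback u = -K x
    print("LQR gain K =", K)
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
    ```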

  1. Initial conditions for accurate N-body simulations of massive neutrino cosmologies

    NASA Astrophysics Data System (ADS)

    Zennaro, M.; Bel, J.; Villaescusa-Navarro, F.; Carbone, C.; Sefusatti, E.; Guzzo, L.

    2017-04-01

    The set-up of the initial conditions in cosmological N-body simulations is usually implemented by rescaling the desired low-redshift linear power spectrum to the required starting redshift consistently with the Newtonian evolution of the simulation. The implementation of this practical solution requires more care in the context of massive neutrino cosmologies, mainly because of the non-trivial scale-dependence of the linear growth that characterizes these models. In this work, we consider a simple two-fluid, Newtonian approximation for cold dark matter and massive neutrino perturbations that can reproduce the cold matter linear evolution predicted by Boltzmann codes such as CAMB or CLASS with an accuracy of 0.1 per cent or better for all redshifts relevant to non-linear structure formation. We use this description, in the first place, to quantify the systematic errors induced by several approximations often assumed in numerical simulations, including the typical set-up of the initial conditions for massive neutrino cosmologies adopted in previous works. We then take advantage of the flexibility of this approach to rescale the late-time linear power spectra to the simulation initial redshift, in order to be as consistent as possible with the dynamics of the N-body code and the approximations it assumes. We implement our method in a public code (REPS, 'rescaled power spectra for initial conditions with massive neutrinos', https://github.com/matteozennaro/reps) providing the initial displacements and velocities for cold dark matter and neutrino particles that will allow accurate, i.e. 1 per cent level, numerical simulations for this cosmological scenario.
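
    The rescaling step itself is direct: in linear theory the initial spectrum is the target spectrum times the squared growth ratio, which in massive-neutrino cosmologies depends on scale. In the sketch below both the spectrum shape and the growth ratio are hypothetical placeholders; REPS obtains them from the two-fluid solution or a Boltzmann code.

    ```python
    import numpy as np

    def rescale_initial_spectrum(p_target, growth_ratio):
        # delta(k, z_ini) = [D(k, z_ini) / D(k, z_target)] * delta(k, z_target),
        # so the power spectrum scales with the square of the growth ratio.
        # With massive neutrinos D depends on k: the ratio is an array.
        return p_target * growth_ratio**2

    k = np.logspace(-3, 1, 200)                    # h/Mpc
    p_target = 2e4 * k / (1 + (k / 0.02)**3)       # hypothetical z = 0 spectrum
    # Hypothetical scale-dependent growth ratio mimicking the suppressed
    # small-scale growth caused by neutrino free streaming.
    growth_ratio = 0.02 * (1 - 0.05 / (1 + (0.1 / k)**2))

    p_ini = rescale_initial_spectrum(p_target, growth_ratio)
    print(p_ini[:3])
    ```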

  2. Localized bulging in an inflated cylindrical tube of arbitrary thickness - the effect of bending stiffness

    NASA Astrophysics Data System (ADS)

    Fu, Y. B.; Liu, J. L.; Francisco, G. S.

    2016-05-01

    We study localized bulging of a cylindrical hyperelastic tube of arbitrary thickness when it is subjected to the combined action of inflation and axial extension. It is shown that with the internal pressure P and resultant axial force F viewed as functions of the azimuthal stretch on the inner surface and the axial stretch, the bifurcation condition for the initiation of a localized bulge is that the Jacobian of the vector function (P, F) should vanish. This is established using the dynamical systems theory by first computing the eigenvalues of a certain eigenvalue problem governing incremental deformations, and then deriving the bifurcation condition explicitly. The bifurcation condition is valid for all loading conditions, and in the special case of fixed resultant axial force it gives the expected result that the initiation pressure for localized bulging is precisely the maximum pressure in uniform inflation. It is shown that even if localized bulging cannot take place when the axial force is fixed, it is still possible if the axial stretch is fixed instead. The explicit bifurcation condition also provides a means to quantify precisely the effect of bending stiffness on the initiation pressure. It is shown that the (approximate) membrane theory gives good predictions for the initiation pressure, with a relative error less than 5%, for thickness/radius ratios up to 0.67. A two-term asymptotic bifurcation condition for localized bulging that incorporates the effect of bending stiffness is proposed, and is shown to be capable of giving extremely accurate predictions for the initiation pressure for thickness/radius ratios up to as large as 1.2.
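
    The stated bifurcation condition, vanishing of the Jacobian of (P, F), can be checked numerically for any constitutive choice by a finite-difference determinant scan. The surrogate P and F below are hypothetical smooth functions chosen only to produce a sign change, not the paper's finite-thickness tube formulas.

    ```python
    import numpy as np

    def P(la, lz):
        # Hypothetical pressure surrogate with a maximum in the azimuthal
        # stretch, qualitatively like inflation of a hyperelastic tube.
        return (la - 1.0) * np.exp(-0.5 * (la - 1.0)) / lz

    def F(la, lz):
        return lz - 1.0 + 0.1 * (la - 1.0)        # axial-force surrogate

    def jacobian_det(la, lz, h=1e-6):
        # Finite-difference Jacobian of (P, F) w.r.t. (la, lz).
        dPa = (P(la + h, lz) - P(la - h, lz)) / (2 * h)
        dPz = (P(la, lz + h) - P(la, lz - h)) / (2 * h)
        dFa = (F(la + h, lz) - F(la - h, lz)) / (2 * h)
        dFz = (F(la, lz + h) - F(la, lz - h)) / (2 * h)
        return dPa * dFz - dPz * dFa

    # A sign change of det J brackets the initiation of localized bulging.
    for la in np.linspace(1.5, 4.5, 7):
        print(f"lambda_a = {la:.2f}, det J = {jacobian_det(la, 1.2):+.4f}")
    ```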

  3. The fate of memory: Reconsolidation and the case of Prediction Error.

    PubMed

    Fernández, Rodrigo S; Boccia, Mariano M; Pedreira, María E

    2016-09-01

    The ability to make predictions based on stored information is a general coding strategy. A Prediction-Error (PE) is a mismatch between expected and current events. It was proposed as the process by which memories are acquired. But our memories, like ourselves, are subject to change. Thus, an acquired memory can become active and update its content or strength by a labilization-reconsolidation process. Within the reconsolidation framework, PE drives the updating of consolidated memories. Moreover, memory features, such as strength and age, are crucial boundary conditions that limit the initiation of the reconsolidation process. In order to disentangle these boundary conditions, we review the role of surprise, classical models of conditioning, and their neural correlates. Several forms of PE were found to be capable of inducing memory labilization-reconsolidation. Notably, many of the PE findings mirror those of memory reconsolidation, suggesting a strong link between these signals and memory processes. Altogether, the aim of the present work is to integrate a psychological and neuroscientific analysis of PE into a general framework for memory reconsolidation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Infrequent identity mismatches are frequently undetected

    PubMed Central

    Goldinger, Stephen D.

    2014-01-01

    The ability to quickly and accurately match faces to photographs bears critically on many domains, from controlling purchase of age-restricted goods to law enforcement and airport security. Despite its pervasiveness and importance, research has shown that face matching is surprisingly error prone. The majority of face-matching research is conducted under idealized conditions (e.g., using photographs of individuals taken on the same day) and with equal proportions of match and mismatch trials, a rate that is likely not observed in everyday face matching. In four experiments, we presented observers with photographs of faces taken an average of 1.5 years apart and tested whether face-matching performance is affected by the prevalence of identity mismatches, comparing conditions of low (10 %) and high (50 %) mismatch prevalence. Like the low-prevalence effect in visual search, we observed inflated miss rates under low-prevalence conditions. This effect persisted when participants were allowed to correct their initial responses (Experiment 2), when they had to verify every decision with a certainty judgment (Experiment 3) and when they were permitted “second looks” at face pairs (Experiment 4). These results suggest that, under realistic viewing conditions, the low-prevalence effect in face matching is a large, persistent source of errors. PMID:24500751

  5. A systematic review of clinical pharmacist interventions in paediatric hospital patients.

    PubMed

    Drovandi, Aaron; Robertson, Kelvin; Tucker, Matthew; Robinson, Niechole; Perks, Stephen; Kairuz, Therése

    2018-06-19

    Clinical pharmacists provide beneficial services to adult patients, though their benefits for paediatric hospital patients are less defined. Five databases were searched using the MeSH terms 'clinical pharmacist', 'paediatric/pediatric', 'hospital', and 'intervention' for studies conducted in hospital settings with paediatric patients that described pharmacist-initiated interventions, published between January 2000 and October 2017. The search strategy, after full-text review, identified 12 articles matching the eligibility criteria. Quality appraisal checklists from the Joanna Briggs Institute were used to appraise the eligible articles. Clinical pharmacist services had a positive impact on paediatric patient care. Medication errors intercepted by pharmacists included over- and under-dosing, missed doses, medication history gaps, allergies, and near-misses. Interventions to address these errors were positively received and implemented by physicians, with an average acceptance rate of over 95%. Clinical pharmacist-initiated education resulted in improved medication understanding and adherence, improved patient satisfaction, and control of chronic medical conditions. This review found that clinical pharmacists in paediatric wards may reduce drug-related problems and improve patient outcomes. The benefits of pharmacist involvement appear greatest when pharmacists are directly involved in ward rounds, since they can more rapidly identify medication errors during the prescribing phase and provide real-time advice and recommendations to prescribers.
    What is Known: • Complex paediatric conditions can require multiple pharmaceutical treatments, utilised in a safe manner to ensure good patient outcomes. • The benefits of pharmacist interventions when using these treatments are well-documented in adult patients, though less so in paediatric patients.
    What is New: • Pharmacists are adept at identifying and managing medication errors for paediatric patients, including incorrect doses, missed doses, and gaps in medication history. • Interventions recommended by pharmacists are generally well-accepted by prescribing physicians, especially when recommendations can be made during the prescribing phase of treatment.

  6. The Importance of Semi-Major Axis Knowledge in the Determination of Near-Circular Orbits

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Schiesser, Emil R.

    1998-01-01

    Modern orbit determination has mostly been accomplished using Cartesian coordinates. This usage has carried over in recent years to the use of GPS for satellite orbit determination. The unprecedented positioning accuracy of GPS has tended to focus attention more on the system's capability to determine the spacecraft's position at a particular epoch than on its accuracy in determination of the orbit, per se. As is well-known, the latter depends on a coordinated knowledge of position, velocity, and the correlation between their errors. Failure to determine a properly coordinated position/velocity state vector at a given epoch can lead to an epoch state that does not propagate well, and/or may not be usable for the execution of orbit adjustment maneuvers. For the quite common case of near-circular orbits, the degree to which position and velocity estimates are properly coordinated is largely captured by the error in semi-major axis (SMA) they jointly produce. Figure 1 depicts the relationships among radius error, speed error, and their correlation which exist for a typical low altitude Earth orbit. Two familiar consequences of the relationships Figure 1 shows are the following: (1) downrange position error grows at the per-orbit rate of 3π times the SMA error; (2) a velocity change imparted to the orbit will have an error of π divided by the orbit period times the SMA error. A less familiar consequence occurs in the problem of initializing the covariance matrix for a sequential orbit determination filter. An initial covariance consistent with orbital dynamics should be used if the covariance is to propagate well. Properly accounting for the SMA error of the initial state in the construction of the initial covariance accomplishes half of this objective, by specifying the partition of the covariance corresponding to down-track position and radial velocity errors. The remainder of the in-plane covariance partition may be specified in terms of the flight path angle error of the initial state. Figure 2 illustrates the effect of properly and improperly initializing a covariance. This figure was produced by propagating the covariance shown on the plot, without process noise, in a circular low Earth orbit whose period is 5828.5 seconds. The upper subplot, in which the proper relationships among position, velocity, and their correlation have been used, shows overall error growth, in terms of the standard deviations of the inertial position coordinates, of about half that of the lower subplot, whose initial covariance was based on other considerations.
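
    The two quoted consequences are simple arithmetic once an SMA error is assumed; the 10 m value below is illustrative, and the orbit period is the one quoted for Figure 2.

    ```python
    import numpy as np

    period = 5828.5       # s, circular low Earth orbit period from the abstract
    sma_error = 10.0      # m, assumed semi-major axis error (illustrative)

    # (1) Downrange position error grows at 3*pi times the SMA error per orbit.
    downrange_growth = 3 * np.pi * sma_error

    # (2) A velocity change has an error of pi/period times the SMA error.
    delta_v_error = np.pi / period * sma_error

    print(f"downrange error growth: {downrange_growth:.1f} m per orbit")
    print(f"velocity-change error:  {delta_v_error * 100:.3f} cm/s")
    ```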

  7. Stochastic Representation of Chaos Using Terminal Attractors

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2006-01-01

    A nonlinear version of the Liouville equation based on terminal attractors is part of a mathematical formalism for describing postinstability motions of dynamical systems characterized by exponential divergences of trajectories leading to chaos (including turbulence as a form of chaos). The formalism can be applied to both conservative systems (e.g., multibody systems in celestial mechanics) and dissipative systems (e.g., viscous fluids). The development of the present formalism was undertaken in an effort to remove positive Lyapunov exponents. The means chosen to accomplish this is coupling of the governing dynamical equations with the corresponding Liouville equation that describes the evolution of the flow of error probability. The underlying idea is to suppress the divergences of different trajectories that correspond to different initial conditions, without affecting a target trajectory, which is one that starts with prescribed initial conditions.

  8. Improving patient safety through quality assurance.

    PubMed

    Raab, Stephen S

    2006-05-01

    Context: Anatomic pathology laboratories use several quality assurance tools to detect errors and to improve patient safety. Objective: To review some of the anatomic pathology laboratory patient safety quality assurance practices. Design: Different standards and measures in anatomic pathology quality assurance and patient safety were reviewed. Main Outcome Measures: Frequency of anatomic pathology laboratory error, variability in the use of specific quality assurance practices, and use of data for error reduction initiatives. Results: Anatomic pathology error frequencies vary according to the detection method used. Based on secondary review, a College of American Pathologists Q-Probes study showed that the mean laboratory error frequency was 6.7%. A College of American Pathologists Q-Tracks study measuring frozen section discrepancy found that laboratories improved the longer they monitored and shared data. There is a lack of standardization across laboratories even for governmentally mandated quality assurance practices, such as cytologic-histologic correlation. The National Institutes of Health funded a consortium of laboratories to benchmark laboratory error frequencies, perform root cause analysis, and design error reduction initiatives, using quality assurance data. Based on the cytologic-histologic correlation process, these laboratories found an aggregate nongynecologic error frequency of 10.8%. Based on gynecologic error data, the laboratory at my institution used Toyota production system processes to lower gynecologic error frequencies and to improve Papanicolaou test metrics. Conclusions: Laboratory quality assurance practices have been used to track error rates, and laboratories are starting to use these data for error reduction initiatives.

  9. Local Sensitivity of Predicted CO 2 Injectivity and Plume Extent to Model Inputs for the FutureGen 2.0 site

    DOE PAGES

    Zhang, Z. Fred; White, Signe K.; Bonneville, Alain; ...

    2014-12-31

    Numerical simulations have been used for estimating CO2 injectivity, CO2 plume extent, pressure distribution, and Area of Review (AoR), and for the design of CO2 injection operations and the monitoring network for the FutureGen project. The simulation results are affected by uncertainties associated with numerous input parameters, the conceptual model, initial and boundary conditions, and factors related to injection operations. Furthermore, the uncertainties in the simulation results also vary in space and time. The key need is to identify those uncertainties that critically impact the simulation results and to quantify their impacts. We introduce an approach to determine the local sensitivity coefficient (LSC), defined as the response of the output in percent, to rank the importance of model inputs for the outputs. The uncertainty of an input with higher sensitivity has larger impacts on the output. The LSC is scalable by the error of an input parameter. The composite sensitivity of an output to a subset of inputs can be calculated by summing the individual LSC values. We propose a local sensitivity coefficient method and apply it to the FutureGen 2.0 site in Morgan County, Illinois, USA, to investigate the sensitivity of input parameters and initial conditions. The conceptual model for the site consists of 31 layers, each of which has a unique set of input parameters. The sensitivity of 11 parameters for each layer and 7 inputs as initial conditions is then investigated. For CO2 injectivity and plume size, about half of the uncertainty is due to only 4 or 5 of the 348 inputs, and 3/4 of the uncertainty is due to about 15 of the inputs. The initial conditions and the properties of the injection layer and its neighbour layers contribute most of the sensitivity. Overall, the simulation outputs are very sensitive to only a small fraction of the inputs. However, the parameters that are important for controlling CO2 injectivity are not the same as those controlling the plume size. The three most sensitive inputs for injectivity were the horizontal permeability of Mt Simon 11 (the injection layer), the initial fracture-pressure gradient, and the residual aqueous saturation of Mt Simon 11, while those for the plume area were the initial salt concentration, the initial pressure, and the initial fracture-pressure gradient. The advantages of requiring only a single set of simulation results, scalability to the proper parameter errors, and easy calculation of composite sensitivities make this approach very cost-effective for estimating AoR uncertainty and guiding cost-effective site characterization, injection well design, and monitoring network design for CO2 storage projects.
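
    The LSC computation itself is a one-at-a-time relative perturbation. A minimal sketch with a hypothetical three-input function standing in for the reservoir simulator:

    ```python
    import numpy as np

    def local_sensitivity(model, inputs, rel_step=0.01):
        # Percent change in the output per percent change in each input, so
        # coefficients can be ranked and summed into composite sensitivities.
        y0 = model(inputs)
        lsc = np.empty(len(inputs))
        for i in range(len(inputs)):
            x = inputs.copy()
            x[i] *= 1.0 + rel_step
            lsc[i] = ((model(x) - y0) / y0) / rel_step
        return lsc

    def toy_injectivity(x):
        # Hypothetical stand-in for a simulated output such as CO2
        # injectivity as a function of permeability, pressure, saturation.
        perm, pressure, sat = x
        return perm**1.5 * pressure / (1.0 + sat)

    lsc = local_sensitivity(toy_injectivity, np.array([100.0, 20.0, 0.3]))
    print("LSC per input:", lsc)
    print("composite LSC of inputs 0 and 1:", lsc[:2].sum())
    ```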

  10. Localized strain measurements of the intervertebral disc annulus during biaxial tensile testing.

    PubMed

    Karakolis, Thomas; Callaghan, Jack P

    2015-01-01

    Both inter-lamellar and intra-lamellar failures of the annulus have been described as potential modes of disc herniation. Attempts to characterize initial lamellar failure of the annulus have involved tensile testing of small tissue samples. The purpose of this study was to evaluate a method of measuring local surface strains through image analysis of a tensile test conducted on an isolated sample of annular tissue, in order to enhance future studies of intervertebral disc failure. An annulus tissue sample was biaxially strained to 10%. High-resolution images captured the tissue surface throughout testing. Three test conditions were evaluated: submerged, non-submerged and marker. Surface strains were calculated for the two non-marker conditions based on the motion of virtual tracking points. Tracking algorithm parameters (grid resolution and template size) were varied to determine their effect on estimated strains. The accuracy of point tracking was assessed through a comparison of the non-marker conditions to a condition involving markers placed on the tissue surface. Grid resolution had a larger effect on local strain than template size. Average local strain error ranged from 3% to 9.25% and from 0.1% to 2.0% for the non-submerged and submerged conditions, respectively. Local strain estimation has a relatively high potential for error. Submerging the tissue provided superior strain estimates.

  11. An electrophysiological signal that precisely tracks the emergence of error awareness

    PubMed Central

    Murphy, Peter R.; Robertson, Ian H.; Allen, Darren; Hester, Robert; O'Connell, Redmond G.

    2012-01-01

    Recent electrophysiological research has sought to elucidate the neural mechanisms necessary for the conscious awareness of action errors. Much of this work has focused on the error positivity (Pe), a neural signal that is specifically elicited by errors that have been consciously perceived. While awareness appears to be an essential prerequisite for eliciting the Pe, the precise functional role of this component has not been identified. Twenty-nine participants performed a novel variant of the Go/No-go Error Awareness Task (EAT) in which awareness of commission errors was indicated via a separate speeded manual response. Independent component analysis (ICA) was used to isolate the Pe from other stimulus- and response-evoked signals. Single-trial analysis revealed that Pe peak latency was highly correlated with the latency at which awareness was indicated. Furthermore, the Pe was more closely related to the timing of awareness than it was to the initial erroneous response. This finding was confirmed in a separate study which derived IC weights from a control condition in which no indication of awareness was required, thus ruling out motor confounds. A receiver-operating-characteristic (ROC) curve analysis showed that the Pe could reliably predict whether an error would be consciously perceived up to 400 ms before the average awareness response. Finally, Pe latency and amplitude were found to be significantly correlated with overall error awareness levels between subjects. Our data show for the first time that the temporal dynamics of the Pe trace the emergence of error awareness. These findings have important implications for interpreting the results of clinical EEG studies of error processing. PMID:22470332

  12. A predictability study of Lorenz's 28-variable model as a dynamical system

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, V.

    1993-01-01

    The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.
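
    The growth phases described here, exponential growth at a rate set by the largest Liapunov exponent followed by nonlinear slowdown and saturation, appear even in the simplest chaotic systems. A toy reproduction with the logistic map (largest Liapunov exponent ln 2) in place of the 28-variable model:

    ```python
    import numpy as np

    n_steps, n_pairs, eps0 = 30, 2000, 1e-8
    rng = np.random.default_rng(3)

    x = rng.uniform(0.1, 0.9, n_pairs)
    y = x + eps0                                 # perturbed twin trajectories
    errors = np.empty((n_steps, n_pairs))
    for t in range(n_steps):
        x, y = 4*x*(1-x), 4*y*(1-y)              # chaotic logistic map
        errors[t] = np.abs(y - x)

    mean_err = errors.mean(axis=1)
    early_rate = np.diff(np.log(mean_err[:15])).mean()
    print(f"early exponential growth rate: {early_rate:.3f} (ln 2 = 0.693)")
    print(f"saturated mean error: {mean_err[-1]:.3f}")
    ```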

  13. Local error estimates for discontinuous solutions of nonlinear hyperbolic equations

    NASA Technical Reports Server (NTRS)

    Tadmor, Eitan

    1989-01-01

    Let $u(x,t)$ be the possibly discontinuous entropy solution of a nonlinear scalar conservation law with smooth initial data. Suppose $u_\varepsilon(x,t)$ is the solution of an approximate viscosity regularization, where $\varepsilon > 0$ is the small viscosity amplitude. It is shown that by post-processing the small viscosity approximation $u_\varepsilon$, pointwise values of $u$ and its derivatives can be recovered with an error as close to $\varepsilon$ as desired. The analysis relies on the adjoint problem of the forward error equation, which in this case amounts to a backward linear transport equation with discontinuous coefficients. The novelty of this approach is to use a (generalized) E-condition of the forward problem in order to deduce a $W^{1,\infty}$ energy estimate for the discontinuous backward transport equation; this, in turn, leads to an $\varepsilon$-uniform estimate on moments of the error $u_\varepsilon - u$. This approach does not follow the characteristics and, therefore, applies mutatis mutandis to other approximate solutions such as E-difference schemes.

  14. BEATBOX v1.0: Background Error Analysis Testbed with Box Models

    NASA Astrophysics Data System (ADS)

    Knote, Christoph; Barré, Jérôme; Eckl, Max

    2018-02-01

    The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observing system simulation experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example observation error, model covariances, ensemble size, and the perturbation distribution in the initial conditions. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.
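
    The OSSE pattern underlying such a testbed is compact: a nature run supplies the truth, an observation simulator adds error, and an assimilation step updates an ensemble. The toy below shows the pattern for a single scalar with a stochastic (perturbed-observation) ensemble Kalman update; it is generic and does not use BEATBOX's actual interface.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    truth = 50.0                                   # nature run: tracer, ppb
    obs_err = 2.0
    obs = truth + rng.normal(0.0, obs_err)         # simulated observation

    n_ens = 50
    ensemble = rng.normal(45.0, 5.0, n_ens)        # biased, spread-out prior

    # Scalar ensemble Kalman update: K = P / (P + R).
    P = ensemble.var(ddof=1)
    K = P / (P + obs_err**2)
    obs_perturbed = obs + rng.normal(0.0, obs_err, n_ens)
    analysis = ensemble + K * (obs_perturbed - ensemble)

    print(f"prior mean {ensemble.mean():.2f}, "
          f"analysis mean {analysis.mean():.2f}, truth {truth}")
    ```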

  15. U-500 Insulin Safety Concerns Mount; Improved Labeling Needed for Camphor Product; Cardizem-Cardene Mix-up; Initiative to Eliminate Tubing Misconnections.

    PubMed

    Cohen, Michael R; Smetzer, Judy L

    2014-02-01

    These medication errors have occurred in health care facilities at least once. They will happen again-perhaps where you work. Through education and alertness of personnel and procedural safeguards, they can be avoided. You should consider publishing accounts of errors in your newsletters and/or presenting them at your inservice training programs. Your assistance is required to continue this feature. The reports described here were received through the Institute for Safe Medication Practices (ISMP) Medication Errors Reporting Program. Any reports published by ISMP will be anonymous. Comments are also invited; the writers' names will be published if desired. ISMP may be contacted at the address shown below. Errors, close calls, or hazardous conditions may be reported directly to ISMP through the ISMP Web site (www.ismp.org), by calling 800-FAIL-SAFE, or via e-mail at ismpinfo@ismp.org. ISMP guarantees the confidentiality and security of the information received and respects reporters' wishes as to the level of detail included in publications.

  16. Impact of Standardized Communication Techniques on Errors during Simulated Neonatal Resuscitation.

    PubMed

    Yamada, Nicole K; Fuerch, Janene H; Halamek, Louis P

    2016-03-01

    Current patterns of communication in high-risk clinical situations, such as resuscitation, are imprecise and prone to error. We hypothesized that the use of standardized communication techniques would decrease the errors committed by resuscitation teams during neonatal resuscitation. In a prospective, single-blinded, matched pairs design with block randomization, 13 subjects performed as a lead resuscitator in two simulated complex neonatal resuscitations. Two nurses assisted each subject during the simulated resuscitation scenarios. In one scenario, the nurses used nonstandard communication; in the other, they used standardized communication techniques. The performance of the subjects was scored to determine errors committed (defined relative to the Neonatal Resuscitation Program algorithm), time to initiation of positive pressure ventilation (PPV), and time to initiation of chest compressions (CC). In scenarios in which subjects were exposed to standardized communication techniques, there was a trend toward decreased error rate, time to initiation of PPV, and time to initiation of CC. While not statistically significant, there was a 1.7-second improvement in time to initiation of PPV and a 7.9-second improvement in time to initiation of CC. Should these improvements in human performance be replicated in the care of real newborn infants, they could improve patient outcomes and enhance patient safety. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  17. Improved ensemble-mean forecasting of ENSO events by a zero-mean stochastic error model of an intermediate coupled model

    NASA Astrophysics Data System (ADS)

    Zheng, Fei; Zhu, Jiang

    2017-04-01

    How to design a reliable ensemble prediction strategy that accounts for the major uncertainties of a forecasting system is a crucial issue for performing an ensemble forecast. In this study, a new stochastic perturbation technique is developed to improve the prediction skill for the El Niño-Southern Oscillation (ENSO) using an intermediate coupled model. We first estimate and analyze the model uncertainties from ensemble Kalman filter analysis results obtained by assimilating the observed sea surface temperatures. Then, based on the pre-analyzed properties of the model errors, we develop a zero-mean stochastic model-error model to characterize the model uncertainties mainly induced by physical processes missing from the original model (e.g., stochastic atmospheric forcing, extra-tropical effects, the Indian Ocean Dipole). Finally, we perturb each member of an ensemble forecast at each step with the developed stochastic model-error model during the 12-month forecasting process, adding the zero-mean perturbations to the physical fields to mimic the presence of missing processes and high-frequency stochastic noises. The impacts of stochastic model-error perturbations on ENSO deterministic predictions are examined by performing two sets of 21-yr hindcast experiments, which are initialized from the same initial conditions and differ only in whether they include the stochastic perturbations. The comparison results show that the stochastic perturbations have a significant effect on improving the ensemble-mean prediction skill during the entire 12-month forecasting process. This improvement occurs mainly because the nonlinear terms in the model can form a positive ensemble-mean effect from a series of zero-mean perturbations, which reduces the forecasting biases and thereby corrects the forecast through this nonlinear heating mechanism.
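
    The rectification mechanism invoked in the final sentence is easy to demonstrate: a convex nonlinearity turns zero-mean perturbations into a nonzero ensemble-mean tendency (Jensen's inequality). A toy demonstration with a hypothetical quadratic heating term:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def nonlinear_heating(T):
        # Hypothetical convex heating term standing in for the coupled
        # model's nonlinearities.
        return 0.5 * T**2

    T0 = 1.0
    noise = rng.normal(0.0, 0.5, 100_000)   # zero-mean stochastic perturbations

    unperturbed = nonlinear_heating(T0)
    ensemble_mean = nonlinear_heating(T0 + noise).mean()

    # The ensemble mean of the perturbed tendency exceeds the unperturbed
    # tendency even though the perturbations average to zero.
    print(f"f(T0) = {unperturbed:.3f}, mean f(T0 + noise) = {ensemble_mean:.3f}")
    ```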

  18. The FIM-iHYCOM Model in SubX: Evaluation of Subseasonal Errors and Variability

    NASA Astrophysics Data System (ADS)

    Green, B.; Sun, S.; Benjamin, S.; Grell, G. A.; Bleck, R.

    2017-12-01

    NOAA/ESRL/GSD has produced both real-time and retrospective forecasts for the Subseasonal Experiment (SubX) using the FIM-iHYCOM model. FIM-iHYCOM couples the atmospheric Flow-following finite volume Icosahedral Model (FIM) to an icosahedral-grid version of the Hybrid Coordinate Ocean Model (HYCOM). This coupled model is unique in terms of its grid structure: in the horizontal, the icosahedral meshes are perfectly matched for FIM and iHYCOM, eliminating the need for a flux interpolator; in the vertical, both models use adaptive arbitrary Lagrangian-Eulerian hybrid coordinates. For SubX, FIM-iHYCOM initializes four time-lagged ensemble members around each Wednesday, which are integrated forward to provide 32-day forecasts. While it has already been shown that this model has similar predictive skill as NOAA's operational CFSv2 in terms of the RMM index, FIM-iHYCOM is still fairly new and thus its overall performance needs to be thoroughly evaluated. To that end, this study examines model errors as a function of forecast lead week (1-4) - i.e., model drift - for key variables including 2-m temperature, precipitation, and SST. Errors are evaluated against two reanalysis products: CFSR, from which FIM-iHYCOM initial conditions are derived, and the quasi-independent ERA-Interim. The week 4 error magnitudes are similar between FIM-iHYCOM and CFSv2, albeit with different spatial distributions. Also, intraseasonal variability as simulated in these two models will be compared with reanalyses. The impact of hindcast frequency (4 times per week, once per week, or once per day) on the model climatology is also examined to determine the implications for systematic error correction in FIM-iHYCOM.

  19. Psychophysical evaluation of three-dimensional auditory displays

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.

    1991-01-01

    Work during this reporting period included the completion of our research on the use of principal components analysis (PCA) to model the acoustical head-related transfer functions (HRTFs) that are used to synthesize virtual sources for three-dimensional auditory displays. In addition, a series of studies was initiated on the perceptual errors made by listeners when localizing free-field and virtual sources. Previous research has revealed that under certain conditions these perceptual errors, often called 'confusions' or 'reversals', are both large and frequent, thus seriously compromising the utility of a 3-D virtual auditory display. The long-range goal of our work in this area is to elucidate the sources of the confusions and to develop signal-processing strategies to reduce or eliminate them.

  20. A Generalization of the Karush-Kuhn-Tucker Theorem for Approximate Solutions of Mathematical Programming Problems Based on Quadratic Approximation

    NASA Astrophysics Data System (ADS)

    Voloshinov, V. V.

    2018-03-01

    In computations related to mathematical programming problems, one often has to consider approximate, rather than exact, solutions satisfying the constraints of the problem and the optimality criterion with a certain error. For determining stopping rules for iterative procedures, in the stability analysis of solutions with respect to errors in the initial data, etc., a justified characteristic of such solutions that is independent of the numerical method used to obtain them is needed. A necessary δ-optimality condition in the smooth mathematical programming problem that generalizes the Karush-Kuhn-Tucker theorem for the case of approximate solutions is obtained. The Lagrange multipliers corresponding to the approximate solution are determined by solving an approximating quadratic programming problem.
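
    To fix ideas, the classical Karush-Kuhn-Tucker conditions and a δ-relaxed counterpart can be written schematically as follows; this is our own rendering under stated assumptions, not the paper's exact statement. For min f(x) subject to g_i(x) ≤ 0 with smooth f and g_i, an approximate solution x_δ admits multipliers λ_i ≥ 0 such that

      \left\| \nabla f(x_\delta) + \sum_i \lambda_i \nabla g_i(x_\delta) \right\| \le \varepsilon_1(\delta), \qquad g_i(x_\delta) \le \delta, \qquad \left| \lambda_i \, g_i(x_\delta) \right| \le \varepsilon_2(\delta),

    with ε_1(δ), ε_2(δ) → 0 as δ → 0, so that the classical theorem is recovered in the exact limit; in the paper the multipliers are obtained by solving an approximating quadratic programming problem.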

  1. Effects of Listening Conditions, Error Types, and Ensemble Textures on Error Detection Skills

    ERIC Educational Resources Information Center

    Waggoner, Dori T.

    2011-01-01

    This study was designed with three main purposes: (a) to investigate the effects of two listening conditions on error detection accuracy, (b) to compare error detection responses for rhythm errors and pitch errors, and (c) to examine the influences of texture on error detection accuracy. Undergraduate music education students (N = 18) listened to…

  2. Impacts of motivational valence on the error-related negativity elicited by full and partial errors.

    PubMed

    Maruo, Yuya; Schacht, Annekathrin; Sommer, Werner; Masaki, Hiroaki

    2016-02-01

    Affect and motivation influence the error-related negativity (ERN) elicited by full errors; however, it is unknown whether they also influence ERNs to correct responses accompanied by covert incorrect response activation (partial errors). Here we compared a neutral condition with conditions where correct responses were rewarded with gains or incorrect responses were punished with losses of small amounts of money. Data analysis distinguished ERNs elicited by full and partial errors. In the reward and punishment conditions, ERN amplitudes to both full and partial errors were larger than in the neutral condition, confirming participants' sensitivity to the significance of errors. We also investigated the relationships between ERN amplitudes and the behavioral inhibition and activation systems (BIS/BAS). Regardless of reward/punishment condition, participants scoring higher on BAS showed smaller ERN amplitudes in full error trials. These findings provide further evidence that the ERN is related to motivational valence and that similar relationships hold for both full and partial errors.

  3. Revisiting the false alarm in the 2014 El Niño prediction

    NASA Astrophysics Data System (ADS)

    Shin, C. S.; Huang, B.

    2016-12-01

    In early 2014, most dynamical forecast models predicted a strong El Niño developing in the following winter. However, this forecast turned out to be a representative case of the false alarms issued since 2000. In this study, a set of CFSv2 ensemble seasonal reforecasts is conducted to examine possible causes of the unrealistic 2014 El Niño prediction. Focusing on the NINO3.4 index, the ensemble-mean reforecast initialized in April 2014 predicted a very strong El Niño, comparable to the 1997-98 event, with most ensemble members warmer than the observations. In contrast, the ensemble-mean reforecasts initialized in January (July) 2014 predicted a slower growth (a decline) of the NINO3.4 index out to a 12-month lead (from November to the spring of 2015), with the ensemble spread enveloping the observations. Since the observed SST anomalies in the equatorial eastern Pacific changed polarity from the coldest anomalies in February, accompanied by strong easterly wind, to warmer SST in mid-April, the instantaneous atmospheric and oceanic initial states in early April 2014 may have misrepresented these intraseasonal variations, possibly resulting in a warm bias in the equatorial Pacific even at 0-month lead. Our experiments show that colder ocean-surface initial conditions in the tropical eastern Pacific tend to hinder the development of warm SST anomalies in the equatorial eastern Pacific and weaken the Bjerknes-type air-sea feedback in the summer of 2014, which reduces the excessive westerly wind (warm SST anomalies) in the equatorial western Pacific (near the Dateline) and weakens the air-sea feedback. As a result, the predicted amplitude of NINO3.4 at the peak phase becomes comparable to the observed one, suggesting that initial condition errors are partially responsible for the false alarm in the 2014 El Niño prediction issued in the spring. Nonetheless, the initial condition errors cannot account for the easterly wind burst observed in mid-June, associated with enhanced extratropical anticyclonic atmospheric circulation anomalies in the Southern Hemisphere, which is regarded as another major factor stalling the El Niño occurrence in 2014. What drives this anomalous atmospheric forcing in mid-June and how much it contributes to a more realistic prediction of the 2014 El Niño are also discussed.
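
    For reference, the NINO3.4 index discussed throughout is the SST anomaly averaged over 5°S-5°N, 170°W-120°W. A minimal computation, assuming a gridded SST-anomaly array with latitude/longitude coordinate vectors (array names and conventions are illustrative):

      import numpy as np

      def nino34(sst_anom, lats, lons):
          """Area-weighted mean SST anomaly over the NINO3.4 box
          (5S-5N, 170W-120W; longitudes assumed in the 0-360 convention)."""
          box = ((np.abs(lats)[:, None] <= 5.0)
                 & (lons[None, :] >= 190.0) & (lons[None, :] <= 240.0))
          w = np.cos(np.deg2rad(lats))[:, None] * np.ones(lons.size)[None, :]
          return float((sst_anom * w)[box].sum() / w[box].sum())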

  4. Training to estimate blood glucose and to form associations with initial hunger

    PubMed Central

    Ciampolini, Mario; Bianchi, Riccardo

    2006-01-01

    Background: The will to eat is a decision associated with conditioned responses and with unconditioned body sensations that reflect changes in metabolic biomarkers. Here, we investigate whether this decision can be delayed until blood glucose is allowed to fall to low levels, when presumably feeding behavior is mostly unconditioned. Following such an eating pattern might avoid some of the metabolic risk factors that are associated with high glycemia. Results: In this 7-week study, patients were trained to estimate their blood glucose at meal times by associating feelings of hunger with glycemic levels determined by standard blood glucose monitors and to eat only when glycemia was < 85 mg/dL. At the end of the 7-week training period, estimated and measured glycemic values were found to be linearly correlated in the trained group (r = 0.82; p = 0.0001) but not in the control (untrained) group (r = 0.10; p = 0.40). Fewer subjects in the trained group were hungry than those in the control group (p = 0.001). The 18 hungry subjects of the trained group had significantly lower glucose levels (80.1 ± 6.3 mg/dL) than the 42 hungry control subjects (89.2 ± 10.2 mg/dL; p = 0.01). Moreover, the trained hungry subjects estimated their glycemia (78.1 ± 6.7 mg/dL; estimation error: 3.2 ± 2.4% of the measured glycemia) more accurately than the control hungry subjects (75.9 ± 9.8 mg/dL; estimation error: 16.7 ± 11.0%; p = 0.0001). Also the estimation error of the entire trained group (4.7 ± 3.6%) was significantly lower than that of the control group (17.1 ± 11.5%; p = 0.0001). A value of glycemia at initial feelings of hunger was provisionally identified as 87 mg/dL. Below this level, estimation showed lower error in both trained (p = 0.04) and control subjects (p = 0.001). Conclusion: Subjects could be trained to accurately estimate their blood glucose and to recognize their sensations of initial hunger at low glucose concentrations. These results suggest that it is possible to make a behavioral distinction between unconditioned and conditioned hunger, and to achieve a cognitive will to eat by training. PMID:17156448

  5. Improving Global Reanalyses and Short Range Forecast Using TRMM and SSM/I-Derived Precipitation and Moisture Observations

    NASA Technical Reports Server (NTRS)

    Hou, Arthur Y.; Zhang, Sara Q.; deSilva, Arlindo M.

    2000-01-01

    Global reanalyses currently contain significant errors in the primary fields of the hydrological cycle such as precipitation, evaporation, moisture, and the related cloud fields, especially in the tropics. The Data Assimilation Office (DAO) at the NASA Goddard Space Flight Center has been exploring the use of tropical rainfall and total precipitable water (TPW) observations from the TRMM Microwave Imager (TMI) and the Special Sensor Microwave/Imager (SSM/I) instruments to improve short-range forecasts and reanalyses. We describe a "1+1"D procedure for assimilating 6-hr averaged rainfall and TPW in the Goddard Earth Observing System (GEOS) Data Assimilation System (DAS). The algorithm is based on a 6-hr time integration of a column version of the GEOS DAS, hence the "1+1"D designation. The scheme minimizes the least-squares differences between the observed TPW and rain rates and those produced by the column model over the 6-hr analysis window. This "1+1"D scheme, in its generalization to four dimensions, is related to standard 4D variational assimilation but uses analysis increments instead of the initial condition as the control variable. Results show that assimilating the TMI and SSM/I rainfall and TPW observations improves not only the precipitation and moisture fields but also key climate parameters such as clouds, radiation, upper-tropospheric moisture, and the large-scale circulation in the tropics. In particular, assimilating these data reduces the state-dependent systematic errors in the assimilated products. The improved analysis also provides better initial conditions for short-range forecasts, but the forecast improvements are smaller than those in the time-averaged assimilation fields, indicating that these data types are effective in correcting biases and other errors of the forecast model within data assimilation.
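
    The core of the "1+1"D idea, a least-squares misfit over the 6-hr window with the analysis increment as the control variable, can be caricatured in a few lines; `column_model` below stands for a black-box 6-hr column integration, and all names and weights are illustrative assumptions rather than the GEOS implementation:

      import numpy as np
      from scipy.optimize import minimize

      def cost(increment, x0, column_model, tpw_obs, rain_obs,
               w_tpw=1.0, w_rain=1.0, w_bg=0.1):
          """Misfit of 6-hr-mean modeled TPW and rain rate to observations,
          plus a simple background penalty on the analysis increment."""
          tpw_mod, rain_mod = column_model(x0 + increment)   # 6-hr column run
          return (w_tpw * (tpw_mod - tpw_obs) ** 2
                  + w_rain * (rain_mod - rain_obs) ** 2
                  + w_bg * np.sum(increment ** 2))

      # res = minimize(cost, np.zeros_like(x0),
      #                args=(x0, column_model, tpw_obs, rain_obs))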

  6. Improving Global Reanalyses and Short-Range Forecast Using TRMM and SSM/I-Derived Precipitation and Moisture Observations

    NASA Technical Reports Server (NTRS)

    Hou, Arthur Y.; Zhang, Sara Q.; daSilva, Arlindo M.

    1999-01-01

    Global reanalyses currently contain significant errors in the primary fields of the hydrological cycle such as precipitation, evaporation, moisture, and the related cloud fields, especially in the tropics. The Data Assimilation Office (DAO) at the NASA Goddard Space Flight Center has been exploring the use of tropical rainfall and total precipitable water (TPW) observations from the TRMM Microwave Imager (TMI) and the Special Sensor Microwave/Imager (SSM/I) instruments to improve short-range forecasts and reanalyses. We describe a 1+1D procedure for assimilating 6-hr averaged rainfall and TPW in the Goddard Earth Observing System (GEOS) Data Assimilation System (DAS). The algorithm is based on a 6-hr time integration of a column version of the GEOS DAS, hence the 1+1D designation. The scheme minimizes the least-squares differences between the observed TPW and rain rates and those produced by the column model over the 6-hr analysis window. This 1+1D scheme, in its generalization to four dimensions, is related to standard 4D variational assimilation but uses analysis increments instead of the initial condition as the control variable. Results show that assimilating the TMI and SSM/I rainfall and TPW observations improves not only the precipitation and moisture fields but also key climate parameters such as clouds, radiation, upper-tropospheric moisture, and the large-scale circulation in the tropics. In particular, assimilating these data reduces the state-dependent systematic errors in the assimilated products. The improved analysis also provides better initial conditions for short-range forecasts, but the forecast improvements are smaller than those in the time-averaged assimilation fields, indicating that these data types are effective in correcting biases and other errors of the forecast model within data assimilation.

  7. Local synchronization of chaotic neural networks with sampled-data and saturating actuators.

    PubMed

    Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian

    2014-12-01

    This paper investigates the problem of local synchronization of chaotic neural networks with sampled-data and actuator saturation. A new time-dependent Lyapunov functional is proposed for the synchronization error systems. The advantage of the constructed Lyapunov functional lies in the fact that it is positive definite at sampling times but not necessarily between sampling times, and makes full use of the available information about the actual sampling pattern. A local stability condition of the synchronization error systems is derived, based on which a sampled-data controller with respect to the actuator saturation is designed to ensure that the master neural networks and slave neural networks are locally asymptotically synchronous. Two optimization problems are provided to compute the desired sampled-data controller with the aim of enlarging the set of admissible initial conditions or the admissible sampling upper bound ensuring the local synchronization of the considered chaotic neural networks. A numerical example is used to demonstrate the effectiveness of the proposed design technique.

  8. The interaction of the flux errors and transport errors in modeled atmospheric carbon dioxide concentrations

    NASA Astrophysics Data System (ADS)

    Feng, S.; Lauvaux, T.; Butler, M. P.; Keller, K.; Davis, K. J.; Jacobson, A. R.; Schuh, A. E.; Basu, S.; Liu, J.; Baker, D.; Crowell, S.; Zhou, Y.; Williams, C. A.

    2017-12-01

    Regional estimates of biogenic carbon fluxes over North America from top-down atmospheric inversions and terrestrial biogeochemical (bottom-up) models remain inconsistent at annual and sub-annual time scales. While top-down estimates are affected by limited atmospheric data, uncertain prior flux estimates, and errors in the atmospheric transport models, bottom-up fluxes are affected by uncertain driver data, uncertain model parameters, and missing mechanisms across ecosystems. This study quantifies both flux errors and transport errors, and their interaction, in atmospheric CO2 simulations. These errors are assessed with an ensemble approach. The WRF-Chem model is set up with 17 biospheric fluxes from the Multiscale Synthesis and Terrestrial Model Intercomparison Project, CarbonTracker-Near Real Time, and the Simple Biosphere model. The spread of the flux ensemble members represents the flux uncertainty in the modeled CO2 concentrations. For the transport errors, WRF-Chem is run using three physical model configurations with three stochastic perturbations to sample the errors from both the physical parameterizations of the model and the initial conditions. Additionally, the uncertainties from boundary conditions are assessed using four global CO2 inversion models that have assimilated tower and satellite CO2 observations. The error structures are assessed in time and space. The flux ensemble members overall overestimate CO2 concentrations and show larger temporal variability than the observations, suggesting that the flux ensemble is overdispersive. In contrast, the transport ensemble is underdispersive. The averaged spatial distribution of modeled CO2 shows a strong positive biogenic signal in the southern US and strong negative signals along the eastern coast of Canada. We hypothesize that the former is caused by the 3-hourly downscaling algorithm, in which nighttime respiration dominates the daytime modeled CO2 signals, and that the latter is mainly caused by the large-scale transport associated with the jet stream, which carries the negative biogenic CO2 signals to the northeastern coast. We apply comprehensive statistics to eliminate outliers, generate a set of flux perturbations based on pre-calibrated flux ensemble members, and apply them to the simulations.

  9. The Madden-Julian Oscillation and its Impact on Northern Hemisphere Weather Predictability during Wintertime

    NASA Technical Reports Server (NTRS)

    Jones, Charles; Waliser, Duane E.; Lau, K. M.; Stern, W.

    2003-01-01

    The Madden-Julian Oscillation (MJO) is known as the dominant mode of tropical intraseasonal variability and plays an important role in the coupled ocean-atmosphere system. This study used twin numerical model experiments to investigate the influence of MJO activity on weather predictability in the midlatitudes of the Northern Hemisphere during boreal winter. The National Aeronautics and Space Administration (NASA) Goddard Laboratory for Atmospheres (GLA) general circulation model was first used in a 10-yr simulation with fixed climatological SSTs to generate a validation data set as well as to select initial conditions for active MJO periods and null cases. Two perturbation numerical experiments were performed for the 75 cases selected [(4 MJO phases + null phase) × 15 initial conditions each]. For each alternative initial condition, the model was integrated for 90 days. Mean anomaly correlations in the midlatitudes of the Northern Hemisphere (20°N-60°N) and standardized root-mean-square errors were computed to validate the forecasts against the control run. The analyses of 500-hPa geopotential height, 200-hPa streamfunction, and 850-hPa zonal wind systematically show larger predictability during periods of active MJO than during quiescent episodes of the oscillation.
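
    The verification statistic named here, the mean anomaly correlation, is the average over cases of the centered anomaly correlation between forecast and verifying anomalies; a minimal implementation for one case (weights and names illustrative):

      import numpy as np

      def anomaly_correlation(fcst_anom, veri_anom, weights=None):
          """Centered anomaly correlation between a forecast anomaly field
          and the verifying analysis anomaly field (e.g., 500-hPa height)."""
          w = np.ones_like(fcst_anom) if weights is None else weights
          f = fcst_anom - np.average(fcst_anom, weights=w)
          v = veri_anom - np.average(veri_anom, weights=w)
          return np.average(f * v, weights=w) / np.sqrt(
              np.average(f ** 2, weights=w) * np.average(v ** 2, weights=w))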

  10. Effects of ocean initial perturbation on developing phase of ENSO in a coupled seasonal prediction model

    NASA Astrophysics Data System (ADS)

    Lee, Hyun-Chul; Kumar, Arun; Wang, Wanqiu

    2018-03-01

    Coupled prediction systems for seasonal and inter-annual variability in the tropical Pacific are initialized from ocean analyses. In the ocean initial states, small-scale perturbations are inevitably smoothed or distorted by observational limits and data assimilation procedures, which tends to introduce initial ocean errors into El Niño-Southern Oscillation (ENSO) predictions. Here, the evolution and effects of the ocean initial errors arising from such small-scale perturbations on the developing phase of ENSO are investigated with an ensemble of coupled model predictions. Results show that the ocean initial errors at the thermocline in the western tropical Pacific grow rapidly, project onto the first mode of the equatorial Kelvin wave, and propagate eastward along the thermocline. In boreal spring, when the surface buoyancy flux weakens in the eastern tropical Pacific, the subsurface errors influence sea surface temperature variability and would account for the seasonal dependence of prediction skill in the NINO3 region. It is concluded that ENSO prediction in the eastern tropical Pacific after boreal spring can be improved by increasing the observational accuracy of the subsurface ocean initial states in the western tropical Pacific.

  11. Initial conditions and ENSO prediction using a coupled ocean-atmosphere model

    NASA Astrophysics Data System (ADS)

    Larow, T. E.; Krishnamurti, T. N.

    1998-01-01

    A coupled ocean-atmosphere initialization scheme using Newtonian relaxation has been developed for the Florida State University coupled ocean-atmosphere global general circulation model. The scheme is used to initialize the coupled model for seasonal forecasts of the boreal summers of 1987 and 1988. The atmospheric model is a modified version of the Florida State University global spectral model at resolution T-42. The ocean general circulation model is a slightly modified version of the Hamburg climate group's model described in Latif (1987) and Latif et al. (1993). The coupling is synchronous, with information exchanged every two model hours. Using ECMWF daily atmospheric analyses and observed monthly mean SSTs, two 1-year time-dependent Newtonian relaxation integrations were performed with the coupled model prior to the seasonal forecasts, one from 1 June 1986 to 1 June 1987 and one from 1 June 1987 to 1 June 1988. The Newtonian relaxation was applied to the prognostic atmospheric vorticity, divergence, temperature, and dew-point depression equations; in the ocean model it was applied to the surface temperature. Two 10-member ensemble integrations were conducted to examine the impact of the coupled initialization on the seasonal forecasts. The oceanic initial conditions for the ensembles are the ocean's final state after the initialization, and the atmospheric initial conditions are ECMWF analyses. The SST root-mean-square errors and anomaly correlations between observed and forecast SSTs in the Niño-3 and Niño-4 regions for the two seasonal forecasts show closer agreement with observations for the initialized forecasts than for two 10-member non-initialized ensemble forecasts. The main conclusion is that a single forecast with the coupled initialization outperforms, in SST anomaly prediction, each of the control forecasts (members of the ensemble) that do not include such an initialization, indicating the possible importance of including the atmosphere during coupled initialization.
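
    Newtonian relaxation (nudging) of the kind used in this initialization adds a restoring tendency toward the analyses during the integration. A schematic scalar version, with an illustrative e-folding time tau (values assumed, not from the paper):

      def nudged_step(x, tendency, analysis, dt, tau):
          """One model step with Newtonian relaxation toward an analysis value."""
          return x + dt * (tendency(x) + (analysis - x) / tau)

      # Relax a scalar "SST" toward an observed 302 K over 20 days of
      # two-hourly coupling steps (model tendency set to zero for clarity).
      x = 300.0
      for _ in range(240):
          x = nudged_step(x, lambda s: 0.0, 302.0, dt=7200.0, tau=86400.0)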

  12. Human operator response to error-likely situations in complex engineering systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1988-01-01

    The causes of human error in complex systems are examined. First, a conceptual framework is provided in which two broad categories of error are discussed: errors of action, or slips, and errors of intention, or mistakes. Conditions in which slips and mistakes might be expected to occur are identified, based on existing theories of human error. Regarding the role of workload, it is hypothesized that workload may act as a catalyst for error. Two experiments are presented in which humans' responses to error-likely situations were examined. Subjects controlled PLANT under a variety of conditions and periodically provided subjective ratings of mental effort. A complex pattern of results was obtained that was not consistent with predictions. Generally, the results of this research indicate that: (1) humans respond to conditions in which errors might be expected by attempting to reduce the possibility of error, and (2) adaptation to conditions is a potent influence on human behavior in discretionary situations. Subjects' explanations for changes in effort ratings are also explored.

  13. Performance monitoring and error significance in patients with obsessive-compulsive disorder.

    PubMed

    Endrass, Tanja; Schuermann, Beate; Kaufmann, Christan; Spielberg, Rüdiger; Kniesche, Rainer; Kathmann, Norbert

    2010-05-01

    Performance monitoring has consistently been found to be overactive in obsessive-compulsive disorder (OCD). The present study examines whether performance monitoring in OCD is adjusted with error significance. To this end, errors in a flanker task were followed by neutral feedback (standard condition) or punishment feedback (punishment condition). In the standard condition, patients had significantly larger error-related negativity (ERN) and correct-related negativity (CRN) amplitudes than controls. In the punishment condition, however, the groups did not differ in ERN and CRN amplitudes. While healthy controls showed an amplitude enhancement between the standard and punishment conditions, OCD patients showed no variation. In contrast, group differences were not found for the error positivity (Pe): both groups had larger Pe amplitudes in the punishment condition. The results confirm earlier findings of overactive error monitoring in OCD. The absence of a variation with error significance might indicate that OCD patients are unable to down-regulate their monitoring activity according to external requirements.

  14. Independence of Movement Preparation and Movement Initiation.

    PubMed

    Haith, Adrian M; Pakpoor, Jina; Krakauer, John W

    2016-03-09

    Initiating a movement in response to a visual stimulus takes significantly longer than might be expected on the basis of neural transmission delays, but it is unclear why. In a visually guided reaching task, we forced human participants to move at lower-than-normal reaction times to test whether normal reaction times are strictly necessary for accurate movement. We found that participants were, in fact, capable of moving accurately ∼80 ms earlier than their reaction times would suggest. Reaction times thus include a seemingly unnecessary delay that accounts for approximately one-third of their duration. Close examination of participants' behavior in conventional reaction-time conditions revealed that they generated occasional, spontaneous errors in trials in which their reaction time was unusually short. The pattern of these errors could be well accounted for by a simple model in which the timing of movement initiation is independent of the timing of movement preparation. This independence provides an explanation for why reaction times are usually so sluggish: delaying the mean time of movement initiation relative to preparation reduces the risk that a movement will be initiated before it has been appropriately prepared. Our results suggest that preparation and initiation of movement are mechanistically independent and may have a distinct neural basis. The results also demonstrate that, even in strongly stimulus-driven tasks, presentation of a stimulus does not directly trigger a movement. Rather, the stimulus appears to trigger an internal decision whether to make a movement, reflecting a volitional rather than reactive mode of control.
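
    The independence account can be made concrete with a toy Monte Carlo: if the time at which a movement is initiated is drawn independently of the time at which it becomes prepared, errors occur exactly on trials where initiation happens to precede preparation, and hence cluster at unusually short reaction times. The distributions below are illustrative assumptions, not the paper's fits:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000
      prep = rng.normal(150, 30, n)   # ms until the movement is prepared (assumed)
      init = rng.normal(230, 40, n)   # ms until the movement is initiated (assumed)
      errors = init < prep            # initiated before prepared: wrong movement
      print(f"error rate {errors.mean():.3f}; "
            f"mean RT on error trials {init[errors].mean():.0f} ms vs "
            f"{init[~errors].mean():.0f} ms on correct trials")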

  15. Development of extended WRF variational data assimilation system (WRFDA) for WRF non-hydrostatic mesoscale model

    NASA Astrophysics Data System (ADS)

    Pattanayak, Sujata; Mohanty, U. C.

    2018-06-01

    This paper presents the development of an extended weather research and forecasting data assimilation (WRFDA) system for the non-hydrostatic mesoscale model core of the weather research and forecasting system (WRF-NMM), an imperative aspect of numerical modeling studies. Although WRFDA originally provides improved initial conditions for the Advanced Research WRF, we have successfully developed a unified WRFDA utility that can be used by the WRF-NMM core as well. After critical evaluation, code was developed to merge the WRFDA framework with WRF-NMM output. In this paper, we provide selected implementations and initial results through a single-observation test and background error statistics such as eigenvalues, eigenvectors, and length scales, which demonstrate the successful development of the extended WRFDA code for the WRF-NMM model. Furthermore, the extended WRFDA system is applied to forecasts of three severe cyclonic storms formed over the Bay of Bengal: Nargis (27 April-3 May 2008), Aila (23-26 May 2009), and Jal (4-8 November 2010). Model results are compared and contrasted within the analysis fields and then with high-resolution model forecasts. The mean initial position error is reduced by 33% with WRFDA compared to the GFS analysis. The vector displacement errors in the track forecast are reduced by 33, 31, 30, and 20% for the 24-, 48-, 72-, and 96-hr forecasts, respectively, in the data assimilation experiments compared to the control run. The model diagnostics indicate successful implementation of WRFDA within the WRF-NMM system.

  16. A patient-initiated voluntary online survey of adverse medical events: the perspective of 696 injured patients and families.

    PubMed

    Southwick, Frederick S; Cranley, Nicole M; Hallisy, Julia A

    2015-10-01

    Preventable medical errors continue to be a major cause of death in the USA and throughout the world. Many patients have written about their experiences on websites and in published books. As patients and family members who have experienced medical harm, we have created a nationwide voluntary survey in order to more broadly and systematically capture the perspective of patients and patient families experiencing adverse medical events and have used quantitative and qualitative analysis to summarise the responses of 696 patients and their families. Harm was most commonly associated with diagnostic and therapeutic errors, followed by surgical or procedural complications, hospital-associated infections and medication errors, and our quantitative results match those of previous provider-initiated patient surveys. Qualitative analysis of 450 narratives revealed a lack of perceived provider and system accountability, deficient and disrespectful communication and a failure of providers to listen as major themes. The consequences of adverse events included death, post-traumatic stress, financial hardship and permanent disability. These conditions and consequences led to a loss of patients' trust in both the health system and providers. Patients and family members offered suggestions for preventing future adverse events and emphasised the importance of shared decision-making. This large voluntary survey of medical harm highlights the potential efficacy of patient-initiated surveys for providing meaningful feedback and for guiding improvements in patient care.

  17. A comparison of endoscopic localization error rate between operating surgeons and referring endoscopists in colorectal cancer.

    PubMed

    Azin, Arash; Saleh, Fady; Cleghorn, Michelle; Yuen, Andrew; Jackson, Timothy; Okrainec, Allan; Quereshy, Fayez A

    2017-03-01

    Colonoscopy for colorectal cancer (CRC) has a localization error rate as high as 21%. Such errors can have substantial clinical consequences, particularly in laparoscopic surgery. The primary objective of this study was to compare the accuracy of tumor localization at initial endoscopy performed by either the operating surgeon or a non-operating referring endoscopist. All patients who underwent surgical resection for CRC at a large tertiary academic hospital between January 2006 and August 2014 were identified. The exposure of interest was the initial endoscopist: (1) the surgeon who also performed the definitive operation (operating surgeon group); and (2) the referring gastroenterologist or general surgeon (referring endoscopist group). The outcome measure was localization error, defined as a difference of at least one anatomic segment between initial endoscopy and final operative location. Multivariate logistic regression was used to explore the association between localization error rate and the initial endoscopist. A total of 557 patients were included in the study: 81 patients in the operating surgeon cohort and 476 patients in the referring endoscopist cohort. Initial diagnostic colonoscopy performed by the operating surgeon, compared to the referring endoscopist, demonstrated a statistically significantly lower intraoperative localization error rate (1.2 vs. 9.0%, P = 0.016); shorter mean time from endoscopy to surgery (52.3 vs. 76.4 days, P = 0.015); higher tattoo localization rate (32.1 vs. 21.0%, P = 0.027); and lower preoperative repeat endoscopy rate (8.6 vs. 40.8%, P < 0.001). Initial endoscopy performed by the operating surgeon was protective against localization error on both univariate analysis, OR 7.94 (95% CI 1.08-58.52; P = 0.016), and multivariate analysis, OR 7.97 (95% CI 1.07-59.38; P = 0.043). This study demonstrates that diagnostic colonoscopies performed by an operating surgeon are independently associated with a lower localization error rate. Further research exploring the factors influencing localization accuracy and why operating surgeons have lower error rates relative to non-operating endoscopists is necessary to understand differences in care.

  18. Full Field and Anomaly Initialisation using a low order climate model: a comparison, and proposals for advanced formulations

    NASA Astrophysics Data System (ADS)

    Weber, Robin; Carrassi, Alberto; Guemas, Virginie; Doblas-Reyes, Francisco; Volpi, Danila

    2014-05-01

    Full Field Initialisation (FFI) and Anomaly Initialisation (AI) are two schemes used to initialise seasonal-to-decadal (s2d) predictions. FFI initialises the model on the best estimate of the actual climate state and minimises the initial error. However, due to inevitable model deficiencies, the trajectories drift away from the observations towards the model's own attractor, inducing a bias in the forecast. AI has been devised to tackle the impact of drift through the addition of this bias onto the observations, in the hope of obtaining an initial state closer to the model attractor; its goal is to forecast climate anomalies. The large variety of experimental setups, global coupled models, and observational networks adopted worldwide has led to varying results regarding the relative performance of AI and FFI. Our research is motivated firstly by a comparison of these two initialisation approaches under varying circumstances of observational error, observational distribution, and model error. We also propose and compare two advanced schemes for s2d prediction. Least Square Initialisation (LSI) propagates observational information from partially initialised systems to the whole model domain, based on standard practices in data assimilation and using the covariance of the model anomalies. Exploring the Parameters Uncertainty (EPU) is an online drift-correction technique applied during the forecast run after initialisation; it is designed to estimate, and subtract, the part of the forecast bias related to parametric error. Experiments are carried out using an idealised coupled dynamics in order to facilitate better control and robust statistical inference. Results show that improving FFI requires refinements in the observations, whereas improvements in AI are subject to model advances. A successful approximation of the model attractor using AI is guaranteed only when the differences between the model and nature probability distribution functions (PDFs) are limited to first order. Significant higher-order differences can lead to an initial-condition distribution for AI that is less representative of the model PDF and to a degradation of the initialisation skill. Finally, both advanced schemes lead to significantly improved skill scores, encouraging their implementation in models of higher complexity.
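
    The contrast between the two schemes reduces to what is added to which climatology; a schematic sketch, assuming long-term climatologies of both the model and the observations are available (function and variable names are illustrative):

      import numpy as np

      def full_field_init(obs_state):
          """FFI: start from the best estimate of the observed state."""
          return obs_state

      def anomaly_init(obs_state, obs_clim, model_clim):
          """AI: add the observed anomaly to the model's own climatology,
          so the initial state lies near the model attractor."""
          return model_clim + (obs_state - obs_clim)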

  19. Experimental measurement of the orbital paths of particles sedimenting within a rotating viscous fluid as influenced by gravity

    NASA Technical Reports Server (NTRS)

    Wolf, David A.; Schwarz, Ray P.

    1992-01-01

    Measurements were taken of the path of a simulated typical tissue segment, or 'particle', within a rotating fluid as a function of gravitational strength, fluid rotation rate, particle sedimentation rate, and particle initial position. Parameters were examined within the useful range for tissue culture in the NASA rotating-wall culture vessels. The particle moves along a nearly circular path through the fluid (as observed from the rotating reference frame of the fluid) at the same speed as its linear terminal sedimentation speed in the external gravitational field. This gravitationally induced motion causes an increasing deviation of the particle from its original position within the fluid for a decreased rotation rate, for a more rapidly sedimenting particle, and for an increased gravitational strength. Under low-gravity conditions (less than 0.1 G), the particle's motion through the fluid and its deviation from its original position become negligible. Under unit-gravity conditions, large deviations (greater than 0.25 inch) occur even for particles with slow sedimentation rates (less than 1.0 cm/sec). The particle's motion is nearly independent of its initial position. Comparison with mathematically predicted particle paths shows that a significant error in the predicted path occurs for large particle deviations. This results from a geometric approximation and numerically accumulating error in the mathematical technique.

  20. [Evaluation of the influence of humidity and temperature on the drug stability by initial average rate experiment].

    PubMed

    He, Ning; Sun, Hechun; Dai, Miaomiao

    2014-05-01

    To evaluate the influence of temperature and humidity on drug stability by the initial average rate experiment, and to obtain the kinetic parameters. The effects of concentration error, extent of drug degradation, the number of humidity and temperature levels, the humidity and temperature ranges, and the average humidity and temperature on the accuracy and precision of the kinetic parameters in the initial average rate experiment were explored. The stability of vitamin C, as a solid-state model, was investigated by an initial average rate experiment. Under the same experimental conditions, the kinetic parameters obtained from the proposed method were comparable to those from a classical isothermal experiment at constant humidity. The estimates were more accurate and precise when the extent of drug degradation was controlled, the humidity and temperature ranges were changed, or the average temperature was set closer to room temperature. Compared with isothermal experiments at constant humidity, the proposed method saves time, labor, and materials.

  1. Discretization vs. Rounding Error in Euler's Method

    ERIC Educational Resources Information Center

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
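
    A minimal experiment in the spirit of the abstract: integrate x' = x, x(0) = 1 to t = 1 with Euler's method in single precision, so that rounding error becomes visible as the stepsize shrinks (the concrete ODE and the choice of precision are our assumptions for illustration):

      import numpy as np

      def euler_error(n_steps):
          """|x_N - e| for Euler's method on x' = x, x(0) = 1, in float32."""
          h = np.float32(1.0 / n_steps)
          x = np.float32(1.0)
          for _ in range(n_steps):
              x = x + h * x            # Euler step in single precision
          return abs(float(x) - np.e)

      for n in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
          print(n, euler_error(n))     # total error typically falls, then rises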

  2. Textbook Multigrid Efficiency for Leading Edge Stagnation

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Mineck, Raymond E.

    2004-01-01

    A multigrid solver is defined as having textbook multigrid efficiency (TME) if the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in evaluating the discrete residuals. TME in solving the incompressible inviscid fluid equations is demonstrated for leading-edge stagnation flows. The contributions of this paper include (1) a special formulation of the boundary conditions near stagnation allowing convergence of the Newton iterations on coarse grids, (2) the boundary relaxation technique to facilitate relaxation and residual restriction near the boundaries, (3) a modified relaxation scheme to prevent initial error amplification, and (4) new general analysis techniques for multigrid solvers. Convergence of algebraic errors below the level of discretization errors is attained by a full multigrid (FMG) solver with one full approximation scheme (FAS) cycle per grid. Asymptotic convergence rates of the FAS cycles for the full system of flow equations are very fast, approaching those for scalar elliptic equations.

  3. Textbook Multigrid Efficiency for Leading Edge Stagnation

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Mineck, Raymond E.

    2004-01-01

    A multigrid solver is defined as having textbook multigrid efficiency (TME) if the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in evaluating the discrete residuals. TME in solving the incompressible inviscid fluid equations is demonstrated for leading-edge stagnation flows. The contributions of this paper include (1) a special formulation of the boundary conditions near stagnation allowing convergence of the Newton iterations on coarse grids, (2) the boundary relaxation technique to facilitate relaxation and residual restriction near the boundaries, (3) a modified relaxation scheme to prevent initial error amplification, and (4) new general analysis techniques for multigrid solvers. Convergence of algebraic errors below the level of discretization errors is attained by a full multigrid (FMG) solver with one full approximation scheme (FAS) cycle per grid. Asymptotic convergence rates of the FAS cycles for the full system of flow equations are very fast, approaching those for scalar elliptic equations.

  4. Application of the Discrete Regularization Method to the Inverse of the Chord Vibration Equation

    NASA Astrophysics Data System (ADS)

    Wang, Linjun; Han, Xu; Wei, Zhouchao

    The inverse problem of recovering the initial condition from boundary values of the chord vibration equation is ill-posed. First, we transform it into a Fredholm integral equation. Second, we discretize the equation by the trapezoidal rule, obtaining a severely ill-conditioned linear system that is sensitive to perturbations of the data: a tiny error in the right-hand side causes large oscillations in the solution, so traditional methods cannot produce good results. In this paper, we solve the problem by the Tikhonov regularization method, and numerical simulations demonstrate that this method is feasible and effective.
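
    The regularized solve itself is compact. For a discretized system A x = b (the trapezoidal quadrature already applied), the Tikhonov solution minimizes ||Ax - b||^2 + λ^2 ||x||^2; a generic sketch:

      import numpy as np

      def tikhonov_solve(A, b, lam):
          """Tikhonov-regularized solution of an ill-conditioned system,
          via the regularized normal equations."""
          n = A.shape[1]
          return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

    A naive solve amplifies the noise in b through the small singular values of A; the λ^2 term damps exactly those components, at the cost of a controlled bias.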

  5. Advanced error diagnostics of the CMAQ and Chimere modelling systems within the AQMEII3 model evaluation framework

    NASA Astrophysics Data System (ADS)

    Solazzo, Efisio; Hogrefe, Christian; Colette, Augustin; Garcia-Vivanco, Marta; Galmarini, Stefano

    2017-09-01

    The work here complements the overview analysis of the modelling systems participating in the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3) by focusing on the performance for hourly surface ozone of two modelling systems, Chimere for Europe and CMAQ for North America. The evaluation strategy outlined in the course of the three phases of the AQMEII activity, aimed at building up a diagnostic methodology for model evaluation, is pursued here, and novel diagnostic methods are proposed. In addition to evaluating the base-case simulation, in which all model components are configured in their standard mode, the analysis also makes use of sensitivity simulations in which the models have been applied by altering and/or zeroing lateral boundary conditions, emissions of anthropogenic precursors, and ozone dry deposition. To help understand the causes of model deficiencies, the error components (bias, variance, and covariance) of the base case and of the sensitivity runs are analysed in conjunction with timescale considerations and error modelling using the available error fields of temperature, wind speed, and NOx concentration. The results reveal the effectiveness and diagnostic power of the methods devised (which remains the main scope of this study), allowing detection of the timescales and fields to which the two models are most sensitive. The representation of planetary boundary layer (PBL) dynamics is pivotal to both models. In particular, (i) the fluctuations slower than ~1.5 days account for 70-85% of the mean square error of the full (undecomposed) ozone time series; (ii) a recursive, systematic error with daily periodicity is detected, responsible for 10-20% of the total quadratic error; (iii) errors in representing the timing of the daily transition between stability regimes in the PBL are responsible for a covariance error as large as 9 ppb (as much as the standard deviation of the network-average ozone observations in summer in both Europe and North America); (iv) the CMAQ ozone error has a weak/negligible dependence on the errors in NO2, while the error in NO2 significantly impacts the ozone error produced by Chimere; and (v) the responses of the models to variations of anthropogenic emissions and boundary conditions show pronounced spatial heterogeneity, while the seasonal variability of the response is less marked. Only during the winter season does the zeroing of boundary values for North America produce a spatially uniform deterioration of model accuracy across the majority of the continent.
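
    The bias/variance/covariance components referred to above follow the standard decomposition of the mean square error, MSE = (m̄ - ō)² + (σ_m - σ_o)² + 2 σ_m σ_o (1 - r); this common form (e.g., Theil's) is assumed here as background rather than quoted from the paper:

      import numpy as np

      def mse_components(model, obs):
          """Split the MSE into bias^2, variance, and covariance terms."""
          bias2 = (model.mean() - obs.mean()) ** 2
          s_m, s_o = model.std(), obs.std()
          r = np.corrcoef(model, obs)[0, 1]
          return bias2, (s_m - s_o) ** 2, 2.0 * s_m * s_o * (1.0 - r)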

  6. The potential for geostationary remote sensing of NO2 to improve weather prediction

    NASA Astrophysics Data System (ADS)

    Liu, X.; Mizzi, A. P.; Anderson, J. L.; Fung, I. Y.; Cohen, R. C.

    2017-12-01

    Observations of surface winds remain sparse, making it challenging to simulate and predict the weather in circumstances of light winds that are most important for poor air quality. Direct measurements of short-lived chemicals from space might be a solution to this challenge. Here we investigate the assimilation of NO2 columns, as will be observed from geostationary orbit, to improve predictions and retrospective analyses of surface wind fields. Specifically, synthetic NO2 observations are sampled from a "nature run" (NR) regarded as the true atmosphere. The NO2 observations are then assimilated with EAKF methods into a "control run" (CR), which differs from the NR in the wind field. Wind errors are generated by (1) introducing errors in the initial conditions, (2) creating a model error by using two different formulations of the planetary boundary layer, and (3) combining both effects. Assimilation of NO2 column observations succeeds in reducing the wind errors, indicating that the prospects for future geostationary atmospheric composition measurements to improve weather forecasting are substantial. We find that, owing to the temporal heterogeneity of wind errors, this application favors chemical observations of high frequency, such as those from a geostationary platform. We also show the potential to improve the soil moisture field by assimilating NO2 columns.
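
    In observation space, the ensemble adjustment Kalman filter (EAKF) update shifts the prior ensemble mean and contracts its spread to match the Gaussian posterior exactly; a minimal scalar sketch (Anderson-2001-style, with illustrative names):

      import numpy as np

      def eakf_obs_update(y_ens, y_obs, obs_var):
          """Return per-member increments for the observed quantity."""
          mu, var = y_ens.mean(), y_ens.var()
          var_post = 1.0 / (1.0 / var + 1.0 / obs_var)
          mu_post = var_post * (mu / var + y_obs / obs_var)
          shrink = np.sqrt(var_post / var)
          return mu_post + shrink * (y_ens - mu) - y_ens

      # Model-state elements (e.g., winds) are then updated by linear
      # regression of each state element on the observed quantity.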

  7. Experimental evaluation of nonclassical correlations between measurement outcomes and target observable in a quantum measurement

    NASA Astrophysics Data System (ADS)

    Iinuma, Masataka; Suzuki, Yutaro; Nii, Taiki; Kinoshita, Ryuji; Hofmann, Holger F.

    2016-03-01

    In general, it is difficult to evaluate measurement errors when the initial and final conditions of the measurement make it impossible to identify the correct value of the target observable. Ozawa proposed a solution based on the operator algebra of observables which has recently been used in experiments investigating the error-disturbance trade-off of quantum measurements. Importantly, this solution makes surprisingly detailed statements about the relations between measurement outcomes and the unknown target observable. In the present paper, we investigate this relation by performing a sequence of two measurements on the polarization of a photon, so that the first measurement commutes with the target observable and the second measurement is sensitive to a complementary observable. While the initial measurement can be evaluated using classical statistics, the second measurement introduces the effects of quantum correlations between the noncommuting physical properties. By varying the resolution of the initial measurement, we can change the relative contribution of the nonclassical correlations and identify their role in the evaluation of the quantum measurement. It is shown that the most striking deviation from classical expectations is obtained at the transition between weak and strong measurements, where the competition between different statistical effects results in measurement values well outside the range of possible eigenvalues.
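
    For context, the error quantity of Ozawa's operator-algebra approach is usually written as follows (standard form from the general literature, quoted as background rather than from this paper): for a target observable A of a system in state |ψ⟩, probed by a meter observable M on an apparatus prepared in |ξ⟩ and coupled through a unitary U,

      \varepsilon(A)^2 = \langle \psi \otimes \xi | \left( U^\dagger (\mathbb{1} \otimes M) U - A \otimes \mathbb{1} \right)^2 | \psi \otimes \xi \rangle ,

    which vanishes exactly when the meter output reproduces A on the given state.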

  8. A statistical data assimilation method for seasonal streamflow forecasting to optimize hydropower reservoir management in data-scarce regions

    NASA Astrophysics Data System (ADS)

    Arsenault, R.; Mai, J.; Latraverse, M.; Tolson, B.

    2017-12-01

    Probabilistic ensemble forecasts generated by the ensemble streamflow prediction (ESP) methodology are subject to biases due to errors in the hydrological model's initial states. In day-to-day operations, hydrologists must compensate for discrepancies between observed and simulated states such as streamflow. However, in data-scarce regions, little to no information is available to guide the streamflow assimilation process. The manual assimilation process can then introduce additional uncertainty because of the numerous options available to the forecaster; furthermore, the model's mass balance may be compromised, which could affect future forecasts. In this study we propose a data-driven approach in which specific variables that may be adjusted during assimilation are defined. The underlying principle is to identify the key variables that are most appropriate to modify during streamflow assimilation depending on the initial conditions, such as the time period of the assimilation, the snow water equivalent of the snowpack, and the meteorological conditions. The variables to adjust were determined by performing an automatic variational data assimilation on individual (or combinations of) model state variables and meteorological forcings. The assimilation aimed to simultaneously minimize: (1) the error between the observed and simulated streamflow at the time the forecast starts, and (2) the bias between medium- to long-term observed and simulated flows, the latter simulated by running the model with the observed meteorological data over a hindcast period. The optimal variables were then classified according to the initial conditions at the time the forecast is initiated. The proposed method was evaluated by measuring the average electricity generation of a hydropower complex in Québec, Canada, driven by this method. A test bed that simulates the real-world assimilation, forecasting, water-release optimization, and decision-making of a hydropower cascade was developed to assess the performance of each individual process in the reservoir management chain; here the proposed method was compared to the PF algorithm while keeping all other elements intact. Preliminary results are encouraging in terms of power generation and robustness for the proposed approach.

  9. Influencing Factors of the Initiation Point in the Parachute-Bomb Dynamic Detonation System

    NASA Astrophysics Data System (ADS)

    Qizhong, Li; Ye, Wang; Zhongqi, Wang; Chunhua, Bai

    2017-12-01

    The parachute system has been widely applied in modern armament design, especially for fuel-air explosives. Because detonation of fuel-air explosives occurs during flight, it is necessary to investigate the influences on the initiation point to ensure successful dynamic detonation. In practice, the initiation position is scattered over an area within the falling fuel cloud because of errors in several influencing factors. In this paper, the major factors influencing the initiation point were explored through airdrop tests, and the relationships between the initiation-point area and these factors were obtained. Based on these relationships, a volume equation of the initiation-point area was established to predict the range of the initiation point in the fuel. The analysis showed that the initiation point is scattered over an area owing to errors in the attitude angle, the secondary initiation charge velocity, and the delay time. The attitude angle was the major influencing factor along the horizontal axis, whereas the secondary initiation charge velocity and the delay time were the major influencing factors along the vertical axis. Overall, the geometry of the initiation-point area was a sector determined jointly by the errors in the attitude angle, the secondary initiation charge velocity, and the delay time.

  10. Current pulse: can a production system reduce medical errors in health care?

    PubMed

    Printezis, Antonios; Gopalakrishnan, Mohan

    2007-01-01

    One of the reasons for rising health care costs is medical errors, a majority of which result from faulty systems and processes. Health care has in the past used process-based initiatives such as Total Quality Management, Continuous Quality Improvement, and Six Sigma to reduce errors. These initiatives to redesign health care, reduce errors, and improve overall efficiency and customer satisfaction have had moderate success. The current trend is to apply the successful Toyota Production System (TPS) to health care, since its organizing principles have led to tremendous improvement in productivity and quality for Toyota and other businesses that have adapted them. This article presents insights on the effectiveness of TPS principles in health care and the challenges that lie ahead in successfully integrating this approach with other quality initiatives.

  11. Experimental magic state distillation for fault-tolerant quantum computing.

    PubMed

    Souza, Alexandre M; Zhang, Jingfu; Ryan, Colm A; Laflamme, Raymond

    2011-01-25

    Any physical quantum device for quantum information processing (QIP) is subject to errors in implementation. In order to be reliable and efficient, quantum computers will need error-correcting or error-avoiding methods. Fault-tolerance achieved through quantum error correction will be an integral part of quantum computers. Of the many methods that have been discovered to implement it, a highly successful approach has been to use transversal gates and specific initial states. A critical element for its implementation is the availability of high-fidelity initial states, such as |0〉 and the 'magic state'. Here, we report an experiment, performed in a nuclear magnetic resonance (NMR) quantum processor, showing sufficient quantum control to improve the fidelity of imperfect initial magic states by distilling five of them into one with higher fidelity.

  12. Interpolating moving least-squares methods for fitting potential energy surfaces: using classical trajectories to explore configuration space.

    PubMed

    Dawes, Richard; Passalacqua, Alessio; Wagner, Albert F; Sewell, Thomas D; Minkoff, Michael; Thompson, Donald L

    2009-04-14

    We develop two approaches for growing a fitted potential energy surface (PES) by the interpolating moving least-squares (IMLS) technique using classical trajectories. We illustrate both approaches by calculating nitrous acid (HONO) cis→trans isomerization trajectories under the control of ab initio forces from low-level HF/cc-pVDZ electronic structure calculations. In this illustrative example, as few as 300 ab initio energy/gradient calculations are required to converge the isomerization rate constant at a fixed energy to approximately 10%. Neither approach requires any preliminary electronic structure calculations or initial approximate representation of the PES (beyond information required for trajectory initial conditions). Hessians are not required. Both approaches rely on the fitting error estimation properties of IMLS fits. The first approach, called IMLS-accelerated direct dynamics, propagates individual trajectories directly with no preliminary exploratory trajectories. The PES is grown "on the fly" with the computation of new ab initio data only when a fitting error estimate exceeds a prescribed tight tolerance. The second approach, called dynamics-driven IMLS fitting, uses relatively inexpensive exploratory trajectories to both determine and fit the dynamically accessible configuration space. Once exploratory trajectories no longer find configurations with fitting error estimates higher than the designated accuracy, the IMLS fit is considered to be complete and usable in classical trajectory calculations or other applications.
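
    A one-dimensional caricature of the IMLS machinery, the singular weights that force interpolation and a fit-error estimate from the difference of two fit degrees, is sketched below; the weight exponent and polynomial degrees are illustrative assumptions:

      import numpy as np

      def imls_fit(x_eval, pts, vals, degree=2, eps=1e-8):
          """IMLS in 1D: weighted polynomial fit whose weights diverge at
          the data points, so the fit interpolates them."""
          d2 = (pts - x_eval) ** 2
          w = 1.0 / (d2 ** 2 + eps)                # near-singular IMLS weights
          V = np.vander(pts - x_eval, degree + 1)  # local polynomial basis
          W = np.diag(w)
          c = np.linalg.solve(V.T @ W @ V, V.T @ W @ vals)
          return c[-1]                             # fitted value at x_eval

      def imls_error_estimate(x_eval, pts, vals):
          """Degree-difference error estimate, the kind of quantity used to
          decide where new ab initio data are needed."""
          return abs(imls_fit(x_eval, pts, vals, 2) - imls_fit(x_eval, pts, vals, 3))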

  13. Verification of an ensemble prediction system for storm surge forecast in the Adriatic Sea

    NASA Astrophysics Data System (ADS)

    Mel, Riccardo; Lionello, Piero

    2014-12-01

    In the Adriatic Sea, storm surges present a significant threat to Venice and to the flat coastal areas of the northern coast of the basin. Sea level forecast is of paramount importance for the management of daily activities and for operating the movable barriers that are presently being built for the protection of the city. In this paper, an EPS (ensemble prediction system) for operational forecasting of storm surge in the northern Adriatic Sea is presented and applied to a 3-month-long period (October-December 2010). The sea level EPS is based on the HYPSE (hydrostatic Padua Sea elevation) model, which is a standard single-layer nonlinear shallow water model, whose forcings (mean sea level pressure and surface wind fields) are provided by the ensemble members of the ECMWF (European Center for Medium-Range Weather Forecasts) EPS. Results are verified against observations at five tide gauges located along the Croatian and Italian coasts of the Adriatic Sea. Forecast uncertainty increases with the predicted value of the storm surge and with the forecast lead time. The EMF (ensemble mean forecast) provided by the EPS has a rms (root mean square) error lower than the DF (deterministic forecast), especially for short (up to 3 days) lead times. Uncertainty for short lead times of the forecast and for small storm surges is mainly caused by uncertainty of the initial condition of the hydrodynamical model. Uncertainty for large lead times and large storm surges is mainly caused by uncertainty in the meteorological forcings. The EPS spread increases with the rms error of the forecast. For large lead times the EPS spread and the forecast error substantially coincide. However, the EPS spread in this study, which does not account for uncertainty in the initial condition, underestimates the error during the early part of the forecast and for small storm surge values. On the contrary, it overestimates the rms error for large surge values. The PF (probability forecast) of the EPS has a clear skill in predicting the actual probability distribution of sea level, and it outperforms simple "dressed" PF methods. A probability estimate based on the single DF is shown to be inadequate. However, a PF obtained with a prescribed Gaussian distribution and centered on the DF value performs very similarly to the EPS-based PF.
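
    The verification quantities used here are standard and easy to reproduce. The sketch below, with a synthetic toy ensemble standing in for the HYPSE/ECMWF output, shows how the ensemble mean forecast, the spread, and the rms error relate; for a well-calibrated system the time-mean spread should approach the rms error, as the paper reports for large lead times.

```python
import numpy as np

# Toy ensemble: 51 members scattered around a "true" sea level signal.
rng = np.random.default_rng(0)
obs = np.sin(np.linspace(0.0, 6.0, 100))                 # stand-in tide-gauge record
ens = obs + rng.normal(0.0, 0.2, size=(51, 100))         # ensemble forecasts

emf = ens.mean(axis=0)                                   # ensemble mean forecast (EMF)
spread = ens.std(axis=0, ddof=1)                         # EPS spread per time step
rmse = np.sqrt(np.mean((emf - obs) ** 2))                # rms error of the EMF

# For a well-calibrated EPS the time-mean spread approximates the rms error.
print(f"rmse = {rmse:.3f}, mean spread = {spread.mean():.3f}")
```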

  14. Performance analysis of air-water quantum key distribution with an irregular sea surface

    NASA Astrophysics Data System (ADS)

    Xu, Hua-bin; Zhou, Yuan-yuan; Zhou, Xue-jun; Wang, Lian

    2018-05-01

    In air-water quantum key distribution (QKD), the irregular sea surface influences the photon polarization state. Since wind is considered the main cause of this irregularity, a model of the irregular sea surface based on wind speed is adopted. The relationships of the quantum bit error rate with the wind speed and the initial incident angle are simulated. From these, the maximum secure transmission depth of QKD is determined, along with the limits on wind speed and initial incident angle. The simulation results show that QKD performance degrades as the wind speed and the initial incident angle increase. Under the intercept-resend attack condition, the maximum safe transmission depth of QKD is 105 m. To realize secure communication at the safe diving depth of submarines (100 m), the initial incident angle must not exceed 26°, and the allowable wind speed decreases as the initial incident angle increases.

  15. Extended Range Prediction of Indian Summer Monsoon: Current status

    NASA Astrophysics Data System (ADS)

    Sahai, A. K.; Abhilash, S.; Borah, N.; Joseph, S.; Chattopadhyay, R.; S, S.; Rajeevan, M.; Mandal, R.; Dey, A.

    2014-12-01

    The main focus of this study is to develop a forecast consensus for the extended range prediction (ERP) of monsoon intraseasonal oscillations using a suite of different variants of the Climate Forecast System (CFS) model. In this CFS-based Grand MME prediction system (CGMME), the ensemble members are generated by perturbing the initial condition and using different configurations of CFSv2, to address the role of different physical mechanisms known to control error growth in the ERP at the 15-20 day time scale. The final formulation of CGMME is based on 21 ensembles of the standalone Global Forecast System (GFS) forced with bias-corrected forecast SST from CFS, 11 low-resolution CFS T126 members, and 11 high-resolution CFS T382 members. Thus, we develop the multi-model consensus forecast for the ERP of the Indian summer monsoon (ISM) using a suite of different variants of the CFS model. This coordinated international effort has led to the development of specific tailor-made regional forecast products over the Indian region. The skill of deterministic and probabilistic categorical rainfall forecasts, as well as the verification of large-scale low-frequency monsoon intraseasonal oscillations, has been assessed using hindcasts from 2001-2012 for the monsoon season, in which all models are initialized every five days from 16 May to 28 September. The skill of the deterministic forecast from CGMME is better than that of the best participating single-model ensemble configuration (SME). The CGMME approach is believed to quantify the uncertainty in both initial conditions and model formulation. The main improvement is in the probabilistic forecast, owing to an increase in ensemble spread, which reduces the error due to over-confident ensembles in a single-model configuration. For the probabilistic forecast, three tercile categories are determined by a ranking method based on the percentage of ensemble members from all participating models that fall in each category. CGMME adds value to both deterministic and probabilistic forecasts compared to the raw SMEs; this improved skill probably flows from the larger spread and the improved spread-error relationship. The CGMME system is currently capable of generating ER predictions in real time and has successfully delivered experimental operational ER forecasts of the ISM for the last few years.
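
    The ranking-based tercile probabilities can be illustrated compactly: count the fraction of ensemble members falling below, between, and above the climatological tercile boundaries. The sketch below uses synthetic gamma-distributed rainfall as a stand-in for the CGMME hindcasts; the member count of 43 mirrors the 21 + 11 + 11 configuration described above.

```python
import numpy as np

def tercile_probabilities(ens_fcst, climatology):
    """Probabilistic tercile forecast: fraction of ensemble members per category."""
    lo, hi = np.percentile(climatology, [33.3, 66.7])   # tercile boundaries
    below = np.mean(ens_fcst < lo)
    above = np.mean(ens_fcst > hi)
    return below, 1.0 - below - above, above            # (below, near-normal, above)

rng = np.random.default_rng(1)
clim = rng.gamma(2.0, 5.0, size=1000)      # stand-in rainfall climatology
members = rng.gamma(2.5, 5.0, size=43)     # 21 GFS + 11 T126 + 11 T382 members
print(tercile_probabilities(members, clim))
```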

  16. Self-Reported and Observed Punitive Parenting Prospectively Predicts Increased Error-Related Brain Activity in Six-Year-Old Children.

    PubMed

    Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J; Kujawa, Autumn J; Laptook, Rebecca S; Torpey, Dana C; Klein, Daniel N

    2015-07-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission--although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children's ERN approximately 3 years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately 3 years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. Results suggest that parenting may shape children's error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to confirm this hypothesis.

  17. A high-accuracy two-position alignment inertial navigation system for lunar rovers aided by a star sensor with a calibration and positioning function

    NASA Astrophysics Data System (ADS)

    Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang; Liu, Ming

    2016-12-01

    An integrated inertial/celestial navigation system (INS/CNS) has wide applicability in lunar rovers as it provides accurate and autonomous navigational information. Initialization is particularly vital for an INS. This paper proposes a two-position initialization method based on a standard Kalman filter, in which the difference between the computed star vector and the measured star vector serves as the measurement. With the aid of a star sensor and the two positions, the attitudinal and positional errors can be greatly reduced, and the biases of the three gyros and accelerometers can also be estimated. The semi-physical simulation results show that the attitudinal and positional errors converge to within 0.07″ and 0.1 m, respectively, when the given initial positional error is 1 km and the attitudinal error is 10°. These results show that the proposed method can accomplish alignment, positioning and calibration functions simultaneously. Thus the proposed two-position initialization method has the potential for application in lunar rover navigation.
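
    The building block of such a scheme is the standard Kalman filter measurement update, with the computed-minus-measured star vector as the innovation. The sketch below shows one such update with toy matrices; the state layout, the observation matrix H, and the noise levels are illustrative assumptions, not the paper's actual filter design.

```python
import numpy as np

# One measurement update of a standard Kalman filter (toy dimensions).
x = np.zeros(4)                        # state: e.g. attitude errors and gyro biases
P = np.eye(4) * 1e-2                   # state covariance
H = np.array([[1.0, 0.0, 0.5, 0.0],
              [0.0, 1.0, 0.0, 0.5]])   # maps state to the star-vector difference
R = np.eye(2) * 1e-6                   # star-sensor measurement noise

z = np.array([2e-4, -1e-4])            # computed-minus-measured star vector
S = H @ P @ H.T + R                    # innovation covariance
K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
x = x + K @ (z - H @ x)                # corrected estimate of errors and biases
P = (np.eye(4) - K @ H) @ P            # updated covariance
```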

  18. Human Error In Complex Systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1991-01-01

    Report presents results of research aimed at understanding causes of human error in such complex systems as aircraft, nuclear powerplants, and chemical processing plants. Research considered both slips (errors of action) and mistakes (errors of intention), and the influence of workload on them. Results indicated that humans respond to conditions in which errors are expected by attempting to reduce the incidence of errors, and that adaptation to conditions is a potent influence on human behavior in discretionary situations.

  19. Spectroscopic ellipsometer based on direct measurement of polarization ellipticity.

    PubMed

    Watkins, Lionel R

    2011-06-20

    A polarizer-sample-Wollaston prism analyzer ellipsometer is described in which the ellipsometric angles ψ and Δ are determined by direct measurement of the elliptically polarized light reflected from the sample. With the Wollaston prism initially set to transmit p- and s-polarized light, the azimuthal angle P of the polarizer is adjusted until the two beams have equal intensity. This condition yields ψ=±P and ensures that the reflected elliptically polarized light has an azimuthal angle of ±45° and maximum ellipticity. Rotating the Wollaston prism through 45° and adjusting the analyzer azimuth until the two beams again have equal intensity yields the ellipticity that allows Δ to be determined via a simple linear relationship. The errors produced by nonideal components are analyzed. We show that the polarizer dominates these errors but that for most practical purposes, the error in ψ is negligible and the error in Δ may be corrected exactly. A native oxide layer on a silicon substrate was measured at a single wavelength and multiple angles of incidence and spectroscopically at a single angle of incidence. The best fit film thicknesses obtained were in excellent agreement with those determined using a traditional null ellipsometer.

  20. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Tyrus, E-mail: thb11@psu.edu; Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, 503 Walker Building, University Park, PA 16802-5013

    2016-03-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.
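
    The forecasting step can be pictured with the Lorenz-96 test system named above: integrate an ensemble whose forcing parameter F is drawn from a distribution standing in for the diffusion-forecast density. The sketch below assumes a Gaussian stand-in for that density; the actual method obtains it nonparametrically from the filtered time series.

```python
import numpy as np

def lorenz96(x, F):
    """Lorenz-96 tendencies: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def step(x, F, dt=0.01):
    """One classical RK4 integration step."""
    k1 = lorenz96(x, F)
    k2 = lorenz96(x + 0.5 * dt * k1, F)
    k3 = lorenz96(x + 0.5 * dt * k2, F)
    k4 = lorenz96(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(2)
x0 = rng.standard_normal(40)                       # 40-variable Lorenz-96 state
F_samples = rng.normal(8.0, 0.5, size=50)          # Gaussian stand-in for the
                                                   # diffusion-forecast density
ens = np.tile(x0, (50, 1))
for _ in range(100):                               # each member keeps its own F
    ens = np.array([step(x, F) for x, F in zip(ens, F_samples)])
forecast = ens.mean(axis=0)                        # ensemble forecast of the state
```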

  1. Determination of Trajectories for a Gliding Parachute System

    DTIC Science & Technology

    1975-04-01

    ...The terminal error serves as a penalty function which penalizes undesirable terminal states by the extent to which they deviate from the desired...the open-loop control becomes, in effect, feedback control with the sequence of initial conditions serving as the current state. The major advantage

  2. Eye of the Beholder: Stage Entrance Behavior and Facial Expression Affect Continuous Quality Ratings in Music Performance

    PubMed Central

    Waddell, George; Williamon, Aaron

    2017-01-01

    Judgments of music performance quality are commonly employed in music practice, education, and research. However, previous studies have demonstrated the limited reliability of such judgments, and there is now evidence that extraneous visual, social, and other “non-musical” features can unduly influence them. The present study employed continuous measurement techniques to examine how the process of forming a music quality judgment is affected by the manipulation of temporally specific visual cues. Video footage comprising an appropriate stage entrance and error-free performance served as the standard condition (Video 1). This footage was manipulated to provide four additional conditions, each identical save for a single variation: an inappropriate stage entrance (Video 2); the presence of an aural performance error midway through the piece (Video 3); the same error accompanied by a negative facial reaction by the performer (Video 4); the facial reaction with no corresponding aural error (Video 5). The participants were 53 musicians and 52 non-musicians (N = 105) who individually assessed the performance quality of one of the five randomly assigned videos via a digital continuous measurement interface and headphones. The results showed that participants viewing the “inappropriate” stage entrance made judgments significantly more quickly than those viewing the “appropriate” entrance, and while the poor entrance caused significantly lower initial scores among those with musical training, the effect did not persist long into the performance. The aural error caused an immediate drop in quality judgments that persisted to a lower final score only when accompanied by the frustrated facial expression from the pianist; the performance error alone caused a temporary drop only in the musicians' ratings, and the negative facial reaction alone caused no reaction regardless of participants' musical experience. These findings demonstrate the importance of visual information in forming evaluative and aesthetic judgments in musical contexts and highlight how visual cues dynamically influence those judgments over time. PMID:28487662

  3. A fast-initializing digital equalizer with on-line tracking for data communications

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Barksdale, W. J.

    1974-01-01

    A theory is developed for a digital equalizer for use in reducing intersymbol interference (ISI) on high-speed data communications channels. The equalizer is initialized with a single isolated transmitter pulse, provided the signal-to-noise ratio (SNR) is not unusually low, and then switches to a decision-directed, on-line mode of operation that allows tracking of channel variations. Conditions for optimal tap-gain settings are obtained first for a transversal equalizer structure by using a mean squared error (MSE) criterion, a first-order gradient algorithm to determine the adjustable equalizer tap-gains, and a sequence of isolated initializing pulses. Since the rate of tap-gain convergence depends on the eigenvalues of the channel output correlation matrix, convergence can be improved by applying a linear transformation to the channel output to obtain a new correlation matrix with a more favorable eigenvalue spread.
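
    A minimal sketch of the decision-directed transversal equalizer idea follows: a tap-delay line whose gains are adjusted by a first-order gradient (LMS) step on the mean squared error. The channel, noise level, and step size are illustrative; in the on-line tracking mode the training symbol d would be replaced by the receiver's decision.

```python
import numpy as np

rng = np.random.default_rng(3)
symbols = rng.choice([-1.0, 1.0], size=5000)          # transmitted data symbols
channel = np.array([0.1, 1.0, 0.3])                   # toy dispersive channel (ISI)
received = np.convolve(symbols, channel, mode="same")
received += rng.normal(0.0, 0.05, received.size)      # additive channel noise

n_taps, mu = 11, 0.01                                 # equalizer length, step size
g = np.zeros(n_taps)                                  # adjustable tap gains
g[n_taps // 2] = 1.0                                  # center-spike initialization

for k in range(n_taps, received.size):
    u = received[k - n_taps:k][::-1]      # tap-delay-line contents (newest first)
    y = g @ u                             # equalizer output
    d = symbols[k - 1 - n_taps // 2]      # reference symbol aligned with center tap
    g += mu * (d - y) * u                 # first-order gradient (LMS) tap update
```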

  4. Task motivation influences alpha suppression following errors.

    PubMed

    Compton, Rebecca J; Bissey, Bryn; Worby-Selim, Sharoda

    2014-07-01

    The goal of the present research is to examine the influence of motivation on a novel error-related neural marker, error-related alpha suppression (ERAS). Participants completed an attentionally demanding flanker task under conditions that emphasized either speed or accuracy or under conditions that manipulated the monetary value of errors. Conditions in which errors had greater motivational value produced greater ERAS, that is, greater alpha suppression following errors compared to correct trials. A second study found that a manipulation of task difficulty did not affect ERAS. Together, the results confirm that ERAS is both a robust phenomenon and one that is sensitive to motivational factors. Copyright © 2014 Society for Psychophysiological Research.

  5. The Relationship between Occurrence Timing of Dispensing Errors and Subsequent Danger to Patients under the Situation According to the Classification of Drugs by Efficacy.

    PubMed

    Tsuji, Toshikazu; Nagata, Kenichiro; Kawashiri, Takehiro; Yamada, Takaaki; Irisa, Toshihiro; Murakami, Yuko; Kanaya, Akiko; Egashira, Nobuaki; Masuda, Satohiro

    2016-01-01

    There are many reports regarding various medical institutions' attempts at the prevention of dispensing errors. However, the relationship between the occurrence timing of dispensing errors and the subsequent danger to patients has not been studied in relation to the classification of drugs by efficacy. We therefore analyzed the relationship between position and time in the occurrence of dispensing errors, and investigated the relationship between their occurrence timing and the danger to patients. In this study, dispensing errors and incidents in three categories (drug name errors, drug strength errors, drug count errors) were classified into two groups in terms of drug efficacy (efficacy similarity (-) group, efficacy similarity (+) group) and into three classes in terms of the occurrence timing of dispensing errors (initial phase errors, middle phase errors, final phase errors). The rates of damage shifting from "dispensing errors" to "damage to patients" were then compared, as an index of danger, between the two groups and among the three classes. The rate of damage in the "efficacy similarity (-) group" was significantly higher than that in the "efficacy similarity (+) group". Furthermore, the rate of damage was highest for "initial phase errors" and lowest for "final phase errors" among the three classes. From these results, it became clear that the earlier dispensing errors occur, the more severe the damage to patients becomes.

  6. Atmospheric Ascent Guidance for Rocket-Powered Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Dukeman, Greg A.

    2002-01-01

    An advanced ascent guidance algorithm for rocket-powered launch vehicles is developed. This algorithm cyclically solves the calculus-of-variations two-point boundary-value problem starting at vertical rise completion through main engine cutoff. This is different from traditional ascent guidance algorithms which operate in a simple open-loop mode until the high dynamic pressure (including the critical max-Q) portion of the trajectory is over, at which time guidance operates under the assumption of negligible aerodynamic acceleration (i.e., vacuum dynamics). The initial costate guess is corrected based on errors in the terminal state constraints and the transversality conditions. Judicious approximations are made to reduce the order and complexity of the state/costate system. Results comparing guided launch vehicle trajectories with POST open-loop trajectories are given verifying the basic formulation of the algorithm. Multiple shooting is shown to be a very effective numerical technique for this application. In particular, just one intermediate shooting point, in addition to the initial shooting point, is sufficient to significantly reduce sensitivity to the guessed initial costates. Simulation results from a high-fidelity trajectory simulation are given for the case of launch to sub-orbital cutoff conditions as well as launch to orbit conditions. An abort to downrange landing site formulation of the algorithm is presented.
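
    The cyclic correction of the guessed initial costate from terminal-constraint errors is, in essence, a shooting method. The toy sketch below applies the same Newton-style correction to a scalar two-point boundary-value problem (y'' = -y with y(0) = 0, y(1) = 1, unknown y'(0)); it is a schematic of the numerical idea, not the guidance algorithm itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

def terminal_error(s):
    """Integrate the ODE from a guessed initial slope s and return the
    terminal-constraint violation (we want y(1) = 1)."""
    sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0.0, 1.0), [0.0, s], rtol=1e-9)
    return sol.y[0, -1] - 1.0

s = 0.5                                   # initial guess for the unknown "costate"
for _ in range(10):                       # Newton iteration, numerical derivative
    e = terminal_error(s)
    if abs(e) < 1e-10:
        break
    de = (terminal_error(s + 1e-6) - e) / 1e-6
    s -= e / de                           # correct the guess from the terminal error

# Exact answer: y(t) = sin(t)/sin(1), so y'(0) = 1/sin(1) ~ 1.1884.
print(s)
```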

  7. The Greenwich Photo-heliographic Results (1874 - 1976): Initial Corrections to the Printed Publications

    NASA Astrophysics Data System (ADS)

    Erwin, E. H.; Coffey, H. E.; Denig, W. F.; Willis, D. M.; Henwood, R.; Wild, M. N.

    2013-11-01

    A new sunspot and faculae digital dataset for the interval 1874 - 1955 has been prepared under the auspices of the NOAA National Geophysical Data Center (NGDC). This digital dataset contains measurements of the positions and areas of both sunspots and faculae published initially by the Royal Observatory, Greenwich, and subsequently by the Royal Greenwich Observatory (RGO), under the title Greenwich Photo-heliographic Results (GPR), 1874 - 1976. Quality control (QC) procedures based on logical consistency have been used to identify the more obvious errors in the RGO publications. Typical examples of identifiable errors are North versus South errors in specifying heliographic latitude, errors in specifying heliographic (Carrington) longitude, errors in the dates and times, errors in sunspot group numbers, arithmetic errors in the summation process, and the occasional omission of solar ephemerides. Although the number of errors in the RGO publications is remarkably small, an initial table of necessary corrections is provided for the interval 1874 - 1917. Moreover, as noted in the preceding companion papers, the existence of two independently prepared digital datasets, which both contain information on sunspot positions and areas, makes it possible to outline a preliminary strategy for the development of an even more accurate digital dataset. Further work is in progress to generate an extremely reliable sunspot digital dataset, based on the long programme of solar observations supported first by the Royal Observatory, Greenwich, and then by the Royal Greenwich Observatory.

  8. The Sources of Error in Spanish Writing.

    ERIC Educational Resources Information Center

    Justicia, Fernando; Defior, Sylvia; Pelegrina, Santiago; Martos, Francisco J.

    1999-01-01

    Determines the pattern of errors in Spanish spelling. Analyzes and proposes a classification system for the errors made by children in the initial stages of the acquisition of spelling skills. Finds that the diverse forms of only 20 Spanish words produce 36% of the spelling errors in Spanish, and that substitution is the most frequent type of error. (RS)

  9. The impact of underwater glider observations in the forecast of Hurricane Gonzalo (2014)

    NASA Astrophysics Data System (ADS)

    Goni, G. J.; Domingues, R. M.; Kim, H. S.; Domingues, R. M.; Halliwell, G. R., Jr.; Bringas, F.; Morell, J. M.; Pomales, L.; Baltes, R.

    2017-12-01

    The tropical Atlantic basin is one of seven global regions where tropical cyclones (TC) are commonly observed to originate and intensify from June to November. On average, approximately 12 TCs travel through the region every year, frequently affecting coastal and highly populated areas. In an average year, 2 to 3 of them are categorized as intense hurricanes. Given the appropriate atmospheric conditions, TC intensification has been linked to ocean conditions, such as increased ocean heat content and enhanced salinity stratification near the surface. While errors in hurricane track forecasts have been reduced in recent years, errors in intensity forecasts remain mostly unchanged. Several studies have indicated that the use of in situ observations has the potential to improve the representation of the ocean needed to correctly initialize coupled hurricane intensity forecast models. However, a sustained in situ ocean observing system in the tropical North Atlantic Ocean and Caribbean Sea dedicated to measuring subsurface thermal and salinity fields in support of TC intensity studies and forecasts has yet to be implemented. Autonomous technologies offer new and cost-effective opportunities to accomplish this objective. We highlight here a partnership effort that utilizes underwater gliders to better understand air-sea processes during high wind events and that is particularly geared towards improving hurricane intensity forecasts. Results are presented for Hurricane Gonzalo (2014), where glider observations obtained in the tropical Atlantic: helped to provide an accurate description of the upper ocean conditions, including the presence of a low-salinity barrier layer; allowed a detailed analysis of the upper ocean response to the hurricane-force winds of Gonzalo; improved the initialization of the ocean in a coupled ocean-atmosphere numerical model; and, together with observations from other ocean observing platforms, substantially reduced the error in intensity forecasts using the HYCOM-HWRF model. Data collected by this project are transmitted in real time to the Global Telecommunication System, distributed through the institutional web pages, by the IOOS Glider Data Assembly Center, and by NCEI, and assimilated in real-time numerical weather forecast models.

  10. Exploring the initial steps of the testing process: frequency and nature of pre-preanalytic errors.

    PubMed

    Carraro, Paolo; Zago, Tatiana; Plebani, Mario

    2012-03-01

    Few data are available on the nature of errors in the so-called pre-preanalytic phase, the initial steps of the testing process. We therefore sought to evaluate pre-preanalytic errors using a study design that enabled us to observe the initial procedures performed in the ward, from the physician's test request to the delivery of specimens in the clinical laboratory. After a 1-week direct observational phase designed to identify the operating procedures followed in 3 clinical wards, we recorded all nonconformities and errors occurring over a 6-month period. Overall, the study considered 8547 test requests, for which 15 917 blood sample tubes were collected and 52 982 tests undertaken. No significant differences in error rates were found between the observational phase and the overall study period, but underfilling of coagulation tubes was found to occur more frequently in the direct observational phase (P = 0.043). In the overall study period, the frequency of errors was found to be particularly high regarding order transmission [29 916 parts per million (ppm)] and hemolysed samples (2537 ppm). The frequency of patient misidentification was 352 ppm, and the most frequent nonconformities were test requests recorded in the diary without the patient's name and failure to check the patient's identity at the time of blood draw. The data collected in our study confirm the relative frequency of pre-preanalytic errors and underline the need to consensually prepare and adopt effective standard operating procedures in the initial steps of laboratory testing and to monitor compliance with these procedures over time.

  11. Systematic Errors in an Air Track Experiment.

    ERIC Educational Resources Information Center

    Ramirez, Santos A.; Ham, Joe S.

    1990-01-01

    Errors found in a common physics experiment to measure acceleration resulting from gravity using a linear air track are investigated. Glider position at release and initial velocity are shown to be sources of systematic error. (CW)

  12. A geometric model for initial orientation errors in pigeon navigation.

    PubMed

    Postlethwaite, Claire M; Walker, Michael M

    2011-01-21

    All mobile animals respond to gradients in signals in their environment, such as light, sound, odours and magnetic and electric fields, but it remains controversial how they might use these signals to navigate over long distances. The Earth's surface is essentially two-dimensional, so two stimuli are needed to act as coordinates for navigation. However, no environmental fields are known to be simple enough to act as perpendicular coordinates on a two-dimensional grid. Here, we propose a model for navigation in which we assume that an animal has a simplified 'cognitive map' in which environmental stimuli act as perpendicular coordinates. We then investigate how systematic deviation of the contour lines of the environmental signals from a simple orthogonal arrangement can cause errors in position determination and lead to systematic patterns of directional errors in initial homing directions taken by pigeons. The model reproduces patterns of initial orientation errors seen in previously collected data from homing pigeons, predicts that errors should increase with distance from the loft, and provides a basis for efforts to identify further sources of orientation errors made by homing pigeons. Copyright © 2010 Elsevier Ltd. All rights reserved.
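
    The mechanism can be sketched in a few lines: the bird reads two signal values at the release site, interprets them as perpendicular map coordinates, and heads "home" in the implied direction. When the second gradient actually deviates from orthogonality, the believed direction is rotated away from the true one. The linear gradients and the skew angle below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def homing_error(release_xy, skew_deg):
    """Initial orientation error when gradient 2 is skewed from perpendicular."""
    g1 = np.array([1.0, 0.0])                     # direction of gradient 1
    a = np.radians(90.0 + skew_deg)               # gradient 2, skewed from 90 deg
    g2 = np.array([np.cos(a), np.sin(a)])
    signals = np.array([g1 @ release_xy, g2 @ release_xy])  # values sensed on site
    believed_xy = signals                         # bird treats signals as (x, y)
    true_home = np.arctan2(-release_xy[1], -release_xy[0])   # home at the origin
    believed_home = np.arctan2(-believed_xy[1], -believed_xy[0])
    return np.degrees(believed_home - true_home)

release = np.array([50.0, 50.0])                  # release site; loft at origin
for skew in (0.0, 5.0, 15.0):
    print(f"skew {skew:5.1f} deg -> error {homing_error(release, skew):6.1f} deg")
```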

  13. Effective force control by muscle synergies.

    PubMed

    Berger, Denise J; d'Avella, Andrea

    2014-01-01

    Muscle synergies have been proposed as a way for the central nervous system (CNS) to simplify the generation of motor commands, and they have been shown to explain a large fraction of the variation in muscle patterns across a variety of conditions. However, whether human subjects are able to control forces and movements effectively with a small set of synergies has not been tested directly. Here we show that muscle synergies can be used to generate target forces in multiple directions with the same accuracy achieved using individual muscles. We recorded electromyographic (EMG) activity from 13 arm muscles and isometric hand forces during a force reaching task in a virtual environment. From these data we estimated the force associated with each muscle by linear regression, and we identified muscle synergies by non-negative matrix factorization. We compared trajectories of a virtual mass displaced by the force estimated using the entire set of recorded EMGs to trajectories obtained using 4-5 muscle synergies. While trajectories were similar, when feedback was provided according to force estimated from recorded EMGs (EMG-control), trajectories generated with the synergies were on average less accurate. However, when feedback was provided according to recorded force (force-control), we did not find significant differences in initial angle error and endpoint error. We then tested whether synergies could be used as effectively as individual muscles to control cursor movement in the force reaching task by providing feedback according to force estimated from the projection of the recorded EMGs into synergy space (synergy-control). Human subjects were able to perform the task immediately after switching from force-control to EMG-control and synergy-control, and we found no differences between initial movement direction errors and endpoint errors in all control modes. These results indicate that muscle synergies provide an effective strategy for motor coordination.
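
    The decomposition step is non-negative matrix factorization of rectified EMG envelopes into a muscle-by-synergy matrix and time-varying activation coefficients. A minimal sketch with synthetic data follows; the 13 muscles and 4 synergies match the numbers above, while the variance-accounted-for (VAF) criterion shown is one common way to choose the number of synergies, not necessarily the authors' exact procedure.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)
W_true = rng.random((13, 4))                # 4 synergies over 13 muscles
C_true = rng.random((4, 2000))              # synergy activations over time
emg = W_true @ C_true + 0.01 * rng.random((13, 2000))   # synthetic EMG envelopes

model = NMF(n_components=4, init="nndsvda", max_iter=500)
C = model.fit_transform(emg.T).T            # activation coefficients (4 x time)
W = model.components_.T                     # muscle synergies (13 muscles x 4)

# Variance accounted for (VAF) by the low-rank reconstruction.
recon = W @ C
vaf = 1.0 - np.sum((emg - recon) ** 2) / np.sum(emg ** 2)
print(f"VAF = {vaf:.3f}")
```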

  14. An exploration of Australian hospital pharmacists' attitudes to patient safety.

    PubMed

    Lalor, Daniel J; Chen, Timothy F; Walpola, Ramesh; George, Rachel A; Ashcroft, Darren M; Fois, Romano A

    2015-02-01

    To explore the attitudes of Australian hospital pharmacists towards patient safety in their work settings. A safety climate questionnaire was administered to all 2347 active members of the Society of Hospital Pharmacists of Australia in 2010. Part of the survey elicited free-text comments about patient safety, error and incident reporting. The comments were subjected to thematic analysis to determine the attitudes held by respondents in relation to patient safety and its quality management in their work settings. Two hundred and ten (210) of 643 survey respondents provided comments on safety and quality issues related to their work settings. The responses contained a number of dominant themes including issues of workforce and working conditions, incident reporting systems, the response when errors occur, the presence or absence of a blame culture, hospital management support for safety initiatives, openness about errors and the value of teamwork. A number of pharmacists described the development of a mature patient-safety culture - one that is open about reporting errors and active in reducing their occurrence. Others described work settings in which a culture of blame persists, stifling error reporting and ultimately compromising patient safety. Australian hospital pharmacists hold a variety of attitudes that reflect diverse workplace cultures towards patient safety, error and incident reporting. This study has provided an insight into these attitudes and the actions that are needed to improve the patient-safety culture within Australian hospital pharmacy work settings. © 2014 Royal Pharmaceutical Society.

  15. Error Estimate of the Ares I Vehicle Longitudinal Aerodynamic Characteristics Based on Turbulent Navier-Stokes Analysis

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad

    2011-01-01

    Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on the unstructured grid, Reynolds-averaged Navier-Stokes flow solver USM3D, with an assumption that the flow is fully turbulent over the entire vehicle. This effort was designed to complement the prior computational activities conducted over the past five years in support of the Ares I Project with the emphasis on the vehicle's last design cycle, designated the A106 configuration. Due to a lack of flight data for this particular design's outer mold line, the initial vehicle's aerodynamic predictions and the associated error estimates were first assessed and validated against the available experimental data at representative wind tunnel flow conditions pertinent to the ascent phase of the trajectory without including any propulsion effects. Subsequently, the established procedures were then applied to obtain the longitudinal aerodynamic predictions at the selected flight flow conditions. Sample computed results and the correlations with the experimental measurements are presented. In addition, the present analysis includes the relevant data to highlight the balance between the prediction accuracy against the grid size and, thus, the corresponding computer resource requirements for the computations at both wind tunnel and flight flow conditions. NOTE: Some details have been removed from selected plots and figures in compliance with the sensitive but unclassified (SBU) restrictions. However, the content still conveys the merits of the technical approach and the relevant results.

  16. Fast forward kinematics algorithm for real-time and high-precision control of the 3-RPS parallel mechanism

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Yu, Jingjun; Pei, Xu

    2018-06-01

    A new forward kinematics algorithm for the mechanism of 3-RPS (R: Revolute; P: Prismatic; S: Spherical) parallel manipulators is proposed in this study. This algorithm is primarily based on the special geometric conditions of the 3-RPS parallel mechanism, and it eliminates the errors produced by parasitic motions to improve and ensure accuracy. Specifically, the errors can be less than 10⁻⁶. In this method, only the group of solutions that is consistent with the actual situation of the platform is obtained rapidly. This algorithm substantially improves calculation efficiency because the selected initial values are reasonable, and all the formulas in the calculation are analytical. This novel forward kinematics algorithm is well suited for real-time and high-precision control of the 3-RPS parallel mechanism.
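
    Forward kinematics of parallel mechanisms is typically posed as a set of geometric constraint equations solved by Newton-Raphson iteration from a reasonable initial guess, which is presumably the role of the "reasonable initial values" above. The sketch below shows the generic iteration on a stand-in constraint function; the real 3-RPS equations encode the revolute-prismatic-spherical joint geometry and the parasitic-motion conditions.

```python
import numpy as np

def constraints(pose, leg_lengths):
    """Stand-in constraint equations f(pose) = 0 for given actuator lengths."""
    z, rx, ry = pose                    # heave and the two permitted tilts
    return np.array([
        (z - 0.1 * rx) ** 2 - leg_lengths[0] ** 2,
        (z + 0.1 * rx - 0.1 * ry) ** 2 - leg_lengths[1] ** 2,
        (z + 0.1 * ry) ** 2 - leg_lengths[2] ** 2,
    ])

def forward_kinematics(leg_lengths, pose0, tol=1e-10):
    pose = np.asarray(pose0, dtype=float)
    for _ in range(50):
        f = constraints(pose, leg_lengths)
        if np.max(np.abs(f)) < tol:     # residuals driven far below 1e-6
            break
        J = np.empty((3, 3))            # numerical Jacobian, column by column
        for j in range(3):
            dp = np.zeros(3); dp[j] = 1e-7
            J[:, j] = (constraints(pose + dp, leg_lengths) - f) / 1e-7
        pose -= np.linalg.solve(J, f)   # Newton step
    return pose

print(forward_kinematics([1.0, 1.0, 1.0], pose0=[0.9, 0.0, 0.0]))
```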

  17. Effects of control parameters of three-point initiation on the formation of an explosively formed projectile with fins

    NASA Astrophysics Data System (ADS)

    Li, R.; Li, W. B.; Wang, X. M.; Li, W. B.

    2018-03-01

    The effects of the initiation diameter and synchronicity error on the formation of fins and stable-flight velocity of an explosively formed projectile (EFP) with three-point initiation are investigated. The pressure and area of the Mach wave acting on the metal liner at different initiation diameters are calculated employing the Whitham method. LS-DYNA software is used to investigate the asymmetric collision of detonation waves resulting from three-point initiation synchronicity error, the distortion characteristics of the liner resulting from the composite detonation waves, and the performance parameters of the EFP with fins. Results indicate that deviations of the Y-shaped high-pressure zone and central ultrahigh-pressure zone from the liner center can be attributed to the error of three-point initiation, which leads to the irregular formation of EFP fins. It is noted that the area of the Mach wave decreases, but the pressure of the Mach wave and the final speed and length-to-diameter (L/D) ratio of the EFP increase, benefiting the formation of the EFP fins, as the initiation diameter increases.

  18. The Sustained Influence of an Error on Future Decision-Making.

    PubMed

    Schiffler, Björn C; Bengtsson, Sara L; Lundqvist, Daniel

    2017-01-01

    Post-error slowing (PES) is consistently observed in decision-making tasks after negative feedback. Yet, findings are inconclusive as to whether PES supports performance accuracy. We addressed the role of PES by employing drift diffusion modeling which enabled us to investigate latent processes of reaction times and accuracy on a large-scale dataset (>5,800 participants) of a visual search experiment with emotional face stimuli. In our experiment, post-error trials were characterized by both adaptive and non-adaptive decision processes. An adaptive increase in participants' response threshold was sustained over several trials post-error. Contrarily, an initial decrease in evidence accumulation rate, followed by an increase on the subsequent trials, indicates a momentary distraction of task-relevant attention and resulted in an initial accuracy drop. Higher values of decision threshold and evidence accumulation on the post-error trial were associated with higher accuracy on subsequent trials which further gives credence to these parameters' role in post-error adaptation. Finally, the evidence accumulation rate post-error decreased when the error trial presented angry faces, a finding suggesting that the post-error decision can be influenced by the error context. In conclusion, we demonstrate that error-related response adaptations are multi-component processes that change dynamically over several trials post-error.
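
    The reported pattern maps directly onto drift-diffusion parameters: a raised decision threshold (caution) and a momentarily lowered evidence-accumulation rate (distraction) on post-error trials. The sketch below simulates both regimes with an Euler-Maruyama scheme; all parameter values are illustrative, not the fitted ones from the study.

```python
import numpy as np

def ddm_trial(drift, threshold, dt=0.001, noise=1.0, rng=None):
    """Euler-Maruyama simulation of a single drift-diffusion decision."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, x > 0                    # (reaction time, correct response?)

rng = np.random.default_rng(5)
# Post-error adaptation: higher threshold (slower, more cautious) plus a
# momentary drop in evidence-accumulation rate (distraction).
baseline = [ddm_trial(1.0, 1.0, rng=rng) for _ in range(500)]
post_err = [ddm_trial(0.6, 1.3, rng=rng) for _ in range(500)]
for name, trials in (("baseline", baseline), ("post-error", post_err)):
    rts, acc = zip(*trials)
    print(f"{name}: mean RT = {np.mean(rts):.3f} s, accuracy = {np.mean(acc):.3f}")
```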

  19. Intercepting moving targets: does memory from practice in a specific condition of target displacement affect movement timing?

    PubMed

    de Azevedo Neto, Raymundo Machado; Teixeira, Luis Augusto

    2011-05-01

    This investigation aimed at assessing the extent to which memory from practice in a specific condition of target displacement modulates temporal errors and movement timing of interceptive movements. We compared two groups practicing with certainty of future target velocity, either in unchanged target velocity or in target velocity decrease. Following practice, both experimental groups were probed in the situations of unchanged target velocity and target velocity decrease, either under the context of certainty or uncertainty about target velocity. Results from practice showed similar improvement of temporal accuracy between groups, revealing that target velocity decrease did not disturb temporal movement organization when fully predictable. Analysis of temporal errors in the probing trials indicated that both groups had higher timing accuracy in velocity decrease in comparison with unchanged velocity. An effect of practice was detected as increased temporal accuracy of the velocity decrease group in situations of decreased velocity; a trend consistent with the expected effect of practice was observed, at a descriptive level, for temporal errors in the unchanged velocity group and in movement initiation. An additional point of theoretical interest was the fast adaptation in both groups to a target velocity pattern different from that practiced. These points are discussed from the perspective of the integration of vision and motor control by means of an internal forward model of external motion.

  20. Factors that enhance English-speaking speech-language pathologists' transcription of Cantonese-speaking children's consonants.

    PubMed

    Lockart, Rebekah; McLeod, Sharynne

    2013-08-01

    To investigate speech-language pathology students' ability to identify errors and transcribe typical and atypical speech in Cantonese, a nonnative language. Thirty-three English-speaking speech-language pathology students completed 3 tasks in an experimental within-subjects design. Task 1 (baseline) involved transcribing English words. In Task 2, students transcribed 25 words spoken by a Cantonese adult. An average of 59.1% of consonants was transcribed correctly (72.9% when Cantonese-English transfer patterns were allowed). There was higher accuracy on shared English and Cantonese syllable-initial consonants /m,n,f,s,h,j,w,l/ and syllable-final consonants. In Task 3, students identified consonant errors and transcribed 100 words spoken by Cantonese-speaking children under 4 additive conditions: (1) baseline, (2) +adult model, (3) +information about Cantonese phonology, and (4) all variables (2 and 3 were counterbalanced). There was a significant improvement in the students' identification and transcription scores for conditions 2, 3, and 4, with a moderate effect size. Increased skill was not based on listeners' proficiency in speaking another language, perceived transcription skill, musicality, or confidence with multilingual clients. Speech-language pathology students, with no exposure to or specific training in Cantonese, have some skills to identify errors and transcribe Cantonese. Provision of a Cantonese-adult model and information about Cantonese phonology increased students' accuracy in transcribing Cantonese speech.

  1. Assessing concentration uncertainty estimates from passive microwave sea ice products

    NASA Astrophysics Data System (ADS)

    Meier, W.; Brucker, L.; Miller, J. A.

    2017-12-01

    Sea ice concentration is an essential climate variable and passive microwave derived estimates of concentration are one of the longest satellite-derived climate records. However, until recently uncertainty estimates were not provided. Numerous validation studies provided insight into general error characteristics, but the studies have found that concentration error varied greatly depending on sea ice conditions. Thus, an uncertainty estimate from each observation is desired, particularly for initialization, assimilation, and validation of models. Here we investigate three sea ice products that include an uncertainty for each concentration estimate: the NASA Team 2 algorithm product, the EUMETSAT Ocean and Sea Ice Satellite Application Facility (OSI-SAF) product, and the NOAA/NSIDC Climate Data Record (CDR) product. Each product estimates uncertainty with a completely different approach. The NASA Team 2 product derives uncertainty internally from the algorithm method itself. The OSI-SAF uses atmospheric reanalysis fields and a radiative transfer model. The CDR uses spatial variability from two algorithms. Each approach has merits and limitations. Here we evaluate the uncertainty estimates by comparing the passive microwave concentration products with fields derived from the NOAA VIIRS sensor. The results show that the relationship between the product uncertainty estimates and the concentration error (relative to VIIRS) is complex. This may be due to the sea ice conditions, the uncertainty methods, as well as the spatial and temporal variability of the passive microwave and VIIRS products.

  2. Saccades to remembered targets: the effects of smooth pursuit and illusory stimulus motion

    NASA Technical Reports Server (NTRS)

    Zivotofsky, A. Z.; Rottach, K. G.; Averbuch-Heller, L.; Kori, A. A.; Thomas, C. W.; Dell'Osso, L. F.; Leigh, R. J.

    1996-01-01

    1. Measurements were made in four normal human subjects of the accuracy of saccades to remembered locations of targets that were flashed on a 20 x 30 deg random dot display that was either stationary or moving horizontally and sinusoidally at +/-9 deg at 0.3 Hz. During the interval between the target flash and the memory-guided saccade, the "memory period" (1.4 s), subjects either fixated a stationary spot or pursued a spot moving vertically sinusoidally at +/-9 deg at 0.3 Hz. 2. When saccades were made toward the location of targets previously flashed on a stationary background as subjects fixated the stationary spot, median saccadic error was 0.93 deg horizontally and 1.1 deg vertically. These errors were greater than for saccades to visible targets, which had median values of 0.59 deg horizontally and 0.60 deg vertically. 3. When targets were flashed as subjects smoothly pursued a spot that moved vertically across the stationary background, median saccadic error was 1.1 deg horizontally and 1.2 deg vertically, thus being of similar accuracy to when targets were flashed during fixation. In addition, the vertical component of the memory-guided saccade was much more closely correlated with the "spatial error" than with the "retinal error"; this indicated that, when programming the saccade, the brain had taken into account eye movements that occurred during the memory period. 4. When saccades were made to targets flashed during attempted fixation of a stationary spot on a horizontally moving background, a condition that produces a weak Duncker-type illusion of horizontal movement of the primary target, median saccadic error increased horizontally to 3.2 deg but was 1.1 deg vertically. 5. When targets were flashed as subjects smoothly pursued a spot that moved vertically on the horizontally moving background, a condition that induces a strong illusion of diagonal target motion, median saccadic error was 4.0 deg horizontally and 1.5 deg vertically; thus the horizontal error was greater than under any other experimental condition. 6. In most trials, the initial saccade to the remembered target was followed by additional saccades while the subject was still in darkness. These secondary saccades, which were executed in the absence of visual feedback, brought the eye closer to the target location. During paradigms involving horizontal background movement, these corrections were more prominent horizontally than vertically. 7. Further measurements were made in two subjects to determine whether inaccuracy of memory-guided saccades, in the horizontal plane, was due to mislocalization at the time that the target flashed, misrepresentation of the trajectory of the pursuit eye movement during the memory period, or both. 8. The magnitude of the saccadic error, both with and without corrections made in darkness, was mislocalized by approximately 30% of the displacement of the background at the time that the target flashed. The magnitude of the saccadic error also was influenced by net movement of the background during the memory period, corresponding to approximately 25% of net background movement for the initial saccade and approximately 13% for the final eye position achieved in darkness. 9. We formulated simple linear models to test specific hypotheses about which combinations of signals best describe the observed saccadic amplitudes. 
We tested the possibilities that the brain made an accurate memory of target location and a reliable representation of the eye movement during the memory period, or that one or both of these was corrupted by the illusory visual stimulus. Our data were best accounted for by a model in which both the working memory of target location and the internal representation of the horizontal eye movements were corrupted by the illusory visual stimulus. We conclude that extraretinal signals played only a minor role, in comparison with visual estimates of the direction of gaze, in planning eye movements to remembered targets.

  3. Simplified Approach Charts Improve Data Retrieval Performance

    PubMed Central

    Stewart, Michael; Laraway, Sean; Jordan, Kevin; Feary, Michael S.

    2016-01-01

    The effectiveness of different instrument approach charts to deliver minimum visibility and altitude information during airport equipment outages was investigated. Eighteen pilots flew simulated instrument approaches in three conditions: (a) normal operations using a standard approach chart (standard-normal), (b) equipment outage conditions using a standard approach chart (standard-outage), and (c) equipment outage conditions using a prototype decluttered approach chart (prototype-outage). Errors and retrieval times in identifying minimum altitudes and visibilities were measured. The standard-outage condition produced significantly more errors and longer retrieval times versus the standard-normal condition. The prototype-outage condition had significantly fewer errors and shorter retrieval times than did the standard-outage condition. The prototype-outage condition produced significantly fewer errors but similar retrieval times when compared with the standard-normal condition. Thus, changing the presentation of minima may reduce risk and increase safety in instrument approaches, specifically with airport equipment outages. PMID:28491009

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James; Kuruganti, Teja

    Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.
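
    The underlying solver is a standard finite-difference time-domain update of the wave equation. The sketch below shows that update on a small 2D grid with a smooth initial pulse; the grid size, pulse width, and probe points are illustrative, and the paper's actual contribution, running such a scheme on grids much coarser than the wavelength with specially chosen initial conditions, is only gestured at here.

```python
import numpy as np

n, h, c = 200, 1.0, 1.0
dt = 0.5 * h / c                            # within the 2D CFL stability limit
x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
u = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / 18.0)   # smooth initial pulse
u_prev = u.copy()                           # zero initial velocity

idx = n // 2 + np.arange(10, 80, 10)        # probe points at increasing range
peak = np.zeros(idx.size)

for _ in range(150):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h ** 2
    u, u_prev = 2.0 * u - u_prev + (c * dt) ** 2 * lap, u    # leapfrog update
    peak = np.maximum(peak, np.abs(u[n // 2, idx]))

print(peak)   # peak amplitude decays with range: a crude pathloss curve
```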

  5. The potential for geostationary remote sensing of NO2 to improve weather prediction

    NASA Astrophysics Data System (ADS)

    Liu, X.; Mizzi, A. P.; Anderson, J. L.; Fung, I. Y.; Cohen, R. C.

    2016-12-01

    Observations of surface winds remain sparse making it challenging to simulate and predict the weather in circumstances of light winds that are most important for poor air quality. Direct measurements of short-lived chemicals from space might be a solution to this challenge. Here we investigate the application of data assimilation of NO­2 columns as will be observed from geostationary orbit to improve predictions and retrospective analysis of surface wind fields. Specifically, synthetic NO2 observations are sampled from a "nature run (NR)" regarded as the true atmosphere. Then NO2 observations are assimilated using EAKF methods into a "control run (CR)" which differs from the NR in the wind field. Wind errors are generated by introducing (1) errors in the initial conditions, (2) creating a model error by using two different formulations for the planetary boundary layer, (3) and by combining both of these effects. The assimilation reduces wind errors by up to 50%, indicating the prospects for future geostationary atmospheric composition measurements to improve weather forecasting are substantial. We also examine the assimilation sensitivity to the data assimilation window length. We find that due to the temporal heterogeneity of wind errors, the success of this application favors chemical observations of high frequency, such as those from geostationary platform. We also show the potential to improve soil moisture field by assimilating NO­2 columns.

  6. Temporal-difference prediction errors and Pavlovian fear conditioning: role of NMDA and opioid receptors.

    PubMed

    Cole, Sindy; McNally, Gavan P

    2007-10-01

    Three experiments studied temporal-difference (TD) prediction errors during Pavlovian fear conditioning. In Stage I, rats received conditioned stimulus A (CSA) paired with shock. In Stage II, they received pairings of CSA and CSB with shock that blocked learning to CSB. In Stage III, a serial overlapping compound, CSB --> CSA, was followed by shock. The change in intratrial durations supported fear learning to CSB but reduced fear of CSA, revealing the operation of TD prediction errors. N-methyl-D-aspartate (NMDA) receptor antagonism prior to Stage III prevented learning, whereas opioid receptor antagonism selectively affected predictive learning. These findings support a role for TD prediction errors in fear conditioning. They suggest that NMDA receptors contribute to fear learning by acting on the product of predictive error, whereas opioid receptors contribute to predictive error. (PsycINFO Database Record (c) 2007 APA, all rights reserved).
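
    In the single-step case the TD prediction error reduces to the Rescorla-Wagner error, delta = shock - (sum of CS predictions), which suffices to illustrate the Stage II blocking design; the serial CSB --> CSA compound of Stage III additionally requires the full temporal machinery. A minimal sketch, with illustrative learning rate and trial counts:

```python
# Tabular sketch of the blocking design: CSA is trained first, then CSA + CSB
# together. Because CSA already predicts the shock, the prediction error, and
# hence learning to CSB, stays small.
alpha, shock = 0.3, 1.0
V = {"A": 0.0, "B": 0.0}

for _ in range(30):                        # Stage I: CSA -> shock
    delta = shock - V["A"]                 # prediction error
    V["A"] += alpha * delta

for _ in range(30):                        # Stage II: CSA + CSB -> shock
    delta = shock - (V["A"] + V["B"])      # error on the summed prediction
    V["A"] += alpha * delta
    V["B"] += alpha * delta                # stays near zero: learning is blocked

print(V)                                   # V["B"] ~ 0 despite CSB-shock pairings
```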

  7. Optimal error analysis of the intraseasonal convection due to uncertainties of the sea surface temperature in a coupled model

    NASA Astrophysics Data System (ADS)

    Li, Xiaojing; Tang, Youmin; Yao, Zhixiong

    2017-04-01

    The predictability of the convection related to the Madden-Julian Oscillation (MJO) is studied using the coupled model CESM (Community Earth System Model) and the climatically relevant singular vector (CSV) approach. The CSV approach is an ensemble-based strategy for calculating the optimal initial error on the climate scale. In this study, we focus on the optimal initial error of the sea surface temperature in the Indian Ocean, where the MJO onset occurs. Six MJO events are chosen from 10 years of model simulation output. The results show that the large values of the SVs are mainly located in the Bay of Bengal and the south-central Indian Ocean (around 25°S, 90°E), in a meridional dipole-like pattern. The fast error growth of the CSVs has important impacts on the prediction of the convection related to the MJO. Initial perturbations with the SV pattern cause the deep convection to damp more quickly in the east Pacific Ocean. Moreover, sensitivity studies show that different initial fields do not affect the CSVs appreciably, while the perturbation domain is a more influential factor. The rapid growth of the CSVs is found to be related to the western Bay of Bengal, where the wind stress is first perturbed by the CSV initial error. These results contribute to the establishment of an ensemble prediction system, as well as an optimal observation network. In addition, the analysis of the error growth provides insight into the relationship between SST and the intraseasonal convection related to the MJO.

  8. Optimizing velocities and transports for complex coastal regions and archipelagos

    NASA Astrophysics Data System (ADS)

    Haley, Patrick J.; Agarwal, Arpit; Lermusiaux, Pierre F. J.

    2015-05-01

    We derive and apply a methodology for the initialization of velocity and transport fields in complex multiply-connected regions with multiscale dynamics. The result is initial fields that are consistent with observations, complex geometry and dynamics, and that can simulate the evolution of ocean processes without large spurious initial transients. A class of constrained weighted least squares optimizations is defined to best fit first-guess velocities while satisfying the complex bathymetry, coastline and divergence strong constraints. A weak constraint towards the minimum inter-island transports that are in accord with the first-guess velocities provides important velocity corrections in complex archipelagos. In the optimization weights, the minimum distance and vertical area between pairs of coasts are computed using a Fast Marching Method. Additional information on velocities and transports is included as strong or weak constraints. We apply our methodology around the Hawaiian islands of Kauai/Niihau, in the Taiwan/Kuroshio region and in the Philippines Archipelago. Comparisons with other common initialization strategies, among hindcasts from these initial conditions (ICs), and with independent in situ observations show that our optimization corrects transports, satisfies boundary conditions and redirects currents. Differences between the hindcasts from these different ICs are found to grow for at least 2-3 weeks. When compared to independent in situ observations, simulations from our optimized ICs are shown to have the smallest errors.
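
    The core optimization can be illustrated with a small equality-constrained weighted least-squares problem solved through its KKT system. This is a minimal sketch of the idea, not the paper's full scheme; the weight matrix and constraint here are toy stand-ins for the bathymetry, coastline, and divergence constraints.

        import numpy as np

        def constrained_wls(u0, W, A, b):
            """Minimize (u - u0)^T W (u - u0) subject to A u = b via the KKT system.

            u0  : first-guess velocity vector
            W   : symmetric positive-definite weight matrix
            A,b : linear strong constraints (e.g., discrete divergence, coastline)
            """
            n, m = len(u0), len(b)
            K = np.block([[2 * W, A.T],
                          [A, np.zeros((m, m))]])
            rhs = np.concatenate([2 * W @ u0, b])
            sol = np.linalg.solve(K, rhs)
            return sol[:n]                       # optimized velocities

        # Toy example: 4 velocity unknowns, one zero-net-transport constraint
        u0 = np.array([1.0, 0.5, -0.2, 0.8])
        W = np.eye(4)
        A = np.ones((1, 4)); b = np.array([0.0])
        u = constrained_wls(u0, W, A, b)         # satisfies A @ u = 0 exactly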

  9. Linking models and data on vegetation structure

    NASA Astrophysics Data System (ADS)

    Hurtt, G. C.; Fisk, J.; Thomas, R. Q.; Dubayah, R.; Moorcroft, P. R.; Shugart, H. H.

    2010-06-01

    For more than a century, scientists have recognized the importance of vegetation structure in understanding forest dynamics. Now future satellite missions such as Deformation, Ecosystem Structure, and Dynamics of Ice (DESDynI) hold the potential to provide unprecedented global data on vegetation structure needed to reduce uncertainties in terrestrial carbon dynamics. Here, we briefly review the uses of data on vegetation structure in ecosystem models, develop and analyze theoretical models to quantify model-data requirements, and describe recent progress using a mechanistic modeling approach utilizing a formal scaling method and data on vegetation structure to improve model predictions. Generally, both limited sampling and coarse resolution averaging lead to model initialization error, which in turn is propagated in subsequent model prediction uncertainty and error. In cases with representative sampling, sufficient resolution, and linear dynamics, errors in initialization tend to compensate at larger spatial scales. However, with inadequate sampling, overly coarse resolution data or models, and nonlinear dynamics, errors in initialization lead to prediction error. A robust model-data framework will require both models and data on vegetation structure sufficient to resolve important environmental gradients and tree-level heterogeneity in forest structure globally.

  10. Influence of wheelchair front caster wheel on reverse directional stability.

    PubMed

    Guo, Songfeng; Cooper, Rory A; Corfman, Tom; Ding, Dan; Grindle, Garrett

    2003-01-01

    The purpose of this research was to study the directional stability of rear-wheel-drive, electric-powered wheelchairs (EPWs) during reversing under different initial front caster orientations. Specifically, the weight distribution differences caused by certain initial caster orientations were examined as a possible mechanism for the directional instability that could lead to accidents. Directional stability was quantified by measuring the drive direction error of the EPW with a motion analysis system. Ground reaction forces were collected to determine the load on the front casters, and back-emf data were collected to obtain the motor speeds. The drive direction error was found to differ across initial caster orientations. Drive direction error was greatest when both casters were oriented 90 degrees to the left or right, and least when both casters were oriented forward. The results show that drive direction error corresponds to the loading difference on the casters. The data indicate that loading differences may cause asymmetric drag on the casters, which in turn causes an unbalanced torque load on the motors. This leads to a difference in motor speeds and hence drive direction error.

  11. Predicting 2D target velocity cannot help 2D motion integration for smooth pursuit initiation.

    PubMed

    Montagnini, Anna; Spering, Miriam; Masson, Guillaume S

    2006-12-01

    Smooth pursuit eye movements reflect the temporal dynamics of bidimensional (2D) visual motion integration. When tracking a single, tilted line, initial pursuit direction is biased toward unidimensional (1D) edge motion signals, which are orthogonal to the line orientation. Over 200 ms, tracking direction is slowly corrected to finally match the 2D object motion during steady-state pursuit. We now show that repetition of line orientation and/or motion direction neither eliminates the transient tracking direction error nor changes the time course of pursuit correction. Nonetheless, multiple successive presentations of a single orientation/direction condition elicit robust anticipatory pursuit eye movements that always go in the 2D object motion direction, not the 1D edge motion direction. These results demonstrate that predictive signals about target motion cannot be used for an efficient integration of ambiguous velocity signals at pursuit initiation.

  12. In-flight alignment using H∞ filter for strapdown INS on aircraft.

    PubMed

    Pei, Fu-Jun; Liu, Xuan; Zhu, Li

    2014-01-01

    In-flight alignment is an effective way to improve the accuracy and speed of initial alignment for a strapdown inertial navigation system (INS). During flight, strapdown INS alignment is disturbed by linear and angular movements of the aircraft. To deal with these disturbances in dynamic initial alignment, a novel alignment method for strapdown INS (SINS) is investigated in this paper. In this method, an initial alignment error model of SINS in the inertial frame is established. The observability of the system is analyzed by piece-wise constant system (PWCS) theory, and the observable degree is computed by singular value decomposition (SVD). It is demonstrated that the system is completely observable and that all the system state parameters can be estimated by an optimal filter. An H∞ filter is then designed to handle the uncertainty of the measurement noise. The simulation results demonstrate that the proposed algorithm achieves better accuracy under dynamic disturbance conditions.
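
    The observable-degree computation can be sketched as follows: stack the observability blocks of the piece-wise constant segments and read the singular values off an SVD, with nonzero singular values indicating observable directions and their magnitudes the observable degree. This is a simplified illustration that omits the inter-segment transition products used in full PWCS analysis; the matrices are toy examples.

        import numpy as np

        def observability_degrees(segments, n):
            """Stack per-segment observability matrices (simplified PWCS) and
            return the singular values as observable degrees.

            segments : list of (F, H) pairs, one per constant segment
            n        : state dimension
            """
            blocks = []
            for F, H in segments:
                rows = [H]
                for _ in range(n - 1):
                    rows.append(rows[-1] @ F)    # H, HF, HF^2, ...
                blocks.append(np.vstack(rows))
            Q = np.vstack(blocks)                # total observability matrix
            return np.linalg.svd(Q, compute_uv=False)

        # Toy 2-state example with two constant segments
        F1 = np.array([[1.0, 0.1], [0.0, 1.0]]); H1 = np.array([[1.0, 0.0]])
        F2 = np.array([[1.0, 0.2], [0.0, 1.0]]); H2 = np.array([[1.0, 0.0]])
        svals = observability_degrees([(F1, H1), (F2, H2)], n=2)
        # All singular values > 0 => completely observable in this sketch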

  13. Initial Activity and Voluntary Behavior in Animals (Initiale Aktivität und Willkürverhalten bei Tieren)

    NASA Astrophysics Data System (ADS)

    Heisenberg, Martin

    1983-02-01

    Initiation as a basic property of behavioral activity is functionally analyzed and discussed at the level of voluntary behavior. Fixed action patterns often are not released by stimuli but are generated by the animal itself through brain processes of the Darwinian type. Analogous to mutations, behavioral “subroutines” are brought up by chance and are subjected to selection, either by the change in the situation (trial and elimination of error) or by mental activity suppressing inappropriate behavior even before it is executed. Initiation improves the chance of survival. It is a prerequisite of goal-oriented behavior, an essential constituent of operant conditioning, and presumably the first step in the evolution of thought. According to I. Kant, a person is free if, by following his own directive, he does what has to be done. This definition meets the two central criteria of initiation: independence from releasing stimuli and the adaptive value of the behavior generated.

  14. Indirect learning control for nonlinear dynamical systems

    NASA Technical Reports Server (NTRS)

    Ryu, Yeong Soon; Longman, Richard W.

    1993-01-01

    In a previous paper, learning control algorithms were developed based on adaptive control ideas for linear time-variant systems. The learning control methods were shown to have certain advantages over their adaptive control counterparts, such as the ability to produce zero tracking error in time-varying systems and the ability to eliminate repetitive disturbances. In recent years, certain adaptive control algorithms have been developed for multi-body dynamic systems such as robots, with globally guaranteed convergence to zero tracking error for the nonlinear system equations. In this paper we study the relationship between such adaptive control methods, designed for this specific class of nonlinear systems, and the learning control problem for such systems, seeking to converge to zero tracking error in following a specific command repeatedly, starting from the same initial conditions each time. The extension of these methods from the adaptive control problem to the learning control problem is seen to be trivial. The advantages and disadvantages of using learning control based on such adaptive control concepts for nonlinear systems, and of using other currently available learning control algorithms, are discussed.
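
    The repeated-trial structure described here is the essence of iterative learning control. As a minimal sketch (not the paper's algorithm), consider the P-type update u_{k+1}(t) = u_k(t) + L e_k(t+1) applied to a scalar discrete plant, restarting from the same initial condition each trial; the plant and gain values are assumed, and this toy scheme converges because |1 - L b| < 1.

        import numpy as np

        # P-type iterative learning control on a scalar discrete plant
        # x(t+1) = a x(t) + b u(t), y = x, repeating the same task from the
        # same initial condition each trial.
        a, b, L = 0.9, 0.5, 1.2                        # plant and learning gain (assumed)
        T = 50
        y_des = np.sin(np.linspace(0, np.pi, T + 1))   # desired trajectory

        u = np.zeros(T)
        for trial in range(30):
            x = np.zeros(T + 1)
            x[0] = y_des[0]                            # same initial condition every trial
            for t in range(T):
                x[t + 1] = a * x[t] + b * u[t]
            e = y_des - x                              # tracking error this trial
            u = u + L * e[1:]                          # learning update uses e_k(t+1)
        # max|e| shrinks toward zero over trials since |1 - L*b| = 0.4 < 1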

  15. Can tutoring improve performance on a reasoning task under deadline conditions?

    PubMed

    Osman, Magda

    2007-03-01

    The present study examined the effectiveness of a tutoring technique that has been used to identify and address participants' misunderstandings in Wason's selection task. In particular, the study investigated whether the technique would lead to improvements in performance when the task was presented in a deadline format (a condition in which time restrictions are imposed). In Experiment 1, the effects of tutoring on performance were compared in free time (conditions in which no time restrictions are imposed) and deadline task formats. In Experiment 2, improvements in performance were studied in deadline task formats, in which the tutoring and test phases were separated by an interval of 1 day. The results suggested that tutoring improved performance on the selection task under deadline and in free time conditions. Additionally, the study showed that participants made errors because they had misinterpreted the task. With tutoring, they were able to modify their initial misunderstandings.

  16. What is the cause of confidence inflation in the Life Events Inventory (LEI) paradigm?

    PubMed

    Von Glahn, Nicholas R; Otani, Hajime; Migita, Mai; Langford, Sara J; Hillard, Erin E

    2012-01-01

    Briefly imagining, paraphrasing, or explaining an event causes people to increase their confidence that this event occurred during childhood-the imagination inflation effect. The mechanisms responsible for the effect were investigated with a new paradigm. In Experiment 1, event familiarity (defined as processing fluency) was varied by asking participants to rate each event once, three times, or five times. No inflation was found, indicating that familiarity does not account for the effect. In Experiment 2, richness of memory representation was manipulated by asking participants to generate zero, three, or six details. Confidence increased from the initial to the final rating in the three- and six-detail conditions, indicating that the effect is based on reality-monitoring errors. However, greater inflation in the three-detail condition than in the six-detail condition indicated that there is a boundary condition. These results were also consistent with an alternative hypothesis, the mental workload hypothesis.

  17. Impact of Initial Condition Errors and Precipitation Forecast Bias on Drought Simulation and Prediction in the Huaihe River Basin

    NASA Astrophysics Data System (ADS)

    Xu, H.; Luo, L.; Wu, Z.

    2016-12-01

    Drought, regarded as one of the major natural disasters worldwide, is not always easy to detect and forecast. Hydrological models coupled with Numerical Weather Prediction (NWP) have become a relatively effective approach to drought monitoring and prediction. The accuracy of the hydrological initial condition (IC) and the skill of the NWP precipitation forecast can both heavily affect the quality and skill of a hydrological forecast. In this study, the Variable Infiltration Capacity (VIC) model and the Global Environmental Multi-scale (GEM) model were used to investigate the roles of IC accuracy and NWP forecast accuracy in hydrological predictions. A reverse-ESP-type experiment was conducted for a number of drought events in the Huaihe river basin. The experiment suggests that errors in the ICs indeed affect the drought simulations by VIC and thus drought monitoring. Although errors introduced in the ICs diminish gradually, their influence can sometimes last beyond 12 months. Using the soil moisture anomaly percentage index (SMAPI) as the metric of drought severity for the study region, we quantify the time scale of the IC influence. The analysis shows that this time scale is directly related to the magnitude of the introduced IC error and the average precipitation intensity. To explore how systematic bias correction of GEM-forecasted precipitation affects precipitation and hydrological forecasts, we used both station and gridded observations to remove biases from the forecast data. VIC was then run with the different corrected precipitation inputs during drought events to investigate changes in the drought simulations, demonstrating short-term rolling drought prediction with the better-performing corrected precipitation forecast.

  18. Post audit of a numerical prediction of wellfield drawdown in a semiconfined aquifer system

    USGS Publications Warehouse

    Stewart, M.; Langevin, C.

    1999-01-01

    A numerical ground water flow model was created in 1978 and revised in 1981 to predict the drawdown effects of a proposed municipal wellfield permitted to withdraw 30 million gallons per day (mgd; 1.1 x 10^5 m^3/day) of water from the semiconfined Floridan Aquifer system. The predictions are based on the assumption that water levels in the semiconfined Floridan Aquifer reach a long-term, steady-state condition within a few days of initiation of pumping. Using this assumption, a 75 day simulation without water table recharge, pumping at the maximum permitted rates, was considered to represent a worst-case condition and the greatest drawdowns that could be experienced during wellfield operation. This method of predicting wellfield effects was accepted by the permitting agency. For this post audit, observed drawdowns were derived by taking the difference between pre-pumping and post-pumping potentiometric surface levels. Comparison of predicted and observed drawdowns suggests that actual drawdown over a 12 year period exceeds predicted drawdown by a factor of two or more. Analysis of the source of error in the 1981 predictions suggests that the values used for transmissivity, storativity, specific yield, and leakance are reasonable at the wellfield scale. Simulation using actual 1980-1992 pumping rates improves the agreement between predicted and observed drawdowns. The principal source of error is the assumption that water levels in a semiconfined aquifer achieve a steady-state condition after a few days or weeks of pumping. Simulations using a version of the 1981 model modified to include recharge and evapotranspiration suggest that it can take hundreds of days or several years for water levels in the linked Surficial and Floridan Aquifers to reach an apparent steady-state condition, and that slow declines in levels continue for years after the initiation of pumping. While the 1981 'impact' model can be used for reasonably predicting short-term, wellfield-scale effects of pumping, using a 75 day long simulation without recharge to predict the long-term behavior of the wellfield was an inappropriate application, resulting in significant underprediction of wellfield effects.

  19. The effect of monetary punishment on error evaluation in a Go/No-go task.

    PubMed

    Maruo, Yuya; Sommer, Werner; Masaki, Hiroaki

    2017-10-01

    Little is known about the effects of the motivational significance of errors in Go/No-go tasks. We investigated the impact of monetary punishment on the error-related negativity (ERN) and error positivity (Pe) for both overt errors and partial errors, that is, no-go trials without overt responses but with covert muscle activity. We compared high and low punishment conditions, where errors were penalized with 50 or 5 yen, respectively, and a control condition without monetary consequences for errors. Because we hypothesized that the partial-error ERN might overlap with the no-go N2, we compared ERPs between correct rejections (i.e., successful no-go trials) and partial errors in no-go trials. We also expected that Pe amplitudes should increase with the severity of the penalty for errors. Mean error rates were significantly lower in the high punishment condition than in the control condition. Monetary punishment did not influence the overt-error ERN or the partial-error ERN in no-go trials. The ERN in no-go trials did not differ between partial errors and overt errors; in addition, ERPs for correct rejections in no-go trials without partial errors were of the same size as in go trials. Therefore, the overt-error ERN and the partial-error ERN may share similar error monitoring processes. Monetary punishment increased Pe amplitudes for overt errors, suggesting enhanced error evaluation processes. For partial errors an early Pe was observed, presumably reflecting inhibition processes. Interestingly, even partial errors elicited the Pe, suggesting that covert erroneous activity can be detected in Go/No-go tasks. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  20. Contingent negative variation (CNV) associated with sensorimotor timing error correction.

    PubMed

    Jang, Joonyong; Jones, Myles; Milne, Elizabeth; Wilson, Daniel; Lee, Kwang-Hyuk

    2016-02-15

    Detection and subsequent correction of sensorimotor timing errors are fundamental to adaptive behavior. Using scalp-recorded event-related potentials (ERPs), we sought ERP components that are predictive of error correction performance during rhythmic movements. Healthy right-handed participants were asked to synchronize their finger taps to a regular tone sequence (one tone every 600 ms) while EEG data were continuously recorded. Data from 15 participants were analyzed. Occasional irregularities were built into the stimulus presentation timing: 90 ms before (advances: negative shift) or after (delays: positive shift) the expected time point. A tapping condition alternated with a listening condition in which an identical stimulus sequence was presented but participants did not tap. Behavioral error correction was observed immediately following a shift, with a degree of over-correction for positive shifts. Our stimulus-locked ERP analysis revealed (1) increased auditory N1 amplitude in the positive shift condition and decreased auditory N1 modulation in the negative shift condition, and (2) a second enhanced negativity (N2) in the tapping positive condition compared with the tapping negative condition. In response-locked epochs, we observed a CNV (contingent negative variation)-like negativity with earlier latency in the tapping negative condition than in the tapping positive condition. This CNV-like negativity peaked around the onset of the subsequent tap; the earlier the peak, the better the error correction for negative shifts, and the later the peak, the better the error correction for positive shifts. This study showed that the CNV-like negativity was associated with error correction performance during sensorimotor synchronization. Auditory N1 and N2 were differentially involved in negative versus positive shifts, but we did not find evidence for their involvement in behavioral error correction. Overall, our study provides a basis from which further research on the role of the CNV in perceptual and motor timing can be developed. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Sensitivity and spin-up times of cohesive sediment transport models used to simulate bathymetric change: Chapter 31

    USGS Publications Warehouse

    Schoellhamer, D.H.; Ganju, N.K.; Mineart, P.R.; Lionberger, M.A.; Kusuda, T.; Yamanishi, H.; Spearman, J.; Gailani, J. Z.

    2008-01-01

    Bathymetric change in tidal environments is modulated by watershed sediment yield, hydrodynamic processes, benthic composition, and anthropogenic activities. These multiple forcings combine to complicate simple prediction of bathymetric change; therefore, numerical models are necessary to simulate sediment transport. Errors arise in these simulations due to inaccurate initial conditions and model parameters. We investigated the response of bathymetric change to initial conditions and model parameters with a simplified zero-dimensional cohesive sediment transport model, a two-dimensional hydrodynamic/sediment transport model, and a tidally averaged box model. The zero-dimensional model consists of a well-mixed control volume subjected to a semidiurnal tide, with a cohesive sediment bed. Typical cohesive sediment parameters were utilized for both the bed and suspended sediment. The model was run until equilibrium in terms of bathymetric change was reached, where equilibrium is defined as a rate of change less than the rate of sea level rise in San Francisco Bay (2.17 mm/year). Using this state as the initial condition, model parameters were perturbed 10% to favor deposition, and the model was resumed. Perturbed parameters included, but were not limited to, maximum tidal current, erosion rate constant, and critical shear stress for erosion. Bathymetric change was most sensitive to maximum tidal current, with a 10% perturbation resulting in an additional 1.4 m of deposition over 10 years. Re-establishing equilibrium in this model required 14 years. The next most sensitive parameter was the critical shear stress for erosion; when it was increased 10%, an additional 0.56 m of sediment was deposited and 13 years were required to re-establish equilibrium. The two-dimensional hydrodynamic/sediment transport model was calibrated to suspended-sediment concentration, and despite a robust solution of the hydrodynamic conditions it was unable to accurately hindcast bathymetric change. The tidally averaged box model was calibrated to bathymetric change data and shows rapidly evolving bathymetry in the first 10-20 years, even though sediment supply and hydrodynamic forcing did not vary greatly. This initial burst of bathymetric change is believed to be model adjustment to initial conditions, and suggests a spin-up time of greater than 10 years. These three diverse modeling approaches reinforce the sensitivity of cohesive sediment transport models to initial conditions and model parameters, and highlight the importance of appropriate calibration data. Adequate spin-up time, of the order of years, is required to initialize models; otherwise the solution will contain bathymetric change that is due not to environmental forcings but to improper specification of initial conditions and model parameters. Temporally intensive bathymetric change data can assist in determining initial conditions and parameters, provided they are available. Computational effort may be reduced by selectively updating hydrodynamics and bathymetry, thereby allowing time for spin-up periods.
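
    The zero-dimensional model described above can be sketched in a few lines: a well-mixed control volume with Partheniadis-type erosion and Krone-type deposition driven by a semidiurnal tidal current. All parameter values below are generic illustrative choices, not the chapter's calibrated values.

        import numpy as np

        # Zero-dimensional cohesive sediment control volume under a semidiurnal
        # tide: erosion E = M (tau/tau_ce - 1) when tau > tau_ce (Partheniadis),
        # deposition D = w_s C (1 - tau/tau_cd) when tau < tau_cd (Krone).
        rho_b = 400.0               # dry bulk density of the bed, kg/m^3
        h = 5.0                     # water depth, m
        M = 1e-4                    # erosion rate constant, kg/m^2/s
        tau_ce, tau_cd = 0.2, 0.1   # critical stresses for erosion/deposition, Pa
        w_s = 5e-4                  # settling velocity, m/s
        u_max = 0.6                 # maximum tidal current, m/s
        cd, rho_w = 0.0025, 1000.0  # drag coefficient, water density

        dt = 60.0
        C, bed = 0.05, 0.0          # suspended concentration kg/m^3, bed change m
        for t in np.arange(0.0, 30 * 86400.0, dt):            # 30 days
            u = u_max * np.sin(2 * np.pi * t / 44712.0)       # semidiurnal (12.42 h)
            tau = rho_w * cd * u * u                          # bed shear stress, Pa
            E = M * max(tau / tau_ce - 1.0, 0.0)              # erosion flux
            D = w_s * C * max(1.0 - tau / tau_cd, 0.0)        # deposition flux
            C += dt * (E - D) / h
            bed += dt * (D - E) / rho_b                       # bathymetric change, m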

  2. Conditional Standard Errors, Reliability and Decision Consistency of Performance Levels Using Polytomous IRT.

    ERIC Educational Resources Information Center

    Wang, Tianyou; And Others

    M. J. Kolen, B. A. Hanson, and R. L. Brennan (1992) presented a procedure for assessing the conditional standard error of measurement (CSEM) of scale scores using a strong true-score model. They also investigated the ways of using nonlinear transformation from number-correct raw score to scale score to equalize the conditional standard error along…

  3. A model of amygdala-hippocampal-prefrontal interaction in fear conditioning and extinction in animals

    PubMed Central

    Moustafa, Ahmed A.; Gilbertson, Mark W.; Orr, Scott P.; Herzallah, Mohammad M.; Servatius, Richard J.; Myers, Catherine E.

    2012-01-01

    Empirical research has shown that the amygdala, hippocampus, and ventromedial prefrontal cortex (vmPFC) are involved in fear conditioning. However, the functional contribution of each brain area and the nature of their interactions are not clearly understood. Here, we extend existing neural network models of the functional roles of the hippocampus in classical conditioning to include interactions with the amygdala and prefrontal cortex. We apply the model to fear conditioning, in which animals learn physiological (e.g. heart rate) and behavioral (e.g. freezing) responses to stimuli that have been paired with a highly aversive event (e.g. electrical shock). The key feature of our model is that learning of these conditioned responses in the central nucleus of the amygdala is modulated by two separate processes, one from basolateral amygdala and signaling a positive prediction error, and one from the vmPFC, via the intercalated cells of the amygdala, and signaling a negative prediction error. In addition, we propose that hippocampal input to both vmPFC and basolateral amygdala is essential for contextual modulation of fear acquisition and extinction. The model is sufficient to account for a body of data from various animal fear conditioning paradigms, including acquisition, extinction, reacquisition, and context specificity effects. Consistent with studies on lesioned animals, our model shows that damage to the vmPFC impairs extinction, while damage to the hippocampus impairs extinction in a different context (e.g., a different conditioning chamber from that used in initial training in animal experiments). We also discuss model limitations and predictions, including the effects of number of training trials on fear conditioning. PMID:23164732

  4. Disclosure of Medical Errors: What Factors Influence How Patients Respond?

    PubMed Central

    Mazor, Kathleen M; Reed, George W; Yood, Robert A; Fischer, Melissa A; Baril, Joann; Gurwitz, Jerry H

    2006-01-01

    BACKGROUND Disclosure of medical errors is encouraged, but research on how patients respond to specific practices is limited. OBJECTIVE This study sought to determine whether full disclosure, an existing positive physician-patient relationship, an offer to waive associated costs, and the severity of the clinical outcome influenced patients' responses to medical errors. PARTICIPANTS Four hundred and seven health plan members participated in a randomized experiment in which they viewed video depictions of medical error and disclosure. DESIGN Subjects were randomly assigned to experimental condition. Conditions varied in type of medication error, level of disclosure, reference to a prior positive physician-patient relationship, an offer to waive costs, and clinical outcome. MEASURES Self-reported likelihood of changing physicians and of seeking legal advice; satisfaction, trust, and emotional response. RESULTS Nondisclosure increased the likelihood of changing physicians, and reduced satisfaction and trust in both error conditions. Nondisclosure increased the likelihood of seeking legal advice and was associated with a more negative emotional response in the missed allergy error condition, but did not have a statistically significant impact on seeking legal advice or emotional response in the monitoring error condition. Neither the existence of a positive relationship nor an offer to waive costs had a statistically significant impact. CONCLUSIONS This study provides evidence that full disclosure is likely to have a positive effect or no effect on how patients respond to medical errors. The clinical outcome also influences patients' responses. The impact of an existing positive physician-patient relationship, or of waiving costs associated with the error remains uncertain. PMID:16808770

  5. Visuomotor adaptation needs a validation of prediction error by feedback error

    PubMed Central

    Gaveau, Valérie; Prablanc, Claude; Laurent, Damien; Rossetti, Yves; Priot, Anne-Emmanuelle

    2014-01-01

    The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In the “terminal feedback error” condition, subjects were allowed to view their hand only at movement end, simultaneously with viewing of the target. In the “movement prediction error” condition, viewing of the hand was limited to the movement duration, in the absence of any visual target, so that error signals arose solely from comparisons between predicted and actual reafferences of the hand. In order to prevent intentional corrections of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the “terminal feedback error” condition only. As long as subjects remained unaware of the optical deviation and attributed pointing errors to themselves, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects. They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are eliminated or strongly attenuated. PMID:25408644

  6. Predictability of windstorm Klaus; sensitivity to PV perturbations

    NASA Astrophysics Data System (ADS)

    Arbogast, P.; Maynard, K.

    2010-09-01

    It appears that some short-range weather forecast failures may be attributed to initial condition errors. In some cases it is possible to anticipate the behavior of the model by comparing observations with model analyses. In the case of extratropical cyclone development, one may assess the representation of the upper-level precursors, described in terms of PV in the initial conditions, by comparison with either satellite ozone or water-vapor imagery. A step forward has been made in developing a tool based upon manual modification of the dynamical tropopause (i.e., the height of the 1.5-PVU surface) and PV inversion. After five years of experimentation, it turns out that forecasters can eventually succeed in improving the forecast of some strong cyclone developments. However, the present approach is subjective per se. To measure the subjectivity of the procedure, a set of 15 experiments was performed by 7 different people (senior forecasters and scientists involved in dynamical meteorology), each attempting to improve an initial state of the global model ARPEGE that led to a poor forecast of the windstorm Klaus (24 January 2009). This experiment reveals that the manually defined corrections present common features but also a large spread.

  7. An AI-based approach to structural damage identification by modal analysis

    NASA Technical Reports Server (NTRS)

    Glass, B. J.; Hanagud, S.

    1990-01-01

    Flexible-structure damage is presently addressed by a combined model- and parameter-identification approach which employs the AI methodologies of classification, heuristic search, and object-oriented model knowledge representation. The conditions for model-space search convergence to the best model are discussed in terms of search-tree organization and initial model parameter error. In the illustrative example of a truss structure presented, the use of both model and parameter identification is shown to lead to smaller parameter corrections than would be required by parameter identification alone.

  8. Multisynchronization of chaotic oscillators via nonlinear observer approach.

    PubMed

    Aguilar-López, Ricardo; Martínez-Guerra, Rafael; Mata-Machuca, Juan L

    2014-01-01

    The goal of this work is to synchronize a class of chaotic oscillators in a master-slave scheme, under different initial conditions, considering several slaves systems. The Chen oscillator is employed as a benchmark model and a nonlinear observer is proposed to reach synchronicity between the master and the slaves' oscillators. The proposed observer contains a proportional and integral form of a bounded function of the synchronization error in order to provide asymptotic synchronization with a satisfactory performance. Numerical experiments were carried out to show the operation of the considered methodology.

  9. Multisynchronization of Chaotic Oscillators via Nonlinear Observer Approach

    PubMed Central

    Aguilar-López, Ricardo; Martínez-Guerra, Rafael; Mata-Machuca, Juan L.

    2014-01-01

    The goal of this work is to synchronize a class of chaotic oscillators in a master-slave scheme, under different initial conditions, considering several slaves systems. The Chen oscillator is employed as a benchmark model and a nonlinear observer is proposed to reach synchronicity between the master and the slaves' oscillators. The proposed observer contains a proportional and integral form of a bounded function of the synchronization error in order to provide asymptotic synchronization with a satisfactory performance. Numerical experiments were carried out to show the operation of the considered methodology. PMID:24578671
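
    The master-slave scheme can be illustrated with a minimal sketch: a Chen oscillator as master, and a slave driven by a proportional plus integral correction built on a bounded function (tanh) of the synchronization error, in the spirit of the proposed observer. The gains and step size are assumed, and no stability guarantee is implied by this toy Euler integration.

        import numpy as np

        a, b, c = 35.0, 3.0, 28.0            # standard Chen-oscillator parameters

        def chen(s):
            x, y, z = s
            return np.array([a * (y - x),
                             (c - a) * x - x * z + c * y,
                             x * y - b * z])

        kp, ki = 20.0, 5.0                   # observer gains (assumed)
        dt, n_steps = 1e-4, 200000
        master = np.array([1.0, 1.0, 1.0])
        slave = np.array([-3.0, 2.0, 5.0])   # different initial condition
        integ = np.zeros(3)

        for _ in range(n_steps):
            e = master - slave               # synchronization error
            g = np.tanh(e)                   # bounded function of the error
            integ += g * dt                  # integral correction term
            slave += dt * (chen(slave) + kp * g + ki * integ)
            master += dt * chen(master)
        # np.linalg.norm(master - slave) decays toward zero as the slave locks on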

  10. Earth to Moon Transfers - Direct vs Via Libration Points (L1, L2)

    NASA Technical Reports Server (NTRS)

    Condon, Gerald L.; Wilson, Sam

    2002-01-01

    Recommend Direct Remote Ocean Area impact disposal for cases without hazardous (e.g., radioactive) material on the LTV kick stage: controlled Earth contact; relatively small disposal ΔV; avoids a close encounter with the Moon. Trajectories can be very sensitive to initial conditions (at the disposal maneuver), but the ΔV needed to correct for errors is small. Recommend heliocentric orbit disposal for cases with hazardous material on the LTV kick stage: no Earth or lunar disposal issues (e.g., impact location, debris footprint, litter); relatively low disposal ΔV cost. Further study is required to determine the possibility of re-contact with Earth.

  11. Leader–follower fixed-time consensus of multi-agent systems with high-order integrator dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Bailing; Zuo, Zongyu; Wang, Hong

    The leader-follower fixed-time consensus of high-order multi-agent systems with external disturbances is investigated in this paper. A novel sliding manifold is designed to ensure that the tracking errors converge to zero in a fixed time during the sliding motion. Then, a distributed control law is designed based on the Lyapunov technique to drive the system states to the sliding manifold in a finite time, independently of initial conditions. Finally, the efficiency of the proposed method is illustrated by numerical simulations.

  12. Comment on 'Shang S. 2012. Calculating actual crop evapotranspiration under soil water stress conditions with appropriate numerical methods and time step. Hydrological Processes 26: 3338-3343. DOI: 10.1002/hyp.8405'

    NASA Technical Reports Server (NTRS)

    Yatheendradas, Soni; Narapusetty, Balachandrudu; Peters-Lidard, Christa; Funk, Christopher; Verdin, James

    2014-01-01

    A previous study analyzed errors in the numerical calculation of actual crop evapotranspiration (ETa) under soil water stress. Assuming no irrigation or precipitation, it constructed equations for ETa over limited soil-water ranges in a root zone drying out due to evapotranspiration. It then used a single crop-soil composite to provide recommendations about the appropriate usage of numerical methods under different values of the time step and the maximum crop evapotranspiration (ETc). This comment reformulates those ETa equations for applicability over the full range of soil water values, revealing a dependence of the relative error in numerical ETa on the initial soil water that was not seen in the previous study. It is shown that the recommendations based on a single crop-soil composite can be invalid for other crop-soil composites. Finally, a consideration of the numerical error in the time-cumulative value of ETa is discussed, besides the existing consideration of that error over individual time steps as done in the previous study. This cumulative ETa is more relevant to the final crop yield.

  13. A fatal outcome after unintentional overdosing of rivastigmine patches.

    PubMed

    Lövborg, Henrik; Jönsson, Anna K; Hägg, Staffan

    2012-02-01

    Rivastigmine is an acetylcholine esterase inhibitor used in the treatment of dementia. Patches with rivastigmine for transdermal delivery have been used to increase compliance and to reduce side effects. We describe an 87-year-old male with dementia treated with multiple rivastigmine patches (Exelon 9.5 mg/24 h) who developed nausea, vomiting, and renal failure with disturbed electrolytes, resulting in death. The symptoms occurred after six rivastigmine patches had been erroneously applied concomitantly by health care personnel on two consecutive days. The terminal cause of death was considered to be uremia from an acute tubular necrosis, assessed as a result of dehydration through vomiting. The rivastigmine intoxication was assessed as having caused or contributed to the dehydrated condition. The medication error occurred at least partly due to ambiguous labeling. The clinical signs were not initially recognized as adverse effects of rivastigmine. The presented case is a description of a rivastigmine overdose due to a medication error involving patches. This case indicates the importance of clear and unambiguous instructions to avoid administration errors with patches, and of vigilance toward adverse drug reactions for early detection and correction of drug administration errors. In particular, instructions clearly indicating that only one patch should be applied at a time are important.

  14. Post-processing through linear regression

    NASA Astrophysics Data System (ADS)

    van Schaeybroeck, B.; Vannitsem, S.

    2011-03-01

    Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast, and multicollinearity. The regression schemes under consideration include the ordinary least-squares (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-squares method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors play an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best-member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread should be preferred.
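
    The simplest of these schemes, OLS post-processing, trains a linear map from forecasts to observations over a hindcast period and applies it to new forecasts. The sketch below uses synthetic data; the other schemes (TDTR, GM, EVMOS) change the fitting criterion but share this train-then-correct structure.

        import numpy as np

        rng = np.random.default_rng(2)
        truth = rng.normal(15.0, 3.0, 300)                       # training observations
        forecast = 0.8 * truth + 2.0 + rng.normal(0, 1.0, 300)   # biased raw forecast

        # Ordinary least squares: obs ~ alpha + beta * forecast
        X = np.column_stack([np.ones_like(forecast), forecast])
        alpha, beta = np.linalg.lstsq(X, truth, rcond=None)[0]

        new_forecast = 18.0
        corrected = alpha + beta * new_forecast                  # post-processed forecast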

  15. Dwell time method based on Richardson-Lucy algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Ma, Zhen

    2017-10-01

    When the noise in the surface error data given by the interferometer has no effect on the iterative convergence of the RL algorithm, the RL algorithm for deconvolution in image restoration can be applied to the CCOS (computer-controlled optical surfacing) model to solve for the dwell time. Extending the initial error function at the edges and denoising the interferometer surface error data make the result more reliable. The simulation results show a final residual error of 10.7912 nm PV and 0.4305 nm RMS for an initial surface error of 107.2414 nm PV and 15.1331 nm RMS. The convergence rates of the PV and RMS values reach 89.9% and 96.0%, respectively. The algorithm satisfies the requirements of fabrication very well.
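
    The Richardson-Lucy iteration itself is compact: each step multiplies the current dwell-time estimate by the back-projected ratio of the measured error map to the predicted one. A minimal 1-D sketch follows, assuming the surface error e is non-negative (offset it first if not) and the removal function b is known and normalized.

        import numpy as np

        def richardson_lucy(e, b, iters=200):
            """1-D Richardson-Lucy deconvolution: solve e = t * b (convolution)
            for dwell time t, keeping t non-negative throughout."""
            b_flip = b[::-1]                                # flipped kernel for correlation
            t = np.full_like(e, e.mean() / b.sum())         # positive initial guess
            for _ in range(iters):
                pred = np.convolve(t, b, mode="same")       # predicted error map
                ratio = e / np.maximum(pred, 1e-12)
                t *= np.convolve(ratio, b_flip, mode="same")  # multiplicative update
            return t

        # Synthetic test: Gaussian removal function, known dwell time
        x = np.linspace(-1, 1, 201)
        b = np.exp(-x**2 / 0.005); b /= b.sum()
        t_true = 1.0 + 0.5 * np.cos(np.pi * x)
        e = np.convolve(t_true, b, mode="same")
        t_est = richardson_lucy(e, b)                       # recovers t_true closely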

  16. Space charge enhanced plasma gradient effects on satellite electric field measurements

    NASA Technical Reports Server (NTRS)

    Diebold, Dan; Hershkowitz, Noah; Dekock, J.; Intrator, T.; Hsieh, M-K.

    1991-01-01

    It has been recognized that plasma gradients can cause error in magnetospheric electric field measurements made by double probes. We discuss space charge enhanced plasma gradient induced error (PGIE) in general terms, present the results of a laboratory experiment designed to demonstrate this error, and derive a simple expression that quantifies it. Experimental conditions were not identical to magnetospheric conditions, although efforts were made to ensure that the relevant physics applied to both cases. The experimental data demonstrate some of the possible errors in electric field measurements made by strongly emitting probes due to space charge effects in the presence of plasma gradients. Probe errors under space and laboratory conditions are discussed, as well as experimental error. In the final section, theoretical aspects are examined and an expression is derived for the maximum steady-state space charge enhanced PGIE incurred by two identical current-biased probes.

  17. Overview of Initial NSTX-U Experimental Operations

    NASA Astrophysics Data System (ADS)

    Battaglia, Devon; the NSTX-U Team

    2016-10-01

    Initial operation of the National Spherical Torus Experiment Upgrade (NSTX-U) has satisfied a number of commissioning milestones, including demonstration of discharges that exceed the field and pulse length of NSTX. ELMy H-mode operation at the no-wall βN limit is obtained with boronized wall conditioning. Peak H-mode parameters include: Ip = 1 MA, BT0 = 0.63 T, WMHD = 330 kJ, βN = 4, βN/li = 6, κ = 2.3, τE,tot > 50 ms. Access to high-performance H-mode scenarios with long MHD-quiescent periods is enabled by the resilient timing of the L-H transition via feedback control of the diverting time and shape, and by correction of the dominant n = 1 error fields during the Ip ramp. Stationary L-mode discharges have been realized up to 1 MA, with 2 s discharges achieved at Ip = 650 kA. The long-pulse L-mode discharges enabled by the new central solenoid supported initial experiments on error field measurement and correction, plasma shape control, controlled discharge ramp-down, L-mode transport, and fast-ion physics. Increased off-axis current drive and reduced fast-ion instabilities have been observed with the new, more tangential neutral beamline. The initial results support that access to increased field, current, and heating at low aspect ratio expands the regimes available to develop scenarios, diagnostics, and predictive models that inform the design and optimization of future burning-plasma tokamak devices, including ITER. Work supported by U.S. DOE Contract No. DE-AC02-09CH11466.

  18. The Relationship Between Eyewitness Confidence and Identification Accuracy: A New Synthesis.

    PubMed

    Wixted, John T; Wells, Gary L

    2017-05-01

    The U.S. legal system increasingly accepts the idea that the confidence expressed by an eyewitness who identified a suspect from a lineup provides little information as to the accuracy of that identification. There was a time when this pessimistic assessment was entirely reasonable because of the questionable eyewitness-identification procedures that police commonly employed. However, after more than 30 years of eyewitness-identification research, our understanding of how to properly conduct a lineup has evolved considerably, and the time seems ripe to ask how eyewitness confidence informs accuracy under more pristine testing conditions (e.g., initial, uncontaminated memory tests using fair lineups, with no lineup administrator influence, and with an immediate confidence statement). Under those conditions, mock-crime studies and police department field studies have consistently shown that, for adults, (a) confidence and accuracy are strongly related and (b) high-confidence suspect identifications are remarkably accurate. However, when certain non-pristine testing conditions prevail (e.g., when unfair lineups are used), the accuracy of even a high-confidence suspect ID is seriously compromised. Unfortunately, some jurisdictions have not yet made reforms that would create pristine testing conditions and, hence, our conclusions about the reliability of high-confidence identifications cannot yet be applied to those jurisdictions. However, understanding the information value of eyewitness confidence under pristine testing conditions can help the criminal justice system to simultaneously achieve both of its main objectives: to exonerate the innocent (by better appreciating that initial, low-confidence suspect identifications are error prone) and to convict the guilty (by better appreciating that initial, high-confidence suspect identifications are surprisingly accurate under proper testing conditions).

  19. A new method for determining the optimal lagged ensemble

    PubMed Central

    DelSole, T.; Tippett, M. K.; Pegion, K.

    2017-01-01

    We propose a general methodology for determining the lagged ensemble that minimizes the mean square forecast error (MSE). The MSE of a lagged ensemble is shown to depend only on a quantity called the cross-lead error covariance matrix, which can be estimated from a short hindcast data set and parameterized in terms of analytic functions of time. The resulting parameterization allows the skill of forecasts to be evaluated for an arbitrary ensemble size and initialization frequency. Remarkably, the parameterization also can estimate the MSE of a burst ensemble simply by taking the limit of an infinitely small interval between initialization times. This methodology is applied to forecasts of the Madden-Julian Oscillation (MJO) from version 2 of the Climate Forecast System (CFSv2). For leads greater than a week, little improvement is found in the MJO forecast skill when lagged ensembles span more than 5 days or are initialized more than 4 times per day. We find that if initialization is too infrequent, important structures of the lagged error covariance matrix are lost. Lastly, we demonstrate that the forecast error at leads ≥10 days can be reduced by optimally weighting the lagged ensemble members. The weights are shown to depend only on the cross-lead error covariance matrix. While the methodology developed here is applied to CFSv2, the technique can be easily adapted to other forecast systems. PMID:28580050
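
    If the weights are constrained to sum to one, MSE-minimizing weights for the lagged members follow directly from the cross-lead error covariance matrix C as w = C^{-1} 1 / (1^T C^{-1} 1). The sketch below assumes C has already been estimated from hindcasts; the covariance used here is an illustrative stand-in in which error variance grows with lag and nearby initializations are strongly correlated.

        import numpy as np

        def optimal_lag_weights(C):
            """Weights for a lagged ensemble that minimize the mean square error,
            constrained to sum to one, given the cross-lead error covariance C."""
            ones = np.ones(C.shape[0])
            w = np.linalg.solve(C, ones)
            return w / (ones @ w)

        # Illustrative cross-lead error covariance: error variance grows with lag
        # and errors of nearby initializations are strongly correlated.
        lags = np.arange(5)
        var = 1.0 + 0.2 * lags
        corr = 0.9 ** np.abs(lags[:, None] - lags[None, :])
        C = corr * np.sqrt(var[:, None] * var[None, :])
        w = optimal_lag_weights(C)   # recent, lower-variance members get larger weight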

  20. Retrieval Failure Contributes to Gist-Based False Recognition

    PubMed Central

    Guerin, Scott A.; Robbins, Clifford A.; Gilmore, Adrian W.; Schacter, Daniel L.

    2011-01-01

    People often falsely recognize items that are similar to previously encountered items. This robust memory error is referred to as gist-based false recognition. A widely held view is that this error occurs because the details fade rapidly from our memory. Contrary to this view, an initial experiment revealed that, following the same encoding conditions that produce high rates of gist-based false recognition, participants overwhelmingly chose the correct target rather than its related foil when given the option to do so. A second experiment showed that this result is due to increased access to stored details provided by reinstatement of the originally encoded photograph, rather than to increased attention to the details. Collectively, these results suggest that details needed for accurate recognition are, to a large extent, still stored in memory and that a critical factor determining whether false recognition will occur is whether these details can be accessed during retrieval. PMID:22125357

  1. A Lyapunov and Sacker–Sell spectral stability theory for one-step methods

    DOE PAGES

    Steyer, Andrew J.; Van Vleck, Erik S.

    2018-04-13

    Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.

  2. A Computational Intelligence (CI) Approach to the Precision Mars Lander Problem

    NASA Technical Reports Server (NTRS)

    Birge, Brian; Walberg, Gerald

    2002-01-01

    A Mars precision landing requires a landed footprint of no more than 100 meters. Obstacles to reducing the landed footprint include trajectory dispersions due to initial atmospheric entry conditions such as entry angle, parachute deployment height, environmental parameters such as wind and atmospheric density, parachute deployment dynamics, and unavoidable injection error or propagated error from launch. Computational Intelligence (CI) techniques such as Artificial Neural Nets and Particle Swarm Optimization have been shown to have great success with other control problems. The research period extended previous work investigating the applicability of computational intelligence approaches. The focus of this investigation was on Particle Swarm Optimization and basic Neural Net architectures. The research investigating these issues was performed for the grant cycle from 5/15/01 to 5/15/02. Matlab 5.1 and 6.0, along with NASA's POST, were the primary computational tools.
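
    A minimal particle swarm optimization loop, of the kind such a study might apply to guidance-parameter tuning, is sketched below. The inertia and acceleration coefficients are common textbook defaults, and the quadratic objective is a stand-in for a landing-footprint cost.

        import numpy as np

        def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
            """Minimal particle swarm optimization of f over [-bound, bound]^dim."""
            rng = np.random.default_rng(3)
            x = rng.uniform(-bound, bound, (n, dim))       # particle positions
            v = np.zeros((n, dim))                         # particle velocities
            pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
            g = pbest[pbest_val.argmin()].copy()           # global best position
            for _ in range(iters):
                r1, r2 = rng.random((n, dim)), rng.random((n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, -bound, bound)
                vals = np.apply_along_axis(f, 1, x)
                improved = vals < pbest_val                # update personal bests
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                g = pbest[pbest_val.argmin()].copy()       # update global best
            return g, pbest_val.min()

        # e.g., tune two guidance gains against a surrogate footprint cost
        best, val = pso(lambda p: (p[0] - 1.0)**2 + (p[1] + 0.5)**2, dim=2)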

  3. A Lyapunov and Sacker–Sell spectral stability theory for one-step methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steyer, Andrew J.; Van Vleck, Erik S.

    Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.

  4. Compensation in the presence of deep turbulence using tiled-aperture architectures

    NASA Astrophysics Data System (ADS)

    Spencer, Mark F.; Brennan, Terry J.

    2017-05-01

    The presence of distributed-volume atmospheric aberrations or "deep turbulence" presents unique challenges for beam-control applications which look to sense and correct for disturbances found along the laser-propagation path. This paper explores the potential for branch-point-tolerant reconstruction algorithms and tiled-aperture architectures to correct for the branch cuts contained in the phase function due to deep-turbulence conditions. Using wave-optics simulations, the analysis aims to parameterize the fitting-error performance of tiled-aperture architectures operating in a null-seeking control loop with piston, tip, and tilt compensation of the individual optical beamlet trains. To evaluate fitting-error performance, the analysis plots normalized power in the bucket as a function of the Fried coherence diameter, the log-amplitude variance, and the number of subapertures for comparison purposes. Initial results show that tiled-aperture architectures with a large number of subapertures outperform filled-aperture architectures with continuous-face-sheet deformable mirrors.

  5. Fuzzy Adaptive Output Feedback Control of Uncertain Nonlinear Systems With Prescribed Performance.

    PubMed

    Zhang, Jin-Xi; Yang, Guang-Hong

    2018-05-01

    This paper investigates the tracking control problem for a family of strict-feedback systems in the presence of unknown nonlinearities and immeasurable system states. A low-complexity adaptive fuzzy output feedback control scheme is proposed, based on a backstepping method. In the control design, a fuzzy adaptive state observer is first employed to estimate the unmeasured states. Then, a novel error transformation approach together with a new modification mechanism is introduced to guarantee the finite-time convergence of the output error to a predefined region and ensure the closed-loop stability. Compared with the existing methods, the main advantages of our approach are that: 1) without using extra command filters or auxiliary dynamic surface control techniques, the problem of explosion of complexity can still be addressed and 2) the design procedures are independent of the initial conditions. Finally, two practical examples are performed to further illustrate the above theoretic findings.

  6. Interactions between moist heating and dynamics in atmospheric predictability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straus, D.M.; Huntley, M.A.

    1994-02-01

    The predictability properties of a fixed heating version of a GCM in which the moist heating is specified beforehand are studied in a series of identical twin experiments. Comparison is made to an identical set of experiments using the control GCM, a five-level R30 version of the COLA GCM. The experiments each contain six ensembles, with a single ensemble consisting of six 30-day integrations starting from slightly perturbed Northern Hemisphere wintertime initial conditions. The moist heating from each integration within a single control ensemble was averaged over the ensemble. This averaged heating (a function of three spatial dimensions and time) was used as the prespecified heating in each member of the corresponding fixed heating ensemble. The errors grow less rapidly in the fixed heating case. The most rapidly growing scales at small times (global wavenumber 6) have doubling times of 3.2 days compared to 2.4 days for the control experiments. The predictability times for the most energetic scales (global wavenumbers 9-12) are about two weeks for the fixed heating experiments, compared to 9 days for the control. The ratio of error energy in the fixed heating to the control case falls below 0.5 by day 8, and then gradually increases as the error growth slows in the control case. The growth of errors is described in terms of budgets of error kinetic energy (EKE) and error available potential energy (EAPE) developed in terms of global wavenumber n. The diabatic generation of EAPE (G_APE) is positive in the control case and is dominated by midlatitude heating errors after day 2. The fixed heating G_APE is negative at all times due to longwave radiative cooling. 36 refs., 9 figs., 1 tab.

  7. Radiative flux and forcing parameterization error in aerosol-free clear skies

    DOE PAGES

    Pincus, Robert; Mlawer, Eli J.; Oreopoulos, Lazaros; ...

    2015-07-03

    This article reports on the accuracy in aerosol- and cloud-free conditions of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W/m², while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelated to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. A dependence on atmospheric conditions, including integrated water vapor, means that global estimates of parameterization error relevant for the radiative forcing of climate change will require much more ambitious calculations.

  8. Complex contexts and relationships affect clinical decisions in group therapy.

    PubMed

    Tasca, Giorgio A; Mcquaid, Nancy; Balfour, Louise

    2016-09-01

    Clinical errors tend to be underreported even though examining them can provide important training and professional development opportunities. The group therapy context may be prone to clinician errors because of the added complexity within which therapists work and patients receive treatment. We discuss clinical errors that occurred within a group therapy in which a patient for whom group was not appropriate was admitted to the treatment and then was not removed by the clinicians. This was countertherapeutic for both patient and group. Two clinicians were involved: a clinical supervisor who initially assessed and admitted the patient to the group, and a group therapist. To complicate matters, the group therapy occurred within the context of a clinical research trial. The errors, possible solutions, and recommendations are discussed within Reason's Organizational Accident Model (Reason, 2000). In particular, we discuss clinician errors in the context of countertransference and clinician heuristics, group therapy as a local work condition that complicates clinical decision-making, and the impact of the research context as a latent organizational factor. We also present clinical vignettes from the pregroup preparation, group therapy, and supervision. Group therapists are more likely to avoid errors in clinical decisions if they engage in reflective practice about their internal experiences and about the impact of the context in which they work. Therapists must keep in mind the various levels of group functioning, especially related to the group-as-a-whole (i.e., group composition, cohesion, group climate, and safety) when making complex clinical decisions in order to optimize patient outcomes.

  9. A hybrid method for synthetic aperture ladar phase-error compensation

    NASA Astrophysics Data System (ADS)

    Hua, Zhili; Li, Hongping; Gu, Yongjian

    2009-07-01

    As a high-resolution imaging sensor, synthetic aperture ladar (SAL) data contain phase errors whose sources include uncompensated platform motion and atmospheric turbulence distortion. Two previously devised methods, the rank-one phase-error estimation (ROPE) algorithm and iterative blind deconvolution (IBD), are reexamined, and from them a hybrid method is built that can recover both the image and the PSF without any a priori information on the PSF, speeding up convergence through the choice of initialization. When integrated into a spotlight-mode SAL imaging model, all three methods effectively reduce the phase-error distortion. For each approach, signal-to-noise ratio, root-mean-square error, and CPU time are computed, showing that the convergence rate of the hybrid method improves because of the more efficient initialization of the blind deconvolution. Moreover, further examination of the hybrid method shows that the weight distribution between ROPE and IBD is an important factor affecting the final result of the whole compensation process.
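
    The evaluation metrics quoted for each approach are standard; below is a minimal sketch of the two image-quality measures, where the dB definition of SNR is an assumption since the abstract does not spell one out.

    ```python
    import numpy as np

    def rmse(restored, reference):
        """Root-mean-square error between a restored image and the reference."""
        return np.sqrt(np.mean((restored - reference) ** 2))

    def snr_db(restored, reference):
        """Signal-to-noise ratio in dB; this particular definition
        (reference power over residual power) is an assumption."""
        residual = restored - reference
        return 10 * np.log10(np.sum(reference ** 2) / np.sum(residual ** 2))

    ref = np.random.rand(64, 64)                  # stand-in reference image
    rec = ref + 0.01 * np.random.randn(64, 64)    # stand-in restored image
    print(rmse(rec, ref), snr_db(rec, ref))
    ```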

  10. Self-reported and observed punitive parenting prospectively predicts increased error-related brain activity in six-year-old children

    PubMed Central

    Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J.; Kujawa, Autumn J.; Laptook, Rebecca S.; Torpey, Dana C.; Klein, Daniel N.

    2017-01-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission—although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children’s ERN approximately three years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately three years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. Results suggest that parenting may shape children’s error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to confirm this hypothesis. PMID:25092483

  11. [Modeling evapotranspiration of greenhouse tomato under different water conditions based on the dual crop coefficient method].

    PubMed

    Gong, Xue Wen; Liu, Hao; Sun, Jing Sheng; Ma, Xiao Jian; Wang, Wan Ning; Cui, Yong Sheng

    2017-04-18

    An experiment was conducted to investigate soil evaporation (E), crop transpiration (T), evapotranspiration (ET) and the ratio of evaporation to evapotranspiration (E/ET) of drip-irrigated tomato, which was planted in a typical solar greenhouse in the North China, under different water conditions [irrigation amount was determined based on accumulated pan evaporation (Ep) of a 20 cm evaporation pan, with two treatments: full irrigation (0.9Ep) and deficit irrigation (0.5Ep)] at different growth stages in 2015 and 2016 at Xinxiang Comprehensive Experimental Station, Chinese Academy of Agricultural Sciences. Effects of deficit irrigation on the crop coefficient (Kc) and the variation of the water stress coefficient (Ks) throughout the growing season were also discussed. E, T and ET of tomato were calculated with a dual crop coefficient approach, and compared with the measured data. Results indicated that E under full irrigation was 21.5% and 20.4% higher than under deficit irrigation in 2015 and 2016, respectively, accounting for 24.0% and 25.0% of ET over the whole growing season. The maximum E/ET was measured in the initial stage of tomato, while the minimum was obtained in the middle stage. The Kc for full irrigation was 0.45, 0.89, 1.06 and 0.93 in the initial, development, middle, and late stages of tomato, and 0.45, 0.89, 0.87 and 0.41 for deficit irrigation. The Ks for deficit irrigation was 0.98, 0.93, 0.78 and 0.39 in the initial, development, middle, and late stages, respectively. The dual crop coefficient method could accurately estimate ET of greenhouse tomato under different water conditions in the 2015 and 2016 seasons, with a mean absolute error (MAE) of 0.36-0.48 mm·d⁻¹ and a root mean square error (RMSE) of 0.44-0.65 mm·d⁻¹. The method also estimated E and T accurately, with MAE of 0.15-0.19 and 0.26-0.56 mm·d⁻¹, and RMSE of 0.20-0.24 and 0.33-0.72 mm·d⁻¹, respectively.
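
    The dual crop coefficient bookkeeping behind these numbers separates transpiration (a basal coefficient scaled by the water stress coefficient Ks) from soil evaporation (Ke). Below is a minimal sketch of the FAO-56 form of that split; the input values are invented placeholders, not the paper's calibration.

    ```python
    def dual_kc_et(et0, kcb, ke, ks=1.0):
        """FAO-56 dual crop coefficient: ET = (Ks*Kcb + Ke) * ET0, where
        T = Ks*Kcb*ET0 is transpiration and E = Ke*ET0 is soil evaporation."""
        t = ks * kcb * et0
        e = ke * et0
        return t + e, t, e

    # Illustrative numbers only: reference ET0 = 4 mm/d, mid-stage deficit irrigation
    et, t, e = dual_kc_et(et0=4.0, kcb=0.87, ke=0.10, ks=0.78)
    print(f"ET={et:.2f}  T={t:.2f}  E={e:.2f}  (mm/d)")
    ```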

  12. Robust fixed-time synchronization of delayed Cohen-Grossberg neural networks.

    PubMed

    Wan, Ying; Cao, Jinde; Wen, Guanghui; Yu, Wenwu

    2016-01-01

    The fixed-time master-slave synchronization of Cohen-Grossberg neural networks with parameter uncertainties and time-varying delays is investigated. Compared with finite-time synchronization, where the convergence time relies on the initial synchronization errors, the settling time of fixed-time synchronization can be adjusted to desired values regardless of initial conditions. A novel synchronization control strategy for the slave neural network is proposed. By utilizing the Filippov theory of discontinuous systems and Lyapunov stability theory, sufficient schemes are provided for selecting the control parameters to ensure synchronization within the required convergence time and in the presence of parameter uncertainties. Corresponding criteria for tuning control inputs are also derived for finite-time synchronization. Finally, two numerical examples are given to illustrate the validity of the theoretical results.
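
    The defining property of fixed-time synchronization, a settling time bounded independently of the initial error, is already visible in the scalar prototype dx/dt = -a·sign(x)|x|^p - b·sign(x)|x|^q with 0 < p < 1 < q, whose settling time satisfies the classical bound T <= 1/(a(1-p)) + 1/(b(q-1)). The simulation below illustrates only that textbook bound, not the paper's controller.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    a, b, p, q = 1.0, 1.0, 0.5, 2.0
    bound = 1 / (a * (1 - p)) + 1 / (b * (q - 1))    # = 3.0, independent of x0

    def rhs(t, x):
        return -a * np.sign(x) * np.abs(x) ** p - b * np.sign(x) * np.abs(x) ** q

    def settled(t, x):                               # event: |x| drops below 1e-6
        return np.abs(x[0]) - 1e-6
    settled.terminal = True

    for x0 in (0.1, 10.0, 1e6):                      # wildly different initial errors
        sol = solve_ivp(rhs, (0.0, 10.0), [x0], events=settled, rtol=1e-8, atol=1e-12)
        print(f"x0={x0:g}: settled at t = {sol.t_events[0][0]:.3f}  (bound {bound:.1f})")
    ```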

  13. Approximate solution of space and time fractional higher order phase field equation

    NASA Astrophysics Data System (ADS)

    Shamseldeen, S.

    2018-03-01

    This paper is concerned with a class of space and time fractional partial differential equation (STFDE) with Riesz derivative in space and Caputo in time. The proposed STFDE is considered as a generalization of a sixth-order partial phase field equation. We describe the application of the optimal homotopy analysis method (OHAM) to obtain an approximate solution for the suggested fractional initial value problem. An averaged-squared residual error function is defined and used to determine the optimal convergence control parameter. Two numerical examples are studied, considering periodic and non-periodic initial conditions, to justify the efficiency and the accuracy of the adopted iterative approach. The dependence of the solution on the order of the fractional derivative in space and time and model parameters is investigated.
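
    The averaged-squared residual error function in OHAM serves purely to score candidate values of the convergence-control parameter. The toy below applies the same selection principle to a one-parameter trial family for u' + u = 0 with u(0) = 1, where the exact solution corresponds to c0 = -1; it is a loose illustration of the scoring idea, not the homotopy construction itself.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    x = np.linspace(0.0, 1.0, 101)

    def averaged_squared_residual(c0):
        """Trial family u(x; c0) = exp(c0*x), so u(0) = 1 automatically;
        the residual R = u' + u vanishes for the exact choice c0 = -1."""
        u = np.exp(c0 * x)
        residual = c0 * u + u
        return np.mean(residual ** 2)

    res = minimize_scalar(averaged_squared_residual, bounds=(-5.0, 5.0), method="bounded")
    print("optimal convergence-control parameter:", res.x)   # approximately -1
    ```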

  14. Performance consequences of alternating directional control-response compatibility: evidence from a coal mine shuttle car simulator.

    PubMed

    Zupanc, Christine M; Burgess-Limerick, Robin J; Wallis, Guy

    2007-08-01

    To investigate error and reaction time consequences of alternating compatible and incompatible steering arrangements during a simulated obstacle avoidance task. Underground coal mine shuttle cars provide an example of a vehicle in which operators are required to alternate between compatible and incompatible steering configurations. This experiment examines the performance of 48 novice participants in a virtual analogue of an underground coal mine shuttle car. Participants were randomly assigned to a compatible condition, an incompatible condition, an alternating condition in which compatibility alternated within and between hands, or an alternating condition in which compatibility alternated between hands. Participants made fewer steering direction errors and made correct steering responses more quickly in the compatible condition. Error rate decreased over time in the incompatible condition. A compatibility effect for both errors and reaction time was also found when the control-response relationship alternated; however, performance improvements over time were not consistent. Isolating compatibility to a hand resulted in a reduced error rate and faster reaction times than when compatibility alternated within and between hands. The consequences of alternating control-response relationships are higher error rates and slower responses, at least in the early stages of learning. This research highlights the importance of ensuring consistently compatible human-machine directional control-response relationships.

  15. Pilot age and error in air taxi crashes.

    PubMed

    Rebok, George W; Qiang, Yandong; Baker, Susan P; Li, Guohua

    2009-07-01

    The associations of pilot error with the type of flight operations and basic weather conditions are well documented. The correlation between pilot characteristics and error is less clear. This study aims to examine whether pilot age is associated with the prevalence and patterns of pilot error in air taxi crashes. Investigation reports from the National Transportation Safety Board for crashes involving non-scheduled Part 135 operations (i.e., air taxis) in the United States between 1983 and 2002 were reviewed to identify pilot error and other contributing factors. Crash circumstances and the presence and type of pilot error were analyzed in relation to pilot age using Chi-square tests. Of the 1751 air taxi crashes studied, 28% resulted from mechanical failure, 25% from loss of control at landing or takeoff, 7% from visual flight rule conditions into instrument meteorological conditions, 7% from fuel starvation, 5% from taxiing, and 28% from other causes. Crashes among older pilots were more likely to occur during the daytime rather than at night and off airport than on airport. The patterns of pilot error in air taxi crashes were similar across age groups. Of the errors identified, 27% were flawed decisions, 26% were inattentiveness, 23% mishandled aircraft kinetics, 15% mishandled wind and/or runway conditions, and 11% were others. Pilot age is associated with crash circumstances but not with the prevalence and patterns of pilot error in air taxi crashes. Lack of age-related differences in pilot error may be attributable to the "safe worker effect."

  16. How personal standards perfectionism and evaluative concerns perfectionism affect the error positivity and post-error behavior with varying stimulus visibility.

    PubMed

    Drizinsky, Jessica; Zülch, Joachim; Gibbons, Henning; Stahl, Jutta

    2016-10-01

    Error detection is required in order to correct or avoid imperfect behavior. Although error detection is beneficial for some people, for others it might be disturbing. We investigated Gaudreau and Thompson's (Personality and Individual Differences, 48, 532-537, 2010) model, which combines personal standards perfectionism (PSP) and evaluative concerns perfectionism (ECP). In our electrophysiological study, 43 participants performed a combination of a modified Simon task, an error awareness paradigm, and a masking task with a variation of stimulus onset asynchrony (SOA; 33, 67, and 100 ms). Interestingly, relative to low-ECP participants, high-ECP participants showed a better post-error accuracy (despite a worse classification accuracy) in the high-visibility SOA 100 condition than in the two low-visibility conditions (SOA 33 and SOA 67). Regarding the electrophysiological results, first, we found a positive correlation between ECP and the amplitude of the error positivity (Pe) under conditions of low stimulus visibility. Second, under the condition of high stimulus visibility, we observed a higher Pe amplitude for high-ECP-low-PSP participants than for high-ECP-high-PSP participants. These findings are discussed within the framework of the error-processing avoidance hypothesis of perfectionism (Stahl, Acharki, Kresimon, Völler, & Gibbons, International Journal of Psychophysiology, 97, 153-162, 2015).

  17. Do errors matter? Errorless and errorful learning in anomic picture naming.

    PubMed

    McKissock, Stephen; Ward, Jamie

    2007-06-01

    Errorless training methods significantly improve learning in memory-impaired patients relative to errorful training procedures. However, the validity of this technique for acquiring linguistic information in aphasia has rarely been studied. This study contrasts three different treatment conditions over an 8 week period for rehabilitating picture naming in anomia: (1) errorless learning in which pictures are shown and the experimenter provides the name, (2) errorful learning with feedback in which the patient is required to generate a name but the correct name is then supplied by the experimenter, and (3) errorful learning in which no feedback is given. These conditions are compared to an untreated set of matched words. Both errorless and errorful learning with feedback conditions led to significant improvement at a 2-week and 12-14-week retest (errorful without feedback and untreated words were similar). The results suggest that it does not matter whether anomic patients are allowed to make errors in picture naming or not (unlike in memory impaired individuals). What does matter is that a correct response is given as feedback. The results also question the widely held assumption that it is beneficial for a patient to attempt to retrieve a word, given that our errorless condition involved no retrieval effort and had the greatest benefits.

  18. Induced mood and selective attention.

    PubMed

    Brand, N; Verspui, L; Oving, A

    1997-04-01

    Subjects (N = 60) were randomly assigned to an elated, depressed, or neutral mood-induction condition to assess the effect of mood state on cognitive functioning. In the elated condition, film fragments expressing happiness and euphoria were shown. In the depressed condition, frightening and distressing film fragments were presented. The neutral group watched no film. Mood states were measured using the Profile of Mood States, and a Stroop task assessed selective attention. Both were presented by computer. The induction groups differed significantly in the expected direction on the mood subscales Anger, Tension, Depression, Vigour, and Fatigue, and also in the mean scale response times, i.e., slower responses for the depressed condition and faster for the elated one. Differences between conditions were found in the errors on the Stroop task: the depressed condition produced the fewest errors and significantly longer error reaction times. Speed of error was associated with self-reported fatigue.

  19. A Quality Improvement Project to Decrease Human Milk Errors in the NICU.

    PubMed

    Oza-Frank, Reena; Kachoria, Rashmi; Dail, James; Green, Jasmine; Walls, Krista; McClead, Richard E

    2017-02-01

    Ensuring safe human milk in the NICU is a complex process with many potential points for error, of which one of the most serious is administration of the wrong milk to the wrong infant. Our objective was to describe a quality improvement initiative that was associated with a reduction in human milk administration errors identified over a 6-year period in a typical, large NICU setting. We employed a quasi-experimental time series quality improvement initiative by using tools from the model for improvement, Six Sigma methodology, and evidence-based interventions. Scanned errors were identified from the human milk barcode medication administration system. Scanned errors of interest were wrong-milk-to-wrong-infant, expired-milk, or preparation errors. The scanned error rate and the impact of additional improvement interventions from 2009 to 2015 were monitored by using statistical process control charts. From 2009 to 2015, the total number of errors scanned declined from 97.1 per 1000 bottles to 10.8. Specifically, the number of expired milk error scans declined from 84.0 per 1000 bottles to 8.9. The number of preparation errors (4.8 per 1000 bottles to 2.2) and wrong-milk-to-wrong-infant errors scanned (8.3 per 1000 bottles to 2.0) also declined. By reducing the number of errors scanned, the number of opportunities for errors also decreased. Interventions that likely had the greatest impact on reducing the number of scanned errors included installation of bedside (versus centralized) scanners and dedicated staff to handle milk.

  20. The NRL relocatable ocean/acoustic ensemble forecast system

    NASA Astrophysics Data System (ADS)

    Rowley, C.; Martin, P.; Cummings, J.; Jacobs, G.; Coelho, E.; Bishop, C.; Hong, X.; Peggion, G.; Fabre, J.

    2009-04-01

    A globally relocatable regional ocean nowcast/forecast system has been developed to support rapid implementation of new regional forecast domains. The system is in operational use at the Naval Oceanographic Office for a growing number of regional and coastal implementations. The new system is the basis for an ocean acoustic ensemble forecast and adaptive sampling capability. We present an overview of the forecast system and the ocean ensemble and adaptive sampling methods. The forecast system consists of core ocean data analysis and forecast modules, software for domain configuration, surface and boundary condition forcing processing, and job control, and global databases for ocean climatology, bathymetry, tides, and river locations and transports. The analysis component is the Navy Coupled Ocean Data Assimilation (NCODA) system, a 3D multivariate optimum interpolation system that produces simultaneous analyses of temperature, salinity, geopotential, and vector velocity using remotely-sensed SST, SSH, and sea ice concentration, plus in situ observations of temperature, salinity, and currents from ships, buoys, XBTs, CTDs, profiling floats, and autonomous gliders. The forecast component is the Navy Coastal Ocean Model (NCOM). The system supports one-way nesting and multiple assimilation methods. The ensemble system uses the ensemble transform technique with error variance estimates from the NCODA analysis to represent initial condition error. Perturbed surface forcing or an atmospheric ensemble is used to represent errors in surface forcing. The ensemble transform Kalman filter is used to assess the impact of adaptive observations on future analysis and forecast uncertainty for both ocean and acoustic properties.

  1. Specificity of reliable change models and review of the within-subjects standard deviation as an error term.

    PubMed

    Hinton-Bayre, Anton D

    2011-02-01

    There is an ongoing debate over the preferred method(s) for determining the reliable change (RC) in individual scores over time. In the present paper, specificity comparisons of several classic and contemporary RC models were made using a real data set. This included a more detailed review of a new RC model recently proposed in this journal, that used the within-subjects standard deviation (WSD) as the error term. It was suggested that the RC(WSD) was more sensitive to change and theoretically superior. The current paper demonstrated that even in the presence of mean practice effects, false-positive rates were comparable across models when reliability was good and initial and retest variances were equivalent. However, when variances differed, discrepancies in classification across models became evident. Notably, the RC using the WSD provided unacceptably high false-positive rates in this setting. It was considered that the WSD was never intended for measuring change in this manner. The WSD actually combines systematic and error variance. The systematic variance comes from measurable between-treatment differences, commonly referred to as practice effect. It was further demonstrated that removal of the systematic variance and appropriate modification of the residual error term for the purpose of testing individual change yielded an error term already published and criticized in the literature. A consensus on the RC approach is needed. To that end, further comparison of models under varied conditions is encouraged.
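
    For orientation, the classical reliable change computation that the paper benchmarks against divides the observed change (optionally corrected for the mean practice effect) by the standard error of the difference. Below is a minimal sketch using the familiar Jacobson-Truax error term; all numbers are invented.

    ```python
    import numpy as np

    def reliable_change(x1, x2, sd1, r_xx, practice=0.0):
        """Classic RC index: change (minus mean practice effect) divided by
        the standard error of the difference, Sdiff = sqrt(2) * SEM with
        SEM = SD1 * sqrt(1 - r_xx).  |RC| > 1.96 flags reliable change."""
        sem = sd1 * np.sqrt(1.0 - r_xx)
        s_diff = np.sqrt(2.0) * sem
        return (x2 - x1 - practice) / s_diff

    # Hypothetical retest: baseline 100, retest 92, SD 10, reliability .85,
    # and a mean practice gain of 3 points
    print(reliable_change(100.0, 92.0, sd1=10.0, r_xx=0.85, practice=3.0))
    ```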

  2. Design of analytical failure detection using secondary observers

    NASA Technical Reports Server (NTRS)

    Sisar, M.

    1982-01-01

    The problem of designing analytical failure-detection systems (FDS) for sensors and actuators, using observers, is addressed. The use of observers in FDS is related to the examination of the n-dimensional observer error vector which carries the necessary information on possible failures. The problem is that in practical systems, in which only some of the components of the state vector are measured, one has access only to the m-dimensional observer-output error vector, with m ≤ n. In order to cope with these cases, a secondary observer is synthesized to reconstruct the entire observer-error vector from the observer-output error vector. This approach leads toward the design of highly sensitive and reliable FDS, with the possibility of obtaining a unique fingerprint for every possible failure. In order to keep the observer's (or Kalman filter's) false-alarm rate under a certain specified value, it is necessary to have an acceptable matching between the observer (or Kalman filter) models and the system parameters. A previously developed adaptive observer algorithm is used to maintain the desired system-observer model matching, despite initial mismatching or system parameter variations. Conditions for convergence of the adaptive process are obtained, leading to a simple adaptive law (algorithm) with the possibility of an a priori choice of fixed adaptive gains. Simulation results show good tracking performance with small observer output errors, while accurate and fast parameter identification, in both deterministic and stochastic cases, is obtained.
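
    The objects in play can be set up concretely: a Luenberger observer yields an n-dimensional error vector e = x - x_hat with dynamics e' = (A - LC)e, of which only the m-dimensional output error Ce is accessible; the secondary observer would be built on those error dynamics to reconstruct the full e. Below is a minimal sketch with placeholder matrices, not the paper's system.

    ```python
    import numpy as np

    # Placeholder plant: two states, one measured output (m = 1 < n = 2)
    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    C = np.array([[1.0, 0.0]])
    L = np.array([[1.5], [2.0]])            # primary observer gain (assumed)

    x = np.array([1.0, 0.0])                # true state
    xhat = np.zeros(2)                      # observer state
    dt = 0.01
    for _ in range(500):
        y = C @ x                           # measurement
        x = x + dt * (A @ x)                # unforced plant
        xhat = xhat + dt * (A @ xhat + (L @ (y - C @ xhat)).ravel())

    e = x - xhat                            # full n-dimensional observer error
    print("output error (measurable):", (C @ e).ravel())
    print("full error (target of the secondary observer):", e)
    ```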

  3. Development of interactions between sensorimotor representations in school-aged children

    PubMed Central

    KAGERER, Florian A.; CLARK, Jane E.

    2014-01-01

    Reliable sensorimotor integration is a prerequisite for optimal movement control; the functionality of this integration changes during development. Previous research has shown that the motor performance of school-age children is characterized by higher variability, particularly under conditions where vision is not available and movement planning and control are largely based on kinesthetic input. The purpose of the current study was to determine how kinesthetic-motor internal representations interact with visuo-motor representations during development. To this end, we induced a visuomotor adaptation in 59 children, ranging from 5 to 12 years of age, as well as in a group of adults, and measured initial directional error (IDE) and endpoint error (EPE) during a subsequent condition where visual feedback was not available and participants had to rely on kinesthetic input. Our results show that older children (age range 9–12 years) de-adapted significantly more than younger children (age range 5–8 years) over the course of 36 trials in the absence of vision, suggesting that the kinesthetic-motor internal representation in the older children was utilized more efficiently to guide hand movements, and was comparable to the performance of the adults. PMID:24636697

  4. Effects of model error on control of large flexible space antenna with comparisons of decoupled and linear quadratic regulator control procedures

    NASA Technical Reports Server (NTRS)

    Hamer, H. A.; Johnson, K. G.

    1986-01-01

    An analysis was performed to determine the effects of model error on the control of a large flexible space antenna. Control was achieved by employing two three-axis control-moment gyros (CMG's) located on the antenna column. State variables were estimated by including an observer in the control loop that used attitude and attitude-rate sensors on the column. Errors were assumed to exist in the individual model parameters: modal frequency, modal damping, mode slope (control-influence coefficients), and moment of inertia. Their effects on control-system performance were analyzed either for (1) nulling initial disturbances in the rigid-body modes, or (2) nulling initial disturbances in the first three flexible modes. The study includes the effects on stability, time to null, and control requirements (defined as maximum torque and total momentum), as well as on the accuracy of obtaining initial estimates of the disturbances. The effects on the transients of the undisturbed modes are also included. The results, which are compared for decoupled and linear quadratic regulator (LQR) control procedures, are shown in tabular form, parametric plots, and as sample time histories of modal-amplitude and control responses. Results of the analysis showed that the effects of model errors on the control-system performance were generally comparable for both control procedures. The effect of mode-slope error was the most serious of all model errors.
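
    For the LQR side of the comparison, the nominal gain computation is standard and makes the role of model error explicit: the gain is optimal for the design matrices (A, B), and any error in modal frequency, damping, or mode slope means it is applied to a different plant. Below is a minimal sketch on a placeholder lightly damped mode; the matrices and weights are illustrative, not the antenna model.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Placeholder flexible mode: state [modal amplitude, rate], light damping
    omega, zeta = 1.0, 0.005
    A = np.array([[0.0, 1.0], [-omega**2, -2 * zeta * omega]])
    B = np.array([[0.0], [1.0]])
    Q = np.diag([10.0, 1.0])     # state weights (illustrative)
    R = np.array([[1.0]])        # control weight (illustrative)

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)          # optimal feedback u = -K x
    print("LQR gain:", K)
    # Model error enters when K, computed for (A, B), is applied to a plant
    # whose frequency, damping, or mode slopes differ from the design values.
    ```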

  5. Prediction of final error level in learning and repetitive control

    NASA Astrophysics Data System (ADS)

    Levoci, Peter A.

    Repetitive control (RC) is a field that creates controllers to eliminate the effects of periodic disturbances on a feedback control system. The methods have applications in spacecraft problems, to isolate fine pointing equipment from periodic vibration disturbances such as slight imbalances in momentum wheels or cryogenic pumps. A closely related field of control design is iterative learning control (ILC) which aims to eliminate tracking error in a task that repeats, each time starting from the same initial condition. Experiments done on a robot at NASA Langley Research Center showed that the final error levels produced by different candidate repetitive and learning controllers can be very different, even when each controller is analytically proven to converge to zero error in the deterministic case. Real world plant and measurement noise and quantization noise (from analog to digital and digital to analog converters) in these control methods are acted on as if they were error sources that will repeat and should be cancelled, which implies that the algorithms amplify such errors. Methods are developed that predict the final error levels of general first order ILC, of higher order ILC including current cycle learning, and of general RC, in the presence of noise, using frequency response methods. The method involves much less computation than the corresponding time domain approach that involves large matrices. The time domain approach was previously developed for ILC and handles a certain class of ILC methods. Here methods are created to include zero-phase filtering that is very important in creating practical designs. Also, time domain methods are developed for higher order ILC and for repetitive control. Since RC and ILC must be implemented digitally, all of these methods predict final error levels at the sample times. It is shown here that RC can easily converge to small error levels between sample times, but that ILC in most applications will have large and diverging intersample error if in fact zero error is reached at the sample times. This is independent of the ILC law used, and is purely a property of the physical system. Methods are developed to address this issue.
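
    The amplification mechanism described here is easy to reproduce with the simplest first-order ILC law u_{k+1} = u_k + L e_k: the repeating part of the error is learned away, while fresh, nonrepeating noise is re-injected into the command every iteration and sets a final error floor. A toy sketch with an idealized unity-gain plant and invented gains:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, iters = 50, 30                  # samples per repetition, learning iterations
    y_des = np.sin(np.linspace(0, 2 * np.pi, N))   # desired trajectory

    u = np.zeros(N)                    # feedforward command, updated each repetition
    L = 0.5                            # learning gain (invented)
    for k in range(iters):
        y = u + 0.01 * rng.standard_normal(N)      # unity-gain plant + fresh noise
        e = y_des - y
        u = u + L * e                  # first-order ILC update u_{k+1} = u_k + L e_k
        if k % 5 == 0 or k == iters - 1:
            print(f"iteration {k:2d}: RMS error = {np.sqrt(np.mean(e**2)):.4f}")
    # The RMS error settles at a floor set by the nonrepeating noise, which the
    # learning law keeps treating as repeating error and folding into the command.
    ```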

  6. Stratospheric wind errors, initial states and forecast skill in the GLAS general circulation model

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J.

    1983-01-01

    Relations between stratospheric wind errors, initial states and 500 mb skill are investigated using the GLAS general circulation model initialized with FGGE data. Erroneous stratospheric winds are seen in all current general circulation models, appearing also as weak shear above the subtropical jet and as cold polar stratospheres. In this study it is shown that the more anticyclonic large-scale flows are correlated with large forecast stratospheric winds. In addition, it is found that for North America the resulting errors are correlated with initial state jet stream accelerations while for East Asia the forecast winds are correlated with initial state jet strength. Using 500 mb skill scores over Europe at day 5 to measure forecast performance, it is found that both poor forecast skill and excessive stratospheric winds are correlated with more anticyclonic large-scale flows over North America. It is hypothesized that the resulting erroneous kinetic energy contributes to the poor forecast skill, and that the problem is caused by a failure in the modeling of the stratospheric energy cycle in current general circulation models independent of vertical resolution.

  7. A Track Initiation Method for the Underwater Target Tracking Environment

    NASA Astrophysics Data System (ADS)

    Li, Dong-dong; Lin, Yang; Zhang, Yao

    2018-04-01

    A novel efficient track initiation method is proposed for the harsh underwater target tracking environment (heavy clutter and large measurement errors): the track splitting, evaluating, pruning and merging method (TSEPM). Track initiation demands that the method determine the existence and initial state of a target quickly and correctly. Heavy clutter and large measurement errors pose additional difficulties and challenges, which deteriorate and complicate track initiation in the harsh underwater target tracking environment. Current track initiation methods have three primary shortcomings when initializing a target: (a) they cannot effectively eliminate the disturbances caused by clutter; (b) they may yield a high false alarm probability and a low detection probability for a track; (c) they cannot correctly estimate the initial state of a newly confirmed track. Based on the multiple hypotheses tracking principle and a modified logic-based track initiation method, track splitting creates a large number of candidate tracks, including the true track originated from the target, in order to increase the detection probability of a track; and, to decrease the false alarm probability, track pruning and track merging are proposed, based on an evaluation mechanism, to reduce the false tracks. The TSEPM method can deal with the track initiation problems arising from heavy clutter and large measurement errors, determining the target's existence and estimating its initial state with the least squares method. What's more, the method is fully automatic and does not require any kind of manual input for initializing or tuning any parameter. Simulation results indicate that the new method significantly improves track initiation performance in the harsh underwater target tracking environment.
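
    The final step, estimating the initial state of a confirmed track by least squares, reduces for a constant-velocity motion model to a linear fit through the first few detections. A minimal sketch with invented measurements:

    ```python
    import numpy as np

    # Detections (time, x, y) from the first scans of a confirmed track,
    # corrupted by large measurement errors (values invented)
    t = np.array([0.0, 1.0, 2.0, 3.0])
    z = np.array([[0.1, 0.0], [1.2, 0.4], [1.9, 1.1], [3.2, 1.4]])

    # Constant-velocity model z(t) = p0 + v*t  ->  linear least squares
    H = np.column_stack([np.ones_like(t), t])
    coef, *_ = np.linalg.lstsq(H, z, rcond=None)
    p0, v = coef[0], coef[1]
    print("initial position:", p0, " initial velocity:", v)
    ```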

  8. Paradigm Shifts in Voluntary Force Control and Motor Unit Behaviors with the Manipulated Size of Visual Error Perception

    PubMed Central

    Chen, Yi-Ching; Lin, Yen-Ting; Chang, Gwo-Ching; Hwang, Ing-Shiou

    2017-01-01

    The detection of error information is an essential prerequisite of a feedback-based movement. This study investigated the differential behavior and neurophysiological mechanisms of a cyclic force-tracking task using error-reducing and error-enhancing feedback. The discharge patterns of a relatively large number of motor units (MUs) were assessed with custom-designed multi-channel surface electromyography following mathematical decomposition of the experimentally-measured signals. Force characteristics, force-discharge relation, and phase-locking cortical activities in the contralateral motor cortex to individual MUs were contrasted among the low (LSF), normal (NSF), and high scaling factor (HSF) conditions, in which the sizes of online execution errors were displayed with various amplification ratios. Along with a spectral shift of the force output toward a lower band, force output with a greater phase lead became less irregular, and tracking accuracy was worse in the LSF condition than in the HSF condition. The coherent discharge of high phasic (HP) MUs with the target signal was greater, and inter-spike intervals were larger, in the LSF condition than in the HSF condition. Force-tracking in the LSF condition manifested with stronger phase-locked EEG activity in the contralateral motor cortex to discharge of the (HP) MUs (LSF > NSF, HSF). The coherent discharge of the (HP) MUs during the cyclic force-tracking dominated the force-discharge relation, which increased inversely to the error scaling factor. In conclusion, the size of visualized error gates motor unit discharge, force-discharge relation, and the relative influences of the feedback and feedforward processes on force control. A smaller visualized error size favors voluntary force control using a feedforward process, in relation to a selective central modulation that enhances the coherent discharge of (HP) MUs. PMID:28348530

  9. Paradigm Shifts in Voluntary Force Control and Motor Unit Behaviors with the Manipulated Size of Visual Error Perception.

    PubMed

    Chen, Yi-Ching; Lin, Yen-Ting; Chang, Gwo-Ching; Hwang, Ing-Shiou

    2017-01-01

    The detection of error information is an essential prerequisite of a feedback-based movement. This study investigated the differential behavior and neurophysiological mechanisms of a cyclic force-tracking task using error-reducing and error-enhancing feedback. The discharge patterns of a relatively large number of motor units (MUs) were assessed with custom-designed multi-channel surface electromyography following mathematical decomposition of the experimentally-measured signals. Force characteristics, force-discharge relation, and phase-locking cortical activities in the contralateral motor cortex to individual MUs were contrasted among the low (LSF), normal (NSF), and high scaling factor (HSF) conditions, in which the sizes of online execution errors were displayed with various amplification ratios. Along with a spectral shift of the force output toward a lower band, force output with a greater phase lead became less irregular, and tracking accuracy was worse in the LSF condition than in the HSF condition. The coherent discharge of high phasic (HP) MUs with the target signal was greater, and inter-spike intervals were larger, in the LSF condition than in the HSF condition. Force-tracking in the LSF condition manifested with stronger phase-locked EEG activity in the contralateral motor cortex to discharge of the (HP) MUs (LSF > NSF, HSF). The coherent discharge of the (HP) MUs during the cyclic force-tracking dominated the force-discharge relation, which increased inversely to the error scaling factor. In conclusion, the size of visualized error gates motor unit discharge, force-discharge relation, and the relative influences of the feedback and feedforward processes on force control. A smaller visualized error size favors voluntary force control using a feedforward process, in relation to a selective central modulation that enhances the coherent discharge of (HP) MUs.

  10. Authenticating concealed private data while maintaining concealment

    DOEpatents

    Thomas, Edward V [Albuquerque, NM; Draelos, Timothy J [Albuquerque, NM

    2007-06-26

    A method of and system for authenticating concealed and statistically varying multi-dimensional data comprising: acquiring an initial measurement of an item, wherein the initial measurement is subject to measurement error; applying a transformation to the initial measurement to generate reference template data; acquiring a subsequent measurement of an item, wherein the subsequent measurement is subject to measurement error; applying the transformation to the subsequent measurement; and calculating a Euclidean distance metric between the transformed measurements; wherein the calculated Euclidean distance metric is identical to the Euclidean distance metric between the measurements prior to transformation.
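
    The distance-preserving property claimed here is exactly what any orthogonal transformation provides, which is one simple way to conceal measurements while keeping them matchable. The random orthogonal matrix below is purely illustrative; the patent's actual transformation is not specified in this abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 8                                       # dimensionality of a measurement

    # Random orthogonal matrix Q (QR of a Gaussian matrix); kept secret,
    # Q @ x conceals x but preserves all pairwise Euclidean distances.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

    reference = rng.standard_normal(n)                 # enrollment measurement
    later = reference + 0.05 * rng.standard_normal(n)  # noisy re-measurement

    d_plain = np.linalg.norm(reference - later)
    d_concealed = np.linalg.norm(Q @ reference - Q @ later)
    print(d_plain, d_concealed)                 # identical up to rounding
    ```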

  11. Hindcasting of decadal‐timescale estuarine bathymetric change with a tidal‐timescale model

    USGS Publications Warehouse

    Ganju, Neil K.; Schoellhamer, David H.; Jaffe, Bruce E.

    2009-01-01

    Hindcasting decadal-timescale bathymetric change in estuaries is prone to error due to limited data for initial conditions, boundary forcing, and calibration; computational limitations further hinder efforts. We developed and calibrated a tidal-timescale model to bathymetric change in Suisun Bay, California, over the 1867–1887 period. A general, multiple-timescale calibration ensured robustness over all timescales; two input reduction methods, the morphological hydrograph and the morphological acceleration factor, were applied at the decadal timescale. The model was calibrated to net bathymetric change in the entire basin; average error for bathymetric change over individual depth ranges was 37%. On a model cell-by-cell basis, performance for spatial amplitude correlation was poor over the majority of the domain, though spatial phase correlation was better, with 61% of the domain correctly indicated as erosional or depositional. Poor agreement was likely caused by the specification of initial bed composition, which was unknown during the 1867–1887 period. Cross-sectional bathymetric change between channels and flats, driven primarily by wind wave resuspension, was modeled with higher skill than longitudinal change, which is driven in part by gravitational circulation. The accelerated response of depth may have prevented gravitational circulation from being represented properly. As performance criteria became more stringent in a spatial sense, the error of the model increased. While these methods are useful for estimating basin-scale sedimentation changes, they may not be suitable for predicting specific locations of erosion or deposition. They do, however, provide a foundation for realistic estuarine geomorphic modeling applications.
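
    Of the two input-reduction devices named, the morphological acceleration factor is the simpler: the bed change computed over one simulated tidal cycle is multiplied by a factor so that a short hydrodynamic run represents many cycles of morphological time. A schematic sketch with invented numbers (not the Suisun Bay configuration):

    ```python
    # Schematic morphological acceleration: one simulated tidal cycle of bed
    # change, scaled by MORFAC, stands in for MORFAC cycles of real time.
    morfac = 10.0
    bed_change_one_cycle = 0.002     # m of deposition per tidal cycle (invented)

    depth = 5.0                      # initial depth, m
    simulated_cycles = 50            # hydrodynamic cycles actually run
    for _ in range(simulated_cycles):
        depth -= morfac * bed_change_one_cycle   # accelerated bed update
    print(f"depth after {morfac * simulated_cycles:g} effective cycles: {depth:.2f} m")
    ```

    The trade-off noted in the abstract follows directly: accelerating the bed response is cheap, but slowly evolving processes such as gravitational circulation may not be represented faithfully at the accelerated rate.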

  12. Performing a reaching task with one arm while adapting to a visuomotor rotation with the other can lead to complete transfer of motor learning across the arms

    PubMed Central

    Lei, Yuming; Binder, Jeffrey R.

    2015-01-01

    The extent to which motor learning is generalized across the limbs is typically very limited. Here, we investigated how two motor learning hypotheses could be used to enhance the extent of interlimb transfer. According to one hypothesis, we predicted that reinforcement of successful actions by providing binary error feedback regarding task success or failure, in addition to terminal error feedback, during initial training would increase the extent of interlimb transfer following visuomotor adaptation (experiment 1). According to the other hypothesis, we predicted that performing a reaching task repeatedly with one arm without providing performance feedback (which prevented learning the task with this arm), while concurrently adapting to a visuomotor rotation with the other arm, would increase the extent of transfer (experiment 2). Results indicate that providing binary error feedback, compared with continuous visual feedback that provided movement direction and amplitude information, had no influence on the extent of transfer. In contrast, repeatedly performing (but not learning) a specific task with one arm while visuomotor adaptation occurred with the other arm led to nearly complete transfer. This suggests that the absence of motor instances associated with specific effectors and task conditions is the major reason for limited interlimb transfer and that reinforcement of successful actions during initial training is not beneficial for interlimb transfer. These findings indicate crucial contributions of effector- and task-specific motor instances, which are thought to underlie (a type of) model-free learning, to optimal motor learning and interlimb transfer. PMID:25632082

  13. The Topology of Large-Scale Structure in the 1.2 Jy IRAS Redshift Survey

    NASA Astrophysics Data System (ADS)

    Protogeros, Zacharias A. M.; Weinberg, David H.

    1997-11-01

    We measure the topology (genus) of isodensity contour surfaces in volume-limited subsets of the 1.2 Jy IRAS redshift survey, for smoothing scales λ = 4, 7, and 12 h⁻¹ Mpc. At 12 h⁻¹ Mpc, the observed genus curve has a symmetric form similar to that predicted for a Gaussian random field. At the shorter smoothing lengths, the observed genus curve shows a modest shift in the direction of an isolated cluster or "meatball" topology. We use mock catalogs drawn from cosmological N-body simulations to investigate the systematic biases that affect topology measurements in samples of this size and to determine the full covariance matrix of the expected random errors. We incorporate the error correlations into our evaluations of theoretical models, obtaining both frequentist assessments of absolute goodness of fit and Bayesian assessments of models' relative likelihoods. We compare the observed topology of the 1.2 Jy survey to the predictions of dynamically evolved, unbiased, gravitational instability models that have Gaussian initial conditions. The model with an n = -1 power-law initial power spectrum achieves the best overall agreement with the data, though models with a low-density cold dark matter power spectrum and an n = 0 power-law spectrum are also consistent. The observed topology is inconsistent with an initially Gaussian model that has n = -2, and it is strongly inconsistent with a Voronoi foam model, which has a non-Gaussian, bubble topology.
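
    The Gaussian benchmark against which the observed genus curves are judged has a closed form: for a Gaussian random field, the genus per unit volume at threshold ν (in units of the field's standard deviation) follows

    ```latex
    g(\nu) = A\,\bigl(1-\nu^{2}\bigr)\, e^{-\nu^{2}/2}
    ```

    where the amplitude A is set by the second moment of the smoothed power spectrum. The symmetric shape about ν = 0 is what the 12 h⁻¹ Mpc measurement matches; shifts of the curve diagnose meatball or bubble departures from Gaussian initial conditions.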

  14. A theory of stationarity and asymptotic approach in dissipative systems

    NASA Astrophysics Data System (ADS)

    Rubel, Michael Thomas

    2007-05-01

    The approximate dynamics of many physical phenomena, including turbulence, can be represented by dissipative systems of ordinary differential equations. One often turns to numerical integration to solve them. There is an incompatibility, however, between the answers it can produce (i.e., specific solution trajectories) and the questions one might wish to ask (e.g., what behavior would be typical in the laboratory?). To determine its outcome, numerical integration requires more detailed initial conditions than a laboratory could normally provide. In place of initial conditions, experiments stipulate how tests should be carried out: only under statistically stationary conditions, for example, or only during asymptotic approach to a final state. Stipulations such as these, rather than initial conditions, are what determine outcomes in the laboratory. This theoretical study examines whether the points of view can be reconciled: What is the relationship between one's statistical stipulations for how an experiment should be carried out--stationarity or asymptotic approach--and the expected results? How might those results be determined without invoking initial conditions explicitly? To answer these questions, stationarity and asymptotic approach conditions are analyzed in detail. Each condition is treated as a statistical constraint on the system--a restriction on the probability density of states that might be occupied when measurements take place. For stationarity, this reasoning leads to a singular, invariant probability density which is already familiar from dynamical systems theory. For asymptotic approach, it leads to a new, more regular probability density field. A conjecture regarding what appears to be a limit relationship between the two densities is presented. By making use of the new probability densities, one can derive output statistics directly, avoiding the need to create or manipulate initial data, and thereby avoiding the conceptual incompatibility mentioned above. This approach also provides a clean way to derive reduced-order models, complete with local and global error estimates, as well as a way to compare existing reduced-order models objectively. The new approach is explored in the context of five separate test problems: a trivial one-dimensional linear system, a damped unforced linear oscillator in two dimensions, the isothermal Rayleigh-Plesset equation, Lorenz's equations, and the Stokes limit of Burgers' equation in one space dimension. In each case, various output statistics are deduced without recourse to initial conditions. Further, reduced-order models are constructed for asymptotic approach of the damped unforced linear oscillator, the isothermal Rayleigh-Plesset system, and Lorenz's equations, and for stationarity of Lorenz's equations.

  15. The influence of LED lighting on task accuracy: time of day, gender and myopia effects

    NASA Astrophysics Data System (ADS)

    Rao, Feng; Chan, A. H. S.; Zhu, Xi-Fang

    2017-07-01

    In this research, task errors were obtained during performance of a marker location task in which the markers were shown on a computer screen under nine LED lighting conditions; three illuminances (100, 300 and 500 lx) and three color temperatures (3000, 4500 and 6500 K). A total of 47 students participated voluntarily in these tasks. The results showed that task errors in the morning were small and nearly constant across the nine lighting conditions. However in the afternoon, the task errors were significantly larger and varied across lighting conditions. The largest errors for the afternoon session occurred when the color temperature was 4500 K and illuminance 500 lx. There were significant differences between task errors in the morning and afternoon sessions. No significant difference between females and males was found. Task errors for high myopia students were significantly larger than for the low myopia students under the same lighting conditions. In summary, the influence of LED lighting on task accuracy during office hours was not gender dependent, but was time of day and myopia dependent.

  16. Neurochemical enhancement of conscious error awareness.

    PubMed

    Hester, Robert; Nandam, L Sanjay; O'Connell, Redmond G; Wagner, Joe; Strudwick, Mark; Nathan, Pradeep J; Mattingley, Jason B; Bellgrove, Mark A

    2012-02-22

    How the brain monitors ongoing behavior for performance errors is a central question of cognitive neuroscience. Diminished awareness of performance errors limits the extent to which humans engage in corrective behavior and has been linked to loss of insight in a number of psychiatric syndromes (e.g., attention deficit hyperactivity disorder, drug addiction). These conditions share alterations in monoamine signaling that may influence the neural mechanisms underlying error processing, but our understanding of the neurochemical drivers of these processes is limited. We conducted a randomized, double-blind, placebo-controlled, cross-over design of the influence of methylphenidate, atomoxetine, and citalopram on error awareness in 27 healthy participants. The error awareness task, a go/no-go response inhibition paradigm, was administered to assess the influence of monoaminergic agents on performance errors during fMRI data acquisition. A single dose of methylphenidate, but not atomoxetine or citalopram, significantly improved the ability of healthy volunteers to consciously detect performance errors. Furthermore, this behavioral effect was associated with a strengthening of activation differences in the dorsal anterior cingulate cortex and inferior parietal lobe during the methylphenidate condition for errors made with versus without awareness. Our results have implications for the understanding of the neurochemical underpinnings of performance monitoring and for the pharmacological treatment of a range of disparate clinical conditions that are marked by poor awareness of errors.

  17. Representation of fine scale atmospheric variability in a nudged limited area quasi-geostrophic model: application to regional climate modelling

    NASA Astrophysics Data System (ADS)

    Omrani, H.; Drobinski, P.; Dubos, T.

    2009-09-01

    In this work, we consider the effect of indiscriminate nudging time on the large and small scales of an idealized limited area model simulation. The limited area model (LAM) is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by its "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. Compared to a previous study by Salameh et al. (2009), who investigated the existence of an optimal nudging time minimizing the error on both large and small scales in a linear model, we here use a fully non-linear model, which allows us to represent the chaotic nature of the atmosphere: given the perfect quasi-geostrophic model, errors in the initial conditions, concentrated mainly in the smaller scales of motion, amplify and cascade into the larger scales, eventually resulting in a prediction with low skill. To quantify the predictability of our quasi-geostrophic model, we measure the rate of divergence of the system trajectories in phase space (Lyapunov exponent) from a set of simulations initiated with a perturbation of a reference initial state. Predictability of the "global", periodic model is mostly controlled by the beta effect. In the LAM, predictability decreases as the domain size increases. Then, the effect of large-scale nudging is studied using the "perfect model" approach. Two sets of experiments were performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic LAM, where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. In both sets of experiments, the best spatial correlation between the nudged simulation and the reference is observed with a nudging time close to the predictability time.
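
    Indiscriminate nudging enters the prognostic equations as a linear relaxation toward the driving fields, dx/dt = f(x) + (x_driver - x)/τ, so the nudging time τ directly trades small-scale freedom against large-scale fidelity. A schematic scalar illustration follows; the toy dynamics and τ values are invented, not the quasi-geostrophic setup.

    ```python
    import numpy as np

    def integrate(tau, steps=2000, dt=0.01):
        """Toy 'LAM' variable with a nonlinear internal tendency, relaxed
        toward a smooth large-scale driver with nudging time tau."""
        x, xs, ds = 0.1, [], []
        for k in range(steps):
            t = k * dt
            driver = 0.5 + 0.3 * np.sin(0.5 * t)          # smooth driver
            internal = 3.0 * x - 4.0 * x * x              # internal dynamics
            x = x + dt * (internal + (driver - x) / tau)  # nudging relaxation
            xs.append(x); ds.append(driver)
        return np.array(xs), np.array(ds)

    for tau in (0.1, 1.0, 10.0):                          # strong -> weak nudging
        x, d = integrate(tau)
        print(f"tau={tau:5.1f}: RMS departure from driver = {np.sqrt(np.mean((x - d)**2)):.3f}")
    # Strong nudging (small tau) pins the LAM to the driving fields; weak
    # nudging lets the internal dynamics depart from the large scales.
    ```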

  18. Pilot Age and Error in Air-Taxi Crashes

    PubMed Central

    Rebok, George W.; Qiang, Yandong; Baker, Susan P.; Li, Guohua

    2010-01-01

    Introduction The associations of pilot error with the type of flight operations and basic weather conditions are well documented. The correlation between pilot characteristics and error is less clear. This study aims to examine whether pilot age is associated with the prevalence and patterns of pilot error in air-taxi crashes. Methods Investigation reports from the National Transportation Safety Board for crashes involving non-scheduled Part 135 operations (i.e., air taxis) in the United States between 1983 and 2002 were reviewed to identify pilot error and other contributing factors. Crash circumstances and the presence and type of pilot error were analyzed in relation to pilot age using Chi-square tests. Results Of the 1751 air-taxi crashes studied, 28% resulted from mechanical failure, 25% from loss of control at landing or takeoff, 7% from visual flight rule conditions into instrument meteorological conditions, 7% from fuel starvation, 5% from taxiing, and 28% from other causes. Crashes among older pilots were more likely to occur during the daytime rather than at night and off airport than on airport. The patterns of pilot error in air-taxi crashes were similar across age groups. Of the errors identified, 27% were flawed decisions, 26% were inattentiveness, 23% mishandled aircraft kinetics, 15% mishandled wind and/or runway conditions, and 11% were others. Conclusions Pilot age is associated with crash circumstances but not with the prevalence and patterns of pilot error in air-taxi crashes. Lack of age-related differences in pilot error may be attributable to the “safe worker effect.” PMID:19601508

  19. Quantitative conditions for time evolution in terms of the von Neumann equation

    NASA Astrophysics Data System (ADS)

    Wang, WenHua; Cao, HuaiXin; Chen, ZhengLi; Wang, Lie

    2018-07-01

    The adiabatic theorem describes the time evolution of a pure state and gives an adiabatic approximate solution to the Schrödinger equation by choosing a single eigenstate of the Hamiltonian as the initial state. In quantum systems, states are divided into pure states (unit vectors) and mixed states (density matrices, i.e., positive operators with trace one). Accordingly, mixed states have their own corresponding time evolution, which is described by the von Neumann equation. In this paper, we discuss the quantitative conditions for the time evolution of mixed states in terms of the von Neumann equation. First, we introduce the definitions of uniformly slowly evolving and δ-uniformly slowly evolving with respect to mixed states; then we present a necessary and sufficient condition for the Hamiltonian of the system to be uniformly slowly evolving, and we obtain some upper bounds for the adiabatic approximation error. Lastly, we illustrate our results with an example.
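
    For reference, the evolution equation at issue takes the standard textbook form below (a background statement, not a result of the paper):

    ```latex
    % Von Neumann equation for a mixed state (density matrix) rho(t):
    \begin{equation}
      i\hbar\,\frac{\partial \rho(t)}{\partial t} \;=\; \bigl[\,H(t),\,\rho(t)\,\bigr]
      \;=\; H(t)\rho(t) - \rho(t)H(t).
    \end{equation}
    % For a pure state rho(t) = |psi(t)><psi(t)| this reduces to the
    % Schrodinger equation  i*hbar d|psi>/dt = H|psi>.
    ```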

  20. Analysis of modal behavior at frequency cross-over

    NASA Astrophysics Data System (ADS)

    Costa, Robert N., Jr.

    1994-11-01

    The existence of the mode crossing condition is detected and analyzed in the Active Control of Space Structures Model 4 (ACOSS4). The condition is studied for its contribution to the inability of previous algorithms to successfully optimize the structure and converge to a feasible solution. A new algorithm is developed to detect and correct for mode crossings. The existence of the mode crossing condition is verified in ACOSS4 and found not to have appreciably affected the solution. The structure is then successfully optimized using new analytic methods based on modal expansion. An unrelated error in the previously used optimization algorithm is verified and corrected, thereby equipping the optimization algorithm with a second analytic method for eigenvector differentiation based on Nelson's method. A second structure, the Control of Flexible Structures (COFS) model, is successfully reproduced and an initial eigenanalysis completed.

  1. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    DOE PAGES

    Locatelli, R.; Bousquet, P.; Chevallier, F.; ...

    2013-10-08

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr-1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr-1 in North America to 7 Tg yr-1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations strongly call into question the consistency of transport model errors in current inverse systems.

  2. A New Method for Calculating Counts in Cells

    NASA Astrophysics Data System (ADS)

    Szapudi, István

    1998-04-01

    In the near future, a new generation of CCD-based galaxy surveys will enable high-precision determination of the N-point correlation functions. The resulting information will help to resolve the ambiguities associated with two-point correlation functions, thus constraining theories of structure formation, biasing, and Gaussianity of initial conditions independently of the value of Ω. As one of the most successful methods of extracting the amplitude of higher order correlations is based on measuring the distribution of counts in cells, this work presents an advanced way of measuring it with unprecedented accuracy. Szapudi & Colombi identified the main sources of theoretical errors in extracting counts in cells from galaxy catalogs. One of these sources, termed measurement error, stems from the fact that conventional methods use a finite number of sampling cells to estimate counts in cells. This effect can be circumvented by using an infinite number of cells. This paper presents an algorithm that, in practice, achieves this goal; that is, it is equivalent to throwing an infinite number of sampling cells in finite time. The errors associated with sampling cells are completely eliminated by this procedure, which will be essential for the accurate analysis of future surveys.
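
    To make the "measurement error" concrete, here is a minimal sketch of the conventional finite-sampling estimator of the counts-in-cells distribution P_N, i.e., the estimator whose sampling error the paper's algorithm eliminates; the catalog and cell parameters are illustrative.

    ```python
    import numpy as np

    def counts_in_cells(positions, box_size, cell_size, n_cells, seed=None):
        """Finite-sampling estimate of P_N: throw n_cells random cubic cells
        of side cell_size into the box and histogram the galaxy counts."""
        rng = np.random.default_rng(seed)
        corners = rng.uniform(0.0, box_size - cell_size, size=(n_cells, 3))
        counts = np.empty(n_cells, dtype=int)
        for i, c in enumerate(corners):
            inside = np.all((positions >= c) & (positions < c + cell_size), axis=1)
            counts[i] = inside.sum()
        return np.bincount(counts) / n_cells   # P_N[k]: probability of k galaxies

    # Toy Poisson catalog: 10,000 points in a unit box, cells of side 0.1.
    pos = np.random.default_rng(0).uniform(0.0, 1.0, size=(10000, 3))
    print(counts_in_cells(pos, box_size=1.0, cell_size=0.1, n_cells=5000)[:6])
    ```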

  3. The hybrid delay task: Can capuchin monkeys (Cebus apella) sustain a delay after an initial choice to do so?

    PubMed Central

    Paglieri, Fabio; Focaroli, Valentina; Bramlett, Jessica; Tierno, Valeria; McIntyre, Joseph M.; Addessi, Elsa; Evans, Theodore A.; Beran, Michael J.

    2013-01-01

    Choosing to wait for a better outcome (delay choice) and sustaining the delay prior to that outcome (delay maintenance) are both prerequisites for successful self-control in intertemporal choices. However, most existing experimental methods test these skills in isolation from each other, and no significant correlation has been observed in performance across these tasks. In this study we introduce a new paradigm, the hybrid delay task, which combines an initial delay choice with a subsequent delay maintenance stage. This allows testing how often choosing to wait is paired with the actual ability to do so. We tested 18 capuchin monkeys (Cebus apella) from two laboratories in various conditions, and we found that subjects frequently chose the delayed reward but then failed to wait for it, due to poor delay maintenance. However, performance improved with experience, and different behavioral responses for error correction were evident. These findings have far-reaching implications: if such a high error rate were observed also in other species (possibly including Homo sapiens), this may indicate that delay choice tasks that make use of salient, prepotent stimuli do not reliably assess generalized self-control, insofar as choosing to wait does not entail always being able to do so. PMID:23274585

  4. Consistency and convergence for numerical radiation conditions

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas

    1990-01-01

    The problem of imposing radiation conditions at artificial boundaries for the numerical simulation of wave propagation is considered. Emphasis is on the behavior and analysis of the error which results from the restriction of the domain. The theory of error estimation is briefly outlined for boundary conditions. Use is made of the asymptotic analysis of propagating wave groups to derive and analyze boundary operators. For dissipative problems this leads to local, accurate conditions, but falls short in the hyperbolic case. A numerical experiment on the solution of the wave equation with cylindrical symmetry is described. A unified presentation of a number of conditions which have been proposed in the literature is given and the time dependence of the error which results from their use is displayed. The results are in qualitative agreement with theoretical considerations. It was found, however, that for this model problem it is particularly difficult to force the error to decay rapidly in time.

  5. Topics in spectral methods

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Turkel, E.

    1985-01-01

    After detailing the construction of spectral approximations to time-dependent mixed initial boundary value problems, a study is conducted of differential equations of the form $\partial u/\partial t = Lu + f$, where for each t, u(t) belongs to a Hilbert space such that u satisfies homogeneous boundary conditions. For the sake of simplicity, it is assumed that L is an unbounded, time-independent linear operator. Attention is given to Fourier methods of both Galerkin and pseudospectral type, the Galerkin method, the pseudospectral Chebyshev and Legendre methods, the error equation, hyperbolic partial differential equations, and time discretization and iterative methods.

  6. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections

    PubMed Central

    Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.

    2018-01-01

    Background Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID:29570737
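
    The parallel-versions comparison at the heart of the method is easy to sketch outside a spreadsheet. A minimal illustration of the ±5% material-error criterion applied to projections from two hypothetical model versions:

    ```python
    import numpy as np

    def flag_material_differences(v1, v2, threshold=0.05):
        """Cell-by-cell relative difference between two parallel model
        versions; differences beyond +/-5% are flagged as material."""
        rel_diff = (v2 - v1) / np.where(v1 != 0, v1, np.nan)
        return rel_diff, np.abs(rel_diff) > threshold

    version_a = np.array([1200.0, 850.0, 430.0])   # e.g., named-matrices version
    version_b = np.array([1210.0, 980.0, 430.0])   # e.g., column/row-reference version
    diff, flags = flag_material_differences(version_a, version_b)
    print(np.round(diff * 100, 1), flags)   # [ 0.8 15.3  0. ] [False  True False]
    ```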

  8. Medication safety initiative in reducing medication errors.

    PubMed

    Nguyen, Elisa E; Connolly, Phyllis M; Wong, Vivian

    2010-01-01

    The purpose of the study was to evaluate whether a Medication Pass Time Out initiative was effective and sustainable in reducing medication administration errors. A retrospective descriptive method was used for this research, in which a structured Medication Pass Time Out program was implemented following staff and physician education. As a result, the rate of interruptions during the medication administration process decreased from 81% to 0%. Across the observations at baseline, 6 months, and 1 year after implementation, the percentage of medication doses administered without interruption improved from 81% to 99%. Medication doses administered without errors at baseline, 6 months, and 1 year improved from 98% to 100%.

  9. Workshops Increase Students' Proficiency at Identifying General and APA-Style Writing Errors

    ERIC Educational Resources Information Center

    Jorgensen, Terrence D.; Marek, Pam

    2013-01-01

    To determine the effectiveness of 20- to 30-min workshops on recognition of errors in American Psychological Association-style writing, 58 introductory psychology students attended one of the three workshops (on grammar, mechanics, or references) and completed error recognition tests (pretest, initial posttest, and three follow-up tests). As a…

  10. The impact of satellite temperature soundings on the forecasts of a small national meteorological service

    NASA Technical Reports Server (NTRS)

    Wolfson, N.; Thomasell, A.; Alperson, Z.; Brodrick, H.; Chang, J. T.; Gruber, A.; Ohring, G.

    1984-01-01

    The impact of introducing satellite temperature sounding data on a numerical weather prediction model of a national weather service is evaluated. A dry, five-level primitive equation model, which covers most of the Northern Hemisphere, is used for these experiments. Series of parallel forecast runs out to 48 hours are made with three different sets of initial conditions: (1) NOSAT runs, in which only conventional surface and upper air observations are used; (2) SAT runs, in which satellite soundings are added to the conventional data over oceanic regions and North Africa; and (3) ALLSAT runs, in which the conventional upper air observations are replaced by satellite soundings over the entire model domain. The impact on the forecasts is evaluated by three verification methods: the RMS errors in sea level pressure forecasts, systematic errors in sea level pressure forecasts, and errors in subjective forecasts of significant weather elements for a selected portion of the model domain. For the relatively short range of the present forecasts, the major beneficial impacts on the sea level pressure forecasts are found precisely in those areas where the satellite soundings are inserted and where conventional upper air observations are sparse. The RMS and systematic errors are reduced in these regions. The subjective forecasts of significant weather elements are improved with the use of the satellite data. It is found that the ALLSAT forecasts are of a quality comparable to the SAT forecasts.
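
    The first two verification measures are standard and compact enough to state in code. A minimal sketch with hypothetical sea level pressure values:

    ```python
    import numpy as np

    def rms_error(forecast, verification):
        """Root-mean-square error of, e.g., sea level pressure forecasts."""
        return np.sqrt(np.mean((forecast - verification) ** 2))

    def systematic_error(forecast, verification):
        """Mean (systematic) error, i.e., the forecast bias."""
        return np.mean(forecast - verification)

    # Hypothetical 48 h sea level pressure forecasts vs. verifying analyses (hPa):
    fc  = np.array([1012.0, 1008.5, 1003.0, 1015.2])
    obs = np.array([1010.5, 1009.0, 1001.0, 1014.0])
    print(f"RMSE = {rms_error(fc, obs):.2f} hPa, bias = {systematic_error(fc, obs):+.2f} hPa")
    ```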

  11. Effect of gyro verticality error on lateral autoland tracking performance for an inertially smoothed control law

    NASA Technical Reports Server (NTRS)

    Thibodeaux, J. J.

    1977-01-01

    The results of a simulation study performed to determine the effects of gyro verticality error on lateral autoland tracking and landing performance are presented. A first-order vertical gyro error model was used to generate the measurement of the roll attitude feedback signal normally supplied by an inertial navigation system. The lateral autoland law used was an inertially smoothed control design. The effects of initial angular gyro tilt errors (2 deg, 3 deg, 4 deg, and 5 deg), introduced prior to localizer capture, were investigated by use of a small perturbation aircraft simulation. These errors represent the deviations which could occur in a conventional attitude sensor as a result of maneuver-induced spin-axis misalignment and drift. Results showed that for a 1.05 deg per minute erection rate and a 5 deg initial tilt error, ON COURSE autoland control logic was not satisfied. Failure to attain the ON COURSE mode precluded high control loop gains and localizer beam path integration and resulted in unacceptable beam standoff at touchdown.
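
    To see why the 5 deg case fails, consider how slowly a constant-rate erection loop removes the initial tilt. A minimal sketch of this relationship (an assumed simplification, not the report's full first-order error model):

    ```python
    import numpy as np

    def tilt_error_deg(t_min, tilt0_deg, erection_rate=1.05):
        """Residual gyro tilt error (deg) when the initial tilt is erected
        toward vertical at a constant rate (deg/min), clipped at zero."""
        return np.maximum(tilt0_deg - erection_rate * np.asarray(t_min), 0.0)

    t = np.arange(0.0, 6.0)   # minutes after the tilt error is introduced
    print(tilt_error_deg(t, tilt0_deg=5.0))
    # [5.   3.95 2.9  1.85 0.8  0.  ] -- nearly 5 minutes to erect a 5 deg tilt
    ```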

  12. The Effect of Divided Attention on Inhibiting the Gravity Error

    ERIC Educational Resources Information Center

    Hood, Bruce M.; Wilson, Alice; Dyson, Sally

    2006-01-01

    Children who could overcome the gravity error on Hood's (1995) tubes task were tested in a condition where they had to monitor two falling balls. This condition significantly impaired search performance with the majority of mistakes being gravity errors. In a second experiment, the effect of monitoring two balls was compared in the tubes task and…

  13. SU-E-T-191: PITSTOP: Process Improvement Techniques, Software Tools, and Operating Principles for a Quality Initiative Discovery Framework.

    PubMed

    Siochi, R

    2012-06-01

    To develop a quality initiative discovery framework using process improvement techniques, software tools and operating principles. Process deviations are entered into a radiotherapy incident reporting database. Supervisors use an in-house Event Analysis System (EASy) to discuss incidents with staff. Major incidents are analyzed with an in-house Fault Tree Analysis (FTA). A meta-analysis is performed using association, text mining, key word clustering, and differential frequency analysis. A key operating principle encourages the creation of forcing functions via rapid application development. 504 events have been logged this past year. The key word analysis indicates that the root cause for the top-ranked key words was miscommunication. This was also the root cause found from association analysis, where 24% of the time that an event involved a physician it also involved a nurse. Differential frequency analysis revealed that sharp peaks at week 27 were followed by 3 major incidents, two of which were dose related. The peak was largely due to the front desk, which caused distractions in other areas. The analysis led to many process improvement projects, but there is still a major systematic issue with the use of forms. The solution we identified is to implement Smart Forms to perform error checking and interlocking. Our first initiative replaced our daily QA checklist with a form that uses custom validation routines, preventing therapists from proceeding with treatments until out-of-tolerance conditions are corrected. PITSTOP has increased the number of quality initiatives in our department, and we have discovered or confirmed common underlying causes of a variety of seemingly unrelated errors. It has motivated the replacement of all forms with smart forms. © 2012 American Association of Physicists in Medicine.

  14. Robust fixed-time synchronization for uncertain complex-valued neural networks with discontinuous activation functions.

    PubMed

    Ding, Xiaoshuai; Cao, Jinde; Alsaedi, Ahmed; Alsaadi, Fuad E; Hayat, Tasawar

    2017-06-01

    This paper is concerned with fixed-time synchronization for a class of complex-valued neural networks in the presence of discontinuous activation functions and parameter uncertainties. Fixed-time synchronization not only requires that the considered master-slave system realize synchronization within a finite time segment, but also that there be a uniform upper bound on such time intervals over all initial synchronization errors. To accomplish the target of fixed-time synchronization, a novel feedback control procedure is designed for the slave neural networks. By means of Filippov discontinuity theories and Lyapunov stability theories, some sufficient conditions are established for the selection of control parameters that guarantee synchronization within a fixed time, while an upper bound on the settling time is acquired as well, which can be tuned to predefined values independently of the initial conditions. Additionally, criteria for a modified controller assuring fixed-time anti-synchronization are also derived for the same system. An example is included to illustrate the proposed methodologies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Cognitive decision errors and organization vulnerabilities in nuclear power plant safety management: Modeling using the TOGA meta-theory framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappelli, M.; Gadomski, A. M.; Sepiellis, M.

    In the field of nuclear power plant (NPP) safety modeling, the perceived role of socio-cognitive engineering (SCE) is continuously increasing. Today, the focus is especially on the identification of human and organizational decisional errors caused by operators and managers under high-risk conditions, as is evident from reports on nuclear incidents that occurred in the past. At present, the engineering and social safety requirements need to enlarge their domain of interest in such a way as to include all possible loss-generating events that could be the consequences of an abnormal state of an NPP. Socio-cognitive modeling of Integrated Nuclear Safety Management (INSM) using the TOGA meta-theory was discussed during the ICCAP 2011 Conference. In this paper, more detailed aspects of cognitive decision-making and its possible human errors and organizational vulnerability are presented. The formal TOGA-based network model for cognitive decision-making enables one to indicate and analyze the nodes and arcs in which plant operators' and managers' errors may appear. TOGA's multi-level IPK (Information, Preferences, Knowledge) model of abstract intelligent agents (AIAs) is applied. In the NPP context, a super-safety approach is also discussed, taking into consideration unexpected events and managing them from a systemic perspective. As the nature of human errors depends on the specific properties of the decision-maker and the decisional context of operation, a classification of decision-making using IPK is suggested. Several types of initial decision-making situations useful for the diagnosis of NPP operators' and managers' errors are considered. The developed models can be used as a basis for applications to NPP educational or engineering simulators for training the NPP executive staff. (authors)

  16. The Role of Moist Processes in the Intrinsic Predictability of Indian Ocean Cyclones

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taraphdar, Sourav; Mukhopadhyay, P.; Leung, Lai-Yung R.

    The role of moist processes and the possibility of error cascades from cloud-scale processes affecting the intrinsic predictable time scale of a high-resolution convection-permitting model within the environment of tropical cyclones (TCs) over the Indian region are investigated. Consistent with past studies of extra-tropical cyclones, it is demonstrated that moist processes play a major role in forecast error growth, which may ultimately limit the intrinsic predictability of TCs. Small errors in the initial conditions may grow rapidly and cascade from smaller scales to larger scales through strong diabatic heating and nonlinearities associated with moist convection. Results from a suite of twin perturbation experiments for four tropical cyclones suggest that the error growth is significantly higher in convection-permitting simulations at 3.3 km resolution compared to simulations at 3.3 km and 10 km resolution with parameterized convection. Convective parameterizations with prescribed convective time scales, typically longer than the model time step, allow the effects of microphysical tendencies to average out, so convection responds to a smoother dynamical forcing. Without convective parameterizations, the finer-scale instabilities resolved at 3.3 km resolution and the stronger vertical motion that results from the cloud microphysical parameterizations removing super-saturation at each model time step can ultimately feed the error growth in convection-permitting simulations. This implies that careful considerations and/or improvements in cloud parameterizations are needed if numerical predictions are to be improved through increased model resolution. Rapid upscale error growth from convective scales may ultimately limit the intrinsic mesoscale predictability of TCs, which further supports the need for probabilistic forecasts of these events, even at the mesoscales.

  17. Effective force control by muscle synergies

    PubMed Central

    Berger, Denise J.; d'Avella, Andrea

    2014-01-01

    Muscle synergies have been proposed as a way for the central nervous system (CNS) to simplify the generation of motor commands, and they have been shown to explain a large fraction of the variation in muscle patterns across a variety of conditions. However, whether human subjects are able to control forces and movements effectively with a small set of synergies has not been tested directly. Here we show that muscle synergies can be used to generate target forces in multiple directions with the same accuracy achieved using individual muscles. We recorded electromyographic (EMG) activity from 13 arm muscles and isometric hand forces during a force reaching task in a virtual environment. From these data we estimated the force associated with each muscle by linear regression, and we identified muscle synergies by non-negative matrix factorization. We compared trajectories of a virtual mass displaced by the force estimated using the entire set of recorded EMGs to trajectories obtained using 4–5 muscle synergies. While trajectories were similar, when feedback was provided according to force estimated from recorded EMGs (EMG-control), on average trajectories generated with the synergies were less accurate. However, when feedback was provided according to recorded force (force-control), we did not find significant differences in initial angle error and endpoint error. We then tested whether synergies could be used as effectively as individual muscles to control cursor movement in the force reaching task by providing feedback according to force estimated from the projection of the recorded EMGs into synergy space (synergy-control). Human subjects were able to perform the task immediately after switching from force-control to EMG-control and synergy-control, and we found no differences between initial movement direction errors and endpoint errors in all control modes. These results indicate that muscle synergies provide an effective strategy for motor coordination. PMID:24860489
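
    Synergy extraction by non-negative matrix factorization, as described above, can be sketched with scikit-learn; the EMG matrix below is synthetic (13 muscles by 500 samples), so the recovered factors are only illustrative.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # Synthetic non-negative "EMG" data: 13 muscles x 500 time samples,
    # generated from 4 underlying synergies plus a little noise.
    rng = np.random.default_rng(1)
    emg = rng.random((13, 4)) @ rng.random((4, 500)) + 0.01 * rng.random((13, 500))

    # EMG ~ W (muscle weights of each synergy) x H (synergy activations).
    model = NMF(n_components=4, init='nndsvda', max_iter=500, random_state=0)
    W = model.fit_transform(emg)   # 13 x 4
    H = model.components_          # 4 x 500
    print(W.shape, H.shape, f"reconstruction error: {model.reconstruction_err_:.3f}")
    ```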

  18. Corrections of clinical chemistry test results in a laboratory information system.

    PubMed

    Wang, Sihe; Ho, Virginia

    2004-08-01

    The recently released reports by the Institute of Medicine, To Err Is Human and Patient Safety, have received national attention because of their focus on the problem of medical errors. Although a small number of studies have reported on errors in general clinical laboratories, there are, to our knowledge, no reported studies that focus on errors in pediatric clinical laboratory testing. To characterize the errors that led to corrections of pediatric clinical chemistry results in the laboratory information system, Misys. To provide initial data on the errors detected in pediatric clinical chemistry laboratories in order to improve patient safety in pediatric health care. All clinical chemistry staff members were informed of the study and were requested to report in writing when a correction was made in the laboratory information system, Misys. Errors were detected either by the clinicians (the results did not fit the patients' clinical conditions) or by the laboratory technologists (the results were double-checked, and the worksheets were carefully examined twice a day). No incident that was discovered before or during the final validation was included. On each Monday of the study, we generated a report from Misys that listed all of the corrections made during the previous week. We then categorized the corrections according to the types and stages of the incidents that led to the corrections. A total of 187 incidents were detected during the 10-month study, representing a 0.26% error detection rate per requisition. The distribution of the detected incidents included 31 (17%) preanalytic incidents, 46 (25%) analytic incidents, and 110 (59%) postanalytic incidents. The errors related to noninterfaced tests accounted for 50% of the total incidents and for 37% of the affected tests and orderable panels, while the noninterfaced tests and panels accounted for 17% of the total test volume in our laboratory. This pilot study provided the rate and categories of errors detected in a pediatric clinical chemistry laboratory based on the corrections of results in the laboratory information system. Directly interfacing the instruments to the laboratory information system had favorable effects on reducing laboratory errors.

  19. African Easterly Waves in 30-day High-Resolution Global Simulations: A Case Study During the 2006 NAMMA Period

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Tao, Wei-Kuo; Wu, Man-Li C.

    2010-01-01

    In this study, extended-range (30-day) high-resolution simulations with the NASA global mesoscale model are conducted to simulate the initiation and propagation of six consecutive African easterly waves (AEWs) from late August to September 2006 and their association with hurricane formation. It is shown that the statistical characteristics of individual AEWs are realistically simulated, with larger errors in the 5th and 6th AEWs. Remarkable simulations of a mean African easterly jet (AEJ) are also obtained. Nine additional 30-day experiments suggest that although land surface processes might contribute to the predictability of the AEJ and AEWs, the initiation and detailed evolution of AEWs still depend on the accurate representation of dynamic and land surface initial conditions and their time-varying nonlinear interactions. Of interest is the potential to extend the lead time for predicting hurricane formation (e.g., a lead time of up to 22 days) as the 4th AEW is realistically simulated.

  20. In-Flight Alignment Using H∞ Filter for Strapdown INS on Aircraft

    PubMed Central

    Pei, Fu-Jun; Liu, Xuan; Zhu, Li

    2014-01-01

    In-flight alignment is an effective way to improve the accuracy and speed of initial alignment for a strapdown inertial navigation system (INS). During aircraft flight, strapdown INS alignment is disturbed by linear and angular movements of the aircraft. To deal with the disturbances in dynamic initial alignment, a novel alignment method for strapdown INS is investigated in this paper. In this method, an initial alignment error model of the strapdown INS in the inertial frame is established. The observability of the system is discussed using piecewise constant system (PWCS) theory, and the observable degree is computed by singular value decomposition (SVD). It is demonstrated that the system is completely observable and that all the system state parameters can be estimated by an optimal filter. An H∞ filter is then designed to handle the uncertainty of the measurement noise. The simulation results demonstrate that the proposed algorithm can reach a better accuracy under dynamic disturbance conditions. PMID:24511300
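
    The SVD-based observability check generalizes to any linear system. A minimal sketch with toy matrices (not the paper's alignment error model):

    ```python
    import numpy as np

    def observable_degree(A, C):
        """Stack the observability matrix [C; CA; ...; CA^(n-1)] and return
        its singular values; near-zero values flag weakly observable
        directions, and full rank means the system is completely observable."""
        n = A.shape[0]
        blocks = [C]
        for _ in range(n - 1):
            blocks.append(blocks[-1] @ A)
        s = np.linalg.svd(np.vstack(blocks), compute_uv=False)
        return s, int(np.sum(s > s[0] * 1e-10))

    # Toy 3-state chain observed through its first state only:
    A = np.array([[1.0, 0.1, 0.0],
                  [0.0, 1.0, 0.1],
                  [0.0, 0.0, 1.0]])
    C = np.array([[1.0, 0.0, 0.0]])
    s, rank = observable_degree(A, C)
    print(f"singular values: {np.round(s, 4)}, rank = {rank} (3 => fully observable)")
    ```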

  1. Interpreting Repeated Temperature-Depth Profiles for Groundwater Flow

    NASA Astrophysics Data System (ADS)

    Bense, Victor F.; Kurylyk, Barret L.; van Daal, Jonathan; van der Ploeg, Martine J.; Carey, Sean K.

    2017-10-01

    Temperature can be used to trace groundwater flows due to thermal disturbances of subsurface advection. Prior hydrogeological studies that have used temperature-depth profiles to estimate vertical groundwater fluxes have either ignored the influence of climate change by employing steady-state analytical solutions or applied transient techniques to study temperature-depth profiles recorded at only a single point in time. Transient analyses of a single profile are predicated on the accurate determination of an unknown profile at some time in the past to form the initial condition. In this study, we use both analytical solutions and a numerical model to demonstrate that boreholes with temperature-depth profiles recorded at multiple times can be analyzed to either overcome the uncertainty associated with estimating unknown initial conditions or to form an additional check for the profile fitting. We further illustrate that the common approach of assuming a linear initial temperature-depth profile can result in significant errors for groundwater flux estimates. Profiles obtained from a borehole in the Veluwe area, Netherlands in both 1978 and 2016 are analyzed for an illustrative example. Since many temperature-depth profiles were collected in the late 1970s and 1980s, these previously profiled boreholes represent a significant and underexploited opportunity to obtain repeat measurements that can be used for similar analyses at other sites around the world.
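
    The steady-state analytical solution referred to above is commonly the Bredehoeft and Papadopulos (1965) profile for one-dimensional conduction with vertical advection; a minimal sketch with illustrative parameter values follows.

    ```python
    import numpy as np

    def steady_profile(z, L, T_top, T_bottom, q, k_thermal=2.0, rho_c_water=4.19e6):
        """Bredehoeft-Papadopulos steady temperature-depth profile.
        z: depth below surface (m); L: profile length (m); q: vertical Darcy
        flux (m/s, positive downward); k_thermal: bulk thermal conductivity
        (W/m/K); rho_c_water: volumetric heat capacity of water (J/m^3/K).
        Parameter values here are illustrative, not site-specific."""
        Pe = rho_c_water * q * L / k_thermal          # thermal Peclet number
        if abs(Pe) < 1e-12:
            return T_top + (T_bottom - T_top) * z / L  # pure conduction: linear
        return T_top + (T_bottom - T_top) * np.expm1(Pe * z / L) / np.expm1(Pe)

    z = np.linspace(0.0, 100.0, 5)
    print(steady_profile(z, L=100.0, T_top=10.0, T_bottom=13.0, q=1e-8))
    # Downward flow (q > 0) bows the profile away from the linear conductive case.
    ```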

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coble, Jamie; Orton, Christopher; Schwantes, Jon

    The Multi-Isotope Process (MIP) Monitor provides an efficient approach to monitoring the process conditions in used nuclear fuel reprocessing facilities to support process verification and validation. The MIP Monitor applies multivariate analysis to gamma spectroscopy of reprocessing streams in order to detect small changes in the gamma spectrum, which may indicate changes in process conditions. This research extends the MIP Monitor by characterizing a used fuel sample after initial dissolution according to the type of reactor of origin (pressurized or boiling water reactor), initial enrichment, burnup, and cooling time. Simulated gamma spectra were used to develop and test three fuel characterization algorithms. The classification and estimation models employed are based on the partial least squares regression (PLS) algorithm. A PLS discriminant analysis model was developed which perfectly classified reactor type. Locally weighted PLS models were fitted on-the-fly to estimate continuous fuel characteristics. Burnup was predicted within 0.1% root mean squared percent error (RMSPE) and both cooling time and initial enrichment within approximately 2% RMSPE. This automated fuel characterization can be used to independently verify operator declarations of used fuel characteristics and to inform the MIP Monitor anomaly detection routines at later stages of the fuel reprocessing stream, improving sensitivity to changes in operational parameters and material diversions.
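
    The PLS estimation step can be sketched with scikit-learn; the spectra and burnup values below are synthetic stand-ins, and RMSPE is computed as defined in the abstract.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Synthetic training set: 200 "gamma spectra" with 50 channels each, and
    # burnup values loosely tied to the first few channels (illustrative only).
    rng = np.random.default_rng(2)
    X = rng.random((200, 50))
    burnup = 20.0 + 30.0 * X[:, :5].mean(axis=1) + 0.5 * rng.standard_normal(200)

    pls = PLSRegression(n_components=5).fit(X, burnup)
    pred = pls.predict(X).ravel()

    # Root mean squared percent error (RMSPE), the paper's figure of merit:
    rmspe = 100.0 * np.sqrt(np.mean(((pred - burnup) / burnup) ** 2))
    print(f"RMSPE = {rmspe:.2f}%")
    ```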

  3. Pathloss Calculation Using the Transmission Line Matrix and Finite Difference Time Domain Methods With Coarse Grids

    DOE PAGES

    Nutaro, James; Kuruganti, Teja

    2017-02-24

    Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.
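
    The "true value for free-space propagation" against which such coarse-grid results are checked is the standard free-space pathloss relation. A minimal reference implementation (a textbook formula, not code from the paper):

    ```python
    import numpy as np

    def free_space_pathloss_db(distance_m, frequency_hz, c=3.0e8):
        """Free-space pathloss in dB: FSPL = 20*log10(4*pi*d/lambda)."""
        wavelength = c / frequency_hz
        return 20.0 * np.log10(4.0 * np.pi * distance_m / wavelength)

    # Pathloss at 1 km for a 900 MHz signal:
    print(f"{free_space_pathloss_db(1000.0, 900e6):.1f} dB")   # ~91.5 dB
    ```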

  4. A global perspective of the limits of prediction skill based on the ECMWF ensemble

    NASA Astrophysics Data System (ADS)

    Zagar, Nedjeljka

    2016-04-01

    This talk presents a new model of global forecast error growth, applied to the forecast errors simulated by the ensemble prediction system (ENS) of the ECMWF. The proxy for forecast errors is the total spread of the ECMWF operational ensemble forecasts, obtained by the decomposition of the wind and geopotential fields in normal-mode functions. In this way, the ensemble spread can be quantified separately for the balanced and inertio-gravity (IG) modes for every forecast range. Ensemble reliability is defined for the balanced and IG modes by comparing the ensemble spread with the control analysis in each scale. The results show that initial uncertainties in the ECMWF ENS are largest in the tropical large-scale modes and their spatial distribution is similar to the distribution of the short-range forecast errors. Initially the ensemble spread grows most in the smallest scales and in the synoptic range of the IG modes, but the overall growth is dominated by the increase of spread in balanced modes at synoptic and planetary scales in the midlatitudes. During the forecasts, the distribution of spread in the balanced and IG modes grows towards the climatological spread distribution characteristic of the analyses. The ENS system is found to be somewhat under-dispersive, which is associated with a lack of tropical variability, primarily the Kelvin waves. The new model of forecast error growth has three fitting parameters to parameterize the initial fast growth and the slower exponential error growth later on. The asymptotic values of forecast errors are independent of the exponential growth rate. The errors due to unbalanced dynamics are found to saturate in around 10 days, while the balanced and total errors saturate in 3 to 4 weeks. Reference: Žagar, N., R. Buizza, and J. Tribbia, 2015: A three-dimensional multivariate modal analysis of atmospheric predictability with application to the ECMWF ensemble. J. Atmos. Sci., 72, 4423-4444.
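
    Fitting such a growth curve is routine once a parametric form is chosen. A minimal sketch using one plausible three-parameter form and hypothetical spread data (not necessarily the parameterization used in the talk):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def error_growth(t, e0, e_inf, a):
        """Three-parameter error-growth curve: fast initial growth from e0
        relaxing exponentially (rate a) toward the saturation value e_inf."""
        return e_inf - (e_inf - e0) * np.exp(-a * t)

    # Hypothetical ensemble-spread values (arbitrary units) vs. forecast day:
    t_days = np.array([0, 1, 2, 3, 5, 7, 10, 14, 21, 28], dtype=float)
    spread = np.array([0.5, 1.8, 3.0, 4.0, 5.6, 6.7, 7.8, 8.6, 9.2, 9.4])

    popt, _ = curve_fit(error_growth, t_days, spread, p0=[0.5, 10.0, 0.2])
    print("e0 = {:.2f}, asymptote = {:.2f}, rate = {:.2f}/day".format(*popt))
    ```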

  5. How Do Vision and Hearing Impact Pedestrian Time-to-Arrival Judgments?

    PubMed Central

    Roper, JulieAnne M.; Hassan, Shirin E.

    2014-01-01

    Purpose To determine how accurate normally-sighted male and female pedestrians were at making time-to-arrival (TTA) judgments of approaching vehicles when using just their hearing or both their hearing and vision. Methods Ten male and 14 female subjects with confirmed normal vision and hearing estimated the TTA of approaching vehicles along an unsignalized street under two sensory conditions: (i) using both habitual vision and hearing; and (ii) using habitual hearing only. All subjects estimated how long the approaching vehicle would take to reach them (i.e., the TTA). The actual TTA of vehicles was also measured using custom-made sensors. The error in TTA judgments for each subject under each sensory condition was calculated as the difference between the actual and estimated TTA. A secondary timing experiment was also conducted to adjust each subject's TTA judgments for their "internal metronome". Results Error in TTA judgments changed significantly as a function of both the actual TTA (p<0.0001) and sensory condition (p<0.0001). While no main effect for gender was found (p=0.19), the way the TTA judgments varied within each sensory condition for each gender was different (p<0.0001). Females tended to be equally accurate under either condition (p≥0.01), with the exception of TTA judgments made when the actual TTA was two seconds or less or eight seconds or longer, for which the vision-and-hearing condition was more accurate (p≤0.002). Males made more accurate TTA judgments under the hearing-only condition for actual TTA values of five seconds or less (p<0.0001), after which there were no significant differences between the two conditions (p≥0.01). Conclusions Our data suggest that males and females use visual and auditory information differently when making TTA judgments. While the sensory condition did not affect the females' accuracy in judgments, males initially tended to be more accurate when using their hearing only. PMID:24509543

  6. "Bad Luck Mutations": DNA Mutations Are not the Whole Answer to Understanding Cancer Risk.

    PubMed

    Trosko, James E; Carruba, Giuseppe

    2017-01-01

    It has been proposed that many human cancers are generated by intrinsic mechanisms that produce "Bad Luck" mutations during the proliferation of organ-specific adult stem cells. There have been serious challenges to this interpretation, including multiple extrinsic factors thought to be correlated with mutations found in cancers associated with these exposures. While there is some support for both interpretations, both ignore several concepts of the multistage, multimechanism process of carcinogenesis, namely: (1) mutations can be generated by both "errors of DNA repair" and "errors of DNA replication" during the "initiation" process of carcinogenesis; (2) "initiated" stem cells must be clonally amplified by nonmutagenic, intrinsic or extrinsic epigenetic mechanisms; (3) organ-specific stem cell numbers can be modified during in utero development, thereby altering the risk of cancer later in life; and (4) epigenetic tumor promoters are characterized by species, individual genetic, gender, and developmental-state specificities and threshold levels to be active; sustained and long-term exposures; and exposures in the absence of antioxidant "antipromoters." Because some stem cells inevitably acquire "initiating" mutations through either "errors of DNA repair" or "errors of DNA replication", whether a tumor forms depends on the promotion phase of carcinogenesis. While we can reduce the frequency of mutagenic "initiated" cells, we can never reduce it to zero.

  7. Nonlinear system controller design based on domain of attraction: An application to CELSS analysis and control

    NASA Technical Reports Server (NTRS)

    Babcock, P. S., IV

    1986-01-01

    Nonlinear system controller design based on the domain of attraction is presented. This approach is particularly suited to investigating Closed Ecological Life Support System (CELSS) models. In particular, the dynamic consequences of changes in the waste storage capacity and system mass, and how information is used for control in CELSS models, are examined. The models' high dimensionality and nonlinear state equations make them difficult to analyze by any other technique. The domain of attraction is the region of initial conditions that tend toward an attractor, and it is delineated by randomly selecting initial conditions from the region of state space being investigated. Error analysis is done by repeating the domain simulations with independent samples. A refinement of this region is the domain of performance, which is the region of initial conditions meeting a performance criterion. In nonlinear systems, local stability does not ensure stability over a larger region. The domain of attraction marks out this stability region; hence, it can be considered a measure of a nonlinear system's ability to recover from state perturbations. Considering random perturbations, the minimum radius of the domain is a measure of the magnitude of perturbations for which recovery is guaranteed. The design of both linear and nonlinear controllers is shown. Three CELSS models, with 9 to 30 state variables, are presented. Measures of the domain of attraction are used to show the global behavior of these models under a variety of design and controller scenarios.
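
    The Monte Carlo delineation of the domain of attraction is simple to sketch. Below, random initial conditions are sampled from a box and each trajectory is integrated to test convergence to an attractor; the one-dimensional system is a toy stand-in for the CELSS models.

    ```python
    import numpy as np

    def converges(f, x0, x_star, dt=0.01, t_max=50.0, tol=1e-2):
        """Forward-Euler integration of dx/dt = f(x); returns True if the
        trajectory from x0 ends up within tol of the attractor x_star."""
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(int(t_max / dt)):
            x = x + dt * f(x)
            if not np.all(np.isfinite(x)):
                return False
        return np.linalg.norm(x - x_star) < tol

    def domain_fraction(f, x_star, radius, n_samples=500, seed=0):
        """Fraction of initial conditions, sampled uniformly from a box of
        half-width `radius` around x_star, that converge to x_star."""
        rng = np.random.default_rng(seed)
        return np.mean([converges(f, x_star + rng.uniform(-radius, radius, x_star.shape), x_star)
                        for _ in range(n_samples)])

    # Toy system dx/dt = x - x^3 with attractors at +1 and -1:
    frac = domain_fraction(lambda x: x - x**3, np.array([1.0]), radius=1.5)
    print(f"fraction attracted to x* = +1: {frac:.2f}")   # ~0.83 expected
    ```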

  8. Closed-loop stability of linear quadratic optimal systems in the presence of modeling errors

    NASA Technical Reports Server (NTRS)

    Toda, M.; Patel, R.; Sridhar, B.

    1976-01-01

    The well-known stabilizing property of linear quadratic state feedback design is utilized to evaluate the robustness of a linear quadratic feedback design in the presence of modeling errors. Two general conditions are obtained for allowable modeling errors such that the resulting closed-loop system remains stable. One of these conditions is applied to obtain two more particular conditions which are readily applicable to practical situations where a designer has information on the bounds of modeling errors. Relations are established between the allowable parameter uncertainty and the weighting matrices of the quadratic performance index, thereby enabling the designer to select appropriate weighting matrices to attain a robust feedback design.
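
    The robustness being characterized can be checked numerically for a given design. A minimal sketch: compute an LQR gain for a nominal model, then test closed-loop stability under a hypothetical perturbation of the A matrix.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    def lqr_gain(A, B, Q, R):
        """Continuous-time LQR: solve the algebraic Riccati equation and
        return the optimal state-feedback gain K (control u = -K x)."""
        P = solve_continuous_are(A, B, Q, R)
        return np.linalg.solve(R, B.T @ P)

    def is_stable(A_cl):
        return bool(np.all(np.linalg.eigvals(A_cl).real < 0))

    # Nominal model, and a hypothetical modeling error dA in the dynamics:
    A  = np.array([[0.0, 1.0], [-1.0, -0.5]])
    B  = np.array([[0.0], [1.0]])
    dA = np.array([[0.0, 0.0], [0.4, 0.2]])

    K = lqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
    print("nominal closed loop stable:  ", is_stable(A - B @ K))
    print("perturbed closed loop stable:", is_stable(A + dA - B @ K))
    ```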

  9. The causes of and factors associated with prescribing errors in hospital inpatients: a systematic review.

    PubMed

    Tully, Mary P; Ashcroft, Darren M; Dornan, Tim; Lewis, Penny J; Taylor, David; Wass, Val

    2009-01-01

    Prescribing errors are common, they result in adverse events and harm to patients, and it is unclear how best to prevent them because recommendations are more often based on surmised rather than empirically collected data. The aim of this systematic review was to identify all informative published evidence concerning the causes of and factors associated with prescribing errors in specialist and non-specialist hospitals, collate it, analyse it qualitatively and synthesize conclusions from it. Seven electronic databases were searched for articles published between 1985 and July 2008. The reference lists of all informative studies were searched for additional citations. To be included, a study had to be of handwritten prescriptions for adult or child inpatients that reported empirically collected data on the causes of or factors associated with errors. Publications in languages other than English and studies that evaluated errors for only one disease, one route of administration or one type of prescribing error were excluded. Seventeen papers reporting 16 studies, selected from 1268 papers identified by the search, were included in the review. Studies from the US and the UK in university-affiliated hospitals predominated (10/16 [62%]). The definition of a prescribing error varied widely and the included studies were highly heterogeneous. Causes were grouped according to Reason's model of accident causation into active failures, error-provoking conditions and latent conditions. The active failure most frequently cited was a mistake due to inadequate knowledge of the drug or the patient. Skill-based slips and memory lapses were also common. Where error-provoking conditions were reported, there was at least one per error. These included lack of training or experience, fatigue, stress, high workload for the prescriber and inadequate communication between healthcare professionals. Latent conditions included reluctance to question senior colleagues and inadequate provision of training. Prescribing errors are often multifactorial, with several active failures and error-provoking conditions often acting together to cause them. In the face of such complexity, solutions addressing a single cause, such as lack of knowledge, are likely to have only limited benefit. Further rigorous study, seeking potential ways of reducing error, needs to be conducted. Multifactorial interventions across many parts of the system are likely to be required.

  10. A Comparison of Three Methods for Computing Scale Score Conditional Standard Errors of Measurement. ACT Research Report Series, 2013 (7)

    ERIC Educational Resources Information Center

    Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu

    2013-01-01

    Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…

  11. Observation of non-classical correlations in sequential measurements of photon polarization

    NASA Astrophysics Data System (ADS)

    Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F.

    2016-10-01

    A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength.

  12. Robust estimation of thermodynamic parameters (ΔH, ΔS and ΔCp) for prediction of retention time in gas chromatography - Part II (Application).

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-12-18

    For this work, an analysis of parameter estimation for the retention factor in a GC model was performed, considering two different criteria: the sum of squared errors, and the maximum error in absolute value; relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, specialized initialization resulted in a significant reduction in the number of evaluations of the objective function (reducing computational time) in the parameter estimation. The reduction was between one and two orders of magnitude compared with simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Gaze Compensation as a Technique for Improving Hand–Eye Coordination in Prosthetic Vision

    PubMed Central

    Titchener, Samuel A.; Shivdasani, Mohit N.; Fallon, James B.; Petoe, Matthew A.

    2018-01-01

    Purpose Shifting the region-of-interest within the input image to compensate for gaze shifts (“gaze compensation”) may improve hand–eye coordination in visual prostheses that incorporate an external camera. The present study investigated the effects of eye movement on hand-eye coordination under simulated prosthetic vision (SPV), and measured the coordination benefits of gaze compensation. Methods Seven healthy-sighted subjects performed a target localization-pointing task under SPV. Three conditions were tested, modeling: retinally stabilized phosphenes (uncompensated); gaze compensation; and no phosphene movement (center-fixed). The error in pointing was quantified for each condition. Results Gaze compensation yielded a significantly smaller pointing error than the uncompensated condition for six of seven subjects, and a similar or smaller pointing error than the center-fixed condition for all subjects (two-way ANOVA, P < 0.05). Pointing error eccentricity and gaze eccentricity were moderately correlated in the uncompensated condition (azimuth: R2 = 0.47; elevation: R2 = 0.51) but not in the gaze-compensated condition (azimuth: R2 = 0.01; elevation: R2 = 0.00). Increased variability in gaze at the time of pointing was correlated with greater reduction in pointing error in the center-fixed condition compared with the uncompensated condition (R2 = 0.64). Conclusions Eccentric eye position impedes hand–eye coordination in SPV. While limiting eye eccentricity in uncompensated viewing can reduce errors, gaze compensation is effective in improving coordination for subjects unable to maintain fixation. Translational Relevance The results highlight the present necessity for suppressing eye movement and support the use of gaze compensation to improve hand–eye coordination and localization performance in prosthetic vision. PMID:29321945

  14. Feature Migration in Time: Reflection of Selective Attention on Speech Errors

    ERIC Educational Resources Information Center

    Nozari, Nazbanou; Dell, Gary S.

    2012-01-01

    This article describes an initial study of the effect of focused attention on phonological speech errors. In 3 experiments, participants recited 4-word tongue twisters and focused attention on 1 (or none) of the words. The attended word was singled out differently in each experiment; participants were under instructions to avoid errors on the…

  15. The effect of divided attention on novices and experts in laparoscopic task performance.

    PubMed

    Ghazanfar, Mudassar Ali; Cook, Malcolm; Tang, Benjie; Tait, Iain; Alijani, Afshin

    2015-03-01

    Attention is important for the skilful execution of surgery. The surgeon's attention during surgery is divided between the surgery and outside distractions. The effect of this divided attention has not been well studied previously. We aimed to compare the effect of dividing the attention of novices and experts on laparoscopic task performance. Following ethical approval, 25 novices and 9 expert surgeons performed a standardised peg transfer task in a laboratory setup under three randomly assigned conditions: silent as the control condition and two standardised auditory distracting tasks requiring response (easy and difficult) as study conditions. Human reliability assessment was used for surgical task analysis. Primary outcome measures were correct auditory responses, task time, number of surgical errors and instrument movements. Secondary outcome measures included error rate, error probability and hand-specific differences. Non-parametric statistics were used for data analysis. 21,109 movements and 9,036 total errors were analysed. Novices had increased mean task completion time (171 ± 44 s vs. 149 ± 34 s, p < 0.05), number of total movements (227 ± 27 vs. 213 ± 26, p < 0.05) and number of errors (127 ± 51 vs. 96 ± 28, p < 0.05) during difficult study conditions compared to control. Correct responses to auditory stimuli were less frequent in experts (68%) than in novices (80%). There was a positive correlation between error rate and error probability in novices (r² = 0.533, p < 0.05) but not in experts (r² = 0.346, p > 0.05). Divided attention conditions in the theatre environment require careful consideration during surgical training, as junior surgeons are less able to focus their attention under these conditions.

  16. Strength conditions for the elastic structures with a stress error

    NASA Astrophysics Data System (ADS)

    Matveev, A. D.

    2017-10-01

    As is known, constraints (strength conditions) on the safety factor are established for elastic structures and design details of a particular class, e.g. aviation structures; that is, the safety factor values of such structures should lie within a given range. It should be noted that these constraints are set for the safety factors corresponding to analytical (exact) solutions of the elasticity problems formulated for the structures. Developing analytical solutions for most structures, especially irregularly shaped ones, is associated with great difficulties. Approximate approaches to solving the elasticity problems, e.g. the technical theories of deformation of homogeneous and composite plates, beams and shells, are widely used for a great number of structures. Technical theories based on simplifying hypotheses give rise to approximate (technical) solutions with an irreducible error whose exact value is difficult to determine. In static calculations of structural strength with a small specified range for the safety factor, applying technical solutions (based on the theory of strength of materials) is therefore difficult. However, there are numerical methods for developing approximate solutions of elasticity problems with arbitrarily small errors. In the present paper, adjusted reference (specified) strength conditions are proposed for the structural safety factor corresponding to an approximate solution of the elasticity problem; the proposed conditions take the stress error estimate into account. It is shown that, to fulfill the specified strength conditions for the safety factor of a given structure corresponding to an exact solution, adjusted strength conditions for the safety factor corresponding to an approximate solution are required. The stress error estimate on which the adjusted strength conditions are based is determined from the specified strength conditions. Adjusted strength conditions expressed in terms of allowable stresses are also suggested. The adjusted strength conditions make it possible to determine the set of approximate solutions that satisfy the specified strength conditions. Examples are given of specified strength conditions satisfied using technical (strength-of-materials) solutions and strength conditions, as well as of strength conditions satisfied using approximate solutions with a small error.
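
    The abstract does not state the adjusted conditions explicitly; the following is one plausible way such conditions can be derived, assuming the approximate stress has a relative error bounded by delta. It is a sketch of the idea, not necessarily the paper's exact formulation.

```python
# Hypothetical derivation, not the paper's exact formulation: if the approximate
# stress sigma_a satisfies |sigma_a - sigma_exact| <= delta * sigma_a, the exact
# safety factor n lies in [n_a / (1 + delta), n_a / (1 - delta)], so requiring
# n1 * (1 + delta) <= n_a <= n2 * (1 - delta) guarantees n1 <= n <= n2.
def adjusted_bounds(n1, n2, delta):
    lo, hi = n1 * (1.0 + delta), n2 * (1.0 - delta)
    if lo > hi:
        raise ValueError("stress error too large for the specified safety-factor range")
    return lo, hi

print(adjusted_bounds(1.5, 2.0, 0.05))   # -> (1.575, 1.9)
```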

  17. Effects of robotically modulating kinematic variability on motor skill learning and motivation

    PubMed Central

    Reinkensmeyer, David J.

    2015-01-01

    It is unclear how the variability of kinematic errors experienced during motor training affects skill retention and motivation. We used force fields produced by a haptic robot to modulate the kinematic errors of 30 healthy adults during a period of practice in a virtual simulation of golf putting. On day 1, participants became relatively skilled at putting to a near and far target by first practicing without force fields. On day 2, they warmed up at the task without force fields, then practiced with force fields that either reduced or augmented their kinematic errors and were finally assessed without the force fields active. On day 3, they returned for a long-term assessment, again without force fields. A control group practiced without force fields. We quantified motor skill as the variability in impact velocity at which participants putted the ball. We quantified motivation using a self-reported, standardized scale. Only individuals who were initially less skilled benefited from training; for these people, practicing with reduced kinematic variability improved skill more than practicing in the control condition. This reduced kinematic variability also improved self-reports of competence and satisfaction. Practice with increased kinematic variability worsened these self-reports as well as enjoyment. These negative motivational effects persisted on day 3 in a way that was uncorrelated with actual skill. In summary, robotically reducing kinematic errors in a golf putting training session improved putting skill more for less skilled putters. Robotically increasing kinematic errors had no performance effect, but decreased motivation in a persistent way. PMID:25673732

  18. Effects of robotically modulating kinematic variability on motor skill learning and motivation.

    PubMed

    Duarte, Jaime E; Reinkensmeyer, David J

    2015-04-01

    It is unclear how the variability of kinematic errors experienced during motor training affects skill retention and motivation. We used force fields produced by a haptic robot to modulate the kinematic errors of 30 healthy adults during a period of practice in a virtual simulation of golf putting. On day 1, participants became relatively skilled at putting to a near and far target by first practicing without force fields. On day 2, they warmed up at the task without force fields, then practiced with force fields that either reduced or augmented their kinematic errors and were finally assessed without the force fields active. On day 3, they returned for a long-term assessment, again without force fields. A control group practiced without force fields. We quantified motor skill as the variability in impact velocity at which participants putted the ball. We quantified motivation using a self-reported, standardized scale. Only individuals who were initially less skilled benefited from training; for these people, practicing with reduced kinematic variability improved skill more than practicing in the control condition. This reduced kinematic variability also improved self-reports of competence and satisfaction. Practice with increased kinematic variability worsened these self-reports as well as enjoyment. These negative motivational effects persisted on day 3 in a way that was uncorrelated with actual skill. In summary, robotically reducing kinematic errors in a golf putting training session improved putting skill more for less skilled putters. Robotically increasing kinematic errors had no performance effect, but decreased motivation in a persistent way. Copyright © 2015 the American Physiological Society.

  19. A Unified Data Assimilation Strategy for Regional Coupled Atmosphere-Ocean Prediction Systems

    NASA Astrophysics Data System (ADS)

    Xie, Lian; Liu, Bin; Zhang, Fuqing; Weng, Yonghui

    2014-05-01

    Improving tropical cyclone (TC) forecasts is a top priority in weather forecasting. Assimilating various observational data to produce better initial conditions for numerical models using advanced data assimilation techniques has been shown to benefit TC intensity forecasts, whereas assimilating the large-scale environmental circulation into regional models by spectral nudging or Scale-Selective Data Assimilation (SSDA) has been demonstrated to improve TC track forecasts. Meanwhile, taking into account various air-sea interaction processes with high-resolution coupled air-sea modelling systems has also been shown to improve TC intensity forecasts. Despite these advances in data assimilation and air-sea coupled models, large errors in TC intensity and track forecasting remain. For example, Hurricane Nate (2011) posed a considerable challenge to the TC operational forecasting community, with very large intensity errors (27, 25, and 40 kts at 48, 72, and 96 h, respectively) in the official forecasts. Considering the slow-moving nature of Hurricane Nate, it is reasonable to hypothesize that air-sea interaction processes played a critical role in the intensity change of the storm, and that an accurate representation of upper-ocean dynamics and thermodynamics is necessary to quantitatively describe these processes. Currently, data assimilation techniques are generally applied to hurricane forecasting only in stand-alone atmospheric or oceanic models. In fact, most regional hurricane forecasting models include data assimilation only to improve the initial condition of the atmospheric model. In such a situation, the benefit of adjustments in one model (atmospheric or oceanic) gained by assimilating observational data can be compromised by errors from the other model. Thus, unified data assimilation techniques for coupled air-sea modelling systems, which not only simultaneously assimilate atmospheric and oceanic observations into the coupled system but also nudge the large-scale environmental flow in the regional model towards global model forecasts, are increasingly necessary. In this presentation, we outline a strategy for an integrated approach to air-sea coupled data assimilation and discuss its benefits and feasibility based on incremental results for selected historical hurricane cases.
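
    A minimal sketch of the nudging component of such a strategy, assuming a generic low-pass filter stands in for the spectral selection of large scales; all names, fields, and timescales below are illustrative, not the authors' implementation.

```python
import numpy as np

# Sketch: relax only the large-scale part of the regional state toward the global
# forecast with relaxation timescale tau. low_pass() is a stand-in for a spectral filter.
def nudge(regional, global_forecast, low_pass, dt, tau):
    return regional + (low_pass(global_forecast) - low_pass(regional)) * (dt / tau)

# toy usage: a running mean stands in for the spectral low-pass filter
low_pass = lambda f: np.convolve(f, np.ones(11) / 11, mode="same")
regional = np.random.default_rng(0).normal(size=200)     # illustrative 1-D state
global_forecast = np.zeros(200)
regional = nudge(regional, global_forecast, low_pass, dt=600.0, tau=6 * 3600.0)
```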

  20. Error Propagation Analysis in the SAE Architecture Analysis and Design Language (AADL) and the EDICT Tool Framework

    NASA Technical Reports Server (NTRS)

    LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.

    2011-01-01

    This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.

  1. Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation.

    PubMed

    Frick, Eric; Rahmatalla, Salam

    2018-04-04

    The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames’ joint center estimations via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimation. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial as existing methods often do, the proposed method produced time-varying solutions that were highly correlated ( r > 0.82) with the true, time-varying joint center solution.

  2. A Robust Self-Alignment Method for Ship's Strapdown INS Under Mooring Conditions

    PubMed Central

    Sun, Feng; Lan, Haiyu; Yu, Chunyang; El-Sheimy, Naser; Zhou, Guangtao; Cao, Tong; Liu, Hang

    2013-01-01

    Strapdown inertial navigation systems (INS) need an alignment process to determine the initial attitude matrix between the body frame and the navigation frame. The conventional alignment process is to compute the initial attitude matrix using the gravity and Earth rotation rate measurements. However, under mooring conditions, the inertial measurement unit (IMU) employed in a ship's strapdown INS often suffers from both intrinsic sensor noise components and external disturbance components caused by the motion of sea waves and wind waves, so a rapid and precise alignment of a ship's strapdown INS without any auxiliary information is hard to achieve. A robust solution to this problem is given in this paper. The inertial-frame-based alignment method is utilized to adapt to the mooring condition; most of the periodic low-frequency external disturbance components can be removed by the mathematical integration and averaging characteristic of this method. A novel prefilter, named the hidden Markov model based Kalman filter (HMM-KF), is proposed to remove the relatively high-frequency error components. Unlike digital filters, the HMM-KF introduces almost no time delay. Turntable, mooring and sea experiments favorably validate the rapidness and accuracy of the proposed self-alignment method and the good de-noising performance of the HMM-KF. PMID:23799492

  3. Modified Balance Error Scoring System (M-BESS) test scores in athletes wearing protective equipment and cleats.

    PubMed

    Azad, Aftab Mohammad; Al Juma, Saad; Bhatti, Junaid Ahmad; Delaney, J Scott

    2016-01-01

    Balance testing is an important part of the initial concussion assessment. There is no research on the differences in Modified Balance Error Scoring System (M-BESS) scores when the test is performed in the real world as compared to control conditions. To assess the difference in M-BESS scores in athletes wearing their protective equipment and cleats on different surfaces as compared to control conditions. This cross-sectional study examined university North American football and soccer athletes. Three observers independently rated athletes performing the M-BESS test in three different conditions: (1) wearing shorts and T-shirt in bare feet on a firm surface (control); (2) wearing athletic equipment with cleats on FieldTurf; and (3) wearing athletic equipment with cleats on a firm surface. Mean M-BESS scores were compared between conditions. 60 participants were recruited: 39 from football (all males) and 21 from soccer (11 males and 10 females). Average age was 21.1 years (SD=1.8). Mean M-BESS scores were significantly lower (p<0.001) for cleats on FieldTurf (mean=26.3; SD=2.0) and for cleats on firm surface (mean=26.6; SD=2.1) as compared to the control condition (mean=28.4; SD=1.5). Females had lower scores than males for the cleats on FieldTurf condition (24.9 (SD=1.9) vs 27.3 (SD=1.6), p=0.005). Players who had taping or bracing on their ankles/feet had lower scores when tested in the cleats on firm surface condition (24.6 (SD=1.7) vs 26.9 (SD=2.0), p=0.002). Total M-BESS scores for athletes wearing protective equipment and cleats standing on FieldTurf or a firm surface are around two points lower than M-BESS scores performed on the same athletes under control conditions.

  4. Modified Balance Error Scoring System (M-BESS) test scores in athletes wearing protective equipment and cleats

    PubMed Central

    Azad, Aftab Mohammad; Al Juma, Saad; Bhatti, Junaid Ahmad; Delaney, J Scott

    2016-01-01

    Background Balance testing is an important part of the initial concussion assessment. There is no research on the differences in Modified Balance Error Scoring System (M-BESS) scores when the test is performed in the real world as compared to control conditions. Objective To assess the difference in M-BESS scores in athletes wearing their protective equipment and cleats on different surfaces as compared to control conditions. Methods This cross-sectional study examined university North American football and soccer athletes. Three observers independently rated athletes performing the M-BESS test in three different conditions: (1) wearing shorts and T-shirt in bare feet on a firm surface (control); (2) wearing athletic equipment with cleats on FieldTurf; and (3) wearing athletic equipment with cleats on a firm surface. Mean M-BESS scores were compared between conditions. Results 60 participants were recruited: 39 from football (all males) and 21 from soccer (11 males and 10 females). Average age was 21.1 years (SD=1.8). Mean M-BESS scores were significantly lower (p<0.001) for cleats on FieldTurf (mean=26.3; SD=2.0) and for cleats on firm surface (mean=26.6; SD=2.1) as compared to the control condition (mean=28.4; SD=1.5). Females had lower scores than males for the cleats on FieldTurf condition (24.9 (SD=1.9) vs 27.3 (SD=1.6), p=0.005). Players who had taping or bracing on their ankles/feet had lower scores when tested in the cleats on firm surface condition (24.6 (SD=1.7) vs 26.9 (SD=2.0), p=0.002). Conclusions Total M-BESS scores for athletes wearing protective equipment and cleats standing on FieldTurf or a firm surface are around two points lower than M-BESS scores performed on the same athletes under control conditions. PMID:27900181

  5. A virtual pointer to support the adoption of professional vision in laparoscopic training.

    PubMed

    Feng, Yuanyuan; McGowan, Hannah; Semsar, Azin; Zahiri, Hamid R; George, Ivan M; Turner, Timothy; Park, Adrian; Kleinsmith, Andrea; Mentis, Helena M

    2018-05-23

    To assess a virtual pointer in supporting surgical trainees' development of professional vision in laparoscopic surgery. We developed a virtual pointing and telestration system utilizing the Microsoft Kinect movement sensor as an overlay for any imaging system. Training with the application was compared to a standard condition, i.e., verbal instruction with unmediated gestures, in a laparoscopic training environment. Seven trainees performed four simulated laparoscopic tasks guided by an experienced surgeon as the trainer. Trainee performance was subjectively assessed by the trainee and trainer, and objectively measured by number of errors, time to task completion, and economy of movement. No significant differences in errors and time to task completion were obtained between the virtual pointer and standard conditions. Economy of movement in the non-dominant hand was significantly improved when using the virtual pointer ([Formula: see text]). The trainers perceived a significant improvement in trainee performance in the virtual pointer condition ([Formula: see text]), while the trainees perceived no difference. The trainers' perception of economy of movement was similar between the two conditions in the initial three runs and became significantly improved in the virtual pointer condition in the fourth run ([Formula: see text]). Results show that the virtual pointer system improves the trainer's perception of trainee's performance and this is reflected in the objective performance measures in the third and fourth training runs. The benefit of a virtual pointing and telestration system may be perceived by the trainers early on in training, but this is not evident in objective trainee performance until further mastery has been attained. In addition, the performance improvement in economy of motion specifically shows that the virtual pointer improves the adoption of professional vision: an improved ability to see and use the laparoscopic video, resulting in more direct instrument movement.

  6. Monthly ENSO Forecast Skill and Lagged Ensemble Size

    PubMed Central

    DelSole, T.; Tippett, M.K.; Pegion, K.

    2018-01-01

    The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real-time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real-time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8–10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skill of the weighted, lagged, and burst ensembles is found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities. PMID:29937973

  7. Monthly ENSO Forecast Skill and Lagged Ensemble Size

    NASA Astrophysics Data System (ADS)

    Trenary, L.; DelSole, T.; Tippett, M. K.; Pegion, K.

    2018-04-01

    The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real-time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real-time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8-10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skill of the weighted, lagged, and burst ensembles is found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities.
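
    The extrapolation idea can be sketched as follows, assuming (purely for illustration) a stationary exponential model for the lag covariance of forecast errors; the MSE of an N-member lagged ensemble mean is then the average of all pairwise error covariances.

```python
import numpy as np

# Sketch: assume a stationary exponential lag-covariance model C(|ti - tj|) for the
# forecast errors (an illustrative choice, not the fitted model from the paper).
# The MSE of the N-member lagged ensemble mean is (1/N^2) * sum_ij C_ij.
def lagged_ensemble_mse(n_members, var, decay_hours, lag_hours=6.0):
    lags = np.arange(n_members) * lag_hours
    cov = var * np.exp(-np.abs(lags[:, None] - lags[None, :]) / decay_hours)
    return cov.mean()

for n in (1, 4, 8, 16, 32):
    print(n, round(lagged_ensemble_mse(n, var=1.0, decay_hours=48.0), 3))
```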

  8. A model for the prediction of latent errors using data obtained during the development process

    NASA Technical Reports Server (NTRS)

    Gaffney, J. E., Jr.; Martello, S. J.

    1984-01-01

    A model for estimating the latent (or post-ship) error content of a body of software upon its initial release to the user, implemented in a program that runs on the IBM PC, is presented. The model employs the count of errors discovered at one or more of the error discovery activities during development, such as a design inspection, as the input data for a process which provides estimates of the total lifetime (injected) error content and of the latent (or post-ship) error content, i.e., the errors remaining at delivery. The model presumes that these activities cover all of the opportunities during the software development process for error discovery (and removal).
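
    A minimal sketch of the general idea, assuming (hypothetically) that each discovery activity independently detects an error with a known efficiency; the counts and efficiencies below are made up, not the paper's data or its actual estimator.

```python
# Sketch: if each discovery activity independently catches an error with a known
# efficiency, the fraction missed by all of them is the product of (1 - e_i), from
# which the total injected and latent (post-ship) error contents follow.
def latent_errors(discovered, efficiencies):
    p_missed = 1.0
    for e in efficiencies:
        p_missed *= (1.0 - e)
    total_injected = sum(discovered) / (1.0 - p_missed)
    return total_injected - sum(discovered)

# hypothetical counts from three discovery activities and assumed efficiencies
print(latent_errors(discovered=[120, 80, 40], efficiencies=[0.5, 0.4, 0.3]))
```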

  9. Exploring Senior Residents' Intraoperative Error Management Strategies: A Potential Measure of Performance Improvement.

    PubMed

    Law, Katherine E; Ray, Rebecca D; D'Angelo, Anne-Lise D; Cohen, Elaine R; DiMarco, Shannon M; Linsmeier, Elyse; Wiegmann, Douglas A; Pugh, Carla M

    The study aim was to determine whether residents' error management strategies changed across 2 simulated laparoscopic ventral hernia (LVH) repair procedures after receiving feedback on their initial performance. We hypothesized that error detection and recovery strategies would improve during the second procedure without hands-on practice. Retrospective review of participant procedural performances of simulated laparoscopic ventral herniorrhaphy. A total of 3 investigators reviewed procedure videos to identify surgical errors. Errors were deconstructed. Error management events were noted, including error identification and recovery. Residents performed the simulated LVH procedures during a course on advanced laparoscopy. Participants had 30 minutes to complete an LVH procedure. After verbal and simulator feedback, residents returned 24 hours later to perform a different, more difficult simulated LVH repair. Senior (N = 7; postgraduate year 4-5) residents in attendance at the course participated in this study. In the first LVH procedure, residents committed 121 errors (M = 17.14, standard deviation = 4.38). Although the number of errors increased to 146 (M = 20.86, standard deviation = 6.15) during the second procedure, residents progressed further in the second procedure. There was no significant difference in the number of errors committed for both procedures, but errors shifted to the late stage of the second procedure. Residents changed the error types that they attempted to recover (χ²(5) = 24.96, p < 0.001). For the second procedure, recovery attempts increased for action and procedure errors, but decreased for strategy errors. Residents also recovered the most errors in the late stage of the second procedure (p < 0.001). Residents' error management strategies changed between procedures following verbal feedback on their initial performance and feedback from the simulator. Errors and recovery attempts shifted to later steps during the second procedure. This may reflect residents' error management success in the earlier stages, which allowed further progression in the second simulation. Incorporating error recognition and management opportunities into surgical training could help track residents' learning curve and provide detailed, structured feedback on technical and decision-making skills. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  10. Effects of Parameter Uncertainty on Long-Term Simulations of Lake Alkalinity

    NASA Astrophysics Data System (ADS)

    Lee, Sijin; Georgakakos, Konstantine P.; Schnoor, Jerald L.

    1990-03-01

    A first-order second-moment uncertainty analysis has been applied to two lakes in the Adirondack Park, New York, to assess the long-term response of lakes to acid deposition. Uncertainty due to parameter error and initial condition error was considered. Because the enhanced trickle-down (ETD) model is calibrated with only 3 years of field data and is used to simulate a 50-year period, the uncertainty in the lake alkalinity prediction is relatively large. When a best estimate of parameter uncertainty is used, the annual average alkalinity is predicted to be -11 ± 28 μeq/L for Lake Woods and 142 ± 139 μeq/L for Lake Panther after 50 years. Hydrologic parameters and chemical weathering rate constants contributed most to the uncertainty of the simulations. Results indicate that the uncertainty in long-range predictions of lake alkalinity increased significantly over a 5- to 10-year period and then reached a steady state.
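
    For reference, a first-order second-moment analysis propagates parameter variances through the model via first-order sensitivities, Var(y) ≈ Σᵢ (∂y/∂pᵢ)² Var(pᵢ). A minimal sketch with a toy stand-in for the ETD model (all values hypothetical):

```python
import numpy as np

# First-order second-moment (FOSM) sketch: Var(y) ~= sum_i (dy/dp_i)^2 * Var(p_i),
# with sensitivities from forward finite differences. model(), p0, and the parameter
# variances are toy stand-ins, not the ETD model.
def fosm_variance(model, p0, p_var, dp=1e-4):
    y0 = model(p0)
    sens = np.array([(model(p0 + dp * np.eye(len(p0))[i]) - y0) / dp
                     for i in range(len(p0))])
    return np.sum(sens**2 * p_var)

model = lambda p: p[0] * np.exp(-p[1])             # toy alkalinity response
var_y = fosm_variance(model, np.array([100.0, 1.0]), np.array([25.0, 0.04]))
print("variance:", var_y, "standard deviation:", np.sqrt(var_y))
```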

  11. Non-Linearity in Wide Dynamic Range CMOS Image Sensors Utilizing a Partial Charge Transfer Technique.

    PubMed

    Shafie, Suhaidi; Kawahito, Shoji; Halin, Izhal Abdul; Hasan, Wan Zuha Wan

    2009-01-01

    The partial charge transfer technique can expand the dynamic range of a CMOS image sensor by synthesizing two types of signal, namely the long and short accumulation time signals. However, the short accumulation time signal obtained from the partial transfer operation suffers from non-linearity with respect to the incident light. In this paper, an analysis of the non-linearity of the partial charge transfer technique has been carried out, and the relationship between dynamic range and the non-linearity is studied. The results show that the non-linearity is caused by two factors, namely the current diffusion, which has an exponential relation with the potential barrier, and the initial condition of the photodiodes, whereby the error in the high illumination region increases as the ratio of the long to the short accumulation time rises. Moreover, raising the saturation level of the photodiodes also increases the error in the high illumination region.

  12. Opposite initialization to novel cues in dopamine signaling in ventral and posterior striatum in mice

    PubMed Central

    Menegas, William; Babayan, Benedicte M; Uchida, Naoshige; Watabe-Uchida, Mitsuko

    2017-01-01

    Dopamine neurons are thought to encode novelty in addition to reward prediction error (the discrepancy between actual and predicted values). In this study, we compared dopamine activity across the striatum using fiber fluorometry in mice. During classical conditioning, we observed opposite dynamics in dopamine axon signals in the ventral striatum (‘VS dopamine’) and the posterior tail of the striatum (‘TS dopamine’). TS dopamine showed strong excitation to novel cues, whereas VS dopamine showed no responses to novel cues until they had been paired with a reward. TS dopamine cue responses decreased over time, depending on what the cue predicted. Additionally, TS dopamine showed excitation to several types of stimuli including rewarding, aversive, and neutral stimuli whereas VS dopamine showed excitation only to reward or reward-predicting cues. Together, these results demonstrate that dopamine novelty signals are localized in TS along with general salience signals, while VS dopamine reliably encodes reward prediction error. DOI: http://dx.doi.org/10.7554/eLife.21886.001 PMID:28054919

  13. Dopaminergic dysfunction in schizophrenia: salience attribution revisited.

    PubMed

    Heinz, Andreas; Schlagenhauf, Florian

    2010-05-01

    A dysregulation of the mesolimbic dopamine system in schizophrenia patients may lead to aberrant attribution of incentive salience and contribute to the emergence of psychopathological symptoms like delusions. The dopaminergic signal has been conceptualized to represent a prediction error that indicates the difference between received and predicted reward. The incentive salience hypothesis states that dopamine mediates the attribution of "incentive salience" to conditioned cues that predict reward. This hypothesis was initially applied in the context of drug addiction and then transferred to schizophrenic psychosis. It was hypothesized that increased firing (chaotic or stress associated) of dopaminergic neurons in the striatum of schizophrenia patients attributes incentive salience to otherwise irrelevant stimuli. Here, we review recent neuroimaging studies directly addressing this hypothesis. They suggest that neuronal functions associated with dopaminergic signaling, such as the attribution of salience to reward-predicting stimuli and the computation of prediction errors, are indeed altered in schizophrenia patients and that this impairment appears to contribute to delusion formation.

  14. MSG SEVIRI Applications for Weather and Climate: Cloud Properties and Calibrations

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Nguyen, Louis; Smith, William L.; Palikonda, Rabindra; Doelling, David R.; Ayers, J. Kirk; Trepte, Qing Z.; Chang, Fu-Lung

    2006-01-01

    SEVIRI data are cross-calibrated against the corresponding Aqua and Terra MODIS channels. Compared to Terra MODIS, no significant trends are evident in the 0.65-, 0.86-, and 1.6-micron channel gains between May 2004 and May 2006, indicating excellent stability in the solar-channel sensors. On average, the corresponding Terra reflectances are 12, 14, and 1% greater than their SEVIRI counterparts. The Terra 3.8-micron channel brightness temperatures T are 7 and 4 K greater than their SEVIRI counterparts during day and night, respectively. The average brightness temperature differences between the MODIS and SEVIRI 8.6-, 10.8-, 12.0-, and 13.3-micron channels are between 0.5 and 2 K. The cloud properties are being derived hourly over Europe and, in initial comparisons, agree well with surface observations. Errors caused by residual calibration uncertainties, terminator conditions, and inaccurate temperature and humidity profiles are still problematic. Future versions will address those errors and the effects of multilayered clouds.

  15. Gravity compensation in a Strapdown Inertial Navigation System to improve the attitude accuracy

    NASA Astrophysics Data System (ADS)

    Zhu, Jing; Wang, Jun; Wang, Xingshu; Yang, Shuai

    2017-10-01

    Attitude errors in a strapdown inertial navigation system due to gravity disturbances and system noises can be relatively large, although they are bounded within the Schuler and Earth rotation periods. The principal objective of this investigation is to determine to what extent accurate gravity data can improve attitude accuracy. The way gravity disturbances affect the attitude was analyzed and compared with system noises via the analytic solution and simulation. Gravity disturbances affect attitude accuracy by introducing an initial attitude error and an equivalent accelerometer bias. With the development of high-precision inertial devices and the application of rotation modulation technology, the gravity disturbance can no longer be neglected. Gravity compensation was performed using EGM2008, and simulations with and without accurate gravity compensation under varying navigation conditions were carried out. The results show that gravity compensation clearly improves the horizontal components of attitude accuracy, while the yaw angle is badly affected by the uncompensated gyro bias in the vertical channel.

  16. The theoretical accuracy of Runge-Kutta time discretizations for the initial boundary value problem: A careful study of the boundary error

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Gottlieb, David; Abarbanel, Saul; Don, Wai-Sun

    1993-01-01

    The conventional method of imposing time-dependent boundary conditions for Runge-Kutta (RK) time advancement reduces the formal accuracy of the space-time method to first order locally, and second order globally, independently of the spatial operator. This counterintuitive result is analyzed in this paper. Two methods of eliminating this problem are proposed for the linear constant-coefficient case: (1) impose the exact boundary condition only at the end of the complete RK cycle; (2) impose consistent intermediate boundary conditions derived from the physical boundary condition and its derivatives. The first method, while retaining the RK accuracy in all cases, results in a scheme with a much reduced CFL condition, rendering the RK scheme less attractive. The second method retains the same allowable time step as the periodic problem. However, it is a general remedy only for the linear case. For non-linear hyperbolic equations the second method is effective only for RK schemes of third-order accuracy or less. Numerical studies are presented to verify the efficacy of each approach.
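
    The first remedy (imposing the exact boundary condition only at the end of the complete RK cycle) can be sketched as below for linear advection. First-order upwind differencing is used purely for brevity, so this illustrates only the boundary treatment, not the accuracy study itself; all discretization choices are illustrative assumptions.

```python
import numpy as np

# Sketch of remedy (1) for linear advection u_t = -u_x: advance all RK stages with the
# boundary value frozen, then impose the exact boundary condition g(t) once per full step.
def rhs(u, dx):
    du = np.empty_like(u)
    du[1:] = -(u[1:] - u[:-1]) / dx    # first-order upwind difference
    du[0] = 0.0                        # inflow node held fixed within the stages
    return du

def rk4_step(u, t, dt, dx, g):
    k1 = rhs(u, dx)
    k2 = rhs(u + 0.5 * dt * k1, dx)
    k3 = rhs(u + 0.5 * dt * k2, dx)
    k4 = rhs(u + dt * k3, dx)
    u_new = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    u_new[0] = g(t + dt)               # exact BC imposed only at the end of the cycle
    return u_new

nx = 201
dx = 1.0 / (nx - 1)
x = np.linspace(0.0, 1.0, nx)
g = lambda t: np.sin(2 * np.pi * (0.0 - t))   # inflow consistent with u = sin(2*pi*(x - t))
u, t, dt = np.sin(2 * np.pi * x), 0.0, 0.4 * dx
for _ in range(500):
    u, t = rk4_step(u, t, dt, dx, g), t + dt
```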

  17. Error analysis and correction in wavefront reconstruction from the transport-of-intensity equation

    PubMed Central

    Barbero, Sergio; Thibos, Larry N.

    2007-01-01

    Wavefront reconstruction from the transport-of-intensity equation (TIE) is a well-posed inverse problem given smooth signals and appropriate boundary conditions. However, in practice, experimental errors lead to an ill-conditioned problem. A quantitative analysis of the effects of experimental errors is presented in simulations and experimental tests. The relative importance of numerical, misalignment, quantization, and photodetection errors is shown. It is proved that reduction of photodetection noise by wavelet filtering significantly improves the accuracy of wavefront reconstruction from simulated and experimental data. PMID:20052302
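
    Under the simplifying assumptions of uniform intensity and periodic boundaries, the TIE reduces to a Poisson equation, ∇²φ = -(k/I₀) ∂I/∂z, which can be inverted with FFTs. The sketch below shows only that inversion step; a practical solver must handle non-uniform intensity and the boundary conditions analyzed in the paper.

```python
import numpy as np

# Sketch: with uniform intensity I0 and periodic boundaries, the TIE reduces to
# laplacian(phi) = -(k / I0) * dI/dz, inverted here in Fourier space.
def tie_phase(dIdz, I0, wavelength, dx):
    k = 2 * np.pi / wavelength
    ny, nx = dIdz.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                          # avoid dividing by zero at the DC term
    phi_hat = np.fft.fft2(-(k / I0) * dIdz) / (-k2)
    phi_hat[0, 0] = 0.0                     # mean phase is arbitrary; set it to zero
    return np.real(np.fft.ifft2(phi_hat))

dIdz = np.random.default_rng(0).normal(size=(128, 128))   # placeholder measurement
phi = tie_phase(dIdz, I0=1.0, wavelength=633e-9, dx=5e-6)
```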

  18. Conditional Entropy and Location Error in Indoor Localization Using Probabilistic Wi-Fi Fingerprinting.

    PubMed

    Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten

    2016-10-02

    Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method.
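
    A minimal sketch of the two quantities being compared, using a simple Gaussian RSS sensor model over a grid of candidate cells; all data are synthetic and the sensor-model parameters are assumptions, not those of the paper's eight implementations.

```python
import numpy as np

# Sketch: posterior over candidate cells from a Gaussian RSS sensor model, plus its
# conditional entropy as a dynamic uncertainty measure.
rng = np.random.default_rng(0)
fingerprints = rng.normal(-70.0, 10.0, size=(50, 4))   # mean RSS of 4 APs in 50 cells
reading = fingerprints[17] + rng.normal(0.0, 2.0, 4)   # observation taken in cell 17
sigma = 3.0                                            # assumed sensor-model std (dBm)

loglik = -0.5 * np.sum(((reading - fingerprints) / sigma) ** 2, axis=1)
posterior = np.exp(loglik - loglik.max())
posterior /= posterior.sum()

p = posterior[posterior > 0]
entropy = -np.sum(p * np.log2(p))                      # conditional entropy in bits
print(f"MAP cell: {posterior.argmax()}, conditional entropy: {entropy:.2f} bits")
```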

  19. Conditional Entropy and Location Error in Indoor Localization Using Probabilistic Wi-Fi Fingerprinting

    PubMed Central

    Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten

    2016-01-01

    Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method. PMID:27706099

  20. Neural correlates of sensory prediction errors in monkeys: evidence for internal models of voluntary self-motion in the cerebellum.

    PubMed

    Cullen, Kathleen E; Brooks, Jessica X

    2015-02-01

    During self-motion, the vestibular system makes essential contributions to postural stability and self-motion perception. To ensure accurate perception and motor control, it is critical to distinguish between vestibular sensory inputs that are the result of externally applied motion (exafference) and those that are the result of our own actions (reafference). Indeed, although the vestibular sensors encode vestibular afference and reafference with equal fidelity, neurons at the first central stage of sensory processing selectively encode vestibular exafference. The mechanism underlying this reafferent suppression compares the brain's motor-based expectation of sensory feedback with the actual sensory consequences of voluntary self-motion, effectively computing the sensory prediction error (i.e., exafference). It is generally thought that sensory prediction errors are computed in the cerebellum, yet it has been challenging to explicitly demonstrate this. We have recently addressed this question and found that deep cerebellar nuclei neurons explicitly encode sensory prediction errors during self-motion. Importantly, in everyday life, sensory prediction errors occur in response to changes in the effector or world (muscle strength, load, etc.), as well as in response to externally applied sensory stimulation. Accordingly, we hypothesize that altering the relationship between motor commands and the actual movement parameters will result in the updating of the cerebellum-based computation of exafference. If our hypothesis is correct, under these conditions, neuronal responses should initially be increased, consistent with a sudden increase in the sensory prediction error. Then, over time, as the internal model is updated, response modulation should decrease in parallel with a reduction in sensory prediction error, until vestibular reafference is again suppressed. The finding that the internal model predicting the sensory consequences of motor commands adapts to new relationships would have important implications for understanding how responses to passive stimulation endure despite the cerebellum's ability to learn new relationships between motor commands and sensory feedback.

  1. Conditional standard errors of measurement for composite scores on the Wechsler Preschool and Primary Scale of Intelligence-Third Edition.

    PubMed

    Price, Larry R; Raju, Nambury; Lurie, Anna; Wilkins, Charles; Zhu, Jianjun

    2006-02-01

    A specific recommendation of the 1999 Standards for Educational and Psychological Testing by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education is that test publishers report estimates of the conditional standard error of measurement (SEM). Procedures for calculating the conditional (score-level) SEM based on raw scores are well documented; however, few procedures have been developed for estimating the conditional SEM of subtest or composite scale scores resulting from a nonlinear transformation. Item response theory provided the psychometric foundation to derive the conditional standard errors of measurement and confidence intervals for composite scores on the Wechsler Preschool and Primary Scale of Intelligence-Third Edition.
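
    As a reference point for the IRT machinery involved, under a 2PL model the conditional SEM at ability θ is 1/√I(θ), with I(θ) the test information; the published procedure additionally propagates this through the nonlinear scale-score transformation. A minimal raw-θ sketch with made-up item parameters:

```python
import numpy as np

# Sketch: conditional SEM under a 2PL IRT model, SEM(theta) = 1 / sqrt(I(theta)),
# where I(theta) is the test information. Item parameters are hypothetical.
def csem_2pl(theta, a, b):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # item response probabilities
    info = np.sum(a**2 * p * (1.0 - p))          # test information at theta
    return 1.0 / np.sqrt(info)

a = np.array([1.2, 0.8, 1.5, 1.0])               # discriminations (hypothetical)
b = np.array([-1.0, 0.0, 0.5, 1.0])              # difficulties (hypothetical)
for theta in (-2.0, 0.0, 2.0):
    print(theta, round(csem_2pl(theta, a, b), 3))
```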

  2. An investigation of condition mapping and plot proportion calculation issues

    Treesearch

    Demetrios Gatziolis

    2007-01-01

    A systematic examination of Forest Inventory and Analysis condition data collected under the annual inventory protocol in the Pacific Northwest region between 2000 and 2004 revealed the presence of errors both in condition topology and plot proportion computations. When plots were compiled to generate population estimates, proportion errors were found to cause...

  3. Artificial neural network (ANN) approach for modeling of Pb(II) adsorption from aqueous solution by Antep pistachio (Pistacia Vera L.) shells.

    PubMed

    Yetilmezsoy, Kaan; Demirel, Sevgi

    2008-05-30

    A three-layer artificial neural network (ANN) model was developed to predict the efficiency of Pb(II) ion removal from aqueous solution by Antep pistachio (Pistacia Vera L.) shells based on 66 experimental sets obtained in a laboratory batch study. The effects of operational parameters such as adsorbent dosage, initial concentration of Pb(II) ions, initial pH, operating temperature, and contact time were studied to optimise the conditions for maximum removal of Pb(II) ions. On the basis of batch test results, optimal operating conditions were determined to be an initial pH of 5.5, an adsorbent dosage of 1.0 g, an initial Pb(II) concentration of 30 ppm, and a temperature of 30 degrees C. Experimental results showed that a contact time of 45 min was generally sufficient to achieve equilibrium. After backpropagation (BP) training combined with principal component analysis (PCA), the ANN model was able to predict adsorption efficiency with a tangent sigmoid transfer function (tansig) in the hidden layer with 11 neurons and a linear transfer function (purelin) in the output layer. The Levenberg-Marquardt algorithm (LMA) was found to be the best of the 11 BP algorithms, with a minimum mean squared error (MSE) of 0.000227875. The linear regression between the network outputs and the corresponding targets was shown to be satisfactory, with a correlation coefficient of about 0.936 for the five model variables used in this study.
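
    A minimal sketch of the described architecture (tansig hidden layer with 11 neurons, purelin output) trained in Levenberg-Marquardt fashion by least squares on the residuals. The data here are random placeholders with 200 rows only because MINPACK's LM requires at least as many residuals as parameters (78 here); the paper's 66 experimental sets and PCA preprocessing are not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch of a 5-input, 11-neuron tansig / purelin network fitted LM-style.
def unpack(w, n_in, n_hid):
    i = 0
    W1 = w[i:i + n_hid * n_in].reshape(n_hid, n_in); i += n_hid * n_in
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid]; i += n_hid
    return W1, b1, W2, w[i]

def forward(w, X, n_hid):
    W1, b1, W2, b2 = unpack(w, X.shape[1], n_hid)
    h = np.tanh(X @ W1.T + b1)                   # tansig hidden layer
    return h @ W2 + b2                           # purelin output layer

n_in, n_hid = 5, 11
rng = np.random.default_rng(0)
X, y = rng.random((200, n_in)), rng.random(200)  # placeholder inputs and efficiencies
w0 = rng.normal(scale=0.5, size=n_hid * n_in + 2 * n_hid + 1)
fit = least_squares(lambda w: forward(w, X, n_hid) - y, w0, method="lm")
print("final MSE:", np.mean(fit.fun**2))
```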

  4. The Perception of Error in Production Plants of a Chemical Organisation

    ERIC Educational Resources Information Center

    Seifried, Jurgen; Hopfer, Eva

    2013-01-01

    There is considerable current interest in error-friendly corporate culture, one particular research question being how and under what conditions errors are learnt from in the workplace. This paper starts from the assumption that errors are inevitable and considers key factors which affect learning from errors in high responsibility organisations,…

  5. A framework for software fault tolerance in real-time systems

    NASA Technical Reports Server (NTRS)

    Anderson, T.; Knight, J. C.

    1983-01-01

    A classification scheme for errors and a technique for the provision of software fault tolerance in cyclic real-time systems are presented. The technique requires that the process structure of a system be represented by a synchronization graph, which is used by an executive as a specification of the relative times at which processes will communicate during execution. Communication between concurrent processes is severely limited and may only take place between processes engaged in an exchange. A history of error occurrences is maintained by an error handler. When an error is detected, the error handler classifies it using the error history information and then initiates appropriate recovery action.

  6. Coupling of a 3D Finite Element Model of Cardiac Ventricular Mechanics to Lumped Systems Models of the Systemic and Pulmonic Circulation

    PubMed Central

    Kerckhoffs, Roy C. P.; Neal, Maxwell L.; Gu, Quan; Bassingthwaighte, James B.; Omens, Jeff H.; McCulloch, Andrew D.

    2010-01-01

    In this study we present a novel, robust method to couple finite element (FE) models of cardiac mechanics to systems models of the circulation (CIRC), independent of cardiac phase. For each time step through a cardiac cycle, left and right ventricular pressures were calculated using ventricular compliances from the FE and CIRC models. These pressures served as boundary conditions in the FE and CIRC models. In succeeding steps, pressures were updated to minimize cavity volume error (FE minus CIRC volume) using Newton iterations. Coupling was achieved when a predefined criterion for the volume error was satisfied. Initial conditions for the multi-scale model were obtained by replacing the FE model with a varying elastance model, which takes into account direct ventricular interactions. Applying the coupling, a novel multi-scale model of the canine cardiovascular system was developed. Global hemodynamics and regional mechanics were calculated for multiple beats in two separate simulations with a left ventricular ischemic region and pulmonary artery constriction, respectively. After the interventions, global hemodynamics changed due to direct and indirect ventricular interactions, in agreement with previously published experimental results. The coupling method allows for simulations of multiple cardiac cycles for normal and pathophysiology, encompassing levels from cell to system. PMID:17111210
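
    The coupling loop can be sketched as a Newton iteration on ventricular pressure that drives the cavity-volume error (FE minus CIRC) to zero; the two volume functions below are hypothetical stand-ins for the finite element and circulation models, not the authors' code.

```python
# Sketch: Newton iteration on ventricular pressure driving the cavity-volume error
# (FE minus CIRC) to zero, with a finite-difference Jacobian.
def couple_pressure(fe_volume, circ_volume, p0, tol=1e-6, dp=1e-3, max_iter=50):
    p = p0
    for _ in range(max_iter):
        err = fe_volume(p) - circ_volume(p)
        if abs(err) < tol:
            return p
        derr = (fe_volume(p + dp) - circ_volume(p + dp) - err) / dp
        p -= err / derr
    raise RuntimeError("coupling did not converge")

fe_volume = lambda p: 30.0 + 2.0 * p     # toy FE compliance line (ml vs mmHg)
circ_volume = lambda p: 80.0 - 1.5 * p   # toy circulation line (ml vs mmHg)
print(couple_pressure(fe_volume, circ_volume, p0=5.0))   # pressure where volumes match
```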

  7. Hourly predictive Levenberg-Marquardt ANN and multi linear regression models for predicting of dew point temperature

    NASA Astrophysics Data System (ADS)

    Zounemat-Kermani, Mohammad

    2012-08-01

    In this study, the ability of two models, multiple linear regression (MLR) and a Levenberg-Marquardt (LM) feed-forward neural network, was examined to estimate the hourly dew point temperature. Dew point temperature is the temperature at which water vapor in the air condenses into liquid. This temperature can be useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration and in investigating agronomical issues such as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity and pressure) that could be used to predict dew point temperature motivated the modeling effort. Additionally, the wind vector (wind speed magnitude and direction) and a conceptual weather-condition input were employed as further input variables. Three quantitative standard statistical performance evaluation measures, i.e. the root mean squared error, mean absolute error, and absolute logarithmic Nash-Sutcliffe efficiency coefficient (|Log(NS)|), were employed to evaluate the performances of the developed models. The results showed that applying the wind vector and weather condition as input vectors along with meteorological variables could slightly increase the ANN and MLR predictive accuracy. The results also revealed that LM-NN was superior to the MLR model, and the best performance was obtained by considering all potential input variables in terms of different evaluation criteria.

  8. Identification of factors associated with diagnostic error in primary care.

    PubMed

    Minué, Sergio; Bermúdez-Tamayo, Clara; Fernández, Alberto; Martín-Martín, José Jesús; Benítez, Vivian; Melguizo, Miguel; Caro, Araceli; Orgaz, María José; Prados, Miguel Angel; Díaz, José Enrique; Montoro, Rafael

    2014-05-12

    Missed, delayed or incorrect diagnoses are considered to be diagnostic errors. The aim of this paper is to describe the methodology of a study to analyse cognitive aspects of the process by which primary care (PC) physicians diagnose dyspnoea. It examines the possible links between the use of heuristics, suboptimal cognitive acts and diagnostic errors, using Reason's taxonomy of human error (slips, lapses, mistakes and violations). The influence of situational factors (professional experience, perceived overwork and fatigue) is also analysed. Cohort study of new episodes of dyspnoea in patients receiving care from family physicians and residents at PC centres in Granada (Spain). With an initial expected diagnostic error rate of 20%, and a sampling error of 3%, 384 episodes of dyspnoea are calculated to be required. In addition to filling out the electronic medical record of the patients attended, each physician fills out 2 specially designed questionnaires about the diagnostic process performed in each case of dyspnoea. The first questionnaire includes questions on the physician's initial diagnostic impression, the 3 most likely diagnoses (in order of likelihood), and the diagnosis reached after the initial medical history and physical examination. It also includes items on the physicians' perceived overwork and fatigue during patient care. The second questionnaire records the confirmed diagnosis once it is reached. The complete diagnostic process is peer-reviewed to identify and classify the diagnostic errors. The possible use of heuristics of representativeness, availability, and anchoring and adjustment in each diagnostic process is also analysed. Each audit is reviewed with the physician responsible for the diagnostic process. Finally, logistic regression models are used to determine if there are differences in the diagnostic error variables based on the heuristics identified. This work sets out a new approach to studying the diagnostic decision-making process in PC, taking advantage of new technologies which allow immediate recording of the decision-making process.

  9. Identification of factors associated with diagnostic error in primary care

    PubMed Central

    2014-01-01

    Background Missed, delayed or incorrect diagnoses are considered to be diagnostic errors. The aim of this paper is to describe the methodology of a study to analyse cognitive aspects of the process by which primary care (PC) physicians diagnose dyspnoea. It examines the possible links between the use of heuristics, suboptimal cognitive acts and diagnostic errors, using Reason’s taxonomy of human error (slips, lapses, mistakes and violations). The influence of situational factors (professional experience, perceived overwork and fatigue) is also analysed. Methods Cohort study of new episodes of dyspnoea in patients receiving care from family physicians and residents at PC centres in Granada (Spain). With an initial expected diagnostic error rate of 20%, and a sampling error of 3%, 384 episodes of dyspnoea are calculated to be required. In addition to filling out the electronic medical record of the patients attended, each physician fills out 2 specially designed questionnaires about the diagnostic process performed in each case of dyspnoea. The first questionnaire includes questions on the physician’s initial diagnostic impression, the 3 most likely diagnoses (in order of likelihood), and the diagnosis reached after the initial medical history and physical examination. It also includes items on the physicians’ perceived overwork and fatigue during patient care. The second questionnaire records the confirmed diagnosis once it is reached. The complete diagnostic process is peer-reviewed to identify and classify the diagnostic errors. The possible use of heuristics of representativeness, availability, and anchoring and adjustment in each diagnostic process is also analysed. Each audit is reviewed with the physician responsible for the diagnostic process. Finally, logistic regression models are used to determine if there are differences in the diagnostic error variables based on the heuristics identified. Discussion This work sets out a new approach to studying the diagnostic decision-making process in PC, taking advantage of new technologies which allow immediate recording of the decision-making process. PMID:24884984

  10. The role of model errors represented by nonlinear forcing singular vector tendency error in causing the "spring predictability barrier" within ENSO predictions

    NASA Astrophysics Data System (ADS)

    Duan, Wansuo; Zhao, Peng

    2017-04-01

    Within the Zebiak-Cane model, the nonlinear forcing singular vector (NFSV) approach is used to investigate the role of model errors in the "Spring Predictability Barrier" (SPB) phenomenon within ENSO predictions. NFSV-related errors have the largest negative effect on the uncertainties of El Niño predictions. NFSV errors can be classified into two types: the first is characterized by a zonal dipolar pattern of SST anomalies (SSTA), with the western poles centered in the equatorial central-western Pacific exhibiting positive anomalies and the eastern poles in the equatorial eastern Pacific exhibiting negative anomalies; and the second is characterized by a pattern almost opposite to the first type. The first type of error tends to have the worst effects on El Niño growth-phase predictions, whereas the latter often yields the largest negative effects on decaying-phase predictions. The evolution of prediction errors caused by NFSV-related errors exhibits prominent seasonality, with the fastest error growth in the spring and/or summer seasons; hence, these errors result in a significant SPB related to El Niño events. The linear counterpart of NFSVs, the (linear) forcing singular vector (FSV), induces a less significant SPB because it causes smaller prediction errors. Random errors cannot generate an SPB for El Niño events. These results show that the occurrence of an SPB is related to the spatial patterns of tendency errors. The NFSV tendency errors cause the most significant SPB for El Niño events. In addition, NFSVs often concentrate these large-value errors in a few areas within the equatorial eastern and central-western Pacific, which likely represent those areas sensitive to El Niño predictions associated with model errors. Meanwhile, these areas are also exactly consistent with the sensitive areas related to initial errors determined by previous studies. This implies that additional observations in the sensitive areas would not only improve the accuracy of the initial field but also promote the reduction of model errors to greatly improve ENSO forecasts.

  11. Simulation of water-table aquifers using specified saturated thickness

    USGS Publications Warehouse

    Sheets, Rodney A.; Hill, Mary C.; Haitjema, Henk M.; Provost, Alden M.; Masterson, John P.

    2014-01-01

    Simulating groundwater flow in a water-table (unconfined) aquifer can be difficult because the saturated thickness available for flow depends on model-calculated hydraulic heads. It is often possible to realize substantial time savings and still obtain accurate head and flow solutions by specifying an approximate saturated thickness a priori, thus linearizing this aspect of the model. This specified-thickness approximation often relies on the use of the “confined” option in numerical models, which has led to confusion and criticism of the method. This article reviews the theoretical basis for the specified-thickness approximation, derives an error analysis for relatively ideal problems, and illustrates the utility of the approximation with a complex test problem. In the transient version of our complex test problem, the specified-thickness approximation produced maximum errors in computed drawdown of about 4% of initial aquifer saturated thickness even when maximum drawdowns were nearly 20% of initial saturated thickness. In the final steady-state version, the approximation produced maximum errors in computed drawdown of about 20% of initial aquifer saturated thickness (mean errors of about 5%) when maximum drawdowns were about 35% of initial saturated thickness. In early phases of model development, such as during initial model calibration efforts, the specified-thickness approximation can be a very effective tool to facilitate convergence. The reduced execution time and increased stability obtained through the approximation can be especially useful when many model runs are required, such as during inverse model calibration, sensitivity and uncertainty analyses, multimodel analysis, and development of optimal resource management scenarios.
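
    To make the linearization concrete, the following minimal sketch (illustrative numbers, not the USGS test problem) compares the exact Dupuit solution for steady one-dimensional unconfined flow between two fixed heads with the linear head profile implied by the specified-thickness (constant-transmissivity) approximation, reporting the head error as a percentage of the initial saturated thickness:

    ```python
    import numpy as np

    L = 1000.0             # domain length, m (illustrative)
    h0, hL = 50.0, 40.0    # fixed boundary heads, m (datum at the aquifer base)

    x = np.linspace(0.0, L, 101)

    # Exact Dupuit solution for steady unconfined flow: h^2 varies linearly.
    h_unconfined = np.sqrt(h0**2 + (hL**2 - h0**2) * x / L)

    # Specified-thickness approximation: transmissivity fixed a priori, so the
    # problem is linear in h and the head profile is a straight line.
    h_specified = h0 + (hL - h0) * x / L

    # Error expressed as a percentage of the initial saturated thickness (taken
    # here as h0), the metric used in the abstract above.
    err_pct = 100.0 * np.abs(h_specified - h_unconfined) / h0
    print(f"max head error: {err_pct.max():.2f}% of initial saturated thickness")
    ```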

  12. Interaction of finger enslaving and error compensation in multiple finger force production.

    PubMed

    Martin, Joel R; Latash, Mark L; Zatsiorsky, Vladimir M

    2009-01-01

    Previous studies have documented two patterns of finger interaction during multi-finger pressing tasks, enslaving and error compensation, which do not agree with each other. Enslaving is characterized by positive correlation between instructed (master) and non-instructed (slave) finger(s), while error compensation can be described as a pattern of negative correlation between master and slave fingers. We hypothesized that the pattern of finger interaction, enslaving or compensation, depends on the initial force level and the magnitude of the targeted force change. Subjects were instructed to press with four fingers (I index, M middle, R ring, and L little) from a specified initial force to target forces following a ramp target line. Force-force relations between the master and each of the three slave fingers were analyzed during the ramp phase of trials by calculating correlation coefficients within each master-slave pair; a two-factor ANOVA was then performed to determine the effects of initial force and force increase on the correlation coefficients. It was found that, as initial force increased, the value of the correlation coefficient decreased and in some cases became negative, i.e., enslaving transformed into error compensation. Force increase magnitude had a smaller effect on the correlation coefficients. The observations support the hypothesis that the pattern of inter-finger interaction--enslaving or compensation--depends on the initial force level and, to a smaller degree, on the targeted magnitude of the force increase. They suggest that the controller views tasks with higher steady-state forces and smaller force changes as implying a requirement to avoid large changes in the total force.
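
    The correlation analysis described above can be sketched in a few lines; the synthetic master and slave force traces below are illustrative stand-ins for the recorded ramp-phase data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 5.0, 500)                          # ramp phase, s
    master = 2.0 + 1.5 * t + rng.normal(0, 0.05, t.size)    # instructed finger, N

    # Enslaving: slave force rises with the master (positive correlation).
    slave_enslaved = 0.5 + 0.2 * t + rng.normal(0, 0.05, t.size)
    # Error compensation: slave force falls as the master rises (negative r).
    slave_compensating = 1.0 - 0.1 * t + rng.normal(0, 0.05, t.size)

    for name, slave in [("enslaved", slave_enslaved),
                        ("compensating", slave_compensating)]:
        r = np.corrcoef(master, slave)[0, 1]
        print(f"{name}: r = {r:+.2f}")
    ```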

  13. Short-Term Impact of tDCS Over the Right Inferior Frontal Cortex on Impulsive Responses in a Go/No-go Task.

    PubMed

    Campanella, Salvatore; Schroder, Elisa; Vanderhasselt, Marie-Anne; Baeken, Chris; Kornreich, Charles; Verbanck, Paul; Burle, Boris

    2018-05-01

    Inhibitory control, a process deeply studied in laboratory settings, refers to the ability to inhibit an action once it has been initiated. A common way to process data in such tasks is to take the mean response time (RT) and error rate per participant. However, such an analysis ignores the strong dependency between spontaneous RT variations and error rate. The conditional accuracy function (CAF) is of particular interest: by plotting the probability of a response being correct as a function of its latency, it provides a means for studying the strength of impulsive responses associated with a higher frequency of fast response errors. This procedure was applied to a recent set of data in which the right inferior frontal gyrus (rIFG) was modulated using transcranial direct current stimulation (tDCS). Healthy participants (n = 40) were presented with a "Go/No-go" task (click on letter M, not on letter W; session 1). Then, one subgroup (n = 20) was randomly assigned to one 20-minute neuromodulation session with tDCS (anodal electrode, rIFG; cathodal electrode, neck), and the other group (n = 20) to a condition with sham (placebo) tDCS. All participants were finally confronted with the same "Go/No-go" task (session 2). The rate of commission errors (click on W) and speed of response to Go trials were similar between sessions 1 and 2 in both neuromodulation groups. However, the CAF showed that active tDCS over the rIFG leads to a reduction of the drop in accuracy for fast responses (suggesting less impulsivity and greater inhibitory efficiency), this effect being visible only for the first experimental block following tDCS stimulation. Overall, the present data indicate that boosting the rIFG may be useful to enhance inhibitory skills, but that the CAF could be of the greatest relevance to monitor the temporal dynamics of the neuromodulation effect.
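
    A CAF is straightforward to compute by binning trials into response-time quantiles and taking the accuracy within each bin; the synthetic Go/No-go data below are assumed for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000
    rt = rng.gamma(shape=4.0, scale=80.0, size=n)      # response times, ms
    # Make fast responses more error-prone, as in typical Go/No-go data.
    p_correct = 1.0 / (1.0 + np.exp(-(rt - 250.0) / 60.0))
    correct = rng.random(n) < p_correct

    edges = np.quantile(rt, [0.2, 0.4, 0.6, 0.8])      # quintile boundaries
    idx = np.digitize(rt, edges)                       # 0 (fastest) .. 4 (slowest)
    for b in range(5):
        sel = idx == b
        print(f"quintile {b + 1}: mean RT = {rt[sel].mean():5.0f} ms, "
              f"accuracy = {correct[sel].mean():.2f}")
    ```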

  14. Determination of Type I Error Rates and Power of Answer Copying Indices under Various Conditions

    ERIC Educational Resources Information Center

    Yormaz, Seha; Sünbül, Önder

    2017-01-01

    This study aims to determine the Type I error rates and power of the S1 and S2 indices and the kappa statistic at detecting copying on multiple-choice tests under various conditions. It also aims to determine how the way copying groups are created for calculating the kappa statistic affects Type I error rates and power. In this study,…

  15. Particle Tracking on the BNL Relativistic Heavy Ion Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dell, G. F.

    1986-08-07

    Tracking studies including the effects of random multipole errors, as well as the combined effects of random and systematic multipole errors, have been made for RHIC. Initial results for operating at an off-diagonal working point are discussed.

  16. Controller design for global fixed-time synchronization of delayed neural networks with discontinuous activations.

    PubMed

    Wang, Leimin; Zeng, Zhigang; Hu, Junhao; Wang, Xiaoping

    2017-03-01

    This paper addresses the controller design problem for global fixed-time synchronization of delayed neural networks (DNNs) with discontinuous activations. To solve this problem, adaptive control and state feedback control laws are designed. Then, based on the two controllers and two lemmas, the error system is proved to be globally asymptotically stable and even fixed-time stable. Moreover, some sufficient and easily checked conditions are derived to guarantee the global synchronization of the drive and response systems in fixed time. It is noted that the settling-time functional for fixed-time synchronization is independent of initial conditions. Our fixed-time synchronization results contain the finite-time results as special cases, obtained by choosing different values for the two controllers. Finally, the theoretical results are supported by numerical simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Automated Classification of Phonological Errors in Aphasic Language

    PubMed Central

    Ahuja, Sanjeev B.; Reggia, James A.; Berndt, Rita S.

    1984-01-01

    Using heuristically-guided state space search, a prototype program has been developed to simulate and classify phonemic errors occurring in the speech of neurologically-impaired patients. Simulations are based on an interchangeable rule/operator set of elementary errors which represent a theory of phonemic processing faults. This work introduces and evaluates a novel approach to error simulation and classification, it provides a prototype simulation tool for neurolinguistic research, and it forms the initial phase of a larger research effort involving computer modelling of neurolinguistic processes.

  18. A Proposal for Automatic Fruit Harvesting by Combining a Low Cost Stereovision Camera and a Robotic Arm

    PubMed Central

    Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Runcan, David; Moreno, Javier; Martínez, Dani; Teixidó, Mercè; Palacín, Jordi

    2014-01-01

    This paper proposes the development of an automatic fruit harvesting system by combining a low cost stereovision camera and a robotic arm placed in the gripper tool. The stereovision camera is used to estimate the size, distance and position of the fruits, whereas the robotic arm is used to mechanically pick up the fruits. The low cost stereovision system has been tested in laboratory conditions with a reference small object, an apple and a pear at 10 different intermediate distances from the camera. The average distance error was from 4% to 5%, and the average diameter error was up to 30% in the case of the small object and in a range from 2% to 6% in the case of a pear and an apple. The stereovision system has been attached to the gripper tool in order to obtain the relative distance, orientation and size of the fruit. The harvesting stage requires the initial fruit location, the computation of the inverse kinematics of the robotic arm in order to place the gripper tool in front of the fruit, and a final pickup approach by iteratively adjusting the vertical and horizontal position of the gripper tool in a closed visual loop. The complete system has been tested in controlled laboratory conditions with uniform illumination applied to the fruits. As future work, this system will be tested and improved in conventional outdoor farming conditions. PMID:24984059

  19. A proposal for automatic fruit harvesting by combining a low cost stereovision camera and a robotic arm.

    PubMed

    Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Runcan, David; Moreno, Javier; Martínez, Dani; Teixidó, Mercè; Palacín, Jordi

    2014-06-30

    This paper proposes the development of an automatic fruit harvesting system by combining a low cost stereovision camera and a robotic arm placed in the gripper tool. The stereovision camera is used to estimate the size, distance and position of the fruits, whereas the robotic arm is used to mechanically pick up the fruits. The low cost stereovision system has been tested in laboratory conditions with a reference small object, an apple and a pear at 10 different intermediate distances from the camera. The average distance error was from 4% to 5%, and the average diameter error was up to 30% in the case of the small object and in a range from 2% to 6% in the case of a pear and an apple. The stereovision system has been attached to the gripper tool in order to obtain the relative distance, orientation and size of the fruit. The harvesting stage requires the initial fruit location, the computation of the inverse kinematics of the robotic arm in order to place the gripper tool in front of the fruit, and a final pickup approach by iteratively adjusting the vertical and horizontal position of the gripper tool in a closed visual loop. The complete system has been tested in controlled laboratory conditions with uniform illumination applied to the fruits. As future work, this system will be tested and improved in conventional outdoor farming conditions.

  20. A compressed sensing based approach on Discrete Algebraic Reconstruction Technique.

    PubMed

    Demircan-Tureyen, Ezgi; Kamasak, Mustafa E

    2015-01-01

    Discrete tomography (DT) techniques are capable of computing better results, even when using fewer projections, than continuous tomography techniques. The Discrete Algebraic Reconstruction Technique (DART) is an iterative reconstruction method proposed to achieve this goal by exploiting prior knowledge of the gray levels and assuming that the scanned object is composed of a few different densities. In this paper, the DART method is combined with an initial total variation minimization (TvMin) phase to ensure a better initial guess, and extended with a segmentation procedure in which the threshold values are estimated from a finite set of candidates to minimize both the projection error and the total variation (TV) simultaneously. The accuracy and the robustness of the algorithm are compared with the original DART through simulation experiments performed under (1) limited numbers of projections, (2) limited-view, and (3) noisy-projection conditions.
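
    The threshold-selection step can be sketched as a search over a finite candidate set, scoring each threshold by projection error plus weighted total variation; the toy phantom and the weight 0.1 below are assumptions for illustration, not the paper's settings:

    ```python
    import numpy as np

    def total_variation(img):
        # Anisotropic TV: sum of absolute vertical and horizontal differences.
        return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

    # Toy two-density phantom and its row/column projections ("measured" data).
    phantom = np.zeros((32, 32))
    phantom[8:24, 8:24] = 1.0
    proj = np.concatenate([phantom.sum(axis=0), phantom.sum(axis=1)])

    # A noisy grayscale reconstruction standing in for the ARM output.
    rng = np.random.default_rng(2)
    recon = phantom + rng.normal(0, 0.15, phantom.shape)

    best = None
    for thr in np.linspace(0.2, 0.8, 13):           # finite candidate set
        seg = (recon > thr).astype(float)           # segmentation at this threshold
        seg_proj = np.concatenate([seg.sum(axis=0), seg.sum(axis=1)])
        score = np.linalg.norm(seg_proj - proj) + 0.1 * total_variation(seg)
        if best is None or score < best[0]:
            best = (score, thr)
    print(f"selected threshold: {best[1]:.2f} (score {best[0]:.1f})")
    ```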

  1. Cone-Beam CT Assessment of Interfraction and Intrafraction Setup Error of Two Head-and-Neck Cancer Thermoplastic Masks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Velec, Michael; Waldron, John N.; O'Sullivan, Brian

    2010-03-01

    Purpose: To prospectively compare setup error in standard thermoplastic masks and skin-sparing masks (SSMs) modified with low neck cutouts for head-and-neck intensity-modulated radiation therapy (IMRT) patients. Methods and Materials: Twenty head-and-neck IMRT patients were randomized to be treated in a standard mask (SM) or SSM. Cone-beam computed tomography (CBCT) scans, acquired daily after both initial setup and any repositioning, were used for initial and residual interfraction evaluation, respectively. Weekly, post-IMRT CBCT scans were acquired for intrafraction setup evaluation. The population random (σ) and systematic (Σ) errors were compared for SMs and SSMs. Skin toxicity was recorded weekly by use of Radiation Therapy Oncology Group criteria. Results: We evaluated 762 CBCT scans in 11 patients randomized to the SM and 9 to the SSM. Initial interfraction σ was 1.6 mm or less and 1.1 deg. or less for the SM, and 2.0 mm or less and 0.8 deg. or less for the SSM. Initial interfraction Σ was 1.0 mm or less and 1.4 deg. or less for the SM, and 1.1 mm or less and 0.9 deg. or less for the SSM. These errors were reduced before IMRT with CBCT image guidance, with no significant differences in residual interfraction or intrafraction uncertainties between SMs and SSMs. Intrafraction σ and Σ were less than 1 mm and less than 1 deg. for both masks. Less severe skin reactions were observed in the cutout regions of the SSM compared with non-cutout regions. Conclusions: Interfraction and intrafraction setup error is not significantly different for SSMs and conventional masks in head-and-neck radiation therapy. Mask cutouts should be considered for these patients in an effort to reduce skin toxicity.
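
    For reference, the population statistics quoted above are commonly computed as follows (a hedged sketch on synthetic setup displacements): Σ is the standard deviation of the per-patient mean errors, and σ is the root-mean-square of the per-patient standard deviations:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_patients, n_fractions = 20, 30
    # Synthetic daily setup shifts (mm): a per-patient offset plus daily noise.
    offsets = rng.normal(0.0, 1.0, n_patients)
    shifts = offsets[:, None] + rng.normal(0.0, 1.5, (n_patients, n_fractions))

    patient_means = shifts.mean(axis=1)
    patient_sds = shifts.std(axis=1, ddof=1)

    SIGMA = patient_means.std(ddof=1)            # population systematic error Σ
    sigma = np.sqrt((patient_sds**2).mean())     # population random error σ
    print(f"Σ = {SIGMA:.2f} mm, σ = {sigma:.2f} mm")
    ```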

  2. Direct measurement of the poliovirus RNA polymerase error frequency in vitro

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, C.D.; Stokes, M.A.M.; Flanegan, J.B.

    1988-02-01

    The fidelity of RNA replication by the poliovirus RNA-dependent RNA polymerase was examined by copying homopolymeric RNA templates in vitro. The poliovirus RNA polymerase was extensively purified and used to copy poly(A), poly(C), or poly(I) templates with equimolar concentrations of noncomplementary and complementary ribonucleotides. The error frequency was expressed as the amount of a noncomplementary nucleotide incorporated divided by the total amount of complementary and noncomplementary nucleotide incorporated. The polymerase error frequencies were very high and depended on the specific reaction conditions. The activity of the polymerase on poly(U) and poly(G) was too low to measure error frequencies on these templates. A fivefold increase in the error frequency was observed when the reaction conditions were changed from 3.0 mM Mg²⁺ (pH 7.0) to 7.0 mM Mg²⁺ (pH 8.0). This increase in the error frequency correlates with an eightfold increase in the elongation rate that was observed under the same conditions in a previous study.
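
    The quoted definition reduces to a one-line computation; the incorporation counts below are placeholders, not the study's measurements:

    ```python
    # Error frequency = noncomplementary incorporated / total incorporated,
    # per the definition in the abstract above. Values are illustrative.
    noncomplementary = 7.0e2     # e.g., pmol of misincorporated nucleotide
    complementary = 1.0e6        # pmol of correctly incorporated nucleotide
    error_frequency = noncomplementary / (complementary + noncomplementary)
    print(f"error frequency ≈ 1 in {1.0 / error_frequency:.0f}")
    ```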

  3. Explicit least squares system parameter identification for exact differential input/output models

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1993-01-01

    The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
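
    The integration idea can be illustrated on the simplest member of this class, y' = -a y: integrating both sides once gives y(t) - y(0) = -a ∫ y dτ, which is linear in the unknown a, so explicit least squares applies without differentiating noisy data. A minimal sketch under these assumptions (synthetic data, assumed first-order model):

    ```python
    import numpy as np

    a_true = 0.7
    t = np.linspace(0.0, 5.0, 200)
    rng = np.random.default_rng(4)
    y = np.exp(-a_true * t) + rng.normal(0, 0.01, t.size)   # noisy measurements

    # Cumulative trapezoidal integral of y from 0 to each sample time.
    Y = np.concatenate([[0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))])

    # Regress y(t) - y(0) on the integral; the slope estimates -a.
    lhs = y - y[0]
    a_hat = -np.linalg.lstsq(Y[:, None], lhs, rcond=None)[0][0]
    print(f"estimated a = {a_hat:.3f} (true {a_true})")
    ```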

  4. CREKID: A computer code for transient, gas-phase combustion kinetics

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.; Radhakrishnan, K.

    1984-01-01

    A new algorithm was developed for fast, automatic integration of chemical kinetic rate equations describing homogeneous, gas-phase combustion at constant pressure. Particular attention is paid to the distinguishing physical and computational characteristics of the induction, heat-release and equilibration regimes. The two-part predictor-corrector algorithm, based on an exponentially fitted trapezoidal rule, includes filtering of ill-posed initial conditions and automatic selection of Newton-Jacobi or Newton iteration for convergence, to achieve maximum computational efficiency while observing a prescribed error tolerance. The new algorithm was found to compare favorably with LSODE on two representative test problems drawn from combustion kinetics.
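
    The skeleton of such a predictor-corrector step can be sketched with a plain trapezoidal rule and Newton iteration on a stiff scalar test equation; the exponential fitting that distinguishes CREKID is omitted here, and the test problem is an assumption:

    ```python
    import numpy as np

    k, y_eq = 50.0, 1.0                  # stiff relaxation toward equilibrium
    f = lambda y: -k * (y - y_eq)
    dfdy = lambda y: -k

    def trap_step(y, h, newton_iters=5):
        y_new = y + h * f(y)             # explicit Euler predictor
        for _ in range(newton_iters):    # Newton on the implicit trapezoidal rule
            g = y_new - y - 0.5 * h * (f(y) + f(y_new))
            y_new -= g / (1.0 - 0.5 * h * dfdy(y_new))
        return y_new

    y, h = 0.0, 0.05
    for _ in range(10):
        y = trap_step(y, h)
    exact = y_eq + (0.0 - y_eq) * np.exp(-k * 0.5)
    print(f"y(0.5) ≈ {y:.4f}, exact {exact:.4f}")
    ```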

  5. The Effects of Discrete-Trial Training Commission Errors on Learner Outcomes: An Extension

    ERIC Educational Resources Information Center

    Jenkins, Sarah R.; Hirst, Jason M.; DiGennaro Reed, Florence D.

    2015-01-01

    We conducted a parametric analysis of treatment integrity errors during discrete-trial training and investigated the effects of three integrity conditions (0, 50, or 100 % errors of commission) on performance in the presence and absence of programmed errors. The presence of commission errors impaired acquisition for three of four participants.…

  6. WE-AB-207A-02: John’s Equation Based Consistency Condition and Incomplete Projection Restoration Upon Circular Orbit CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, J; Qi, H; Wu, S

    Purpose: In transmitted X-ray tomography imaging, projections are sometimes incomplete due to a variety of reasons, such as geometry inaccuracy, defective detector cells, etc. To address this issue, we have derived a direct consistency condition based on John's equation and proposed a method to effectively restore incomplete projections based on this condition. Methods: Through parameter substitutions, we have derived a direct consistency condition equation from John's equation, in which the left side involves only the projection derivative with respect to view and the right side involves projection derivatives with respect to the other geometrical parameters. Based on this consistency condition, a projection restoration method is proposed, which includes five steps: 1) forward projecting the reconstructed image and using linear interpolation to estimate the incomplete projections as the initial result; 2) performing a Fourier transform on the projections; 3) restoring the incomplete frequency data using the consistency condition equation; 4) performing an inverse Fourier transform; 5) repeating steps 2)-4) until our criterion for terminating the iteration is met. Results: A beam-blocking-based scatter correction case and a bad-pixel correction case were used to demonstrate the efficacy and robustness of our restoration method. The mean absolute error (MAE), signal-to-noise ratio (SNR) and mean square error (MSE) were employed as our evaluation metrics for the reconstructed images. For the scatter correction case, the MAE reduction is increased from 63.3% to 71.7% with 4 iterations. Compared with the existing Patch's method, the MAE of our method is further reduced by 8.72%. For the bad-pixel case, the SNR of the reconstructed image by our method is increased from 13.49% to 21.48%, with the MSE being decreased by 45.95%, compared with the linear interpolation method. Conclusion: Our studies have demonstrated that our restoration method based on the new consistency condition can effectively restore incomplete projections, especially their high-frequency components.

  7. Temporal Decomposition of a Distribution System Quasi-Static Time-Series Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry A; Hunsberger, Randolph J

    This paper documents the first phase of an investigation into reducing runtimes of complex OpenDSS models through parallelization. As the method seems promising, future work will quantify - and further mitigate - errors arising from this process. In this initial report, we demonstrate how, through the use of temporal decomposition, the run times of a complex distribution-system-level quasi-static time-series simulation can be reduced roughly in proportion to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control and voltage errors at the time-slice boundaries. All evaluations were performed using a real distribution circuit model with the addition of 50 PV systems - representing a mock complex PV impact study. We are able to reduce the induced transition errors through the addition of controls initialization, though small errors persist. The time savings with parallelization are so significant that we feel additional investigation to reduce control errors is warranted.
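
    The decomposition itself is generic: run the time slices in parallel, each initialized from a state supplied for its start time, and accept some error at the slice boundaries. In the toy sketch below, a damped recurrence stands in for the OpenDSS power-flow model, and the boundary states are assumed known from a cheap initialization pass:

    ```python
    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    def simulate(state0, n_steps, seed):
        # Stand-in "simulation": a damped recurrence driven by noise.
        rng = np.random.default_rng(seed)
        out, s = [], state0
        for _ in range(n_steps):
            s = 0.95 * s + rng.normal(0, 0.1)
            out.append(s)
        return np.array(out)

    if __name__ == "__main__":
        # Assumed boundary states; errors concentrate at these chunk boundaries.
        boundary_states = [0.0, 0.5, -0.2, 0.1]
        with ProcessPoolExecutor() as pool:
            chunks = list(pool.map(simulate, boundary_states,
                                   [250] * 4, range(4)))
        series = np.concatenate(chunks)      # 4 slices x 250 steps = 1000 steps
        print(series.shape)
    ```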

  8. A Modified MinMax k-Means Algorithm Based on PSO.

    PubMed

    Wang, Xiaoyan; Bai, Yanping

    The MinMax k-means algorithm is widely used to tackle the effect of bad initialization by minimizing the maximum intracluster error. Two parameters, the exponent parameter and the memory parameter, are involved in the executive process. Since different parameter values yield different clustering errors, it is crucial to choose appropriate parameters. In the original algorithm, a practical framework is given that extends MinMax k-means to automatically adapt the exponent parameter to the data set. It has been believed that if the maximum exponent parameter has been set, then the program can reach the lowest intracluster errors. However, our experiments show that this is not always correct. In this paper, we modified the MinMax k-means algorithm by PSO to determine the proper parameter values that allow the algorithm to attain the lowest clustering errors. The proposed clustering method is tested on several popular data sets in different initial situations and is compared to the k-means algorithm and the original MinMax k-means algorithm. The experimental results indicate that our proposed algorithm can reach the lowest clustering errors automatically.

  9. Reward positivity: Reward prediction error or salience prediction error?

    PubMed

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.

  10. A Method of Implementing Cutoff Conditions for Saturn V Lunar Missions Out of Earth Parking Orbit Assuming a Continuous Ground Launch Window

    NASA Technical Reports Server (NTRS)

    Cooper, F. D.

    1965-01-01

    A method of implementing Saturn V lunar missions from an earth parking orbit is presented. The ground launch window is assumed continuous over a four and one-half hour period. The approach taken is the iterative guidance scheme combined with a set of auxiliary equations that define suitable S-IVB cutoff conditions. The four inputs to the equations that define cutoff conditions are represented as simple third-degree polynomials as a function of ignition time. Errors at lunar arrival caused by the separate and combined effects of the guidance equations, cutoff conditions, hypersurface errors, and input representations are shown. Vehicle performance variations and parking orbit injection errors are included as perturbations. Appendix I explains how aim vectors were computed for the cutoff equations. Appendix II presents all guidance equations and related implementation procedures. Appendix III gives the derivation of the auxiliary cutoff equations. No error at lunar arrival was large enough to require a midcourse correction greater than one meter per second, assuming a transfer time of three days and a midcourse correction occurring five hours after injection. Since this result is insignificant when compared to expected hardware errors, the implementation procedures presented are adequate to define cutoff conditions for Saturn V lunar missions.

  11. Dissociable contribution of the parietal and frontal cortex to coding movement direction and amplitude

    PubMed Central

    Davare, Marco; Zénon, Alexandre; Desmurget, Michel; Olivier, Etienne

    2015-01-01

    To reach for an object, we must convert its spatial location into an appropriate motor command, merging movement direction and amplitude. In humans, it has been suggested that this visuo-motor transformation occurs in a dorsomedial parieto-frontal pathway, although the causal contribution of the areas constituting the “reaching circuit” remains unknown. Here we used transcranial magnetic stimulation (TMS) in healthy volunteers to disrupt the function of either the medial intraparietal area (mIPS) or dorsal premotor cortex (PMd), in each hemisphere. The task consisted of performing step-tracking movements with the right wrist towards targets located in different directions and eccentricities; targets were either visible for the whole trial (Target-ON) or flashed for 200 ms (Target-OFF). Left and right mIPS disruption led to errors in the initial direction of movements performed towards contralateral targets. These errors were corrected online in the Target-ON condition, but when the target was flashed for 200 ms, mIPS TMS manifested as larger endpoint spread. In contrast, left PMd virtual lesions led to higher acceleration and velocity peaks—two parameters typically used to probe the planned movement amplitude—irrespective of the target position, hemifield and presentation condition; in the Target-OFF condition, left PMd TMS induced overshooting and increased the endpoint dispersion along the axis of the target direction. These results indicate that left PMd intervenes in coding amplitude during movement preparation. The critical TMS timings leading to errors in direction and amplitude were different, namely 160–100 ms before movement onset for mIPS and 100–40 ms for left PMd. TMS applied over right PMd had no significant effect. These results demonstrate that, during motor preparation, direction and amplitude of goal-directed movements are processed by different cortical areas, at distinct timings, and according to a specific hemispheric organization. PMID:25999837

  12. Medical Errors Reduction Initiative

    DTIC Science & Technology

    2009-03-01

    … enough data was collected to have any statistical significance or to determine impact on latent error in the process of blood transfusion. … Medical errors are a significant cause of morbidity and mortality among hospitalized patients (Kohn, Corrigan, & Donaldson, 2000; Leape, Brennan, & Laird, 1991).

  13. The Weighted-Average Lagged Ensemble.

    PubMed

    DelSole, T; Trenary, L; Tippett, M K

    2017-11-01

    A lagged ensemble is an ensemble of forecasts from the same model initialized at different times but verifying at the same time. The skill of a lagged ensemble mean can be improved by assigning weights to different forecasts in such a way as to maximize skill. If the forecasts are bias corrected, then an unbiased weighted lagged ensemble requires the weights to sum to one. Such a scheme is called a weighted-average lagged ensemble. In the limit of uncorrelated errors, the optimal weights are positive and decay monotonically with lead time, so that the least skillful forecasts have the least weight. In more realistic applications, the optimal weights do not always behave this way. This paper presents a series of analytic examples designed to illuminate conditions under which the weights of an optimal weighted-average lagged ensemble become negative or depend nonmonotonically on lead time. It is shown that negative weights are most likely to occur when the errors grow rapidly and are highly correlated across lead time. The weights are most likely to behave nonmonotonically when the mean square error is approximately constant over the range of forecasts included in the lagged ensemble. An extreme example of the latter behavior is presented in which the optimal weights vanish everywhere except at the shortest and longest lead times.
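
    For a given error covariance C across lead times, the weights minimizing the mean-square error of the ensemble mean subject to summing to one have the closed form w = C⁻¹1 / (1ᵀC⁻¹1). The small numerical sketch below, with an assumed covariance (rapid error growth, high cross-lead correlation), reproduces the negative-weight behavior described above:

    ```python
    import numpy as np

    sd = np.array([1.0, 1.8, 3.2])       # error std dev by lead time (assumed)
    rho = 0.95                           # correlation between lead times (assumed)
    C = rho * np.outer(sd, sd)           # covariance: rho * sd_i * sd_j off-diagonal
    np.fill_diagonal(C, sd**2)           # variances on the diagonal

    ones = np.ones(len(sd))
    w = np.linalg.solve(C, ones)         # proportional to C^{-1} 1
    w /= ones @ w                        # normalize so weights sum to one
    print("optimal weights:", np.round(w, 3))   # longer leads get negative weight
    ```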

  14. Ability of the current global observing network to constrain N2O sources and sinks

    NASA Astrophysics Data System (ADS)

    Millet, D. B.; Wells, K. C.; Chaliyakunnel, S.; Griffis, T. J.; Henze, D. K.; Bousserez, N.

    2014-12-01

    The global observing network for atmospheric N2O combines flask and in-situ measurements at ground stations with sustained and campaign-based aircraft observations. In this talk we apply a new global model of N2O (based on GEOS-Chem) and its adjoint to assess the strengths and weaknesses of this network for quantifying N2O emissions. We employ an ensemble of pseudo-observation analyses to evaluate the relative constraints provided by ground-based (surface, tall tower) and airborne (HIPPO, CARIBIC) observations, and the extent to which variability (e.g. associated with pulsing or seasonality of emissions) not captured by the a priori inventory can bias the inferred fluxes. We find that the ground-based and HIPPO datasets each provide a stronger constraint on the distribution of global emissions than does the CARIBIC dataset on its own. Given appropriate initial conditions, we find that our inferred surface fluxes are insensitive to model errors in the stratospheric loss rate of N2O over the timescale of our analysis (2 years); however, the same is not necessarily true for model errors in stratosphere-troposphere exchange. Finally, we examine the a posteriori error reduction distribution to identify priority locations for future N2O measurements.

  15. Magnetometer-only attitude and angular velocity filtering estimation for attitude changing spacecraft

    NASA Astrophysics Data System (ADS)

    Ma, Hongliang; Xu, Shijie

    2014-09-01

    This paper presents an improved real-time sequential filter (IRTSF) for magnetometer-only attitude and angular velocity estimation of a spacecraft during attitude changes (including fast, large-angle attitude maneuvers, rapid spinning, or uncontrolled tumbling). In this new magnetometer-only attitude determination technique, both the attitude dynamics equation and the first time derivative of the measured magnetic field vector are incorporated directly into the filtering equations, building on the traditional gyroless single-vector attitude determination method and the real-time sequential filter (RTSF) for magnetometer-only attitude estimation. The process noise model of the IRTSF includes the attitude kinematics and dynamics equations, and its measurement model consists of the magnetic field vector and its first time derivative. The observability of the IRTSF for spacecraft with small or large angular velocity changes is evaluated by an improved Lie differentiation, and the degrees of observability of the IRTSF for different initial estimation errors are analyzed via the condition number and a solved covariance matrix. Numerical simulation results indicate that: (1) the attitude and angular velocity of the spacecraft can be estimated with sufficient accuracy using the IRTSF from magnetometer-only data; (2) compared with the RTSF, the estimation accuracies and observability degrees of attitude and angular velocity using the IRTSF from magnetometer-only data are both improved; and (3) universality: the IRTSF for magnetometer-only attitude and angular velocity estimation is observable for any initial state estimation error vector.

  16. Quantifying model uncertainty in seasonal Arctic sea-ice forecasts

    NASA Astrophysics Data System (ADS)

    Blanchard-Wrigglesworth, Edward; Barthélemy, Antoine; Chevallier, Matthieu; Cullather, Richard; Fučkar, Neven; Massonnet, François; Posey, Pamela; Wang, Wanqiu; Zhang, Jinlun; Ardilouze, Constantin; Bitz, Cecilia; Vernieres, Guillaume; Wallcraft, Alan; Wang, Muyin

    2017-04-01

    Dynamical model forecasts in the Sea Ice Outlook (SIO) of September Arctic sea-ice extent over the last decade have shown lower skill than that found in both idealized model experiments and hindcasts of previous decades. Additionally, it is unclear how different model physics, initial conditions or post-processing techniques contribute to SIO forecast uncertainty. In this work, we have produced a seasonal forecast of 2015 Arctic summer sea ice using SIO dynamical models initialized with identical sea-ice thickness in the central Arctic. Our goals are to calculate the relative contribution of model uncertainty and irreducible error growth to forecast uncertainty and assess the importance of post-processing, and to contrast pan-Arctic forecast uncertainty with regional forecast uncertainty. We find that prior to forecast post-processing, model uncertainty is the main contributor to forecast uncertainty, whereas after forecast post-processing forecast uncertainty is reduced overall, model uncertainty is reduced by an order of magnitude, and irreducible error growth becomes the main contributor to forecast uncertainty. While all models generally agree in their post-processed forecasts of September sea-ice volume and extent, this is not the case for sea-ice concentration. Additionally, forecast uncertainty of sea-ice thickness grows at a much higher rate along Arctic coastlines relative to the central Arctic ocean. Potential ways of offering spatial forecast information based on the timescale over which the forecast signal beats the noise are also explored.

  17. Interactions of timing and prediction error learning.

    PubMed

    Kirkpatrick, Kimberly

    2014-01-01

    Timing and prediction error learning have historically been treated as independent processes, but growing evidence has indicated that they are not orthogonal. Timing emerges at the earliest time point when conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value alter timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Flux control coefficients determined by inhibitor titration: the design and analysis of experiments to minimize errors.

    PubMed Central

    Small, J R

    1993-01-01

    This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434

  19. Unsupervised Indoor Localization Based on Smartphone Sensors, iBeacon and Wi-Fi.

    PubMed

    Chen, Jing; Zhang, Yi; Xue, Wei

    2018-04-28

    In this paper, we propose UILoc, an unsupervised indoor localization scheme that uses a combination of smartphone sensors, iBeacons and Wi-Fi fingerprints for reliable and accurate indoor localization with zero labor cost. Firstly, in contrast to conventional fingerprint-based methods, the UILoc system can build a fingerprint database automatically, without any site survey, and this database is applied in the fingerprint localization algorithm. Secondly, since the initial position is vital to the system, UILoc provides a basic location estimate through the pedestrian dead reckoning (PDR) method. To provide accurate initial localization, this paper proposes an initial localization module: a weighted fusion algorithm combining a k-nearest neighbors (KNN) algorithm and a least squares algorithm. In UILoc, we have also designed a reliable model to reduce the landmark correction error. Experimental results show that UILoc can provide accurate positioning: the average localization error is about 1.1 m in the steady state, and the maximum error is 2.77 m.
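
    The fingerprint-matching step can be sketched as weighted k-nearest-neighbors in signal space; the database positions and RSS values below are illustrative placeholders, not UILoc's automatically built database:

    ```python
    import numpy as np

    # Fingerprint database: (x, y) positions and RSS over 3 beacons (dBm).
    positions = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
    rss_db = np.array([[-40, -60, -62], [-60, -41, -63],
                       [-61, -62, -40], [-70, -58, -55]], dtype=float)

    def knn_locate(rss_query, k=3, eps=1e-6):
        d = np.linalg.norm(rss_db - rss_query, axis=1)   # signal-space distances
        idx = np.argsort(d)[:k]                          # k nearest fingerprints
        w = 1.0 / (d[idx] + eps)                         # inverse-distance weights
        return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()

    print("estimated position:", knn_locate(np.array([-45.0, -58.0, -59.0])))
    ```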

  20. Factors affecting the simulated trajectory and intensification of Tropical Cyclone Yasi (2011)

    NASA Astrophysics Data System (ADS)

    Parker, Chelsea L.; Lynch, Amanda H.; Mooney, Priscilla A.

    2017-09-01

    This study investigates the sensitivity of the simulated trajectory, intensification, and forward speed of Tropical Cyclone Yasi to initial conditions, physical parameterizations, and sea surface temperatures. Yasi was a category 5 storm that made landfall in Queensland, Australia in February 2011. A series of simulations were performed using WRF-ARW v3.4.1 driven by ERA-Interim data at the lateral boundaries. To assess these simulations, a new simple skill score is devised to summarize the deviation from observed conditions at landfall. The results demonstrate the sensitivity to initial condition resolution and the need for a new initialization dataset. Ensemble testing of physics parameterizations revealed strong sensitivity to cumulus schemes, with a trade-off between trajectory and intensity accuracy. The Tiedtke scheme produces an accurate trajectory evolution and landfall location. The Kain-Fritsch scheme is associated with larger errors in trajectory due to less active shallow convection over the ocean, leading to warmer temperatures at the 700 mb level and a stronger, more poleward steering flow. However, the Kain-Fritsch scheme produces more accurate intensities and translation speeds. Tiedtke-derived intensities were weaker due to suppression of deep convection by active shallow convection. Accurate representation of the sea surface temperature, through correcting a newly discovered SST lag in reanalysis data or increasing the resolution of the SST data, can improve the simulation. Higher resolution increases relative vorticity and intensity. However, the sea surface boundary had a more pronounced effect on the simulation with the Tiedtke scheme due to its moisture convergence trigger and active shallow convection over the tropical ocean.

  1. Driver-initiated distractions: examining strategic adaptation for in-vehicle task initiation.

    PubMed

    Horrey, William J; Lesch, Mary F

    2009-01-01

    Today, drivers are faced with many in-vehicle activities that are potentially distracting. In many cases, they are not passive recipients of these tasks; rather, drivers decide whether or not (or how) to perform them. In this study, we examined whether drivers, given knowledge of the upcoming road demands, would strategically delay performing in-vehicle activities until demands were reduced. Twenty drivers drove an instrumented van around a closed track that was divided into sections of varying demands and difficulty. Drivers were asked to perform one of four in-vehicle tasks (e.g., phone conversation; read a text message; find an address; pick up an object on the floor); however, they were free to decide when to initiate these tasks, provided they finished them before a given deadline. Although drivers were fully aware of the relative demands of the road, they did not tend to strategically postpone tasks--a finding that was consistent across the different tasks (p > .05). Rather, drivers tended to initiate tasks regardless of the current driving conditions. This strategy frequently led to driving errors. Given the control that drivers have over many in-vehicle distractions, interventions that focus on strategic decisions and planning may have merit.

  2. Conditional Standard Errors of Measurement for Scale Scores.

    ERIC Educational Resources Information Center

    Kolen, Michael J.; And Others

    1992-01-01

    A procedure is described for estimating the reliability and conditional standard errors of measurement of scale scores incorporating the discrete transformation of raw scores to scale scores. The method is illustrated using a strong true score model, and practical applications are described. (SLD)

  3. Residual Stress Measurement and Calibration for A7N01 Aluminum Alloy Welded Joints by Using Longitudinal Critically Refracted ( LCR) Wave Transmission Method

    NASA Astrophysics Data System (ADS)

    Zhu, Qimeng; Chen, Jia; Gou, Guoqing; Chen, Hui; Li, Peng; Gao, W.

    2016-10-01

    Residual stress measurement and control are highly important for the safety of high-speed train structures and are critical for structure design. The longitudinal critically refracted (LCR) wave technique is the most widely used ultrasonic method for measuring residual stress, but its accuracy depends strongly on the test parameters, namely the flight time at the stress-free condition (t0), the stress coefficient (K), and the initial stress (σ0) of the measured materials. The difference in microstructure among the weld zone, heat-affected zone, and base metal (BM) results in divergence of these experimental parameters. However, the majority of researchers use the BM parameters to determine the residual stress in the other zones and ignore the initial stress (σ0) in calibration samples. Therefore, the measured residual stress in different zones often carries large errors, which may result in miscalculation in the safe design of important structures. A serious problem in the ultrasonic estimation of residual stresses is the required separation between the microstructure and the acoustoelastic effects. In this paper, the effects of initial stress and microstructure on the stress coefficient K and the flight time t0 at the stress-free condition have been studied. The residual stress obtained with and without the different corrections was investigated. The results indicate that the residual stresses obtained with correction are more accurate for structure design.
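
    The underlying acoustoelastic relation is, to first order, linear in the flight-time shift, σ = K·(t − t0) + σ0, with K, t0 and σ0 calibrated separately for each zone. A hedged sketch with placeholder parameter values (not the paper's calibration data):

    ```python
    # Zone-specific calibration parameters: (K [MPa/ns], t0 [ns], sigma0 [MPa]).
    # All values are illustrative placeholders.
    ZONE_PARAMS = {
        "base_metal": (2.5, 10000.0, 0.0),
        "haz":        (2.2, 10004.0, 15.0),
        "weld":       (2.0, 10008.0, 25.0),
    }

    def lcr_stress(zone, flight_time_ns):
        # Linear acoustoelastic relation with the zone's own K, t0 and sigma0.
        K, t0, sigma0 = ZONE_PARAMS[zone]
        return K * (flight_time_ns - t0) + sigma0

    for zone in ZONE_PARAMS:
        print(zone, f"{lcr_stress(zone, 10012.0):7.1f} MPa")
    ```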

  4. Position sense at the human elbow joint measured by arm matching or pointing.

    PubMed

    Tsay, Anthony; Allen, Trevor J; Proske, Uwe

    2016-10-01

    Position sense at the human elbow joint has traditionally been measured in blindfolded subjects using a forearm matching task. Here we compare position errors in a matching task with errors generated when the subject uses a pointer to indicate the position of a hidden arm. Evidence from muscle vibration during forearm matching supports a role for muscle spindles in position sense. We have recently shown, using vibration as well as muscle conditioning, which takes advantage of muscle's thixotropic property, that position errors generated in a forearm pointing task were not consistent with a role for muscle spindles. In the present study we have used a form of muscle conditioning, where elbow muscles are co-contracted at the test angle, to further explore differences in position sense measured by matching and pointing. For fourteen subjects, in a matching task where the reference arm had elbow flexor and extensor muscles contracted at the test angle and the indicator arm had its flexors conditioned at 90°, matching errors lay in the direction of flexion by 6.2°. After the same conditioning of the reference arm and extension conditioning of the indicator at 0°, matching errors lay in the direction of extension (5.7°). These errors were consistent with predictions based on a role for muscle spindles in determining forearm matching outcomes. In the pointing task subjects moved a pointer to align it with the perceived position of the hidden arm. After conditioning of the reference arm as before, pointing errors all lay in a more extended direction than the actual position of the arm, by 2.9°-7.3°, a distribution not consistent with a role for muscle spindles. We propose that in pointing, muscle spindles do not play the major role in signalling limb position that they do in matching, and that other sources of sensory input should be given consideration, including afferents from skin and joint.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbee, D; McCarthy, A; Galavis, P

    Purpose: Errors found during initial physics plan checks frequently require replanning and reprinting, resulting in decreased departmental efficiency. Additionally, errors may be missed during physics checks, resulting in potential treatment errors or interruptions. This work presents a process control created using the Eclipse Scripting API (ESAPI) enabling dosimetrists and physicists to detect potential errors in the Eclipse treatment planning system prior to performing any plan approvals or printing. Methods: Potential failure modes for five categories were generated based on available ESAPI (v11) patient object properties: Images, Contours, Plans, Beams, and Dose. An Eclipse script plugin (PlanCheck) was written in C# to check the errors most frequently observed clinically in each of the categories. The PlanCheck algorithms were devised to check technical aspects of plans, such as deliverability (e.g., minimum EDW MUs), in addition to ensuring that policies and procedures relating to planning were being followed. The effect on clinical workflow efficiency was measured by tracking the plan document error rate and plan revision/retirement rates in the Aria database over monthly intervals. Results: The numbers of potential failure modes the PlanCheck script is currently capable of checking for are as follows: Images (6), Contours (7), Plans (8), Beams (17), and Dose (4). Prior to implementation of the PlanCheck plugin, the observed error rates in errored plan documents and in revised/retired plans in the Aria database were 20% and 22%, respectively. Error rates were seen to decrease gradually over time as adoption of the script improved. Conclusion: A process control created using the Eclipse Scripting API enabled plan checks to occur within the planning system, resulting in reduced error rates and improved efficiency. Future work includes: initiating a full FMEA for the planning workflow, extending categories to include additional checks outside of ESAPI via Aria database queries, and eventual automated plan checks.

  6. Left neglect dyslexia and the effect of stimulus duration.

    PubMed

    Arduino, Lisa S; Vallar, Giuseppe; Burani, Cristina

    2006-01-01

    The present study investigated the effects of the duration of the stimulus on the reading performance of right-brain-damaged patients with left neglect dyslexia. Three Italian patients read aloud words and nonwords, under conditions of unlimited time of stimulus exposure and of timed presentation. In the untimed condition, the majority of the patients' errors involved the left side of the letter string (i.e., neglect dyslexia errors). Conversely, in the timed condition, although the overall level of performance decreased, errors were more evenly distributed across the whole letter string (i.e., visual - nonlateralized - errors). This reduction of neglect errors with a reduced time of presentation of the stimulus may reflect the read out of elements of the letter string from a preserved visual storage component, such as iconic memory. Conversely, a time-unlimited presentation of the stimulus may bring about the rightward bias that characterizes the performance of neglect patients, possibly by a capture of the patients' attention by the final (rightward) letters of the string.

  7. Structural interpretation in composite systems using powder X-ray diffraction: applications of error propagation to the pair distribution function.

    PubMed

    Moore, Michael D; Shi, Zhenqi; Wildfong, Peter L D

    2010-12-01

    To develop a method for drawing statistical inferences from differences between multiple experimental pair distribution function (PDF) transforms of powder X-ray diffraction (PXRD) data. The appropriate treatment of initial PXRD error estimates using traditional error propagation algorithms was tested using Monte Carlo simulations on amorphous ketoconazole. An amorphous felodipine:polyvinyl pyrrolidone:vinyl acetate (PVPva) physical mixture was prepared to define an error threshold. Co-solidified products of felodipine:PVPva and terfenadine:PVPva were prepared using a melt-quench method and subsequently analyzed using PXRD and PDF. Differential scanning calorimetry (DSC) was used as an additional characterization method. The appropriate manipulation of initial PXRD error estimates through the PDF transform was confirmed using the Monte Carlo simulations for amorphous ketoconazole. The felodipine:PVPva physical mixture PDF analysis determined ±3σ to be an appropriate error threshold. Using the PDF and error propagation principles, the felodipine:PVPva co-solidified product was determined to be completely miscible, and the terfenadine:PVPva co-solidified product, despite having the appearance of an amorphous molecular solid dispersion by DSC, was determined to be phase-separated. Statistically based inferences were successfully drawn from PDF transforms of PXRD patterns obtained from composite systems. The principles applied herein may be universally adapted to many different systems and provide a fundamentally sound basis for drawing structural conclusions from PDF studies.
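
    The Monte Carlo test of the propagation can be sketched as follows; the sine-transform stand-in for the PDF transform and the noise model are assumptions for illustration, not the paper's implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    q = np.linspace(0.5, 20.0, 256)                # scattering vector (toy grid)
    intensity = np.exp(-0.05 * q) * (1 + 0.3 * np.sin(3 * q))
    sigma_i = 0.02 * np.sqrt(intensity)            # initial per-point error estimates

    def transform(signal):
        # Stand-in for the PDF transform: a sine transform of the signal.
        r = np.linspace(0.5, 10.0, 128)
        return np.trapz(signal[None, :] * np.sin(np.outer(r, q)), q, axis=1)

    # Perturb the pattern with its error estimates, transform each realization,
    # and take the point-wise spread as the propagated uncertainty.
    draws = np.array([transform(intensity + rng.normal(0, sigma_i))
                      for _ in range(500)])
    pdf_sigma = draws.std(axis=0)                  # propagated ±1σ per r point
    print("median propagated sigma:", np.median(pdf_sigma))
    ```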

  8. Error reporting in transfusion medicine at a tertiary care centre: a patient safety initiative.

    PubMed

    Elhence, Priti; Shenoy, Veena; Verma, Anupam; Sachan, Deepti

    2012-11-01

    Errors in the transfusion process can compromise patient safety. A study was undertaken at our center to identify the errors in the transfusion process and their causes in order to reduce their occurrence by corrective and preventive actions. All near miss, no harm events and adverse events reported in the 'transfusion process' during 1 year study period were recorded, classified and analyzed at a tertiary care teaching hospital in North India. In total, 285 transfusion related events were reported during the study period. Of these, there were four adverse (1.5%), 10 no harm (3.5%) and 271 (95%) near miss events. Incorrect blood component transfusion rate was 1 in 6031 component units. ABO incompatible transfusion rate was one in 15,077 component units issued or one in 26,200 PRBC units issued and acute hemolytic transfusion reaction due to ABO incompatible transfusion was 1 in 60,309 component units issued. Fifty-three percent of the antecedent near miss events were bedside events. Patient sample handling errors were the single largest category of errors (n=94, 33%) followed by errors in labeling and blood component handling and storage in user areas. The actual and near miss event data obtained through this initiative provided us with clear evidence about latent defects and critical points in the transfusion process so that corrective and preventive actions could be taken to reduce errors and improve transfusion safety.

  9. Effects of Error Experience When Learning to Simulate Hypernasality

    ERIC Educational Resources Information Center

    Wong, Andus W.-K.; Tse, Andy C.-Y.; Ma, Estella P.-M.; Whitehill, Tara L.; Masters, Rich S. W.

    2013-01-01

    Purpose: The purpose of this study was to evaluate the effects of error experience on the acquisition of hypernasal speech. Method: Twenty-eight healthy participants were asked to simulate hypernasality in either an "errorless learning" condition (in which the possibility for errors was limited) or an "errorful learning"…

  10. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    ERIC Educational Resources Information Center

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  11. New algorithms for motion error detection of numerical control machine tool by laser tracking measurement on the basis of GPS principle.

    PubMed

    Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie

    2018-01-01

    As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station and time-sharing measurement with a laser tracker is introduced on the basis of the global positioning system (GPS) principle in this paper. For the proposed method, how to accurately determine the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model of detecting the motion error of a machine tool with this method, the analytical algorithm for base station calibration and measuring point determination is deduced without selecting an initial iterative value in the calculation. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which produces a distorted result. In order to overcome this limitation of the original algorithm, an improved analytical algorithm is also derived. Meanwhile, the calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and some iterative algorithms, such as the Gauss-Newton algorithm and the Levenberg-Marquardt algorithm. The experiment further verifies the feasibility and effectiveness of the improved algorithm. In addition, the different motion areas of the machine tool have a certain influence on the calibration accuracy of the base station, and the corresponding influence of measurement error on the calibration result of the base station, which depends on the condition number of the coefficient matrix, is analyzed.
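
    The determination of a measuring point from multi-station distance measurements is a multilateration least-squares problem; the sketch below uses the Gauss-Newton iteration mentioned above as a comparison algorithm, with an illustrative station layout and noise level:

    ```python
    import numpy as np

    stations = np.array([[0, 0, 0], [3, 0, 0.2], [0, 3, 0.1], [3, 3, 2.0]], float)
    p_true = np.array([1.2, 2.0, 0.8])
    rng = np.random.default_rng(6)
    d_meas = np.linalg.norm(stations - p_true, axis=1) + rng.normal(0, 1e-4, 4)

    p = np.array([1.5, 1.5, 1.5])                  # initial guess
    for _ in range(10):                            # Gauss-Newton iterations
        diff = p - stations
        d = np.linalg.norm(diff, axis=1)           # predicted distances
        J = diff / d[:, None]                      # Jacobian of d_i w.r.t. p
        r = d - d_meas                             # residuals
        p -= np.linalg.lstsq(J, r, rcond=None)[0]  # normal-equation step
    print("estimated point:", np.round(p, 4))
    ```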

  12. New algorithms for motion error detection of numerical control machine tool by laser tracking measurement on the basis of GPS principle

    NASA Astrophysics Data System (ADS)

    Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie

    2018-01-01

    As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station and time-sharing measurement with a laser tracker is introduced on the basis of the global positioning system (GPS) principle in this paper. For the proposed method, accurately determining the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of the motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model of detecting the motion error of a machine tool with this method, the analytical algorithm for base station calibration and measuring point determination is deduced without having to select an initial iterative value. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which produces a distorted result. To overcome this limitation of the original algorithm, an improved analytical algorithm is also derived. Meanwhile, the calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and of iterative algorithms such as the Gauss-Newton and Levenberg-Marquardt algorithms. The experiment further verifies the feasibility and effectiveness of the improved algorithm. In addition, the different motion areas of the machine tool have a certain influence on the calibration accuracy of the base station, and the influence of measurement error on the calibration result, which depends on the condition number of the coefficient matrix, is analyzed.

  13. Design and simulation of sensor networks for tracking Wifi users in outdoor urban environments

    NASA Astrophysics Data System (ADS)

    Thron, Christopher; Tran, Khoi; Smith, Douglas; Benincasa, Daniel

    2017-05-01

    We present a proof-of-concept investigation into the use of sensor networks for tracking of WiFi users in outdoor urban environments. Sensors are fixed and are capable of measuring signal power from users' WiFi devices. We derive a maximum likelihood estimate for user location based on instantaneous sensor power measurements. The algorithm takes into account the effects of power control, and is self-calibrating in that the signal power model used by the location algorithm is adjusted and improved as part of the operation of the network. Simulation results to verify the system's performance are presented. The simulation scenario is based on a 1.5 km2 area of lower Manhattan. The self-calibration mechanism was verified for initial rms (root mean square) errors of up to 12 dB in the channel power estimates: rms errors were reduced by over 60% in 300 track-hours, in systems with limited power control. Under typical operating conditions with (without) power control, location rms errors are about 8.5 (5) meters, with 90% accuracy within 9 (13) meters, for both pedestrian and vehicular users. The distance error distributions for smaller distances (<30 m) are well approximated by an exponential distribution, while the distributions for large distance errors have fat tails. The issue of optimal sensor placement in the sensor network is also addressed. We specify a linear programming algorithm for determining sensor placement for networks with a reduced number of sensors. In our test case, the algorithm produces a network with 18.5% fewer sensors and comparable location-estimation accuracy. Finally, we discuss future research directions for improving the accuracy and capabilities of sensor network systems in urban environments.
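
    To make the maximum-likelihood step concrete, here is a minimal Python sketch (not the paper's implementation) that locates a user by grid search under an assumed log-distance path-loss model with Gaussian shadowing; the model constants, sensor layout, and grid resolution are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      # Assumed log-distance path-loss model: P_rx = P0 - 10*n*log10(d) + shadowing.
      # The paper's self-calibrated channel model is richer; these constants are
      # illustrative only.
      P0, n_exp, sigma = -40.0, 3.0, 6.0     # dBm at 1 m, exponent, shadowing (dB)

      sensors = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
      user = np.array([37.0, 62.0])

      def rx_power(pos):
          d = np.maximum(np.linalg.norm(sensors - pos, axis=1), 1.0)
          return P0 - 10.0 * n_exp * np.log10(d)

      meas = rx_power(user) + rng.normal(0.0, sigma, len(sensors))

      # With i.i.d. Gaussian shadowing in dB, the maximum-likelihood location is
      # the grid point minimizing the sum of squared dB residuals.
      xs = np.arange(0.0, 100.5, 0.5)
      grid = np.array([[x, y] for x in xs for y in xs])
      cost = np.array([np.sum((rx_power(g) - meas) ** 2) for g in grid])
      print("true:", user, "ML estimate:", grid[np.argmin(cost)])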

  14. Short Haul Civil Tiltrotor Contingency Power System Preliminary Design

    NASA Technical Reports Server (NTRS)

    Eames, David J. H.

    2006-01-01


  15. Discharge Chamber Plasma Structure of a 30-cm NSTAR-Type Ion Engine

    NASA Technical Reports Server (NTRS)

    Herman, Daniel A.; Gallimore, Alec D.

    2006-01-01

    Single Langmuir probe measurements are presented over a two-dimensional array of locations in the near Discharge Cathode Assembly (DCA) region of a 30-cm diameter ring cusp ion thruster over a range of thruster operating conditions encompassing the high-power half of the NASA throttling table. The Langmuir probe data were analyzed with two separate methods. All data were analyzed initially assuming an electron population consisting of Maxwellian electrons only. The on-axis data were then analyzed assuming both Maxwellian and primary electrons. Discharge plasma data taken with beam extraction exhibit a broadening of the higher electron temperature plume boundary compared to similar discharge conditions without beam extraction. The opposite effect is evident with the electron/ion number density, as the data without beam extraction appear to be more collimated than the corresponding data with beam extraction. Primary electron energy and number densities are presented for one operating condition, giving an order of magnitude of their value and the error associated with this calculation.

  16. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    NASA Astrophysics Data System (ADS)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online (i.e., within the model) correction scheme following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and, at first, ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring only a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in the GFS to reduce both systematic and random errors. Because short-term error growth is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct the model online. Preliminary experiments with the GFS that correct temperature and specific humidity online show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements, which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
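
    The core of the online correction is easy to state in code. The toy Python sketch below (a scalar stand-in, not the GFS implementation) estimates the bias forcing as the time-mean analysis increment divided by 6 hr and adds it to the model tendency; the bias value and noise level are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(0)

      # True (unknown) model bias tendency: the toy model drifts -0.05 K/hr.
      true_bias = -0.05                      # K/hr, an invented value
      n_cycles, dt = 400, 6.0                # analysis cycles, hours per cycle

      # Each analysis increment ~ analysis minus 6-hr background, i.e. it undoes
      # the accumulated drift (plus analysis noise).
      increments = -dt * true_bias + rng.normal(0.0, 0.2, n_cycles)

      # Time-mean increment divided by 6 hr, assuming linear short-range error
      # growth, gives the empirical forcing term.
      correction = increments.mean() / dt

      def corrected_tendency(physics_tendency):
          # Online correction: add the empirical forcing to the model tendency.
          return physics_tendency + correction

      print(f"estimated forcing {correction:+.3f} K/hr (ideal {-true_bias:+.3f})")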

  17. Embracing Errors in Simulation-Based Training: The Effect of Error Training on Retention and Transfer of Central Venous Catheter Skills.

    PubMed

    Gardner, Aimee K; Abdelfattah, Kareem; Wiersch, John; Ahmed, Rami A; Willis, Ross E

    2015-01-01

    Error management training is an approach that encourages exposure to errors during initial skill acquisition so that learners can be equipped with important error identification, management, and metacognitive skills. The purpose of this study was to determine how an error-focused training program affected performance, retention, and transfer of central venous catheter (CVC) placement skills when compared with traditional training methodologies. Surgical interns (N = 30) participated in a 1-hour session featuring an instructional video and practice performing internal jugular (IJ) and subclavian (SC) CVC placement with guided instruction. All interns underwent baseline knowledge and skill assessment for IJ and SC (pretest) CVC placement; watched a "correct-only" (CO) or "correct + error" (CE) instructional video; practiced for 30 minutes; and were posttested on knowledge and IJ and SC CVC placement. Skill retention and transfer (femoral CVC placement) were assessed 30 days later. All skills tests (pretest, posttest, and transfer) were video-recorded and deidentified for evaluation by a single blinded instructor using a validated 17-item checklist. Both groups exhibited significant improvements (p < 0.001) in knowledge and skills after the 1-hour training program, but the increase in items achieved on the performance checklist did not differ between conditions (CO: IJ Δ = 35%, SC Δ = 29%; CE: IJ Δ = 36%, SC Δ = 33%). However, 1 month later, the CO group exhibited significant declines in skill retention on IJ CVC placement (from 68% at posttraining to 44% at day 30; p < 0.05) and SC CVC placement (from 63% at posttraining to 49% at day 30; p < 0.05), whereas the CE group did not have significant decreases in performance. The CE group performed significantly better on femoral CVC placement (i.e., the transfer task; 62% vs 38%; p < 0.01) and on 2 of the 3 complication scenarios (p < 0.05) when compared with the CO group. These data indicate that incorporating error-based activities and discussions into training programs can be beneficial for skill retention and transfer. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  18. Measurement of the Errors of Service Altimeter Installations During Landing-Approach and Take-Off Operations

    NASA Technical Reports Server (NTRS)

    Gracey, William; Jewel, Joseph W., Jr.; Carpenter, Gene T.

    1960-01-01

    The overall errors of the service altimeter installations of a variety of civil transport, military, and general-aviation airplanes have been experimentally determined during normal landing-approach and take-off operations. The average height above the runway at which the data were obtained was about 280 feet for the landings and about 440 feet for the take-offs. An analysis of the data obtained from 196 airplanes during 415 landing approaches and from 70 airplanes during 152 take-offs showed that: 1. The overall error of the altimeter installations in the landing-approach condition had a probable value (50 percent probability) of +/- 36 feet and a maximum probable value (99.7 percent probability) of +/- 159 feet with a bias of +10 feet. 2. The overall error in the take-off condition had a probable value of +/- 47 feet and a maximum probable value of +/- 207 feet with a bias of -33 feet. 3. The overall errors of the military airplanes were generally larger than those of the civil transports in both the landing-approach and take-off conditions. In the landing-approach condition the probable error and the maximum probable error of the military airplanes were +/- 43 and +/- 189 feet, respectively, with a bias of +15 feet, whereas those for the civil transports were +/- 22 and +/- 96 feet, respectively, with a bias of +1 foot. 4. The bias values of the error distributions (+10 feet for the landings and -33 feet for the take-offs) appear to represent a measure of the hysteresis characteristics (after effect and recovery) and friction of the instrument and the pressure lag of the tubing-instrument system.
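
    As a consistency check on these statistics: if the installation errors are roughly Gaussian, a probable error (50 percent probability) of +/- 36 feet corresponds to a standard deviation of about 36/0.6745, or 53 feet, so the 99.7 percent (three sigma) bound is about +/- 160 feet, in close agreement with the reported +/- 159 feet; likewise for the take-off figures, 47/0.6745 gives about 70 feet and a three-sigma bound of about +/- 209 feet versus the reported +/- 207 feet.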

  19. Research on the Error Characteristics of a 110 kV Optical Voltage Transformer under Three Conditions: In the Laboratory, Off-Line in the Field and During On-Line Operation.

    PubMed

    Xiao, Xia; Hu, Haoliang; Xu, Yan; Lei, Min; Xiong, Qianzhu

    2016-08-16

    Optical voltage transformers (OVTs) have been applied in power systems. When performing accuracy tests of OVTs, large differences exist between the electromagnetic environment and temperature variation in the laboratory and those on-site. Therefore, OVTs may display different error characteristics under different conditions. In this paper, OVT prototypes with typical structures were selected and tested for their error characteristics with the same testing equipment and testing method. The basic accuracy, the additional error caused by temperature and by the adjacent phase in the laboratory, the accuracy in the field off-line, and the real-time monitoring error during on-line operation were tested. The error characteristics under the three conditions (laboratory, off-line in the field, and during on-line operation) were compared and analyzed. The results showed that the effects of the transportation process, the electromagnetic environment and the adjacent phase on the accuracy of OVTs can be ignored for accuracy class 0.2, but the error characteristics of OVTs depend on the environmental temperature and are sensitive to the temperature gradient. The temperature characteristics during on-line operation were significantly superior to those observed in the laboratory.

  20. Debiasing affective forecasting errors with targeted, but not representative, experience narratives.

    PubMed

    Shaffer, Victoria A; Focella, Elizabeth S; Scherer, Laura D; Zikmund-Fisher, Brian J

    2016-10-01

    To determine whether representative experience narratives (describing a range of possible experiences) or targeted experience narratives (targeting the direction of forecasting bias) can reduce affective forecasting errors, or errors in predictions of experiences. In Study 1, participants (N=366) were surveyed about their experiences with 10 common medical events. Those who had never experienced the event provided ratings of predicted discomfort and those who had experienced the event provided ratings of actual discomfort. Participants making predictions were randomly assigned to either the representative experience narrative condition or the control condition in which they made predictions without reading narratives. In Study 2, participants (N=196) were again surveyed about their experiences with these 10 medical events, but participants making predictions were randomly assigned to either the targeted experience narrative condition or the control condition. Affective forecasting errors were observed in both studies. These forecasting errors were reduced with the use of targeted experience narratives (Study 2) but not representative experience narratives (Study 1). Targeted, but not representative, narratives improved the accuracy of predicted discomfort. Public collections of patient experiences should favor stories that target affective forecasting biases over stories representing the range of possible experiences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    NASA Astrophysics Data System (ADS)

    Rai, P. K.; Tripathi, S.

    2017-12-01

    The conventional way of estimating the parameters of a differential equation is to minimize the error between the observations and their estimates, where the estimates are produced from the forward solution (numerical or analytical) of the differential equation for an assumed set of parameters. Parameter estimation using the conventional approach incurs high computational cost, requires setting up initial and boundary conditions, and requires forming difference equations when the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of ordinary differential equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to partial differential equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting up initial and boundary conditions explicitly, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
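
    The gradient-matching idea generalizes beyond this study. The Python sketch below (illustrated on a simple logistic ODE for brevity, not on the Richards equation) fits a smoother to noisy observations and picks the parameter whose model right-hand side best matches the smoothed derivative, with no forward solve; the spline smoother and logistic model are assumptions of the example.

      import numpy as np
      from scipy.interpolate import UnivariateSpline
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(1)
      k_true = 0.8
      t = np.linspace(0, 10, 60)
      y = 1.0 / (1.0 + 9.0 * np.exp(-k_true * t))      # logistic solution
      y_obs = y + rng.normal(0, 0.01, t.size)           # noisy observations

      spl = UnivariateSpline(t, y_obs, s=0.01)          # data smoother
      dy = spl.derivative()(t)                          # estimated dy/dt

      def mismatch(k):
          # Squared gap between smoothed derivative and model RHS f(y; k).
          rhs = k * spl(t) * (1.0 - spl(t))
          return np.sum((dy - rhs) ** 2)

      k_hat = minimize_scalar(mismatch, bounds=(0.1, 2.0), method="bounded").x
      print(f"estimated k = {k_hat:.3f} (true {k_true})")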

  2. Hysteresis compensation of the Prandtl-Ishlinskii model for piezoelectric actuators using modified particle swarm optimization with chaotic map.

    PubMed

    Long, Zhili; Wang, Rui; Fang, Jiwen; Dai, Xufei; Li, Zuohua

    2017-07-01

    Piezoelectric actuators invariably exhibit hysteresis nonlinearities that tend to become significant under the open-loop condition and can cause oscillations and errors in nanometer-positioning tasks. Chaotic map modified particle swarm optimization (MPSO) is proposed and implemented to identify the Prandtl-Ishlinskii model for piezoelectric actuators. Hysteresis compensation is attained through application of an inverse Prandtl-Ishlinskii model, whose parameters are formulated from the original model identified with chaotic map MPSO. To strengthen the diversity and improve the searching ergodicity of the swarm, an initialization method with adaptive inertia weight based on a chaotic map is proposed. To show that the swarm converges better than with stochastic initialization and to attain an optimal particle swarm optimization algorithm, the parameters of a proportional-integral-derivative controller are searched by self-tuning, and the simulated results are used to verify the search effectiveness of chaotic map MPSO. The results show that chaotic map MPSO is superior to its competitors for identifying the Prandtl-Ishlinskii model and that the inverse Prandtl-Ishlinskii model can provide hysteresis compensation under different conditions in a simple and effective manner.
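
    For reference, the classical Prandtl-Ishlinskii model that the swarm identifies is a weighted superposition of play operators. The Python sketch below implements that forward model only; the threshold and weight values are illustrative, not identified parameters from the paper.

      import numpy as np

      def prandtl_ishlinskii(x, thresholds, weights):
          # Classical PI hysteresis model: weighted sum of play operators.
          # thresholds r_i >= 0 and weights w_i are the parameters that an
          # optimizer such as chaotic map MPSO would identify.
          y = np.zeros(len(thresholds))       # play-operator states
          out = np.empty_like(x)
          for k, u in enumerate(x):
              # Play operator update: y_i = max(u - r_i, min(u + r_i, y_i)).
              y = np.maximum(u - thresholds, np.minimum(u + thresholds, y))
              out[k] = weights @ y
          return out

      t = np.linspace(0, 4 * np.pi, 1000)
      u = np.sin(t) * np.exp(-0.05 * t)       # decaying sine input sweep
      r = np.array([0.0, 0.1, 0.2, 0.4])      # illustrative thresholds
      w = np.array([1.0, 0.5, 0.3, 0.2])      # illustrative weights
      disp = prandtl_ishlinskii(u, r, w)      # exhibits rate-independent hysteresis

    The inverse compensator used for hysteresis compensation has the same play-operator structure, with analytically transformed thresholds and weights.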

  3. An open loop guidance architecture for navigationally robust on-orbit docking

    NASA Technical Reports Server (NTRS)

    Chern, Hung-Sheng

    1995-01-01

    The development of an open-loop guidance architecture is outlined for autonomous rendezvous and docking (AR&D) missions, to determine whether the Global Positioning System (GPS) can be used in place of optical sensors for relative initial position determination of the chase vehicle. Feasible command trajectories for one-, two-, and three-impulse AR&D maneuvers are determined using constrained trajectory optimization. Early AR&D command trajectory results suggest that docking accuracies are most sensitive to vertical position errors at the initial condition of the chase vehicle. Thus, a feasible command trajectory is based on maximizing the size of the locus of initial vertical positions for which a fixed sequence of impulses will translate the chase vehicle into the target while satisfying docking accuracy requirements. Documented accuracies are used to determine whether relative GPS can achieve the vertical position error requirements of the impulsive command trajectories. Preliminary development of a thruster management system for the Cargo Transfer Vehicle (CTV) based on optimal throttle settings is presented to complete the guidance architecture. Results show that a guidance architecture based on two-impulse maneuvers generated the best performance in terms of initial position error and total velocity change for the chase vehicle.

  4. The early results of excimer laser photorefractive keratectomy for compound myopic astigmatism.

    PubMed

    Horgan, S E; Pearson, R V

    1996-01-01

    An excimer laser (VISX Twenty/Twenty Excimer Refractive System) was used to treat 51 eyes for myopia and astigmatism. Uncorrected pretreatment visual acuity was between 6/18 and 6/60 (log unit +0.45 to +1.0) in 59% and worse than 6/60 in 29%. The mean pretreatment spherical refractive error was -4.05 dioptre (range 1.25 to 13.25), and the mean pretreatment cylindrical error was -0.97 dioptre (range 0.25 to 4.00). Uncorrected visual acuity measured 6/6 or better (log unit 0.0 or less) in 80% at three months, and averaged 6/6 for all eyes at six months post-treatment, with 75% of eyes obtaining 6/6 or better. The mean post-treatment spherical error varied with pretreatment values, with a mean sphere of -0.20 dioptre for eyes initially less than -2.00 dioptre, -0.40 dioptre for those between -2.25 and -3.00, -0.71 dioptre for those between -4.25 and -5.00, and -1.15 dioptre for eyes initially above -6.25 dioptre. The vectored cylindrical correction exhibited a response proportional to initial refraction, with a mean post-treatment cylinder of -1.83 dioptre for eyes formerly averaging -3.08 dioptre, -0.55 dioptre for eyes initially averaging -1.63 dioptre, and -0.51 dioptre for eyes initially averaging -0.67 dioptre. Vector analysis of post-treatment astigmatism showed 58% of eyes exhibiting 51 or more degrees of axis shift, although 34% of eyes remained within 20 degrees of their pretreatment axis. An effective reduction in spherocylindrical error was achieved in all eyes, although axis misalignment was a common event.

  5. Space shuttle launch era spacecraft injection errors and DSN initial acquisition

    NASA Technical Reports Server (NTRS)

    Khatib, A. R.; Berman, A. L.; Wackley, J. A.

    1981-01-01

    The initial acquisition of a spacecraft by the Deep Space Network (DSN) is a critical mission event, because the health and trajectory of a spacecraft must be evaluated rapidly in case immediate corrective action is required. Further, the DSN initial acquisition is always complicated by the most extreme tracking rates of the mission. The DSN initial acquisition characteristics will change considerably in the upcoming space shuttle launch era. This paper addresses how given injection errors at spacecraft separation from the upper stage launch vehicle (carried into orbit by the space shuttle) impact the DSN initial acquisition, and how this information can be factored into injection accuracy requirements to be levied on the Space Transportation System (STS). The approach developed begins with the DSN initial acquisition parameters, generates a covariance matrix, and maps this covariance matrix backward to the spacecraft injection, thereby greatly simplifying the task of levying accuracy requirements on the STS by providing such requirements in a format both familiar and convenient to the STS.

  6. 44 CFR 67.6 - Basis of appeal.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... absolute (except where mathematical or measurement error or changed physical conditions can be demonstrated... a mathematical or measurement error or changed physical conditions, then the specific source of the... registered professional engineer or licensed land surveyor, of the new data necessary for FEMA to conduct a...

  7. Medication Errors in Pediatric Anesthesia: A Report From the Wake Up Safe Quality Improvement Initiative.

    PubMed

    Lobaugh, Lauren M Y; Martin, Lizabeth D; Schleelein, Laura E; Tyler, Donald C; Litman, Ronald S

    2017-09-01

    Wake Up Safe is a quality improvement initiative of the Society for Pediatric Anesthesia that contains a deidentified registry of serious adverse events occurring in pediatric anesthesia. The aim of this study was to describe and characterize reported medication errors to find common patterns amenable to preventative strategies. In September 2016, we analyzed approximately 6 years' worth of medication error events reported to Wake Up Safe. Medication errors were classified by: (1) medication category; (2) error type by phase of administration: prescribing, preparation, or administration; (3) bolus or infusion error; (4) provider type and level of training; (5) harm as defined by the National Coordinating Council for Medication Error Reporting and Prevention; and (6) perceived preventability. From 2010 to the time of our data analysis in September 2016, 32 institutions had joined and submitted data on 2087 adverse events during 2,316,635 anesthetics. These reports contained details of 276 medication errors, which comprised the third highest category of events behind cardiac- and respiratory-related events. Medication errors most commonly involved opioids and sedative/hypnotics. When categorized by phase of handling, 30 events occurred during preparation, 67 during prescribing, and 179 during administration. The most common error type was accidental administration of the wrong dose (N = 84), followed by syringe swap (accidental administration of the wrong syringe, N = 49). Fifty-seven (21%) reported medication errors involved medications prepared as infusions, as opposed to one-time bolus administrations. Medication errors were committed by all types of anesthesia providers, most commonly by attendings. Over 80% of reported medication errors reached the patient, and more than half of these events caused patient harm. Fifteen events (5%) required a life-sustaining intervention. Nearly all cases (97%) were judged to be either likely or certainly preventable. Our findings characterize the most common types of medication errors in pediatric anesthesia practice and provide guidance on future preventative strategies. Many of these errors will be almost entirely preventable with the use of prefilled medication syringes to avoid accidental ampule swap, bar-coding at the point of medication administration to prevent syringe swap and to confirm the proper dose, and 2-person checking of medication infusions for accuracy.

  8. Shared investment projects and forecasting errors: setting framework conditions for coordination and sequencing data quality activities.

    PubMed

    Leitner, Stephan; Brauneis, Alexander; Rausch, Alexandra

    2015-01-01

    In this paper, we investigate the impact of inaccurate forecasting on the coordination of distributed investment decisions. In particular, by setting up a computational multi-agent model of a stylized firm, we investigate the case of investment opportunities that are mutually carried out by organizational departments. The forecasts of concern pertain to the initial amount of money necessary to launch and operate an investment opportunity, to the expected intertemporal distribution of cash flows, and to the departments' efficiency in operating the investment opportunity at hand. We propose a budget allocation mechanism for coordinating such distributed decisions. The paper provides guidance on how to set framework conditions, in terms of the number of investment opportunities considered in one round of funding and the number of departments operating one investment opportunity, so that the coordination mechanism is highly robust to forecasting errors. Furthermore, we show that, in some setups, a certain extent of misforecasting is desirable from the firm's point of view, as it supports the achievement of the corporate objective of value maximization. We then address the question of how to improve forecasting quality in the best possible way, and provide policy advice on how to sequence activities for improving forecasting quality so that the robustness of the coordination mechanism to errors increases in the best possible way. At the same time, we show that wrong decisions regarding the sequencing can lead to a decrease in robustness. Finally, we conduct a comprehensive sensitivity analysis and show that, in particular for relatively good forecasters, most of our results are robust to changes in the parameters of our multi-agent simulation model.

  9. Shared Investment Projects and Forecasting Errors: Setting Framework Conditions for Coordination and Sequencing Data Quality Activities

    PubMed Central

    Leitner, Stephan; Brauneis, Alexander; Rausch, Alexandra

    2015-01-01

    In this paper, we investigate the impact of inaccurate forecasting on the coordination of distributed investment decisions. In particular, by setting up a computational multi-agent model of a stylized firm, we investigate the case of investment opportunities that are mutually carried out by organizational departments. The forecasts of concern pertain to the initial amount of money necessary to launch and operate an investment opportunity, to the expected intertemporal distribution of cash flows, and to the departments’ efficiency in operating the investment opportunity at hand. We propose a budget allocation mechanism for coordinating such distributed decisions. The paper provides guidance on how to set framework conditions, in terms of the number of investment opportunities considered in one round of funding and the number of departments operating one investment opportunity, so that the coordination mechanism is highly robust to forecasting errors. Furthermore, we show that—in some setups—a certain extent of misforecasting is desirable from the firm’s point of view, as it supports the achievement of the corporate objective of value maximization. We then address the question of how to improve forecasting quality in the best possible way, and provide policy advice on how to sequence activities for improving forecasting quality so that the robustness of the coordination mechanism to errors increases in the best possible way. At the same time, we show that wrong decisions regarding the sequencing can lead to a decrease in robustness. Finally, we conduct a comprehensive sensitivity analysis and show that—in particular for relatively good forecasters—most of our results are robust to changes in the parameters of our multi-agent simulation model. PMID:25803736

  10. DNA Damage Levels Determine Cyclobutyl Pyrimidine Dimer Repair Mechanisms in Alfalfa Seedlings.

    PubMed Central

    Quaite, F. E.; Takayanagi, S.; Ruffini, J.; Sutherland, J. C.; Sutherland, B. M.

    1994-01-01

    Ultraviolet radiation in sunlight damages DNA in plants, but little is understood about the types, lesion capacity, and coordination of repair pathways. We challenged intact alfalfa seedlings with UV doses that induced different initial levels of cyclobutyl pyrimidine dimers and measured repair by excision and photoreactivation. By using alkaline gel electrophoresis of nonradioactive DNAs treated with a cyclobutyl pyrimidine dimer-specific UV endonuclease, we quantitated ethidium-stained DNA by electronic imaging and calculated lesion frequencies from the number average molecular lengths. At low initial dimer frequencies (less than ~30 dimers per million bases), the seedlings used only photoreactivation to repair dimers; excision repair was not significant. At higher damage levels, both excision and photorepair contributed significantly. This strategy would allow plants with low damage levels to use error-free repair requiring only an external light energy source, whereas seedlings subjected to higher damage frequencies could call on additional repair processes requiring cellular energy. Characterization of repair in plants thus requires an investigation of a range of conditions, including the level of initial damage. PMID:12244228

  11. Attitude control with realization of linear error dynamics

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.; Bach, Ralph E.

    1993-01-01

    An attitude control law is derived to realize linear unforced error dynamics with the attitude error defined in terms of rotation group algebra (rather than vector algebra). Euler parameters are used in the rotational dynamics model because they are globally nonsingular, but only the minimal three Euler parameters are used in the error dynamics model because they have no nonlinear mathematical constraints to prevent the realization of linear error dynamics. The control law is singular only when the attitude error angle is exactly pi rad about any eigenaxis, and a simple intuitive modification at the singularity allows the control law to be used globally. The forced error dynamics are nonlinear but stable. Numerical simulation tests show that the control law performs robustly for both initial attitude acquisition and attitude control.

  12. MEADERS: Medication Errors and Adverse Drug Event Reporting system.

    PubMed

    Zafar, Atif

    2007-10-11

    The Agency for Healthcare Research and Quality (AHRQ) recently funded the PBRN Resource Center to develop a system for reporting ambulatory medication errors. Our goal was to develop a usable system that practices could use internally to track errors. We initially performed a comprehensive literature review of what is currently available. Then, using a combination of expert panel meetings and iterative development, we designed an instrument for ambulatory medication error reporting and created a reporting system based both in MS Access 2003 and on the web using MS ASP.NET 2.0 technologies.

  13. Impact of confidence number on accuracy of the SureSight Vision Screener.

    PubMed

    2010-02-01

    To assess the relation between the confidence number provided by the Welch Allyn SureSight Vision Screener and screening accuracy, and to determine whether repeated testing to achieve a higher confidence number improves screening accuracy in pre-school children. Lay and nurse screeners screened 1452 children enrolled in the Vision in Preschoolers (VIP) Phase II Study. All children also underwent a comprehensive eye examination. Using statistical comparison of proportions, we examined sensitivity and specificity for detecting any ocular condition targeted for detection in the VIP study, and conditions grouped by severity and by type (amblyopia, strabismus, significant refractive error, and unexplained decreased visual acuity), among children who had confidence numbers ≤4 (retest necessary), 5 (retest if possible), or ≥6 (acceptable). Among the 687 (47.3%) children who had repeated testing by either lay or nurse screeners because of a low confidence number (<6) for one or both eyes in the initial testing, the same analyses were also conducted to compare results between the initial reading and the repeated-test reading with the highest confidence number in the same child. These analyses were based on the failure criteria associated with 90% specificity for detecting any VIP condition in VIP Phase II. A lower confidence number category was associated with higher sensitivity (0.71, 0.65, and 0.59 for ≤4, 5, and ≥6, respectively; p = 0.04) but no statistical difference in specificity (0.85, 0.85, and 0.91; p = 0.07) for detecting any VIP-targeted condition. Children with any VIP-targeted condition were as likely to be detected using the initial confidence number reading as using the higher confidence number reading from repeated testing. A higher confidence number obtained during screening with the SureSight Vision Screener is not associated with better screening accuracy. Repeated testing to reach the manufacturer's recommended minimum value is not helpful in pre-school vision screening.

  14. An intravenous medication safety system: preventing high-risk medication errors at the point of care.

    PubMed

    Hatcher, Irene; Sullivan, Mark; Hutchinson, James; Thurman, Susan; Gaffney, F Andrew

    2004-10-01

    Improving medication safety at the point of care--particularly for high-risk drugs--is a major concern of nursing administrators. The medication errors most likely to cause harm are administration errors related to infusion of high-risk medications. An intravenous medication safety system is designed to prevent high-risk infusion medication errors and to capture continuous quality improvement data for best practice improvement. Initial testing with 50 systems in 2 units at Vanderbilt University Medical Center revealed that, even in the presence of a fully mature computerized prescriber order-entry system, the new safety system averted 99 potential infusion errors in 8 months.

  15. Multiple Cognitive Control Effects of Error Likelihood and Conflict

    PubMed Central

    Brown, Joshua W.

    2010-01-01

    Recent work on cognitive control has suggested a variety of performance monitoring functions of the anterior cingulate cortex, such as errors, conflict, error likelihood, and others. Given the variety of monitoring effects, a corresponding variety of control effects on behavior might be expected. This paper explores whether conflict and error likelihood produce distinct cognitive control effects on behavior, as measured by response time. A change signal task (Brown & Braver, 2005) was modified to include conditions of likely errors due to tardy as well as premature responses, in conditions with and without conflict. The results discriminate between competing hypotheses of independent vs. interacting conflict and error likelihood control effects. Specifically, the results suggest that the likelihood of premature vs. tardy response errors can lead to multiple distinct control effects, which are independent of cognitive control effects driven by response conflict. As a whole, the results point to the existence of multiple distinct cognitive control mechanisms and challenge existing models of cognitive control that incorporate only a single control signal. PMID:19030873

  16. An Innovative Strategy for Accurate Thermal Compensation of Gyro Bias in Inertial Units by Exploiting a Novel Augmented Kalman Filter

    PubMed Central

    Angrisani, Leopoldo; Simone, Domenico De

    2018-01-01

    This paper presents an innovative model for integrating thermal compensation of gyro bias error into an augmented state Kalman filter. The developed model is applied in the Zero Velocity Update filter for inertial units manufactured by exploiting Micro Electro-Mechanical System (MEMS) gyros, where it is used to remove residual bias at startup. It is a more effective alternative to the traditional approach, which is realized by cascading bias thermal correction by calibration and traditional Kalman filtering for bias tracking. This function is very useful when the adopted gyros are manufactured using MEMS technology, because these systems have significant limitations in terms of sensitivity to environmental conditions and are characterized by a strong correlation of the systematic error with temperature variations. The traditional process is divided into two separate algorithms, i.e., calibration and filtering, and this aspect reduces system accuracy, reliability, and maintainability. This paper proposes an innovative Zero Velocity Update filter that requires just raw uncalibrated gyro data as input. It unifies in a single algorithm the two steps of the traditional approach, thereby saving time and economic resources and simplifying the management of the thermal correction process. In the paper, the traditional and innovative Zero Velocity Update filters are described in detail, as well as the experimental data set used to test both methods. The performance of the two filters is compared both in nominal conditions and in the typical case of a residual initial alignment bias. In this last condition, the innovative solution shows significant improvements with respect to the traditional approach. This is the typical case of an aircraft or a car in parking conditions under solar input. PMID:29735956

  17. An Innovative Strategy for Accurate Thermal Compensation of Gyro Bias in Inertial Units by Exploiting a Novel Augmented Kalman Filter.

    PubMed

    Fontanella, Rita; Accardo, Domenico; Moriello, Rosario Schiano Lo; Angrisani, Leopoldo; Simone, Domenico De

    2018-05-07

    This paper presents an innovative model for integrating thermal compensation of gyro bias error into an augmented state Kalman filter. The developed model is applied in the Zero Velocity Update filter for inertial units manufactured by exploiting Micro Electro-Mechanical System (MEMS) gyros, where it is used to remove residual bias at startup. It is a more effective alternative to the traditional approach, which is realized by cascading bias thermal correction by calibration and traditional Kalman filtering for bias tracking. This function is very useful when the adopted gyros are manufactured using MEMS technology, because these systems have significant limitations in terms of sensitivity to environmental conditions and are characterized by a strong correlation of the systematic error with temperature variations. The traditional process is divided into two separate algorithms, i.e., calibration and filtering, and this aspect reduces system accuracy, reliability, and maintainability. This paper proposes an innovative Zero Velocity Update filter that requires just raw uncalibrated gyro data as input. It unifies in a single algorithm the two steps of the traditional approach, thereby saving time and economic resources and simplifying the management of the thermal correction process. In the paper, the traditional and innovative Zero Velocity Update filters are described in detail, as well as the experimental data set used to test both methods. The performance of the two filters is compared both in nominal conditions and in the typical case of a residual initial alignment bias. In this last condition, the innovative solution shows significant improvements with respect to the traditional approach. This is the typical case of an aircraft or a car in parking conditions under solar input.
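
    A minimal sketch of the augmented-state idea, under simplifying assumptions: during zero-velocity intervals the gyro output is essentially bias, modeled here as b(T) = b0 + kT*T, and a linear Kalman filter estimates [b0, kT] directly from raw, uncalibrated readings, merging calibration and filtering in one algorithm. The noise levels and temperature profile below are invented for illustration; this is not the paper's filter.

      import numpy as np

      rng = np.random.default_rng(2)
      b0_true, kT_true = 0.5, 0.02        # deg/s and deg/s per degC (invented)
      T = np.linspace(20, 45, 500)        # sensor warming up
      z = b0_true + kT_true * T + rng.normal(0, 0.05, T.size)  # stationary gyro

      x = np.zeros(2)                     # augmented state [b0, kT]
      P = np.eye(2) * 10.0                # initial state uncertainty
      R = 0.05 ** 2                       # measurement noise variance

      for Tk, zk in zip(T, z):
          H = np.array([1.0, Tk])         # measurement model: z = b0 + kT*T + v
          S = H @ P @ H + R               # innovation variance
          K = P @ H / S                   # Kalman gain
          x = x + K * (zk - H @ x)        # state update
          P = P - np.outer(K, H @ P)      # covariance update

      print(f"b0 = {x[0]:.3f} (true {b0_true}), kT = {x[1]:.4f} (true {kT_true})")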

  18. Error bounds of adaptive dynamic programming algorithms for solving undiscounted optimal control problems.

    PubMed

    Liu, Derong; Li, Hongliang; Wang, Ding

    2015-06-01

    In this paper, we establish error bounds of adaptive dynamic programming algorithms for solving undiscounted infinite-horizon optimal control problems of discrete-time deterministic nonlinear systems. We consider approximation errors in the update equations of both value function and control policy. We utilize a new assumption instead of the contraction assumption in discounted optimal control problems. We establish the error bounds for approximate value iteration based on a new error condition. Furthermore, we also establish the error bounds for approximate policy iteration and approximate optimistic policy iteration algorithms. It is shown that the iterative approximate value function can converge to a finite neighborhood of the optimal value function under some conditions. To implement the developed algorithms, critic and action neural networks are used to approximate the value function and control policy, respectively. Finally, a simulation example is given to demonstrate the effectiveness of the developed algorithms.
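
    The flavor of these bounds can be reproduced in a few lines. The Python sketch below runs approximate value iteration on a random discounted MDP with a bounded error injected at each update; the discounted setting (with its contraction property) is chosen to make the classical bound eps/(1 - gamma) easy to observe, whereas the paper's undiscounted analysis rests on a different condition.

      import numpy as np

      rng = np.random.default_rng(3)
      nS, nA, gamma, eps = 20, 4, 0.9, 0.05        # states, actions, discount, error

      P = rng.random((nA, nS, nS))
      P /= P.sum(axis=2, keepdims=True)            # row-stochastic transitions
      Rw = rng.random((nA, nS))                    # rewards

      def bellman(V):
          # Optimal Bellman backup over all actions.
          return np.max(Rw + gamma * (P @ V), axis=0)

      V_star = np.zeros(nS)
      for _ in range(2000):                        # exact VI for reference
          V_star = bellman(V_star)

      V = np.zeros(nS)
      for _ in range(2000):                        # approximate VI: bounded error
          V = bellman(V) + rng.uniform(-eps, eps, nS)

      gap = np.max(np.abs(V - V_star))
      print(f"final gap {gap:.3f} <= eps/(1-gamma) = {eps / (1 - gamma):.3f}")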

  19. An examination of the usefulness of repeat testing practices in a large hospital clinical chemistry laboratory.

    PubMed

    Deetz, Carl O; Nolan, Debra K; Scott, Mitchell G

    2012-01-01

    A long-standing practice in clinical laboratories has been to automatically repeat laboratory tests when values trigger automated "repeat rules" in the laboratory information system such as a critical test result. We examined 25,553 repeated laboratory values for 30 common chemistry tests from December 1, 2010, to February 28, 2011, to determine whether this practice is necessary and whether it may be possible to reduce repeat testing to improve efficiency and turnaround time for reporting critical values. An "error" was defined to occur when the difference between the initial and verified values exceeded the College of American Pathologists/Clinical Laboratory Improvement Amendments allowable error limit. The initial values from 2.6% of all repeated tests (668) were errors. Of these 668 errors, only 102 occurred for values within the analytic measurement range. Median delays in reporting critical values owing to repeated testing ranged from 5 (blood gases) to 17 (glucose) minutes.

  20. Safety and Performance Analysis of the Non-Radar Oceanic/Remote Airspace In-Trail Procedure

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.; Munoz, Cesar A.

    2007-01-01

    This document presents a safety and performance analysis of the nominal case for the In-Trail Procedure (ITP) in a non-radar oceanic/remote airspace. The analysis estimates the risk of collision between the aircraft performing the ITP and a reference aircraft. The risk of collision is only estimated for the ITP maneuver and it is based on nominal operating conditions. The analysis does not consider human error, communication error conditions, or the normal risk of flight present in current operations. The hazards associated with human error and communication errors are evaluated in an Operational Hazards Analysis presented elsewhere.

  1. Solar Tracking Error Analysis of Fresnel Reflector

    PubMed Central

    Zheng, Jiantao; Yan, Junjie; Pei, Jie; Liu, Guanjie

    2014-01-01

    Based on the rotational structure of the Fresnel reflector, the rotation angle of the mirror was deduced under the eccentric condition. By analyzing the influence of the main factors on the sun-tracking rotation angle error, the pattern and extent of their influence were revealed. It is concluded that the tracking error caused by the difference between the rotation axis and the true north meridian is maximal at noon under certain conditions and decreases gradually through the morning and afternoon. The tracking error caused by other deviations, such as rotation eccentricity, latitude, and solar altitude, is positive in the morning, negative in the afternoon, and zero at a certain moment around noon. PMID:24895664

  2. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.

    1987-01-01

    Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation.

  3. Effects of modeling errors on trajectory predictions in air traffic control automation

    NASA Technical Reports Server (NTRS)

    Jackson, Michael R. C.; Zhao, Yiyuan; Slattery, Rhonda

    1996-01-01

    Air traffic control automation synthesizes aircraft trajectories for the generation of advisories. Trajectory computation employs models of aircraft performance and weather conditions, whereas actual trajectories are flown by real aircraft under actual conditions. Since synthetic trajectories are used in landing scheduling and conflict probing, it is very important to understand the differences between computed trajectories and actual trajectories. This paper examines the effects of aircraft modeling errors on the accuracy of trajectory predictions in air traffic control automation. Three-dimensional point-mass aircraft equations of motion are assumed to be able to generate actual aircraft flight paths. Modeling errors are described as uncertain parameters or uncertain input functions. Pilot or autopilot feedback actions are expressed as equality constraints to satisfy control objectives. A typical trajectory is defined by a series of flight segments, with different control objectives for each flight segment and conditions that define segment transitions. A constrained linearization approach is used to analyze trajectory differences caused by various modeling errors, by developing a linear time-varying system that describes the trajectory errors, with expressions to transfer the trajectory errors across moving segment transitions. A numerical example is presented for a complete commercial aircraft descent trajectory consisting of several flight segments.

  4. Can verbal instruction enhance the recall of an everyday task and promote error-monitoring in people with dementia of the Alzheimer-type?

    PubMed

    Balouch, Sara; Rusted, Jennifer M

    2017-03-01

    People with dementia of the Alzheimer-type (DAT) have difficulties with performing everyday tasks, and error awareness is poor. Here we investigate whether recall of actions and error monitoring in everyday task performance improved when they instructed another person on how to make tea. In this situation, both visual and motor cues are present, and attention is sustained by the requirement to keep instructing. The data were drawn from a longitudinal study recording performance in four participants with DAT, filmed regularly for five years in their own homes, completing three tea-making conditions: performed-recall (they made tea themselves); instructed-recall (they instructed the experimenter on how to make tea); and verbal-recall (they described how to make tea). Accomplishment scores (percentage of task they correctly recalled), errors and error-monitoring were coded. Task accomplishment was comparable in the performed-recall and instructed-recall conditions, but both were significantly better than task accomplishment in the verbal-recall condition. Third person instruction did not improve error-monitoring. This study has implications for everyday task rehabilitation for people with DAT.

  5. Driving range estimation for electric vehicles based on driving condition identification and forecast

    NASA Astrophysics Data System (ADS)

    Pan, Chaofeng; Dai, Wei; Chen, Liao; Chen, Long; Wang, Limei

    2017-10-01

    With the impact of serious environmental pollution in our cities, combined with the ongoing depletion of oil resources, electric vehicles are becoming highly favored as a means of transport, not only for their low noise but also for their high energy efficiency and zero emissions. The power battery is used as the energy source of electric vehicles. However, it currently still has a few shortcomings, notably low energy density, high cost, and short cycle life, which result in limited mileage compared with conventional passenger vehicles. Vehicle energy consumption rates differ greatly under different environmental and driving conditions. The estimation error of the current driving range is relatively large when the effects of environmental temperature and driving conditions are not considered. The development of an accurate driving range estimation method will therefore have a great impact on electric vehicles. A new driving range estimation model based on the combination of driving cycle identification and prediction is proposed and investigated. This model can effectively eliminate mileage errors and has good convergence with added robustness. Initially, the identification of the driving cycle is based on kernel principal component feature parameters and a fuzzy C-means clustering algorithm. Secondly, a fuzzy rule between the characteristic parameters and energy consumption is established in the MATLAB/Simulink environment. Furthermore, a Markov algorithm and a BP (back propagation) neural network method are utilized to predict future driving conditions to improve the accuracy of the remaining range estimation. Finally, the driving range estimation method is tested under the ECE 15 driving cycle on a rotary drum test bench, and the experimental results are compared with the estimation results. The results show that the proposed driving range estimation method can not only estimate the remaining mileage but also eliminate the fluctuation of the residual range under different driving conditions.
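
    The identification step can be sketched compactly. The Python example below extracts two feature parameters from a speed window and assigns the window to the nearest driving-cycle cluster center; in the paper the centers come from fuzzy C-means on kernel-PCA features, while the two centers, feature choice, and labels here are illustrative assumptions.

      import numpy as np

      # Made-up cluster centers in feature space: [mean speed km/h, accel std m/s^2].
      centers = np.array([[25.0, 1.2],     # "urban" cycle
                          [90.0, 0.4]])    # "highway" cycle
      labels = ["urban", "highway"]

      def identify(speed_kmh, dt=1.0):
          # Feature extraction: mean speed and standard deviation of acceleration.
          accel = np.diff(speed_kmh / 3.6) / dt          # m/s^2
          feats = np.array([speed_kmh.mean(), accel.std()])
          d = np.linalg.norm(centers - feats, axis=1)    # nearest-center rule
          return labels[int(np.argmin(d))]

      window = 30 + 10 * np.sin(np.linspace(0, 6, 120))  # oscillating city traffic
      print(identify(window))                            # -> "urban"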

  6. Commission errors of active intentions: the roles of aging, cognitive load, and practice.

    PubMed

    Boywitt, C Dennis; Rummel, Jan; Meiser, Thorsten

    2015-01-01

    Performing an intended action when it needs to be withheld, for example, when temporarily prescribed medication is incompatible with other medication, is referred to as a commission error of prospective memory (PM). While recent research indicates that older adults are especially prone to commission errors for finished intentions, there is a lack of research on the effects of aging on commission errors for still active intentions. The present research investigates conditions which might contribute to older adults' propensity to perform planned intentions under inappropriate conditions. Specifically, disproportionately higher rates of commission errors for still active intentions were observed in older than in younger adults with both salient (Experiment 1) and non-salient (Experiment 2) target cues. Practicing the PM task in Experiment 2, however, helped execution of the intended action, in terms of higher PM performance at faster ongoing-task response times, but did not increase the rate of commission errors. The results have important implications for the understanding of older adults' PM commission errors and the processes involved in these errors.

  7. Effects of learning climate and registered nurse staffing on medication errors.

    PubMed

    Chang, Yunkyung; Mark, Barbara

    2011-01-01

    Despite increasing recognition of the significance of learning from errors, little is known about how learning climate contributes to error reduction. The purpose of this study was to investigate whether learning climate moderates the relationship between error-producing conditions and medication errors. A cross-sectional descriptive study was done using data from 279 nursing units in 146 randomly selected hospitals in the United States. Error-producing conditions included work environment factors (work dynamics and nurse mix), team factors (communication with physicians and nurses' expertise), personal factors (nurses' education and experience), patient factors (age, health status, and previous hospitalization), and medication-related support services. Poisson models with random effects were used with the nursing unit as the unit of analysis. A significant negative relationship was found between learning climate and medication errors. It also moderated the relationship between nurse mix and medication errors: When learning climate was negative, having more registered nurses was associated with fewer medication errors. However, no relationship was found between nurse mix and medication errors at either positive or average levels of learning climate. Learning climate did not moderate the relationship between work dynamics and medication errors. The way nurse mix affects medication errors depends on the level of learning climate. Nursing units with fewer registered nurses and frequent medication errors should examine their learning climate. Future research should be focused on the role of learning climate as related to the relationships between nurse mix and medication errors.

  8. Horizontal plane localization in single-sided deaf adults fitted with a bone-anchored hearing aid (Baha).

    PubMed

    Grantham, D Wesley; Ashmead, Daniel H; Haynes, David S; Hornsby, Benjamin W Y; Labadie, Robert F; Ricketts, Todd A

    2012-01-01

    One purpose of this investigation was to evaluate the effect of a unilateral bone-anchored hearing aid (Baha) on horizontal plane localization performance in single-sided deaf adults who had either a conductive or sensorineural hearing loss in their impaired ear. The use of a 33-loudspeaker array allowed for a finer response measure than has previously been used to investigate localization in this population. In addition, a detailed analysis of error patterns allowed an evaluation of the contribution of random error and bias error to the total rms error computed in the various conditions studied. A second purpose was to investigate the effect of stimulus duration and head-turning on localization performance. Two groups of single-sided deaf adults were tested in a localization task in which they had to identify the direction of a spoken phrase on each trial. One group had a sensorineural hearing loss (SNHL group; N = 7), and the other group had a conductive hearing loss (CHL group; N = 5). In addition, a control group of four normal-hearing adults was tested. The spoken phrase was either 1250 msec in duration (a male saying "Where am I coming from now?") or 341 msec in duration (the same male saying "Where?"). For the longer-duration phrase, subjects were tested in conditions in which they either were or were not allowed to move their heads before the termination of the phrase. The source came from one of nine positions in the front horizontal plane (from -79° to +79°). The response range included 33 choices (from -90° to +90°, separated by 5.6°). Subjects were tested in all stimulus conditions, both with and without the Baha device. Overall rms error was computed for each condition. Contributions of random error and bias error to the overall error were also computed. There was considerable intersubject variability in all conditions. However, for the CHL group, the average overall error was significantly smaller when the Baha was on than when it was off. Further analysis of error patterns indicated that this improvement was primarily based on reduced response bias when the device was on; that is, the average response azimuth was nearer to the source azimuth when the device was on than when it was off. The SNHL group, on the other hand, had significantly greater overall error when the Baha was on than when it was off. Collapsed across listening conditions and groups, localization performance was significantly better with the 1250 msec stimulus than with the 341 msec stimulus. However, for the longer-duration stimulus, there was no significant beneficial effect of head-turning. Error scores in all conditions for both groups were considerably larger than those in the normal-hearing control group. On average, single-sided deaf adults with CHL showed improved localization ability when using the Baha, whereas single-sided deaf adults with SNHL showed a decrement in performance when using the device. These results may have implications for clinical counseling for patients with unilateral hearing impairment.
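
    The random/bias decomposition used above follows the standard identity that mean squared error splits into squared bias plus variance. A small sketch with made-up response azimuths:

        import numpy as np

        responses = np.array([-10.0, -5.6, 0.0, -11.2, -5.6])  # judged azimuths (deg), illustrative
        source = 0.0                                           # true source azimuth (deg)

        bias = responses.mean() - source                 # systematic (bias) error
        random_err = responses.std(ddof=0)               # random error
        rms = np.sqrt(np.mean((responses - source) ** 2))
        # rms**2 == bias**2 + random_err**2 (up to floating-point rounding)
        print(f"rms {rms:.2f} = sqrt({bias**2:.2f} + {random_err**2:.2f})")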

  9. Using voluntary reports from physicians to learn from diagnostic errors in emergency medicine.

    PubMed

    Okafor, Nnaemeka; Payne, Velma L; Chathampally, Yashwant; Miller, Sara; Doshi, Pratik; Singh, Hardeep

    2016-04-01

    Diagnostic errors are common in the emergency department (ED), but few studies have comprehensively evaluated their types and origins. We analysed incidents reported by ED physicians to determine disease conditions, contributory factors and patient harm associated with ED-related diagnostic errors. Between 1 March 2009 and 31 December 2013, ED physicians reported 509 incidents using a department-specific voluntary incident-reporting system that we implemented at two large academic hospital-affiliated EDs. For this study, we analysed 209 incidents related to diagnosis. A quality assurance team led by an ED physician champion reviewed each incident and interviewed physicians when necessary to confirm the presence/absence of diagnostic error and to determine the contributory factors. We generated descriptive statistics quantifying disease conditions involved, contributory factors and patient harm from errors. Among the 209 incidents, we identified 214 diagnostic errors associated with 65 unique diseases/conditions, including sepsis (9.6%), acute coronary syndrome (9.1%), fractures (8.6%) and vascular injuries (8.6%). Contributory factors included cognitive (n=317), system related (n=192) and non-remediable (n=106). Cognitive factors included faulty information verification (41.3%) and faulty information processing (30.6%) whereas system factors included high workload (34.4%) and inefficient ED processes (40.1%). Non-remediable factors included atypical presentation (31.3%) and the patients' inability to provide a history (31.3%). Most errors (75%) involved multiple factors. Major harm was associated with 34/209 (16.3%) of reported incidents. Most diagnostic errors in the ED appeared to relate to common disease conditions. While sustaining diagnostic error reporting programmes might be challenging, our analysis reveals the potential value of such systems in identifying targets for improving patient safety in the ED.

  10. New experiments on the effect of clock shifts on homing in pigeons

    NASA Technical Reports Server (NTRS)

    Schmidt-Koenig, K.

    1972-01-01

    The effect of clock shifts as an experimental tool for predictably interfering with the homing ability of birds is discussed. Clock shifts introduce specific errors in the birds' sun azimuth compass, resulting in corresponding errors during initial orientation and possibly during orientation en route. Clock shifts of 6 hours and 12 hours resulted in 90-degree and 180-degree deviations from the initial orientation, respectively. The method for conducting the clock shift experiments and results obtained from previous experiments are described.

  11. The effect of unsuccessful retrieval on children's subsequent learning.

    PubMed

    Carneiro, Paula; Lapa, Ana; Finn, Bridgid

    2018-02-01

    It is well known that successful retrieval enhances adults' subsequent learning by promoting long-term retention. Recent research has also found benefits from unsuccessful retrieval, but the evidence is not as clear-cut when the participants are children. In this study, we employed a methodology based on guessing, the weak-associate paradigm, to test whether children can learn from generated errors or whether errors are harmful for learning. We tested second- and third-grade children in Experiment 1 and tested preschool and kindergarten children in Experiment 2. With slight differences in the method, in both experiments children heard the experimenter say one word (the cue) and were asked to guess an associated word (guess condition) or to listen to the corresponding target-associated word (study condition), followed by corrective feedback in both conditions. At the end of the guessing phase, the children undertook a cued-recall task in which they were presented with each cue and were asked to say the correct target. Together, the results showed that older children, those in kindergarten and early elementary school, benefited from unsuccessful retrieval. Older children showed more correct responses and fewer errors in the guess condition. In contrast, preschoolers produced similar levels of correct and error responses in the two conditions. In conclusion, generating errors seems to be beneficial for the future learning of children older than 5 years.

  12. Evaluation Of Statistical Models For Forecast Errors From The HBV-Model

    NASA Astrophysics Data System (ADS)

    Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.

    2009-04-01

    Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first order autoregressive model was constructed for the forecast errors. The parameters were conditioned on climatic conditions. In the second model, the Normal Quantile Transformation (NQT) was applied on observed and forecasted inflows before a similar first order autoregressive model was constructed for the forecast errors. For the last model, positive and negative errors were modeled separately. The errors were first NQT-transformed before a model was constructed in which the mean values were conditioned on climate, forecasted inflow and the previous day's error. To test the three models we applied three criteria: we wanted a) the median values to be close to the observed values; b) the forecast intervals to be narrow; c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Models 1 and 2 gave similar results, and the main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated, and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.
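
    A minimal sketch of the first model's structure, under assumed inflow data and an assumed Box-Cox parameter: transform observed and forecasted inflows, then fit a first-order autoregressive model to the errors by least squares.

        import numpy as np

        def boxcox(x, lam):
            x = np.asarray(x, dtype=float)
            return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

        def fit_ar1(err):
            # Least-squares AR(1): e_t = phi * e_{t-1} + w_t.
            e0, e1 = err[:-1], err[1:]
            phi = np.dot(e0, e1) / np.dot(e0, e0)
            return phi, (e1 - phi * e0).std(ddof=1)

        lam = 0.3                                   # assumed Box-Cox parameter
        obs = np.array([12.0, 15.0, 14.0, 18.0, 22.0, 19.0, 16.0])
        fcst = np.array([11.0, 16.0, 15.0, 17.0, 20.0, 21.0, 15.0])
        err = boxcox(obs, lam) - boxcox(fcst, lam)
        phi, sigma = fit_ar1(err)
        print(f"AR(1) coefficient {phi:.2f}, residual std {sigma:.3f}")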

  13. Alterations in Neural Control of Constant Isometric Contraction with the Size of Error Feedback

    PubMed Central

    Hwang, Ing-Shiou; Lin, Yen-Ting; Huang, Wei-Min; Yang, Zong-Ru; Hu, Chia-Ling; Chen, Yi-Ching

    2017-01-01

    Discharge patterns from a population of motor units (MUs) were estimated with multi-channel surface electromyogram and signal processing techniques to investigate parametric differences in low-frequency force fluctuations, MU discharges, and force-discharge relation during static force-tracking with varying sizes of execution error presented via visual feedback. Fourteen healthy adults produced isometric force at 10% of maximal voluntary contraction through index abduction under three visual conditions that scaled execution errors with different amplification factors. Error-augmentation feedback that used a high amplification factor (HAF) to potentiate visualized error size resulted in higher sample entropy, mean frequency, ratio of high-frequency components, and spectral dispersion of force fluctuations than those of error-reducing feedback using a low amplification factor (LAF). In the HAF condition, MUs with relatively high recruitment thresholds in the dorsal interosseous muscle exhibited a larger coefficient of variation for inter-spike intervals and a greater spectral peak of the pooled MU coherence at 13–35 Hz than did those in the LAF condition. Manipulation of the size of error feedback altered the force-discharge relation, which was characterized with non-linear approaches such as mutual information and cross sample entropy. The association of force fluctuations and global discharge trace decreased with increasing error amplification factor. Our findings provide direct neurophysiological evidence that favors motor training using error-augmentation feedback. Amplification of the visualized error size of visual feedback could enrich force gradation strategies during static force-tracking, pertaining to selective increases in the discharge variability of higher-threshold MUs that receive greater common oscillatory inputs in the β-band. PMID:28125658

  14. A two-phase sampling survey for nonresponse and its paradata to correct nonresponse bias in a health surveillance survey.

    PubMed

    Santin, G; Bénézet, L; Geoffroy-Perez, B; Bouyer, J; Guéguen, A

    2017-02-01

    The decline in participation rates in surveys, including epidemiological surveillance surveys, has become a real concern since it may increase nonresponse bias. The aim of this study is to estimate the contribution of a complementary survey among a subsample of nonrespondents, and the additional contribution of paradata, in correcting for nonresponse bias in an occupational health surveillance survey. In 2010, 10,000 workers were randomly selected and sent a postal questionnaire. Sociodemographic data were available for the whole sample. After data collection of the questionnaires, a complementary survey among a random subsample of 500 nonrespondents was performed using a questionnaire administered by an interviewer. Paradata were collected for the complete subsample of the complementary survey. Nonresponse bias in the initial sample and in the combined samples was assessed using variables from administrative databases available for the whole sample, not subject to differential measurement errors. Corrected prevalences were estimated by reweighting, first using the initial survey alone and then the initial and complementary surveys combined, under several assumptions regarding the missing data process. Results were compared by computing relative errors. The response rates of the initial and complementary surveys were 23.6% and 62.6%, respectively. For the initial and the combined surveys, the relative errors decreased after correction for nonresponse on sociodemographic variables. For the combined surveys without paradata, relative errors decreased compared with the initial survey. The contribution of the paradata was weak. When a complex descriptive survey has a low response rate, a short complementary survey among nonrespondents with a protocol that aims to maximize the response rate is useful. The contribution of sociodemographic variables in correcting for nonresponse bias is important whereas the additional contribution of paradata in correcting for nonresponse bias is questionable.
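
    A minimal sketch of the reweighting idea described above, on synthetic data: estimate response propensities within classes of an auxiliary variable known for the whole sample, then weight respondents by the inverse response rate. The variable names and the response mechanism are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 10_000
        age = rng.integers(20, 60, n)                        # auxiliary variable, known for all
        y = rng.random(n) < (0.1 + 0.004 * (age - 20))       # outcome of interest
        resp = rng.random(n) < 1.0 / (1.0 + np.exp(-(-2.5 + 0.04 * age)))  # response depends on age

        # Weighting-class estimator: inverse response rate within age strata.
        bins = np.digitize(age, [30, 40, 50])
        w = np.zeros(n)
        for b in np.unique(bins):
            m = bins == b
            w[m & resp] = m.sum() / (m & resp).sum()

        naive = y[resp].mean()
        corrected = np.sum(w[resp] * y[resp]) / np.sum(w[resp])
        print(f"true {y.mean():.3f}  naive {naive:.3f}  reweighted {corrected:.3f}")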

  15. Post-error action control is neurobehaviorally modulated under conditions of constant speeded response.

    PubMed

    Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki

    2014-01-01

    Post-error slowing (PES) is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimuli disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke impulsive behaviors.

  16. Post-error action control is neurobehaviorally modulated under conditions of constant speeded response

    PubMed Central

    Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki

    2015-01-01

    Post-error slowing (PES) is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimuli disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke impulsive behaviors. PMID:25674058

  17. A Case Study of the Impact of AIRS Temperature Retrievals on Numerical Weather Prediction

    NASA Technical Reports Server (NTRS)

    Reale, O.; Atlas, R.; Jusem, J. C.

    2004-01-01

    Large errors in numerical weather prediction are often associated with explosive cyclogenesis. Most studies focus on the under-forecasting error, i.e. cases of rapidly developing cyclones which are poorly predicted in numerical models. However, the over-forecasting error (i.e., predicting an explosively developing cyclone which does not occur in reality) is a very common error that severely impacts the forecasting skill of all models and may also carry economic costs if associated with operational forecasting. Unnecessary precautions taken by marine activities can result in severe economic loss. Moreover, frequent occurrence of over-forecasting can undermine confidence in operational weather forecasting. Therefore, it is important to understand and reduce the predictions of extreme weather associated with explosive cyclones which do not actually develop. In this study we choose a very prominent case of over-forecasting error in the northwestern Pacific. A 960 hPa cyclone develops in less than 24 hours in the 5-day forecast, with a deepening rate of about 30 hPa in one day. The cyclone is not present in the analyses and is thus a case of severe over-forecasting. By assimilating AIRS data, the error is largely eliminated. By following the propagation of the anomaly that generates the spurious cyclone, it is found that a small negative mid-tropospheric geopotential height anomaly over the northern part of the Indian subcontinent in the initial conditions propagates eastward, is amplified by orography, and generates a very intense jet streak in the subtropical jet stream, with consequent explosive cyclogenesis over the Pacific. The AIRS assimilation eliminates this anomaly, which may have been caused by erroneous upper-air data, and represents the jet stream more correctly. The energy associated with the jet is distributed over a much broader area and, as a consequence, a multiple but much more moderate cyclogenesis is observed.

  18. A Design Study of the Inflated Sphere Landing Vehicle, Including the Landing Performance and the Effects of Deviations from Design Conditions

    NASA Technical Reports Server (NTRS)

    Martin, E. Dale

    1961-01-01

    The impact motion of the inflated sphere landing vehicle with a payload centrally supported from the spherical skin by numerous cords has been determined on the assumption of uniform isentropic gas compression during impact. The landing capabilities are determined for a system containing suspension cords of constant cross section. The effects of deviations in impact velocity and initial gas temperature from the design conditions are studied. Also discussed are the effects of errors in the time at which the skin is ruptured. These studies indicate how the design parameters should be chosen to insure reliability of the landing system. Calculations have been made and results are presented for a sphere inflated with hydrogen, landing on the moon in the absence of an atmosphere. The results are presented for one value of the skin-strength parameter.

  19. Prediction of body lipid change in pregnancy and lactation.

    PubMed

    Friggens, N C; Ingvartsen, K L; Emmans, G C

    2004-04-01

    A simple method to predict the genetically driven pattern of body lipid change through pregnancy and lactation in dairy cattle is proposed. The rationale and evidence for genetically driven body lipid change have their basis in evolutionary considerations and in the homeorhetic changes in lipid metabolism through the reproductive cycle. The inputs required to predict body lipid change are body lipid mass at calving (kg) and the date of conception (days in milk). Body lipid mass can be derived from body condition score and live weight. A key assumption is that there is a linear rate of change of the rate of body lipid change (dL/dt) between calving and a genetically determined time in lactation (T') at which a particular level of body lipid (L') is sought. A second assumption is that there is a linear rate of change of the rate of body lipid change (dL/dt) between T' and the next calving. The resulting model was evaluated using 2 sets of data. The first was from Holstein cows with 3 different levels of body fatness at calving. The second was from Jersey cows in first, second, and third parity. The model was found to reproduce the observed patterns of change in body lipid reserves through lactation in both data sets. The average error of prediction was low, less than the variation normally associated with the recording of condition score, and was similar for the 2 data sets. When the model was applied using the initially suggested parameter values derived from the literature, the average error of prediction was 0.185 units of condition score (+/- 0.086 SD). After minor adjustments to the parameter values, the average error of prediction was 0.118 units of condition score (+/- 0.070 SD). The assumptions on which the model is based were sufficient to predict the changes in body lipid of both Holstein and Jersey cows under different nutritional conditions and parities. Thus, the model presented here shows that it is possible to predict genetically driven curves of body lipid change through lactation in a simple way that requires few parameters and inputs that can be derived in practice. It is expected that prediction of the cow's energy requirements can be substantially improved, particularly in early lactation, by incorporating a genetically driven body energy mobilization.
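
    A minimal sketch, with hypothetical parameter values, of the model's first assumption: if dL/dt changes linearly from calving and reaches zero at the genetically determined time T' where lipid mass equals L', the trajectory L(t) is quadratic in time and the initial mobilization rate follows from L(0) and L' alone.

        import numpy as np

        def lipid_trajectory(L0, L_target, T_prime, days):
            # dL/dt = r0 * (1 - t/T'), zero at T', so L(t) is quadratic in t.
            r0 = 2.0 * (L_target - L0) / T_prime      # initial rate (kg/day)
            t = np.asarray(days, dtype=float)
            return L0 + r0 * t - (r0 / (2.0 * T_prime)) * t**2

        # Hypothetical inputs: 60 kg lipid at calving, target 45 kg reached at day 200.
        t = np.arange(0, 201, 50)
        print(lipid_trajectory(L0=60.0, L_target=45.0, T_prime=200.0, days=t))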

  20. Smooth Pursuit Eye Movement of Monkeys Naive to Laboratory Setups With Pictures and Artificial Stimuli.

    PubMed

    Botschko, Yehudit; Yarkoni, Merav; Joshua, Mati

    2018-01-01

    When animal behavior is studied in a laboratory environment, the animals are often extensively trained to shape their behavior. A crucial question is whether the behavior observed after training is part of the natural repertoire of the animal or represents an outlier in the animal's natural capabilities. This can be investigated by assessing the extent to which the target behavior is manifested during the initial stages of training and the time course of learning. We explored this issue by examining smooth pursuit eye movements in monkeys naïve to smooth pursuit tasks. We recorded the eye movements of monkeys from the 1st days of training on a step-ramp paradigm. We used bright spots, monkey pictures and scrambled versions of the pictures as moving targets. We found that during the initial stages of training, the pursuit initiation was largest for the monkey pictures and in some direction conditions close to target velocity. When the pursuit initiation was large, the monkeys mostly continued to track the target with smooth pursuit movements while correcting for displacement errors with small saccades. Two weeks of training increased the pursuit eye velocity in all stimulus conditions, whereas further extensive training enhanced pursuit slightly more. The training decreased the coefficient of variation of the eye velocity. Anisotropies that grade pursuit across directions were observed from the 1st day of training and mostly persisted across training. Thus, smooth pursuit in the step-ramp paradigm appears to be part of the natural repertoire of monkeys' behavior and training adjusts monkeys' natural predisposed behavior.

  1. A validated non-linear Kelvin-Helmholtz benchmark for numerical hydrodynamics

    NASA Astrophysics Data System (ADS)

    Lecoanet, D.; McCourt, M.; Quataert, E.; Burns, K. J.; Vasil, G. M.; Oishi, J. S.; Brown, B. P.; Stone, J. M.; O'Leary, R. M.

    2016-02-01

    The non-linear evolution of the Kelvin-Helmholtz instability is a popular test for code verification. To date, most Kelvin-Helmholtz problems discussed in the literature are ill-posed: they do not converge to any single solution with increasing resolution. This precludes comparisons among different codes and severely limits the utility of the Kelvin-Helmholtz instability as a test problem. The lack of a reference solution has led various authors to assert the accuracy of their simulations based on ad hoc proxies, e.g. the existence of small-scale structures. This paper proposes well-posed two-dimensional Kelvin-Helmholtz problems with smooth initial conditions and explicit diffusion. We show that in many cases numerical errors/noise can seed spurious small-scale structure in Kelvin-Helmholtz problems. We demonstrate convergence to a reference solution using both ATHENA, a Godunov code, and DEDALUS, a pseudo-spectral code. Problems with constant initial density throughout the domain are relatively straightforward for both codes. However, problems with an initial density jump (which are the norm in astrophysical systems) exhibit rich behaviour and are more computationally challenging. In the latter case, ATHENA simulations are prone to an instability of the inner rolled-up vortex; this instability is seeded by grid-scale errors introduced by the algorithm, and disappears as resolution increases. Both ATHENA and DEDALUS exhibit late-time chaos. Inviscid simulations are riddled with extremely vigorous secondary instabilities which induce more mixing than simulations with explicit diffusion. Our results highlight the importance of running well-posed test problems with demonstrated convergence to a reference solution. To facilitate future comparisons, we include as supplementary material the resolved, converged solutions to the Kelvin-Helmholtz problems in this paper in machine-readable form.
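
    A sketch of smooth Kelvin-Helmholtz initial conditions of the general form the paper advocates: tanh shear layers of finite width, a smooth density transition, and a small Gaussian-localized sinusoidal vertical-velocity perturbation. Parameter values here are illustrative; the converged reference setups are specified in the paper and its supplementary material.

        import numpy as np

        a, sigma, A, drho = 0.05, 0.2, 0.01, 1.0   # layer width, perturbation width/amplitude, density jump
        z1, z2 = 0.5, 1.5                          # shear-layer locations

        def initial_conditions(x, z):
            ux = np.tanh((z - z1) / a) - np.tanh((z - z2) / a) - 1.0
            uz = A * np.sin(2.0 * np.pi * x) * (np.exp(-((z - z1) / sigma) ** 2)
                                                + np.exp(-((z - z2) / sigma) ** 2))
            rho = 1.0 + 0.5 * drho * (np.tanh((z - z1) / a) - np.tanh((z - z2) / a))
            return ux, uz, rho

        x, z = np.meshgrid(np.linspace(0.0, 1.0, 64), np.linspace(0.0, 2.0, 128), indexing="ij")
        ux, uz, rho = initial_conditions(x, z)
        print(ux.shape, float(rho.min()), float(rho.max()))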

  2. Five-Year Progression of Refractive Errors and Incidence of Myopia in School-Aged Children in Western China

    PubMed Central

    Zhou, Wen-Jun; Zhang, Yong-Ye; Li, Hua; Wu, Yu-Fei; Xu, Ji; Lv, Sha; Li, Ge; Liu, Shi-Chun; Song, Sheng-Fang

    2016-01-01

    Background To determine the change in refractive error and the incidence of myopia among school-aged children in the Yongchuan District of Chongqing City, Western China. Methods A population-based cross-sectional survey was initially conducted in 2006 among 3070 children aged 6 to 15 years. A longitudinal follow-up study was then conducted 5 years later between November 2011 and March 2012. Refractive error was measured under cycloplegia with autorefraction. Age, sex, and baseline refractive error were evaluated as risk factors for progression of refractive error and incidence of myopia. Results Longitudinal data were available for 1858 children (60.5%). The cumulative mean change in refractive error was −2.21 (standard deviation [SD], 1.87) diopters (D) for the entire study population, with an annual progression of refraction in a myopic direction of −0.43 D. Myopic progression of refractive error was associated with younger age, female sex, and higher myopic or hyperopic refractive error at baseline. The cumulative incidence of myopia, defined as a spherical equivalent refractive error of −0.50 D or more, among initial emmetropes and hyperopes was 54.9% (95% confidence interval [CI], 45.2%–63.5%), with an annual incidence of 10.6% (95% CI, 8.7%–13.1%). Myopia was found more likely to happen in female and older children. Conclusions In Western China, both myopic progression and incidence of myopia were higher than those of children from most other locations in China and from the European Caucasian population. Compared with a previous study in China, there was a relative increase in annual myopia progression and annual myopia incidence, a finding which is consistent with the increasing trend on prevalence of myopia in China. PMID:26875599

  3. Five-Year Progression of Refractive Errors and Incidence of Myopia in School-Aged Children in Western China.

    PubMed

    Zhou, Wen-Jun; Zhang, Yong-Ye; Li, Hua; Wu, Yu-Fei; Xu, Ji; Lv, Sha; Li, Ge; Liu, Shi-Chun; Song, Sheng-Fang

    2016-07-05

    To determine the change in refractive error and the incidence of myopia among school-aged children in the Yongchuan District of Chongqing City, Western China. A population-based cross-sectional survey was initially conducted in 2006 among 3070 children aged 6 to 15 years. A longitudinal follow-up study was then conducted 5 years later between November 2011 and March 2012. Refractive error was measured under cycloplegia with autorefraction. Age, sex, and baseline refractive error were evaluated as risk factors for progression of refractive error and incidence of myopia. Longitudinal data were available for 1858 children (60.5%). The cumulative mean change in refractive error was -2.21 (standard deviation [SD], 1.87) diopters (D) for the entire study population, with an annual progression of refraction in a myopic direction of -0.43 D. Myopic progression of refractive error was associated with younger age, female sex, and higher myopic or hyperopic refractive error at baseline. The cumulative incidence of myopia, defined as a spherical equivalent refractive error of -0.50 D or more, among initial emmetropes and hyperopes was 54.9% (95% confidence interval [CI], 45.2%-63.5%), with an annual incidence of 10.6% (95% CI, 8.7%-13.1%). Myopia was found more likely to happen in female and older children. In Western China, both myopic progression and incidence of myopia were higher than those of children from most other locations in China and from the European Caucasian population. Compared with a previous study in China, there was a relative increase in annual myopia progression and annual myopia incidence, a finding which is consistent with the increasing trend on prevalence of myopia in China.

  4. Suboptimal schemes for atmospheric data assimilation based on the Kalman filter

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo; Cohn, Stephen E.

    1994-01-01

    This work is directed toward approximating the evolution of forecast error covariances for data assimilation. The performance of different algorithms based on simplification of the standard Kalman filter (KF) is studied. These are suboptimal schemes (SOSs) when compared to the KF, which is optimal for linear problems with known statistics. The SOSs considered here are several versions of optimal interpolation (OI), a scheme for height error variance advection, and a simplified KF in which the full height error covariance is advected. To employ a methodology for exact comparison among these schemes, a linear environment is maintained, in which a beta-plane shallow-water model linearized about a constant zonal flow is chosen for the test-bed dynamics. The results show that constructing dynamically balanced forecast error covariances rather than using conventional geostrophically balanced ones is essential for successful performance of any SOS. A posteriori initialization of SOSs to compensate for model - data imbalance sometimes results in poor performance. Instead, properly constructed dynamically balanced forecast error covariances eliminate the need for initialization. When the SOSs studied here make use of dynamically balanced forecast error covariances, the difference among their performances progresses naturally from conventional OI to the KF. In fact, the results suggest that even modest enhancements of OI, such as including an approximate dynamical equation for height error variances while leaving height error correlation structure homogeneous, go a long way toward achieving the performance of the KF, provided that dynamically balanced cross-covariances are constructed and that model errors are accounted for properly. The results indicate that such enhancements are necessary if unconventional data are to have a positive impact.
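
    A minimal sketch of the distinction at the heart of this comparison: the Kalman filter propagates the full forecast error covariance, P_f = M P_a M^T + Q, while OI-type suboptimal schemes freeze the correlation structure and carry, at most, variances forward. The dynamics M and model error Q below are toy values, not the shallow-water test bed.

        import numpy as np

        M = np.array([[1.0, 0.1], [0.0, 0.95]])   # assumed linear dynamics
        Q = 0.01 * np.eye(2)                      # assumed model error covariance
        Pa = 0.5 * np.eye(2)                      # analysis error covariance

        # Kalman filter: full covariance propagation.
        Pf_kf = M @ Pa @ M.T + Q

        # OI-style scheme: prescribed (here homogeneous) correlations,
        # with only a scalar variance carried forward.
        C_fixed = np.eye(2)
        Pf_oi = (np.trace(Pf_kf) / 2.0) * C_fixed

        print(Pf_kf, Pf_oi, sep="\n")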

  5. Stabilizing Conditional Standard Errors of Measurement in Scale Score Transformations

    ERIC Educational Resources Information Center

    Moses, Tim; Kim, YoungKoung

    2017-01-01

    The focus of this article is on scale score transformations that can be used to stabilize conditional standard errors of measurement (CSEMs). Three transformations for stabilizing the estimated CSEMs are reviewed, including the traditional arcsine transformation, a recently developed general variance stabilization transformation, and a new method…
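
    Of the three transformations, the traditional arcsine can be sketched without the article's details: for a number-correct score x on an n-item test with binomial error, g(x) = arcsin(sqrt(x/n)) makes the conditional error variance approximately constant at 1/(4n). A small sketch with an assumed test length:

        import numpy as np

        n = 50                                   # hypothetical test length
        x = np.arange(0, n + 1)
        p = x / n

        csem_raw = np.sqrt(n * p * (1 - p))      # binomial CSEM on the raw score scale
        g = np.arcsin(np.sqrt(p))                # arcsine-transformed scale
        csem_g = np.full_like(g, 1.0 / (2.0 * np.sqrt(n)))  # ~constant by the delta method

        print(csem_raw[[5, 25, 45]])             # varies strongly across the score scale
        print(csem_g[[5, 25, 45]])               # stabilized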

  6. A Modified MinMax k-Means Algorithm Based on PSO

    PubMed Central

    2016-01-01

    The MinMax k-means algorithm is widely used to counter the effect of bad initialization by minimizing the maximum intra-cluster error. Two parameters, the exponent parameter and the memory parameter, are involved in its execution. Since different parameter values yield different clustering errors, it is crucial to choose them appropriately. The original algorithm provides a practical framework that extends MinMax k-means to automatically adapt the exponent parameter to the data set, under the belief that once the maximum exponent parameter is set, the programme can reach the lowest intra-cluster error. However, our experiments show that this is not always correct. In this paper, we modify the MinMax k-means algorithm with particle swarm optimization (PSO) to determine parameter values that allow the algorithm to attain the lowest clustering errors. The proposed clustering method is tested on several widely used data sets under different initializations and is compared to the k-means algorithm and the original MinMax k-means algorithm. The experimental results indicate that our proposed algorithm can reach the lowest clustering errors automatically. PMID:27656201
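
    A minimal sketch of the MinMax k-means idea itself: clusters are weighted by their intra-cluster error so that large, badly initialized clusters are penalized in the assignment step. The weight update below is a simplification of the published algorithm, and the PSO search over the exponent and memory parameters proposed in this paper is omitted.

        import numpy as np

        def minmax_kmeans(X, k, p=0.5, iters=50, seed=0):
            rng = np.random.default_rng(seed)
            centres = X[rng.choice(len(X), k, replace=False)].copy()
            w = np.full(k, 1.0 / k)
            for _ in range(iters):
                d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
                labels = np.argmin(w * d2, axis=1)        # weighted assignment
                for j in range(k):
                    if np.any(labels == j):
                        centres[j] = X[labels == j].mean(axis=0)
                V = np.array([d2[labels == j, j].sum() for j in range(k)])
                if V.sum() > 0:                           # heavier clusters get larger weights
                    w = V**p / np.sum(V**p)
            return labels, centres

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in (0.0, 3.0, 6.0)])
        labels, centres = minmax_kmeans(X, k=3)
        print(centres.round(2))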

  7. On High-Order Radiation Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas

    1995-01-01

    In this paper we develop the theory of high-order radiation boundary conditions for wave propagation problems. In particular, we study the convergence of sequences of time-local approximate conditions to the exact boundary condition, and subsequently estimate the error in the solutions obtained using these approximations. We show that for finite times the Pade approximants proposed by Engquist and Majda lead to exponential convergence if the solution is smooth, but that good long-time error estimates cannot hold for spatially local conditions. Applications in fluid dynamics are also discussed.

  8. Stochastic goal-oriented error estimation with memory

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.

  9. Modulation of the error-related negativity by response conflict.

    PubMed

    Danielmeier, Claudia; Wessel, Jan R; Steinhauser, Marco; Ullsperger, Markus

    2009-11-01

    An arrow version of the Eriksen flanker task was employed to investigate the influence of conflict on the error-related negativity (ERN). The degree of conflict was modulated by varying the distance between flankers and the target arrow (CLOSE and FAR conditions). Error rates and reaction time data from a behavioral experiment were used to adapt a connectionist model of this task. This model was based on the conflict monitoring theory and simulated behavioral and event-related potential data. The computational model predicted an increased ERN amplitude in FAR incompatible (the low-conflict condition) compared to CLOSE incompatible errors (the high-conflict condition). A subsequent ERP experiment confirmed the model predictions. The computational model explains this finding with larger post-response conflict in far trials. In addition, data and model predictions of the N2 and the LRP support the conflict interpretation of the ERN.

  10. Decision-problem state analysis methodology

    NASA Technical Reports Server (NTRS)

    Dieterly, D. L.

    1980-01-01

    A methodology for analyzing a decision-problem state is presented. The methodology is based on the analysis of an incident in terms of the set of decision-problem conditions encountered. By decomposing the events that preceded an unwanted outcome, such as an accident, into the set of decision-problem conditions that were resolved, a more comprehensive understanding is possible. Not all human-error accidents are caused by faulty decision-problem resolutions, but such resolutions appear to be one of the major accident factors cited in the literature. A three-phase methodology is presented which accommodates a wide spectrum of events. It allows for a systems content analysis of the available data to establish: (1) the resolutions made, (2) alternatives not considered, (3) resolutions missed, and (4) possible conditions not considered. The product is a map of the decision-problem conditions that were encountered as well as a projected, assumed set of conditions that should have been considered. The application of this methodology introduces a systematic approach to decomposing the events that transpired prior to the accident. The initial emphasis is on decision and problem resolution. The technique provides a standardized method of decomposing an accident into a scenario which may be used for review or for the development of a training simulation.

  11. Reduction in Chemotherapy Mixing Errors Using Six Sigma: Illinois CancerCare Experience.

    PubMed

    Heard, Bridgette; Miller, Laura; Kumar, Pankaj

    2012-03-01

    Chemotherapy mixing errors (CTMRs), although rare, have serious consequences. Illinois CancerCare is a large practice with multiple satellite offices. The goal of this study was to reduce the number of CTMRs using Six Sigma methods. A Six Sigma team consisting of five participants (registered nurses and pharmacy technicians [PTs]) was formed. The team had 10 hours of Six Sigma training in the DMAIC (ie, Define, Measure, Analyze, Improve, Control) process. Measurement of errors started from the time the CT order was verified by the PT to the time of CT administration by the nurse. Data collection included retrospective error tracking software, system audits, and staff surveys. Root causes of CTMRs included inadequate knowledge of CT mixing protocol, inconsistencies in checking methods, and frequent changes in staffing of clinics. Initial CTMRs (n = 33,259) constituted 0.050%, with 77% of these errors affecting patients. The action plan included checklists, education, and competency testing. The postimplementation error rate (n = 33,376, annualized) over a 3-month period was reduced to 0.019%, with only 15% of errors affecting patients. Initial Sigma was calculated at 4.2; this process resulted in the improvement of Sigma to 5.2, representing a 100-fold reduction. Financial analysis demonstrated a reduction in annualized loss of revenue (administration charges and drug wastage) from $11,537.95 (Medicare Average Sales Price) before the start of the project to $1,262.40. The Six Sigma process is a powerful technique in the reduction of CTMRs.

  12. Diagnostic Error in Stroke-Reasons and Proposed Solutions.

    PubMed

    Bakradze, Ekaterina; Liberman, Ava L

    2018-02-13

    We discuss the frequency of stroke misdiagnosis and identify subgroups of stroke at high risk for specific diagnostic errors. In addition, we review common reasons for misdiagnosis and propose solutions to decrease error. According to a recent report by the National Academy of Medicine, most people in the USA are likely to experience a diagnostic error during their lifetimes. Nearly half of such errors result in serious disability and death. Stroke misdiagnosis is a major health care concern, with initial misdiagnosis estimated to occur in 9% of all stroke patients in the emergency setting. Under- or missed diagnosis (false negative) of stroke can result in adverse patient outcomes due to the preclusion of acute treatments and failure to initiate secondary prevention strategies. On the other hand, the overdiagnosis of stroke can result in inappropriate treatment, delayed identification of actual underlying disease, and increased health care costs. Young patients, women, minorities, and patients presenting with non-specific, transient, or posterior circulation stroke symptoms are at increased risk of misdiagnosis. Strategies to decrease diagnostic error in stroke have largely focused on early stroke detection via bedside examination strategies and clinical decision rules. Targeted interventions to improve the diagnostic accuracy of stroke diagnosis among high-risk groups as well as symptom-specific clinical decision supports are needed. There are a number of open questions in the study of stroke misdiagnosis. To improve patient outcomes, existing strategies to improve stroke diagnostic accuracy should be more broadly adopted and novel interventions devised and tested to reduce diagnostic errors.

  13. Adaptive error detection for HDR/PDR brachytherapy: Guidance for decision making during real-time in vivo point dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk

    Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and the AEDA’s capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.
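
    A minimal sketch of the AEDA's data-driven matching step, with illustrative dose-rate traces and an illustrative tolerance: compare the measured trace against calculated traces for candidate dosimeter positions; if some candidate matches closely, the initial indication is attributed to the dosimeter position (false error), otherwise a true error is declared.

        import numpy as np

        def classify(measured, candidates, tol=0.10):
            # candidates: dict name -> calculated dose-rate trace per dwell position.
            dev = {k: np.max(np.abs(measured - v) / v) for k, v in candidates.items()}
            best = min(dev, key=dev.get)
            status = "false error (viable position found)" if dev[best] < tol else "true error"
            return best, dev[best], status

        measured = np.array([1.02, 2.05, 1.55, 0.52])          # Gy/h, illustrative
        candidates = {
            "original reconstruction": np.array([1.40, 1.80, 1.20, 0.80]),
            "shifted +3 mm":           np.array([1.00, 2.00, 1.50, 0.50]),
        }
        print(classify(measured, candidates))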

  14. In-line monitoring of cocrystallization process and quantification of carbamazepine-nicotinamide cocrystal using Raman spectroscopy and chemometric tools.

    PubMed

    Soares, Frederico L F; Carneiro, Renato L

    2017-06-05

    A cocrystallization process may involve several molecular species, which are generally solid under ambient conditions. Thus, accurate monitoring of different components that might appear during the reaction is necessary, as well as quantification of the final product. This work reports for the first time the synthesis of carbamazepine-nicotinamide cocrystal in aqueous media with a full conversion. The reactions were monitored by Raman spectroscopy coupled with Multivariate Curve Resolution - Alternating Least Squares, and the quantification of the final product among its coformers was performed using Raman spectroscopy and Partial Least Squares regression. The slurry reaction was carried out under four different conditions: room temperature, 40°C, 60°C and 80°C. The slurry reaction at 80°C enabled a full conversion of initial substrates into the cocrystal form, using water as solvent for a greener method. The employment of MCR-ALS coupled with Raman spectroscopy made it possible to observe the main steps of the reactions, such as drug dissolution, nucleation and crystallization of the cocrystal. The PLS models gave mean errors of cross validation around 2.0 (% wt/wt), and errors of validation between 2.5 and 8.2 (% wt/wt) for all components. These were good results since the spectra of cocrystals and the physical mixture of the coformers present some similar peaks.

  15. In-line monitoring of cocrystallization process and quantification of carbamazepine-nicotinamide cocrystal using Raman spectroscopy and chemometric tools

    NASA Astrophysics Data System (ADS)

    Soares, Frederico L. F.; Carneiro, Renato L.

    2017-06-01

    A cocrystallization process may involve several molecular species, which are generally solid under ambient conditions. Thus, accurate monitoring of different components that might appear during the reaction is necessary, as well as quantification of the final product. This work reports for the first time the synthesis of carbamazepine-nicotinamide cocrystal in aqueous media with a full conversion. The reactions were monitored by Raman spectroscopy coupled with Multivariate Curve Resolution - Alternating Least Squares, and the quantification of the final product among its coformers was performed using Raman spectroscopy and Partial Least Squares regression. The slurry reaction was carried out under four different conditions: room temperature, 40 °C, 60 °C and 80 °C. The slurry reaction at 80 °C enabled a full conversion of initial substrates into the cocrystal form, using water as solvent for a greener method. The employment of MCR-ALS coupled with Raman spectroscopy made it possible to observe the main steps of the reactions, such as drug dissolution, nucleation and crystallization of the cocrystal. The PLS models gave mean errors of cross validation around 2.0 (% wt/wt), and errors of validation between 2.5 and 8.2 (% wt/wt) for all components. These were good results since the spectra of cocrystals and the physical mixture of the coformers present some similar peaks.

  16. Adaptive error correction codes for face identification

    NASA Astrophysics Data System (ADS)

    Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.

    2012-06-01

    Face recognition in uncontrolled environments is greatly affected by fuzziness of face feature vectors as a result of extreme variation in recording conditions (e.g. illumination, poses or expressions) in different sessions. Many techniques have been developed to deal with these variations, resulting in improved performances. This paper aims to model template fuzziness as errors and investigate the use of error detection/correction techniques for face recognition in uncontrolled environments. Error correction codes (ECCs) have recently been used for biometric key generation but not on biometric templates. We have investigated error patterns in binary face feature vectors extracted from different image windows of differing sizes and for different recording conditions. By estimating statistical parameters for the intra-class and inter-class distributions of Hamming distances in each window, we encode with appropriate ECCs. The proposed approach is tested on binarised wavelet templates using two face databases: Extended Yale-B and Yale. We demonstrate that using different combinations of BCH-based ECCs for different blocks and different recording conditions leads to different accuracy rates, and that using ECCs significantly improves recognition results.
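
    A minimal sketch of the parameter-selection step implied above, on synthetic binary blocks: estimate the intra-class and inter-class Hamming-distance distributions for a block, then choose a correction capability t that sits between them. The bit-error rate, block length, and percentile rule are assumptions; the BCH encoding/decoding itself is omitted.

        import numpy as np

        def hamming(a, b):
            return int(np.count_nonzero(a != b))

        rng = np.random.default_rng(0)
        template = rng.integers(0, 2, 63)        # one 63-bit block (a BCH-friendly length)
        genuine = [np.where(rng.random(63) < 0.05, 1 - template, template) for _ in range(200)]
        impostor = [rng.integers(0, 2, 63) for _ in range(200)]

        d_intra = np.array([hamming(template, g) for g in genuine])
        d_inter = np.array([hamming(template, i) for i in impostor])

        # Pick t between the intra-class tail and the inter-class onset.
        t = int((np.percentile(d_intra, 95) + np.percentile(d_inter, 5)) / 2)
        print(f"intra mean {d_intra.mean():.1f}, inter mean {d_inter.mean():.1f}, choose t = {t}")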

  17. Straightening of a wavy strip: An elastic-plastic contact problem including snap-through

    NASA Technical Reports Server (NTRS)

    Fischer, D. F.; Rammerstorfer, F. G.

    1980-01-01

    The nonlinear behavior of a wave-like deformed metal strip during the levelling process was calculated. Elastic-plastic material behavior as well as nonlinearities due to large deformations were considered. The problem considered leads to a combined stability and contact problem. It is shown that, despite the initially concentrated loading, neglecting the change of loading conditions due to altered contact domains may lead to a significant error in the evaluation of the nonlinear behavior and particularly to an underestimation of the stability limit load. The stability was examined by considering the load-deflection path and the behavior of a load-dependent current stiffness parameter in combination with the determinant of the current stiffness matrix.

  18. Reliability of perceived neighbourhood conditions and the effects of measurement error on self-rated health across urban and rural neighbourhoods.

    PubMed

    Pruitt, Sandi L; Jeffe, Donna B; Yan, Yan; Schootman, Mario

    2012-04-01

    Limited psychometric research has examined the reliability of self-reported measures of neighbourhood conditions, the effect of measurement error on associations between neighbourhood conditions and health, and potential differences in the reliabilities between neighbourhood strata (urban vs rural and low vs high poverty). We assessed overall and stratified reliability of self-reported perceived neighbourhood conditions using five scales (social and physical disorder, social control, social cohesion, fear) and four single items (multidimensional neighbouring). We also assessed measurement error-corrected associations of these conditions with self-rated health. Using random-digit dialling, 367 women without breast cancer (matched controls from a larger study) were interviewed twice, 2-3 weeks apart. Test-retest (intraclass correlation coefficients (ICC)/weighted κ) and internal consistency reliability (Cronbach's α) were assessed. Differences in reliability across neighbourhood strata were tested using bootstrap methods. Regression calibration corrected estimates for measurement error. All measures demonstrated satisfactory internal consistency (α ≥ 0.70) and either moderate (ICC/κ=0.41-0.60) or substantial (ICC/κ=0.61-0.80) test-retest reliability in the full sample. Internal consistency did not differ by neighbourhood strata. Test-retest reliability was significantly lower among rural (vs urban) residents for two scales (social control, physical disorder) and two multidimensional neighbouring items; test-retest reliability was higher for physical disorder and lower for one multidimensional neighbouring item among the high (vs low) poverty strata. After measurement error correction, the magnitude of associations between neighbourhood conditions and self-rated health were larger, particularly in the rural population. Research is needed to develop and test reliable measures of perceived neighbourhood conditions relevant to the health of rural populations.

  19. Do Work Condition Interventions Affect Quality and Errors in Primary Care? Results from the Healthy Work Place Study.

    PubMed

    Linzer, Mark; Poplau, Sara; Brown, Roger; Grossman, Ellie; Varkey, Anita; Yale, Steven; Williams, Eric S; Hicks, Lanis; Wallock, Jill; Kohnhorst, Diane; Barbouche, Michael

    2017-01-01

    While primary care work conditions are associated with adverse clinician outcomes, little is known about the effect of work condition interventions on quality or safety. A cluster randomized controlled trial of 34 clinics in the upper Midwest and New York City. Primary care clinicians and their diabetic and hypertensive patients. Quality improvement projects to improve communication between providers, workflow design, and chronic disease management. Intervention clinics received brief summaries of their clinician and patient outcome data at baseline. We measured work conditions and clinician and patient outcomes both at baseline and 6-12 months post-intervention. Multilevel regression analyses assessed the impact of work condition changes on outcomes. Subgroup analyses assessed impact by intervention category. There were no significant differences in error reduction (19 % vs. 11 %, OR of improvement 1.84, 95 % CI 0.70, 4.82, p = 0.21) or quality of care improvement (19 % improved vs. 44 %, OR 0.62, 95 % CI 0.58, 1.21, p = 0.42) between intervention and control clinics. The conceptual model linking work conditions, provider outcomes, and error reduction showed significant relationships between work conditions and provider outcomes (p ≤ 0.001) and a trend toward a reduced error rate in providers with lower burnout (OR 1.44, 95 % CI 0.94, 2.23, p = 0.09). Few quality metrics, short time span, fewer clinicians recruited than anticipated. Work-life interventions improving clinician satisfaction and well-being do not necessarily reduce errors or improve quality. Longer, more focused interventions may be needed to produce meaningful improvements in patient care. ClinicalTrials.gov # NCT02542995.

  20. Application of the Markov Chain Monte Carlo method for snow water equivalent retrieval based on passive microwave measurements

    NASA Astrophysics Data System (ADS)

    Pan, J.; Durand, M. T.; Vanderjagt, B. J.

    2015-12-01

    The Markov chain Monte Carlo (MCMC) method is a retrieval algorithm based on Bayes' rule: it starts from an initial state of snow/soil parameters and updates it through a series of new states by comparing the posterior probabilities of the simulated snow microwave signals before and after each step of the random walk. It thereby approximates the probability of the snow/soil parameters conditioned on the measured microwave brightness temperature (TB) signals at different bands. Although this method can solve for all snow parameters, including depth, density, snow grain size and temperature, at the same time, it still needs prior information on these parameters for the posterior probability calculation. How the priors will influence the SWE retrieval is a major concern. Therefore, in this paper a sensitivity test will first be carried out to study how accurate the snow emission models and how informative the snow priors need to be to keep the SWE error within a given bound. Synthetic TB simulated from the measured snow properties, plus a 2-K observation error, will be used for this purpose. It aims to provide guidance on the MCMC application under different circumstances. Later, the method will be applied to snowpits at different sites, including Sodankyla, Finland; Churchill, Canada; and Colorado, USA, using the measured TB from ground-based radiometers at different bands. Based on the previous work, the error in these practical cases will be studied, and the error sources will be separated and quantified.
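
    A minimal sketch of the Metropolis-type random walk described above. The one-band toy forward model, flat box priors, and proposal scales are assumptions; the study itself uses a physical snow emission model and multi-band TB.

        import numpy as np

        rng = np.random.default_rng(42)

        def forward_tb(depth_m, density):
            # Toy stand-in for a snow emission model (not physical).
            return 260.0 - 40.0 * depth_m * density / 300.0

        def log_post(theta, tb_obs, sigma_obs=2.0):
            depth, density = theta
            if not (0.0 < depth < 3.0 and 100.0 < density < 500.0):
                return -np.inf                   # flat priors on a plausible box
            return -0.5 * ((tb_obs - forward_tb(depth, density)) / sigma_obs) ** 2

        tb_obs = forward_tb(0.8, 250.0) + rng.normal(0.0, 2.0)   # synthetic TB + 2-K error
        theta, chain = np.array([1.5, 300.0]), []
        for _ in range(5000):
            prop = theta + rng.normal(0.0, [0.05, 10.0])         # random-walk proposal
            if np.log(rng.random()) < log_post(prop, tb_obs) - log_post(theta, tb_obs):
                theta = prop
            chain.append(theta.copy())
        depth = np.array(chain)[1000:, 0]
        print(f"retrieved depth {depth.mean():.2f} m (posterior std {depth.std():.2f})")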
