Prediction of final error level in learning and repetitive control
NASA Astrophysics Data System (ADS)
Levoci, Peter A.
Repetitive control (RC) is a field that creates controllers to eliminate the effects of periodic disturbances on a feedback control system. The methods have applications in spacecraft problems, such as isolating fine-pointing equipment from periodic vibration disturbances caused by slight imbalances in momentum wheels or cryogenic pumps. A closely related field of control design is iterative learning control (ILC), which aims to eliminate tracking error in a task that repeats, each time starting from the same initial condition. Experiments done on a robot at NASA Langley Research Center showed that the final error levels produced by different candidate repetitive and learning controllers can be very different, even when each controller is analytically proven to converge to zero error in the deterministic case. Real-world plant and measurement noise and quantization noise (from analog-to-digital and digital-to-analog converters) are acted on by these control methods as if they were error sources that will repeat and should be cancelled, which implies that the algorithms amplify such errors. Methods are developed that predict the final error levels of general first-order ILC, of higher-order ILC including current-cycle learning, and of general RC, in the presence of noise, using frequency response methods. The method involves much less computation than the corresponding time-domain approach that involves large matrices. The time-domain approach was previously developed for ILC and handles a certain class of ILC methods. Here methods are created to include zero-phase filtering, which is very important in creating practical designs. Also, time-domain methods are developed for higher-order ILC and for repetitive control. Since RC and ILC must be implemented digitally, all of these methods predict final error levels at the sample times. It is shown here that RC can easily converge to small error levels between sample times, but that ILC in most applications will have large and diverging intersample error if in fact zero error is reached at the sample times. This is independent of the ILC law used and is purely a property of the physical system. Methods are developed to address this issue.
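The flavor of the analysis can be illustrated with a short sketch. Below is a minimal first-order ILC update together with the standard frequency-domain contraction factor; the learning gain L, the optional zero-phase Q-filter, and the plant response G are illustrative assumptions, not the dissertation's specific designs.

```python
import numpy as np

def ilc_update(u, e, L_gain=0.5):
    """First-order ILC: form the next repetition's input from the current
    input and tracking error. A zero-phase low-pass filter Q is often
    applied as well, u_next = Q*(u + L*e), trading final accuracy for
    robustness, as the abstract discusses."""
    return u + L_gain * e

def contraction_factor(G, L_gain, Q=1.0):
    """For u_{k+1} = Q*(u_k + L*e_k), the error at a given frequency
    contracts when |Q*(1 - L*G)| < 1; values near 1 mean slow convergence
    and strong amplification of non-repeating noise at that frequency."""
    return np.abs(Q * (1.0 - L_gain * G))
```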
Online automatic tuning and control for fed-batch cultivation
van Straten, Gerrit; van der Pol, Leo A.; van Boxtel, Anton J. B.
2007-01-01
Performance of controllers applied in biotechnological production is often below expectation. Online automatic tuning has the capability to improve control performance by adjusting control parameters. This work presents automatic tuning approaches for model reference specific growth rate control during fed-batch cultivation. The approaches are direct methods that use the error between observed specific growth rate and its set point; systematic perturbations of the cultivation are not necessary. Two automatic tuning methods proved to be efficient, in which the adaptation rate is based on a combination of the error, squared error and integral error. These methods are relatively simple and robust against disturbances, parameter uncertainties, and initialization errors. Application of the specific growth rate controller yields a stable system. The controller and automatic tuning methods are qualified by simulations and laboratory experiments with Bordetella pertussis. PMID:18157554
NASA Astrophysics Data System (ADS)
Zhang, Menghua; Ma, Xin; Rong, Xuewen; Tian, Xincheng; Li, Yibin
2017-02-01
This paper presents an error tracking control method for overhead crane systems in which the error trajectories for the trolley and the payload swing can be pre-specified. The proposed method does not require that the initial payload swing angle be zero, whereas this requirement is usually assumed in conventional methods. The significant feature of the proposed method is its superior control performance as well as its strong robustness to different or uncertain rope lengths, payload masses, desired positions, initial payload swing angles, and external disturbances. Owing to the same attenuation behavior, the desired error trajectory for the trolley does not need to be reset for each traveling distance, which makes the method easy to implement in practical applications. By converting the error tracking overhead crane dynamics to the objective system, we obtain the error tracking control law for arbitrary initial payload swing angles. Lyapunov techniques and LaSalle's invariance theorem are utilized to prove the convergence and stability of the closed-loop system. Simulation and experimental results are presented to validate the superior performance of the proposed error tracking control method.
Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors
NASA Astrophysics Data System (ADS)
Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.
2018-04-01
The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of "Control and Observation" is used. A versatile multi-function laser interferometer is used as the Observer in order to measure the machine's error functions. A systematic error map of the machine's workspace is produced based on the error function measurements. The error map forms the basis of the error correction strategy. The article proposes a new method of forming the error correction strategy. The method is based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within a maximal zone of the workspace. The results are confirmed by error correction of precision CNC machine tools.
Automatic-repeat-request error control schemes
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.; Miller, M. J.
1983-01-01
Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
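As a concrete illustration of the ARQ principle, the sketch below simulates stop-and-wait ARQ with a CRC-32 error-detecting code; the corruption model and retry limit are illustrative assumptions, not from the survey.

```python
import random
import zlib

def send_with_arq(payload: bytes, byte_error_rate: float, max_tries: int = 10):
    """Stop-and-wait ARQ sketch: the sender appends a CRC-32 checksum for
    error detection, and the receiver requests retransmission whenever the
    checksum check fails. An undetected error would require a corruption
    pattern that happens to preserve the CRC."""
    crc = zlib.crc32(payload)
    for attempt in range(1, max_tries + 1):
        received = bytes(b ^ 0xFF if random.random() < byte_error_rate else b
                         for b in payload)      # crude channel-corruption model
        if zlib.crc32(received) == crc:         # receiver-side detection
            return received, attempt            # ACK: deliver the frame
        # NAK: fall through and retransmit
    raise RuntimeError("retry limit exceeded")
```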
Attitude and vibration control of a large flexible space-based antenna
NASA Technical Reports Server (NTRS)
Joshi, S. M.
1982-01-01
Control systems synthesis is considered for controlling the rigid body attitude and elastic motion of a large deployable space-based antenna. Two methods for control systems synthesis are considered. The first method utilizes the stability and robustness properties of the controller consisting of torque actuators and collocated attitude and rate sensors. The second method is based on the linear-quadratic-Gaussian control theory. A combination of the two methods, which results in a two level hierarchical control system, is also briefly discussed. The performance of the controllers is analyzed by computing the variances of pointing errors, feed misalignment errors and surface contour errors in the presence of sensor and actuator noise.
Espino-Hernandez, Gabriela; Gustafson, Paul; Burstyn, Igor
2011-05-14
In epidemiological studies, explanatory variables are frequently subject to measurement error. The aim of this paper is to develop a Bayesian method to correct for measurement error in multiple continuous exposures in individually matched case-control studies. This is a topic that has not been widely investigated. The new method is illustrated using data from an individually matched case-control study of the association between thyroid hormone levels during pregnancy and exposure to perfluorinated acids. The objective of the motivating study was to examine the risk of maternal hypothyroxinemia due to exposure to three perfluorinated acids measured on a continuous scale. Results from the proposed method are compared with those obtained from a naive analysis. Using a Bayesian approach, the developed method considers a classical measurement error model for the exposures, as well as the conditional logistic regression likelihood as the disease model, together with a random-effect exposure model. Proper and diffuse prior distributions are assigned, and results from a quality control experiment are used to estimate the measurement error variability of the perfluorinated acids. As a result, posterior distributions and 95% credible intervals of the odds ratios are computed. A sensitivity analysis of the method's performance in this particular application under different levels of measurement error variability was performed. The proposed Bayesian method to correct for measurement error is feasible and can be implemented using statistical software. For the study on perfluorinated acids, a comparison of the inferences corrected for measurement error with those that ignore it indicates that little adjustment is manifested for the level of measurement error actually exhibited in the exposures. Nevertheless, a sensitivity analysis shows that more substantial adjustments arise if larger measurement errors are assumed. In individually matched case-control studies, the use of the conditional logistic regression likelihood as a disease model in the presence of measurement error in multiple continuous exposures can be justified by having a random-effect exposure model. The proposed method can be successfully implemented in WinBUGS to correct individually matched case-control studies for several mismeasured continuous exposures under a classical measurement error model.
Small, J R
1993-01-01
This paper is a study of the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and that, under all conditions studied, the fitting method outperformed the graph method, even under conditions where the assumptions underlying the fitted function do not hold. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
NASA Astrophysics Data System (ADS)
Wu, Jiang; Liao, Fucheng; Tomizuka, Masayoshi
2017-01-01
This paper discusses the design of an optimal preview controller for a linear continuous-time stochastic control system over a finite time horizon, using the method of the augmented error system. First, an assistant system is introduced for state shifting. Then, to overcome the difficulty that the state equation of the stochastic control system cannot be differentiated because of the Brownian motion term, an integrator is introduced. Thus, the augmented error system, which contains the integrator vector, control input, reference signal, error vector and state of the system, is constructed. This transforms the tracking problem of optimal preview control of the linear stochastic control system into the optimal output tracking problem of the augmented error system. Using the method of dynamic programming from the theory of stochastic control, the optimal controller with previewable signals for the augmented error system, which equals the controller of the original system, is obtained. Finally, numerical simulations show the effectiveness of the controller.
Song, Tianxiao; Wang, Xueyun; Liang, Wenwei; Xing, Li
2018-05-14
Benefiting from its frame structure, a rotational inertial navigation system (RINS) can improve navigation accuracy by modulating the inertial sensor errors with a proper rotation scheme. In the traditional motor control method, the measurements of the photoelectric encoder are adopted to drive the rotation of the inertial measurement unit (IMU). However, when the carrier conducts heading motion, the inertial sensor errors may no longer be zero-mean in the navigation frame. Meanwhile, some high-speed carriers such as aircraft need to roll by a certain angle to balance the centrifugal force during heading motion, which may result in non-negligible coupling errors caused by the fiber optic gyroscope (FOG) installation errors and scale factor errors. Moreover, the error parameters of the FOG are susceptible to temperature and magnetic field, and pre-calibration is a time-consuming process that cannot completely suppress the FOG-related errors. In this paper, an improved motor control method based on the measurements of the FOG is proposed to address these problems, with which the outer frame can insulate the carrier's roll motion and the inner frame can simultaneously achieve rotary modulation while insulating the heading motion. The results of turntable experiments indicate that the navigation performance of the dual-axis RINS is significantly improved over the traditional method and is maintained even with large FOG installation errors and scale factor errors, proving that the proposed method relaxes the requirements on the accuracy of the FOG-related error parameters.
Optimized method for manufacturing large aspheric surfaces
NASA Astrophysics Data System (ADS)
Zhou, Xusheng; Li, Shengyi; Dai, Yifan; Xie, Xuhui
2007-12-01
Aspheric optics are used more and more widely in modern optical systems, owing to their ability to correct aberrations, enhance image quality, enlarge the field of view and extend the operating range, while reducing the weight and volume of the system. With the development of optical technology, there is a pressing demand for large-aperture, high-precision aspheric surfaces. The original computer controlled optical surfacing (CCOS) technique cannot meet the challenge of precision and machining efficiency, a problem that has attracted considerable attention from researchers. To address the shortcomings of the original polishing process, an optimized method for manufacturing large aspheric surfaces is put forward. Subsurface damage (SSD), full-aperture errors and the full band of frequency errors are all controlled by this method. A smaller SSD depth can be obtained by using low-hardness tools and small abrasive grains in the grinding process. For full-aperture error control, edge effects can be controlled by using smaller tools and an amendment model of the material removal function. For control over the full band of frequency errors, low-frequency errors can be corrected with the optimized material removal function, while medium- and high-frequency errors are suppressed using the uniform removal principle. With this optimized method, the accuracy of a K9 glass paraboloid mirror reached rms 0.055 waves (where a wave is 0.6328 μm) in a short time. The results show that the optimized method can guide large aspheric surface manufacturing effectively.
He, Jianbo; Li, Jijie; Huang, Zhongwen; Zhao, Tuanjie; Xing, Guangnan; Gai, Junyi; Guan, Rongzhan
2015-01-01
Experimental error control is very important in quantitative trait locus (QTL) mapping. Although numerous statistical methods have been developed for QTL mapping, a QTL detection model based on an appropriate experimental design that emphasizes error control has not been developed. Lattice design is very suitable for experiments with large sample sizes, which are usually required for accurate mapping of quantitative traits. However, the lack of a QTL mapping method based on lattice design has meant that the arithmetic mean or adjusted mean of each line of observations in the lattice design has to be used as the response variable, resulting in low QTL detection power. As an improvement, we developed a QTL mapping method termed composite interval mapping based on lattice design (CIMLD). In the lattice design, experimental errors are decomposed into random errors and block-within-replication errors. Four levels of block-within-replication errors were simulated to show the power of QTL detection under different error controls. The simulation results showed that the arithmetic mean method, which is equivalent to a method under randomized complete block design (RCBD), was very sensitive to the size of the block variance: as the block variance increased, the power of QTL detection decreased from 51.3% to 9.4%. In contrast to the RCBD method, the power of CIMLD and the adjusted mean method did not change with different block variances. The CIMLD method showed 1.2- to 7.6-fold higher power of QTL detection than the arithmetic or adjusted mean methods. Our proposed method was applied to real soybean (Glycine max) data as an example, and 10 QTLs for biomass were identified that explained 65.87% of the phenotypic variation, while only three and two QTLs were identified by the arithmetic and adjusted mean methods, respectively.
Synchronization Design and Error Analysis of Near-Infrared Cameras in Surgical Navigation.
Cai, Ken; Yang, Rongqian; Chen, Huazhou; Huang, Yizhou; Wen, Xiaoyan; Huang, Wenhua; Ou, Shanxing
2016-01-01
The accuracy of optical tracking systems is of central importance in applications such as surgical navigation. With the improvements reported in this regard, such systems have been applied to an increasing number of operations. To further enhance the accuracy of these systems and to reduce the effect of synchronization and visual field errors, this study introduces a field-programmable gate array (FPGA)-based synchronization control method, a method for measuring synchronization errors, and an error distribution map of the field of view. Synchronization control maximizes the parallel processing capability of the FPGA, and synchronization error measurement can effectively detect the errors caused by synchronization in an optical tracking system. The distribution of positioning errors across the field of view can be detected through the aforementioned error distribution map. Therefore, doctors can perform surgeries in areas with few positioning errors, and the accuracy of optical tracking systems is considerably improved. The system is analyzed and validated in this study through experiments involving the proposed methods, which can eliminate positioning errors attributed to asynchronous cameras and different fields of view.
Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2012-01-01
This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.
NASA Astrophysics Data System (ADS)
Jung, Hoeryong; Nguyen, Ho Anh Duc; Choi, Jaeho; Yim, Hongsik; Shin, Kee-Hyun
2018-05-01
The roll-to-roll (R2R) gravure printing method is increasingly being utilized to fabricate electronic devices such as organic thin-film transistors (OTFTs), radio-frequency identification (RFID) tags, and flexible PCBs, owing to its high throughput and large area. High-precision registration is crucial to satisfy the demands of device miniaturization and improved resolution and accuracy. This paper presents a novel register control method that uses an active motion-based roller (AMBR) to reduce register error in R2R gravure printing. Instead of shifting the phase of the downstream printing roller, which leads to undesired tension disturbance, the 1 degree-of-freedom (1-DOF) mechanical device AMBR is used to compensate for web elongation by controlling its motion according to the register error. The performance of the proposed control method is verified through simulations and experiments, and the results show that the proposed register control method using the AMBR can maintain the register error within ±15 µm.
An Optimal Control Modification to Model-Reference Adaptive Control for Fast Adaptation
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Krishnakumar, Kalmanje; Boskovic, Jovan
2008-01-01
This paper presents a method that can achieve fast adaptation for a class of model-reference adaptive control. It is well known that standard model-reference adaptive control exhibits high-gain control behaviors when a large adaptive gain is used to achieve fast adaptation in order to reduce tracking error rapidly. High-gain control creates high-frequency oscillations that can excite unmodeled dynamics and can lead to instability. The fast adaptation approach is based on the minimization of the square of the tracking error, which is formulated as an optimal control problem. The necessary condition of optimality is used to derive an adaptive law using the gradient method. This adaptive law is shown to result in uniform boundedness of the tracking error by means of Lyapunov's direct method. Furthermore, this adaptive law allows a large adaptive gain to be used without causing undesired high-gain control effects. The method is shown to be more robust than standard model-reference adaptive control. Simulations demonstrate the effectiveness of the proposed method.
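For orientation, here is a scalar model-reference adaptive control loop with a damping term added to the gradient adaptive laws in the spirit of the paper's modification; the exact optimal control modification term involves the reference-model dynamics and is given in the paper, so the gains and the nu-terms below are illustrative assumptions only.

```python
def simulate_mrac(gamma=10.0, nu=0.1, dt=1e-3, T=10.0):
    """Scalar MRAC sketch: unknown plant xdot = a*x + b*u, reference model
    xmdot = am*xm + bm*r, control u = kx*x + kr*r. The nu-terms damp the
    gradient adaptive laws so a large gamma does not produce high-gain
    oscillation (a stand-in for the paper's modification term)."""
    a, b = 1.0, 1.0              # unstable plant, unknown to the controller
    am, bm = -2.0, 2.0           # stable reference model
    x = xm = 0.0
    kx = kr = 0.0
    for _ in range(int(T / dt)):
        r = 1.0                              # step command
        u = kx * x + kr * r
        e = x - xm                           # tracking error
        kx += dt * (-gamma * x * e - gamma * nu * x * x * kx)
        kr += dt * (-gamma * r * e - gamma * nu * r * r * kr)
        x += dt * (a * x + b * u)
        xm += dt * (am * xm + bm * r)
    return kx, kr, e                         # ideal gains here: kx = -3, kr = 2
```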
Carrez, Laurent; Bouchoud, Lucie; Fleury-Souverain, Sandrine; Combescure, Christophe; Falaschi, Ludivine; Sadeghipour, Farshid; Bonnabry, Pascal
2017-03-01
Background and objectives: Centralized chemotherapy preparation units have established systematic strategies to avoid errors. Our work aimed to evaluate the accuracy of manual preparations associated with different control methods. Method: A simulation study in an operational setting used phenylephrine and lidocaine as markers. Each operator prepared syringes that were controlled using a different method during each of three sessions (no control, visual double-checking, and gravimetric control). Eight reconstitutions and dilutions were prepared in each session, with variable doses and volumes, using different concentrations of stock solutions. Results were analyzed according to qualitative (choice of stock solution) and quantitative criteria (accurate, <5% deviation from the target concentration; weakly accurate, 5%-10%; inaccurate, 10%-30%; wrong, >30% deviation). Results: Eleven operators carried out 19 sessions. No final preparation (n = 438) contained a wrong drug. The protocol involving no control failed to detect 1 of 3 dose errors made, and double-checking failed to detect 3 of 7 dose errors. The gravimetric control method detected all 5 of 5 dose errors. The accuracy of the doses measured was equivalent across the control methods (p = 0.63, Kruskal-Wallis). The final preparations were 58% to 60% accurate, 25% to 27% weakly accurate, 14% to 17% inaccurate and 0.9% wrong. High variability was observed between operators. Discussion: Gravimetric control was the only method able to detect all dose errors, but it did not improve dose accuracy. A dose accuracy within <5% deviation cannot always be guaranteed using manual production. Automation should be considered in the future.
An adaptive method of lines with error control [title partially garbled in the source scan; report AD-A142 253, Institute for Physical Science and Technology]
1984-06-01
Reaction-diffusion processes occur in many branches of biology and physical chemistry. The method of lines is used to model reaction-diffusion phenomena, and the primary goal of this adaptive method is to keep a particular norm of the space discretization error below a prescribed tolerance.
Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R
NASA Astrophysics Data System (ADS)
Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.
2016-12-01
Many implementations of a model-based approach for toroidal plasma have shown better control performance compared to the conventional type of feedback controller. One prerequisite of model-based control is the availability of a control oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper will discuss an additional use of the empirical model which is to estimate the error field in EXTRAP T2R. Two potential methods are discussed that can estimate the error field. The error field estimator is then combined with the model predictive control and yields better radial magnetic field suppression.
Uga, Minako; Dan, Ippeita; Dan, Haruka; Kyutoku, Yasushi; Taguchi, Y-h; Watanabe, Eiju
2015-01-01
Recent advances in multichannel functional near-infrared spectroscopy (fNIRS) allow wide coverage of cortical areas while entailing the necessity to control family-wise errors (FWEs) due to increased multiplicity. Conventionally, the Bonferroni method has been used to control FWE. While Type I errors (false positives) can be strictly controlled, the application of a large number of channel settings may inflate the chance of Type II errors (false negatives). The Bonferroni-based methods are especially stringent in controlling Type I errors of the most activated channel with the smallest p value. To maintain a balance between Types I and II errors, effective multiplicity (Meff) derived from the eigenvalues of correlation matrices is a method that has been introduced in genetic studies. Thus, we explored its feasibility in multichannel fNIRS studies. Applying the Meff method to three kinds of experimental data with different activation profiles, we performed resampling simulations and found that Meff was controlled at 10 to 15 in a 44-channel setting. Consequently, the number of significantly activated channels remained almost constant regardless of the number of measured channels. We demonstrated that the Meff approach can be an effective alternative to Bonferroni-based methods for multichannel fNIRS studies. PMID:26157982
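A common way to compute Meff from channel data, due to Li and Ji (2005) in the genetics literature the abstract refers to, is sketched below; whether the study used this exact estimator is an assumption here.

```python
import numpy as np

def effective_multiplicity(R):
    """Meff from the eigenvalues of a channel-by-channel correlation matrix
    R (Li & Ji rule): each eigenvalue contributes its 0/1 integer part plus
    its fractional part, so fully correlated channels count once and
    independent channels count fully."""
    lam = np.clip(np.linalg.eigvalsh(R), 0.0, None)
    return float(np.sum((lam >= 1.0) + (lam - np.floor(lam))))

# Sidak-style per-channel threshold for a family-wise alpha:
# alpha_channel = 1 - (1 - alpha) ** (1 / effective_multiplicity(R))
```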
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Fujiwara, T.; Lin, S.
1986-01-01
In this paper, a concatenated coding scheme for error control in data communications is presented and analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error (or decoding error) of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is also analyzed. Three specific examples are presented. One of the examples is proposed for error control in the NASA Telecommand System.
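The accept/retransmit logic of the scheme can be summarized in a few lines; `inner_decode` and `outer_check` below are hypothetical placeholders standing in for the paper's specific inner and outer codes.

```python
def inner_decode(frame):
    # placeholder for an inner decoder that corrects and detects errors;
    # returns (decoding_succeeded, corrected_data)
    return True, frame

def outer_check(data):
    # placeholder for outer-code error detection (e.g., a CRC test)
    return True

def receive(frame):
    """Concatenated-scheme decision rule from the abstract: retransmit if
    the inner decoder fails OR the outer code detects residual errors."""
    ok, data = inner_decode(frame)
    if not ok:
        return "NAK", None          # inner decoding failure
    if not outer_check(data):
        return "NAK", None          # residual errors detected by outer code
    return "ACK", data              # accept the block
```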
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji
This paper presents a new unified analysis of the estimation errors of model-matching extended-back-EMF estimation methods for sensorless drive of permanent-magnet synchronous motors. The analytical solutions for the estimation errors, whose validity is confirmed by numerical experiments, are highly universal and broadly applicable. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed that can directly realize a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is derived analytically, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems that use model-matching extended-back-EMF estimation methods.
NASA Technical Reports Server (NTRS)
Klein, V.
1979-01-01
Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are demonstrated. Examination of all available results gives the best values of the estimated parameters and specifies their accuracies.
Control method and system for hydraulic machines employing a dynamic joint motion model
Danko, George [Reno, NV]
2011-11-22
A control method and system for controlling a hydraulically actuated mechanical arm to perform a task, the mechanical arm optionally being a hydraulically actuated excavator arm. The method can include determining a dynamic model of the motion of the hydraulic arm for each hydraulic arm link by relating the input signal vector for each respective link to the output signal vector for the same link. Also the method can include determining an error signal for each link as the weighted sum of the differences between a measured position and a reference position and between the time derivatives of the measured position and the time derivatives of the reference position for each respective link. The weights used in the determination of the error signal can be determined from the constant coefficients of the dynamic model. The error signal can be applied in a closed negative feedback control loop to diminish or eliminate the error signal for each respective link.
Robot-Arm Dynamic Control by Computer
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.; Tarn, Tzyh J.; Chen, Yilong J.
1987-01-01
Feedforward and feedback schemes linearize responses to control inputs. The method for control of a robot arm is based on computed nonlinear feedback and state transformations to linearize the system and decouple the robot end-effector motions along each of the Cartesian axes, augmented with an optimal scheme for correction of errors in the workspace. The major new feature of the control method is that the optimal error-correction loop operates directly at the task level and not at the joint-servocontrol level.
Modeling and Control of a Tailsitter with a Ducted Fan
NASA Astrophysics Data System (ADS)
Argyle, Matthew Elliott
There are two traditional aircraft categories: fixed-wing aircraft, which have long endurance and a high cruise airspeed, and rotorcraft, which can take off and land vertically. The tailsitter is a type of aircraft that has the strengths of both platforms, with no additional mechanical complexity, because it takes off and lands vertically on its tail and can transition the entire aircraft horizontally into high-speed flight. In this dissertation, we develop the entire control system for a tailsitter with a ducted fan. The standard method to compute the quaternion-based attitude error does not generate ideal trajectories for a hovering tailsitter in some situations. In addition, the only approach in the literature to mitigate this breaks down for large attitude errors. We develop an alternative quaternion-based error method which generates better trajectories than the standard approach and can handle large errors. We also derive a hybrid backstepping controller with almost global asymptotic stability based on this error method. Many common altitude and airspeed control schemes for a fixed-wing airplane assume that the altitude and airspeed dynamics are decoupled, which leads to errors. The Total Energy Control System (TECS) is an approach that controls the altitude and airspeed by manipulating the total energy rate and energy distribution rate of the aircraft in a manner that accounts for the dynamic coupling. In this dissertation, a nonlinear controller, which can handle inaccurate thrust and drag models, based on the TECS principles is derived. Simulation results show that the nonlinear controller has better performance than the standard PI TECS control schemes. Most constant-altitude transitions are accomplished by generating an optimal trajectory, and potentially actuator inputs, based on a high-fidelity model of the aircraft. While there are several approaches to mitigate the effects of modeling errors, these do not fully remove the requirement for an accurate model. In this dissertation, we develop two different approaches that can achieve near constant-altitude transitions for some types of aircraft. The first method, based on multiple LQR controllers, requires a high-fidelity model of the aircraft. However, the second method, based on the energy along the body axes, requires almost no aerodynamic information.
Sliding mode output feedback control based on tracking error observer with disturbance estimator.
Xiao, Lingfei; Zhu, Yue
2014-07-01
For a class of systems that suffer from disturbances, an original output feedback sliding mode control method is presented based on a novel tracking error observer with a disturbance estimator. The mathematical models of the systems are not required to be highly accurate, and the disturbances can be vanishing or nonvanishing, while the bounds of the disturbances are unknown. By constructing a differential sliding surface and employing the reaching law approach, a sliding mode controller is obtained. On the basis of an extended disturbance estimator, a novel tracking error observer is constructed. By using the observation of the tracking error and the estimate of the disturbance, the sliding mode controller is implementable. It is proved that the disturbance estimation error and the tracking observation error are bounded, the sliding surface is reachable and the closed-loop system is robustly stable. Simulations of a servomotor positioning system and a five-degree-of-freedom active magnetic bearing system verify the effectiveness of the proposed method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
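A minimal control-law sketch of the ingredients named in the abstract (differential sliding surface, reaching law, disturbance estimate) is given below; the gains and the tanh smoothing are illustrative assumptions rather than the paper's exact design.

```python
import numpy as np

def smc_control(e, e_dot, d_hat, lam=5.0, k=2.0, eta=0.5, phi=0.01):
    """One update of a sliding mode controller on the differential surface
    s = e_dot + lam*e, with a proportional-plus-switching reaching law
    (tanh softens the sign function to limit chattering) and the
    disturbance estimate d_hat cancelled feedforward."""
    s = e_dot + lam * e
    return -(k * s + eta * np.tanh(s / phi)) - d_hat
```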
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, S
2015-06-15
Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors using routine measurement of electron beam energy constancy.
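One standard way to set such limits is the individuals/moving-range (I-MR) chart; whether this particular chart was used is an assumption of the sketch below.

```python
import numpy as np

def individuals_control_limits(x):
    """I-MR control limits for a sequence of constancy measurements: the
    average moving range estimates short-term sigma (MRbar/1.128 for a
    span of two), and the limits sit 3 sigma about the mean."""
    x = np.asarray(x, dtype=float)
    sigma = np.abs(np.diff(x)).mean() / 1.128
    center = x.mean()
    return center - 3 * sigma, center, center + 3 * sigma
```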
Compensation for positioning error of industrial robot for flexible vision measuring system
NASA Astrophysics Data System (ADS)
Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui
2013-01-01
Positioning error of the robot is a main factor limiting the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Existing compensation methods for positioning error based on a kinematic model of the robot have a significant limitation: they are not effective over the whole measuring space. A new compensation method for the positioning error of the robot based on vision measuring techniques is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and place two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS spatial positioning error is 3.422 mm with the single-camera method and 0.031 mm with the dual-camera method. It is concluded that the algorithm of the single-camera method needs improvement to achieve higher accuracy, while the accuracy of the dual-camera method is suitable for application.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di, Sheng; Berrocal, Eduardo; Cappello, Franck
The silent data corruption (SDC) problem is attracting more and more attention because it is expected to have a great impact on exascale HPC applications. SDC faults are hazardous in that they pass unnoticed by hardware and can lead to wrong computation results. In this work, we formulate SDC detection as a runtime one-step-ahead prediction method, leveraging multiple linear prediction methods in order to improve the detection results. The contributions are twofold: (1) we propose an error feedback control model that can reduce the prediction errors for different linear prediction methods, and (2) we propose a spatial-data-based even-sampling method to minimize the detection overheads (including memory and computation cost). We implement our algorithms in the fault tolerance interface, a fault tolerance library with multiple checkpoint levels, such that users can conveniently protect their HPC applications against both SDC errors and fail-stop errors. We evaluate our approach by using large-scale traces from well-known, large-scale HPC applications, as well as by running those HPC applications on a real cluster environment. Experiments show that our error feedback control model can improve detection sensitivity by 34-189% for bit-flip memory errors injected with bit positions in the range [20,30], without any degradation in detection accuracy. Furthermore, memory size can be reduced by 33% with our spatial-data even-sampling method, with only a slight and graceful degradation in detection sensitivity.
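The detection idea reduces to comparing each new value against a one-step-ahead prediction; the sketch below uses simple linear extrapolation with an error feedback term and a relative threshold, all of which are illustrative stand-ins for the paper's calibrated predictors.

```python
def detect_sdc(series, theta=0.01, alpha=0.5):
    """Flag indices whose value deviates from a one-step-ahead linear
    extrapolation by more than a relative threshold theta. The previous
    prediction residual is folded back into the next prediction (a crude
    analogue of the paper's error feedback control model)."""
    flagged, feedback = [], 0.0
    for t in range(2, len(series)):
        pred = 2 * series[t - 1] - series[t - 2] + alpha * feedback
        resid = series[t] - pred
        if abs(resid) > theta * max(abs(series[t]), 1e-12):
            flagged.append(t)       # possible silent data corruption
        feedback = resid            # error feedback for the next step
    return flagged
```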
Head repositioning accuracy to neutral: a comparative study of error calculation.
Hill, Robert; Jensen, Pål; Baardsen, Tor; Kulvik, Kristian; Jull, Gwendolen; Treleaven, Julia
2009-02-01
Deficits in cervical proprioception have been identified in subjects with neck pain through the measure of head repositioning accuracy (HRA). Nevertheless, there appears to be no general consensus regarding the construct of measurement of error used for calculating HRA. This study investigated four different mathematical methods of measurement of error to determine whether there were any differences in their ability to discriminate between a control group and subjects with a whiplash associated disorder. The four measures of cervical joint position error were calculated using a previous data set consisting of 50 subjects with whiplash complaining of dizziness (WAD D), 50 subjects with whiplash not complaining of dizziness (WAD ND) and 50 control subjects. The results indicated that no one measure of HRA uniquely detected or defined the differences between the whiplash and control groups. Constant error (CE) was significantly different between the whiplash and control groups for repositioning from extension (p<0.05). Absolute errors (AEs) and root mean square errors (RMSEs) demonstrated differences between the two WAD groups in rotation trials (p<0.05). No differences were seen with variable error (VE). The results suggest that a combination of AE (or RMSE) and CE is probably the most suitable set of measures for analysis of HRA.
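The four error constructs compared in the study have standard definitions, sketched here for a set of repositioning trials (signed angles from neutral, in degrees).

```python
import numpy as np

def hra_errors(trials_deg):
    """Constant error (signed bias), absolute error, root mean square
    error, and variable error (trial-to-trial spread) for head
    repositioning accuracy trials; note RMSE^2 = CE^2 + VE^2 when VE is
    computed without the small-sample correction."""
    x = np.asarray(trials_deg, dtype=float)
    ce = x.mean()                       # constant error
    ae = np.abs(x).mean()               # absolute error
    rmse = np.sqrt((x ** 2).mean())     # root mean square error
    ve = x.std(ddof=1)                  # variable error
    return ce, ae, rmse, ve
```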
Gas turbine engine control system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idelchik, M.S.
1991-02-19
This paper describes a method for controlling a gas turbine engine. It includes receiving an error signal and processing the error signal to form a primary control signal; receiving at least one anticipatory demand signal and processing the signal to form an anticipatory fuel control signal.
Yang, Yana; Hua, Changchun; Guan, Xinping
2016-03-01
Due to the cognitive limitations of the human operator and the lack of complete information about the remote environment, the performance of teleoperation systems cannot be guaranteed in most cases. However, some practical tasks conducted by teleoperation systems require high performance; for example, telesurgery requires satisfactorily high speed and high-precision control to safeguard the patient's health. To obtain such performance, error-constrained control is employed by applying the barrier Lyapunov function (BLF). With constrained synchronization errors, high convergence speed, small overshoot, and an arbitrarily small predefined residual synchronization error can be achieved simultaneously. Nevertheless, as with many classical control schemes, only asymptotic/exponential convergence (i.e., the synchronization errors converge to zero as time goes to infinity) can be achieved with error-constrained control. Finite-time convergence is clearly more desirable. To obtain finite-time synchronization performance, a terminal sliding mode (TSM)-based finite-time control method is developed in this paper for teleoperation systems with position error constraints. First, a new nonsingular fast terminal sliding mode (NFTSM) surface with new transformed synchronization errors is proposed. Second, an adaptive neural network system is applied to deal with the system uncertainties and external disturbances. Third, the BLF is applied to prove stability and the nonviolation of the synchronization error constraints. Finally, comparisons are conducted in simulation, and experimental results are presented to show the effectiveness of the proposed method.
Disturbance torque rejection properties of the NASA/JPL 70-meter antenna axis servos
NASA Technical Reports Server (NTRS)
Hill, R. E.
1989-01-01
Analytic methods for evaluating pointing errors caused by external disturbance torques are developed and applied to determine the effects of representative values of wind and friction torque. The expressions relating pointing errors to disturbance torques are shown to be strongly dependent upon the state estimator parameters, as well as upon the state feedback gain and the flow versus pressure characteristics of the hydraulic system. Under certain conditions, when control is derived from an uncorrected estimate of integral position error, the desired type 2 servo properties are not realized and finite steady-state position errors result. Methods for reducing these errors to negligible proportions through the proper selection of control gain and estimator correction parameters are demonstrated. The steady-state error produced by a disturbance torque is found to be directly proportional to the hydraulic internal leakage. This property can be exploited to provide a convenient method of determining system leakage from field measurements of estimator error, axis rate, and hydraulic differential pressure.
Quality Assurance of Chemical Measurements.
ERIC Educational Resources Information Center
Taylor, John K.
1981-01-01
Reviews aspects of quality control (methods to control errors) and quality assessment (verification that systems are operating within acceptable limits) including an analytical measurement system, quality control by inspection, control charts, systematic errors, and use of SRMs, materials for which properties are certified by the National Bureau…
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji; Sano, Kousuke
This paper presents a new unified analysis of the estimation errors of model-matching phase-estimation methods, such as rotor-flux state observers, back-EMF state observers, and back-EMF disturbance observers, for sensorless drive of permanent-magnet synchronous motors. The analytical solutions for the estimation errors, whose validity is confirmed by numerical experiments, are highly universal and broadly applicable. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed that can directly realize a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is derived analytically, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems that use one of the model-matching phase-estimation methods.
Cascade control of superheated steam temperature with neuro-PID controller.
Zhang, Jianhua; Zhang, Fenfang; Ren, Mifeng; Hou, Guolian; Fang, Fang
2012-11-01
In this paper, an improved cascade control methodology for superheated steam temperature processes is developed, in which the primary PID controller is implemented by neural networks trained by minimizing an error entropy criterion. The entropy of the tracking error can be estimated recursively by utilizing a receding horizon window technique. The measurable disturbances in the superheating process are input to the neuro-PID controller in addition to the sequence of tracking errors in the outer-loop control system; hence, feedback control is combined with feedforward control in the proposed neuro-PID controller. The convergence condition of the neural networks is analyzed. The implementation procedure of the proposed cascade control approach is summarized. Compared with a neuro-PID controller that minimizes a squared error criterion, the proposed neuro-PID controller minimizing the error entropy criterion can decrease fluctuations of the superheated steam temperature. A simulation example shows the advantages of the proposed method. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
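A common estimator behind minimum-error-entropy training is Renyi's quadratic entropy with a Gaussian Parzen kernel over the receding horizon window; the sketch below shows the batch form, and whether the paper's recursive update matches it exactly is an assumption.

```python
import numpy as np

def error_entropy(e_window, sigma=0.1):
    """Renyi quadratic entropy estimate of the tracking error over a
    window: H = -log V, where V (the information potential) averages a
    Gaussian kernel over all error pairs. The kernel's normalization
    constant is omitted, which only shifts H by a constant."""
    e = np.asarray(e_window, dtype=float)[:, None]
    V = np.exp(-((e - e.T) ** 2) / (4.0 * sigma ** 2)).mean()
    return -np.log(V)
```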
Data driven CAN node reliability assessment for manufacturing system
NASA Astrophysics Data System (ADS)
Zhang, Leiming; Yuan, Yong; Lei, Yong
2017-01-01
The reliability of the Controller Area Network (CAN) is critical to the performance and safety of the system. However, direct bus-off time assessment tools are lacking in practice due to the inaccessibility of node information and the complexity of node interactions upon errors. In order to measure the mean time to bus-off (MTTB) of all the nodes, a novel data-driven node bus-off time assessment method for the CAN network is proposed that directly uses network error information. First, the corresponding network error event sequence for each node is constructed using multiple-layer network error information. Then, a generalized zero-inflated Poisson process (GZIP) model is established for each node based on the error event sequence. Finally, the stochastic model is constructed to predict the MTTB of the node. Accelerated case studies with different error injection rates were conducted on a laboratory network to demonstrate the proposed method, where the network errors are generated by a computer-controlled error injection system. Experimental results show that the MTTB of nodes predicted by the proposed method agrees well with observations in the case studies. The proposed data-driven node time-to-bus-off assessment method for CAN networks can successfully predict the MTTB of nodes by directly using network error event data.
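For intuition, bus-off can also be brute-force simulated with the standard CAN fault-confinement rule (transmit error counter +8 per transmit error, -1 per success, bus-off at 256); the sketch below does this, and it also shows why a stochastic model is needed: for error rates below about 1/9 the counter drifts downward and bus-off becomes far too rare to simulate directly.

```python
import random

def simulate_mttb(p_err=0.2, runs=1000, step_s=1e-3):
    """Monte Carlo mean time to bus-off under the standard CAN
    fault-confinement counter rule. p_err is the per-transmission error
    probability and step_s the assumed time per transmission; both are
    illustrative. Practical only for p_err > 1/9 (positive counter drift)."""
    total = 0.0
    for _ in range(runs):
        tec, t = 0, 0.0
        while tec < 256:                       # bus-off threshold
            tec = tec + 8 if random.random() < p_err else max(tec - 1, 0)
            t += step_s
        total += t
    return total / runs
```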
NASA Astrophysics Data System (ADS)
Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin
2016-12-01
This paper presents an online method for estimating the cutting error by analyzing internal sensor readings. The internal sensors of the numerical control (NC) machine tool are used to avoid installation problems. A mathematical model for cutting error estimation is proposed to compute the relative position of the cutting point and the tool center point (TCP) from internal sensor readings, based on the cutting theory of gears. In order to verify the effectiveness of the proposed model, it was evaluated in simulations and experiments on a gear generating grinding process. The cutting error of the gear was estimated, and the factors that induce cutting error were analyzed. The simulations and experiments verify that the proposed approach is an efficient way to estimate the cutting error of the workpiece during the machining process.
Gas turbine engine control system
NASA Technical Reports Server (NTRS)
Idelchik, Michael S. (Inventor)
1991-01-01
A control system and method of controlling a gas turbine engine. The control system receives an error signal and processes the error signal to form a primary fuel control signal. The control system also receives at least one anticipatory demand signal and processes the signal to form an anticipatory fuel control signal. The control system adjusts the value of the anticipatory fuel control signal based on the value of the error signal to form an adjusted anticipatory signal and then the adjusted anticipatory fuel control signal and the primary fuel control signal are combined to form a fuel command signal.
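The claimed signal flow is simple enough to sketch; the gains and the rule for attenuating the anticipatory signal as the error grows are illustrative assumptions, not the patent's specific schedule.

```python
def fuel_command(error, anticipatory_demand, kp=1.0, ka=0.5):
    """Combine a primary fuel control signal (from the error) with an
    anticipatory fuel control signal (from the demand), with the
    anticipatory contribution adjusted based on the error value before
    the two are summed into the fuel command."""
    primary = kp * error
    anticipatory = ka * anticipatory_demand / (1.0 + abs(error))
    return primary + anticipatory
```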
Data entry errors and design for model-based tight glycemic control in critical care.
Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey
2012-01-01
Tight glycemic control (TGC) has shown benefits but has been difficult to achieve consistently. Model-based methods and computerized protocols offer the opportunity to improve TGC quality but require human data entry, particularly of blood glucose (BG) values, which can be significantly prone to error. This study presents the design and optimization of data entry methods to minimize error for a computerized and model-based TGC method prior to pilot clinical trials. To minimize data entry error, two tests were carried out to optimize a method with errors less than the 5%-plus reported in other studies. Four initial methods were tested on 40 subjects in random order, and the best two were tested more rigorously on 34 subjects. The tests measured entry speed and accuracy. Errors were reported as corrected and uncorrected errors, with the sum comprising a total error rate. The first set of tests used randomly selected values, while the second set used the same values for all subjects to allow comparisons across users and direct assessment of the magnitude of errors. These research tests were approved by the University of Canterbury Ethics Committee. The final data entry method tested reduced errors to less than 1-2%, a 60-80% reduction from reported values. The magnitude of the errors was clinically significant, typically 10.0 mmol/liter or an order of magnitude, but only for extreme values of BG < 2.0 mmol/liter or BG > 15.0-20.0 mmol/liter, both of which could be easily corrected with automated checking of extreme values for safety. The data entry method selected significantly reduced data entry errors in the limited design tests presented, and is in use in a clinical pilot TGC study. The overall approach and testing methods are easily performed and generalizable to other applications and protocols. © 2012 Diabetes Technology Society.
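The extreme-value safety check suggested by the study is straightforward; the thresholds below follow the BG extremes reported in the abstract.

```python
def check_bg_entry(bg_mmol_per_l, lo=2.0, hi=15.0):
    """Flag implausible blood glucose entries for re-confirmation before
    the controller uses them: observed entry errors were typically an
    order of magnitude, so they land outside this band."""
    if not (lo <= bg_mmol_per_l <= hi):
        raise ValueError(
            f"BG {bg_mmol_per_l} mmol/L outside [{lo}, {hi}]: "
            "re-enter or confirm before dosing")
    return bg_mmol_per_l
```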
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications was analyzed. The inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. The probability of undetected error of the proposed scheme is derived. An efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme, incorporated with a selective-repeat ARQ retransmission strategy, is also analyzed.
Attitude guidance and tracking for spacecraft with two reaction wheels
NASA Astrophysics Data System (ADS)
Biggs, James D.; Bai, Yuliang; Henninger, Helen
2018-04-01
This paper addresses the guidance and tracking problem for a rigid-spacecraft using two reaction wheels (RWs). The guidance problem is formulated as an optimal control problem on the special orthogonal group SO(3). The optimal motion is solved analytically as a function of time and is used to reduce the original guidance problem to one of computing the minimum of a nonlinear function. A tracking control using two RWs is developed that extends previous singular quaternion stabilisation controls to tracking controls on the rotation group. The controller is proved to locally asymptotically track the generated reference motions using Lyapunov's direct method. Simulations of a 3U CubeSat demonstrate that this tracking control is robust to initial rotation errors and angular velocity errors in the controlled axis. For initial angular velocity errors in the uncontrolled axis and under significant disturbances the control fails to track. However, the singular tracking control is combined with a nano-magnetic torquer which simply damps the angular velocity in the uncontrolled axis and is shown to provide a practical control method for tracking in the presence of disturbances and initial condition errors.
Zhang, Xingwu; Wang, Chenxi; Gao, Robert X.; Yan, Ruqiang; Chen, Xuefeng; Wang, Shibin
2016-01-01
Milling vibration is one of the most serious factors affecting machining quality and precision. In this paper, a novel hybrid error criterion-based frequency-domain LMS active control method is constructed and used for vibration suppression in milling processes by piezoelectric actuators and sensors, in which only one Fast Fourier Transform (FFT) is used and no Inverse Fast Fourier Transform (IFFT) is involved. The correction formulas are derived by a steepest descent procedure and the control parameters are analyzed and optimized. Then, a novel hybrid error criterion is constructed to improve the adaptability, reliability and anti-interference ability of the control algorithm. Finally, based on piezoelectric actuators and acceleration sensors, a simulation of a spindle and a milling process experiment are presented to verify the proposed method. In addition, a protection program is added to the control flow to enhance the reliability of the control method in applications. The simulation and experiment results indicate that the proposed method is an effective and reliable way to suppress vibration on line, and that machining quality can be obviously improved. PMID:26751448
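For orientation, a textbook frequency-domain LMS weight update is sketched below; note it is only a rough analogue, since the paper's algorithm is arranged so that a single FFT suffices with no IFFT, and the hybrid error criterion is not reproduced here.

```python
import numpy as np

def fd_lms_step(W, x_block, d_block, mu=0.1):
    """One block update of frequency-domain LMS: filter the input spectrum
    with the current weights, form the frequency-domain error against the
    desired block, and correct the weights along the conjugate-input
    gradient. Block overlap handling is omitted for brevity."""
    X = np.fft.fft(x_block)
    E = np.fft.fft(d_block) - W * X     # frequency-domain error
    W = W + mu * np.conj(X) * E         # steepest-descent update
    return W, E
```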
NASA Technical Reports Server (NTRS)
Carreno, Victor A.; Choi, G.; Iyer, R. K.
1990-01-01
A simulation study is described which predicts the susceptibility of an advanced control system to electrical transients resulting in logic errors, latched errors, error propagation, and digital upset. The system is based on a custom-designed microprocessor and it incorporates fault-tolerant techniques. The system under test and the method to perform the transient injection experiment are described. Results for 2100 transient injections are analyzed and classified according to charge level, type of error, and location of injection.
Modeling human response errors in synthetic flight simulator domain
NASA Technical Reports Server (NTRS)
Ntuen, Celestine A.
1992-01-01
This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling to integrate the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. The models will be verified experimentally in a flight-quality handling simulation.
Indirect learning control for nonlinear dynamical systems
NASA Technical Reports Server (NTRS)
Ryu, Yeong Soon; Longman, Richard W.
1993-01-01
In a previous paper, learning control algorithms were developed based on adaptive control ideas for linear time-variant systems. The learning control methods were shown to have certain advantages over their adaptive control counterparts, such as the ability to produce zero tracking error in time-varying systems and the ability to eliminate repetitive disturbances. In recent years, certain adaptive control algorithms have been developed for multi-body dynamic systems such as robots, with globally guaranteed convergence to zero tracking error for the nonlinear system equations. In this paper we study the relationship between such adaptive control methods designed for this specific class of nonlinear systems and the learning control problem for such systems, seeking to converge to zero tracking error in following a specific command repeatedly, starting from the same initial conditions each time. The extension of these methods from the adaptive control problem to the learning control problem is seen to be trivial. The advantages and disadvantages of using learning control based on such adaptive control concepts for nonlinear systems, and the use of other currently available learning control algorithms, are discussed.
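For reference, the simplest member of the learning control family is the P-type iterative learning law, sketched below on a first-order discrete plant. The plant, gain, and trial count are illustrative assumptions, not the adaptive-control-based laws studied in the paper.

```python
import numpy as np

# P-type iterative learning control: u_{j+1}(k) = u_j(k) + gamma * e_j(k+1).
a, b = 0.9, 1.0                      # assumed plant: x(k+1) = a*x(k) + b*u(k), y = x
N = 50
y_des = np.sin(np.linspace(0.0, 2.0 * np.pi, N))
u = np.zeros(N)
gamma = 0.5                          # learning gain; converges if |1 - gamma*b| < 1

for trial in range(20):
    x, y = 0.0, np.zeros(N)          # same initial condition every trial
    for k in range(N):
        y[k] = x
        x = a * x + b * u[k]
    e = y_des - y
    u[:-1] += gamma * e[1:]          # input at k affects output at k+1

print("final RMS tracking error:", np.sqrt(np.mean(e**2)))
```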
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Charlene; Wiseman, Howard; Jacobs, Kurt
2004-08-01
It was shown by Ahn, Wiseman, and Milburn [Phys. Rev. A 67, 052310 (2003)] that feedback control could be used as a quantum error correction process for errors induced by weak continuous measurement, given one perfectly measured error channel per qubit. Here we point out that this method can be easily extended to an arbitrary number of error channels per qubit. We show that the feedback protocols generated by our method encode n-2 logical qubits in n physical qubits, thus requiring just one more physical qubit than in the previous case.
Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation
ERIC Educational Resources Information Center
Prentice, J. S. C.
2012-01-01
An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
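A minimal sketch of the underlying idea, estimating discretization error by comparing five-point finite-difference solutions on grids of different resolution, is shown below. For brevity it uses two grids and a manufactured solution; the paper's algorithm uses three grids, so this is only an assumption-laden miniature.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spsolve

def solve_poisson(n):
    """Five-point FDM for -Laplacian(u) = f on the unit square, u = 0 on the boundary."""
    h = 1.0 / (n + 1)
    T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
    A = kron(identity(n), T) + kron(T, identity(n))
    x = np.linspace(h, 1.0 - h, n)
    X, Y = np.meshgrid(x, x)
    f = 2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)  # manufactured source
    return spsolve(A.tocsr(), f.ravel()).reshape(n, n), X, Y

u_h, X, Y = solve_poisson(31)     # grid spacing h = 1/32
u_f, _, _ = solve_poisson(63)     # spacing h/2; coarse nodes coincide with every other fine node

# Richardson-style estimate of the fine-grid error at the coarse nodes:
# for an O(h^2) method, err(u_f) ~ (u_h - u_f) / (2^2 - 1).
est = np.abs(u_h - u_f[1::2, 1::2]) / 3.0
exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
print("max estimated error:", est.max())
print("max actual error   :", np.abs(u_f[1::2, 1::2] - exact).max())
```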
Yan, Song; Li, Yun
2014-02-15
Despite its great capability to detect rare variant associations, next-generation sequencing is still prohibitively expensive when applied to large samples. In case-control studies, it is thus appealing to sequence only a subset of cases to discover variants and to genotype the identified variants in controls and the remaining cases, under the reasonable assumption that causal variants are usually enriched among cases. However, this approach leads to inflated type-I error if analyzed naively for rare variant association. Several methods have been proposed in the recent literature to control type-I error at the cost of either excluding some sequenced cases or correcting the genotypes of discovered rare variants. All of these approaches thus suffer from some degree of information loss and are therefore underpowered. We propose a novel method (BETASEQ), which corrects the inflation of type-I error by supplementing pseudo-variants while keeping the original sequence and genotype data intact. Extensive simulations and real data analysis demonstrate that, in most practical situations, BETASEQ leads to higher testing power than existing approaches with guaranteed (controlled or conservative) type-I error. BETASEQ and associated R files, including documentation and examples, are available at http://www.unc.edu/~yunmli/betaseq
Movement decoupling control for two-axis fast steering mirror
NASA Astrophysics Data System (ADS)
Wang, Rui; Qiao, Yongming; Lv, Tao
2017-02-01
The two-axis fast steering mirror, based on flexure hinges and piezoelectric actuators, is a complex system with time-varying, uncertain, and strongly coupled dynamics. It is extremely difficult to achieve high-precision decoupling control with the traditional PID control method. Using the feedback error learning method, an inverse hysteresis model for the piezo-ceramic was established, based on an inner-product dynamic neural network suited to nonlinear, non-smooth behavior. To improve actuator precision, an adaptive decoupling control method based on the piezo-ceramic inverse model and two dynamic neural networks was proposed. The experimental results indicate that, with the two neural network adaptive movement decoupling control algorithm, the static relative error is reduced from 4.44% to 0.30% and the coupling degree from 12.71% to 0.60%, while the dynamic relative error is reduced from 13.92% to 2.85% and the coupling degree from 2.63% to 1.17%.
NASA Astrophysics Data System (ADS)
Chen, Hua; Chen, Jihong; Wang, Baorui; Zheng, Yongcheng
2016-10-01
The magnetorheological finishing (MRF) process, based on the dwell time method with constant normal spacing for flexible polishing, introduces normal contour error when finely polishing complex surfaces such as aspheric surfaces. The normal contour error changes the ribbon's shape and the consistency of the removal characteristics of MRF. Based on continuously scanning the normal spacing between the workpiece and a laser range finder, a novel method is put forward to measure the normal contour errors along the machining track while polishing complex surfaces. The normal contour errors are measured dynamically, by which the workpiece's clamping precision, the multi-axis machining NC program, and the dynamic performance of the MRF machine are verified and security-checked for the MRF process. A unit for on-machine measurement of the normal contour errors of complex surfaces was designed. Using the measurement unit's results as feedback to adjust the parameters of the feedforward control and the multi-axis machining, an optimized servo control method is presented to compensate the normal contour errors. An experiment polishing a 180 mm × 180 mm aspherical workpiece of fused silica by MRF was set up to validate the method. The results show that the normal contour error was kept below 10 μm, and the PV value of the polished surface accuracy was improved from 0.95λ to 0.09λ under the same process parameters. The technology has been applied in the PKC600-Q1 MRF machine developed by the China Academy of Engineering Physics since 2014, where it is used in national large-scale optical engineering projects for processing ultra-precision optical parts.
Hybrid Adaptive Flight Control with Model Inversion Adaptation
NASA Technical Reports Server (NTRS)
Nguyen, Nhan
2011-01-01
This study investigates a hybrid adaptive flight control method as a design possibility for a flight control system that can enable an effective adaptation strategy to deal with off-nominal flight conditions. The hybrid adaptive control blends both direct and indirect adaptive control in a model inversion flight control architecture. The blending of both direct and indirect adaptive control provides a much more flexible and effective adaptive flight control architecture than that with either direct or indirect adaptive control alone. The indirect adaptive control is used to update the model inversion controller by an on-line parameter estimation of uncertain plant dynamics based on two methods. The first parameter estimation method is an indirect adaptive law based on the Lyapunov theory, and the second method is a recursive least-squares indirect adaptive law. The model inversion controller is therefore made to adapt to changes in the plant dynamics due to uncertainty. As a result, the modeling error is reduced, which directly leads to a decrease in the tracking error. In conjunction with the indirect adaptive control that updates the model inversion controller, a direct adaptive control is implemented as an augmented command to further reduce any residual tracking error that is not entirely eliminated by the indirect adaptive control.
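The second indirect law, recursive least-squares estimation, can be sketched in a few lines. The plant model, forgetting factor, and noise level below are illustrative assumptions, not the flight-control implementation described in the study.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive least-squares step for on-line parameter estimation.

    theta : current parameter estimate
    P     : covariance-like matrix
    phi   : regressor vector (e.g., past states and inputs)
    y     : new measurement
    lam   : forgetting factor (< 1 discounts old data)
    """
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)           # gain vector
    theta = theta + k * (y - phi @ theta)   # correct with the prediction error
    P = (P - np.outer(k, Pphi)) / lam
    return theta, P

# usage: identify x(k+1) = a*x(k) + b*u(k) with true (a, b) = (0.8, 0.5)
rng = np.random.default_rng(1)
theta, P = np.zeros(2), 1e3 * np.eye(2)
x = 0.0
for _ in range(200):
    u = rng.standard_normal()
    x_next = 0.8 * x + 0.5 * u + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, np.array([x, u]), x_next)
    x = x_next
print("estimated (a, b):", theta)
```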
Correcting groove error in gratings ruled on a 500-mm ruling engine using interferometric control.
Mi, Xiaotao; Yu, Haili; Yu, Hongzhu; Zhang, Shanwen; Li, Xiaotian; Yao, Xuefeng; Qi, Xiangdong; Bayinhedhig; Wan, Qiuhua
2017-07-20
Groove error is one of the most important factors affecting grating quality and spectral performance. To reduce groove error, we propose a new ruling-tool carriage system based on aerostatic guideways. We design a new blank carriage system with double piezoelectric actuators. We also propose a completely closed-loop servo-control system with a new optical measurement system that can control the position of the diamond relative to the blank. To evaluate our proposed methods, we produced several gratings, including an echelle grating with 79 grooves/mm, a grating with 768 grooves/mm, and a high-density grating with 6000 grooves/mm. The results show that our methods effectively reduce groove error in ruled gratings.
NASA Technical Reports Server (NTRS)
Li, Rongsheng (Inventor); Kurland, Jeffrey A. (Inventor); Dawson, Alec M. (Inventor); Wu, Yeong-Wei A. (Inventor); Uetrecht, David S. (Inventor)
2004-01-01
Methods and structures are provided that enhance attitude control during gyroscope substitutions by ensuring that a spacecraft's attitude control system does not drive its absolute-attitude sensors out of their capture ranges. In a method embodiment, an operational process-noise covariance Q of a Kalman filter is temporarily replaced with a substantially greater interim process-noise covariance Q. This replacement increases the weight given to the most recent attitude measurements and hastens the reduction of attitude errors and gyroscope bias errors. The error effect of the substituted gyroscopes is reduced and the absolute-attitude sensors are not driven out of their capture range. In another method embodiment, this replacement is preceded by the temporary replacement of an operational measurement-noise variance R with a substantially larger interim measurement-noise variance R to reduce transients during the gyroscope substitutions.
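A scalar illustration of the covariance-replacement idea is sketched below: inflating Q for a few cycles makes the filter trust recent measurements more, so the estimate converges quickly after the substitution. All numbers are invented; a real attitude filter would be multi-state.

```python
import numpy as np

def kf_step(x, P, z, Q, R, F=1.0, H=1.0):
    """One scalar Kalman filter cycle (predict + update)."""
    x_pred = F * x
    P_pred = F * P * F + Q
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

Q_OPER, Q_INTERIM, R = 1e-6, 1e-2, 1e-3   # hypothetical values
x, P = 0.0, 1e-4
truth = 0.05                              # attitude error introduced by the swap
rng = np.random.default_rng(2)
for k in range(50):
    Q = Q_INTERIM if k < 10 else Q_OPER   # interim covariance, then restore
    z = truth + np.sqrt(R) * rng.standard_normal()
    x, P = kf_step(x, P, z, Q, R)
print("estimate after transient:", x)
```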
Kim, Myoung-Soo; Kim, Jung-Soon; Jung, In Sook; Kim, Young Hae; Kim, Ho Jung
2007-03-01
The purpose of this study was to develop and evaluate an error reporting promoting program (ERPP) to systematically reduce the incidence rate of nursing errors in the operating room. A non-equivalent control group non-synchronized design was used. Twenty-six operating room nurses from one university hospital in Busan participated in this study. They were stratified into four groups according to their operating room experience and were allocated to the experimental and control groups using a matching method. The Mann-Whitney U test was used to analyze the differences between the pre- and post-test incidence rates of nursing errors in the two groups. The incidence rate of nursing errors decreased significantly in the experimental group, from 28.4% to 15.7% relative to the pre-test score. By domain, the incidence rate decreased significantly in three domains, "compliance with aseptic technique", "management of documents", and "environmental management", in the experimental group, while it also decreased in the control group, to which the ordinary error-reporting method was applied. An error-reporting system makes it possible to share errors and to learn from them. ERPP was effective in reducing errors in recognition-related nursing activities. For more effective error prevention, this program should be applied together with risk management efforts across the whole health care system.
Aircraft system modeling error and control error
NASA Technical Reports Server (NTRS)
Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)
2012-01-01
A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1981-01-01
A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.
Systems and methods for data quality control and cleansing
Wenzel, Michael; Boettcher, Andrew; Drees, Kirk; Kummer, James
2016-05-31
A method for detecting and cleansing suspect building automation system data is shown and described. The method includes using processing electronics to automatically determine which of a plurality of error detectors and which of a plurality of data cleansers to use with building automation system data. The method further includes using processing electronics to automatically detect errors in the data and cleanse the data using a subset of the error detectors and a subset of the cleansers.
Stability and error estimation for Component Adaptive Grid methods
NASA Technical Reports Server (NTRS)
Oliger, Joseph; Zhu, Xiaolei
1994-01-01
Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDEs) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAGs using the stability results. Using these estimates, the error can be controlled on CAGs. Thus, the solution can be computed efficiently on CAGs within a given error tolerance. Computational results for time dependent linear problems in one and two space dimensions are presented.
Comparison of thruster configurations in attitude control systems. M.S. Thesis. Progress Report
NASA Technical Reports Server (NTRS)
Boland, J. S., III; Drinkard, D. M., Jr.; White, L. R.; Chakravarthi, K. R.
1973-01-01
Several aspects concerning reaction control jet systems as used to govern the attitude of a spacecraft were considered. A thruster configuration currently in use was compared to several new configurations developed in this study. The method of determining the error signals which control the firing of the thrusters was also investigated. The current error determination procedure is explained and a new method is presented. Both of these procedures are applied to each of the thruster configurations which are developed and comparisons of the two methods are made.
Test functions for three-dimensional control-volume mixed finite-element methods on irregular grids
Naff, R.L.; Russell, T.F.; Wilson, J.D.
2000-01-01
Numerical methods based on unstructured grids, with irregular cells, usually require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element methods, vector shape functions are used to approximate the distribution of velocities across cells and vector test functions are used to minimize the error associated with the numerical approximation scheme. For a logically cubic mesh, the lowest-order shape functions are chosen in a natural way to conserve intercell fluxes that vary linearly in logical space. Vector test functions, while somewhat restricted by the mapping into the logical reference cube, admit a wider class of possibilities. Ideally, an error minimization procedure to select the test function from an acceptable class of candidates would be the best procedure. Lacking such a procedure, we first investigate the effect of possible test functions on the pressure distribution over the control volume; specifically, we look for test functions that allow for the elimination of intermediate pressures on cell faces. From these results, we select three forms for the test function for use in a control-volume mixed method code and subject them to an error analysis for different forms of grid irregularity; errors are reported in terms of the discrete L2 norm of the velocity error. Of these three forms, one appears to produce optimal results for most forms of grid irregularity.
Use of historical control data for assessing treatment effects in clinical trials.
Viele, Kert; Berry, Scott; Neuenschwander, Beat; Amzal, Billy; Chen, Fang; Enas, Nathan; Hobbs, Brian; Ibrahim, Joseph G; Kinnersley, Nelson; Lindborg, Stacy; Micallef, Sandrine; Roychoudhury, Satrajit; Thompson, Laura
2014-01-01
Clinical trials rarely, if ever, occur in a vacuum. Generally, large amounts of clinical data are available prior to the start of a study, particularly on the current study's control arm. There is obvious appeal in using (i.e., 'borrowing') this information. With historical data providing information on the control arm, more trial resources can be devoted to the novel treatment while retaining accurate estimates of the current control arm parameters. This can result in more accurate point estimates, increased power, and reduced type I error in clinical trials, provided the historical information is sufficiently similar to the current control data. If this assumption of similarity is not satisfied, however, one can acquire increased mean square error of point estimates due to bias and either reduced power or increased type I error depending on the direction of the bias. In this manuscript, we review several methods for historical borrowing, illustrating how key parameters in each method affect borrowing behavior, and then, we compare these methods on the basis of mean square error, power and type I error. We emphasize two main themes. First, we discuss the idea of 'dynamic' (versus 'static') borrowing. Second, we emphasize the decision process involved in determining whether or not to include historical borrowing in terms of the perceived likelihood that the current control arm is sufficiently similar to the historical data. Our goal is to provide a clear review of the key issues involved in historical borrowing and provide a comparison of several methods useful for practitioners. Copyright © 2013 John Wiley & Sons, Ltd.
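As a minimal sketch of static borrowing for a binary endpoint, the power-prior posterior below down-weights the historical control likelihood by a fixed weight w; the counts and weight are invented. The dynamic methods the review emphasizes would instead let the observed agreement between arms drive the amount of borrowing.

```python
from scipy.stats import beta

# Static power-prior borrowing for a binary endpoint (illustrative numbers).
y0, n0 = 30, 100   # historical control: 30 events in 100 patients
y,  n  = 12, 40    # current control arm
w = 0.5            # borrowing weight: 0 = ignore history, 1 = pool fully

# Beta(1,1) prior times the historical likelihood raised to w is conjugate:
a_post = 1 + w * y0 + y
b_post = 1 + w * (n0 - y0) + (n - y)
post = beta(a_post, b_post)
print("posterior mean of control rate:", post.mean())
print("95% credible interval:", post.interval(0.95))
```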
Control apparatus and method for efficiently heating a fuel processor in a fuel cell system
Doan, Tien M.; Clingerman, Bruce J.
2003-08-05
A control apparatus and method for efficiently controlling the amount of heat generated by a fuel processor in a fuel cell system by determining a temperature error between actual and desired fuel processor temperatures. The temperature error is converted to a combustor fuel injector command signal or a heat dump valve position command signal depending upon the type of temperature error. Logic controls are responsive to the combustor fuel injector command signals and the heat dump valve position command signal to prevent the combustor fuel injector command signal from being generated if the heat dump valve is opened or, alternately, to prevent the heat dump valve position command signal from being generated if the combustor fuel injector is opened.
Passive quantum error correction of linear optics networks through error averaging
NASA Astrophysics Data System (ADS)
Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.
2018-02-01
We propose and investigate a method of error detection and noise correction for bosonic linear networks using unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof-of-principle examples, including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and to probe the related error thresholds. Finally, we discuss some of the potential uses of this scheme.
NASA Astrophysics Data System (ADS)
Cong, Wang; Xu, Lingdi; Li, Ang
2017-10-01
Large aspheric surfaces, which deviate from spherical surfaces, are widely used in various optical systems. Compared with spherical surfaces, large aspheric surfaces have many advantages, such as improving image quality, correcting aberration, expanding the field of view, increasing the effective distance, and making the optical system compact and lightweight. In particular, with the rapid development of space optics, space sensors require higher resolution and larger viewing angles, so aspheric surfaces are becoming essential components of such optical systems. After coarse grinding of an aspheric surface, the surface profile error is about tens of microns [1]. To achieve the final surface accuracy requirement, the aspheric surface must be rapidly corrected, and high-precision testing is the basis of rapid convergence of the surface error. There are many methods for aspheric surface testing [2], including geometric ray detection, Hartmann testing, the Ronchi test, the knife-edge method, direct profile testing, and interferometry, each of which has its disadvantages [6]. In recent years the measurement of aspheric surfaces has become one of the main factors restricting the development of aspheric surface processing. A two-meter-aperture industrial coordinate measuring machine (CMM) is available, but it has drawbacks such as large detection error and low repeatability in measuring coarsely ground aspheric surfaces, which seriously affects convergence efficiency during aspheric mirror processing. To solve these problems, this paper presents an effective error control, calibration, and removal method based on real-time monitoring of the calibration mirror position, probe correction, selection of the measurement mode, and development of a measurement point distribution program. Verified on real engineering examples, the method improves the nominal measurement accuracy of the industrial-grade CMM from a PV value of 7 microns to 4 microns, which effectively improves the grinding efficiency of aspheric mirrors and verifies the correctness of the method. This paper also investigates the error detection and operation control method, the error calibration of the CMM, and the random error calibration of the CMM.
NASA Astrophysics Data System (ADS)
Burman, Erik; Hansbo, Peter; Larson, Mats G.
2018-03-01
Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems developed in the setting of convection-dominated convection-diffusion problems, can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson’s equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and error in the measurements.
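As context for the comparison, a minimal sketch of plain Tikhonov regularization for a discretely ill-posed least-squares problem is given below (synthetic data, invented regularization weights); the stabilized finite element formulations the paper favors are not shown.

```python
import numpy as np

# Tikhonov regularization: minimize ||A u - b||^2 + alpha ||u||^2,
# solved via the normal equations (A^T A + alpha I) u = A^T b.
rng = np.random.default_rng(3)
A = rng.standard_normal((50, 20))
A[:, -1] = A[:, 0] + 1e-6 * rng.standard_normal(50)  # near-dependent column -> ill-posed
u_true = np.ones(20)
b = A @ u_true + 1e-3 * rng.standard_normal(50)      # noisy measurements

for alpha in (0.0, 1e-6, 1e-2):
    u = np.linalg.solve(A.T @ A + alpha * np.eye(20), A.T @ b)
    print(f"alpha = {alpha:g}: reconstruction error = {np.linalg.norm(u - u_true):.3g}")
```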
Solution of elastic-plastic stress analysis problems by the p-version of the finite element method
NASA Technical Reports Server (NTRS)
Szabo, Barna A.; Actis, Ricardo L.; Holzer, Stefan M.
1993-01-01
The solution of small strain elastic-plastic stress analysis problems by the p-version of the finite element method is discussed. The formulation is based on the deformation theory of plasticity and the displacement method. Practical realization of controlling discretization errors for elastic-plastic problems is the main focus. Numerical examples which include comparisons between the deformation and incremental theories of plasticity under tight control of discretization errors are presented.
Zhang, Huaguang; Cui, Lili; Zhang, Xin; Luo, Yanhong
2011-12-01
In this paper, a novel data-driven robust approximate optimal tracking control scheme is proposed for unknown general nonlinear systems by using the adaptive dynamic programming (ADP) method. In the design of the controller, only available input-output data is required instead of known system dynamics. A data-driven model is established by a recurrent neural network (NN) to reconstruct the unknown system dynamics using available input-output data. By adding a novel adjustable term related to the modeling error, the resultant modeling error is first guaranteed to converge to zero. Then, based on the obtained data-driven model, the ADP method is utilized to design the approximate optimal tracking controller, which consists of the steady-state controller and the optimal feedback controller. Further, a robustifying term is developed to compensate for the NN approximation errors introduced by implementing the ADP method. Based on Lyapunov approach, stability analysis of the closed-loop system is performed to show that the proposed controller guarantees the system state asymptotically tracking the desired trajectory. Additionally, the obtained control input is proven to be close to the optimal control input within a small bound. Finally, two numerical examples are used to demonstrate the effectiveness of the proposed control scheme.
Correcting AUC for Measurement Error.
Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang
2015-12-01
Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
Differential-Drive Mobile Robot Control Design based-on Linear Feedback Control Law
NASA Astrophysics Data System (ADS)
Nurmaini, Siti; Dewi, Kemala; Tutuko, Bambang
2017-04-01
This paper deals with the problem of how to control a differential-drive mobile robot with a simple control law. When a mobile robot moves from one position to another to reach a destination, it always produces some error. Therefore, a mobile robot requires a certain control law to drive the robot's movement to the destination with the smallest possible error. In this paper, in order to reduce the position error, a linear feedback control with a pole placement approach is proposed to assign the desired closed-loop polynomial. The presented work leads to an improved understanding of the kinematics equations of the differential-drive mobile robot (DDMR), which will assist in the design of suitable controllers for DDMR movement. The results show that, by using the linear feedback control method with the pole placement approach, the position error is reduced and fast convergence is achieved.
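A sketch of the pole placement step is given below, using a standard small-error linearization of a differential-drive robot tracking a path at constant speed (lateral error and heading error as states). The model, speed, and pole locations are assumptions for illustration, not the paper's design.

```python
import numpy as np
from scipy.signal import place_poles

# Linearized tracking-error model: e_y' = v * e_theta, e_theta' = w_corr.
v = 0.5                              # m/s, assumed cruise speed
A = np.array([[0.0, v],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])                # input: angular-velocity correction

K = place_poles(A, B, [-2.0, -3.0]).gain_matrix
print("state feedback gain K:", K)

# closed-loop check: eigenvalues of A - B K should match the placed poles
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```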
NASA Astrophysics Data System (ADS)
Cottrell, Paul Edward
There is a lack of research in the area of hedging futures contracts, especially in illiquid or very volatile market conditions. It is important to understand the volatility of the oil and currency markets because reduced fluctuations in these markets could lead to better hedging performance. This study compared different hedging methods by using a hedging error metric, supplementing the Receding Horizon Control and Stochastic Programming (RHCSP) method by utilizing the London Interbank Offered Rate with the Levy process. The RHCSP hedging method was investigated to determine whether it improved hedging error compared to the Black-Scholes, Leland, and Whalley and Wilmott methods when applied to simulated, oil, and currency futures markets. A modified RHCSP method was also investigated to determine whether it could significantly reduce hedging error under extreme market illiquidity conditions when applied to simulated, oil, and currency futures markets. This quantitative study used chaos theory and emergence for its theoretical foundation. An experimental research method was utilized, with a sample size of 506 hedging errors pertaining to historical and simulation data. The historical data covered January 1, 2005 through December 31, 2012. The modified RHCSP method was found to significantly reduce hedging error for the oil and currency futures markets, as shown by a two-way ANOVA with a t test and post hoc Tukey test. This study promotes positive social change by identifying better risk controls for investment portfolios and illustrating how to benefit from high volatility in markets. Economists, professional investment managers, and independent investors could benefit from the findings of this study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unseren, M.A.
The report discusses the orientation tracking control problem for a kinematically redundant, autonomous manipulator moving in a three dimensional workspace. The orientation error is derived using the normalized quaternion error method of Ickes, the Luh, Walker, and Paul error method, and a method suggested here utilizing the Rodrigues parameters, all of which are expressed in terms of normalized quaternions. The analytical time derivatives of the orientation errors are determined. The latter, along with the translational velocity error, form a closed loop kinematic velocity model of the manipulator using normalized quaternion and translational position feedback. An analysis of the singularities associated with expressing the models in a form suitable for solving the inverse kinematics problem is given. Two redundancy resolution algorithms originally developed using an open loop kinematic velocity model of the manipulator are extended to properly take into account the orientation tracking control problem. This report furnishes the necessary mathematical framework required prior to experimental implementation of the orientation tracking control schemes on the seven axis CESARm research manipulator or on the seven-axis Robotics Research K1207i dexterous manipulator, the latter of which is to be delivered to the Oak Ridge National Laboratory in 1993.
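For concreteness, a minimal sketch of one normalized-quaternion orientation error is given below, computed as the product of the desired quaternion with the conjugate of the actual one; this is one of several conventions in the family the report compares, and the test values are arbitrary.

```python
import numpy as np

def quat_mul(q, p):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

# Orientation error q_err = q_des * conj(q_act); its vector part is a
# common small-angle error signal for orientation tracking loops.
q_des = np.array([np.cos(0.1), np.sin(0.1), 0.0, 0.0])  # 0.2 rad about x
q_act = np.array([1.0, 0.0, 0.0, 0.0])                  # identity attitude
q_err = quat_mul(q_des, quat_conj(q_act))
print("error vector part:", q_err[1:])                  # ~[0.0998, 0, 0]
```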
Analysis technique for controlling system wavefront error with active/adaptive optics
NASA Astrophysics Data System (ADS)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
Improved Conflict Detection for Reducing Operational Errors in Air Traffic Control
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Erzberger, Hainz
2003-01-01
An operational error is an incident in which an air traffic controller allows the separation between two aircraft to fall below the minimum separation standard. The rates of such errors in the US have increased significantly over the past few years. This paper proposes new detection methods that can help correct this trend by improving on the performance of Conflict Alert, the existing software in the Host Computer System that is intended to detect and warn controllers of imminent conflicts. In addition to the usual trajectory based on the flight plan, a "dead-reckoning" trajectory (current velocity projection) is also generated for each aircraft and checked for conflicts. Filters for reducing common types of false alerts were implemented. The new detection methods were tested in three different ways. First, a simple flightpath command language was developed to generate precisely controlled encounters for the purpose of testing the detection software. Second, written reports and tracking data were obtained for actual operational errors that occurred in the field, and these were "replayed" to test the new detection algorithms. Finally, the detection methods were used to shadow live traffic, and performance was analyzed, particularly with regard to the false-alert rate. The results indicate that the new detection methods can provide timely warnings of imminent conflicts more consistently than Conflict Alert.
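The dead-reckoning check can be illustrated as a straight-line velocity projection tested against a separation threshold. This is a generic sketch, not the Host Computer System or Conflict Alert algorithm; the horizon, sampling, and threshold values are assumptions.

```python
import numpy as np

def dead_reckoning_conflict(p1, v1, p2, v2, horizon=300.0, sep=9260.0):
    """Check a current-velocity projection for predicted loss of separation.

    p1, p2 : current horizontal positions (m)
    v1, v2 : current velocity vectors (m/s)
    horizon: look-ahead time (s)
    sep    : minimum horizontal separation (m); 9260 m is roughly 5 nmi

    Returns the first violation time, or None if the projection stays clear.
    """
    dp = np.asarray(p2, float) - np.asarray(p1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    for t in np.arange(0.0, horizon, 1.0):   # 1-s sampling of the projection
        if np.linalg.norm(dp + t * dv) < sep:
            return t
    return None

t = dead_reckoning_conflict([0, 0], [250, 0], [20000, 1000], [-250, 0])
print("predicted loss of separation at t =", t, "s")   # head-on closure
```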
A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Garg, Devendra P.
1998-01-01
This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function. The minimal objective function results in an optimal performance of the system. A spacecraft mounted science instrument line-of-sight pointing control is used to demonstrate results.
Alignment control study for the solar optical telescope
NASA Technical Reports Server (NTRS)
1976-01-01
Analysis of the alignment and focus errors that can be tolerated, methods of sensing such errors, and mechanisms to make the necessary corrections were addressed. Alternate approaches and their relative merits were considered. The results of this study indicate that adequate alignment control can be achieved.
Backward-gazing method for measuring solar concentrators shape errors.
Coquand, Mathieu; Henault, François; Caliot, Cyril
2017-03-01
This paper describes a backward-gazing method for measuring the optomechanical errors of solar concentrating surfaces. It makes use of four cameras placed near the solar receiver and simultaneously recording images of the sun reflected by the optical surfaces. Simple data processing then allows reconstructing the slope and shape errors of the surfaces. The originality of the method is enforced by the use of generalized quad-cell formulas and approximate mathematical relations between the slope errors of the mirrors and their reflected wavefront in the case of sun-tracking heliostats at high-incidence angles. Numerical simulations demonstrate that the measurement accuracy is compliant with standard requirements of solar concentrating optics in the presence of noise or calibration errors. The method is suited to fine characterization of the optical and mechanical errors of heliostats and their facets, or to provide better control for real-time sun tracking.
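The generalized quad-cell formulas build on the textbook quad-cell displacement estimate, sketched below; the four-camera generalization and the slope-to-wavefront relations of the paper are not reproduced, and the intensities are invented.

```python
import numpy as np

def quad_cell_offsets(a, b, c, d):
    """Classic quad-cell estimate of a spot displacement.

    a..d are the intensities in the four quadrants:
        a | b
        --+--
        c | d
    Returns normalized (x, y) offsets of the spot centroid.
    """
    s = a + b + c + d
    x = ((b + d) - (a + c)) / s   # right minus left
    y = ((a + b) - (c + d)) / s   # top minus bottom
    return x, y

print(quad_cell_offsets(1.0, 1.2, 1.0, 1.2))   # spot shifted toward +x
```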
Irregular analytical errors in diagnostic testing - a novel concept.
Vogeser, Michael; Seger, Christoph
2018-02-23
In laboratory medicine, routine periodic analyses for internal and external quality control measurements interpreted by statistical methods are mandatory for batch clearance. Data analysis of these process-oriented measurements allows for insight into random analytical variation and systematic calibration bias over time. However, in such a setting, any individual sample is not under individual quality control. The quality control measurements act only at the batch level. Quantitative or qualitative data derived for many effects and interferences associated with an individual diagnostic sample can compromise any analyte. It is obvious that a process for a quality-control-sample-based approach of quality assurance is not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term called the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample an irregular analytical error is defined as an inaccuracy (which is the deviation from a reference measurement procedure result) of a test result that is so high it cannot be explained by measurement uncertainty of the utilized routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or an individual single sample associated processing error in the analytical process. Currently, the availability of reference measurement procedures is still highly limited, but LC-isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition/terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These issues include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. errors due to incorrect pipetting volume due to air bubbles in a sample), which can both lead to inaccurate results and risks for patients.
Nonlinear adaptive control system design with asymptotically stable parameter estimation error
NASA Astrophysics Data System (ADS)
Mishkov, Rumen; Darmonski, Stanislav
2018-01-01
The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown parameter estimation without persistent excitation and capability to directly control the estimates transient response time. The method proposed modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concepts. The data accumulation principle is the main tool for achieving asymptotic unknown parameter estimation. It relies on the parametric identifiability system property introduced. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied in a nonlinear adaptive speed tracking vector control of a three-phase induction motor.
Determination of Shift/Bias in Digital Aerial Triangulation of UAV Imagery Sequences
NASA Astrophysics Data System (ADS)
Wierzbicki, Damian
2017-12-01
Currently, UAV photogrammetry is characterized by largely automated and efficient data processing. Low-altitude imaging is increasingly important in applications such as city mapping, corridor mapping, road and pipeline inspection, and the mapping of large areas, e.g., forests. In addition, high-resolution video (HD and larger) is used more and more often for low-altitude imaging: on the one hand it delivers many details and characteristics of ground surface features, and on the other hand it presents new challenges in data processing. The determination of the elements of exterior orientation therefore plays a substantial role in the detail of Digital Terrain Models and in artifact-free orthophoto generation. In parallel, research is being conducted on the quality of images acquired from UAVs and on the quality of products such as orthophotos. Despite the rapid development of UAV photogrammetry, it is still necessary to perform Automatic Aerial Triangulation (AAT) on the basis of GPS/INS observations and ground control points. During a low-altitude photogrammetric flight, the approximate elements of exterior orientation registered by the UAV are burdened with shift/bias errors. In this article, methods for determining the shift/bias error are presented. In the digital aerial triangulation, two solutions are applied. In the first method, the shift/bias error is determined together with the drift/bias error, the elements of exterior orientation, and the coordinates of the ground control points. In the second method, the shift/bias error is determined together with the elements of exterior orientation and the coordinates of the ground control points, with the drift/bias error set equal to 0. Comparing the two methods, the difference in the shift/bias error is more than ±0.01 m for all terrain coordinates XYZ.
A practical method of estimating standard error of age in the fission track dating method
Johnson, N.M.; McGee, V.E.; Naeser, C.W.
1979-01-01
A first-order approximation formula for the propagation of error in the fission track age equation is given by $P_A = C\,[P_s^2 + P_i^2 + P_\Phi^2 - 2 r P_s P_i]^{1/2}$, where $P_A$, $P_s$, $P_i$, and $P_\Phi$ are the percentage errors of age, of spontaneous track density, of induced track density, and of neutron dose, respectively, and $C$ is a constant. The correlation, $r$, between spontaneous and induced track densities is a crucial element in the error analysis, acting generally to improve the standard error of age. In addition, the correlation parameter $r$ is instrumental in specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method. © 1979.
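The formula is straightforward to evaluate; the sketch below computes $P_A$ for a few correlation values (the input percentage errors are invented, and $C$ is taken as 1), showing how positive correlation between the two track densities reduces the age error.

```python
import numpy as np

def age_percent_error(Ps, Pi, Pphi, r, C=1.0):
    """First-order percentage error of a fission-track age.

    Ps, Pi, Pphi : percentage errors of spontaneous track density,
                   induced track density, and neutron dose
    r            : correlation between spontaneous and induced densities
    C            : constant from the age equation (taken as 1 here)
    """
    return C * np.sqrt(Ps**2 + Pi**2 + Pphi**2 - 2.0 * r * Ps * Pi)

for r in (0.0, 0.5, 0.9):
    print(f"r = {r}: P_A = {age_percent_error(5.0, 5.0, 2.0, r):.2f}%")
```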
Multi-muscle FES force control of the human arm for arbitrary goals.
Schearer, Eric M; Liao, Yu-Wei; Perreault, Eric J; Tresch, Matthew C; Memberg, William D; Kirsch, Robert F; Lynch, Kevin M
2014-05-01
We present a method for controlling a neuroprosthesis for a paralyzed human arm using functional electrical stimulation (FES) and characterize the errors of the controller. The subject has surgically implanted electrodes for stimulating muscles in her shoulder and arm. Using input/output data, a model mapping muscle stimulations to isometric endpoint forces measured at the subject's hand was identified. We inverted the model of this redundant and coupled multiple-input multiple-output system by minimizing muscle activations and used this inverse for feedforward control. The magnitude of the total root mean square error over a grid in the volume of achievable isometric endpoint force targets was 11% of the total range of achievable forces. Major sources of error were random error due to trial-to-trial variability and model bias due to nonstationary system properties. Because the muscles working collectively are the actuators of the skeletal system, the quantification of errors in force control guides designs of motion controllers for multi-joint, multi-muscle FES systems that can achieve arbitrary goals.
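A minimal sketch of inverting a linear muscle-to-force map for feedforward control is shown below: bounded least squares with a small ridge term, so that among activations reproducing the target force, low-effort solutions are preferred. The map, target, and ridge weight are hypothetical, and this is only one way to resolve the redundancy; it is not the subject-specific model of the study.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(4)
M = rng.uniform(-5.0, 5.0, size=(3, 6))   # hypothetical 3 forces x 6 muscles map
F_target = np.array([3.0, -1.0, 2.0])     # desired endpoint force (N)

# Stack a ridge block: minimize ||M a - F||^2 + lam ||a||^2 with 0 <= a <= 1.
lam = 0.1
A = np.vstack([M, np.sqrt(lam) * np.eye(6)])
b = np.concatenate([F_target, np.zeros(6)])
res = lsq_linear(A, b, bounds=(0.0, 1.0))

print("activations   :", res.x)
print("achieved force:", M @ res.x)
```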
Many tests of significance: new methods for controlling type I errors.
Keselman, H J; Miller, Charles W; Holland, Burt
2011-12-01
There have been many discussions of how Type I errors should be controlled when many hypotheses are tested (e.g., all possible comparisons of means, correlations, proportions, the coefficients in hierarchical models, etc.). By and large, researchers have adopted familywise (FWER) control, though this practice certainly is not universal. Familywise control is intended to deal with the multiplicity issue of computing many tests of significance, yet such control is conservative--that is, less powerful--compared to per test/hypothesis control. The purpose of our article is to introduce the readership, particularly those readers familiar with issues related to controlling Type I errors when many tests of significance are computed, to newer methods that provide protection from the effects of multiple testing, yet are more powerful than familywise controlling methods. Specifically, we introduce a number of procedures that control the k-FWER. These methods--say, 2-FWER instead of 1-FWER (i.e., FWER)--are equivalent to specifying that the probability of 2 or more false rejections is controlled at .05, whereas FWER controls the probability of any (i.e., 1 or more) false rejections at .05. 2-FWER implicitly tolerates 1 false rejection and makes no explicit attempt to control the probability of its occurrence, unlike FWER, which tolerates no false rejections at all. More generally, k-FWER tolerates k - 1 false rejections, but controls the probability of k or more false rejections at α =.05. We demonstrate with two published data sets how more hypotheses can be rejected with k-FWER methods compared to FWER control.
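The single-step version of such a procedure is a one-line generalization of Bonferroni, in the style of Lehmann and Romano: reject when p_i <= k*alpha/m, which controls the probability of k or more false rejections at alpha. The sketch below compares k = 1 and k = 2 on invented p-values; stepwise variants are more powerful but longer.

```python
import numpy as np

def k_fwer_bonferroni(pvals, k=2, alpha=0.05):
    """Generalized Bonferroni for k-FWER control (single-step version).

    Rejects H_i when p_i <= k*alpha/m, controlling the probability of
    k or more false rejections at level alpha.
    """
    p = np.asarray(pvals)
    return p <= k * alpha / p.size

pvals = [0.001, 0.004, 0.012, 0.020, 0.150, 0.700]
print("1-FWER (Bonferroni):", k_fwer_bonferroni(pvals, k=1))
print("2-FWER             :", k_fwer_bonferroni(pvals, k=2))
```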
A Comparison of Fuzzy Models in Similarity Assessment of Misregistered Area Class Maps
NASA Astrophysics Data System (ADS)
Brown, Scott
Spatial uncertainty refers to unknown error and vagueness in geographic data. It is relevant to land change and urban growth modelers, soil and biome scientists, geological surveyors, and others who must assess thematic maps for similarity, or categorical agreement. In this paper I build upon prior map comparison research, testing the effectiveness of similarity measures on misregistered data. Though several methods compare uncertain thematic maps, few have been tested on misregistration. My objective is to test five map comparison methods for sensitivity to misregistration, including sub-pixel errors in both position and rotation. The methods included four fuzzy categorical models: the fuzzy kappa model, fuzzy inference, cell aggregation, and the epsilon band. The fifth method used conventional crisp classification. I applied these methods to a case study map and simulated data in two sets: a test set with misregistration error and a control set with equivalent uniform random error. For all five methods, I used raw accuracy or the kappa statistic to measure similarity. Rough-set epsilon bands report the greatest similarity increase in test maps relative to control data. Conversely, the fuzzy inference model reports a decrease in test map similarity.
Binocular optical axis parallelism detection precision analysis based on Monte Carlo method
NASA Astrophysics Data System (ADS)
Ying, Jiaju; Liu, Bingqi
2018-02-01
According to the working principle of the binocular photoelectric instrument optical axis parallelism digital calibration instrument, and considering all components of the instrument, the various factors affecting system precision are analyzed, and a precision analysis model is established. Based on the error distributions, the Monte Carlo method is used to analyze the relationship between the comprehensive error and the variation of the center coordinate of the circular target image. The method can further guide the error distribution, optimize and control the factors that have the greatest influence on the comprehensive error, and improve the measurement accuracy of the optical axis parallelism digital calibration instrument.
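Monte Carlo propagation of component errors to a comprehensive error can be sketched in a few lines; the component list, distributions, and additive error model below are generic assumptions for illustration, not the instrument's actual error budget.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 100_000                              # Monte Carlo trials

# Hypothetical component error sources (units: arcsec)
tilt      = rng.normal(0.0, 2.0, N)      # sensor mounting tilt
centroid  = rng.normal(0.0, 1.5, N)      # target-centroiding error
alignment = rng.uniform(-1.0, 1.0, N)    # mechanical alignment

total = tilt + centroid + alignment      # simple additive error model
print("mean comprehensive error:", total.mean())
print("standard deviation      :", total.std())
print("95% error bound         :", np.percentile(np.abs(total), 95))
```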
Bacon, Dave; Flammia, Steven T
2009-09-18
The difficulty in producing precisely timed and controlled quantum gates is a significant source of error in many physical implementations of quantum computers. Here we introduce a simple universal primitive, adiabatic gate teleportation, which is robust to timing errors and many control errors and maintains a constant energy gap throughout the computation above a degenerate ground state space. This construction allows for geometric robustness based upon the control of two independent qubit interactions. Further, our piecewise adiabatic evolution easily relates to the quantum circuit model, enabling the use of standard methods from fault-tolerance theory for establishing thresholds.
Temporal Decompostion of a Distribution System Quasi-Static Time-Series Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry A; Hunsberger, Randolph J
This paper documents the first phase of an investigation into reducing runtimes of complex OpenDSS models through parallelization. As the method seems promising, future work will quantify, and further mitigate, errors arising from this process. In this initial report, we demonstrate how, through the use of temporal decomposition, the run time of a complex distribution-system-level quasi-static time series simulation can be reduced roughly in proportion to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control and voltage errors at the time-slice boundaries. All evaluations were performed using a real distribution circuit model with the addition of 50 PV systems, representing a mock complex PV impact study. We are able to reduce induced transition errors through the addition of controls initialization, though small errors persist. The time savings with parallelization are so significant that we feel additional investigation to reduce control errors is warranted.
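The decomposition itself amounts to farming disjoint time windows out to independent workers. The skeleton below uses Python's standard library; `run_chunk` is a hypothetical stand-in for a per-window OpenDSS run (in practice each worker would drive OpenDSS, e.g. via Python bindings, and initialize controls from a snapshot near the window start to limit boundary errors), and the chunking numbers are invented.

```python
from concurrent.futures import ProcessPoolExecutor

def run_chunk(bounds):
    """Stand-in for one quasi-static time-series run over [start, stop)."""
    start, stop = bounds
    return (start, stop, sum(range(start, stop)))   # placeholder computation

if __name__ == "__main__":
    n_steps = 35040                   # e.g., one year at 15-minute resolution
    n_workers = 8
    step = n_steps // n_workers
    chunks = [(i * step, (i + 1) * step) for i in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(run_chunk, chunks))  # windows run in parallel
    print(results[:2])
```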
Additive Runge-Kutta Schemes for Convection-Diffusion-Reaction Equations
NASA Technical Reports Server (NTRS)
Kennedy, Christopher A.; Carpenter, Mark H.
2001-01-01
Additive Runge-Kutta (ARK) methods are investigated for application to the spatially discretized one-dimensional convection-diffusion-reaction (CDR) equations. First, accuracy, stability, conservation, and dense output are considered for the general case when N different Runge-Kutta methods are grouped into a single composite method. Then, implicit-explicit, N = 2, additive Runge-Kutta ARK2 methods from third- to fifth-order are presented that allow for integration of stiff terms by an L-stable, stiffly-accurate explicit, singly diagonally implicit Runge-Kutta (ESDIRK) method while the nonstiff terms are integrated with a traditional explicit Runge-Kutta method (ERK). Coupling error terms are of equal order to those of the elemental methods. Derived ARK2 methods have vanishing stability functions for very large values of the stiff scaled eigenvalue, $z^{[I]} \to -\infty$, and retain high stability efficiency in the absence of stiffness, $z^{[I]} \to 0$. Extrapolation-type stage-value predictors are provided based on dense-output formulae. Optimized methods minimize both leading order ARK2 error terms and Butcher coefficient magnitudes as well as maximize conservation properties. Numerical tests of the new schemes on a CDR problem show negligible stiffness leakage and near classical order convergence rates. However, tests on three simple singular-perturbation problems reveal generally predictable order reduction. Error control is best managed with a PID-controller. While results for the fifth-order method are disappointing, both the new third- and fourth-order methods are at least as efficient as existing ARK2 methods while offering error control and stage-value predictors.
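The additive implicit-explicit splitting at the heart of such schemes can be illustrated with its first-order ancestor, an IMEX Euler step for a convection-diffusion problem: diffusion is integrated implicitly, convection explicitly. This is only a miniature under assumed parameters (periodic 1-D domain), not the third- to fifth-order ARK2 methods of the paper.

```python
import numpy as np

# IMEX Euler for u_t = -c u_x + d u_xx on a periodic 1-D domain:
# (I - dt*d*Dxx) u^{n+1} = u^n + dt * conv(u^n)
n, L = 200, 1.0
dx = L / n
c, d, dt = 1.0, 1e-3, 2e-3                       # assumed parameters
x = np.linspace(0.0, L, n, endpoint=False)
u = np.exp(-200.0 * (x - 0.3)**2)                # initial pulse
mass0 = u.sum() * dx

I = np.eye(n)
Dxx = (-2.0 * I + np.roll(I, 1, 0) + np.roll(I, -1, 0)) / dx**2  # periodic
A = I - dt * d * Dxx                             # implicit (stiff) operator

for _ in range(100):
    conv = -c * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)    # explicit term
    u = np.linalg.solve(A, u + dt * conv)                        # implicit solve

print("mass conserved:", np.isclose(u.sum() * dx, mass0))
```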
A novel method for routine quality assurance of volumetric-modulated arc therapy.
Wang, Qingxin; Dai, Jianrong; Zhang, Ke
2013-10-01
Volumetric-modulated arc therapy (VMAT) is delivered through synchronized variation of gantry angle, dose rate, and multileaf collimator (MLC) leaf positions. The dynamic nature of the delivery challenges the parameter-setting accuracy of the linac control system. The purpose of this study was to develop a novel method for routine quality assurance (QA) of VMAT linacs. ArcCheck is a detector array with diodes distributed in a spiral pattern on a cylindrical surface. Utilizing its features, a QA plan was designed to strictly test all varying parameters during VMAT delivery on an Elekta Synergy linac. In this plan, there are 24 control points. The gantry rotates clockwise from 181° to 179°. The dose rate, gantry speed, and MLC positions cover the ranges commonly used in clinic. The two borders of the MLC-shaped field sit over two columns of diodes of the ArcCheck when the gantry rotates to the angle specified by each control point. The ratio of dose rate between each of these diodes and the diode closest to the field center has a definite value and is sensitive to the MLC positioning error of the leaf crossing the diode. Consequently, the positioning error can be determined from the ratio with the help of a relationship curve. The time when the gantry reaches the angle specified by each control point can be acquired from the virtual inclinometer that is a feature of ArcCheck. The gantry speed between two consecutive control points is then calculated. The aforementioned dose rate is calculated from an acm file that is generated during ArcCheck measurements. This file stores the data measured by each detector in 50 ms updates, with each update in a separate row. A computer program was written in MATLAB to process the data. The program output includes MLC positioning errors and the dose rate at each control point, as well as the gantry speed between control points. To evaluate this method, the plan was delivered for four consecutive weeks. The actual dose rate and gantry speed were compared with those specified in the QA plan. Additionally, leaf positioning errors were intentionally introduced to investigate the sensitivity of the method. The relationship curves were established for detecting MLC positioning errors during VMAT delivery. Over the four consecutive weeks measured, 98.4%, 94.9%, 89.2%, and 91.0% of the leaf positioning errors were within ±0.5 mm, respectively. For the intentionally introduced leaf positioning systematic errors of -0.5 and +1 mm, the detected positioning errors of leaf 20 of bank Y1 were -0.48 ± 0.14 and 1.02 ± 0.26 mm, respectively. The actual gantry speed and dose rate closely followed the values specified in the VMAT QA plan. This method can simultaneously assess the accuracy of MLC positions and the dose rate at each control point, as well as the gantry speed between control points. It is efficient and suitable for routine quality assurance of VMAT.
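The ratio-to-error lookup can be pictured as a one-dimensional interpolation against a pre-measured relationship curve; the calibration values in this sketch are invented for illustration, not ArcCheck data.

```python
import numpy as np

# Sketch of mapping a measured diode dose-rate ratio to an MLC leaf
# positioning error via a pre-established relationship curve.
cal_error_mm = np.array([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0])     # known offsets
cal_ratio    = np.array([0.62, 0.74, 0.81, 0.88, 0.95, 1.01, 1.13]) # measured ratios

def leaf_error(measured_ratio):
    # Interpolate the monotonic ratio-vs-error curve to recover the error.
    return np.interp(measured_ratio, cal_ratio, cal_error_mm)

print(leaf_error(0.90))  # ~ +0.14 mm for this illustrative curve
```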
Bier, Nathalie; Van Der Linden, Martial; Gagnon, Lise; Desrosiers, Johanne; Adam, Stephane; Louveaux, Stephanie; Saint-Mleux, Julie
2008-06-01
This study compared the efficacy of five learning methods in the acquisition of face-name associations in early dementia of the Alzheimer type (AD). The contribution of error production and implicit memory to the efficacy of each method was also examined. Fifteen participants with early AD and 15 matched controls were exposed to five learning methods: spaced retrieval, vanishing cues, errorless learning, and two trial-and-error methods, one with explicit and one with implicit memory task instructions. Under each method, participants had to learn a list of five face-name associations, followed by free recall, cued recall, and recognition. Delayed recall was also assessed. For AD participants, results showed that all methods were effective, with no significant differences between them. The number of errors produced during the learning phases varied between the five methods but did not influence learning. There were no significant differences between implicit and explicit memory task instructions on test performances. For the control group, there were no differences between the five methods. Finally, no significant correlations were found between the performance of the AD participants in free recall and their cognitive profile, but generally, the best performers had better remaining episodic memory. Also, case study analyses showed that spaced retrieval was the method for which the greatest number of participants (four) obtained results as good as the controls. This study suggests that the five methods are effective for new learning of face-name associations in AD. It appears that early AD patients can learn, even in the context of error production and explicit memory conditions.
Consensus of satellite cluster flight using an energy-matching optimal control method
NASA Astrophysics Data System (ADS)
Luo, Jianjun; Zhou, Liang; Zhang, Bo
2017-11-01
This paper presents an optimal control method for consensus of satellite cluster flight under an energy-matching condition. First, the relation between energy matching and periodically bounded satellite relative motion is analyzed, and the satellite energy-matching principle is applied to configure the initial conditions. Then, period-delayed errors are adopted as state variables to establish period-delayed error dynamics models of a single satellite and of the cluster. Next, a novel satellite cluster feedback control protocol with coupling gain is designed, so that the periodically bounded relative motion consensus problem (the period-delayed error state consensus problem) is transformed into the stability problem of a set of matrices of the same low dimension. Based on consensus region theory from research on multi-agent system consensus, the coupling gain can be chosen to satisfy the consensus region requirement and to decouple the cluster information topology from the feedback control gain matrix, which can then be determined by the linear quadratic regulator (LQR) optimal method. This method achieves consensus of the cluster's period-delayed errors, leading to consistency of the semi-major axes (SMA) and energy matching across the cluster, so that the satellites exhibit globally coordinated cluster behavior. Finally, the feasibility and effectiveness of the proposed energy-matching optimal consensus for satellite cluster flight are verified through numerical simulations.
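The LQR step mentioned above reduces, in the simplest case, to solving an algebraic Riccati equation for the feedback gain. The sketch below uses SciPy with a double-integrator stand-in for the period-delayed error dynamics, which is an assumption for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Sketch of the LQR design step: feedback gain for an error-state model.
# A double integrator stands in for the satellite relative-motion dynamics.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])   # penalize error state
R = np.array([[1.0]])      # penalize control effort

P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal gain, u = -K x
print(K)
```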
NASA Astrophysics Data System (ADS)
Prasitmeeboon, Pitcha
Repetitive control (RC) is a control method that specifically aims to converge to zero tracking error for control systems that execute a periodic command or are subject to periodic disturbances of known period. It uses the error from one period back to adjust the command in the present period. In theory, RC can completely eliminate periodic disturbance effects. RC has applications in many fields such as high-precision manufacturing in robotics, computer disk drives, and active vibration isolation in spacecraft. The first topic treated in this dissertation develops several simple RC design methods that are somewhat analogous to PID controller design in classical control. From the early days of digital control, emulation methods were developed based on a Forward Rule, a Backward Rule, Tustin's Formula, a modification using prewarping, and a pole-zero mapping method. These allowed one to convert a candidate controller design to discrete time in a simple way. We investigate to what extent they can be used to simplify RC design. A particular design is developed from modification of the pole-zero mapping rules, which is simple and sheds light on the robustness of repetitive control designs. RC convergence requires less than 90 degrees of model phase error at all frequencies up to Nyquist. A zero-phase cutoff filter is normally used to robustify to high-frequency model error when this limit is exceeded. The result is stabilization at the expense of failure to cancel errors above the cutoff. The second topic investigates a series of methods that use data to make real-time updates of the frequency response model, allowing one to increase or eliminate the frequency cutoff. These include the use of a moving window employing a recursive discrete Fourier transform (DFT), and the use of a real-time projection algorithm from adaptive control for each frequency. The results can be used directly to make repetitive control corrections that cancel each error frequency, or they can be used to update a repetitive control FIR compensator. The aim is to reduce the final error level by using real-time frequency response model updates to successively increase the cutoff frequency, each time creating the improved model needed to produce convergence to zero error up to the higher cutoff. Non-minimum phase systems present a difficult design challenge to the sister field of iterative learning control. The third topic investigates to what extent the same challenges appear in RC. One challenge is that the intrinsic non-minimum phase zero mapped from continuous time lies close to the repetitive controller pole at +1, creating behavior similar to pole-zero cancellation. The near pole-zero cancellation causes slow learning at DC and low frequencies. The Min-Max cost function over the learning rate is presented. The Min-Max problem can be reformulated as a Quadratically Constrained Linear Programming problem. This approach is shown to be an RC design approach that addresses the main challenge of non-minimum phase systems: obtaining a reasonable learning rate at DC. Although using the Min-Max objective improves learning at DC and low frequencies compared to other designs, the method requires model accuracy at high frequencies. In the real world, models usually have error at high frequencies. The fourth topic addresses how one can merge a quadratic penalty into the Min-Max cost function to increase robustness at high frequencies. The topic also considers limiting the Min-Max optimization to a frequency interval and applying an FIR zero-phase low-pass filter to cut off learning at frequencies above that interval.
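The moving-window recursive DFT mentioned above can be sketched in a few lines: each incoming sample updates a frequency bin in O(1) rather than recomputing the full transform. The window length and bin choice here are arbitrary.

```python
import numpy as np

# Sliding (recursive) DFT for one frequency bin over a moving window of N
# samples: X_k <- (X_k - x_oldest + x_newest) * exp(j*2*pi*k/N).
N, k = 64, 5                      # window length and bin of interest
twiddle = np.exp(2j * np.pi * k / N)
buf = np.zeros(N)                 # circular buffer of the last N samples
Xk, head = 0.0 + 0.0j, 0

def sdft_update(x_new):
    global Xk, head
    Xk = (Xk - buf[head] + x_new) * twiddle
    buf[head] = x_new
    head = (head + 1) % N
    return Xk

t = np.arange(4 * N)
for x in np.sin(2 * np.pi * k * t / N):  # sinusoid exactly on bin k
    out = sdft_update(x)
print(abs(out))   # ~ N/2 once the window is full
```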
Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C
2010-01-01
Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, the root cause was identified and eliminated to prevent set-up errors. Set-up errors were measured for the medial-lateral (ml), cranial-caudal (cc), and anterior-posterior (ap) dimensions, and the upper control limits were then calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using subgroups of 3 patients, three times each shift. These values were plotted on a control chart in real time. Control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on an X chart, helped monitor the set-up process stability; if and when stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified, and corrective action was taken before proceeding with the treatment. The SPC protocol focuses on controlling the variability due to assignable causes instead of focusing on patient-to-patient variability, which normally does not exist. Compared to weekly sampling of set-up error in each and every patient, which may only ensure that just those sampled sessions were set up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces control costs. Copyright © 2009 SECA. Published by Elsevier Espana. All rights reserved.
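A minimal sketch of the X-bar chart computation for subgroups of 3, using the standard A2 chart factor (A2 = 1.023 for subgroup size 3); the simulated set-up errors are illustrative, not the paper's data.

```python
import numpy as np

# X-bar chart limits for set-up error monitoring with subgroups of 3:
# UCL/LCL = grand mean +/- A2 * mean range.
rng = np.random.default_rng(1)
subgroups = rng.normal(0.0, 1.5, size=(30, 3))   # 30 subgroups of 3 set-ups (mm)

xbar = subgroups.mean(axis=1)
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)
center, rbar, A2 = xbar.mean(), ranges.mean(), 1.023

ucl, lcl = center + A2 * rbar, center - A2 * rbar
print(f"CL={center:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")
out_of_control = np.flatnonzero((xbar > ucl) | (xbar < lcl))
print(out_of_control)   # subgroups that would trigger a root-cause search
```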
NASA Astrophysics Data System (ADS)
Sone, Akihito; Kato, Takeyoshi; Shimakage, Toyonari; Suzuoki, Yasuo
A microgrid (MG) is one measure for enabling high penetration of renewable energy (RE)-based distributed generators (DGs). If a number of MGs are controlled to maintain a predetermined electricity demand, including RE-based DGs as negative demand, they would contribute to supply-demand balancing of the whole electric power system. For constructing an MG economically, optimizing the capacity of controllable DGs against RE-based DGs is essential. Using a numerical simulation model developed from a demonstrative study of an MG with a PAFC and a NaS battery as controllable DGs and a photovoltaic power generation system (PVS) as the RE-based DG, this study discusses the influence of PVS output forecast accuracy on the capacity optimization. Three forecast cases with different accuracy are compared. The main results are as follows. Even with the ideal forecast method, which has no forecast error over each 30 min interval, the required capacity of the NaS battery reaches about 40% of the PVS capacity in order to mitigate the instantaneous forecast error within 30 min. The required capacity to compensate for the forecast error is doubled with the actual forecast method. The influence of forecast error can be reduced by adjusting the scheduled power output of controllable DGs according to the weather forecast. Moreover, the required capacity can be reduced significantly if the balancing-control error in an MG is acceptable for a few percent of the time, because periods of large forecast error are infrequent.
Quality Control of an OSCE Using Generalizability Theory and Many-Faceted Rasch Measurement
ERIC Educational Resources Information Center
Iramaneerat, Cherdsak; Yudkowsky, Rachel; Myford, Carol M.; Downing, Steven M.
2008-01-01
An Objective Structured Clinical Examination (OSCE) is an effective method for evaluating competencies. However, scores obtained from an OSCE are vulnerable to many potential measurement errors that cases, items, or standardized patients (SPs) can introduce. Monitoring these sources of errors is an important quality control mechanism to ensure…
Verifiable Adaptive Control with Analytical Stability Margins by Optimal Control Modification
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2010-01-01
This paper presents a verifiable model-reference adaptive control method based on an optimal control formulation for linear uncertain systems. A predictor model is formulated to enable a parameter estimation of the system parametric uncertainty. The adaptation is based on both the tracking error and predictor error. Using a singular perturbation argument, it can be shown that the closed-loop system tends to a linear time invariant model asymptotically under an assumption of fast adaptation. A stability margin analysis is given to estimate a lower bound of the time delay margin using a matrix measure method. Using this analytical method, the free design parameter n of the optimal control modification adaptive law can be determined to meet a specification of stability margin for verification purposes.
An Autonomous Satellite Time Synchronization System Using Remotely Disciplined VC-OCXOs.
Gu, Xiaobo; Chang, Qing; Glennon, Eamonn P; Xu, Baoda; Dempster, Andrew G; Wang, Dun; Wu, Jiapeng
2015-07-23
An autonomous remote clock control system is proposed to provide time synchronization and frequency syntonization for satellite-to-satellite or ground-to-satellite time transfer, with the system comprising on-board voltage-controlled oven-controlled crystal oscillators (VC-OCXOs) that are disciplined to a remote master atomic clock or oscillator. The synchronization loop aims to provide autonomous operation over extended periods, to be widely applicable to a variety of scenarios, and to be robust. A new architecture comprising frequency division duplex (FDD), synchronous time division duplex (STDD), and code division multiple access (CDMA) with a centralized topology is employed. The design utilizes dual one-way ranging methods to precisely measure the clock error, adopts least squares (LS) methods to predict the clock error, and employs a third-order phase lock loop (PLL) to generate the voltage control signal. A general functional model for this system is proposed, and the error sources and delays that affect the time synchronization are discussed. Related algorithms for estimating and correcting these errors are also proposed. The performance of the proposed system is simulated and guidance for selecting the clock is provided.
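Dual one-way ranging can be sketched with the standard two-way time-transfer equations; the timestamps below are invented, and symmetric propagation paths are assumed.

```python
# Sketch of dual one-way ranging: timestamps of a two-way exchange yield both
# the clock offset and the path delay, assuming symmetric paths.
# T1: master send (master clock)   T2: remote receive (remote clock)
# T3: remote send (remote clock)   T4: master receive (master clock)
T1, T2 = 0.0000000, 0.0000165
T3, T4 = 0.1000000, 0.1000105

offset = ((T2 - T1) - (T4 - T3)) / 2.0   # remote clock minus master clock
delay  = ((T2 - T1) + (T4 - T3)) / 2.0   # one-way propagation delay
print(f"offset = {offset*1e6:.1f} us, delay = {delay*1e6:.1f} us")
# offset = 3.0 us, delay = 13.5 us for these illustrative timestamps
```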
Synchronous Control Method and Realization of Automated Pharmacy Elevator
NASA Astrophysics Data System (ADS)
Liu, Xiang-Quan
First, the control method for the elevator's synchronous motion is presented, and a synchronous control structure for dual servo motors based on PMAC is developed. Second, the synchronous control program of the elevator is implemented using the PMAC linear interpolation motion model and a position error compensation method. Finally, the PID parameters of the servo motors are tuned. Experiments prove that the control method has high stability and reliability.
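A generic cross-coupled synchronization sketch in the spirit of the dual-motor scheme above; the kinematic axis model and gains are illustrative, and this is not the PMAC implementation described in the abstract.

```python
# Minimal cross-coupled synchronization sketch for two elevator servo axes:
# each axis tracks the common profile, and a coupling term penalizes the
# position difference between the axes.
kp, kc, dt = 8.0, 4.0, 0.001
p1 = p2 = 0.0

for step in range(2000):
    target = 0.1 * step * dt            # shared linear motion profile
    sync_err = p1 - p2                  # inter-axis synchronization error
    v1 = kp * (target - p1) - kc * sync_err   # axis 1 velocity command
    v2 = kp * (target - p2) + kc * sync_err   # axis 2 velocity command
    p1 += v1 * dt
    p2 += v2 * dt

print(abs(p1 - p2))   # residual sync error stays small with coupling on
```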
NASA Technical Reports Server (NTRS)
Green, Del L.; Walker, Eric L.; Everhart, Joel L.
2006-01-01
Minimization of uncertainty is essential to extend the usable range of the 15-psid Electronically Scanned Pressure (ESP) transducer measurements to the low free-stream static pressures found in hypersonic wind tunnels. Statistical characterization of environmental error sources inducing much of this uncertainty requires a well defined and controlled calibration method. Employing such a controlled calibration system, several studies were conducted that provide quantitative information detailing the required controls needed to minimize environmental and human induced error sources. Results of temperature, environmental pressure, over-pressurization, and set point randomization studies for the 15-psid transducers are presented along with a comparison of two regression methods using data acquired with both 0.36-psid and 15-psid transducers. Together these results provide insight into procedural and environmental controls required for long term high-accuracy pressure measurements near 0.01 psia in the hypersonic testing environment using 15-psid ESP transducers.
Design of an optimal preview controller for linear discrete-time descriptor systems with state delay
NASA Astrophysics Data System (ADS)
Cao, Mengjuan; Liao, Fucheng
2015-04-01
In this paper, the linear discrete-time descriptor system with state delay is studied, and a design method for an optimal preview controller is proposed. First, by using the discrete lifting technique, the original system is transformed into a general descriptor system that is formally free of state delay. Then, taking advantage of the first-order forward difference operator, we construct a descriptor augmented error system, including the state vectors of the lifted system, error vectors, and desired target signals. Rigorous mathematical proofs are given for the regularity, stabilisability, causal controllability, and causal observability of the descriptor augmented error system. Based on these, the optimal preview controller with preview feedforward compensation for the original system is obtained by using the standard optimal regulator theory of the descriptor system. The effectiveness of the proposed method is shown by numerical simulation.
The importance of robust error control in data compression applications
NASA Technical Reports Server (NTRS)
Woolley, S. I.
1993-01-01
Data compression has become an increasingly popular option as advances in information technology have placed further demands on data storage capabilities. With compression ratios as high as 100:1 the benefits are clear; however, the inherent intolerance of many compression formats to error events should be given careful consideration. If we consider that efficiently compressed data will ideally contain no redundancy, then the introduction of a channel error must result in a change of understanding from that of the original source. While the prefix property of codes such as Huffman enables resynchronisation, this is not sufficient to arrest propagating errors in an adaptive environment. Arithmetic, Lempel-Ziv, discrete cosine transform (DCT) and fractal methods are similarly prone to error propagating behaviors. It is, therefore, essential that compression implementations provide sufficiently robust error control in order to maintain data integrity. Ideally, this control should be derived from a full understanding of the prevailing error mechanisms and their interaction with both the system configuration and the compression schemes in use.
Ik Han, Seong; Lee, Jangmyung
2016-11-01
This paper presents finite-time sliding mode control (FSMC) with predefined constraints for the tracking error and sliding surface in order to obtain robust positioning of a robot manipulator with input nonlinearity due to an unknown deadzone and external disturbance. An assumed model feedforward FSMC was designed to avoid tedious identification procedures for the manipulator parameters and to obtain a fast response time. Two constraint switching control functions based on the tracking error and finite-time sliding surface were added to the FSMC to guarantee the predefined tracking performance despite the presence of an unknown deadzone and disturbance. The tracking error due to the deadzone and disturbance can be suppressed within the predefined error boundary simply by tuning the gain value of the constraint switching function and without the addition of an extra compensator. Therefore, the designed constraint controller has a simpler structure than conventional transformed error constraint methods and the sliding surface constraint scheme can also indirectly guarantee the tracking error constraint while being more stable than the tracking error constraint control. A simulation and experiment were performed on an articulated robot manipulator to validate the proposed control schemes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
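A minimal classical sliding mode sketch, without the paper's finite-time and constraint-switching terms, showing how the switching gain dominates a bounded disturbance; the double-integrator plant, gains, and disturbance are assumptions.

```python
import numpy as np

# Classical sliding mode control for a 1-DOF arm with a bounded unknown
# disturbance: s = de + lam*e, u = equivalent control + k*sign(s), k > |d|.
lam, k, dt = 5.0, 3.0, 1e-3
q = dq = 0.0

for i in range(5000):
    t = i * dt
    qd, dqd, ddqd = np.sin(t), np.cos(t), -np.sin(t)   # desired trajectory
    e, de = qd - q, dqd - dq
    s = de + lam * e                                   # sliding surface
    u = ddqd + lam * de + k * np.sign(s)               # control law
    ddq = u + 0.8 * np.sin(3 * t)                      # plant + disturbance (|d|<k)
    dq += ddq * dt
    q += dq * dt

print(abs(np.sin(5.0) - q))   # tracking error after 5 s is small
```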
Hoyo, Javier Del; Choi, Heejoo; Burge, James H; Kim, Geon-Hee; Kim, Dae Wook
2017-06-20
The control of surface errors as a function of spatial frequency is critical during the fabrication of modern optical systems. Large-scale surface figure error is controlled by a guided removal process, such as computer-controlled optical surfacing. Smaller-scale surface errors are controlled by polishing process parameters. Surface errors with periods of only a few millimeters may degrade the performance of an optical system, causing background noise from scattered light and reducing imaging contrast for large optical systems. Conventionally, microsurface roughness is given as the root mean square over a high spatial frequency range, with errors evaluated within a 0.5×0.5 mm local surface map of 500×500 pixels. This surface specification is not adequate to fully describe the characteristics required for advanced optical systems. The process for controlling and minimizing mid- to high-spatial-frequency surface errors with periods of up to ∼2-3 mm was investigated for many optical fabrication conditions using the measured surface power spectral density (PSD) of a finished Zerodur optical surface. The surface PSD was then systematically related to various fabrication process parameters, such as the grinding methods, polishing interface materials, and polishing compounds. The retraceable experimental polishing conditions and processes used to produce an optimal optical surface PSD are presented.
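Computing a surface PSD from a measured height profile is a short FFT exercise; the synthetic profile below, a 2 mm period ripple over a roughness floor, stands in for a real profilometer trace, and the normalization shown is one common convention.

```python
import numpy as np

# 1D surface power spectral density from a height profile:
# PSD(f) ~ |FFT(h)|^2 * dx / N (one-sided, windowed).
N, dx = 4096, 2e-3               # samples and spacing (mm)
x = np.arange(N) * dx
h = (5e-3 * np.sin(2 * np.pi * x / 2.0)                 # 2 mm period ripple
     + 1e-3 * np.random.default_rng(0).normal(size=N))  # roughness floor

H = np.fft.rfft(h * np.hanning(N))
freq = np.fft.rfftfreq(N, dx)                # cycles per mm
psd = (np.abs(H) ** 2) * dx / N

peak = freq[np.argmax(psd[1:]) + 1]          # skip the DC bin
print(f"dominant spatial frequency ~ {peak:.2f} cycles/mm (~0.5 expected)")
```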
State estimation for autopilot control of small unmanned aerial vehicles in windy conditions
NASA Astrophysics Data System (ADS)
Poorman, David Paul
The use of small unmanned aerial vehicles (UAVs) both in the military and civil realms is growing. This is largely due to the proliferation of inexpensive sensors and the increase in capability of small computers that has stemmed from the personal electronic device market. Methods for performing accurate state estimation for large scale aircraft have been well known and understood for decades, which usually involve a complex array of expensive high accuracy sensors. Performing accurate state estimation for small unmanned aircraft is a newer area of study and often involves adapting known state estimation methods to small UAVs. State estimation for small UAVs can be more difficult than state estimation for larger UAVs due to small UAVs employing limited sensor suites due to cost, and the fact that small UAVs are more susceptible to wind than large aircraft. The purpose of this research is to evaluate the ability of existing methods of state estimation for small UAVs to accurately capture the states of the aircraft that are necessary for autopilot control of the aircraft in a Dryden wind field. The research begins by showing which aircraft states are necessary for autopilot control in Dryden wind. Then two state estimation methods that employ only accelerometer, gyro, and GPS measurements are introduced. The first method uses assumptions on aircraft motion to directly solve for attitude information and smooth GPS data, while the second method integrates sensor data to propagate estimates between GPS measurements and then corrects those estimates with GPS information. The performance of both methods is analyzed with and without Dryden wind, in straight and level flight, in a coordinated turn, and in a wings level ascent. It is shown that in zero wind, the first method produces significant steady state attitude errors in both a coordinated turn and in a wings level ascent. In Dryden wind, it produces large noise on the estimates for its attitude states, and has a non-zero mean error that increases when gyro bias is increased. The second method is shown to not exhibit any steady state error in the tested scenarios that is inherent to its design. The second method can correct for attitude errors that arise from both integration error and gyro bias states, but it suffers from lack of attitude error observability. The attitude errors are shown to be more observable in wind, but increased integration error in wind outweighs the increase in attitude corrections that such increased observability brings, resulting in larger attitude errors in wind. Overall, this work highlights many technical deficiencies of both of these methods of state estimation that could be improved upon in the future to enhance state estimation for small UAVs in windy conditions.
Applying integrals of motion to the numerical solution of differential equations
NASA Technical Reports Server (NTRS)
Jezewski, D. J.
1980-01-01
A method is developed for using the integrals of systems of nonlinear ordinary differential equations in a numerical integration process to control the local errors in these integrals and reduce the global errors of the solution. The method is general and can be applied to either scalar or vector integrals. A number of example problems, with accompanying numerical results, are used to verify the analysis and support the conjecture of global error reduction.
Applying integrals of motion to the numerical solution of differential equations
NASA Technical Reports Server (NTRS)
Jezewski, D. J.
1979-01-01
A method is developed for using the integrals of systems of nonlinear ordinary differential equations in a numerical integration process to control the local errors in these integrals and reduce the global errors of the solution. The method is general and can be applied to either scalar or vector integrals. A number of example problems, with accompanying numerical results, are used to verify the analysis and support the conjecture of global error reduction.
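A minimal instance of the idea in these two reports: after each integration step, project the state back onto a known integral of motion. The harmonic oscillator and explicit Euler integrator are chosen for brevity, not taken from the reports.

```python
import numpy as np

# After each explicit Euler step on a harmonic oscillator, rescale the state
# so the energy integral E = (q^2 + p^2)/2 is preserved. Projection onto the
# integral suppresses the global error growth of the raw integrator.
dt, E0 = 0.01, 0.5
q, p = 1.0, 0.0                       # E0 = (1 + 0)/2

for _ in range(10000):
    q, p = q + dt * p, p - dt * q     # explicit Euler (energy drifts upward)
    scale = np.sqrt(2 * E0 / (q * q + p * p))
    q, p = q * scale, p * scale       # project back onto the energy level set

print((q * q + p * p) / 2)            # stays at 0.5 instead of growing
```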
NASA Astrophysics Data System (ADS)
Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong
2018-01-01
In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme combining iterative learning control with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error, and extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method together with the related switching conditions to give sufficient conditions ensuring stable operation of the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving linear matrix inequalities. Meanwhile, a compound 2D controller with robust performance is obtained, which includes a robust extended feedback control that ensures the steady-state tracking error converges rapidly. Application to an injection molding process demonstrates the effectiveness and superiority of the proposed strategy.
Some Simultaneous Inference Procedures for A Priori Contrasts.
ERIC Educational Resources Information Center
Convey, John J.
The testing of a priori contrasts, post hoc contrasts, and experimental error rates are discussed. Methods for controlling the experimental error rate for a set of a priori contrasts tested simultaneously have been developed by Dunnett, Dunn, Sidak, and Krishnaiah. Each of these methods is discussed and contrasted as to applicability, power, and…
Accurate acceleration of kinetic Monte Carlo simulations through the modification of rate constants.
Chatterjee, Abhijit; Voter, Arthur F
2010-05-21
We present a novel computational algorithm called the accelerated superbasin kinetic Monte Carlo (AS-KMC) method that enables a more efficient study of rare-event dynamics than the standard KMC method while maintaining control over the error. In AS-KMC, the rate constants for processes that are observed many times are lowered during the course of a simulation. As a result, rare processes are observed more frequently than in KMC and the time progresses faster. We first derive error estimates for AS-KMC when the rate constants are modified. These error estimates are next employed to develop a procedure for lowering process rates with control over the maximum error. Finally, numerical calculations are performed to demonstrate that the AS-KMC method captures the correct dynamics, while providing significant CPU savings over KMC in most cases. We show that the AS-KMC method can be employed with any KMC model, even when no time scale separation is present (although in such cases no computational speed-up is observed), without requiring the knowledge of various time scales present in the system.
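A toy version of the rate-lowering idea: a standard KMC loop in which a process observed many times has its rate reduced so rare events fire sooner. The two-process system and the lowering rule are illustrative; the real AS-KMC method chooses the modification to keep the error bounded.

```python
import numpy as np

# Minimal kinetic Monte Carlo loop with AS-KMC-style rate lowering.
rng = np.random.default_rng(0)
rates = np.array([1e4, 1.0])          # fast (superbasin) and rare process
counts = np.zeros(2)
t = 0.0

for _ in range(20000):
    total = rates.sum()
    t += rng.exponential(1.0 / total)         # KMC time advance
    i = rng.choice(2, p=rates / total)        # pick a process by rate
    counts[i] += 1
    if counts[i] > 100:                       # observed "many times":
        rates[i] *= 0.5                       # lower its rate (introduces a
        counts[i] = 0                         # controlled error in the method)

print(rates, t)   # fast-process rate decays until the rare event competes
```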
Development and Characterization of a Low-Pressure Calibration System for Hypersonic Wind Tunnels
NASA Technical Reports Server (NTRS)
Green, Del L.; Everhart, Joel L.; Rhode, Matthew N.
2004-01-01
Minimization of uncertainty is essential for accurate ESP measurements at very low free-stream static pressures found in hypersonic wind tunnels. Statistical characterization of environmental error sources requires a well defined and controlled calibration method. A calibration system has been constructed and environmental control software developed to control experimentation to eliminate human induced error sources. The initial stability study of the calibration system shows a high degree of measurement accuracy and precision in temperature and pressure control. Control manometer drift and reference pressure instabilities induce uncertainty into the repeatability of voltage responses measured from the PSI System 8400 between calibrations. Methods of improving repeatability are possible through software programming and further experimentation.
A GPU accelerated and error-controlled solver for the unbounded Poisson equation in three dimensions
NASA Astrophysics Data System (ADS)
Exl, Lukas
2017-12-01
An efficient solver for the three dimensional free-space Poisson equation is presented. The underlying numerical method is based on finite Fourier series approximation. While the error of all involved approximations can be fully controlled, the overall computation error is driven by the convergence of the finite Fourier series of the density. For smooth and fast-decaying densities the proposed method will be spectrally accurate. The method scales with O(N log N) operations, where N is the total number of discretization points in the Cartesian grid. The majority of the computational costs come from fast Fourier transforms (FFT), which makes it ideal for GPU computation. Several numerical computations on CPU and GPU validate the method and show efficiency and convergence behavior. Tests are performed using the Vienna Scientific Cluster 3 (VSC3). A free MATLAB implementation for CPU and GPU is provided to the interested community.
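The Fourier-space division at the heart of such solvers can be sketched on a periodic box; free-space boundary conditions, which the paper's solver handles, would require an extended-domain treatment not shown in this simplification.

```python
import numpy as np

# Fourier-space Poisson solve sketch: -lap(u) = f on a periodic box, solved by
# dividing the density's Fourier coefficients by |k|^2.
n, L = 64, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
f = np.sin(X) * np.sin(2 * Y) * np.sin(3 * Z)   # rhs with known solution

k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
k2 = KX**2 + KY**2 + KZ**2
k2[0, 0, 0] = 1.0                               # avoid dividing the mean by zero

u = np.fft.ifftn(np.fft.fftn(f) / k2).real
u -= u.mean()
exact = f / (1 + 4 + 9)                         # since -lap(exact) = f
print(np.abs(u - exact).max())                  # ~ machine precision
```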
NASA Astrophysics Data System (ADS)
Kim, Shin-Woo; Noh, Nam-Kyu; Lim, Gyu-Ho
2013-04-01
This study introduces retrospective optimal interpolation (ROI) and its application with the Weather Research and Forecasting (WRF) model. Song et al. (2009) proposed the ROI method, an optimal interpolation (OI) scheme that gradually assimilates observations over the analysis window to obtain a variance-minimum estimate of the atmospheric state at the initial time of the window. The assimilation window of the ROI algorithm is gradually increased, similar to that of quasi-static variational assimilation (QSVA; Pires et al., 1996). Unlike QSVA, however, ROI assimilates data at post-analysis times using a perturbation method (Verlaan and Heemink, 1997) without an adjoint model. Song and Lim (2011) improved the method by incorporating eigen-decomposition and covariance inflation. The computational cost of ROI can be reduced through eigen-decomposition of the background error covariance, which concentrates the ROI analyses on the error variances of the governing eigenmodes by transforming the control variables into eigenspace. A total energy norm is used to normalize each control variable. In this study, the ROI method is applied to the WRF model in an Observing System Simulation Experiment (OSSE) to validate the algorithm and investigate its capability. Horizontal wind, pressure, potential temperature, and water vapor mixing ratio are used as control variables and observations. First, a single-profile assimilation experiment is performed. Subsequently, OSSEs are performed using a virtual observing system consisting of synop, ship, and sonde data. The difference between forecast errors with and without assimilation clearly grows with time, indicating that assimilation by ROI improves the forecast. The characteristics and strengths/weaknesses of the ROI method are also investigated through experiments with the 3D-Var (3-dimensional variational) and 4D-Var (4-dimensional variational) methods. At the initial time, ROI produces a larger forecast error than 4D-Var. However, the difference between the two results decreases gradually with time, and ROI shows a clearly better result (i.e., smaller forecast error) than 4D-Var after the 9-hour forecast.
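The OI analysis equation underlying ROI can be written in a few lines of NumPy; the state size, covariances, and observation operator below are illustrative.

```python
import numpy as np

# One optimal interpolation (OI) analysis step:
# x_a = x_b + K (y - H x_b), with K = B H^T (H B H^T + R)^(-1).
n, m = 6, 2                                  # state and observation sizes
rng = np.random.default_rng(2)
x_t = rng.normal(size=n)                     # synthetic truth
x_b = x_t + rng.normal(scale=0.5, size=n)    # background with error

H = np.zeros((m, n)); H[0, 1] = H[1, 4] = 1  # observe components 1 and 4
idx = np.arange(n)
B = 0.25 * np.exp(-np.abs(np.subtract.outer(idx, idx)) / 2.0)  # background cov
R = 0.01 * np.eye(m)                                           # observation cov
y = H @ x_t + rng.normal(scale=0.1, size=m)

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)      # gain matrix
x_a = x_b + K @ (y - H @ x_b)                     # analysis
print(np.linalg.norm(x_b - x_t), np.linalg.norm(x_a - x_t))  # analysis improves
```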
Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda
2015-01-01
The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity. PMID:26865735
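A sketch of operator splitting with a step-doubling local error estimate, in the spirit of (but much simpler than) the adaptivity proposed here; the reaction and diffusion models and the tolerance logic are assumptions.

```python
import numpy as np

# First-order Lie splitting for du/dt = reaction(u) + diffusion(u) with a
# step-doubling error estimate: compare one step of size h against two of
# size h/2 and shrink h when the difference exceeds a tolerance.
n, D, tol = 50, 0.1, 1e-4
u = np.zeros(n); u[n // 2] = 1.0

def react(u, h):            # exact solve of du/dt = -u over h
    return u * np.exp(-h)

def diffuse(u, h):          # explicit diffusion substep, periodic
    return u + h * D * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

def lie_step(u, h):
    return diffuse(react(u, h), h)

h, t = 0.05, 0.0
while t < 1.0:
    big = lie_step(u, h)
    small = lie_step(lie_step(u, h / 2), h / 2)
    err = np.abs(big - small).max()     # ~ local splitting + time error
    if err > tol:
        h /= 2                          # reject and retry with a smaller step
        continue
    u, t = small, t + h
    if err < tol / 4:
        h *= 1.5                        # grow the step when comfortably accurate
print(t, h, u.sum())
```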
Peng, Zhouhua; Wang, Dan; Wang, Wei; Liu, Lu
2015-11-01
This paper investigates the containment control problem of networked autonomous underwater vehicles in the presence of model uncertainty and unknown ocean disturbances. A predictor-based neural dynamic surface control design method is presented to develop the distributed adaptive containment controllers, under which the trajectories of follower vehicles nearly converge to the dynamic convex hull spanned by multiple reference trajectories over a directed network. Prediction errors, rather than tracking errors, are used to update the neural adaptation laws, which are independent of the tracking error dynamics, resulting in two time-scales to govern the entire system. The stability property of the closed-loop network is established via Lyapunov analysis, and transient property is quantified in terms of L2 norms of the derivatives of neural weights, which are shown to be smaller than the classical neural dynamic surface control approach. Comparative studies are given to show the substantial improvements of the proposed new method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Genetic Algorithm-Based Motion Estimation Method using Orientations and EMGs for Robot Controls
Chae, Jeongsook; Jin, Yong; Sung, Yunsick
2018-01-01
Demand for interactive wearable devices is rapidly increasing with the development of smart devices. To accurately utilize wearable devices for remote robot controls, limited data should be analyzed and utilized efficiently. For example, motions measured by a wearable device, called the Myo device, can be estimated by measuring its orientation and calculating a Bayesian probability based on these orientation data. Given that the Myo device can measure various types of data, the accuracy of its motion estimation can be increased by utilizing these additional data types. This paper proposes a motion estimation method based on weighted Bayesian probability and concurrently measured data: orientations and electromyograms (EMG). The most probable of the estimated motions is treated as the final estimated motion. Thus, recognition accuracy can be improved compared to traditional methods that employ only a single type of data. In our experiments, seven subjects performed five predefined motions. When orientation is measured by the traditional methods, the sum of the motion estimation errors is 37.3%; likewise, when only EMG data are used, the error is also 37.3%. The proposed combined method has an error of 25%. Therefore, the proposed method reduces motion estimation errors by 12%. PMID:29324641
Aerial robot intelligent control method based on back-stepping
NASA Astrophysics Data System (ADS)
Zhou, Jian; Xue, Qian
2018-05-01
The aerial robot is characterized by strong nonlinearity, high coupling, and parameter uncertainty; a self-adaptive back-stepping control method based on a neural network is proposed in this paper. The uncertain part of the aerial robot model is compensated online by a Cerebellum Model Articulation Controller neural network, and robust control terms are designed to overcome the uncertainty error of the system during online learning. At the same time, a particle swarm algorithm is used to optimize and fix the parameters so as to improve the dynamic performance, and the control law is obtained by back-stepping recursion. Simulation results show that the designed control law has the desired attitude tracking performance and good robustness in case of uncertainties and large errors in the model parameters.
NASA Astrophysics Data System (ADS)
Zhao, Guihua; Chen, Hong; Li, Xingquan; Zou, Xiaoliang
The paper presents the concepts of lever arm and boresight angle, the design requirements of calibration sites, and the integrated calibration method for the boresight angles of a digital camera or laser scanner. Taking test data collected by Applanix's LandMark system as an example, the camera calibration method is introduced, based on piling up three consecutive stereo images together with an OTF-Calibration method using ground control points. The laser boresight angle calibration uses manual and automatic methods with ground control points. Integrated calibration between the digital camera and laser scanner is introduced to improve the systematic precision of the two sensors. By analyzing measurements between ground control points and their corresponding image points in sequential images, it is concluded that object positions derived from the camera images are within about 15 cm in relative error and 20 cm in absolute error. By comparing ground control points with their corresponding laser point clouds, the error is less than 20 cm. From the results of these experiments, the mobile mapping system is an efficient and reliable system for rapidly generating high-accuracy and high-density road spatial data.
A mathematical theory of learning control for linear discrete multivariable systems
NASA Technical Reports Server (NTRS)
Phan, Minh; Longman, Richard W.
1988-01-01
When tracking control systems are used in repetitive operations such as robots in various manufacturing processes, the controller will make the same errors repeatedly. Here consideration is given to learning controllers that look at the tracking errors in each repetition of the process and adjust the control to decrease these errors in the next repetition. A general formalism is developed for learning control of discrete-time (time-varying or time-invariant) linear multivariable systems. Methods of specifying a desired trajectory (such that the trajectory can actually be performed by the discrete system) are discussed, and learning controllers are developed. Stability criteria are obtained which are relatively easy to use to insure convergence of the learning process, and proper gain settings are discussed in light of measurement noise and system uncertainties.
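A minimal P-type ILC sketch consistent with this formalism: the learning update uses the previous repetition's error, and convergence can be checked through the spectral radius of the iteration matrix. The first-order plant and learning gain below are illustrative.

```python
import numpy as np

# P-type ILC: u_{j+1} = u_j + gamma * e_j. Convergence over repetitions is
# governed by the spectral radius of (I - gamma * P), where P collects the
# plant's Markov parameters in a lower-triangular matrix.
N, a, b, gamma = 50, 0.9, 1.0, 0.5
markov = np.array([b * a**i for i in range(N)])        # impulse response
P = np.array([[markov[i - j] if i >= j else 0.0
               for j in range(N)] for i in range(N)])

rho = np.max(np.abs(np.linalg.eigvals(np.eye(N) - gamma * P)))
print(f"spectral radius = {rho:.3f} (< 1 guarantees convergence)")

yd = np.sin(np.linspace(0, 2 * np.pi, N))              # desired trajectory
u = np.zeros(N)
for _ in range(30):                                    # repetitions of the task
    e = yd - P @ u                                     # tracking error this run
    u = u + gamma * e                                  # learning update
print(np.abs(yd - P @ u).max())                        # error after 30 runs
```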
Maurer, Willi; Jones, Byron; Chen, Ying
2018-05-10
In a 2×2 crossover trial for establishing average bioequivalence (ABE) of a generic agent and a currently marketed drug, the recommended approach to hypothesis testing is the two one-sided test (TOST) procedure, which depends, among other things, on the estimated within-subject variability. The power of this procedure, and therefore the sample size required to achieve a minimum power, depends on having a good estimate of this variability. When there is uncertainty, it is advisable to plan the design in two stages, with an interim sample size reestimation after the first stage, using an interim estimate of the within-subject variability. One method and 3 variations of doing this were proposed by Potvin et al. Using simulation, the operating characteristics, including the empirical type I error rate, of the 4 variations (called Methods A, B, C, and D) were assessed by Potvin et al, and Methods B and C were recommended. However, none of these 4 variations formally controls the type I error rate of falsely claiming ABE, even though the amount of inflation produced by Method C was considered acceptable. A major disadvantage of assessing type I error rate inflation using simulation is that unless all possible scenarios for the intended design and analysis are investigated, it is impossible to be sure that the type I error rate is controlled. Here, we propose an alternative, principled method of sample size reestimation that is guaranteed to control the type I error rate at any given significance level. This method uses a new version of the inverse-normal combination of p-values test, in conjunction with standard group sequential techniques, that is more robust to large deviations in initial assumptions regarding the variability of the pharmacokinetic endpoints. The sample size reestimation step is based on significance levels and power requirements that are conditional on the first-stage results. This necessitates a discussion and exploitation of the peculiar properties of the power curve of the TOST testing procedure. We illustrate our approach with an example based on a real ABE study and compare the operating characteristics of our proposed method with those of Method B of Potvin et al. Copyright © 2018 John Wiley & Sons, Ltd.
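The TOST computation itself is compact; the sketch below tests average bioequivalence on the log scale with simulated within-subject log-ratios, ignoring the period and sequence modeling of a real 2×2 crossover analysis.

```python
import numpy as np
from scipy import stats

# TOST for ABE on the log scale: reject both one-sided nulls that the true
# treatment ratio lies outside the [0.80, 1.25] equivalence limits.
rng = np.random.default_rng(7)
n = 24
diff = rng.normal(loc=np.log(1.05), scale=0.20, size=n)  # within-subject log-ratios

mean, se = diff.mean(), diff.std(ddof=1) / np.sqrt(n)
lo, hi = np.log(0.80), np.log(1.25)

t_lo = (mean - lo) / se                     # H0: true mean <= log(0.80)
t_hi = (mean - hi) / se                     # H0: true mean >= log(1.25)
p_lo = 1 - stats.t.cdf(t_lo, df=n - 1)
p_hi = stats.t.cdf(t_hi, df=n - 1)

print(f"p_lo={p_lo:.4f}, p_hi={p_hi:.4f}")
print("ABE concluded" if max(p_lo, p_hi) < 0.05 else "ABE not shown")
```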
Digital halftoning methods for selectively partitioning error into achromatic and chromatic channels
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.
1990-01-01
A method is described for reducing the visibility of artifacts arising in the display of quantized color images on CRT displays. The method is based on the differential spatial sensitivity of the human visual system to chromatic and achromatic modulations. Because the visual system has the highest spatial and temporal acuity for the luminance component of an image, a technique which will reduce luminance artifacts at the expense of introducing high-frequency chromatic errors is sought. A method based on controlling the correlations between the quantization errors in the individual phosphor images is explored. The luminance component is greatest when the phosphor errors are positively correlated, and is minimized when the phosphor errors are negatively correlated. The greatest effect of the correlation is obtained when the intensity quantization step sizes of the individual phosphors have equal luminances. For the ordered dither algorithm, a version of the method can be implemented by simply inverting the matrix of thresholds for one of the color components.
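A toy version of the correlation-control idea using ordered dither: two channels share a Bayer threshold matrix while the third uses its inverse, anti-correlating the quantization errors across channels. The image, matrix size, and channel choice are arbitrary.

```python
import numpy as np

# Ordered dither with an inverted threshold matrix for one color channel,
# so quantization errors are negatively correlated and luminance error drops.
bayer4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

rng = np.random.default_rng(3)
img = rng.random((8, 8, 3))                       # toy RGB image in [0, 1]

tile = np.tile(bayer4, (2, 2))
thr = np.stack([tile, tile, tile.max() - tile], axis=-1)  # inverted for blue
out = (img > thr).astype(float)                   # 1 bit per channel

lum_err = (out - img).mean(axis=-1)               # proxy for luminance error
print(np.abs(lum_err).mean())
```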
NASA Technical Reports Server (NTRS)
Gordon, Steven C.
1993-01-01
Spacecraft in orbit near libration point L1 in the Sun-Earth system are excellent platforms for research concerning solar effects on the terrestrial environment. One spacecraft mission launched in 1978 used an L1 orbit for nearly 4 years, and future L1 orbital missions are also being planned. Orbit determination and station-keeping are, however, required for these orbits. In particular, orbit determination error analysis may be used to compute the state uncertainty after a predetermined tracking period; the predicted state uncertainty levels then will impact the control costs computed in station-keeping simulations. Error sources, such as solar radiation pressure and planetary mass uncertainties, are also incorporated. For future missions, there may be some flexibility in the type and size of the spacecraft's nominal trajectory, but different orbits may produce varying error analysis and station-keeping results. The nominal path, for instance, can be (nearly) periodic or distinctly quasi-periodic. A periodic 'halo' orbit may be constructed to be significantly larger than a quasi-periodic 'Lissajous' path; both may meet mission requirements, but the required control costs for these orbits may differ. Also for this spacecraft tracking and control simulation problem, experimental design methods can be used to determine the most significant uncertainties. That is, these methods can determine the error sources in the tracking and control problem that most impact the control cost (output); they also produce an equation that gives the approximate functional relationship between the error inputs and the output.
Mahmoodabadi, M. J.; Taherkhorsandi, M.; Bagheri, A.
2014-01-01
An optimal robust state feedback tracking controller is introduced to control a biped robot. In the literature, the parameters of the controller are usually determined by a tedious trial and error process. To eliminate this process and design the parameters of the proposed controller, multiobjective evolutionary algorithms, that is, the proposed method, modified NSGAII, the Sigma method, and MATLAB's Toolbox MOGA, are employed in this study. Among the evolutionary optimization algorithms used to design the controller for biped robots, the proposed method performs best since it provides ample opportunity for designers to choose the most appropriate point based upon the design criteria. Three points are chosen from the nondominated solutions of the obtained Pareto front based on two conflicting objective functions, that is, the normalized summation of angle errors and the normalized summation of control effort. The obtained results elucidate the efficiency of the proposed controller in controlling a biped robot. PMID:24616619
Measurement uncertainty evaluation of conicity error inspected on CMM
NASA Astrophysics Data System (ADS)
Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang
2016-01-01
The cone is widely used in mechanical design for rotation, centering, and fixing. Whether the conicity error can be measured and evaluated accurately will directly influence assembly accuracy and working performance. According to the new generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established, and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are first generated by using quasi-random sequences and two kinds of affinities are calculated. Then, each antibody clone is generated and self-adaptively mutated so as to maintain diversity. Similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the expression of uncertainty in measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. The cone part was machined on a CK6140 lathe and measured on a Miracle NC 454 Coordinate Measuring Machine (CMM). The experimental results confirm that the proposed method not only can search for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also can evaluate measurement uncertainty and give control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% less than those computed by the NC454 CMM software, and the evaluation accuracy improves significantly.
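Monte Carlo uncertainty evaluation in the spirit of AMCM can be sketched by pushing input distributions through a measurement model; the toy cone-angle model below is a placeholder for the paper's minimum-zone evaluation.

```python
import numpy as np

# Monte Carlo measurement-uncertainty sketch: propagate input uncertainties
# through a measurement model and read the standard uncertainty and coverage
# interval off the output distribution.
rng = np.random.default_rng(4)
M = 200_000

r1 = rng.normal(10.000, 0.002, M)    # radius at z1 (mm), with uncertainty
r2 = rng.normal(12.000, 0.002, M)    # radius at z2 (mm)
dz = rng.normal(20.000, 0.005, M)    # axial distance (mm)

apex = 2 * np.arctan((r2 - r1) / dz)           # toy cone apex angle (rad)
u_std = apex.std()
lo, hi = np.percentile(apex, [2.5, 97.5])      # 95% coverage interval
print(f"mean={apex.mean():.6f} rad, u={u_std:.2e}, 95% CI=({lo:.6f}, {hi:.6f})")
```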
Prevalence of refraction errors and color blindness in heavy vehicle drivers
Erdoğan, Haydar; Özdemir, Levent; Arslan, Seher; Çetin, Ilhan; Özeç, Ayşe Vural; Çetinkaya, Selma; Sümer, Haldun
2011-01-01
AIM To investigate the frequency of eye disorders in heavy vehicle drivers. METHODS A cross-sectional study was conducted between November 2004 and September 2006 in 200 drivers and 200 non-drivers. A complete ophthalmologic examination was performed, including visual acuity and dilated examination of the posterior segment. We used an auto refractometer to determine refractive errors. RESULTS According to the eye examination results, the prevalence of refractive error was 21.5% and 31.3% in the study and control groups, respectively (P<0.05). The most common type of refractive error in the study group was myopic astigmatism (8.3%), while in the control group it was simple myopia (12.8%). The prevalence of dyschromatopsia in the drivers, the control group, and the total group was 2.2%, 2.8%, and 2.6%, respectively. CONCLUSION A considerable number of drivers lack optimal visual acuity. Refractive errors in drivers may impair traffic security. PMID:22553671
A Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree
NASA Astrophysics Data System (ADS)
Kang, Q.; Huang, G.; Yang, S.
2018-04-01
Point cloud data is one of the most widely used data sources in the field of remote sensing. Key steps of point cloud pre-processing focus on gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods consume large amounts of memory and time. This paper employs a new method that uses a Kd-tree algorithm for construction, a k-nearest neighbor algorithm for searching, and an appropriate threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm helps delete gross errors in point cloud data, decreases memory consumption, and improves efficiency.
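A sketch of the Kd-tree k-nearest-neighbor filtering described above, using SciPy's cKDTree; the 3-sigma global threshold is one common rule, not necessarily the paper's.

```python
import numpy as np
from scipy.spatial import cKDTree

# kNN-based gross-error elimination: build a Kd-tree, compute each point's
# mean distance to its k nearest neighbors, and drop points whose mean
# distance exceeds a global threshold.
rng = np.random.default_rng(5)
cloud = rng.normal(size=(10_000, 3))
outliers = rng.uniform(-10, 10, size=(50, 3))      # injected gross errors
pts = np.vstack([cloud, outliers])

k = 8
tree = cKDTree(pts)
dists, _ = tree.query(pts, k=k + 1)                # first neighbor is the point
mean_d = dists[:, 1:].mean(axis=1)

thr = mean_d.mean() + 3.0 * mean_d.std()           # global 3-sigma rule
kept = pts[mean_d <= thr]
print(len(pts) - len(kept), "points removed")
```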
Quantization-Based Adaptive Actor-Critic Tracking Control With Tracking Error Constraints.
Fan, Quan-Yong; Yang, Guang-Hong; Ye, Dan
2018-04-01
In this paper, the problem of adaptive actor-critic (AC) tracking control is investigated for a class of continuous-time nonlinear systems with unknown nonlinearities and quantized inputs. Different from the existing results based on reinforcement learning, the tracking error constraints are considered and new critic functions are constructed to improve the performance further. To ensure that the tracking errors keep within the predefined time-varying boundaries, a tracking error transformation technique is used to constitute an augmented error system. Specific critic functions, rather than the long-term cost function, are introduced to supervise the tracking performance and tune the weights of the AC neural networks (NNs). A novel adaptive controller with a special structure is designed to reduce the effect of the NN reconstruction errors, input quantization, and disturbances. Based on the Lyapunov stability theory, the boundedness of the closed-loop signals and the desired tracking performance can be guaranteed. Finally, simulations on two connected inverted pendulums are given to illustrate the effectiveness of the proposed method.
Benchmarking Distance Control and Virtual Drilling for Lateral Skull Base Surgery.
Voormolen, Eduard H J; Diederen, Sander; van Stralen, Marijn; Woerdeman, Peter A; Noordmans, Herke Jan; Viergever, Max A; Regli, Luca; Robe, Pierre A; Berkelbach van der Sprenkel, Jan Willem
2018-01-01
Novel audiovisual feedback methods were developed to improve image guidance during skull base surgery by providing audiovisual warnings when the drill tip enters a protective perimeter set at a distance around anatomic structures ("distance control") and by visualizing bone drilling ("virtual drilling"). The goals were to benchmark the drill damage risk reduction provided by distance control, to quantify the accuracy of virtual drilling, and to investigate whether the proposed feedback methods are clinically feasible. In a simulated surgical scenario using human cadavers, 12 inexperienced users (medical students) drilled 12 mastoidectomies. Users were divided into a control group using standard image guidance and 3 groups using distance control with protective perimeters of 1, 2, or 3 mm. Damage to critical structures (sigmoid sinus, semicircular canals, facial nerve) was assessed. Neurosurgeons performed another 6 mastoidectomy/trans-labyrinthine and retro-labyrinthine approaches. Errors of the virtual drill cavities as compared with the real postoperative drill cavities were calculated. In a clinical setting, 3 patients received lateral skull base surgery with the proposed feedback methods. Users drilling with distance control protective perimeters of 3 mm did not damage structures, whereas the groups using smaller protective perimeters and the control group injured structures. Virtual drilling maximum cavity underestimations and overestimations were 2.8 ± 0.1 and 3.3 ± 0.4 mm, respectively. The feedback methods functioned properly in the clinical setting. Distance control reduced the risk of drill damage in proportion to the protective perimeter distance. Errors in virtual drilling reflect spatial errors of the image guidance system. These feedback methods are clinically feasible. Copyright © 2017 Elsevier Inc. All rights reserved.
A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.
Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema
2016-01-01
A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method targeting zero error (3.4 errors per million events) used in industry. The five main principles of Six Sigma are defining, measuring, analyzing, improving, and controlling. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, the administrative supervisor, and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors across the preanalytic, analytic, and postanalytic phases was analyzed. Improvement strategies were proposed in the monthly intradepartmental meetings, and units with high error rates were brought under control. Fifty-six (52.4%) of the 107 recorded errors were at the preanalytic phase, 45 (42%) were analytical, and 6 (5.6%) were postanalytical. Two of the 45 analytical errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, a decrease of 79.77%. The Six Sigma trial in our pathology laboratory led to a reduction of the error rates, mainly in the preanalytic and analytic phases.
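As a quick worked illustration of the arithmetic behind such a study, the snippet below reproduces the phase breakdown from the recorded error counts and shows a generic defects-per-million-opportunities (DPMO) calculation; the unit and opportunity counts are hypothetical, since the abstract does not report them.

```python
# Phase breakdown of the 107 recorded errors reported in the study
errors = {"preanalytic": 56, "analytic": 45, "postanalytic": 6}
total = sum(errors.values())
for phase, n in errors.items():
    print(f"{phase}: {n} errors ({100 * n / total:.1f}%)")

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities, the quantity Six Sigma tracks."""
    return 1e6 * defects / (units * opportunities_per_unit)

# Hypothetical volumes, for illustration only
print(f"DPMO: {dpmo(defects=56, units=40_000, opportunities_per_unit=200):.1f}")
```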
ERIC Educational Resources Information Center
Saavedra, Pedro; Kuchak, JoAnn
An error-prone model (EPM) to predict financial aid applicants who are likely to misreport on Basic Educational Opportunity Grant (BEOG) applications was developed, based on interviews conducted with a quality control sample of 1,791 students during 1978-1979. The model was designed to identify corrective methods appropriate for different types of…
An Autonomous Satellite Time Synchronization System Using Remotely Disciplined VC-OCXOs
Gu, Xiaobo; Chang, Qing; Glennon, Eamonn P.; Xu, Baoda; Dempster, Andrew G.; Wang, Dun; Wu, Jiapeng
2015-01-01
An autonomous remote clock control system is proposed to provide time synchronization and frequency syntonization for satellite-to-satellite or ground-to-satellite time transfer, with the system comprising on-board voltage-controlled oven-controlled crystal oscillators (VC-OCXOs) that are disciplined to a remote master atomic clock or oscillator. The synchronization loop aims to provide autonomous operation over extended periods, to be widely applicable to a variety of scenarios, and to be robust. A new architecture comprising frequency division duplex (FDD), synchronous time division duplex (STDD), and code division multiple access (CDMA) with a centralized topology is employed. This design utilizes dual one-way ranging methods to precisely measure the clock error, adopts least squares (LS) methods to predict the clock error, and employs a third-order phase lock loop (PLL) to generate the voltage control signal. A general functional model for this system is proposed, and the error sources and delays that affect the time synchronization are discussed. Related algorithms for estimating and correcting these errors are also proposed. The performance of the proposed system is simulated and guidance for selecting the clock is provided. PMID:26213929
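A toy sketch of the measurement-and-prediction core, under simplifying assumptions (symmetric propagation paths, calibrated hardware delays): in dual one-way ranging the geometric range cancels in the difference of the two pseudoranges, leaving twice the clock offset, and a low-order least-squares fit then predicts the clock error forward in time.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def clock_offset(p_ab, p_ba):
    """Clock offset (s) from simultaneous one-way pseudoranges (m):
    the geometric range cancels in the difference, leaving 2*c*offset."""
    return (p_ab - p_ba) / (2.0 * C)

def predict_offset(t, offsets, t_future, order=2):
    """Least-squares fit of a low-order clock model (phase, frequency,
    drift), then extrapolation to a future epoch."""
    return np.polyval(np.polyfit(t, offsets, order), t_future)
```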
Controlled sound field with a dual layer loudspeaker array
NASA Astrophysics Data System (ADS)
Shin, Mincheol; Fazi, Filippo M.; Nelson, Philip A.; Hirono, Fabio C.
2014-08-01
Controlled sound interference has been extensively investigated using a prototype dual-layer loudspeaker array comprising 16 loudspeakers. Results are presented for measures of array performance such as input signal power, directivity of sound radiation, and accuracy of sound reproduction resulting from the application of conventional control methods such as minimization of the mean-squared pressure error, maximization of energy difference, and minimization of weighted pressure error and energy. Procedures for selecting the tuning parameters are also introduced. With these conventional concepts aimed at the production of acoustically bright and dark zones, all the control methods require a trade-off between radiation directivity and reproduction accuracy in the bright zone. An alternative solution is proposed which can achieve better performance on the presented measures simultaneously by inserting a low-priority zone, termed the “gray” zone. This involves the weighted minimization of mean-squared errors in the bright and dark zones together with the gray zone, in which the minimization error is given less importance. The result is a directional bright zone in which the accuracy of sound reproduction is maintained with less input power required. The results of simulations and experiments are shown to be in excellent agreement.
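A hedged sketch of the weighted pressure-matching step for the three zones: the gray zone enters the least-squares problem with a small weight, the dark-zone target is zero, and Tikhonov regularization bounds the input power. Matrix names and weights are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def zone_control_filter(G_bright, p_bright, G_dark, G_gray, p_gray,
                        w=(1.0, 1.0, 0.1), beta=1e-3):
    """Loudspeaker weights minimizing weighted mean-squared pressure errors.

    G_*  -- transfer matrices (zone microphones x loudspeakers), one frequency
    p_*  -- target pressures in the bright and gray zones (dark target is 0)
    w    -- (bright, dark, gray) weights; the gray zone has low priority
    beta -- Tikhonov regularization limiting input signal power
    """
    wb, wd, wg = w
    n = G_bright.shape[1]
    A = (wb * G_bright.conj().T @ G_bright
         + wd * G_dark.conj().T @ G_dark
         + wg * G_gray.conj().T @ G_gray
         + beta * np.eye(n))
    b = wb * G_bright.conj().T @ p_bright + wg * G_gray.conj().T @ p_gray
    return np.linalg.solve(A, b)   # complex source weights
```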
NASA Astrophysics Data System (ADS)
Altug, Erdinc
Our work proposes a vision-based stabilization and output tracking control method for a model helicopter, as part of our effort to produce a rotorcraft-based autonomous Unmanned Aerial Vehicle (UAV). Because of the desired maneuvering ability, a four-rotor helicopter has been chosen as the testbed. In previous research on flying vehicles, vision is usually used as a secondary sensor. Unlike that work, our goal is to use visual feedback as the main sensor, responsible not only for detecting where the ground objects are but also for helicopter localization. A novel two-camera method has been introduced for estimating the full six-degrees-of-freedom (DOF) pose of the helicopter; this two-camera system consists of a pan-tilt ground camera and an onboard camera. The pose estimation algorithm is compared through simulation to other methods, such as the four-point and stereo methods, and is shown to be less sensitive to feature detection errors. Helicopters are highly unstable flying vehicles; although this is good for agility, it makes control harder. To build an autonomous helicopter, two methods of control are studied: one using a series of mode-based, feedback-linearizing controllers and the other using a back-stepping control law. Various simulations with 2D and 3D models demonstrate the implementation of these controllers. We also show global convergence of the 3D quadrotor controller even with large calibration errors or the presence of large errors on the image plane. Finally, we present initial flight experiments in which the proposed pose estimation algorithm and nonlinear control techniques were implemented on a remote-controlled helicopter. The helicopter was restricted by a tether to vertical and yaw motions and limited x and y translations.
Designing Measurement Studies under Budget Constraints: Controlling Error of Measurement and Power.
ERIC Educational Resources Information Center
Marcoulides, George A.
1995-01-01
A methodology is presented for minimizing the mean error variance-covariance component in studies with resource constraints. The method is illustrated using a one-facet multivariate design. Extensions to other designs are discussed. (SLD)
Rekaya, Romdhane; Smith, Shannon; Hay, El Hamidi; Farhat, Nourhene; Aggrey, Samuel E
2016-01-01
Errors in the binary status of some response traits are frequent in human, animal, and plant applications. These error rates tend to differ between cases and controls because diagnostic and screening tests have different sensitivity and specificity. This increases the inaccuracies of classifying individuals into correct groups, giving rise to both false-positive and false-negative cases. The analysis of these noisy binary responses due to misclassification will undoubtedly reduce the statistical power of genome-wide association studies (GWAS). A threshold model that accommodates varying diagnostic errors between cases and controls was investigated. A simulation study was carried out where several binary data sets (case-control) were generated with varying effects for the most influential single nucleotide polymorphisms (SNPs) and different diagnostic error rate for cases and controls. Each simulated data set consisted of 2000 individuals. Ignoring misclassification resulted in biased estimates of true influential SNP effects and inflated estimates for true noninfluential markers. A substantial reduction in bias and increase in accuracy ranging from 12% to 32% was observed when the misclassification procedure was invoked. In fact, the majority of influential SNPs that were not identified using the noisy data were captured using the proposed method. Additionally, truly misclassified binary records were identified with high probability using the proposed method. The superiority of the proposed method was maintained across different simulation parameters (misclassification rates and odds ratios) attesting to its robustness.
Systems and methods for compensating for electrical converter nonlinearities
Perisic, Milun; Ransom, Ray M.; Kajouke, Lateef A.
2013-06-18
Systems and methods are provided for delivering energy from an input interface to an output interface. An electrical system includes an input interface, an output interface, an energy conversion module coupled between the input interface and the output interface, and a control module. The control module determines a duty cycle control value for operating the energy conversion module to produce a desired voltage at the output interface. The control module determines an input power error at the input interface and adjusts the duty cycle control value in a manner that is influenced by the input power error, resulting in a compensated duty cycle control value. The control module operates switching elements of the energy conversion module to deliver energy to the output interface with a duty cycle that is influenced by the compensated duty cycle control value.
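A rough single-step sketch of the idea described above (not the patented implementation): a nominal duty cycle command derived from the output-voltage error is adjusted by the measured input power error before being clipped to the physical duty limits. All gains and limits are placeholders.

```python
def compensated_duty_cycle(v_out, v_ref, p_in_expected, p_in_measured,
                           kv=0.05, kp=0.01, d_min=0.05, d_max=0.95):
    """One control step: nominal duty cycle from the voltage error,
    compensated by the input power error to counter converter
    nonlinearities, then saturated to the allowed duty range."""
    d = kv * (v_ref - v_out)                   # nominal duty cycle command
    d += kp * (p_in_expected - p_in_measured)  # input-power-error compensation
    return min(max(d, d_min), d_max)           # respect physical duty limits
```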
NASA Astrophysics Data System (ADS)
He, Xiaojun; Ma, Haotong; Luo, Chuanxin
2016-10-01
The optical multi-aperture imaging system is an effective way to enlarge the aperture and increase the resolution of a telescope optical system; the difficulty lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent (SPGD) algorithm to correct the co-phase error. Compared with current methods, the SPGD method avoids explicit detection of the co-phase error. This paper analyzes the influence of piston error and tilt error on image quality for a double-aperture imaging system, introduces the basic principle of the SPGD algorithm, and discusses the influence of the SPGD algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced; an adaptive gain coefficient can resolve this trade-off appropriately. These results can provide a theoretical reference for the co-phase error correction of multi-aperture imaging systems.
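The SPGD update itself is compact. The sketch below applies a bipolar random perturbation to the actuator commands (e.g., piston corrections) and steps along the measured metric change; the gain and disturbance amplitude are exactly the two parameters whose trade-off the paper analyzes.

```python
import numpy as np

def spgd_step(u, metric, gain=0.5, amp=0.05, rng=np.random.default_rng()):
    """One stochastic parallel gradient descent iteration.

    u      -- current actuator command vector
    metric -- callable returning the image-quality metric to maximize
    gain   -- update gain (larger: faster but less stable convergence)
    amp    -- disturbance amplitude of the random perturbation
    """
    du = amp * rng.choice([-1.0, 1.0], size=u.shape)  # Bernoulli perturbation
    dJ = metric(u + du) - metric(u - du)              # two-sided metric change
    return u + gain * dJ * du                         # gradient-like update
```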
Li, Haitao; Ning, Xin; Li, Wenzhuo
2017-03-01
In order to improve the reliability and reduce the power consumption of high-speed BLDC motor systems, this paper presents a model-free adaptive control (MFAC) based position sensorless drive with only a dc-link current sensor. The initial commutation points are obtained by detecting the phase of the EMF zero-crossing points and then delaying them by 30 electrical degrees. Considering the commutation error caused by the low-pass filter (LPF) and other factors, the relationship between the commutation error angle and the dc-link current is analyzed, and a corresponding MFAC-based control method is proposed so that the commutation error can be corrected by the controller in real time. Both simulation and experimental results show that the proposed correction method achieves the ideal commutation effect within the entire operating speed range. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Adaptive feedforward control of non-minimum phase structural systems
NASA Astrophysics Data System (ADS)
Vipperman, J. S.; Burdisso, R. A.
1995-06-01
Adaptive feedforward control algorithms have been effectively applied to stationary disturbance rejection. For structural systems, the ideal feedforward compensator is a recursive filter which is a function of the transfer functions between the disturbance and control inputs and the error sensor output. Unfortunately, most control configurations result in a non-minimum phase control path; even a collocated control actuator and error sensor will not necessarily produce a minimum phase control path in the discrete domain. Therefore, the common practice is to choose a suitable approximation of the ideal compensator. In particular, all-zero finite impulse response (FIR) filters are desirable because of their inherent stability for adaptive control approaches. However, for highly resonant systems, large order filters are required for broadband applications. In this work, a control configuration is investigated for controlling non-minimum phase lightly damped structural systems. The control approach uses low order FIR filters as feedforward compensators in a configuration that has one more control actuator than error sensors. The performance of the controller was experimentally evaluated on a simply supported plate under white noise excitation for a two-input, one-output (2I1O) system. The results show excellent error signal reduction, attesting to the effectiveness of the method.
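For reference, the workhorse adaptive FIR feedforward update is the filtered-x LMS algorithm; the single-channel sketch below is a generic illustration of that family, not the paper's multi-actuator configuration.

```python
import numpy as np

def fxlms(x, d, sec_path, n_taps=32, mu=1e-3):
    """Filtered-x LMS adaptation of an FIR feedforward compensator.

    x        -- reference signal correlated with the disturbance
    d        -- disturbance as seen at the error sensor (control off)
    sec_path -- FIR model of the control-to-error (secondary) path
    """
    x = np.asarray(x, dtype=float)
    sec_path = np.asarray(sec_path, dtype=float)
    L = len(sec_path)
    w = np.zeros(n_taps)                        # compensator weights
    y = np.zeros(len(x))                        # control output history
    e = np.zeros(len(x))
    xf = np.convolve(x, sec_path)[:len(x)]      # "filtered-x" reference
    for n in range(max(n_taps, L), len(x)):
        y[n] = w @ x[n - n_taps + 1:n + 1][::-1]
        y_err = sec_path @ y[n - L + 1:n + 1][::-1]      # control at sensor
        e[n] = d[n] - y_err                              # residual error
        w += mu * e[n] * xf[n - n_taps + 1:n + 1][::-1]  # LMS update
    return w, e
```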
Integrator Windup Protection-Techniques and a STOVL Aircraft Engine Controller Application
NASA Technical Reports Server (NTRS)
KrishnaKumar, K.; Narayanaswamy, S.
1997-01-01
Integrators are included in the feedback loop of a control system to eliminate the steady-state errors in the commanded variables. The integrator windup problem arises if the control actuators encounter operational limits before the steady-state errors are driven to zero by the integrator. The typical effects of windup are large system oscillations, high steady-state error, and a delayed system response following the windup. In this study, methods to prevent integrator windup are examined to provide Integrator Windup Protection (IWP) for an engine controller of a Short Take-Off and Vertical Landing (STOVL) aircraft. A unified performance index is defined to optimize the performance of the Conventional Anti-Windup (CAW) and the Modified Anti-Windup (MAW) methods. A modified Genetic Algorithm search procedure with stochastic parameter encoding is implemented to obtain the optimal parameters of the CAW scheme. The advantages and drawbacks of the CAW and MAW techniques are discussed, and recommendations are made for the choice of the IWP scheme, given some characteristics of the system.
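For concreteness, here is a back-calculation style anti-windup PID step, one common form of the conventional anti-windup idea discussed above; the gains and the anti-windup coefficient are placeholders, not the optimized values from the study.

```python
def pid_aw_step(e, state, kp, ki, kd, dt, u_min, u_max, kaw=1.0):
    """One step of PID control with back-calculation anti-windup: when the
    actuator saturates, the difference between the saturated and raw
    commands drives the integrator back, preventing windup."""
    integ, e_prev = state
    deriv = (e - e_prev) / dt
    u = kp * e + ki * integ + kd * deriv   # raw (unsaturated) command
    u_sat = min(max(u, u_min), u_max)      # actuator limits
    integ += dt * (e + kaw * (u_sat - u) / max(ki, 1e-12))
    return u_sat, (integ, e)
```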
An extended sequential goodness-of-fit multiple testing method for discrete data.
Castro-Conde, Irene; Döhler, Sebastian; de Uña-Álvarez, Jacobo
2017-10-01
The sequential goodness-of-fit (SGoF) multiple testing method has recently been proposed as an alternative to the familywise error rate- and the false discovery rate-controlling procedures in high-dimensional problems. For discrete data, the SGoF method may be very conservative. In this paper, we introduce an alternative SGoF-type procedure that takes into account the discreteness of the test statistics. Like the original SGoF, our new method provides weak control of the false discovery rate/familywise error rate but attains false discovery rate levels closer to the desired nominal level, and thus it is more powerful. We study the performance of this method in a simulation study and illustrate its application to a real pharmacovigilance data set.
A constrained-gradient method to control divergence errors in numerical MHD
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.
2016-10-01
In numerical magnetohydrodynamics (MHD), a major challenge is maintaining ∇·B = 0. Constrained transport (CT) schemes achieve this but have been restricted to specific methods. For more general (meshless, moving-mesh, ALE) methods, 'divergence-cleaning' schemes reduce the ∇·B errors; however, they can still be significant and can lead to systematic errors which converge away slowly. We propose a new constrained gradient (CG) scheme which augments these with a projection step, and can be applied to any numerical scheme with a reconstruction. This iteratively approximates the least-squares minimizing, globally divergence-free reconstruction of the fluid. Unlike 'locally divergence free' methods, this actually minimizes the numerically unstable ∇·B terms, without affecting the convergence order of the method. We implement this in the mesh-free code GIZMO and compare various test problems. Compared to cleaning schemes, our CG method reduces the maximum ∇·B errors by ~1-3 orders of magnitude (~2-5 dex below typical errors if no ∇·B cleaning is used). By preventing large ∇·B at discontinuities, this eliminates systematic errors at jumps. Our CG results are comparable to CT methods; for practical purposes, the ∇·B errors are eliminated. The cost is modest, ~30 per cent of the hydro algorithm, and the CG correction can be implemented in a range of numerical MHD methods. While for many problems we find Dedner-type cleaning schemes are sufficient for good results, we identify a range of problems where using only Powell or '8-wave' cleaning can produce order-of-magnitude errors.
Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data
Zhao, Shanshan; Prentice, Ross L
2014-01-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469
An error bound for a discrete reduced order model of a linear multivariable system
NASA Technical Reports Server (NTRS)
Al-Saggaf, Ubaid M.; Franklin, Gene F.
1987-01-01
The design of feasible controllers for high dimension multivariable systems can be greatly aided by a method of model reduction. In order for the design based on the order reduction to include a guarantee of stability, it is sufficient to have a bound on the model error. Previous work has provided such a bound for continuous-time systems for algorithms based on balancing. In this note an L-infinity bound is derived for model error for a method of order reduction of discrete linear multivariable systems based on balancing.
Jiang, Honghua; Ni, Xiao; Huster, William; Heilmann, Cory
2015-01-01
Hypoglycemia has long been recognized as a major barrier to achieving normoglycemia with intensive diabetic therapies. It is a common safety concern for diabetes patients, so it is important to apply appropriate statistical methods when analyzing hypoglycemia data. Here, we carried out bootstrap simulations to investigate the performance of four commonly used statistical models (Poisson, negative binomial, analysis of covariance [ANCOVA], and rank ANCOVA) based on data from a diabetes clinical trial. The zero-inflated Poisson (ZIP) model and zero-inflated negative binomial (ZINB) model were also evaluated. Simulation results showed that the Poisson model inflated type I error, while the negative binomial model was overly conservative. After adjusting for dispersion, both the Poisson and negative binomial models yielded slightly inflated type I errors, close to the nominal level, and reasonable power. Reasonable control of type I error was associated with the ANCOVA model. The rank ANCOVA model was associated with the greatest power and with reasonable control of type I error. Inflated type I error was observed with the ZIP and ZINB models.
Kreilinger, Alex; Hiebel, Hannah; Müller-Putz, Gernot R
2016-03-01
This work aimed to find and evaluate a new method for detecting errors in continuous brain-computer interface (BCI) applications. Instead of classifying errors on a single-trial basis, the new method was based on multiple events (MEs) analysis to increase the accuracy of error detection. In a BCI-driven car game, based on motor imagery (MI), discrete events were triggered whenever subjects collided with coins and/or barriers. Coins counted as correct events, whereas barriers were errors. This new method, termed ME method, combined and averaged the classification results of single events (SEs) and determined the correctness of MI trials, which consisted of event sequences instead of SEs. The benefit of this method was evaluated in an offline simulation. In an online experiment, the new method was used to detect erroneous MI trials. Such MI trials were discarded and could be repeated by the users. We found that, even with low SE error potential (ErrP) detection rates, feasible accuracies can be achieved when combining MEs to distinguish erroneous from correct MI trials. Online, all subjects reached higher scores with error detection than without, at the cost of longer times needed for completing the game. Findings suggest that ErrP detection may become a reliable tool for monitoring continuous states in BCI applications when combining MEs. This paper demonstrates a novel technique for detecting errors in online continuous BCI applications, which yields promising results even with low single-trial detection rates.
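The ME combination step itself is simple: average the single-event classifier outputs over a trial and threshold the mean. A minimal sketch, with the threshold as an assumed parameter:

```python
import numpy as np

def trial_is_erroneous(event_probs, threshold=0.5):
    """Combine single-event ErrP classifier outputs over one MI trial;
    averaging across the event sequence makes the trial-level decision
    more reliable than any single-event classification."""
    return float(np.mean(event_probs)) > threshold
```

For example, trial_is_erroneous([0.4, 0.7, 0.6]) flags the trial as erroneous even though the individual single-event detections are noisy.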
Design of a self-adaptive fuzzy PID controller for piezoelectric ceramics micro-displacement system
NASA Astrophysics Data System (ADS)
Zhang, Shuang; Zhong, Yuning; Xu, Zhongbao
2008-12-01
In order to improve the control precision of the piezoelectric ceramics (PZT) micro-displacement system, a self-adaptive fuzzy Proportional-Integral-Derivative (PID) controller is designed by combining the traditional digital PID controller with fuzzy control. The algorithm builds a fuzzy control rule table from the fuzzy control rules and fuzzy reasoning; through this table, the PID parameters can be adjusted online in real-time control. Furthermore, automatic selective control is achieved according to the change of the error. The controller combines the good dynamic capability of fuzzy control with the high steady-state precision of PID control, adopting the method of using fuzzy control and PID control in different segments of time. In the initial and middle stages of the transition process of the system, that is, when the error is larger than a threshold value, fuzzy control is used to adjust the control variable, making full use of the fast response of fuzzy control. When the error is smaller than the threshold and the system is about to reach the steady state, PID control is adopted to eliminate the static error. The problems of PZT in the field of precise positioning are thus overcome. The experimental results prove that the design is correct and practicable.
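The segmented switching logic can be sketched in a few lines; fuzzy_ctrl and pid_step below stand in for the rule-table lookup and the digital PID step, and are hypothetical names:

```python
def hybrid_step(e, state, threshold, fuzzy_ctrl, pid_step):
    """Segmented control law: fuzzy control while the error is large
    (fast transient response), PID once the error is small (eliminates
    the static error near steady state)."""
    if abs(e) > threshold:            # transition stage
        return fuzzy_ctrl(e), state   # fuzzy rule-table lookup
    return pid_step(e, state)         # steady-state stage: digital PID
```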
Adaptation to sensory-motor reflex perturbations is blind to the source of errors.
Hudson, Todd E; Landy, Michael S
2012-01-06
In the study of visual-motor control, perhaps the most familiar findings involve adaptation to externally imposed movement errors. Theories of visual-motor adaptation based on optimal information processing suppose that the nervous system identifies the sources of errors to effect the most efficient adaptive response. We report two experiments using a novel perturbation based on stimulating a visually induced reflex in the reaching arm. Unlike adaptation to an external force, our method induces a perturbing reflex within the motor system itself, i.e., perturbing forces are self-generated. This novel method allows a test of the theory that error source information is used to generate an optimal adaptive response. If the self-generated source of the visually induced reflex perturbation is identified, the optimal response will be via reflex gain control. If the source is not identified, a compensatory force should be generated to counteract the reflex. Gain control is the optimal response to reflex perturbation, both because energy cost and movement errors are minimized. Energy is conserved because neither reflex-induced nor compensatory forces are generated. Precision is maximized because endpoint variance is proportional to force production. We find evidence against source-identified adaptation in both experiments, suggesting that sensory-motor information processing is not always optimal.
NASA Astrophysics Data System (ADS)
Ma, Chen-xi; Ding, Guo-qing
2017-10-01
Simple harmonic waves and synthesized simple harmonic waves are widely used in instrument testing. However, because of the errors caused by gear clearance and the time-delay error of the FPGA, it is difficult to control a servo electric cylinder in precise simple harmonic motion under high-speed, high-frequency, and large-load conditions. To solve this problem, an error compensation method is proposed in this paper. In this method, a displacement sensor is fitted on the piston rod of the electric cylinder. Using the displacement sensor, the real-time displacement of the piston rod is obtained and fed back to the input of the servo motor, realizing closed-loop control; compensation pulses are then issued in the next period of the synthesized waves. The design uses an FPGA as the processing core. The software mainly comprises a waveform generator, an Ethernet module, a memory module, a pulse generator, a pulse selector, a protection module, and an error compensation module. A shock absorber durability test rig is used as the testing platform; it mainly comprises a single electric cylinder, a servo motor driving the electric cylinder, and the servo motor driver.
Choi, Seung Hoan; Labadorf, Adam T; Myers, Richard H; Lunetta, Kathryn L; Dupuis, Josée; DeStefano, Anita L
2017-02-06
Next generation sequencing provides a count of RNA molecules in the form of short reads, yielding discrete, often highly non-normally distributed gene expression measurements. Although Negative Binomial (NB) regression has been generally accepted in the analysis of RNA sequencing (RNA-Seq) data, its appropriateness has not been exhaustively evaluated. We explore logistic regression as an alternative method for RNA-Seq studies designed to compare cases and controls, where disease status is modeled as a function of RNA-Seq reads using simulated and Huntington disease data. We evaluate the effect of adjusting for covariates that have an unknown relationship with gene expression. Finally, we incorporate the data adaptive method in order to compare false positive rates. When the sample size is small or the expression levels of a gene are highly dispersed, the NB regression shows inflated Type-I error rates but the Classical logistic and Bayes logistic (BL) regressions are conservative. Firth's logistic (FL) regression performs well or is slightly conservative. Large sample size and low dispersion generally make Type-I error rates of all methods close to nominal alpha levels of 0.05 and 0.01. However, Type-I error rates are controlled after applying the data adaptive method. The NB, BL, and FL regressions gain increased power with large sample size, large log2 fold-change, and low dispersion. The FL regression has comparable power to NB regression. We conclude that implementing the data adaptive method appropriately controls Type-I error rates in RNA-Seq analysis. Firth's logistic regression provides a concise statistical inference process and reduces spurious associations from inaccurately estimated dispersion parameters in the negative binomial framework.
Real-time optimal guidance for orbital maneuvering.
NASA Technical Reports Server (NTRS)
Cohen, A. O.; Brown, K. R.
1973-01-01
A new formulation for soft-constraint trajectory optimization is presented as a real-time optimal feedback guidance method for multiburn orbital maneuvers. Control is always chosen to minimize burn time plus a quadratic penalty for end condition errors, weighted so that early in the mission (when controllability is greatest) terminal errors are held negligible. Eventually, as controllability diminishes, the method partially relaxes but effectively still compensates perturbations in whatever subspace remains controllable. Although the soft-constraint concept is well-known in optimal control, the present formulation is novel in addressing the loss of controllability inherent in multiple burn orbital maneuvers. Moreover the necessary conditions usually obtained from a Bolza formulation are modified in this case so that the fully hard constraint formulation is a numerically well behaved subcase. As a result convergence properties have been greatly improved.
Linear-quadratic-Gaussian synthesis with reduced parameter sensitivity
NASA Technical Reports Server (NTRS)
Lin, J. Y.; Mingori, D. L.
1992-01-01
We present a method for improving the tolerance of a conventional LQG controller to parameter errors in the plant model. The improvement is achieved by introducing additional terms reflecting the structure of the parameter errors into the LQR cost function, and also the process and measurement noise models. Adjusting the sizes of these additional terms permits a trade-off between robustness and nominal performance. Manipulation of some of the additional terms leads to high gain controllers while other terms lead to low gain controllers. Conditions are developed under which the high-gain approach asymptotically recovers the robustness of the corresponding full-state feedback design, and the low-gain approach makes the closed-loop poles asymptotically insensitive to parameter errors.
Errors in patient specimen collection: application of statistical process control.
Dzik, Walter Sunny; Beckman, Neil; Selleng, Kathleen; Heddle, Nancy; Szczepiorkowski, Zbigniew; Wendel, Silvano; Murphy, Michael
2008-10-01
Errors in the collection and labeling of blood samples for pretransfusion testing increase the risk of transfusion-associated patient morbidity and mortality. Statistical process control (SPC) is a recognized method to monitor the performance of a critical process. An easy-to-use SPC method was tested to determine its feasibility as a tool for monitoring quality in transfusion medicine. SPC control charts were adapted to a spreadsheet presentation. Data tabulating the frequency of mislabeled and miscollected blood samples from 10 hospitals in five countries from 2004 to 2006 were used to demonstrate the method. Control charts were produced to monitor process stability. The participating hospitals found the SPC spreadsheet very suitable to monitor the performance of the sample labeling and collection and applied SPC charts to suit their specific needs. One hospital monitored subcategories of sample error in detail. A large hospital monitored the number of wrong-blood-in-tube (WBIT) events. Four smaller-sized facilities, each following the same policy for sample collection, combined their data on WBIT samples into a single control chart. One hospital used the control chart to monitor the effect of an educational intervention. A simple SPC method is described that can monitor the process of sample collection and labeling in any hospital. SPC could be applied to other critical steps in the transfusion processes as a tool for biovigilance and could be used to develop regional or national performance standards for pretransfusion sample collection. A link is provided to download the spreadsheet for free.
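A spreadsheet-style p-chart reduces to a short calculation. The sketch below computes three-sigma limits for monthly mislabeled-sample proportions; variable monthly sample counts give month-specific limits, which matches how such charts handle unequal subgroup sizes.

```python
import numpy as np

def p_chart(defects, samples):
    """Three-sigma control limits for per-month error proportions.

    defects -- mislabeled/miscollected sample counts per month
    samples -- total samples collected per month
    """
    defects, samples = np.asarray(defects), np.asarray(samples)
    p = defects / samples                  # monthly proportions
    p_bar = defects.sum() / samples.sum()  # center line
    sigma = np.sqrt(p_bar * (1 - p_bar) / samples)
    ucl = p_bar + 3 * sigma                # month-specific upper limit
    lcl = np.maximum(p_bar - 3 * sigma, 0)
    out_of_control = (p > ucl) | (p < lcl)
    return p_bar, ucl, lcl, out_of_control
```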
A wavefront compensation approach to segmented mirror figure control
NASA Technical Reports Server (NTRS)
Redding, David; Breckenridge, Bill; Sevaston, George; Lau, Ken
1991-01-01
We consider the 'figure-control' problem for a spaceborne sub-millimeter wave telescope, the Precision Segmented Reflector Project Focus Mission Telescope. We show that the performance of any figure control system is subject to limits on the controllability and observability of the quality of the wavefront. We present a wavefront-compensation method for the Focus Mission Telescope which uses mirror-figure sensors and three-axis segment actuators to directly minimize wavefront errors due to segment position errors. This approach shows significantly better performance when compared with a panel-state-compensation approach.
Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H
2015-11-30
We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
Robust approximation-free prescribed performance control for nonlinear systems and its application
NASA Astrophysics Data System (ADS)
Sun, Ruisheng; Na, Jing; Zhu, Bin
2018-02-01
This paper presents a robust prescribed performance control approach and its application to nonlinear tail-controlled missile systems with unknown dynamics and uncertainties. The idea of prescribed performance function (PPF) is incorporated into the control design, such that both the steady-state and transient control performance can be strictly guaranteed. Unlike conventional PPF-based control methods, we further tailor a recently proposed systematic control design procedure (i.e. approximation-free control) using the transformed tracking error dynamics, which provides a proportional-like control action. Hence, the function approximators (e.g. neural networks, fuzzy systems) that are widely used to address the unknown nonlinearities in the nonlinear control designs are not needed. The proposed control design leads to a robust yet simplified function approximation-free control for nonlinear systems. The closed-loop system stability and the control error convergence are all rigorously proved. Finally, comparative simulations are conducted based on nonlinear missile systems to validate the improved response and the robustness of the proposed control method.
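A minimal sketch of the PPF machinery under common assumptions (exponentially decaying funnel, symmetric bounds): the transformation blows up at the funnel boundary, so any controller that keeps the transformed error bounded automatically keeps the raw tracking error inside the prescribed envelope. The decay parameters are illustrative.

```python
import numpy as np

def ppf(t, rho0=1.0, rho_inf=0.05, ell=1.0):
    """Performance funnel rho(t): rho0 bounds the transient, rho_inf the
    steady-state error, and ell sets the guaranteed convergence rate."""
    return (rho0 - rho_inf) * np.exp(-ell * t) + rho_inf

def transformed_error(e, t):
    """Map an error confined to -rho(t) < e < rho(t) onto an unconstrained
    variable; it diverges as the error approaches the funnel boundary."""
    z = e / ppf(t)
    return np.log((1 + z) / (1 - z))
```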
Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff
2016-01-01
We consider numerical methods for initial value problems that employ a two stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two stage computations then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates various variations in the two stage computation and in formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.
Errors as a Means of Reducing Impulsive Food Choice.
Sellitto, Manuela; di Pellegrino, Giuseppe
2016-06-05
Nowadays, the increasing incidence of eating disorders due to poor self-control has given rise to increased obesity and other chronic weight problems, and ultimately, to reduced life expectancy. The capacity to refrain from automatic responses is usually high in situations in which making errors is highly likely. The protocol described here aims at reducing imprudent preference in women during hypothetical intertemporal choices about appetitive food by associating it with errors. First, participants undergo an error task where two different edible stimuli are associated with two different error likelihoods (high and low). Second, they make intertemporal choices about the two edible stimuli, separately. As a result, this method decreases the discount rate for future amounts of the edible reward that cued higher error likelihood, selectively. This effect is under the influence of the self-reported hunger level. The present protocol demonstrates that errors, well known as motivationally salient events, can induce the recruitment of cognitive control, thus being ultimately useful in reducing impatient choices for edible commodities.
A Comparison of Exposure Control Procedures in CATS Using the GPC Model
ERIC Educational Resources Information Center
Leroux, Audrey J.; Dodd, Barbara G.
2016-01-01
The current study compares the progressive-restricted standard error (PR-SE) exposure control method with the Sympson-Hetter, randomesque, and no exposure control (maximum information) procedures using the generalized partial credit model with fixed- and variable-length CATs and two item pools. The PR-SE method administered the entire item pool…
Mirinejad, Hossein; Gaweda, Adam E; Brier, Michael E; Zurada, Jacek M; Inanc, Tamer
2017-09-01
Anemia is a common comorbidity in patients with chronic kidney disease (CKD) and is frequently associated with a decreased physical component of quality of life, as well as adverse cardiovascular events. Current treatment methods for renal anemia are mostly population-based approaches treating individual patients with a one-size-fits-all model. However, FDA recommendations stipulate individualized anemia treatment with precise control of the hemoglobin concentration and minimal drug utilization. In accordance with these recommendations, this work presents an individualized drug dosing approach to anemia management by leveraging the theory of optimal control. A Multiple Receding Horizon Control (MRHC) approach based on the RBF-Galerkin optimization method is proposed for individualized anemia management in CKD patients. Recently developed by the authors, the RBF-Galerkin method uses radial basis function approximation along with Galerkin error projection to solve constrained optimal control problems numerically. The proposed approach is applied to generate optimal dosing recommendations for individual patients. Performance of the MRHC approach is compared in silico to that of a population-based anemia management protocol and an individualized multiple model predictive control method for two case scenarios: hemoglobin measurement with and without observational errors. The in silico comparison indicates that the MRHC method yields the least variation in hemoglobin concentration among the methods, especially in the presence of measurement errors. In addition, the average achieved hemoglobin level from the MRHC is significantly closer to the target hemoglobin than that of the other two methods, according to an analysis of variance (ANOVA) statistical test. Furthermore, drug dosages recommended by the MRHC are more stable and accurate and reach the steady-state value notably faster than those generated by the other two methods. The proposed method is highly efficient for the control of hemoglobin level, yet provides accurate dosage adjustments in the treatment of CKD anemia. Copyright © 2017 Elsevier B.V. All rights reserved.
A modified error correction protocol for CCITT signalling system no. 7 on satellite links
NASA Astrophysics Data System (ADS)
Kreuer, Dieter; Quernheim, Ulrich
1991-10-01
Comité Consultatif International Télégraphique et Téléphonique (CCITT) Signalling System No. 7 (SS7) provides a level 2 error correction protocol particularly suited for links with propagation delays higher than 15 ms. However, not being originally designed for satellite links, the so-called Preventive Cyclic Retransmission (PCR) method only performs well on satellite channels when traffic is low. A modified level 2 error control protocol, termed the Fix Delay Retransmission (FDR) method, is suggested, which performs better at high loads and thus provides a more efficient use of the limited carrier capacity. Both the PCR and FDR methods are investigated by means of simulation, with results presented for throughput, queueing delay, and system delay. The FDR method exhibits higher capacity and shorter delay than the PCR method.
Performance analysis of different tuning rules for an isothermal CSTR using integrated EPC and SPC
NASA Astrophysics Data System (ADS)
Roslan, A. H.; Karim, S. F. Abd; Hamzah, N.
2018-03-01
This paper demonstrates the integration of Engineering Process Control (EPC) and Statistical Process Control (SPC) for the control of product concentration in an isothermal CSTR. The objectives of this study are to evaluate the performance of the Ziegler-Nichols (Z-N), Direct Synthesis (DS), and Internal Model Control (IMC) tuning methods and to determine the most effective method for this process. The simulation model was obtained from past literature and reconstructed in MATLAB/Simulink to evaluate the process response. Additionally, the process stability, capability, and normality were analyzed using Process Capability Sixpack reports in Minitab. Based on the results, DS displays the best response, having the smallest rise time, settling time, overshoot, undershoot, Integral Time Absolute Error (ITAE), and Integral Square Error (ISE). Based on the statistical analysis, DS also emerges as the best tuning method, as it exhibits the highest process stability and capability.
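The two error integrals used to rank the tuning methods are straightforward to compute from a simulated response; a short sketch:

```python
import numpy as np

def error_indices(t, y, setpoint):
    """Integral performance indices from a simulated response y(t)."""
    e = setpoint - y
    ise = np.trapz(e ** 2, t)             # Integral Square Error
    itae = np.trapz(t * np.abs(e), t)     # Integral Time Absolute Error
    return ise, itae
```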
High‐resolution trench photomosaics from image‐based modeling: Workflow and error analysis
Reitman, Nadine G.; Bennett, Scott E. K.; Gold, Ryan D.; Briggs, Richard; Duross, Christopher
2015-01-01
Photomosaics are commonly used to construct maps of paleoseismic trench exposures, but the conventional process of manually using image-editing software is time consuming and produces undesirable artifacts and distortions. Herein, we document and evaluate the application of image-based modeling (IBM) for creating photomosaics and 3D models of paleoseismic trench exposures, illustrated with a case-study trench across the Wasatch fault in Alpine, Utah. Our results include a structure-from-motion workflow for the semiautomated creation of seamless, high-resolution photomosaics designed for rapid implementation in a field setting. Compared with conventional manual methods, the IBM photomosaic method provides a more accurate, continuous, and detailed record of paleoseismic trench exposures in approximately half the processing time and 15%-20% of the user input time. Our error analysis quantifies the effect of the number and spatial distribution of control points on model accuracy. For this case study, an ~87 m² exposure of a benched trench photographed at viewing distances of 1.5-7 m yields a model with <2 cm root mean square error (RMSE) with as few as six control points. RMSE decreases as more control points are implemented, but the gains in accuracy are minimal beyond 12 control points. Spreading control points throughout the target area helps to minimize error. We propose that 3D digital models and corresponding photomosaics should be standard practice in paleoseismic exposure archiving. The error analysis serves as a guide for future investigations that seek balance between speed and accuracy during photomosaic and 3D model construction.
Wen, Shiping; Zeng, Zhigang; Chen, Michael Z Q; Huang, Tingwen
2017-10-01
This paper addresses the issue of synchronization of switched delayed neural networks with communication delays via event-triggered control. For synchronizing coupled switched neural networks, we propose a novel event-triggered control law which could greatly reduce the number of control updates for synchronization tasks of coupled switched neural networks involving embedded microprocessors with limited on-board resources. The control signals are driven by properly defined events, which depend on the measurement errors and current-sampled states. By using a delay system method, a novel model of synchronization error system with delays is proposed with the communication delays and event-triggered control in the unified framework for coupled switched neural networks. The criteria are derived for the event-triggered synchronization analysis and control synthesis of switched neural networks via the Lyapunov-Krasovskii functional method and free weighting matrix approach. A numerical example is elaborated on to illustrate the effectiveness of the derived results.
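A typical static trigger rule from this literature, as a generic sketch (sigma and mu are design parameters trading update frequency against synchronization error; not necessarily the paper's exact condition):

```python
import numpy as np

def should_transmit(x, x_last_sent, sigma=0.1, mu=1e-3):
    """Event-trigger test: update the control only when the measurement
    error since the last broadcast state exceeds a state-dependent
    threshold, greatly reducing the number of control updates."""
    err = np.linalg.norm(x - x_last_sent)   # measurement error
    return err >= sigma * np.linalg.norm(x) + mu
```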
Nonlinear gearshifts control of dual-clutch transmissions during inertia phase.
Hu, Yunfeng; Tian, Lu; Gao, Bingzhao; Chen, Hong
2014-07-01
In this paper, a model-based nonlinear gearshift controller is designed by the backstepping method to improve the shift quality of vehicles with a dual-clutch transmission (DCT). Considering easy-implementation, the controller is rearranged into a concise structure which contains a feedforward control and a feedback control. Then, robustness of the closed-loop error system is discussed in the framework of the input to state stability (ISS) theory, where model uncertainties are considered as the additive disturbance inputs. Furthermore, due to the application of the backstepping method, the closed-loop error system is ordered as a linear system. Using the linear system theory, a guideline for selecting the controller parameters is deduced which could reduce the workload of parameters tuning. Finally, simulation results and Hardware in the Loop (HiL) simulation are presented to validate the effectiveness of the designed controller. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Interference elimination in digital controllers of automation systems of oil and gas complex
NASA Astrophysics Data System (ADS)
Solomentsev, K. Yu; Fugarov, D. D.; Purchina, O. A.; Poluyan, A. Y.; Nesterchuk, V. V.; Petrenkova, S. B.
2018-05-01
This article considers problems arising in the development of digital controllers for automatic control systems. In the presence of interference, and also at high digitization frequencies, digital differentiation gives a large error. The problem is that the derivative is calculated as the difference of two nearly equal values. To reduce this error, a differentiation method is proposed in which the difference quotient is averaged over a series of values. A structure chart for implementing this differentiation method in controller construction is presented.
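A minimal sketch of the averaged difference quotient: instead of one noisy two-point difference, the estimate averages the last m quotients, which suppresses interference at the cost of a small effective delay. m is an assumed design parameter.

```python
import numpy as np

def averaged_derivative(y, dt, m=8):
    """Estimate the derivative at the latest sample by averaging the m
    most recent difference quotients instead of using a single noisy
    two-point difference."""
    y = np.asarray(y, dtype=float)
    quotients = (y[-m:] - y[-m - 1:-1]) / dt   # m most recent quotients
    return float(quotients.mean())
```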
How is the weather? Forecasting inpatient glycemic control
Saulnier, George E; Castro, Janna C; Cook, Curtiss B; Thompson, Bithika M
2017-01-01
Aim: Apply methods of damped trend analysis to forecast inpatient glycemic control. Method: Observed and calculated point-of-care blood glucose data trends were determined over 62 weeks. Mean absolute percent error was used to calculate differences between observed and forecasted values. Comparisons were drawn between model results and linear regression forecasting. Results: The forecasted mean glucose trends observed during the first 24 and 48 weeks of projections compared favorably to the results provided by linear regression forecasting. However, in some scenarios, the damped trend method changed inferences compared with linear regression. In all scenarios, mean absolute percent error values remained below the 10% accepted by demand industries. Conclusion: Results indicate that forecasting methods historically applied within demand industries can project future inpatient glycemic control. Additional study is needed to determine if forecasting is useful in the analyses of other glucometric parameters and, if so, how to apply the techniques to quality improvement. PMID:29134125
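For readers unfamiliar with damped trend exponential smoothing, a compact sketch of the forecasting recursion and the MAPE comparison metric follows; the smoothing parameters are illustrative, not those fitted in the study.

```python
import numpy as np

def damped_trend_forecast(y, horizon, alpha=0.3, beta=0.1, phi=0.9):
    """Holt's linear trend method with damping phi < 1: the projected
    trend flattens as the horizon grows. Returns forecasts for
    1..horizon steps ahead."""
    level, trend = y[0], y[1] - y[0]
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (prev_level + phi * trend)
        trend = beta * (level - prev_level) + (1 - beta) * phi * trend
    steps = np.arange(1, horizon + 1)
    return level + np.cumsum(phi ** steps) * trend  # damped trend sum

def mape(actual, forecast):
    """Mean absolute percent error between observed and forecast values."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))
```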
NASA Astrophysics Data System (ADS)
Rivière, G.; Hua, B. L.
2004-10-01
A new perturbation initialization method is used to quantify error growth due to inaccuracies of the forecast model initial conditions in a quasigeostrophic box ocean model describing a wind-driven double gyre circulation. This method is based on recent analytical results on the Lagrangian alignment dynamics of the perturbation velocity vector in quasigeostrophic flows. More specifically, it consists in initializing a unique perturbation from the sole knowledge of the control flow properties at the initial time of the forecast, whose velocity vector orientation satisfies a Lagrangian equilibrium criterion. This Alignment-based Initialization method is hereafter denoted the AI method. In terms of the spatial distribution of the errors, we have compared favorably the AI error forecast with the mean error obtained with a Monte-Carlo ensemble prediction. It is shown that the AI forecast is on average as efficient as the error forecast initialized with the leading singular vector for the palinstrophy norm, and significantly more efficient than that for the total energy and enstrophy norms. Furthermore, a more precise examination shows that the AI forecast is systematically relevant for all control flows, whereas the palinstrophy singular vector forecast sometimes leads to very good scores and sometimes to very bad ones. A principal component analysis at the final time of the forecast shows that the AI mode spatial structure is comparable to that of the first eigenvector of the error covariance matrix for a "bred mode" ensemble. Furthermore, the kinetic energy of the AI mode grows at the same constant rate as that of the "bred modes" from the initial time to the final time of the forecast and is therefore characterized by a sustained phase of error growth. In this sense, the AI mode, based on the Lagrangian dynamics of the perturbation velocity orientation, provides a rationale for the "bred mode" behavior.
Finite-time control for nonlinear spacecraft attitude based on terminal sliding mode technique.
Song, Zhankui; Li, Hongxing; Sun, Kaibiao
2014-01-01
In this paper, a fast terminal sliding mode control (FTSMC) scheme with double closed loops is proposed for spacecraft attitude control. The FTSMC laws are included in both an inner control loop and an outer control loop. Firstly, a fast terminal sliding surface (FTSS) is constructed, which can drive both the inner-loop and outer-loop tracking errors on the FTSS to converge to zero in finite time. Secondly, the FTSMC strategy is designed using Lyapunov's method to ensure the occurrence of the sliding motion in finite time, which preserves fast transient response and improves tracking accuracy. It is proved that FTSMC guarantees convergence of the tracking error both in the reaching phase and on the sliding surface. Finally, simulation results demonstrate the effectiveness of the proposed control scheme. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
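One common form of fast terminal sliding surface, shown here as a hedged sketch (the paper's exact surface, gains alpha and beta, and exponents q/p may differ):

```python
import numpy as np

def fast_terminal_sliding_surface(e, e_dot, alpha=2.0, beta=1.0, q=3, p=5):
    """A widely used fast terminal sliding surface:
        s = e_dot + alpha * e + beta * |e|^(q/p) * sign(e),
    with odd integers q < p so the fractional power keeps the sign of e.
    The linear term dominates far from zero (fast transient), while the
    fractional term forces finite-time convergence of e once s = 0."""
    return e_dot + alpha * e + beta * np.sign(e) * np.abs(e) ** (q / p)

def reaching_law(s, k1=1.5, k2=0.5):
    """Illustrative reaching law driving s to zero in finite time."""
    return -k1 * s - k2 * np.sign(s)
```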
Error amplification to promote motor learning and motivation in therapy robotics.
Shirzad, Navid; Van der Loos, H F Machiel
2012-01-01
To study the effects of different feedback error amplification methods on a subject's upper-limb motor learning and affect during a point-to-point reaching exercise, we developed a real-time controller for a robotic manipulandum. The reaching environment was visually distorted by implementing a thirty-degree rotation between the coordinate systems of the robot's end-effector and the visual display. Feedback error amplification was provided to subjects as they trained to learn reaching within the visually rotated environment. Error amplification was provided either visually or through both haptic and visual means, each method with two different amplification gains. Subjects' performance (i.e., trajectory error) and self-reported questionnaire responses were used to study the speed and amount of adaptation promoted by each error amplification method, as well as subjects' emotional changes. We found that providing haptic and visual feedback promotes faster adaptation to the distortion and increases subjects' satisfaction with the task, leading to a higher level of attentiveness during the exercise. This finding can be used to design a novel exercise regimen, where alternating between error amplification methods is used both to increase a subject's motor learning and to maintain a minimum level of motivational engagement in the exercise. In future experiments, we will test whether such exercise methods will lead to a faster learning time and greater motivation to pursue a therapy exercise regimen.
The influence of monetary punishment on cognitive control in abstinent cocaine-users*
Hester, Robert; Bell, Ryan P.; Foxe, John J.; Garavan, Hugh
2013-01-01
Background Dependent drug users show a diminished neural response to punishment, in both limbic and cortical regions, though it remains unclear how such changes influence cognitive processes critical to addiction. To assess this relationship, we examined the influence of monetary punishment on inhibitory control and adaptive post-error behaviour in abstinent cocaine dependent (CD) participants. Methods 15 abstinent CD and 15 matched control participants performed a Go/No-go response inhibition task, which administered monetary fines for failed response inhibition, during collection of fMRI data. Results CD participants showed reduced inhibitory control and significantly less adaptive post-error slowing in response to punishment, when compared to controls. The diminished behavioural punishment sensitivity shown by CD participants was associated with significant hypoactive error-related BOLD responses in the dorsal anterior cingulate cortex (ACC), right insula and right prefrontal regions. Specifically, CD participants’ error-related response in these regions was not modulated by the presence of punishment, whereas control participants’ response showed a significant BOLD increase during punished errors. Conclusions CD participants showed a blunted response to failed control (errors) that was not modulated by punishment. Consistent with previous findings of reduced sensitivity to monetary loss in cocaine users, we further demonstrate that such insensitivity is associated with an inability to increase cognitive control in the face of negative consequences, a core symptom of addiction. The pattern of deficits in the CD group may have implications for interventions that attempt to improve cognitive control in drug dependent groups via positive/negative incentives. PMID:23791040
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicates that samples of size no larger than ten are not uncommon in biomedical research, and that many such studies are limited to strong effects because their sample sizes are smaller than six. For data collected from biomedical experiments, it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of the studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8, the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
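A simplified version of such a computer-simulated experiment, assuming normal populations and a mean-shift effect; the effect size, trial count and significance level are illustrative, not the paper's settings:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def error_rates(n, effect, trials=5_000, alpha=0.05):
    """Monte Carlo estimate of Type I and Type II error rates of the
    two-sample t-test for sample size n and a shift `effect` (in SD units)."""
    type1 = type2 = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n)
        b_null = rng.normal(0.0, 1.0, n)     # unexposed: no effect
        b_alt = rng.normal(effect, 1.0, n)   # exposed: true effect present
        if stats.ttest_ind(a, b_null).pvalue < alpha:
            type1 += 1                       # false positive
        if stats.ttest_ind(a, b_alt).pvalue >= alpha:
            type2 += 1                       # missed effect
    return type1 / trials, type2 / trials

for n in (3, 6, 9):
    print(n, error_rates(n, effect=2.0))
```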
NASA Astrophysics Data System (ADS)
Semenov, Z. V.; Labusov, V. A.
2017-11-01
Results of studying the errors of indirect monitoring by means of computer simulations are reported. The monitoring method is based on measuring spectra of reflection from additional monitoring substrates in a wide spectral range. Special software (Deposition Control Simulator) is developed, which allows one to estimate the influence of the monitoring system parameters (noise of the photodetector array, operating spectral range of the spectrometer and errors of its calibration in terms of wavelengths, drift of the radiation source intensity, and errors in the refractive index of deposited materials) on the random and systematic errors of deposited layer thickness measurements. The direct and inverse problems of multilayer coatings are solved using the OptiReOpt library. Curves of the random and systematic errors of measurements of the deposited layer thickness as functions of the layer thickness are presented for various values of the system parameters. Recommendations are given on using the indirect monitoring method for the purpose of reducing the layer thickness measurement error.
A high precision dual feedback discrete control system designed for satellite trajectory simulator
NASA Astrophysics Data System (ADS)
Liu, Ximin; Liu, Liren; Sun, Jianfeng; Xu, Nan
2005-08-01
Cooperating with free-space laser communication terminals, the satellite trajectory simulator is used to test the acquisition, pointing, tracking and communication performance of the terminals, so it plays an important role in terminal ground test and verification. Using a double prism, Sun et al. in our group designed a satellite trajectory simulator. In this paper, a high precision dual feedback discrete control system designed for the simulator is given, and a digital simulation of the simulator is made correspondingly. In the dual feedback discrete control system, a proportional-integral (PI) controller is used in the velocity feedback loop and a proportional-integral-derivative (PID) controller is used in the position feedback loop. In the controller design, the simplex method is introduced and an improvement to the method is made. According to the transfer function of the control system in the Z domain, the digital simulation of the simulator is given when it is exposed to mechanism error and moment disturbance. Typically, when the mechanism error is 100 µrad, the residual standard errors of the pitching angle, azimuth angle, x-coordinate position and y-coordinate position are 0.49 µrad, 6.12 µrad, 4.56 µrad and 4.09 µrad, respectively. When the moment disturbance is 0.1 rad, the residual standard errors of the pitching angle, azimuth angle, x-coordinate position and y-coordinate position are 0.26 µrad, 0.22 µrad, 0.16 µrad and 0.15 µrad, respectively. The simulation results demonstrate that the dual feedback discrete control system designed for the simulator can achieve the anticipated high precision performance.
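A minimal sketch of the dual-loop structure described above, with a discrete PI controller in the inner velocity loop and a PID controller in the outer position loop; the gains and sample time are hypothetical placeholders, not the paper's tuned values:

```python
class PI:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.acc = kp, ki, dt, 0.0

    def update(self, err):
        self.acc += err * self.dt           # discrete integral of the error
        return self.kp * err + self.ki * self.acc

class PID(PI):
    def __init__(self, kp, ki, kd, dt):
        super().__init__(kp, ki, dt)
        self.kd, self.prev = kd, 0.0

    def update(self, err):
        d = (err - self.prev) / self.dt     # backward-difference derivative
        self.prev = err
        return super().update(err) + self.kd * d

# dual feedback: the outer position loop commands the inner velocity loop
dt = 1e-3
pos_loop = PID(kp=50.0, ki=5.0, kd=0.5, dt=dt)   # illustrative gains
vel_loop = PI(kp=8.0, ki=120.0, dt=dt)

def control_step(pos_ref, pos_meas, vel_meas):
    vel_ref = pos_loop.update(pos_ref - pos_meas)
    return vel_loop.update(vel_ref - vel_meas)   # actuator command
```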
Caprihan, A; Pearlson, G D; Calhoun, V D
2008-08-15
Principal component analysis (PCA) is often used to reduce the dimension of data before applying more sophisticated data analysis methods such as non-linear classification algorithms or independent component analysis. This practice is based on selecting components corresponding to the largest eigenvalues. If the ultimate goal is separation of data into two groups, then this set of components need not have the most discriminatory power. We measured the distance between two such populations using the Mahalanobis distance and chose the eigenvectors to maximize it, a modified PCA method which we call discriminant PCA (DPCA). DPCA was applied to diffusion tensor-based fractional anisotropy images to distinguish age-matched schizophrenia subjects from healthy controls. The performance of the proposed method was evaluated by the leave-one-out method. We show that for this fractional anisotropy data set, the classification error with 60 components was close to the minimum error, and that the Mahalanobis distance was twice as large with DPCA as with PCA. Finally, by masking the discriminant function with the white matter tracts of the Johns Hopkins University atlas, we identified the left superior longitudinal fasciculus as the tract which gave the least classification error. In addition, with six optimally chosen tracts the classification error was zero.
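A rough sketch of the idea of ranking components by group separation rather than by eigenvalue. This uses a per-component (diagonal) Mahalanobis-style criterion for simplicity; the published DPCA may rank or combine components differently, and the function name is hypothetical:

```python
import numpy as np

def dpca_components(X, labels, n_components):
    """Select PCA eigenvectors by the separation they give between two
    groups (labels 0/1) instead of by eigenvalue magnitude."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt.T                        # projections on all components
    g0, g1 = scores[labels == 0], scores[labels == 1]
    # squared mean separation per component, scaled by pooled variance
    pooled = (g0.var(axis=0) * len(g0) + g1.var(axis=0) * len(g1)) / len(X)
    sep = (g0.mean(axis=0) - g1.mean(axis=0)) ** 2 / pooled
    keep = np.argsort(sep)[::-1][:n_components]
    return Vt[keep], scores[:, keep]
```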
Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.
2014-01-01
Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
An in-situ measuring method for planar straightness error
NASA Astrophysics Data System (ADS)
Chen, Xi; Fu, Luhua; Yang, Tongyu; Sun, Changku; Wang, Zhong; Zhao, Yan; Liu, Changjie
2018-01-01
To address current problems in measuring the planar shape error of workpieces, an in-situ measuring method based on laser triangulation is presented in this paper. The method avoids the inefficiency of traditional methods such as the knife straightedge, as well as the time and cost requirements of a coordinate measuring machine (CMM). A laser-based measuring head is designed and installed on the spindle of a numerical control (NC) machine. The measuring head moves along a planned path to sample the measuring points. The spatial coordinates of the measuring points are obtained by combining the laser triangulation displacement sensor with the coordinate system of the NC machine, which makes in-situ measurement possible. The planar straightness error is evaluated using particle swarm optimization (PSO). To verify the feasibility and accuracy of the measuring method, simulation experiments were implemented with a CMM. Comparison of the measuring head's results with the corresponding values obtained by a composite measuring machine verifies that the method can realize high-precision, automatic measurement of the planar straightness error of a workpiece.
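A sketch of PSO applied to minimum-zone straightness evaluation, under a simplified 2D formulation where only the line slope is searched (the offset drops out of the zone width); swarm parameters are illustrative and the paper's formulation may differ:

```python
import numpy as np

rng = np.random.default_rng(1)

def straightness(points, slope):
    """Minimum-zone width of the point set about a line of given slope."""
    r = points[:, 1] - slope * points[:, 0]        # residuals, offset-free
    return (r.max() - r.min()) / np.sqrt(1.0 + slope**2)

def pso_straightness(points, n_particles=30, iters=100):
    """Search for the slope minimizing the zone width with basic PSO;
    assumes the nominal line is near-horizontal (slope in [-1, 1])."""
    x = rng.uniform(-1.0, 1.0, n_particles)        # particle positions
    v = np.zeros(n_particles)
    pbest = x.copy()
    pbest_f = np.array([straightness(points, s) for s in x])
    gbest = pbest[pbest_f.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x += v
        f = np.array([straightness(points, s) for s in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()]
    return gbest, straightness(points, gbest)
```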
Virtual Passive Controller for Robot Systems Using Joint Torque Sensors
NASA Technical Reports Server (NTRS)
Aldridge, Hal A.; Juang, Jer-Nan
1997-01-01
This paper presents a control method based on virtual passive dynamic control that will stabilize a robot manipulator using joint torque sensors and a simple joint model. The method does not require joint position or velocity feedback for stabilization. The proposed control method is stable in the sense of Lyapunov. The control method was implemented on several joints of a laboratory robot. The controller showed good stability robustness to system parameter error and to the exclusion of nonlinear dynamic effects on the joints. The controller enhanced position tracking performance and, in the absence of position control, dissipated joint energy.
A Framework for Modeling Human-Machine Interactions
NASA Technical Reports Server (NTRS)
Shafto, Michael G.; Rosekind, Mark R. (Technical Monitor)
1996-01-01
Modern automated flight-control systems employ a variety of different behaviors, or modes, for managing the flight. While developments in cockpit automation have resulted in workload reduction and economical advantages, they have also given rise to an ill-defined class of human-machine problems, sometimes referred to as 'automation surprises'. Our interest in applying formal methods for describing human-computer interaction stems from our ongoing research on cockpit automation. In this area of aeronautical human factors, there is much concern about how flight crews interact with automated flight-control systems, so that the likelihood of making errors, in particular mode-errors, is minimized and the consequences of such errors are contained. The goal of the ongoing research on formal methods in this context is: (1) to develop a framework for describing human interaction with control systems; (2) to formally categorize such automation surprises; and (3) to develop tests for identification of these categories early in the specification phase of a new human-machine system.
Robust preview control for a class of uncertain discrete-time systems with time-varying delay.
Li, Li; Liao, Fucheng
2018-02-01
This paper proposes a concept of robust preview tracking control for uncertain discrete-time systems with time-varying delay. Firstly, a model transformation is employed for an uncertain discrete system with time-varying delay. Then, auxiliary variables related to the system state and input are introduced to derive an augmented error system that includes future information on the reference signal. This transforms the tracking problem into a regulator problem. Finally, for the augmented error system, a sufficient condition for asymptotic stability is derived and the preview controller design method is proposed based on the scaled small gain theorem and the linear matrix inequality (LMI) technique. The method proposed in this paper not only resolves the difficulty of applying the difference operator to time-varying matrices but also simplifies the structure of the augmented error system. A numerical simulation example illustrates the effectiveness of the results presented in the paper. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Krimmel, R.M.
1999-01-01
Net mass balance has been measured since 1958 at South Cascade Glacier using the 'direct method,' i.e., area averages of snow gain and firn and ice loss at stakes. Analysis of cartographic vertical photography has allowed measurement of mass balance using the 'geodetic method' in 1970, 1975, 1977, 1979-80, and 1985-97. Water equivalent change as measured by these nearly independent methods should give similar results. During 1970-97, the direct method shows a cumulative balance of about -15 m, and the geodetic method shows a cumulative balance of about -22 m. The deviation between the two methods is fairly consistent, suggesting no gross errors in either, but rather a cumulative systematic error. It is suspected that the cumulative error is in the direct method, because the geodetic method is based on a non-changing reference (the bedrock control), whereas the direct method is measured with reference only to the previous year's summer surface. Possible sources of mass loss missing from the direct method are basal melt, internal melt, and ablation on crevasse walls. Possible systematic measurement errors include under-estimation of the density of lost material, sinking stakes, or poorly represented areas.
Research on effects of phase error in phase-shifting interferometer
NASA Astrophysics Data System (ADS)
Wang, Hongjun; Wang, Zhao; Zhao, Hong; Tian, Ailing; Liu, Bingcai
2007-12-01
In phase-shifting interferometry, the phase-shifting error from the phase shifter is the main factor that directly affects the measurement accuracy of the interferometer. In this paper, the sources and types of phase-shifting error are introduced, and some methods to eliminate these errors are mentioned. Based on the theory of phase-shifting interferometry, the effects of phase-shifting error are analyzed in detail. A liquid crystal display (LCD) used as a new type of phase shifter has the advantage that the phase shift can be controlled digitally, without any mechanical moving or rotating element. By changing the coded image displayed on the LCD, the phase shift in the measuring system is induced. The LCD's phase modulation characteristic is analyzed in theory and tested. Based on the Fourier transform, a model of the effect of the phase error originating from the LCD is established for four-step phase-shifting interferometry, and the error range is obtained. In order to reduce the error, a new error compensation algorithm is put forward, in which the error is estimated by processing the interferograms; the interferograms can then be compensated and the measurement results obtained from the four-step phase-shifting interferograms. Theoretical analysis and simulation results demonstrate the feasibility of this approach to improve measurement accuracy.
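The standard four-step phase-retrieval algorithm referred to above, plus a small simulation of how a shifter miscalibration corrupts the recovered phase (the 5% error and fringe parameters are illustrative):

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Recover the wrapped phase from four interferograms taken at nominal
    phase shifts of 0, pi/2, pi and 3*pi/2:  phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(I4 - I2, I1 - I3)

# effect of a linear phase-shifter miscalibration eps on the recovered phase
phi = np.linspace(0, 2 * np.pi, 512)
eps = 0.05                                   # 5% shift error (illustrative)
steps = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi * (1 + eps)
I = [1.0 + 0.8 * np.cos(phi + d) for d in steps]
phase_err = np.angle(np.exp(1j * (four_step_phase(*I) - phi)))
print(np.abs(phase_err).max())               # peak error induced by the shifter
```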
New-Sum: A Novel Online ABFT Scheme For General Iterative Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao, Dingwen; Song, Shuaiwen; Krishnamoorthy, Sriram
Emerging high-performance computing platforms, with large component counts and lower power margins, are anticipated to be more susceptible to soft errors in both logic circuits and memory subsystems. We present an online algorithm-based fault tolerance (ABFT) approach to efficiently detect and recover soft errors for general iterative methods. We design a novel checksum-based encoding scheme for matrix-vector multiplication that is resilient to both arithmetic and memory errors. Our design decouples the checksum updating process from the actual computation, and allows adaptive checksum overhead control. Building on this new encoding mechanism, we propose two online ABFT designs that can effectively recover from errors when combined with a checkpoint/rollback scheme.
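To illustrate the underlying principle, here is the classic column-checksum encoding for matrix-vector products (Huang-Abraham style); the paper's decoupled, adaptive scheme differs in its details, and the function names are our own:

```python
import numpy as np

def encode_checksum(A):
    """Append a column-sum row to A so the checksum of y = A @ x can be
    predicted independently of the multiplication itself."""
    return np.vstack([A, A.sum(axis=0)])

def checked_matvec(A_enc, x, tol=1e-8):
    """Matrix-vector product with online error detection: the last entry
    of the encoded product must equal the sum of the other entries."""
    y = A_enc @ x
    if abs(y[:-1].sum() - y[-1]) > tol * max(1.0, abs(y[-1])):
        raise RuntimeError("soft error detected in matvec")
    return y[:-1]

A = np.random.rand(64, 64)
A_enc = encode_checksum(A)
y = checked_matvec(A_enc, np.random.rand(64))  # raises on a detected upset
```

On detection, an iterative solver would roll back to the last checkpoint rather than propagate the corrupted iterate.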
Verb Errors of Bilingual and Monolingual Basic Writers
ERIC Educational Resources Information Center
Griswold, Olga
2017-01-01
This study analyzed the grammatical control of verbs exercised by 145 monolingual English and Generation 1.5 bilingual developmental writers in narrative essays using quantitative and qualitative methods. Generation 1.5 students made more errors than their monolingual peers in each category investigated, albeit in only 2 categories was the…
Exploring Discretization Error in Simulation-Based Aerodynamic Databases
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Nemec, Marian
2010-01-01
This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and we use adaptive mesh refinement to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database of 120 cases computed for a NACA 0012 airfoil. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit are shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.
NASA Astrophysics Data System (ADS)
Lin, Tsung-Chih
2010-12-01
In this paper, a novel direct adaptive interval type-2 fuzzy-neural tracking control, equipped with a sliding mode and a Lyapunov synthesis approach, is proposed to handle training data corrupted by noise or rule uncertainties for SISO nonlinear systems involving external disturbances. By employing adaptive fuzzy-neural control theory, update laws are derived for approximating the uncertain nonlinear dynamical system. In the meantime, the sliding mode control method and the Lyapunov stability criterion are incorporated into the adaptive fuzzy-neural control scheme such that the derived controller is robust with respect to unmodeled dynamics, external disturbances and approximation errors. In comparison with conventional methods, the advocated approach not only guarantees closed-loop stability, but the output tracking error of the overall system also converges to zero asymptotically without prior knowledge of the upper bound of the lumped uncertainty. Furthermore, the chattering effect of the control input is substantially reduced by the proposed technique. Finally, a simulation example is given to illustrate the performance of the proposed method.
Optimization of Control Points Number at Coordinate Measurements based on the Monte-Carlo Method
NASA Astrophysics Data System (ADS)
Korolev, A. A.; Kochetkov, A. V.; Zakharov, O. V.
2018-01-01
Improving the quality of products causes an increase in the requirements for the accuracy of the dimensions and shape of the surfaces of workpieces. This, in turn, raises the requirements for the accuracy and productivity of measuring workpieces. The use of coordinate measuring machines is currently the most effective way of solving such problems. This article proposes a method for optimizing the number of control points using Monte Carlo simulation. Based on the measurement of a small sample from batches of workpieces, statistical modeling is performed, which allows one to obtain interval estimates of the measurement error. This approach is demonstrated with examples of applications for flatness, cylindricity and sphericity. Four options of uniform and uneven arrangement of control points are considered and compared. It is revealed that as the number of control points decreases, the arithmetic mean of the measured error decreases, the standard deviation of the measurement error increases, and the probability of a measurement α-error increases. In general, it has been established that the number of control points can be reduced severalfold while maintaining the required measurement accuracy.
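A toy version of such a Monte Carlo study for the flatness case, assuming independent Gaussian probing noise at each control point (the sigma, trial count and point counts are illustrative, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_flatness_error(n_points, sigma=0.002, trials=5_000):
    """Monte Carlo estimate of the flatness value measured from n_points
    control points on an ideally flat surface with probing noise sigma (mm)."""
    errors = np.empty(trials)
    for t in range(trials):
        z = rng.normal(0.0, sigma, n_points)   # noisy heights of a flat plane
        errors[t] = z.max() - z.min()          # measured flatness (zone width)
    return errors.mean(), errors.std()

for n in (4, 8, 16, 32):
    mean, sd = simulate_flatness_error(n)
    print(f"{n:3d} points: mean = {mean:.4f} mm, sd = {sd:.4f} mm")
```

Even this toy model reproduces the qualitative trade-off reported above: fewer points give a smaller mean measured value but a relatively larger spread, i.e., a higher chance of a measurement α-error.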
How good are the internal controls in your group practice? Ten questions to contemplate.
Grant, Barbara J; Foley, Lori A
2002-01-01
Internal controls are the methods and procedures used by any business to prevent or detect errors, safeguard assets (especially cash) from being misappropriated, and encourage staff adherence to prescribed managerial policies. Internal controls in a medical practice differ depending on the size and complexity of the practice. The key, however, is that they prevent or detect errors and efforts to circumvent the established policies and procedures of the organization. How good are the internal controls in your group practice? This article identifies ten questions you should use to evaluate your risk of asset misappropriation.
West, Jamie; Atherton, Jennifer; Costelloe, Seán J; Pourmahram, Ghazaleh; Stretton, Adam; Cornes, Michael
2017-01-01
Preanalytical errors have previously been shown to contribute a significant proportion of errors in laboratory processes and contribute to a number of patient safety risks. Accreditation against ISO 15189:2012 requires that laboratory Quality Management Systems consider the impact of preanalytical processes in areas such as the identification and control of non-conformances, continual improvement, internal audit and quality indicators. Previous studies have shown that there is a wide variation in the definition, repertoire and collection methods for preanalytical quality indicators. The International Federation of Clinical Chemistry Working Group on Laboratory Errors and Patient Safety has defined a number of quality indicators for the preanalytical stage, and the adoption of harmonized definitions will support interlaboratory comparisons and continual improvement. There are a variety of data collection methods, including audit, manual recording processes, incident reporting mechanisms and laboratory information systems. Quality management processes such as benchmarking, statistical process control, Pareto analysis and failure mode and effect analysis can be used to review data and should be incorporated into clinical governance mechanisms. In this paper, The Association for Clinical Biochemistry and Laboratory Medicine PreAnalytical Specialist Interest Group review the various data collection methods available. Our recommendation is the use of the laboratory information management systems as a recording mechanism for preanalytical errors as this provides the easiest and most standardized mechanism of data capture.
NASA Astrophysics Data System (ADS)
Debreu, Laurent; Neveu, Emilie; Simon, Ehouarn; Le Dimet, Francois Xavier; Vidard, Arthur
2014-05-01
In order to lower the computational cost of the variational data assimilation process, we investigate the use of multigrid methods to solve the associated optimal control system. On a linear advection equation, we study the impact of the regularization term on the optimal control and the impact of discretization errors on the efficiency of the coarse grid correction step. We show that even if the optimal control problem leads to the solution of an elliptic system, numerical errors introduced by the discretization can alter the success of the multigrid methods. The view of the multigrid iteration as a preconditioner for a Krylov optimization method leads to a more robust algorithm. A scale dependent weighting of the multigrid preconditioner and the usual background error covariance matrix based preconditioner is proposed and brings significant improvements. [1] Laurent Debreu, Emilie Neveu, Ehouarn Simon, François-Xavier Le Dimet and Arthur Vidard, 2014: Multigrid solvers and multigrid preconditioners for the solution of variational data assimilation problems, submitted to QJRMS, http://hal.inria.fr/hal-00874643 [2] Emilie Neveu, Laurent Debreu and François-Xavier Le Dimet, 2011: Multigrid methods and data assimilation - Convergence study and first experiments on non-linear equations, ARIMA, 14, 63-80, http://intranet.inria.fr/international/arima/014/014005.html
DOE Office of Scientific and Technical Information (OSTI.GOV)
Able, Charles M., E-mail: cable@wfubmc.edu; Bright, Megan; Frizzell, Bart
Purpose: Statistical process control (SPC) is a quality control method used to ensure that a process is well controlled and operates with little variation. This study determined whether SPC was a viable technique for evaluating the proper operation of a high-dose-rate (HDR) brachytherapy treatment delivery system. Methods and Materials: A surrogate prostate patient was developed using Vyse ordnance gelatin. A total of 10 metal oxide semiconductor field-effect transistors (MOSFETs) were placed from prostate base to apex. Computed tomography guidance was used to accurately position the first detector in each train at the base. The plan consisted of 12 needles with 129 dwell positions delivering a prescribed peripheral dose of 200 cGy. Sixteen accurate treatment trials were delivered as planned. Subsequently, a number of treatments were delivered with errors introduced, including wrong patient, wrong source calibration, wrong connection sequence, single needle displaced inferiorly 5 mm, and entire implant displaced 2 mm and 4 mm inferiorly. Two process behavior charts (PBC), an individual and a moving range chart, were developed for each dosimeter location. Results: There were 4 false positives resulting from 160 measurements from 16 accurately delivered treatments. For the inaccurately delivered treatments, the PBC indicated that measurements made at the periphery and apex (regions of high-dose gradient) were much more sensitive to treatment delivery errors. All errors introduced were correctly identified by either the individual or the moving range PBC in the apex region. Measurements at the urethra and base were less sensitive to errors. Conclusions: SPC is a viable method for assessing the quality of HDR treatment delivery. Further development is necessary to determine the most effective dose sampling, to ensure reproducible evaluation of treatment delivery accuracy.
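The individual and moving-range (I-MR) charts used above have standard control limits; a brief sketch with made-up dose readings (the 2.66 and 3.267 factors are the usual constants for a moving range of two observations):

```python
import numpy as np

def imr_limits(x):
    """Control limits for individual (I) and moving-range (MR) charts."""
    x = np.asarray(x, dtype=float)
    mr_bar = np.abs(np.diff(x)).mean()         # average moving range
    i_limits = (x.mean() - 2.66 * mr_bar, x.mean() + 2.66 * mr_bar)
    mr_limits = (0.0, 3.267 * mr_bar)
    return i_limits, mr_limits

# doses at one detector over 16 accurate deliveries (illustrative numbers)
doses = np.array([198, 201, 199, 202, 200, 197, 203, 199,
                  200, 198, 202, 201, 199, 200, 198, 202])
(i_lo, i_hi), (_, mr_hi) = imr_limits(doses)
print(f"I chart: {i_lo:.1f}-{i_hi:.1f} cGy; MR chart: up to {mr_hi:.1f} cGy")
```

A subsequent delivery whose reading falls outside the I limits, or whose change from the previous reading exceeds the MR limit, is flagged as a potential delivery error.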
Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant
Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar
2015-01-01
Background A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedure in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied for estimation of human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in PTW system. Some suggestions to reduce the likelihood of errors, especially in the field of modifying the performance shaping factors and dependencies among tasks are provided. PMID:27014485
Plessen, Kerstin J.; Allen, Elena A.; Eichele, Heike; van Wageningen, Heidi; Høvik, Marie Farstad; Sørensen, Lin; Worren, Marius Kalsås; Hugdahl, Kenneth; Eichele, Tom
2016-01-01
Background We examined the blood-oxygen level–dependent (BOLD) activation in brain regions that signal errors and their association with intraindividual behavioural variability and adaptation to errors in children with attention-deficit/hyperactivity disorder (ADHD). Methods We acquired functional MRI data during a Flanker task in medication-naive children with ADHD and healthy controls aged 8–12 years and analyzed the data using independent component analysis. For components corresponding to performance monitoring networks, we compared activations across groups and conditions and correlated them with reaction times (RT). Additionally, we analyzed post-error adaptations in behaviour and motor component activations. Results We included 25 children with ADHD and 29 controls in our analysis. Children with ADHD displayed reduced activation to errors in cingulo-opercular regions and higher RT variability, but no differences of interference control. Larger BOLD amplitude to error trials significantly predicted reduced RT variability across all participants. Neither group showed evidence of post-error response slowing; however, post-error adaptation in motor networks was significantly reduced in children with ADHD. This adaptation was inversely related to activation of the right-lateralized ventral attention network (VAN) on error trials and to task-driven connectivity between the cingulo-opercular system and the VAN. Limitations Our study was limited by the modest sample size and imperfect matching across groups. Conclusion Our findings show a deficit in cingulo-opercular activation in children with ADHD that could relate to reduced signalling for errors. Moreover, the reduced orienting of the VAN signal may mediate deficient post-error motor adaptions. Pinpointing general performance monitoring problems to specific brain regions and operations in error processing may help to guide the targets of future treatments for ADHD. PMID:26441332
Computationally efficient control allocation
NASA Technical Reports Server (NTRS)
Durham, Wayne (Inventor)
2001-01-01
A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those resulting from either pseudo-inverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating-point operations increased from 5.5 to seven times faster than did the minimum-norm solution (the pseudo-inverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
Fully Mechanically Controlled Automated Electron Microscopic Tomography
Liu, Jinxin; Li, Hongchang; Zhang, Lei; ...
2016-07-11
Knowledge of the three-dimensional (3D) structure of each individual particle of asymmetric and flexible proteins is essential to understanding those proteins' functions, but such structures are difficult to determine. Electron tomography (ET) provides a tool for imaging a single, unique biological object from a series of tilt angles, but it is challenging to image a single protein for 3D reconstruction due to the imperfect mechanical control capability of the specimen goniometer under both medium to high magnification (approximately 50,000-160,000×) and an optimized beam coherence condition. Here, we report a fully mechanical control method for automating ET data acquisition without using beam tilt/shift processes. This method can reduce the accumulation of beam tilt/shift that was used to compensate for mechanical control error but degraded beam coherence. Our method was developed by minimizing the error of the target object center during the tilting process through a closed-loop proportional-integral (PI) control algorithm. Validation by both negative staining (NS) and cryo-electron microscopy (cryo-EM) suggests that this method has a capability comparable to other ET methods in tracking target proteins while maintaining optimized beam coherence conditions for imaging.
Lee, Wonseok; Bae, Hyoung Won; Lee, Si Hyung; Kim, Chan Yun
2017-01-01
Purpose To assess the accuracy of intraocular lens (IOL) power prediction for cataract surgery with open angle glaucoma (OAG) and to identify preoperative angle parameters correlated with postoperative unpredicted refractive errors. Materials and Methods This study comprised 45 eyes from 45 OAG subjects and 63 eyes from 63 non-glaucomatous cataract subjects (controls). We investigated differences in preoperative predicted refractive errors and postoperative refractive errors for each group. Preoperative predicted refractive errors were obtained by biometry (IOL-master) and compared to postoperative refractive errors measured by auto-refractometer 2 months postoperatively. Anterior angle parameters were determined using swept source optical coherence tomography. We investigated correlations between preoperative angle parameters [angle open distance (AOD); trabecular iris surface area (TISA); angle recess area (ARA); trabecular iris angle (TIA)] and postoperative unpredicted refractive errors. Results In patients with OAG, significant differences were noted between preoperative predicted and postoperative real refractive errors, with more myopia than predicted. No significant differences were recorded in controls. Angle parameters (AOD, ARA, TISA, and TIA) at the superior and inferior quadrant were significantly correlated with differences between predicted and postoperative refractive errors in OAG patients (-0.321 to -0.408, p<0.05). Superior quadrant AOD 500 was significantly correlated with postoperative refractive differences in multivariate linear regression analysis (β=-2.925, R2=0.404). Conclusion Clinically unpredicted refractive errors after cataract surgery were more common in OAG than in controls. Certain preoperative angle parameters, especially AOD 500 at the superior quadrant, were significantly correlated with these unpredicted errors. PMID:28120576
NASA Astrophysics Data System (ADS)
Frasson, Renato Prata de Moraes; Wei, Rui; Durand, Michael; Minear, J. Toby; Domeneghetti, Alessio; Schumann, Guy; Williams, Brent A.; Rodriguez, Ernesto; Picamilh, Christophe; Lion, Christine; Pavelsky, Tamlin; Garambois, Pierre-André
2017-10-01
The upcoming Surface Water and Ocean Topography (SWOT) mission will measure water surface heights and widths for rivers wider than 100 m. At its native resolution, SWOT height errors are expected to be on the order of meters, which prevent the calculation of water surface slopes and the use of slope-dependent discharge equations. To mitigate height and width errors, the high-resolution measurements will be grouped into reaches (~5 to 15 km), where slope and discharge are estimated. We describe three automated river segmentation strategies for defining optimum reaches for discharge estimation: (1) arbitrary lengths, (2) identification of hydraulic controls, and (3) sinuosity. We test our methodologies on 9 and 14 simulated SWOT overpasses over the Sacramento and the Po Rivers, respectively, which we compare against hydraulic models of each river. Our results show that generally, height, width, and slope errors decrease with increasing reach length. However, the hydraulic controls and the sinuosity methods led to better slopes and often height errors that were either smaller or comparable to those of arbitrary reaches of compatible sizes. Estimated discharge errors caused by the propagation of height, width, and slope errors through the discharge equation were often smaller for sinuosity (on average 8.5% for the Sacramento and 6.9% for the Po) and hydraulic control (Sacramento: 7.3% and Po: 5.9%) reaches than for arbitrary reaches of comparable lengths (Sacramento: 8.6% and Po: 7.8%). This analysis suggests that reach definition methods that preserve the hydraulic properties of the river network may lead to better discharge estimates.
Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach
NASA Astrophysics Data System (ADS)
Bähr, Hermann; Hanssen, Ramon F.
2012-12-01
An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping and a less reliable gridsearch method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency on the millimetre level have been inferred from 163 interferograms. The method itself qualifies by reliability and rigorous geometric modelling of the orbital error signal but does not consider interfering large scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.
Multi-criteria decision making approaches for quality control of genome-wide association studies.
Malovini, Alberto; Rognoni, Carla; Puca, Annibale; Bellazzi, Riccardo
2009-03-01
Experimental errors in the genotyping phases of a Genome-Wide Association Study (GWAS) can lead to false positive findings and to spurious associations. An appropriate quality control phase could minimize the effects of this kind of error. Several filtering criteria can be used to perform quality control. Currently, no formal methods have been proposed for taking these criteria and the experimenter's preferences into account at the same time. In this paper we propose two strategies for setting appropriate genotyping rate thresholds for GWAS quality control. These two approaches are based on Multi-Criteria Decision Making theory. We have applied our method to a real dataset composed of 734 individuals affected by Arterial Hypertension (AH) and 486 nonagenarians without a history of AH. The proposed strategies appear to deal with GWAS quality control in a sound way, as they lead to rationalizing and making explicit the experimenter's choices, thus providing more reproducible results.
Focus control enhancement and on-product focus response analysis methodology
NASA Astrophysics Data System (ADS)
Kim, Young Ki; Chen, Yen-Jen; Hao, Xueli; Samudrala, Pavan; Gomez, Juan-Manuel; Mahoney, Mark O.; Kamalizadeh, Ferhad; Hanson, Justin K.; Lee, Shawn; Tian, Ye
2016-03-01
With decreasing CDOF (Critical Depth Of Focus) for 20/14 nm technology and beyond, focus errors are becoming increasingly critical for on-product performance. Current on-product focus control techniques in high volume manufacturing are limited: it is difficult to define measurable focus error and to optimize focus response on product with existing methods, due to the lack of credible focus measurement methodologies. Next to developments in the imaging and focus control capability of scanners and general tool stability maintenance, on-product focus control improvements are also required to meet on-product imaging specifications. In this paper, we discuss focus monitoring, wafer (edge) fingerprint correction and on-product focus budget analysis through the diffraction-based focus (DBF) measurement methodology. Several examples will be presented showing better focus response and control on product wafers. Also, a method will be discussed for a focus interlock automation system on product in a high volume manufacturing (HVM) environment.
Fuzzy Adaptive Output Feedback Control of Uncertain Nonlinear Systems With Prescribed Performance.
Zhang, Jin-Xi; Yang, Guang-Hong
2018-05-01
This paper investigates the tracking control problem for a family of strict-feedback systems in the presence of unknown nonlinearities and immeasurable system states. A low-complexity adaptive fuzzy output feedback control scheme is proposed, based on a backstepping method. In the control design, a fuzzy adaptive state observer is first employed to estimate the unmeasured states. Then, a novel error transformation approach together with a new modification mechanism is introduced to guarantee the finite-time convergence of the output error to a predefined region and ensure the closed-loop stability. Compared with the existing methods, the main advantages of our approach are that: 1) without using extra command filters or auxiliary dynamic surface control techniques, the problem of explosion of complexity can still be addressed and 2) the design procedures are independent of the initial conditions. Finally, two practical examples are performed to further illustrate the above theoretic findings.
Living systematic reviews: 3. Statistical methods for updating meta-analyses.
Simmonds, Mark; Salanti, Georgia; McKenzie, Joanne; Elliott, Julian
2017-11-01
A living systematic review (LSR) should keep the review current as new research evidence emerges. Any meta-analyses included in the review will also need updating as new material is identified. If the aim of the review is solely to present the best current evidence, standard meta-analysis may be sufficient, provided reviewers are aware that results may change at later updates. If the review is used in a decision-making context, more caution may be needed. When using standard meta-analysis methods, the chance of incorrectly concluding that any updated meta-analysis is statistically significant when there is no effect (the type I error) increases rapidly as more updates are performed. Inaccurate estimation of any heterogeneity across studies may also lead to inappropriate conclusions. This paper considers four methods to avoid some of these statistical problems when updating meta-analyses: two methods (the law of the iterated logarithm and the Shuster method) control primarily for inflation of the type I error, while the other two (trial sequential analysis and sequential meta-analysis) control for both type I and type II errors (failing to detect a genuine effect) and take account of heterogeneity. This paper compares the methods and considers how they could be applied to LSRs. Copyright © 2017 Elsevier Inc. All rights reserved.
The role of strategies in motor learning
Taylor, Jordan A.; Ivry, Richard B.
2015-01-01
There has been renewed interest in the role of strategies in sensorimotor learning. The combination of new behavioral methods and computational methods has begun to unravel the interaction between processes related to strategic control and processes related to motor adaptation. These processes may operate on very different error signals. Strategy learning is sensitive to goal-based performance error. In contrast, adaptation is sensitive to prediction errors between the desired and actual consequences of a planned movement. The former guides what the desired movement should be, whereas the latter guides how to implement the desired movement. Whereas traditional approaches have favored serial models in which an initial strategy-based phase gives way to more automatized forms of control, it now seems that strategic and adaptive processes operate with considerable independence throughout learning, although the relative weight given the two processes will shift with changes in performance. As such, skill acquisition involves the synergistic engagement of strategic and adaptive processes. PMID:22329960
NASA Technical Reports Server (NTRS)
Wilson, John W. (Inventor); Tripathi, Ram K. (Inventor); Cucinotta, Francis A. (Inventor); Badavi, Francis F. (Inventor)
2012-01-01
An apparatus, method and program storage device for determining high-energy neutron/ion transport to a target of interest. Boundaries are defined for calculation of a high-energy neutron/ion transport to a target of interest; the high-energy neutron/ion transport to the target of interest is calculated using numerical procedures selected to reduce local truncation error by including higher order terms and to allow absolute control of propagated error by ensuring truncation error is third order in step size, and using scaling procedures for flux coupling terms modified to improve computed results by adding a scaling factor to terms describing production of j-particles from collisions of k-particles; and the calculated high-energy neutron/ion transport is provided to modeling modules to control an effective radiation dose at the target of interest.
Statistical process control methods allow the analysis and improvement of anesthesia care.
Fasting, Sigurd; Gisvold, Sven E
2003-10-01
Quality aspects of the anesthetic process are reflected in the rate of intraoperative adverse events. The purpose of this report is to illustrate how the quality of the anesthesia process can be analyzed using statistical process control methods, and exemplify how this analysis can be used for quality improvement. We prospectively recorded anesthesia-related data from all anesthetics for five years. The data included intraoperative adverse events, which were graded into four levels, according to severity. We selected four adverse events, representing important quality and safety aspects, for statistical process control analysis. These were: inadequate regional anesthesia, difficult emergence from general anesthesia, intubation difficulties and drug errors. We analyzed the underlying process using 'p-charts' for statistical process control. In 65,170 anesthetics we recorded adverse events in 18.3%; mostly of lesser severity. Control charts were used to define statistically the predictable normal variation in problem rate, and then used as a basis for analysis of the selected problems with the following results: Inadequate plexus anesthesia: stable process, but unacceptably high failure rate; Difficult emergence: unstable process, because of quality improvement efforts; Intubation difficulties: stable process, rate acceptable; Medication errors: methodology not suited because of low rate of errors. By applying statistical process control methods to the analysis of adverse events, we have exemplified how this allows us to determine if a process is stable, whether an intervention is required, and if quality improvement efforts have the desired effect.
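The p-chart used above has standard 3-sigma limits for a proportion; a brief sketch with illustrative monthly adverse-event counts (not the study's data):

```python
import numpy as np

def p_chart_limits(event_counts, sample_sizes):
    """Centre line and 3-sigma limits for a p-chart of adverse-event rates."""
    d = np.asarray(event_counts, dtype=float)
    n = np.asarray(sample_sizes, dtype=float)
    p_bar = d.sum() / n.sum()                      # overall event rate
    sigma = np.sqrt(p_bar * (1.0 - p_bar) / n)     # per-period sigma
    ucl = p_bar + 3.0 * sigma
    lcl = np.clip(p_bar - 3.0 * sigma, 0.0, None)
    return p_bar, lcl, ucl

# monthly counts of one adverse event per number of anesthetics (illustrative)
events = [12, 9, 15, 11, 20, 10]
cases = [1050, 980, 1100, 1010, 1070, 990]
p_bar, lcl, ucl = p_chart_limits(events, cases)
flags = [e / c > u for e, c, u in zip(events, cases, ucl)]
print(p_bar, flags)   # months outside the limits indicate an unstable process
```

Rates within the limits represent the predictable normal variation of a stable process; points outside them signal that something in the process has changed.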
Intelligent voltage control strategy for three-phase UPS inverters with output LC filter
NASA Astrophysics Data System (ADS)
Jung, J. W.; Leu, V. Q.; Dang, D. Q.; Do, T. D.; Mwasilu, F.; Choi, H. H.
2015-08-01
This paper presents a supervisory fuzzy neural network control (SFNNC) method for a three-phase inverter of uninterruptible power supplies (UPSs). The proposed voltage controller comprises a fuzzy neural network control (FNNC) term and a supervisory control term. The FNNC term is deliberately employed to estimate the uncertain terms, and the supervisory control term is designed based on the sliding mode technique to stabilise the system dynamic errors. To improve the learning capability, the FNNC term incorporates an online parameter training methodology, using the gradient descent method and Lyapunov stability theory. Besides, a linear load current observer that estimates the load currents is used to eliminate the need for load current sensors. The proposed SFNN controller and the observer are robust to filter inductance variations, and their stability analyses are described in detail. Experimental results obtained on a prototype UPS test bed with a TMS320F28335 DSP are presented to validate the feasibility of the proposed scheme. Verification results demonstrate that the proposed control strategy achieves smaller steady-state error and lower total harmonic distortion when subjected to nonlinear or unbalanced loads, compared to the conventional sliding mode control method.
Real-time line-width measurements: a new feature for reticle inspection systems
NASA Astrophysics Data System (ADS)
Eran, Yair; Greenberg, Gad; Joseph, Amnon; Lustig, Cornel; Mizrahi, Eyal
1997-07-01
The significance of line-width control in mask production has become greater with the shrinking of defect sizes. Two conventional methods are used for controlling line-width dimensions in the manufacture of masks for sub-micron devices: critical dimension (CD) measurement and the detection of edge defects. Achieving reliable and accurate control of line-width errors is one of the most challenging tasks in mask production. Neither of the two methods cited above guarantees the detection of line-width errors with good sensitivity over the whole mask area. This stems from the fact that CD measurement provides only statistical data on the mask features, whereas edge defect detection checks each edge by itself and does not supply information on the combined result of errors on two adjacent edges. For example, a combination of a small edge defect together with a CD non-uniformity, both of which are within the allowed tolerance, may yield a significant line-width error that will not be detected using the conventional methods (see figure 1). A new approach for the detection of line-width errors which overcomes this difficulty is presented. Based on this approach, a new sensitive line-width error detector was developed and added to Orbot's RT-8000 die-to-database reticle inspection system. This innovative detector operates continuously during the mask inspection process and scans (inspects) the entire area of the reticle for line-width errors. The detection is based on a comparison of line widths measured on both the design database and the scanned image of the reticle. In section 2, the motivation for developing this new detector is presented. The section covers an analysis of various defect types which are difficult to detect using conventional edge detection methods or, alternatively, CD measurements. In section 3, the basic concept of the new approach is introduced together with a description of the new detector and its characteristics. In section 4, the calibration process that took place in order to achieve reliable and repeatable line-width measurements is presented. A description of experiments conducted to evaluate the sensitivity of the new detector is given in section 5, followed by a report of the results of this evaluation. The conclusions are presented in section 6.
Hospital prescribing errors: epidemiological assessment of predictors
Fijn, R; Van den Bemt, P M L A; Chow, M; De Blaey, C J; De Jong-Van den Berg, L T W; Brouwers, J R B J
2002-01-01
Aims To demonstrate an epidemiological method to assess predictors of prescribing errors. Methods A retrospective case-control study, comparing prescriptions with and without errors. Results Only prescriber and drug characteristics were associated with errors. Prescriber characteristics were medical specialty (e.g. orthopaedics: OR: 3.4, 95% CI 2.1, 5.4) and prescriber status (e.g. verbal orders transcribed by nursing staff: OR: 2.5, 95% CI 1.8, 3.6). Drug characteristics were dosage form (e.g. inhalation devices: OR: 4.1, 95% CI 2.6, 6.6), therapeutic area (e.g. gastrointestinal tract: OR: 1.7, 95% CI 1.2, 2.4) and continuation of preadmission treatment (Yes: OR: 1.7, 95% CI 1.3, 2.3). Conclusions Other hospitals could use our epidemiological framework to identify their own error predictors. Our findings suggest a focus on specific prescribers, dosage forms and therapeutic areas. We also found that prescriptions originating from general practitioners involved errors; these should therefore be checked when patients are hospitalized. PMID:11874397
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael
2009-01-01
Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
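The matrix-multiplication check described above can be illustrated with Huang-Abraham-style checksums. The following minimal sketch flags a mismatch between the computed product and its checksum row/column; the function name and tolerance are illustrative, not taken from the BITFLIPS code.

    import numpy as np

    def abft_matmul(A, B, tol=1e-8):
        """Multiply A @ B with checksum row/column consistency checks.
        Returns (C, ok); ok is False if a checksum mismatch suggests a
        bit flip corrupted the computation."""
        # Augment A with a column-checksum row and B with a row-checksum column.
        Ac = np.vstack([A, A.sum(axis=0)])
        Br = np.hstack([B, B.sum(axis=1, keepdims=True)])
        Cf = Ac @ Br                      # full (checksummed) product
        C = Cf[:-1, :-1]                  # the actual result block
        # The checksum row/column of Cf must equal the sums over C.
        row_ok = np.allclose(Cf[-1, :-1], C.sum(axis=0), atol=tol)
        col_ok = np.allclose(Cf[:-1, -1], C.sum(axis=1), atol=tol)
        return C, (row_ok and col_ok)

    A = np.random.rand(8, 8); B = np.random.rand(8, 8)
    C, ok = abft_matmul(A, B)
    print("checksums consistent:", ok)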
Stevens, Allen D.; Hernandez, Caleb; Jones, Seth; Moreira, Maria E.; Blumen, Jason R.; Hopkins, Emily; Sande, Margaret; Bakes, Katherine; Haukoos, Jason S.
2016-01-01
Background Medication dosing errors remain commonplace and may result in potentially life-threatening outcomes, particularly for pediatric patients where dosing often requires weight-based calculations. Novel medication delivery systems that may reduce dosing errors resonate with national healthcare priorities. Our goal was to evaluate novel, prefilled medication syringes labeled with color-coded volumes corresponding to the weight-based dosing of the Broselow Tape, compared to conventional medication administration, in simulated prehospital pediatric resuscitation scenarios. Methods We performed a prospective, block-randomized, cross-over study, where 10 full-time paramedics each managed two simulated pediatric arrests in situ using either prefilled, color-coded-syringes (intervention) or their own medication kits stocked with conventional ampoules (control). Each paramedic was paired with two emergency medical technicians to provide ventilations and compressions as directed. The ambulance patient compartment and the intravenous medication port were video recorded. Data were extracted from video review by blinded, independent reviewers. Results Median time to delivery of all doses for the intervention and control groups was 34 (95% CI: 28–39) seconds and 42 (95% CI: 36–51) seconds, respectively (difference = 9 [95% CI: 4–14] seconds). Using the conventional method, 62 doses were administered with 24 (39%) critical dosing errors; using the prefilled, color-coded syringe method, 59 doses were administered with 0 (0%) critical dosing errors (difference = 39%, 95% CI: 13–61%). Conclusions A novel color-coded, prefilled syringe decreased time to medication administration and significantly reduced critical dosing errors by paramedics during simulated prehospital pediatric resuscitations. PMID:26247145
NASA Technical Reports Server (NTRS)
Stepner, D. E.; Mehra, R. K.
1973-01-01
A new method of extracting aircraft stability and control derivatives from flight test data is developed based on the maximum likelihood criterion. It is shown that this new method is capable of processing data from both linear and nonlinear models, both with and without process noise, and includes output error and equation error methods as special cases. The first application of this method to flight test data is reported for lateral maneuvers of the HL-10 and M2/F3 lifting bodies, including the extraction of stability and control derivatives in the presence of wind gusts. All the problems encountered in this identification study are discussed. Several different methods (including a priori weighting, parameter fixing and constrained parameter values) for dealing with identifiability and uniqueness problems are introduced and the results given. A method for the design of optimal inputs for identifying the parameters of linear dynamic systems is also given. The criterion used for the optimization is the sensitivity of the system output to the unknown parameters. Several simple examples are first given, and then the results of an extensive stability and control derivative identification simulation for a C-8 aircraft are detailed.
Methods as Tools: A Response to O'Keefe.
ERIC Educational Resources Information Center
Hewes, Dean E.
2003-01-01
Tries to distinguish the key insights from some distortions by clarifying the goals of experiment-wise error control that D. O'Keefe correctly identifies as vague and open to misuse. Concludes that a better understanding of the goal of experiment-wise error correction erases many of these "absurdities," but the clarifications necessary…
The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1994-01-01
When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst error correction capability is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
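As an illustration of the innermost code, here is a minimal bitwise sketch of a CRC-16 of the kind recommended by CCSDS. The commonly cited parameters are generator polynomial x^16 + x^12 + x^5 + 1 (0x1021) with an all-ones preset; treat the exact parameters, the function name, and the example frame as assumptions rather than a restatement of the simulator's implementation.

    def crc16_ccsds(data: bytes, poly=0x1021, init=0xFFFF) -> int:
        """Bitwise CRC-16 as commonly given for CCSDS telemetry frames."""
        crc = init
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ poly) if (crc & 0x8000) else (crc << 1)
                crc &= 0xFFFF
        return crc

    frame = b"example transfer frame"
    # Append the 16 parity bits to form an (n, n-16) codeword.
    codeword = frame + crc16_ccsds(frame).to_bytes(2, "big")
    # Self-check: recomputing over the whole codeword yields a zero residue.
    assert crc16_ccsds(codeword) == 0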
Comparison of survey and photogrammetry methods to position gravity data, Yucca Mountain, Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ponce, D.A.; Wu, S.S.C.; Spielman, J.B.
1985-12-31
Locations of gravity stations at Yucca Mountain, Nevada, were determined by a survey using an electronic distance-measuring device and by a photogrammetric method. The data from both methods were compared to determine if horizontal and vertical coordinates developed from photogrammetry are sufficiently accurate to position gravity data at the site. The results show that elevations from the photogrammetric data have a mean difference of 0.57 ± 0.70 m when compared with those of the surveyed data. Comparison of the horizontal control shows that the two methods agreed to within 0.01 minute. At a latitude of 45°, an error of 0.01 minute (18 m) corresponds to a gravity anomaly error of 0.015 mGal. Bouguer gravity anomalies are most sensitive to errors in elevation; thus elevation is the determining factor in choosing between photogrammetric and survey methods to position gravity data. Because gravity station positions are difficult to locate on aerial photographs, photogrammetric positions are not always exactly at the gravity station; therefore, large disagreements may appear when comparing electronic and photogrammetric measurements. A mean photogrammetric elevation error of 0.57 m corresponds to a gravity anomaly error of 0.11 mGal. Errors of 0.11 mGal are too large for high-precision or detailed gravity measurements but acceptable for regional work. 1 ref., 2 figs., 4 tabs.
Adaptive control strategies for flexible robotic arm
NASA Technical Reports Server (NTRS)
Bialasiewicz, Jan T.
1993-01-01
The motivation of this research came about when a neural network direct adaptive control scheme was applied to control the tip position of a flexible robotic arm. Satisfactory control performance was not attainable due to the inherent non-minimum phase characteristics of the flexible robotic arm tip. Most of the existing neural network control algorithms are based on the direct method and exhibit very high sensitivity, if not unstable closed-loop behavior. Therefore, a neural self-tuning control (NSTC) algorithm is developed, applied to this problem, and shown to give promising results. Simulation results of the NSTC scheme and the conventional self-tuning (STR) control scheme are used to examine performance factors such as control tracking mean square error, estimation mean square error, transient response, and steady state response.
Experimental magic state distillation for fault-tolerant quantum computing.
Souza, Alexandre M; Zhang, Jingfu; Ryan, Colm A; Laflamme, Raymond
2011-01-25
Any physical quantum device for quantum information processing (QIP) is subject to errors in implementation. In order to be reliable and efficient, quantum computers will need error-correcting or error-avoiding methods. Fault-tolerance achieved through quantum error correction will be an integral part of quantum computers. Of the many methods that have been discovered to implement it, a highly successful approach has been to use transversal gates and specific initial states. A critical element for its implementation is the availability of high-fidelity initial states, such as |0〉 and the 'magic state'. Here, we report an experiment, performed in a nuclear magnetic resonance (NMR) quantum processor, showing sufficient quantum control to improve the fidelity of imperfect initial magic states by distilling five of them into one with higher fidelity.
NASA Astrophysics Data System (ADS)
Bagheri Tolabi, Hajar; Hosseini, Rahil; Shakarami, Mahmoud Reza
2016-06-01
This article presents a novel hybrid optimization approach for a nonlinear controller of a distribution static compensator (DSTATCOM). The DSTATCOM is connected to a distribution system with the distributed generation units. The nonlinear control is based on partial feedback linearization. Two proportional-integral-derivative (PID) controllers regulate the voltage and track the output in this control system. In the conventional scheme, the trial-and-error method is used to determine the PID controller coefficients. This article uses a combination of a fuzzy system, simulated annealing (SA) and intelligent water drops (IWD) algorithms to optimize the parameters of the controllers. The obtained results reveal that the response of the optimized controlled system is effectively improved by finding a high-quality solution. The results confirm that using the tuning method based on the fuzzy-SA-IWD can significantly decrease the settling and rising times, the maximum overshoot and the steady-state error of the voltage step response of the DSTATCOM. The proposed hybrid tuning method for the partial feedback linearizing (PFL) controller achieved better regulation of the direct current voltage for the capacitor within the DSTATCOM. Furthermore, in the event of a fault the proposed controller tuned by the fuzzy-SA-IWD method showed better performance than the conventional controller or the PFL controller without optimization by the fuzzy-SA-IWD method with regard to both fault duration and clearing times.
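To make the stochastic-search part of the tuning concrete, the following generic simulated-annealing sketch searches over PID gains (Kp, Ki, Kd). The cost function, cooling schedule, and step size are illustrative stand-ins; the paper's actual method additionally combines a fuzzy system with the IWD algorithm, which is not reproduced here.

    import math, random

    def anneal_pid(cost, x0, T0=1.0, alpha=0.95, iters=300, step=0.2):
        """Generic simulated-annealing search over (Kp, Ki, Kd)."""
        x, fx = list(x0), cost(x0)
        best, fbest = list(x), fx
        T = T0
        for _ in range(iters):
            cand = [max(0.0, g + random.gauss(0, step)) for g in x]
            fc = cost(cand)
            # Accept improvements always; accept worse moves with Boltzmann probability.
            if fc < fx or random.random() < math.exp(-(fc - fx) / T):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = list(x), fx
            T *= alpha                    # geometric cooling
        return best, fbest

    # Toy cost standing in for a simulated voltage-step response metric.
    target = (2.0, 0.5, 0.1)
    cost = lambda g: sum((gi - ti) ** 2 for gi, ti in zip(g, target))
    print(anneal_pid(cost, (1.0, 1.0, 1.0)))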
Residents' numeric inputting error in computerized physician order entry prescription.
Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong
2016-04-01
Computerized physician order entry (CPOE) systems with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Various factors, such as the numeric inputting methods in human computer interaction (HCI), produce different error rates and types, but have received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors in prescriptions, as well as to categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting method (numeric row in the main keyboard vs. numeric keypad) and urgency level (urgent situation vs. non-urgent situation). Multiple aspects of participants' prescribing behavior were also measured in sober prescribing situations. The results revealed that in urgent situations, participants were prone to make mistakes when using the numeric row in the main keyboard. With control of performance in the sober prescribing situation, the effects of the input methods disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were either omission or substitution types, but the proportion of transposition and intrusion error types was significantly higher than in previous research. Among the numbers 3, 8, and 9, the less common digits used in prescriptions, the error rate was higher, posing a substantial risk to patient safety. Urgency played a more important role in CPOE numeric typing errors than typing skills and typing habits. It is recommended that the numeric keypad be used for inputting, as it had lower error rates in urgent situations. An alternative design could consider increasing the sensitivity of the keys with lower frequency of occurrence and of decimals. To improve the usability of CPOE, numeric keyboard design and error detection could benefit from the spatial incidence of errors found in this study.
Technique for positioning hologram for balancing large data capacity with fast readout
NASA Astrophysics Data System (ADS)
Shimada, Ken-ichi; Hosaka, Makoto; Yamazaki, Kazuyoshi; Onoe, Shinsuke; Ide, Tatsuro
2017-09-01
The technical difficulty of balancing large data capacity with a high data transfer rate in holographic data storage systems (HDSSs) is significantly high because of tight tolerances for physical perturbation. From a system margin perspective in terabyte-class HDSSs, the positioning error of a holographic disc should be within about 10 µm to ensure high readout quality. Furthermore, fine control of the positioning should be accomplished within a time frame of about 10 ms for a high data transfer rate of the Gbps class, while a conventional method based on servo control of spindle or sled motors can rarely satisfy the requirement. In this study, a new compensation method for the effect of positioning error, which precisely controls the positioning of a Nyquist aperture instead of a holographic disc, has been developed. The method relaxes the markedly low positional tolerance of a holographic disc. Moreover, owing to the markedly light weight of the aperture, positioning control within the required time frame becomes feasible.
Alternative Attitude Commanding and Control for Precise Spacecraft Landing
NASA Technical Reports Server (NTRS)
Singh, Gurkirpal
2004-01-01
A report proposes an alternative method of control for precision landing on a remote planet. In the traditional method, the attitude of a spacecraft is required to track a commanded translational acceleration vector, which is generated at each time step by solving a two-point boundary value problem. No requirement of continuity is imposed on the acceleration. The translational acceleration does not necessarily vary smoothly. Tracking of a non-smooth acceleration causes the vehicle attitude to exhibit undesirable transients and poor pointing stability behavior. In the alternative method, the two-point boundary value problem is not solved at each time step. A smooth reference position profile is computed. The profile is recomputed only when the control errors get sufficiently large. The nominal attitude is still required to track the smooth reference acceleration command. A steering logic is proposed that controls the position and velocity errors about the reference profile by perturbing the attitude slightly about the nominal attitude. The overall pointing behavior is therefore smooth, greatly reducing the degree of pointing instability.
Effect of accuracy of wind power prediction on power system operator
NASA Technical Reports Server (NTRS)
Schlueter, R. A.; Sigari, G.; Costi, T.
1985-01-01
This research project proposed a modified unit commitment that schedules connection and disconnection of generating units in response to load. A modified generation control is also proposed that controls steam units under automatic generation control, fast responding diesels, gas turbines and hydro units under a feedforward control, and wind turbine array output under a closed loop array control. This modified generation control and unit commitment require prediction of trend wind power variation one hour ahead and the prediction of error in this trend wind power prediction one half hour ahead. An improved meter for predicting trend wind speed variation is developed. Methods for accurately simulating the wind array power from a limited number of wind speed prediction records was developed. Finally, two methods for predicting the error in the trend wind power prediction were developed. This research provides a foundation for testing and evaluating the modified unit commitment and generation control that was developed to maintain operating reliability at a greatly reduced overall production cost for utilities with wind generation capacity.
Control and synchronisation of a novel seven-dimensional hyperchaotic system with active control
NASA Astrophysics Data System (ADS)
Varan, Metin; Akgul, Akif
2018-04-01
In this work, an active control method is proposed for controlling and synchronising seven-dimensional (7D) hyperchaotic systems. A seven-dimensional hyperchaotic system is considered for the implementation and is also investigated via time series, phase portraits and bifurcation diagrams. To assess the impact of the active controllers on the global asymptotic stability of the synchronisation and control errors, a Lyapunov function is used. Numerical analysis is performed to reveal the effectiveness of the applied active control method, and the results are discussed.
Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods
Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun
2016-01-01
This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, increasing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455
Terminal iterative learning control based station stop control of a train
NASA Astrophysics Data System (ADS)
Hou, Zhongsheng; Wang, Yi; Yin, Chenkun; Tang, Tao
2011-07-01
The terminal iterative learning control (TILC) method is introduced for the first time into the field of train station stop control, and three TILC-based algorithms are proposed in this study. The TILC-based train station stop control approach uses the terminal stop position error from the previous braking process to update the current control profile. The initial braking position, the braking force, or their combination is chosen as the control input, and a corresponding learning law is developed. The terminal stop position error of each algorithm is rigorously shown to converge to a small region related to the initial offset of the braking position. The validity of the proposed algorithms is verified by illustrative numerical examples.
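A minimal sketch of the iteration-domain idea follows, assuming the braking-onset position is the control input and using a toy affine plant in place of a train model; the learning gain L and all numbers are illustrative.

    def simulate_braking(p_onset):
        # Toy plant: the stopping point is an unknown affine map of braking onset.
        return 0.8 * p_onset + 30.0

    target, p, L = 100.0, 80.0, 1.0
    for k in range(10):
        e = simulate_braking(p) - target   # terminal stop-position error
        p = p - L * e                      # iteration-domain TILC-style update
        print(f"trial {k}: error = {e:.3f}")

With this plant the error contracts by a factor of 0.2 per trial, illustrating how the terminal error alone, measured once per braking process, can drive the update.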
Coordinated joint motion control system with position error correction
Danko, George [Reno, NV
2011-11-22
Disclosed are an articulated hydraulic machine supporting, control system and control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.
Coordinated joint motion control system with position error correction
Danko, George L.
2016-04-05
Disclosed are an articulated hydraulic machine supporting, control system and control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.
Hovakimyan, N; Nardi, F; Calise, A; Kim, Nakwan
2002-01-01
We consider adaptive output feedback control of uncertain nonlinear systems, in which both the dynamics and the dimension of the regulated system may be unknown. However, the relative degree of the regulated output is assumed to be known. Given a smooth reference trajectory, the problem is to design a controller that forces the system measurement to track it with bounded errors. The classical approach requires a state observer. Finding a good observer for an uncertain nonlinear system is not an obvious task. We argue that it is sufficient to build an observer for the output tracking error. Ultimate boundedness of the error signals is shown through Lyapunov's direct method. The theoretical results are illustrated in the design of a controller for a fourth-order nonlinear system of relative degree two and a high-bandwidth attitude command system for a model R-50 helicopter.
Non-Static error tracking control for near space airship loading platform
NASA Astrophysics Data System (ADS)
Ni, Ming; Tao, Fei; Yang, Jiandong
2018-01-01
A control scheme based on an internal model with non-static error is presented to handle the uncertainty of the near space airship loading platform system. The uncertainty is represented as interval variations in the stability and control derivatives. By formulating the tracking problem of the uncertain system as a robust state-feedback stabilization problem for an augmented system, a sufficient condition for the existence of a robust tracking controller is derived in the form of a linear matrix inequality (LMI). Finally, simulation results show that the new method not only has better anti-jamming performance, but also improves the dynamic performance of the high-order system.
Development of a Work Control System for Propulsion Testing at NASA Stennis
NASA Technical Reports Server (NTRS)
Messer, Elizabeth A.
2005-01-01
This paper will explain the requirements and steps taken to develop the current Propulsion Test Directorate (PTD) electronic work control system for Test Operations. The PTD Work Control System includes work authorization and technical instruction documents, such as test preparation sheets, discrepancy reports, test requests, pre-test briefing reports, and other test operations supporting tools. The environment that existed in the E-Complex test areas in the late 1990s was one of enormous growth, which brought people of diverse backgrounds together for the sole purpose of testing propulsion hardware. The problem that faced us was that these newly formed teams did not have a consistent and clearly understood method for writing, performing or verifying work. A paper system was developed that would allow the teams to use the same forms, but this still presented problems in the large number of errors occurring, such as lost paperwork and inconsistent implementation. In a sampling of errors in August 1999, the paper work control system encountered 250 errors out of 230 documents released and completed, for an error rate of 111%.
Method for controlling a vehicle with two or more independently steered wheels
Reister, D.B.; Unseren, M.A.
1995-03-28
A method is described for independently controlling each steerable drive wheel of a vehicle with two or more such wheels. An instantaneous center of rotation target and a tangential velocity target are inputs to a wheel target system which sends the velocity target and a steering angle target for each drive wheel to a pseudo-velocity target system. The pseudo-velocity target system determines a pseudo-velocity target which is compared to a current pseudo-velocity to determine a pseudo-velocity error. The steering angle targets and the steering angles are inputs to a steering angle control system which outputs to the steering angle encoders, which measure the steering angles. The pseudo-velocity error, the rate of change of the pseudo-velocity error, and the wheel slip between each pair of drive wheels are used to calculate intermediate control variables which, along with the steering angle targets are used to calculate the torque to be applied at each wheel. The current distance traveled for each wheel is then calculated. The current wheel velocities and steering angle targets are used to calculate the cumulative and instantaneous wheel slip and the current pseudo-velocity. 6 figures.
Active Control of Inlet Noise on the JT15D Turbofan Engine
NASA Technical Reports Server (NTRS)
Smith, Jerome P.; Hutcheson, Florence V.; Burdisso, Ricardo A.; Fuller, Chris R.
1999-01-01
This report presents the key results obtained by the Vibration and Acoustics Laboratories at Virginia Tech over the year from November 1997 to December 1998 on the Active Noise Control of Turbofan Engines research project funded by NASA Langley Research Center. The concept of implementing active noise control techniques with fuselage-mounted error sensors is investigated both analytically and experimentally. The analytical part of the project involves the continued development of an advanced modeling technique to provide prediction and design guidelines for application of active noise control techniques to large, realistic high bypass engines of the type on which active control methods are expected to be applied. Results from the advanced analytical model are presented that show the effectiveness of the control strategies, and the analytical results presented for fuselage error sensors show good agreement with the experimentally observed results and provide additional insight into the control phenomena. Additional analytical results are presented for active noise control used in conjunction with a wavenumber sensing technique. The experimental work is carried out on a running JT15D turbofan jet engine in a test stand at Virginia Tech. The control strategy used in these tests was the feedforward Filtered-X LMS algorithm. The control inputs were supplied by single and multiple circumferential arrays of acoustic sources equipped with neodymium iron cobalt magnets mounted upstream of the fan. The reference signal was obtained from an inlet mounted eddy current probe. The error signals were obtained from a number of pressure transducers flush-mounted in a simulated fuselage section mounted in the engine test cell. The active control methods are investigated when implemented with the control sources embedded within the acoustically absorptive material on a passively-lined inlet. The experimental results show that the combination of active control techniques with fuselage-mounted error sensors and passive control techniques is an effective means of reducing radiated noise from turbofan engines. Strategic selection of the location of the error transducers is shown to be effective for reducing the radiation towards particular directions in the farfield. An analytical model is used to predict the behavior of the control system and to guide the experimental design configurations, and the analytical results presented show good agreement with the experimentally observed results.
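The feedforward Filtered-X LMS algorithm named above can be sketched in a few lines. The signals, path models, filter length, and step size below are toy stand-ins for the engine-rig hardware, not the project's controller.

    import numpy as np

    def fxlms(x, d, s, s_hat, L=32, mu=5e-3):
        """Single-channel Filtered-X LMS sketch.
        x: reference signal (e.g. from an inlet probe), d: disturbance at the
        error sensor, s: secondary path (actuator to sensor), s_hat: its estimate."""
        w = np.zeros(L)                       # adaptive control filter
        xbuf = np.zeros(L)                    # reference history
        fbuf = np.zeros(L)                    # filtered-reference history
        ybuf = np.zeros(len(s))               # actuator output history
        e = np.zeros(len(x))
        for n in range(len(x)):
            xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
            ybuf = np.roll(ybuf, 1); ybuf[0] = w @ xbuf      # anti-noise command
            e[n] = d[n] + s @ ybuf            # residual at the error sensor
            # Reference filtered through the secondary-path estimate.
            fbuf = np.roll(fbuf, 1); fbuf[0] = s_hat @ xbuf[: len(s_hat)]
            w -= mu * e[n] * fbuf             # LMS update on the filtered reference
        return e

    t = np.arange(4000)
    x = np.sin(0.1 * t)                       # tonal reference (toy)
    s = np.array([0.0, 0.9, 0.3])             # toy secondary path
    d = 0.8 * np.sin(0.1 * (t - 5))           # correlated disturbance
    res = fxlms(x, d, s, s, L=16)
    print("initial/final residual power:", res[:200].var() / res[-200:].var())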
NASA Astrophysics Data System (ADS)
James, Mike R.; Robson, Stuart; d'Oleire-Oltmanns, Sebastian; Niethammer, Uwe
2016-04-01
Structure-from-motion (SfM) algorithms are greatly facilitating the production of detailed topographic models based on images collected by unmanned aerial vehicles (UAVs). However, SfM-based software does not generally provide the rigorous photogrammetric analysis required to fully understand survey quality. Consequently, error related to problems in control point data or the distribution of control points can remain undiscovered. Even if these errors are not large in magnitude, they can be systematic, and thus have strong implications for the use of products such as digital elevation models (DEMs) and orthophotos. Here, we develop a Monte Carlo approach to (1) improve the accuracy of products when SfM-based processing is used and (2) reduce the associated field effort by identifying suitable lower density deployments of ground control points. The method highlights over-parameterisation during camera self-calibration and provides enhanced insight into control point performance when rigorous error metrics are not available. Processing was implemented using commonly used SfM-based software (Agisoft PhotoScan), which we augment with semi-automated and automated GCP image measurement. We apply the Monte Carlo method to two contrasting case studies - an erosion gully survey (Taurodont, Morocco) carried out with a fixed-wing UAV, and an active landslide survey (Super-Sauze, France), acquired using a manually controlled quadcopter. The results highlight the differences in the control requirements for the two sites, and we explore the implications for future surveys. We illustrate DEM sensitivity to critical processing parameters and show how the use of appropriate parameter values increases DEM repeatability and reduces the spatial variability of error due to processing artefacts.
Historical shoreline mapping (I): improving techniques and reducing positioning errors
Thieler, E. Robert; Danforth, William W.
1994-01-01
A critical need exists among coastal researchers and policy-makers for a precise method to obtain shoreline positions from historical maps and aerial photographs. A number of methods that vary widely in approach and accuracy have been developed to meet this need. None of the existing methods, however, address the entire range of cartographic and photogrammetric techniques required for accurate coastal mapping. Thus, their application to many typical shoreline mapping problems is limited. In addition, no shoreline mapping technique provides an adequate basis for quantifying the many errors inherent in shoreline mapping using maps and air photos. As a result, current assessments of errors in air photo mapping techniques generally (and falsely) assume that errors in shoreline positions are represented by the sum of a series of worst-case assumptions about digitizer operator resolution and ground control accuracy. These assessments also ignore altogether other errors that commonly approach ground distances of 10 m. This paper provides a conceptual and analytical framework for improved methods of extracting geographic data from maps and aerial photographs. We also present a new approach to shoreline mapping using air photos that revises and extends a number of photogrammetric techniques. These techniques include (1) developing spatially and temporally overlapping control networks for large groups of photos; (2) digitizing air photos for use in shoreline mapping; (3) preprocessing digitized photos to remove lens distortion and film deformation effects; (4) simultaneous aerotriangulation of large groups of spatially and temporally overlapping photos; and (5) using a single-ray intersection technique to determine geographic shoreline coordinates and express the horizontal and vertical error associated with a given digitized shoreline. As long as historical maps and air photos are used in studies of shoreline change, there will be a considerable amount of error (on the order of several meters) present in shoreline position and rate-of-change calculations. The techniques presented in this paper, however, provide a means to reduce and quantify these errors so that realistic assessments of the technological noise (as opposed to geological noise) in geographic shoreline positions can be made.
Learning to Fail in Aphasia: An Investigation of Error Learning in Naming
Middleton, Erica L.; Schwartz, Myrna F.
2013-01-01
Purpose To determine if the naming impairment in aphasia is influenced by error learning and if error learning is related to the type of retrieval strategy. Method Nine participants with aphasia and ten neurologically intact controls named familiar proper noun concepts. When experiencing tip-of-the-tongue naming failure (TOT) in an initial TOT-elicitation phase, participants were instructed to adopt phonological or semantic self-cued retrieval strategies. In the error learning manipulation, items evoking TOT states during TOT-elicitation were randomly assigned to a short or long time condition, in which participants were encouraged to continue trying to retrieve the name for either 20 seconds (short interval) or 60 seconds (long interval). The incidence of TOT on the same items was measured on a post test after 48 hours. Error learning was defined as a higher rate of recurrent TOTs (TOT at both TOT-elicitation and post test) for items assigned to the long (versus short) time condition. Results In the phonological condition, participants with aphasia showed error learning, whereas controls showed a pattern opposite to error learning. There was no evidence for error learning in the semantic condition for either group. Conclusion Error learning is operative in aphasia, but dependent on the type of strategy employed during naming failure. PMID:23816662
NASA Astrophysics Data System (ADS)
Cole, Matthew O. T.; Shinonawanik, Praween; Wongratanaphisan, Theeraphong
2018-05-01
Structural flexibility can impact negatively on machine motion control systems by causing unmeasured positioning errors and vibration at locations where accurate motion is important for task execution. To compensate for these effects, command signal prefiltering may be applied. In this paper, a new FIR prefilter design method is described that combines finite-time vibration cancellation with dynamic compensation properties. The time-domain formulation exploits the relation between tracking error and the moment values of the prefilter impulse response function. Optimal design solutions for filters having minimum H2 norm are derived and evaluated. The control approach does not require additional actuation or sensing and can be effective even without complete and accurate models of the machine dynamics. Results from implementation and testing on an experimental high-speed manipulator having a Delta robot architecture with directionally compliant end-effector are presented. The results show the importance of prefilter moment values for tracking performance and confirm that the proposed method can achieve significant reductions in both peak and RMS tracking error, as well as settling time, for complex motion patterns.
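Since the formulation ties tracking error to the moment values of the prefilter impulse response, a small helper that computes those moments illustrates the quantity being designed for. The example filter and the reading of the moments are illustrative assumptions, not the paper's design procedure.

    import numpy as np

    def impulse_moments(h, T, kmax=2):
        """k-th temporal moments m_k = sum_n h[n] (n T)^k of an FIR prefilter
        impulse response; m_0 = 1 preserves DC gain, and higher moments relate
        to the low-frequency tracking error in moment-based formulations."""
        n = np.arange(len(h))
        return [float(np.sum(h * (n * T) ** k)) for k in range(kmax + 1)]

    h = np.array([0.5, 0.0, 0.5])          # toy vibration-cancelling prefilter
    print(impulse_moments(h, T=0.01))      # m0, m1, m2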
Design and Validation of an Infrared Badal Optometer for Laser Speckle (IBOLS)
Teel, Danielle F. W.; Copland, R. James; Jacobs, Robert J.; Wells, Thad; Neal, Daniel R.; Thibos, Larry N.
2009-01-01
Purpose To validate the design of an infrared wavefront aberrometer with a Badal optometer employing the principle of laser speckle generated by a spinning disk and infrared light. The instrument was designed for subjective meridional refraction in infrared light by human patients. Methods Validation employed a model eye with known refractive error determined with an objective infrared wavefront aberrometer. The model eye was used to produce a speckle pattern on an artificial retina with controlled amounts of ametropia introduced with auxiliary ophthalmic lenses. A human observer performed the psychophysical task of observing the speckle pattern (with the aid of a video camera sensitive to infrared radiation) formed on the artificial retina. Refraction was performed by adjusting the vergence of incident light with the Badal optometer to nullify the motion of laser speckle. Validation of the method was performed for different levels of spherical ametropia and for various configurations of an astigmatic model eye. Results Subjective measurements of meridional refractive error over the range −4D to + 4D agreed with astigmatic refractive errors predicted by the power of the model eye in the meridian of motion of the spinning disk. Conclusions Use of a Badal optometer to control laser speckle is a valid method for determining subjective refractive error at infrared wavelengths. Such an instrument will be useful for comparing objective measures of refractive error obtained for the human eye with autorefractors and wavefront aberrometers that employ infrared radiation. PMID:18772719
Controlling false-negative errors in microarray differential expression analysis: a PRIM approach.
Cole, Steve W; Galic, Zoran; Zack, Jerome A
2003-09-22
Theoretical considerations suggest that current microarray screening algorithms may fail to detect many true differences in gene expression (Type II analytic errors). We assessed 'false negative' error rates in differential expression analyses by conventional linear statistical models (e.g. t-test), microarray-adapted variants (e.g. SAM, Cyber-T), and a novel strategy based on hold-out cross-validation. The latter approach employs the machine-learning algorithm Patient Rule Induction Method (PRIM) to infer minimum thresholds for reliable change in gene expression from Boolean conjunctions of fold-induction and raw fluorescence measurements. Monte Carlo analyses based on four empirical data sets show that conventional statistical models and their microarray-adapted variants overlook more than 50% of genes showing significant up-regulation. Conjoint PRIM prediction rules recover approximately twice as many differentially expressed transcripts while maintaining strong control over false-positive (Type I) errors. As a result, experimental replication rates increase and total analytic error rates decline. RT-PCR studies confirm that gene inductions detected by PRIM but overlooked by other methods represent true changes in mRNA levels. PRIM-based conjoint inference rules thus represent an improved strategy for high-sensitivity screening of DNA microarrays. Freestanding JAVA application at http://microarray.crump.ucla.edu/focus
Airborne data measurement system errors reduction through state estimation and control optimization
NASA Astrophysics Data System (ADS)
Sebryakov, G. G.; Muzhichek, S. M.; Pavlov, V. I.; Ermolin, O. V.; Skrinnikov, A. A.
2018-02-01
The paper discusses the problem of airborne data measurement system errors reduction through state estimation and control optimization. The approaches are proposed based on the methods of experiment design and the theory of systems with random abrupt structure variation. The paper considers various control criteria as applied to an aircraft data measurement system. The physics of criteria is explained, the mathematical description and the sequence of steps for each criterion application is shown. The formula is given for airborne data measurement system state vector posterior estimation based for systems with structure variations.
On Inertial Body Tracking in the Presence of Model Calibration Errors
Miezal, Markus; Taetz, Bertram; Bleser, Gabriele
2016-01-01
In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments—the IMU-to-segment calibrations, subsequently called I2S calibrations—to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and segment length errors in the tested ranges. Errors in the I2S orientations were, however, linearly propagated into the estimated segment orientations. In the absence of magnetic disturbances, severe model calibration errors and fast motion changes, the newly developed IMU centered EKF-based method yielded comparable results with lower computational complexity. PMID:27455266
Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus
2015-10-01
In neuropsychological research, single cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMMs). We first show that a t-test for the significance of a dummy-coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension of this idea, we then generalized the modified t-test to repeated-measures data by using LMMs to compare the performance difference between two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power was tested based on Monte Carlo simulations. We found that, starting with about 15-20 participants in the control sample, Type I error rates were close to the nominal Type I error rate using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single case and a control sample.
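For reference, the modified t-test of Crawford and colleagues that the article builds on can be computed directly; a minimal sketch (two-tailed p-value, SciPy assumed available, data values illustrative):

    import math
    from scipy import stats

    def crawford_howell(case, controls):
        """Modified t-test comparing a single case to a small control sample
        (Crawford & Howell, 1998); returns t and the two-tailed p-value."""
        n = len(controls)
        m = sum(controls) / n
        s = math.sqrt(sum((c - m) ** 2 for c in controls) / (n - 1))  # sample SD
        t = (case - m) / (s * math.sqrt((n + 1) / n))
        return t, 2 * stats.t.sf(abs(t), df=n - 1)

    print(crawford_howell(42.0, [55, 51, 58, 49, 53, 56, 50, 52]))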
Increased Error-Related Negativity (ERN) in Childhood Anxiety Disorders: ERP and Source Localization
ERIC Educational Resources Information Center
Ladouceur, Cecile D.; Dahl, Ronald E.; Birmaher, Boris; Axelson, David A.; Ryan, Neal D.
2006-01-01
Background: In this study we used event-related potentials (ERPs) and source localization analyses to track the time course of neural activity underlying response monitoring in children diagnosed with an anxiety disorder compared to age-matched low-risk normal controls. Methods: High-density ERPs were examined following errors on a flanker task…
NASA Technical Reports Server (NTRS)
Mitchell, J. R.
1972-01-01
The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.
Hodgson, Catherine; Lambon Ralph, Matthew A
2008-01-01
Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study utilised a novel method: tempo picture naming. Experiment 1 showed that, compared to standard deadline naming tasks, participants made more errors on the tempo picture naming tasks. Further, RTs were longer and more errors were produced for living items than non-living items, a pattern seen in both semantic dementia and semantically impaired stroke aphasic patients. Experiment 2 showed that providing the initial phoneme as a cue enhanced performance, whereas providing an incorrect phonemic cue further reduced performance. These results support the contention that the tempo picture naming paradigm reduces the time allowed for controlled semantic processing, causing increased error rates. This experimental procedure would, therefore, appear to mimic the performance of aphasic patients with multi-modal semantic impairment that results from poor semantic control rather than from the degradation of semantic representations observed in semantic dementia [Jefferies, E. A., & Lambon Ralph, M. A. (2006). Semantic impairment in stroke aphasia vs. semantic dementia: A case-series comparison. Brain, 129, 2132-2147]. Further implications for theories of semantic cognition and models of speech processing are discussed.
Comparison of Predictive Modeling Methods of Aircraft Landing Speed
NASA Technical Reports Server (NTRS)
Diallo, Ousmane H.
2012-01-01
Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing necessary controller interventions for avoiding separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction model are used to build a multi-regression technique of the response surface equation (RSE). Data obtained from operations of a major airline for a passenger transport aircraft type at the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model's errors represents an over 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.
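A response-surface regression of the kind described can be sketched with ordinary least squares on quadratic features. The predictors and coefficients below are synthetic placeholders, not the airline data.

    import numpy as np

    def rse_features(X):
        """Full quadratic response-surface terms: 1, x_i, x_i * x_j, x_i^2."""
        cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
        for i in range(X.shape[1]):
            for j in range(i, X.shape[1]):
                cols.append(X[:, i] * X[:, j])
        return np.column_stack(cols)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))          # e.g. weight, headwind, gust (toy)
    v = 135 + 4 * X[:, 0] - 2 * X[:, 1] + 1.5 * X[:, 2] ** 2 \
        + rng.normal(0, 1.5, 200)          # synthetic landing speeds (knots)
    beta, *_ = np.linalg.lstsq(rse_features(X), v, rcond=None)
    print("prediction error SD:", np.std(v - rse_features(X) @ beta))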
Neural self-tuning adaptive control of non-minimum phase system
NASA Technical Reports Server (NTRS)
Ho, Long T.; Bialasiewicz, Jan T.; Ho, Hai T.
1993-01-01
The motivation of this research came about when a neural network direct adaptive control scheme was applied to control the tip position of a flexible robotic arm. Satisfactory control performance was not attainable due to the inherent non-minimum phase characteristics of the flexible robotic arm tip. Most of the existing neural network control algorithms are based on the direct method and exhibit very high sensitivity, if not unstable closed-loop behavior. Therefore, a neural self-tuning control (NSTC) algorithm is developed, applied to this problem, and shown to give promising results. Simulation results of the NSTC scheme and the conventional self-tuning (STR) control scheme are used to examine performance factors such as control tracking mean square error, estimation mean square error, transient response, and steady state response.
NASA Astrophysics Data System (ADS)
Cao, Lu; Li, Hengnian
2016-10-01
For the satellite attitude estimation problem, serious model errors always exist and hinder the estimation performance of the attitude determination and control system (ADCS), especially for a small satellite with low-precision sensors. To deal with this problem, a new algorithm for attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. This strategy is based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time with an ability to estimate the model errors online, in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noises; therefore, it has the advantage of dealing with various kinds of model errors or noises. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low-precision sensors compared with the traditional unscented Kalman filter (UKF).
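The UT sampling step underlying the filter can be illustrated with the standard sigma-point construction. This is a generic sketch of the unscented transform, not the UPVSF itself; the scaling parameters are conventional defaults.

    import numpy as np

    def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
        """Standard unscented-transform sigma points and weights."""
        n = len(mean)
        lam = alpha ** 2 * (n + kappa) - n
        S = np.linalg.cholesky((n + lam) * cov)   # matrix square-root columns
        pts = np.vstack([mean, mean + S.T, mean - S.T])
        wm = np.full(2 * n + 1, 0.5 / (n + lam)); wm[0] = lam / (n + lam)
        wc = wm.copy(); wc[0] += 1 - alpha ** 2 + beta
        return pts, wm, wc

    pts, wm, wc = sigma_points(np.zeros(3), np.eye(3))
    print(wm @ pts)                               # weighted points recover the mean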
Robust gaze-steering of an active vision system against errors in the estimated parameters
NASA Astrophysics Data System (ADS)
Han, Youngmo
2015-01-01
Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors, in which case robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This PD control scheme avoids uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD control scheme on the unit sphere is designed using an LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method provides better compensation for uncertainties in the estimated parameters than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.
The Quantum Socket: Wiring for Superconducting Qubits - Part 3
NASA Astrophysics Data System (ADS)
Mariantoni, M.; Bejianin, J. H.; McConkey, T. G.; Rinehart, J. R.; Bateman, J. D.; Earnest, C. T.; McRae, C. H.; Rohanizadegan, Y.; Shiri, D.; Penava, B.; Breul, P.; Royak, S.; Zapatka, M.; Fowler, A. G.
The implementation of a quantum computer requires quantum error correction codes, which make it possible to correct errors occurring on physical quantum bits (qubits). Ensembles of physical qubits are grouped to form a logical qubit with a lower error rate. Reaching low error rates will necessitate a large number of physical qubits, so a scalable qubit architecture must be developed. Superconducting qubits have been used to realize error correction. However, a truly scalable qubit architecture has yet to be demonstrated. A critical step towards scalability is a wiring method that allows qubits to be addressed densely and accurately. A quantum socket that serves this purpose has been designed and tested at microwave frequencies. In this talk, we show results where the socket is used at millikelvin temperatures to measure an on-chip superconducting resonator. The control electronics are another fundamental element for scalability. We will present a proposal based on the quantum socket to interconnect classical control hardware with superconducting qubit hardware, where both are operated at millikelvin temperatures.
Goal-oriented explicit residual-type error estimates in XFEM
NASA Astrophysics Data System (ADS)
Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin
2013-08-01
A goal-oriented a posteriori error estimator is derived to control the error incurred while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem, as required for the a posteriori error analysis, and it is shown that the same singular enrichment functions can be used to solve both problems numerically. The goal-oriented error estimator presented can be classified as of explicit residual type, i.e., the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics; emphasis is thus placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented in which, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.
Control of Systems With Slow Actuators Using Time Scale Separation
NASA Technical Reports Server (NTRS)
Stepanyan, Vehram; Nguyen, Nhan
2009-01-01
This paper addresses the problem of controlling a nonlinear plant with a slow actuator using the singular perturbation method. For the known plant-actuator cascaded system, the proposed scheme achieves tracking of a given reference model with considerably less control demand than would result from conventional design techniques. This is a consequence of excluding the small parameter from the actuator dynamics via time scale separation, and the resulting tracking error is of the order of this small parameter. For the unknown system, an adaptive counterpart is developed based on a prediction model that is driven towards the reference model by the control design. It is proven that the prediction model tracks the reference model with an error proportional to the small parameter, while the prediction error converges to zero. The resulting closed-loop system, with all prediction models and adaptive laws, remains stable. The benefits of the approach are demonstrated in simulation studies and compared to conventional control approaches.
Radiation-Hardened Solid-State Drive
NASA Technical Reports Server (NTRS)
Sheldon, Douglas J.
2010-01-01
A method is provided for a radiation-hardened (rad-hard) solid-state drive for space mission memory applications by combining rad-hard and commercial off-the-shelf (COTS) non-volatile memories (NVMs) into a hybrid architecture. The architecture is controlled by a rad-hard ASIC (application-specific integrated circuit) or FPGA (field-programmable gate array). Specific error-handling and data-management protocols are developed for use in a rad-hard environment. The rad-hard memories are smaller in overall memory density but are used to control and manage radiation-induced errors in the main, much larger density, non-rad-hard COTS memory devices. Small amounts of rad-hard memory serve as error buffers and temporary caches for radiation-induced errors in the large COTS memories, and the rad-hard ASIC/FPGA implements a variety of error-handling protocols to manage these errors. The large COTS memory is triplicated for protection, and CRC-based counters are calculated for sub-areas in each COTS NVM array and stored in the rad-hard non-volatile memory. Through monitoring, rewriting, regeneration, triplication, and long-term storage, radiation-induced errors in the large NV memory are managed. The rad-hard ASIC/FPGA also interfaces with the external computer buses.
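The triplication-plus-CRC idea can be illustrated compactly. The following is a minimal Python sketch of 2-of-3 bitwise voting over triplicated pages with a CRC check against a reference held in rad-hard memory; the function names and the error-escalation policy are assumptions for illustration, not the flight protocol.

```python
import zlib

def majority_vote(a: bytes, b: bytes, c: bytes) -> bytes:
    """Bitwise 2-of-3 majority vote across three copies of a page,
    masking single-copy radiation-induced bit flips."""
    return bytes((x & y) | (y & z) | (x & z) for x, y, z in zip(a, b, c))

def scrub_page(copies: list, stored_crc: int) -> bytes:
    """Vote, verify against the CRC kept in rad-hard memory, and
    rewrite all three copies if any of them diverged."""
    voted = majority_vote(*copies)
    if zlib.crc32(voted) != stored_crc:
        raise IOError("uncorrectable multi-copy error")  # escalate
    if any(c != voted for c in copies):
        copies[:] = [voted, voted, voted]                # regeneration step
    return voted
```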
Ten years of preanalytical monitoring and control: Synthetic Balanced Score Card Indicator
López-Garrigós, Maite; Flores, Emilio; Santo-Quiles, Ana; Gutierrez, Mercedes; Lugo, Javier; Lillo, Rosa; Leiva-Salinas, Carlos
2015-01-01
Introduction: Preanalytical control and monitoring continue to be an important issue for clinical laboratory professionals. The aim of the study was to evaluate a monitoring system of preanalytical errors regarding samples unsuitable for analysis, based on different indicators; to compare such indicators across phlebotomy centres; and, finally, to evaluate a single synthetic preanalytical indicator that may be included in the balanced scorecard management system (BSC). Materials and methods: We collected individual and global preanalytical errors in haematology, coagulation, chemistry, and urine sample analysis. We also analyzed a synthetic indicator that represents the sum of all types of preanalytical errors, expressed as a sigma level. We studied the evolution of these indicators over time and compared indicator results using comparison of proportions and Chi-square tests. Results: There was a decrease in the number of errors over the years (P < 0.001). This pattern was confirmed in primary care patients, inpatients, and outpatients. In blood samples, the fewest errors occurred in outpatients, followed by inpatients. Conclusion: We present a practical and effective methodology to monitor preanalytical errors due to unsuitable samples. The synthetic indicator summarizes overall preanalytical sample errors and can be used as part of the BSC management system. PMID:25672466
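Expressing an overall error proportion as a sigma level follows the usual Six Sigma convention. A minimal sketch, assuming the conventional 1.5-sigma long-term shift; the function name and example figures are illustrative, not taken from the study.

```python
from statistics import NormalDist

def sigma_level(errors: int, opportunities: int) -> float:
    """Convert an overall preanalytical error proportion into a
    short-term sigma level using the conventional 1.5-sigma shift."""
    p = errors / opportunities
    return NormalDist().inv_cdf(1.0 - p) + 1.5

# e.g. 8 rejected samples out of 10,000 requests -> about 4.7 sigma
print(round(sigma_level(8, 10_000), 2))
```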
NASA Astrophysics Data System (ADS)
Shinoda, Masahisa; Nakatani, Hidehiko; Nakai, Kenya; Ohmaki, Masayuki
2015-09-01
We theoretically calculate the behavior of focusing error signals generated by the astigmatic method in a land-groove-type optical disk. The focusing error signal from the land does not coincide with that from the groove, and this discrepancy is enhanced when the focused spot of the optical pickup moves beyond the radius of the optical disk. A gain difference between the slope sensitivities of the focusing error signals from the land and the groove is an important factor for stable focusing servo control. In our calculation, the format of digital versatile disc-random access memory (DVD-RAM) is adopted as the land-groove-type optical disk model, and the dependence of the gain difference on various factors is investigated. The gain difference depends strongly on the optical intensity distribution of the laser beam in the optical pickup. The calculation method and results in this paper will inform newly developed land-groove-type optical disks.
Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method
Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni
2017-01-01
The real-time, accurate measurement of the geomagnetic field is the foundation for achieving high-precision geomagnetic navigation. The existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. This paper, on the basis of a systematic analysis of the sources of geomagnetic-field measurement error, builds a complete measurement model into which the previously unconsidered geomagnetic daily variation field is introduced. It then proposes an extended Kalman-filter based compensation method, which allows a large amount of measurement data to be used to estimate the parameters and obtain the statistically optimal solution. The experimental results showed that the compensated geomagnetic-field strength remained close to the true value and that the measurement error was controlled within 5 nT. In addition, the compensation method has broad applicability owing to its easy data collection and its independence from high-precision measurement instruments. PMID:28445508
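To make the compensation idea concrete, here is a minimal sketch of one EKF update for a static calibration-parameter vector. The measurement function h, its Jacobian, and the noise covariances are placeholders; the paper's actual state vector (including the daily variation term) is not reproduced here.

```python
import numpy as np

def ekf_step(theta, P, z, h, H_jac, Q, R):
    """One extended-Kalman-filter update for a static parameter vector
    theta (e.g. magnetometer bias/scale terms) given a scalar field-
    strength measurement z. h(theta) is the predicted measurement,
    H_jac(theta) its 1 x n Jacobian row."""
    theta_pred = theta                   # static parameters: identity dynamics
    P_pred = P + Q                       # allow slow parameter drift
    H = H_jac(theta_pred)                # (1, n)
    S = H @ P_pred @ H.T + R             # innovation covariance, (1, 1)
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain, (n, 1)
    innov = np.atleast_1d(z - h(theta_pred))
    theta_new = theta_pred + K @ innov
    P_new = (np.eye(len(theta)) - K @ H) @ P_pred
    return theta_new, P_new
```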
Sensitivity analysis of non-cohesive sediment transport formulae
NASA Astrophysics Data System (ADS)
Pinto, Lígia; Fortunato, André B.; Freire, Paula
2006-10-01
Sand transport models are often based on semi-empirical equilibrium transport formulae that relate sediment fluxes to physical properties such as velocity, depth and characteristic sediment grain sizes. In engineering applications, errors in these physical properties affect the accuracy of the sediment fluxes. The present analysis quantifies error propagation from the input physical properties to the sediment fluxes, determines which ones control the final errors, and provides insight into the relative strengths, weaknesses and limitations of four total load formulae (Ackers and White, Engelund and Hansen, van Rijn, and Karim and Kennedy) and one bed load formulation (van Rijn). The various sources of uncertainty are first investigated individually, in order to pinpoint the key physical properties that control the errors. Since the strong non-linearity of most sand transport formulae precludes analytical approaches, a Monte Carlo method is validated and used in the analysis. Results show that the accuracy in total sediment transport evaluations is mainly determined by errors in the current velocity and in the sediment median grain size. For the bed load transport using the van Rijn formula, errors in the current velocity alone control the final accuracy. In a final set of tests, all physical properties are allowed to vary simultaneously in order to analyze the combined effect of errors. The combined effect of errors in all the physical properties is then compared to an estimate of the errors due to the intrinsic limitations of the formulae. Results show that errors in the physical properties can be dominant for typical uncertainties associated with these properties, particularly for small depths. A comparison between the various formulae reveals that the van Rijn formula is more sensitive to basic physical properties. Hence, it should only be used when physical properties are known with precision.
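Because the strong nonlinearity of transport formulae precludes analytical error propagation, a Monte Carlo scheme of the kind validated in the paper can be sketched in a few lines. The transport law below is a toy power-law stand-in, not one of the formulae analyzed; names and error levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_flux_error(formula, nominal, rel_sigma, n=20_000):
    """Propagate Gaussian input errors through a transport formula by
    Monte Carlo and report the relative spread of the predicted flux."""
    draws = {k: rng.normal(nominal[k], rel_sigma[k] * nominal[k], n)
             for k in nominal}
    q = formula(**draws)
    return q.std() / q.mean()

# toy stand-in: a transport law q ~ u^3 * d50^-0.5 (illustrative only)
toy = lambda u, d50: u**3 * d50**-0.5
print(mc_flux_error(toy, {"u": 1.0, "d50": 2e-4}, {"u": 0.1, "d50": 0.1}))
```

Repeating the experiment with one input perturbed at a time, as in the paper's first set of tests, isolates which property dominates the output error.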
Calibrated Bayes Factors Should Not Be Used: A Reply to Hoijtink, van Kooten, and Hulsker.
Morey, Richard D; Wagenmakers, Eric-Jan; Rouder, Jeffrey N
2016-01-01
Hoijtink, van Kooten, and Hulsker (2016) present a method for choosing the prior distribution for an analysis with Bayes factors that is based on controlling error rates, which they advocate as an alternative to our more subjective methods (Morey & Rouder, 2014; Rouder, Speckman, Sun, Morey, & Iverson, 2009; Wagenmakers, Wetzels, Borsboom, & van der Maas, 2011). We show that the method they advocate amounts to a simple significance test and that the resulting Bayes factors are not interpretable. Additionally, their method fails in common circumstances and has the potential to yield arbitrarily high Type II error rates. After critiquing their method, we outline the position on subjectivity that underlies our advocacy of Bayes factors.
ERIC Educational Resources Information Center
Shih, Ching-Lin; Liu, Tien-Hsiang; Wang, Wen-Chung
2014-01-01
The simultaneous item bias test (SIBTEST) regression procedure and the differential item functioning (DIF)-free-then-DIF strategy are applied simultaneously to the logistic regression (LR) method in this study. These procedures are used to adjust for the effect of matching on observed rather than true scores and to better control the Type I error…
A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test
NASA Technical Reports Server (NTRS)
Reeder, James R.
2002-01-01
The mixed-mode bending (MMB) test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. It uses a unidirectional composite specimen with an artificial delamination, subjected to bending loads, to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the test results yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, so test data can be created under the standard that are significantly in error; a method of limiting the error incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined, and one of them is used to develop a simple criterion prescribing conditions under which the nonlinear error will remain below 5%.
A new optical head tracing reflected light for nanoprofiler
NASA Astrophysics Data System (ADS)
Okuda, K.; Okita, K.; Tokuta, Y.; Kitayama, T.; Nakano, M.; Kudo, R.; Yamamura, K.; Endo, K.
2014-09-01
High-accuracy optical elements are applied in various fields. For example, ultraprecise aspherical mirrors are necessary for developing third-generation synchrotron radiation and XFEL (X-ray free-electron laser) sources. In order to make such high-accuracy optical elements, it is necessary to measure aspherical mirrors with high accuracy, but no existing measurement method meets these demands simultaneously. We therefore developed a nanoprofiler that can directly measure any surface figure with high accuracy. The nanoprofiler obtains the normal vector and the coordinates of a measurement point using a laser and a quadrant photodiode (QPD) as a detector; the three-dimensional figure is then calculated from the normal vectors and their coordinates. To measure the figure, the nanoprofiler numerically controls its five motion axes, based on the sample's design formula, so that the reflected light enters the QPD's center. We measured a concave spherical mirror with a radius of curvature of 400 mm by the deflection method, which calculates the figure error from the QPD output, and compared the results with those from a Fizeau interferometer. The profile was consistent within the range of system error. The deflection method cannot neglect the error caused by the spatial irregularity of the QPD's sensitivity. To overcome this, we have devised a zero method that moves the QPD with a piezoelectric motion stage and calculates the figure error from the displacement.
Liu, Sheena Xin; Gutiérrez, Luis F; Stanton, Doug
2011-05-01
Electromagnetic (EM)-guided endoscopy has demonstrated its value in minimally invasive interventions. Accuracy evaluation of the system is of paramount importance to clinical applications. Previously, a number of researchers have reported results of calibrating EM-guided endoscopes; however, the accumulated errors of an integrated system, which ultimately reflect intra-operative performance, have not been characterized. To fill this gap, we propose a novel system to perform this evaluation and use a 3D metric to reflect intra-operative procedural accuracy. This paper first presents a portable design and a method for calibration of an EM-tracked endoscopy system. An evaluation scheme is then described that uses the calibration results and EM-CT registration to enable real-time data fusion between CT and endoscopic video images. We present quantitative evaluation results for the accuracy of this system using eight internal fiducials as targets on an anatomical phantom: the error is obtained by comparing the positions of these targets in CT space, EM space, and endoscopy image space. To obtain a 3D error estimate, the 3D locations of the targets in endoscopy image space are reconstructed from stereo views of the EM-tracked monocular endoscope. The accumulated errors are thus evaluated in a controlled environment, where ground truth is available and systematic performance (including the calibration error) can be assessed. The mean in-plane error is found to be on the order of 2 pixels. To evaluate the data integration performance for virtual navigation, the target video-CT registration error (TRE) is measured as the 3D Euclidean distance between the 3D-reconstructed targets from the endoscopy video images and the targets identified in CT. The 3D error (TRE) encapsulates the EM-CT registration error, EM-tracking error, fiducial localization error, and optical-EM calibration error. We present our calibration method and a virtual navigation evaluation system for quantifying the overall errors of intra-operative data integration. We believe this phantom not only offers good insight into the systematic errors encountered in all phases of an EM-tracked endoscopy procedure but can also provide quality control of laboratory experiments for endoscopic procedures before they are transferred from the laboratory to human subjects.
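The TRE defined above is simply a per-target Euclidean distance once both point sets are expressed in a common frame. A short sketch, with synthetic coordinates standing in for the eight phantom fiducials:

```python
import numpy as np

def target_registration_error(p_ct: np.ndarray, p_video: np.ndarray) -> np.ndarray:
    """Per-fiducial 3D TRE: Euclidean distance between targets identified
    in CT space and the stereo-reconstructed targets from endoscopic video,
    after both have been mapped into a common coordinate frame."""
    return np.linalg.norm(p_ct - p_video, axis=1)

# eight internal fiducials, coordinates in millimetres (illustrative values)
ct = np.random.rand(8, 3) * 50.0
video = ct + np.random.normal(0.0, 1.0, ct.shape)   # ~1 mm synthetic noise
print(target_registration_error(ct, video).mean())
```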
Integration of system identification and robust controller designs for flexible structures in space
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Lew, Jiann-Shiun
1990-01-01
An approach is developed using experimental data to identify a reduced-order model and its model error for a robust controller design. Three steps are involved. First, an approximately balanced model is identified using the Eigensystem Realization Algorithm. Second, the model error is calculated and described in the frequency domain in terms of the H(infinity) norm. Third, a pole placement technique combined with an H(infinity) control method is applied to design a controller for the system under consideration. A set of experimental data from an existing setup, namely the Mini-Mast system, is used to illustrate and verify the approach.
Multi-Criteria Decision Making Approaches for Quality Control of Genome-Wide Association Studies
Malovini, Alberto; Rognoni, Carla; Puca, Annibale; Bellazzi, Riccardo
2009-01-01
Experimental errors in the genotyping phases of a Genome-Wide Association Study (GWAS) can lead to false positive findings and spurious associations. An appropriate quality control phase can minimize the effects of such errors. Several filtering criteria can be used to perform quality control; currently, however, no formal methods have been proposed for taking these criteria and the experimenter's preferences into account at the same time. In this paper we propose two strategies, based on Multi-Criteria Decision Making theory, for setting appropriate genotyping rate thresholds for GWAS quality control. We applied our method to a real dataset composed of 734 individuals affected by Arterial Hypertension (AH) and 486 nonagenarians without a history of AH. The proposed strategies appear to handle GWAS quality control in a sound way, as they rationalize and make explicit the experimenter's choices, thus providing more reproducible results. PMID:21347174
ERIC Educational Resources Information Center
Preston, Jonathan L.; Felsenfeld, Susan; Frost, Stephen J.; Mencl, W. Einar; Fulbright, Robert K.; Grigorenko, Elena L.; Landi, Nicole; Seki, Ayumi; Pugh, Kenneth R.
2012-01-01
Purpose: To examine neural response to spoken and printed language in children with speech sound errors (SSE). Method: Functional magnetic resonance imaging was used to compare processing of auditorily and visually presented words and pseudowords in 17 children with SSE, ages 8;6[years;months] through 10;10, with 17 matched controls. Results: When…
An Automatic Quality Control Pipeline for High-Throughput Screening Hit Identification.
Zhai, Yufeng; Chen, Kaisheng; Zhong, Yang; Zhou, Bin; Ainscow, Edward; Wu, Ying-Ta; Zhou, Yingyao
2016-09-01
The correction or removal of signal errors in high-throughput screening (HTS) data is critical to the identification of high-quality lead candidates. Although a number of strategies have been developed to correct systematic errors and remove screening artifacts, they are not universally effective and still require a fair amount of human intervention. We introduce a fully automated quality control (QC) pipeline that can correct generic interplate systematic errors and remove intraplate random artifacts. The new pipeline was first applied to ~100 large-scale historical HTS assays; in silico analysis showed that auto-QC led to a noticeably stronger structure-activity relationship. The method was further tested in several independent HTS runs, where QC results were sampled for experimental validation. Significantly increased hit confirmation rates were obtained after the QC steps, confirming that the proposed method is effective in enriching true-positive hits. An implementation of the algorithm is available to the screening community. © 2016 Society for Laboratory Automation and Screening.
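Row/column trends of the kind such pipelines target are often removed with Tukey median polish. The sketch below shows that generic technique applied to one plate; it illustrates the idea, not the paper's specific algorithm.

```python
import numpy as np

def correct_plate(plate: np.ndarray, n_iter: int = 10) -> np.ndarray:
    """Remove row/column systematic effects from one HTS plate by
    Tukey median polish; the residuals serve as corrected signals."""
    resid = plate.astype(float).copy()
    for _ in range(n_iter):
        resid -= np.median(resid, axis=1, keepdims=True)  # strip row effects
        resid -= np.median(resid, axis=0, keepdims=True)  # strip column effects
    return resid
```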
Li, Zhaoying; Zhou, Wenjie; Liu, Hao
2016-09-01
This paper addresses the nonlinear robust tracking controller design problem for hypersonic vehicles. The problem is challenging due to the strong coupling between the aerodynamics and the propulsion system, and to the uncertainties in the vehicle dynamics, including parametric uncertainties, unmodeled dynamics, and external disturbances. By utilizing the feedback linearization technique, a linear tracking error system is established with prescribed references. For the linear model, a robust controller is proposed based on signal compensation theory to guarantee that the tracking error dynamics are robustly stable. Numerical simulation results show the advantages of the proposed nonlinear robust control method over the robust loop-shaping control approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
An empirical study of flight control software reliability
NASA Technical Reports Server (NTRS)
Dunham, J. R.; Pierce, J. L.
1986-01-01
The results of a laboratory experiment in flight control software reliability are reported. The experiment tests a small sample of implementations of a pitch-axis control law for a PA28 aircraft with over 14 million pitch commands under varying levels of additive input and feedback noise. The testing, which uses n-version programming for error detection, surfaced four software faults in one implementation of the control law. The small number of detected faults precluded the error-burst analyses. The pitch-axis problem provides data for constructing a model to predict the reliability of software in systems with feedback. The study was undertaken to find means of performing reliability evaluations of flight control software.
A novel auto-tuning PID control mechanism for nonlinear systems.
Cetin, Meric; Iplikci, Serdar
2015-09-01
In this paper, a novel Runge-Kutta (RK) discretization-based model-predictive auto-tuning proportional-integral-derivative controller (RK-PID) is introduced for the control of continuous-time nonlinear systems. The parameters of the PID controller are tuned using an RK model of the system through prediction-error-square minimization, where the predicted tracking-error information provides enhanced tuning of the parameters. Based on the model-predictive control (MPC) approach, the proposed mechanism provides the necessary PID parameter adaptations while generating additive correction terms to assist the initially inadequate PID controller. The efficiency of the proposed mechanism has been tested on two experimental real-time systems: an unstable single-input single-output (SISO) nonlinear magnetic-levitation system and a nonlinear multi-input multi-output (MIMO) liquid-level system. RK-PID has been compared to standard PID, standard nonlinear MPC (NMPC), RK-MPC, and conventional sliding-mode control (SMC) in terms of control performance, robustness, computational complexity, and design effort. The proposed mechanism exhibits acceptable tuning and control performance with very small steady-state tracking errors, and provides very short settling times for parameter convergence. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
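The core loop, predicting the tracking error one step ahead with an RK model and nudging the PID gains down its gradient, can be sketched as follows. This is a schematic reconstruction under stated assumptions (scalar output tracked in x[0], finite-difference gradient), not the authors' exact update law.

```python
import numpy as np

def rk4(f, x, u, dt):
    """One fourth-order Runge-Kutta step of the plant model dx/dt = f(x, u)."""
    k1 = f(x, u); k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u); k4 = f(x + dt * k3, u)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def tune_step(gains, f, x, e, e_int, e_der, ref, dt, lr=1e-3, h=1e-4):
    """Nudge [Kp, Ki, Kd] down the numerical gradient of the squared
    one-step-ahead predicted tracking error."""
    def pred_err2(g):
        u = g[0] * e + g[1] * e_int + g[2] * e_der   # PID control law
        x_next = rk4(f, x, u, dt)                    # RK prediction model
        return (ref - x_next[0]) ** 2
    grad = np.array([(pred_err2(gains + h * np.eye(3)[i]) - pred_err2(gains)) / h
                     for i in range(3)])
    return gains - lr * grad
```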
NASA Technical Reports Server (NTRS)
Morey, Susan; Prevot, Thomas; Mercer, Joey; Martin, Lynne; Bienert, Nancy; Cabrall, Christopher; Hunt, Sarah; Homola, Jeffrey; Kraut, Joshua
2013-01-01
A human-in-the-loop simulation was conducted to examine the effects of varying levels of trajectory prediction uncertainty on air traffic controller workload and performance, as well as how strategies and the use of decision support tools change in response. This paper focuses on the strategies employed by two controllers from separate teams who worked in parallel but independently under identical conditions (airspace, arrival traffic, tools), with the goal of ensuring schedule conformance and safe separation for a dense arrival flow in en route airspace. Despite differences in strategy and methods, both controllers achieved high levels of schedule conformance and safe separation. Overall, the results show that trajectory uncertainties introduced by wind and aircraft performance prediction errors do not affect the controllers' ability to manage traffic. Controller strategies were fairly robust to changes in error, though they were affected by the amount of delay to absorb (scheduled time of arrival minus estimated time of arrival). Based on these results and observations, the paper proposes dynamically customizing the display of information, including delay time, based on observed error, to better accommodate different strategies and objectives.
Liu, Yanni; Gehring, William J.; Weissman, Daniel H.; Taylor, Stephan F.; Fitzgerald, Kate Dimond
2012-01-01
Background: Impairments of cognitive control have been theorized to drive the repetitive thoughts and behaviors of obsessive compulsive disorder (OCD) from early in the course of illness. However, it remains unclear whether altered trial-by-trial adjustments of cognitive control characterize young patients. To test this hypothesis, we determined whether trial-by-trial adjustments of cognitive control are altered in children with OCD, relative to healthy controls. Methods: Forty-eight patients with pediatric OCD and 48 healthy youth performed the Multi-Source Interference Task. Two types of trial-by-trial adjustments of cognitive control were examined: post-error slowing (i.e., slower responses after errors than after correct trials) and post-conflict adaptation (i.e., faster responses in high-conflict incongruent trials that are preceded by other high-conflict incongruent trials, relative to low-conflict congruent trials). Results: While healthy youth exhibited both post-error slowing and post-conflict adaptation, patients with pediatric OCD failed to exhibit either of these effects. Further analyses revealed that patients with low symptom severity showed a reversal of the post-conflict adaptation effect, whereas patients with high symptom severity did not show any post-conflict adaptation. Conclusion: Two types of trial-by-trial adjustments of cognitive control are altered in pediatric OCD. These abnormalities may serve as early markers of the illness. PMID:22593744
Coherent detection of position errors in inter-satellite laser communications
NASA Astrophysics Data System (ADS)
Xu, Nan; Liu, Liren; Liu, De'an; Sun, Jianfeng; Luan, Zhu
2007-09-01
Owing to its improved receiver sensitivity and wavelength selectivity, coherent detection has become an attractive alternative to direct detection in inter-satellite laser communications. A novel method for the coherent detection of position-error information is proposed. A coherent communication system generally consists of a receive telescope, local oscillator, optical hybrid, photoelectric detector, and optical phase lock loop (OPLL). On top of this system composition, the method adds a CCD and a computer as a position-error detector. The CCD captures the interference pattern while the transmission data from the transmitter laser are detected. After processing and analysis by the computer, the target position information is obtained from the characteristic parameters of the interference pattern. The position errors, used as the control signal of the PAT subsystem, drive the receiver telescope to keep tracking the target. A theoretical derivation and analysis are presented. The application extends to a coherent laser range finder, in which object distance and position information can be obtained simultaneously.
A rate-controlled teleoperator task with simulated transport delays
NASA Technical Reports Server (NTRS)
Pennington, J. E.
1983-01-01
A teleoperator-system simulation was used to examine the effects of two control modes (joint-by-joint and resolved-rate), a proximity-display method, and time delays (up to 2 sec) on the control of a five-degree-of-freedom manipulator performing a probe-in-hole alignment task. Four subjects used proportional rotational control and discrete (on-off) translation control with computer-generated visual displays. The proximity display enabled subjects to separate rotational errors from displacement (translation) errors; thus, when the proximity display was used with resolved-rate control, the simulated task was trivial. The time required to perform the simulated task increased linearly with time delay, but time delays had no effect on alignment accuracy. Based on the results of this simulation, several future studies are recommended.
Experimental investigation of false positive errors in auditory species occurrence surveys
Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.
2012-01-01
False positive errors are a significant component of many ecological data sets, which, in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in ability from novice to expert, who recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors, and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls, with a broad confidence interval overlapping 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work demonstrating that false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to report only detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor of among-observer variation in observation error rates.
NASA Technical Reports Server (NTRS)
Buechler, W.; Tucker, A. G.
1981-01-01
Several methods were employed to detect both the occurrence and the source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected, including I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging, and segment simulation on a non-target computer are discussed. These error detection techniques were a major factor in finding the primary cause of error in 98% of over 500 system dumps.
Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Smith, Mark S.
2008-01-01
Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Smith, Mark S.
2010-01-01
Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors, prediction cases, and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
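The frequency-domain equation-error formulation described in both reports can be sketched as follows: transform the measured regressors and response with a finite Fourier transform at selected frequencies, then solve a complex least-squares problem for the real-valued derivatives. Variable names and the frequency grid are illustrative assumptions.

```python
import numpy as np

def equation_error_fit(t, regressors, response, freqs_hz):
    """Frequency-domain equation-error estimate: Fourier-transform the
    measured regressors (e.g. angle of attack, pitch rate, elevator) and
    the response (e.g. pitching-moment coefficient) at selected
    frequencies, then solve complex least squares for the derivatives."""
    dt = t[1] - t[0]
    E = np.exp(-2j * np.pi * np.outer(freqs_hz, t)) * dt  # finite Fourier transform
    X = E @ regressors            # (n_freq, n_params) transformed regressors
    z = E @ response              # (n_freq,) transformed response
    theta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return theta.real             # stability and control derivatives are real
```

Restricting the fit to a low-frequency band where rigid-body dynamics live acts as an implicit filter, which is part of why the approach suits real-time use.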
Importance of implementing an analytical quality control system in a core laboratory.
Marques-Garcia, F; Garcia-Codesal, M F; Caro-Narros, M R; Contreras-SanFeliciano, T
2015-01-01
The aim of the clinical laboratory is to provide useful information for the screening, diagnosis, and monitoring of disease. The laboratory should ensure the quality of the extra-analytical and analytical processes, based on set criteria. To do this, it develops and implements an internal quality control system designed to detect errors, and compares its data with those of other laboratories through external quality control. In this way it has a tool to detect whether the set objectives are being met and, in case of errors, to take corrective actions and ensure the reliability of the results. This article describes the design and implementation of an internal quality control protocol, as well as its periodic assessment at 6-month intervals to determine compliance with pre-determined specifications (Stockholm Consensus(1)). A total of 40 biochemical and 15 immunochemical methods were evaluated using three different control materials. Next, a standard operating procedure was drawn up to develop an internal quality control system that included calculating the error of the analytical process, setting quality specifications, and verifying compliance. The quality control data were then statistically summarized as means, standard deviations, and coefficients of variation, as well as systematic, random, and total errors. The quality specifications were then fixed and the operational rules to apply in the analytical process were established. Finally, our data were compared with those of other laboratories through an external quality assurance program. The development of an analytical quality control system is a highly structured process that should be designed to detect errors that compromise the stability of the analytical process. The laboratory should review its quality indicators (systematic, random, and total error) at regular intervals to ensure that pre-determined specifications are being met and, if not, apply the appropriate corrective actions. Copyright © 2015 SECA. Published by Elsevier Espana. All rights reserved.
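The systematic, random, and total error figures such a protocol tracks follow from simple statistics of the control material. A minimal sketch using the common total-error convention TE = |bias| + 1.65 * CV; the formula is an assumption here, since the article does not state its own.

```python
from statistics import mean, stdev

def qc_metrics(control_results, target):
    """Summarize one control level: bias % (systematic error), CV %
    (random error), and total error % at ~95% confidence, using the
    common convention TE = |bias| + 1.65 * CV."""
    m, s = mean(control_results), stdev(control_results)
    bias_pct = 100.0 * (m - target) / target
    cv_pct = 100.0 * s / m
    return bias_pct, cv_pct, abs(bias_pct) + 1.65 * cv_pct

# six control runs around a target of 100 units (illustrative values)
print(qc_metrics([101, 99, 102, 100, 98, 101], target=100.0))
```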
Improved Sampling Method Reduces Isokinetic Sampling Errors.
ERIC Educational Resources Information Center
Karels, Gale G.
The particulate sampling system currently in use by the Bay Area Air Pollution Control District, San Francisco, California is described in this presentation for the 12th Conference on Methods in Air Pollution and Industrial Hygiene Studies, University of Southern California, April, 1971. The method represents a practical, inexpensive tool that can…
[Design and accuracy analysis of upper slicing system of MSCT].
Jiang, Rongjian
2013-05-01
The upper slicing system is a main component of the optical system in MSCT. This paper focuses on the design of the upper slicing system and its accuracy analysis to improve imaging accuracy. The errors in slice thickness and ray center introduced by the bearings, screw, and control system were analyzed and tested. The accumulated error measured is less than 1 μm, and the absolute error measured is less than 10 μm. Improving the accuracy of the upper slicing system contributes to appropriate treatment methods and a higher success rate of treatment.
NASA Astrophysics Data System (ADS)
Li, L. L.; Jin, C. L.; Ge, X.
2018-01-01
In this paper, the output regulation problem with dissipative property for a class of switched stochastic delay systems is investigated, based on an error-dependent switching law. Under the assumption that no subsystem is solvable for the problem, a sufficient condition is derived by constructing multiple Lyapunov-Krasovskii functionals with respect to multiple supply rates and designing error feedback regulators. The condition is also established when the dissipative property reduces to passivity. Finally, two numerical examples are given to demonstrate the feasibility and efficiency of the present method.
Robust Optimal Adaptive Control Method with Large Adaptive Gain
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2009-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations that adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations of standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem; the optimality condition is used to derive the modification via the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time-delay margin.
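For orientation, a scalar model-reference adaptive loop with a damping-type modification term is sketched below. The nu-term here is a sigma-modification-style stand-in used only to illustrate how a damping term tames a large adaptive gain; it is not the paper's optimal control modification law, and all constants are illustrative.

```python
# scalar MRAC sketch: plant xdot = a*x + b*u with a unknown, b known
dt, gamma, nu = 1e-3, 100.0, 0.1   # large adaptive gain, damping term nu
a_true, b = 2.0, 1.0               # unstable plant (a_true unknown to controller)
am, bm = -4.0, 4.0                 # stable reference model xm_dot = am*xm + bm*r
x = xm = theta = 0.0
for _ in range(30_000):
    r = 1.0                        # step reference
    e = x - xm                     # tracking error
    u = -theta * x + (bm / b) * r  # adaptive feedback plus feedforward
    # Lyapunov-based update with a damping term that bounds theta when
    # the gain is large (stand-in for the optimal control modification)
    theta += dt * (gamma * e * x - gamma * nu * theta)
    x += dt * (a_true * x + b * u)
    xm += dt * (am * xm + bm * r)
print(x, xm, theta)                # x tracks xm; theta settles near (a_true - am)/b
```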
High-density force myography: A possible alternative for upper-limb prosthetic control.
Radmand, Ashkan; Scheme, Erik; Englehart, Kevin
2016-01-01
Several multiple degree-of-freedom upper-limb prostheses that have the promise of highly dexterous control have recently been developed. Inadequate controllability, however, has limited adoption of these devices. Introducing more robust control methods will likely result in higher acceptance rates. This work investigates the suitability of using high-density force myography (HD-FMG) for prosthetic control. HD-FMG uses a high-density array of pressure sensors to detect changes in the pressure patterns between the residual limb and socket caused by the contraction of the forearm muscles. In this work, HD-FMG outperforms the standard electromyography (EMG)-based system in detecting different wrist and hand gestures. With the arm in a fixed, static position, eight hand and wrist motions were classified with 0.33% error using the HD-FMG technique. Comparatively, classification errors in the range of 2.2%-11.3% have been reported in the literature for multichannel EMG-based approaches. As with EMG, position variation in HD-FMG can introduce classification error, but incorporating position variation into the training protocol reduces this effect. Channel reduction was also applied to the HD-FMG technique to decrease the dimensionality of the problem as well as the size of the sensorized area. We found that with informed, symmetric channel reduction, classification error could be decreased to 0.02%.
A panning DLT procedure for three-dimensional videography.
Yu, B; Koh, T J; Hay, J G
1993-06-01
The direct linear transformation (DLT) method [Abdel-Aziz and Karara, APS Symposium on Photogrammetry. American Society of Photogrammetry, Falls Church, VA (1971)] is widely used in biomechanics to obtain three-dimensional space coordinates from film and video records, but it has major shortcomings when used to analyze events that take place over large areas. To overcome these shortcomings, a three-dimensional data collection method based on the DLT method and making use of panning cameras was developed. Several small single control volumes are combined to construct a large total control volume. For each single control volume, a regression equation (calibration equation) is developed to express each of the 11 DLT parameters as a function of camera orientation, so that the DLT parameters can be estimated for arbitrary camera orientations. Once the DLT parameters are known for at least two cameras, and the associated two-dimensional film or video coordinates of the event are obtained, the desired three-dimensional space coordinates can be computed. In a laboratory test, five single control volumes (in a total control volume of 24.40 × 2.44 × 2.44 m) were used to test the effect of the position of the single control volume on the accuracy of the computed three-dimensional space coordinates, and linear and quadratic calibration equations were used to test the effect of the order of the equation on that accuracy. For four of the five single control volumes tested, the mean resultant errors associated with the linear calibration equation were significantly larger than those associated with the quadratic calibration equation. The position of the single control volume had no significant effect on the mean resultant errors in the computed three-dimensional coordinates when the quadratic calibration equation was used. Under the same data collection conditions, the mean resultant errors in the computed three-dimensional coordinates associated with the panning and stationary DLT methods were 17 and 22 mm, respectively. The major advantages of the panning DLT method lie in the large image sizes obtained and in the ease with which the data can be collected, and the method has potential for use in a wide variety of contexts. Its major shortcoming is the large amount of digitizing necessary to calibrate the total control volume; adaptations of the method to reduce the amount of digitizing required are being explored.
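Given the 11 DLT parameters of two or more cameras (for the panning method, evaluated from the calibration equations at the current pan angle), the 3D point follows from a small linear least-squares solve. A generic sketch of that standard DLT reconstruction step:

```python
import numpy as np

def dlt_reconstruct(dlt_params, image_pts):
    """Least-squares 3D reconstruction from two or more cameras, given
    each camera's 11 DLT parameters L1..L11 (here L[0]..L[10]) and the
    digitized image coordinates (u, v) of the same point."""
    A, b = [], []
    for L, (u, v) in zip(dlt_params, image_pts):
        # u = (L1 X + L2 Y + L3 Z + L4) / (L9 X + L10 Y + L11 Z + 1)
        A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        b.append(u - L[3])
        # v = (L5 X + L6 Y + L7 Z + L8) / (L9 X + L10 Y + L11 Z + 1)
        A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        b.append(v - L[7])
    xyz, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return xyz
```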
Development of an adaptive hp-version finite element method for computational optimal control
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Warner, Michael S.
1994-01-01
In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.
Wang, Yingyang; Hu, Jianbo
2018-05-19
An improved prescribed performance controller is proposed for the longitudinal model of an air-breathing hypersonic vehicle (AHV) subject to uncertain dynamics and input nonlinearity. Unlike the traditional non-affine model, which requires the non-affine functions to be differentiable, this paper utilizes a semi-decomposed non-affine model whose non-affine functions are only required to be locally semi-bounded and may be non-differentiable. A new error transformation, combined with novel prescribed performance functions, is proposed to bypass the complex deductions of conventional error constraint approaches and to circumvent high-frequency chattering in the control inputs. On the basis of the backstepping technique, the improved prescribed performance controller is designed with low structural and computational complexity. The methodology keeps the altitude and velocity tracking errors within transient and steady-state performance envelopes and exhibits excellent robustness against uncertain dynamics and dead-zone input nonlinearity. Simulation results demonstrate the efficacy of the proposed method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
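For readers unfamiliar with the prescribed performance machinery, a generic symmetric-bound construction from this literature is sketched below; the paper's own functions and transformation differ, and the symbols here are illustrative.

```latex
% Exponentially shrinking performance funnel and error transformation
% (generic prescribed-performance construction; illustrative only):
\rho(t) = (\rho_0 - \rho_\infty)\, e^{-\kappa t} + \rho_\infty,
\qquad |e(t)| < \rho(t) \;\; \forall t \ge 0,
\qquad
\varepsilon(t) = \tfrac{1}{2}\,
  \ln\!\frac{1 + e(t)/\rho(t)}{1 - e(t)/\rho(t)}.
```

Stabilizing the unconstrained variable ε then keeps e(t) inside the funnel, with transient decay rate set by κ and steady-state bound ρ∞.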
NASA Astrophysics Data System (ADS)
Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu
2017-03-01
In-situ intelligent manufacturing of large-volume equipment requires industrial robots with high-accuracy absolute positioning and orientation steering control. Conventional robots mainly employ offline calibration to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly, so it is not possible to acquire the robot's actual parameters, or to control its absolute pose with high accuracy within a large workspace, by offline calibration in real time. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six-degrees-of-freedom laser tracking measurement, which adopts comprehensive compensation and correction of the differential movement variables. First, the pose steering control system and the robot kinematic error model are constructed; then the pose error compensation mechanism and algorithm are introduced in detail. By accurately acquiring the position and orientation of the robot end-tool, computing the Jacobian matrix of the joint variables, and correcting the joint variables, real-time online absolute pose compensation for an industrial robot is implemented in simulations and experimental tests. The average positioning error is 0.048 mm and the orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible and that the online absolute accuracy of a robot is sufficiently enhanced.
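The joint-variable correction step, mapping a measured 6-DOF pose error through the Jacobian, is typically a damped least-squares update. A generic sketch under that assumption, with a hypothetical jacobian callable standing in for the robot's kinematics:

```python
import numpy as np

def pose_correction(joints, pose_error, jacobian, damping=1e-2):
    """Damped least-squares differential correction: map the measured
    6-DOF pose error (e.g. from a laser tracker) to a joint-variable
    update through the manipulator Jacobian J (6 x n)."""
    J = jacobian(joints)                       # hypothetical kinematics callable
    JT = J.T
    # dq = J^T (J J^T + lambda^2 I)^-1 * dx, robust near singularities
    dq = JT @ np.linalg.solve(J @ JT + damping**2 * np.eye(6), pose_error)
    return joints + dq
```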
Frequency-Offset Cartesian Feedback Based on Polyphase Difference Amplifiers
Zanchi, Marta G.; Pauly, John M.; Scott, Greig C.
2010-01-01
A modified Cartesian feedback method called “frequency-offset Cartesian feedback” and based on polyphase difference amplifiers is described that significantly reduces the problems associated with quadrature errors and DC-offsets in classic Cartesian feedback power amplifier control systems. In this method, the reference input and feedback signals are down-converted and compared at a low intermediate frequency (IF) instead of at DC. The polyphase difference amplifiers create a complex control bandwidth centered at this low IF, which is typically offset from DC by 200–1500 kHz. Consequently, the loop gain peak does not overlap DC where voltage offsets, drift, and local oscillator leakage create errors. Moreover, quadrature mismatch errors are significantly attenuated in the control bandwidth. Since the polyphase amplifiers selectively amplify the complex signals characterized by a +90° phase relationship representing positive frequency signals, the control system operates somewhat like single sideband (SSB) modulation. However, the approach still allows the same modulation bandwidth control as classic Cartesian feedback. In this paper, the behavior of the polyphase difference amplifier is described through both the results of simulations, based on a theoretical analysis of their architecture, and experiments. We then describe our first printed circuit board prototype of a frequency-offset Cartesian feedback transmitter and its performance in open and closed loop configuration. This approach should be especially useful in magnetic resonance imaging transmit array systems. PMID:20814450
DOE Office of Scientific and Technical Information (OSTI.GOV)
R.I. Rudyka; Y.E. Zingerman; K.G. Lavrov
Up-to-date mathematical methods, such as correlation analysis and expert systems, are employed in creating a model of the coking process. Automatic coking-control systems developed by Giprokoks rule out human error. At an existing coke battery, heating-gas consumption was reduced by ≥5% after the introduction of automatic control.
Color coding of control room displays: the psychocartography of visual layering effects.
Van Laar, Darren; Deshe, Ofer
2007-06-01
To evaluate which of three color coding methods (monochrome, maximally discriminable, and visual layering) used to code four types of control room display format (bars, tables, trend, mimic) was superior in two classes of task (search, compare). It has recently been shown that color coding of visual layers, as used in cartography, may be used to color code any type of information display, but this has yet to be fully evaluated. Twenty-four people took part in a 2 (task) × 3 (coding method) × 4 (format) wholly repeated-measures design. The dependent variables assessed were target location reaction time, error rates, workload, and subjective feedback. Overall, the visual layers coding method produced significantly faster reaction times than did the maximally discriminable and the monochrome methods for both the search and compare tasks. No significant difference in errors was observed between conditions for either task type. Significantly less perceived workload was experienced with the visual layers coding method, which was also rated more highly than the other coding methods on a 14-item visual display quality questionnaire. The visual layers coding method is superior to other color coding methods for control room displays when the method supports the user's task. The visual layers color coding method has wide applicability to the design of all complex information displays utilizing color coding, from the most maplike (e.g., air traffic control) to the most abstract (e.g., abstracted ecological display).
NASA Technical Reports Server (NTRS)
Webb, L. D.; Washington, H. P.
1972-01-01
Static pressure position error calibrations for a compensated and an uncompensated XB-70 nose boom pitot static probe were obtained in flight. The methods (Pacer, acceleration-deceleration, and total temperature) used to obtain the position errors over a Mach number range from 0.5 to 3.0 and an altitude range from 25,000 feet to 70,000 feet are discussed. The error calibrations are compared with the position error determined from wind tunnel tests, theoretical analysis, and a standard NACA pitot static probe. Factors which influence position errors, such as angle of attack, Reynolds number, probe tip geometry, static orifice location, and probe shape, are discussed. Also included are examples showing how the uncertainties caused by position errors can affect the inlet controls and vertical altitude separation of a supersonic transport.
A simplex method for the orbit determination of maneuvering satellites
NASA Astrophysics Data System (ADS)
Chen, JianRong; Li, JunFeng; Wang, XiJing; Zhu, Jun; Wang, DanNa
2018-02-01
A simplex method of orbit determination (SMOD) is presented to solve the problem of orbit determination for maneuvering satellites subject to small and continuous thrust. The objective function is established as the sum of the nth powers of the observation errors based on global positioning satellite (GPS) data. The convergence behavior of the proposed method is analyzed over a range of initial orbital parameter errors and n values to ensure rapid and accurate convergence of the SMOD. For an uncontrolled satellite, the orbit obtained by the SMOD yields a position error relative to GPS data commensurate with that obtained by the least-squares technique. For low Earth orbit satellite control, the error in the acceleration produced by a small pulse thrust is less than 0.1% of the calibrated value. The orbit obtained by the SMOD is also compared with weak GPS data for a geostationary Earth orbit satellite over several days; the position accuracy is within 12.0 m, and the working efficiency of the electric propulsion is about 67% of the designed value. These analyses provide guidance for subsequent satellite control. The method is suitable for orbit determination of maneuvering satellites subject to small and continuous thrust.
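The SMOD objective and simplex search can be sketched with a standard Nelder-Mead routine. The propagate function (orbit state to position at time t) is a placeholder for the user's dynamics model, and the tolerances are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def smod_fit(propagate, x0, gps_positions, times, n=2.0):
    """Fit an initial orbital state (optionally augmented with a small
    thrust-acceleration term) by minimizing the sum of n-th powers of
    position residuals against GPS data with a Nelder-Mead simplex."""
    def cost(x):
        resid = np.array([np.linalg.norm(propagate(x, t) - p)
                          for t, p in zip(times, gps_positions)])
        return np.sum(resid ** n)
    return minimize(cost, x0, method="Nelder-Mead",
                    options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 20_000})
```

Being derivative-free, the simplex search tolerates the non-smooth residuals that thrust events introduce, which is the practical appeal over gradient-based batch least squares.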
Iturrate, Iñaki; Grizou, Jonathan; Omedes, Jason; Oudeyer, Pierre-Yves; Lopes, Manuel; Montesano, Luis
2015-01-01
This paper presents a new approach to self-calibrating BCI control of reaching tasks using error-related potentials. The proposed method exploits task constraints to simultaneously calibrate the decoder and control the device, using a robust likelihood function and an ad hoc planner to cope with the large uncertainty resulting from the unknown task and decoder. The method was evaluated in closed-loop online experiments in which 8 users employed a previously proposed BCI protocol for reaching tasks over a grid. The results show that usable BCI control is possible from the beginning of the experiment without any prior calibration. Furthermore, comparisons with simulations and previous results obtained using standard calibration suggest that both the quality of the recorded signals and the performance of the system were comparable to those obtained with a standard calibration approach. PMID:26131890
Generalized Background Error covariance matrix model (GEN_BE v2.0)
NASA Astrophysics Data System (ADS)
Descombes, G.; Auligné, T.; Vandenberghe, F.; Barker, D. M.
2014-07-01
The specification of state background error statistics is a key component of data assimilation since it affects the impact observations will have on the analysis. In the variational data assimilation approach, applied in geophysical sciences, the dimensions of the background error covariance matrix (B) are usually too large to be explicitly determined and B needs to be modeled. Recent efforts to include new variables in the analysis such as cloud parameters and chemical species have required the development of the code to GENerate the Background Errors (GEN_BE) version 2.0 for the Weather Research and Forecasting (WRF) community model to allow for a simpler, flexible, robust, and community-oriented framework that gathers methods used by meteorological operational centers and researchers. We present the advantages of this new design for the data assimilation community by performing benchmarks and showing some of the new features on data assimilation test cases. As data assimilation for clouds remains a challenge, we present a multivariate approach that includes hydrometeors in the control variables and new correlated errors. In addition, the GEN_BE v2.0 code is employed to diagnose error parameter statistics for chemical species, which shows that it is a tool flexible enough to involve new control variables. While the generation of the background errors statistics code has been first developed for atmospheric research, the new version (GEN_BE v2.0) can be easily extended to other domains of science and be chosen as a testbed for diagnostic and new modeling of B. Initially developed for variational data assimilation, the model of the B matrix may be useful for variational ensemble hybrid methods as well.
Generalized background error covariance matrix model (GEN_BE v2.0)
NASA Astrophysics Data System (ADS)
Descombes, G.; Auligné, T.; Vandenberghe, F.; Barker, D. M.; Barré, J.
2015-03-01
The specification of state background error statistics is a key component of data assimilation since it affects the impact observations will have on the analysis. In the variational data assimilation approach, applied in geophysical sciences, the dimensions of the background error covariance matrix (B) are usually too large to be explicitly determined and B needs to be modeled. Recent efforts to include new variables in the analysis such as cloud parameters and chemical species have required the development of the code to GENerate the Background Errors (GEN_BE) version 2.0 for the Weather Research and Forecasting (WRF) community model. GEN_BE allows for a simpler, flexible, robust, and community-oriented framework that gathers methods used by some meteorological operational centers and researchers. We present the advantages of this new design for the data assimilation community by performing benchmarks of different modeling of B and showing some of the new features in data assimilation test cases. As data assimilation for clouds remains a challenge, we present a multivariate approach that includes hydrometeors in the control variables and new correlated errors. In addition, the GEN_BE v2.0 code is employed to diagnose error parameter statistics for chemical species, which shows that it is a tool flexible enough to implement new control variables. While the generation of the background errors statistics code was first developed for atmospheric research, the new version (GEN_BE v2.0) can be easily applied to other domains of science and chosen to diagnose and model B. Initially developed for variational data assimilation, the model of the B matrix may be useful for variational ensemble hybrid methods as well.
A Comparison of Exposure Control Procedures in CATs Using the 3PL Model
ERIC Educational Resources Information Center
Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G.
2013-01-01
This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…
Design the RS(255,239) encoder and interleaving in the space laser communication system
NASA Astrophysics Data System (ADS)
Lang, Yue; Tong, Shou-feng
2013-08-01
Space laser communication is being researched by a growing number of countries, since laser links between satellites, and between satellites and the ground, offer higher transmission speeds and better transmission quality. In a space laser communication system, however, reliability suffers from many factors, including the atmosphere, detector noise, and optical platform jitter. Because the signal is strongly attenuated over the long transmission distance, achieving a low BER demands either a stronger signal or error control. Channel coding enhances the anti-interference ability of the system: redundancy is added to the information codes so that a checkout polynomial can correct errors at the sink port, giving the system a low BER and a coding gain. The Reed-Solomon (RS) code is one such channel code; it is a nonbinary BCH code that can correct both burst errors and random errors, and it is widely used in error-control schemes. A new FPGA-based design of the RS encoder and interleaver is proposed, aimed at the error control requirements of the space laser communication field. An improved finite Galois field multiplier, suitable for FPGA implementation, is proposed for the encoder; compared with the original design, the optimized multiplier reduces the number of XOR gates by more than 40%. A new FPGA-based interleaving structure is also given. By controlling the encoder's input and output data streams, asynchronous processing of the whole frame is accomplished, while a multi-level pipeline achieves real-time data transfer. By controlling the read and write addresses of the block RAM, interleaving of arbitrary depth is implemented synchronously. Compared with the conventional approach, the design effectively reduces the complexity of the channel encoder and its hardware requirements.
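The encoder's building blocks translate directly into software for illustration. A minimal sketch, assuming the common primitive polynomial 0x11D for GF(2^8) (the abstract does not state the paper's choice); a hardware multiplier would flatten the bitwise loop below into the XOR network the paper optimizes:

```python
def gf_mul(a, b, poly=0x11D):
    """Multiply in GF(2^8) modulo the field polynomial (assumed 0x11D)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def poly_mul(p, q):
    """Multiply polynomials over GF(2^8); coefficients highest degree first."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] ^= gf_mul(pi, qj)
    return r

def rs_generator(nroots=16):
    """g(x) = (x - a^0)(x - a^1)...(x - a^15), with a = 2; '-' is XOR here."""
    g, root = [1], 1
    for _ in range(nroots):
        g = poly_mul(g, [1, root])
        root = gf_mul(root, 2)
    return g

def rs_encode(msg, nroots=16):
    """Systematic RS(255,239): append the remainder of msg(x)*x^16 mod g(x)."""
    g = rs_generator(nroots)
    parity = [0] * nroots
    for byte in msg:                 # LFSR-style synthetic division
        fb = byte ^ parity[0]
        parity = parity[1:] + [0]
        if fb:
            for i in range(nroots):
                parity[i] ^= gf_mul(g[i + 1], fb)
    return list(msg) + parity

codeword = rs_encode(list(range(200)) + [0] * 39)   # 239 data bytes in
print(len(codeword))                                # 255 bytes out
```

A block interleaver of depth d then simply writes d codewords as rows and reads columns, which is what the read/write address control of the block RAM implements in hardware.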
Mass casualty tracking with air traffic control methodologies.
Hoskins, Jason D; Graham, Ross F; Robinson, Duane R; Lutz, Clifford C; Folio, Les R
2009-06-01
An intrahospital casualty throughput system modeled after air traffic control (ATC) tracking procedures was tested in mass casualty exercises. ATC uses a simple tactile process involving informational progress strips representing each aircraft, which are held in bays representing each stage of flight to prioritize and manage aircraft. These strips can be reordered within the bays to indicate a change in priority of aircraft sequence. In this study, a similar system was designed for patient tracking. We compared the ATC model and traditional casualty tracking methods of paper and clipboard in 18 four-hour casualty scenarios, each with 5 to 30 mock casualties. The experimental and control groups were alternated to maximize exposure and minimize training effects. Results were analyzed with Mann-Whitney statistical analysis with p value < 0.05 (two-sided). The ATC method had significantly (p = 0.017) fewer errors in critical patient data (eg, name, social security number, diagnosis). Specifically, the ATC method better tracked the mechanism of injury, working diagnosis, and disposition of patients. The ATC method also performed considerably better with patient accountability during mass casualty scenarios. Data strips were comparable with the control method in terms of ease of use. In addition, participants preferred the ATC method to the control (p = 0.003) and preferred using the ATC method (p = 0.003) to traditional methods in the future. The ATC model more effectively tracked patient data with fewer errors when compared with the clipboard method. Application of these principles can enhance trauma management and can have application in civilian and military trauma centers and emergency rooms.
Sanderson, Eleanor; Macdonald-Wallis, Corrie; Davey Smith, George
2018-01-01
Abstract Background Negative control exposure studies are increasingly being used in epidemiological studies to strengthen causal inference regarding an exposure-outcome association when unobserved confounding is thought to be present. Negative control exposure studies contrast the magnitude of association of the negative control, which has no causal effect on the outcome but is associated with the unmeasured confounders in the same way as the exposure, with the magnitude of the association of the exposure with the outcome. A markedly larger effect of the exposure on the outcome than the negative control on the outcome strengthens inference that the exposure has a causal effect on the outcome. Methods We investigate the effect of measurement error in the exposure and negative control variables on the results obtained from a negative control exposure study. We do this in models with continuous and binary exposure and negative control variables using analysis of the bias of the estimated coefficients and Monte Carlo simulations. Results Our results show that measurement error in either the exposure or negative control variables can bias the estimated results from the negative control exposure study. Conclusions Measurement error is common in the variables used in epidemiological studies; these results show that negative control exposure studies cannot be used to precisely determine the size of the effect of the exposure variable, or adequately adjust for unobserved confounding; however, they can be used as part of a body of evidence to aid inference as to whether a causal effect of the exposure on the outcome is present. PMID:29088358
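The logic of the bias analysis can be illustrated with a small Monte Carlo sketch in the spirit of the paper's simulations; the coefficients and error variances below are arbitrary assumptions, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_slope(x, y):
    x = x - x.mean()
    return (x @ (y - y.mean())) / (x @ x)

n, reps, beta = 5000, 500, 0.5
b_exp, b_neg = [], []
for _ in range(reps):
    u = rng.normal(size=n)                 # unmeasured confounder
    x = u + rng.normal(size=n)             # true exposure
    w = u + rng.normal(size=n)             # negative control: no effect on y
    y = beta * x + u + rng.normal(size=n)  # outcome
    x_err = x + rng.normal(scale=1.0, size=n)   # classical measurement error
    w_err = w + rng.normal(scale=1.0, size=n)
    b_exp.append(ols_slope(x_err, y))
    b_neg.append(ols_slope(w_err, y))

print("exposure-outcome slope (causal 0.5 + confounding):", np.mean(b_exp))
print("negative-control-outcome slope (confounding only):", np.mean(b_neg))
```

Both associations are distorted by confounding and differently attenuated by measurement error, which is why the contrast supports qualitative rather than precise quantitative inference.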
Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.
2017-07-14
A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed "error indicators" (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a "local" regression model to predict the time-instantaneous error within each identified region of feature space. We consider two uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
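A minimal sketch of the framework's two regression steps, locality via clustering followed by local random forests, using synthetic "error indicators" in place of the reservoir-simulation features:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 20))          # "error indicators" (features)
err = np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 2 + 0.05 * rng.normal(size=2000)

# Step 1: determine regression-model locality by clustering feature space.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Step 2: fit one "local" regression model per identified region.
local = {k: RandomForestRegressor(n_estimators=200, random_state=0)
            .fit(X[km.labels_ == k], err[km.labels_ == k])
         for k in range(3)}

# Use (1): correct a hypothetical surrogate QoI value at a new time instance.
x_new = rng.normal(size=(1, 20))
e_hat = local[km.predict(x_new)[0]].predict(x_new)[0]
q_surrogate = 3.2                        # placeholder surrogate QoI value
print("corrected QoI:", q_surrogate + e_hat)
```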
Luijten, Maartje; Machielsen, Marise W.J.; Veltman, Dick J.; Hester, Robert; de Haan, Lieuwe; Franken, Ingmar H.A.
2014-01-01
Background Several current theories emphasize the role of cognitive control in addiction. The present review evaluates neural deficits in the domains of inhibitory control and error processing in individuals with substance dependence and in those showing excessive addiction-like behaviours. The combined evaluation of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) findings in the present review offers unique information on neural deficits in addicted individuals. Methods We selected 19 ERP and 22 fMRI studies using stop-signal, go/no-go or Flanker paradigms based on a search of PubMed and Embase. Results The most consistent findings in addicted individuals relative to healthy controls were lower N2, error-related negativity and error positivity amplitudes as well as hypoactivation in the anterior cingulate cortex (ACC), inferior frontal gyrus and dorsolateral prefrontal cortex. These neural deficits, however, were not always associated with impaired task performance. With regard to behavioural addictions, some evidence has been found for similar neural deficits; however, studies are scarce and results are not yet conclusive. Differences among the major classes of substances of abuse were identified and involve stronger neural responses to errors in individuals with alcohol dependence versus weaker neural responses to errors in other substance-dependent populations. Limitations Task design and analysis techniques vary across studies, thereby reducing comparability among studies and the potential of clinical use of these measures. Conclusion Current addiction theories were supported by identifying consistent abnormalities in prefrontal brain function in individuals with addiction. An integrative model is proposed, suggesting that neural deficits in the dorsal ACC may constitute a hallmark neurocognitive deficit underlying addictive behaviours, such as loss of control. PMID:24359877
NASA Technical Reports Server (NTRS)
Groves, Curtis Edward
2014-01-01
Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions. The method accounts for all uncertainty terms from both numerical and input variables. Objective three is to compile a table of uncertainty parameters that could be used to estimate the error in a Computational Fluid Dynamics model of the Environmental Control System/spacecraft system. Previous studies have looked at the uncertainty in a Computational Fluid Dynamics model for a single output variable at a single point, for example the re-attachment length of a backward-facing step. For the flow regime being analyzed (turbulent, three-dimensional, incompressible), the error at a single point can propagate into the solution both via flow physics and numerical methods. Calculating the uncertainty in using Computational Fluid Dynamics to accurately predict airflow speeds around encapsulated spacecraft is imperative to the success of future missions.
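One standard realization of a three-grid error bar is Richardson extrapolation with a grid convergence index (GCI); whether the cited methodology uses exactly these formulas is an assumption, as is the conventional safety factor Fs = 1.25:

```python
import math

def gci(f_fine, f_med, f_coarse, r=2.0, fs=1.25):
    """Observed order of accuracy and grid convergence index on the fine grid,
    from solutions on three systematically refined grids (refinement ratio r)."""
    p = math.log(abs((f_coarse - f_med) / (f_med - f_fine))) / math.log(r)
    f_exact = f_fine + (f_fine - f_med) / (r ** p - 1)   # Richardson estimate
    gci_fine = fs * abs((f_fine - f_med) / f_fine) / (r ** p - 1)
    return p, f_exact, gci_fine

# Hypothetical airflow speed (m/s) at one probe location, coarse -> fine grid:
p, f_ex, u = gci(10.32, 10.44, 10.98)
print(f"observed order p = {p:.2f}, extrapolated = {f_ex:.2f} m/s, GCI = {100*u:.2f}%")
```

The GCI at each probe location gives the "error bar" on the predicted airflow speed; input-variable uncertainty would be added on top of this numerical term.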
NASA Technical Reports Server (NTRS)
Groves, Curtis E.
2013-01-01
Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This proposal describes an approach to validate the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft. The research described here is absolutely cutting edge. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. The proposed research project includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions. The method accounts for all uncertainty terms from both numerical and input variables. Objective three is to compile a table of uncertainty parameters that could be used to estimate the error in a Computational Fluid Dynamics model of the Environmental Control System/spacecraft system. Previous studies have looked at the uncertainty in a Computational Fluid Dynamics model for a single output variable at a single point, for example the re-attachment length of a backward-facing step. To date, the author is the only person to look at the uncertainty in the entire computational domain. For the flow regime being analyzed (turbulent, three-dimensional, incompressible), the error at a single point can propagate into the solution both via flow physics and numerical methods. Calculating the uncertainty in using Computational Fluid Dynamics to accurately predict airflow speeds around encapsulated spacecraft is imperative to the success of future missions.
Automatically generated acceptance test: A software reliability experiment
NASA Technical Reports Server (NTRS)
Protzel, Peter W.
1988-01-01
This study presents results of a software reliability experiment investigating the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multi-version experiment previously conducted at the NASA Langley Research Center, in which the launch interceptor problem is used as a model. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations, and for the employment of this test method on other applications.
Slotnick, Scott D
2017-07-01
Analysis of functional magnetic resonance imaging (fMRI) data typically involves over one hundred thousand independent statistical tests; therefore, it is necessary to correct for multiple comparisons to control familywise error. In a recent paper, Eklund, Nichols, and Knutsson used resting-state fMRI data to evaluate commonly employed methods to correct for multiple comparisons and reported unacceptable rates of familywise error. Eklund et al.'s analysis was based on the assumption that resting-state fMRI data reflect null data; however, their 'null data' actually reflected default network activity that inflated familywise error. As such, Eklund et al.'s results provide no basis to question the validity of the thousands of published fMRI studies that have corrected for multiple comparisons or the commonly employed methods to correct for multiple comparisons.
Suppressing relaxation in superconducting qubits by quasiparticle pumping.
Gustavsson, Simon; Yan, Fei; Catelani, Gianluigi; Bylander, Jonas; Kamal, Archana; Birenbaum, Jeffrey; Hover, David; Rosenberg, Danna; Samach, Gabriel; Sears, Adam P; Weber, Steven J; Yoder, Jonilyn L; Clarke, John; Kerman, Andrew J; Yoshihara, Fumiki; Nakamura, Yasunobu; Orlando, Terry P; Oliver, William D
2016-12-23
Dynamical error suppression techniques are commonly used to improve coherence in quantum systems. They reduce dephasing errors by applying control pulses designed to reverse erroneous coherent evolution driven by environmental noise. However, such methods cannot correct for irreversible processes such as energy relaxation. We investigate a complementary, stochastic approach to reducing errors: Instead of deterministically reversing the unwanted qubit evolution, we use control pulses to shape the noise environment dynamically. In the context of superconducting qubits, we implement a pumping sequence to reduce the number of unpaired electrons (quasiparticles) in close proximity to the device. A 70% reduction in the quasiparticle density results in a threefold enhancement in qubit relaxation times and a comparable reduction in coherence variability. Copyright © 2016, American Association for the Advancement of Science.
Robust adaptive kinematic control of redundant robots
NASA Technical Reports Server (NTRS)
Tarokh, M.; Zuck, D. D.
1992-01-01
The paper presents a general method for the resolution of redundancy that combines the Jacobian pseudoinverse and augmentation approaches. A direct adaptive control scheme is developed to generate joint angle trajectories for achieving desired end-effector motion as well as additional user defined tasks. The scheme ensures arbitrarily small errors between the desired and the actual motion of the manipulator. Explicit bounds on the errors are established that are directly related to the mismatch between actual and estimated pseudoinverse Jacobian matrix, motion velocity and the controller gain. It is shown that the scheme is tolerant of the mismatch and consequently only infrequent pseudoinverse computations are needed during a typical robot motion. As a result, the scheme is computationally fast, and can be implemented for real-time control of redundant robots. A method is incorporated to cope with the robot singularities allowing the manipulator to get very close or even pass through a singularity while maintaining a good tracking performance and acceptable joint velocities. Computer simulations and experimental results are provided in support of the theoretical developments.
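The pseudoinverse-plus-augmentation idea can be sketched with a damped pseudoinverse (bounded near singularities, as the abstract's singularity handling requires) and a null-space term for an additional user-defined task. The 3-link planar arm, gains, and secondary task below are illustrative assumptions, not the paper's scheme:

```python
import numpy as np

def jacobian(q, l=(1.0, 1.0, 1.0)):
    """2x3 position Jacobian of a planar 3R arm (redundant for a 2-D task)."""
    c = np.cumsum(q)                     # absolute link angles
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -sum(l[j] * np.sin(c[j]) for j in range(i, 3))
        J[1, i] =  sum(l[j] * np.cos(c[j]) for j in range(i, 3))
    return J

def qdot(q, xdot, k_null=0.1, lam=0.01):
    J = jacobian(q)
    # Damped pseudoinverse J^T (J J^T + lam^2 I)^-1 stays bounded at singularities.
    Jinv = J.T @ np.linalg.solve(J @ J.T + lam ** 2 * np.eye(2), np.eye(2))
    # Null-space motion toward a secondary goal (here: joint centering).
    z = -k_null * q
    return Jinv @ xdot + (np.eye(3) - Jinv @ J) @ z

q = np.array([0.3, 0.5, -0.2])
print(qdot(q, np.array([0.05, 0.0])))    # joint rates for a desired tip velocity
```

In the paper's adaptive scheme the exact pseudoinverse is replaced by an infrequently updated estimate; the sketch shows only the kinematic resolution it adapts around.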
Modeling and controller design of a 6-DOF precision positioning system
NASA Astrophysics Data System (ADS)
Cai, Kunhai; Tian, Yanling; Liu, Xianping; Fatikow, Sergej; Wang, Fujun; Cui, Liangyu; Zhang, Dawei; Shirinzadeh, Bijan
2018-05-01
A key hurdle to meeting the needs of micro/nano manipulation in some complex cases is the inadequate workspace and flexibility of the operation ends. This paper presents a 6-degree-of-freedom (DOF) serial-parallel precision positioning system, which consists of two compact 3-DOF parallel mechanisms. Each parallel mechanism is driven by three piezoelectric actuators (PEAs) and guided by three symmetric T-shape hinges and three elliptical flexible hinges, respectively. This extends the workspace and improves the flexibility of the operation ends. The proposed system can be assembled easily, which greatly reduces assembly errors and improves positioning accuracy. In addition, the kinematic and dynamic models of the 6-DOF system are established. Furthermore, in order to reduce the tracking error and improve the positioning accuracy, a Discrete-time Model Predictive Controller (DMPC) is applied as an effective control method, and its effectiveness is verified. Finally, a tracking experiment is performed to verify the tracking performance of the 6-DOF stage.
High-accuracy resolver-to-digital conversion via phase locked loop based on PID controller
NASA Astrophysics Data System (ADS)
Li, Yaoling; Wu, Zhong
2018-03-01
The problem of resolver-to-digital conversion (RDC) is transformed into a problem of angle tracking control, and a phase locked loop (PLL) method based on a PID controller is proposed in this paper. The controller comprises a typical PI controller plus an incomplete differential, which avoids the amplification of higher-frequency noise components by filtering the phase detection error with a low-pass filter. Compared with conventional ones, the proposed PLL method makes the converter a type-III system, and thus the conversion accuracy can be improved. Experimental results demonstrate the effectiveness of the proposed method.
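A simplified discrete-time sketch of such an angle-tracking loop shows the structure: PI action on the phase-detector error plus a low-pass-filtered ("incomplete") derivative. This toy loop is lower order than the paper's type-III converter, and all gains are assumed tunings, not the paper's:

```python
import numpy as np

dt, T = 1e-4, 0.2
kp, ki, kd, tau = 400.0, 3.0e4, 0.05, 5e-4   # assumed PID gains, filter time const.
theta_hat = integ = dstate = 0.0
errs = []
for k in range(int(T / dt)):
    t = k * dt
    theta = 20.0 * t + 0.01 * np.sin(2 * np.pi * 50 * t)   # true shaft angle (rad)
    # Phase detector from resolver channels: sin(theta - theta_hat)
    e = np.sin(theta - theta_hat)
    integ += ki * e * dt
    # Incomplete differential: derivative of e passed through a 1st-order filter
    d_raw = (e - dstate) / tau
    dstate += d_raw * dt
    u = kp * e + integ + kd * d_raw
    theta_hat += u * dt                  # loop integrator -> digital angle estimate
    errs.append(theta - theta_hat)
print("late-tracking max |error| (rad):", np.max(np.abs(errs[-200:])))
```

The integrator in the controller drives the error to a constant-velocity (ramp) input to zero, which is the property the extra integrator of a type-III loop extends to constant acceleration.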
Fisher, Moria E; Huang, Felix C; Wright, Zachary A; Patton, James L
2014-01-01
Manipulation of error feedback has been of great interest to recent studies in motor control and rehabilitation. Typically, motor adaptation is shown as a change in performance with a single scalar metric for each trial, yet such an approach might overlook details about how error evolves through the movement. We believe that statistical distributions of movement error through the extent of the trajectory can reveal unique patterns of adaptation and possibly clues to how the motor system processes information about error. This paper describes different possible ordinate domains, focusing on representations in time and state space, used to quantify reaching errors. We hypothesized that the domain with the lowest variability would lead to a predictive model of reaching error with the highest accuracy. We show that errors represented in the time domain exhibit the least variance and allow for the most accurate predictive models of reaching error. Such predictive models will give rise to more specialized methods of robotic feedback and improve previous techniques of error augmentation.
Stevens, Allen D; Hernandez, Caleb; Jones, Seth; Moreira, Maria E; Blumen, Jason R; Hopkins, Emily; Sande, Margaret; Bakes, Katherine; Haukoos, Jason S
2015-11-01
Medication dosing errors remain commonplace and may result in potentially life-threatening outcomes, particularly for pediatric patients where dosing often requires weight-based calculations. Novel medication delivery systems that may reduce dosing errors resonate with national healthcare priorities. Our goal was to evaluate novel, prefilled medication syringes labeled with color-coded volumes corresponding to the weight-based dosing of the Broselow Tape, compared to conventional medication administration, in simulated prehospital pediatric resuscitation scenarios. We performed a prospective, block-randomized, cross-over study, where 10 full-time paramedics each managed two simulated pediatric arrests in situ using either prefilled, color-coded syringes (intervention) or their own medication kits stocked with conventional ampoules (control). Each paramedic was paired with two emergency medical technicians to provide ventilations and compressions as directed. The ambulance patient compartment and the intravenous medication port were video recorded. Data were extracted from video review by blinded, independent reviewers. Median time to delivery of all doses for the intervention and control groups was 34 (95% CI: 28-39) seconds and 42 (95% CI: 36-51) seconds, respectively (difference=9 [95% CI: 4-14] seconds). Using the conventional method, 62 doses were administered with 24 (39%) critical dosing errors; using the prefilled, color-coded syringe method, 59 doses were administered with 0 (0%) critical dosing errors (difference=39%, 95% CI: 13-61%). A novel color-coded, prefilled syringe decreased time to medication administration and significantly reduced critical dosing errors by paramedics during simulated prehospital pediatric resuscitations. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)
2002-01-01
We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W(sub N) spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
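The offline/online split is easy to illustrate on a toy affinely parameterized problem. The sketch below uses an assumed 1-D equation (-u'' + mu*u = f with homogeneous Dirichlet conditions), an assumed snapshot set, and an assumed output functional, and omits the a posteriori error bounds:

```python
import numpy as np

n = 200; h = 1.0 / (n + 1)
A0 = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1)) / h ** 2       # stiffness for -u''
M = np.eye(n)                                        # operator for the mu*u term
f = np.ones(n)                                       # source
l = h * np.ones(n)                                   # output: integral of u

def solve_full(mu):
    """High-fidelity solve: affine operator A(mu) = A0 + mu*M."""
    return np.linalg.solve(A0 + mu * M, f)

# Offline: snapshots at selected parameter points span the reduced space W_N.
mus_train = [0.1, 1.0, 10.0, 100.0]
V, _ = np.linalg.qr(np.column_stack([solve_full(m) for m in mus_train]))
A0N, MN, fN, lN = V.T @ A0 @ V, V.T @ M @ V, V.T @ f, V.T @ l

def solve_rb(mu):
    """Online: only an N x N solve (N = 4 here), independent of the grid size n."""
    return lN @ np.linalg.solve(A0N + mu * MN, fN)

mu = 7.3
print("full output:", l @ solve_full(mu), "  RB output:", solve_rb(mu))
```

The affine parameter dependence is what lets the reduced matrices A0N and MN be precomputed once offline, so each online evaluation costs only the small solve.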
Model and algorithm based on accurate realization of dwell time in magnetorheological finishing.
Song, Ci; Dai, Yifan; Peng, Xiaoqiang
2010-07-01
Classically, a dwell-time map is created with a method such as deconvolution or numerical optimization, with the input being a surface error map and influence function. This dwell-time map is the numerical optimum for minimizing residual form error, but it takes no account of machine dynamics limitations. The map is then reinterpreted as machine speeds and accelerations or decelerations in a separate operation. In this paper we consider combining the two methods in a single optimization by the use of a constrained nonlinear optimization model, which regards both the two-norm of the surface residual error and the dwell-time gradient as an objective function. This enables machine dynamic limitations to be properly considered within the scope of the optimization, reducing both residual surface error and polishing times. Further simulations are introduced to demonstrate the feasibility of the model, and the velocity map is reinterpreted from the dwell time, meeting the requirement of velocity and the limitations of accelerations or decelerations. Indeed, the model and algorithm can also apply to other computer-controlled subaperture methods.
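The combined optimization can be sketched in one dimension as a bound-constrained least-squares problem whose objective stacks the surface residual and a weighted dwell-time gradient (standing in for the acceleration limit). The Gaussian influence function, surface error, and weight below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import lsq_linear

n = 120
xs = np.arange(n)
C = np.exp(-0.5 * ((xs - xs[:, None]) / 4.0) ** 2)   # removal (influence) matrix
target = 50e-9 * (1 + np.sin(2 * np.pi * xs / n))    # 1-D surface error map (m)

D = (np.eye(n, k=1) - np.eye(n))[:-1]                # dwell-time gradient operator
w = 1e-3                                             # gradient weight (assumed)
A = np.vstack([C, np.sqrt(w) * D])                   # stacked objective
b = np.concatenate([target, np.zeros(n - 1)])

sol = lsq_linear(A, b, bounds=(0, np.inf))           # dwell time must be >= 0
t = sol.x
print("rms residual (nm):", 1e9 * np.sqrt(np.mean((target - C @ t) ** 2)))
print("max dwell-time step:", np.abs(np.diff(t)).max())
```

Increasing w trades residual form error for a smoother dwell-time map, i.e., gentler machine accelerations, which is the trade-off the model formalizes.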
Trajectory specification for high capacity air traffic control
NASA Technical Reports Server (NTRS)
Paielli, Russell A. (Inventor)
2010-01-01
Method and system for analyzing and processing information on one or more aircraft flight paths, using a four-dimensional coordinate system including three Cartesian or equivalent coordinates (x, y, z) and a fourth coordinate δ that corresponds to a distance estimated along a reference flight path to a nearest reference path location corresponding to a present location of the aircraft. Use of the coordinate δ, rather than elapsed time t, avoids coupling of along-track error into aircraft altitude and reduces effects of errors on an aircraft landing site. Along-track, cross-track and/or altitude errors are estimated and compared with a permitted error bounding space surrounding the reference flight path.
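Geometrically, δ and the associated errors reduce to a nearest-point projection onto the reference path. A planar toy sketch follows (no Earth model, unsigned cross-track; the path and aircraft position are made up):

```python
import numpy as np

def path_errors(pos, path):
    """pos: aircraft (x, y, z); path: (N, 3) polyline of the reference path.
    Returns (delta, cross-track distance, altitude error)."""
    best = (np.inf, 0.0, None)
    s0 = 0.0
    for a, b in zip(path[:-1], path[1:]):
        seg = b[:2] - a[:2]                      # horizontal segment
        L = np.linalg.norm(seg)
        u = np.clip(np.dot(pos[:2] - a[:2], seg) / L ** 2, 0.0, 1.0)
        foot = a + u * (b - a)                   # nearest point on this segment
        d = np.linalg.norm(pos[:2] - foot[:2])   # horizontal miss distance
        if d < best[0]:
            best = (d, s0 + u * L, foot)         # delta = arc length to foot
        s0 += L
    cross_track, delta, foot = best[0], best[1], best[2]
    return delta, cross_track, pos[2] - foot[2]

path = np.array([[0, 0, 1000], [5000, 0, 1200], [10000, 4000, 1500]], float)
print(path_errors(np.array([5200.0, 150.0, 1230.0]), path))
```

Because δ is defined by the aircraft's actual position, a temporal delay shifts δ rather than producing a spurious altitude error, which is the decoupling the patent exploits.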
Direct discretization of planar div-curl problems
NASA Technical Reports Server (NTRS)
Nicolaides, R. A.
1989-01-01
A control volume method is proposed for planar div-curl systems. The method is independent of potential and least squares formulations, and works directly with the div-curl system. The novelty of the technique lies in its use of a single local vector field component and two control volumes rather than the other way around. A discrete vector field theory comes quite naturally from this idea and is developed. Error estimates are proved for the method, and other ramifications investigated.
Backward-gazing method for heliostats shape errors measurement and calibration
NASA Astrophysics Data System (ADS)
Coquand, Mathieu; Caliot, Cyril; Hénault, François
2017-06-01
The pointing and canting accuracies and the surface shape of the heliostats have a great influence on the efficiency of a solar tower power plant. At the industrial scale, one of the issues to solve is the time and effort devoted to adjusting the different mirrors of the faceted heliostats, which could take several months with current methods. Accurate control of heliostat tracking requires complicated and costly devices. Thus, methods to adjust the whole field of a plant quickly are essential for the rise of solar tower technology with huge numbers of heliostats. Wavefront detection is widely used in adaptive optics and shape error reconstruction, and such systems can be sources of inspiration for measuring solar facet misalignment and tracking errors. We propose a new method of heliostat characterization inspired by adaptive optics devices. The method observes the brightness distributions on the heliostat's surface from different points of view close to the receiver of the power plant, in order to reconstruct the wavefront of the sunlight reflected by the concentrating surface and thereby determine its errors. The originality of this new method is that it uses the profile of the sun to determine the defects of the mirrors. In addition, the method would be easy to set up and could be implemented without sophisticated apparatus: only four cameras would be needed to perform the acquisitions.
Uncalibrated Three-Dimensional Microrobot Control
2016-05-11
...or "uncalibrated") controller which can dynamically adjust to changes in the operation environment is essential. While many groups have already demonstrated the ability to control a microrobot in three dimensions through magnetically... error squared and drive the microrobot to the desired position [2]. The control signal is computed via a quasi-Newton method operating an...
Eisenberg, Dan T A; Kuzawa, Christopher W; Hayes, M Geoffrey
2015-01-01
Telomere length (TL) is commonly measured using quantitative PCR (qPCR). Although easier than the Southern blot terminal restriction fragment (TRF) TL measurement method, one drawback of qPCR is that it introduces greater measurement error and thus reduces the statistical power of analyses. To address a potential source of measurement error, we consider the effect of well position on qPCR TL measurements. qPCR TL data from 3,638 people run on a Bio-Rad iCycler iQ are reanalyzed here. To evaluate measurement validity, correspondence with TRF, age, and between mother and offspring are examined. First, we present evidence for systematic variation in qPCR TL measurements in relation to thermocycler well position. Controlling for these well-position effects consistently improves measurement validity and yields estimated improvements in statistical power equivalent to increasing sample sizes by 16%. We additionally evaluated the linearity of the relationships between telomere and single-copy-gene control amplicons and between qPCR and TRF measures. We find that, unlike some previous reports, our data exhibit linear relationships. We introduce the standard error in percent, a superior method for quantifying measurement error as compared to the commonly used coefficient of variation. Using this measure, we find that excluding samples with high measurement error does not improve measurement validity in our study. Future studies using block-based thermocyclers should consider well-position effects. Since additional information can be gleaned from well-position corrections, rerunning analyses of previous results with well-position correction could serve as an independent test of the validity of these results. © 2015 Wiley Periodicals, Inc.
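One simple way to realize a well-position correction, sketched here on synthetic plate data, is to estimate a fixed effect per well and subtract it. The 96-well layout, effect sizes, and column names are assumptions, not the study's actual model:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
wells = [f"{r}{c}" for r in "ABCDEFGH" for c in range(1, 13)]
well_bias = dict(zip(wells, rng.normal(0, 0.05, len(wells))))  # systematic offsets

df = pd.DataFrame({
    "well": rng.choice(wells, 3000),
    "true_ts": rng.normal(1.0, 0.15, 3000),   # true telomere/single-copy ratio
})
df["ts"] = df["true_ts"] + df["well"].map(well_bias) + rng.normal(0, 0.03, 3000)

# Fixed-effect estimate: each well's mean deviation from the grand mean.
effect = df.groupby("well")["ts"].mean() - df["ts"].mean()
df["ts_corrected"] = df["ts"] - df["well"].map(effect)

for col in ("ts", "ts_corrected"):
    r = np.corrcoef(df[col], df["true_ts"])[0, 1]
    print(col, "correlation with truth:", round(r, 3))
```

In real data the well effect must be estimated across many plates (so that biological variation averages out within each well), but the correction step is the same subtraction.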
InSAR Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (QUAD)
NASA Astrophysics Data System (ADS)
Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.
2018-04-01
Unwrapping errors are common in InSAR processing and can seriously degrade the accuracy of monitoring results. Based on a gross error correction method, quasi-accurate detection (QUAD), a method for automatic correction of unwrapping errors is established in this paper. The method identifies and corrects unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented, and the method is then compared with the L1-norm method on simulated data. Results show that both methods can effectively suppress unwrapping errors when the proportion of unwrapping errors is low, and that the two methods complement each other when the proportion is relatively high. Finally, the method is tested on real SAR data for phase unwrapping error correction. Results show that the new method can successfully correct phase unwrapping errors in practical applications.
A novel body frame based approach to aerospacecraft attitude tracking.
Ma, Carlos; Chen, Michael Z Q; Lam, James; Cheung, Kie Chung
2017-09-01
In the common practice of designing an attitude tracker for an aerospacecraft, one transforms the Newton-Euler rotation equations to obtain the dynamic equations of some chosen inertial-frame-based attitude metrics, such as Euler angles and unit quaternions. A Lyapunov approach is then used to design a controller which ensures asymptotic convergence of the attitude to the desired orientation. Although this design methodology is pretty standard, it usually involves singularity-prone coordinate transformations which complicate the analysis process and controller design. A new, singularity-free error feedback method is proposed in this paper to provide simple and intuitive stability analysis and controller synthesis. This new body-frame-based method uses the concept of the Euler axis and angle to generate the smallest error angles from a body-frame perspective, without coordinate transformations. Global tracking convergence is illustrated with the use of a feedback-linearizing PD tracker, a sliding mode controller, and a model reference adaptive controller. Experimental results are also obtained on a quadrotor platform with unknown system parameters and disturbances, using a boundary-layer-approximated sliding mode controller, a PIDD controller, and a unit sliding mode controller. Significant tracking quality is attained. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
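The body-frame error generation can be sketched directly from rotation matrices: the relative rotation R_e = R_d^T R yields the Euler axis/angle error vector without any inertial-frame coordinate transformation. A minimal sketch (valid away from a 180-degree error, where the axis extraction below degenerates):

```python
import numpy as np

def hat_inv(S):
    """Recover a vector from a skew-symmetric matrix."""
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

def attitude_error(R_d, R):
    """Smallest body-frame error vector (Euler axis times angle)."""
    R_e = R_d.T @ R                      # relative (error) rotation
    angle = np.arccos(np.clip((np.trace(R_e) - 1) / 2, -1.0, 1.0))
    if angle < 1e-9:
        return np.zeros(3)
    axis = hat_inv(R_e - R_e.T) / (2 * np.sin(angle))
    return angle * axis

def rotz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

print(attitude_error(rotz(0.1), rotz(0.3)))  # ~ [0, 0, 0.2] rad
```

A PD-type tracker then simply feeds this error vector (and the body-rate error) back as torque, which is the structure the paper's controllers share.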
Comparison of Different Attitude Correction Models for ZY-3 Satellite Imagery
NASA Astrophysics Data System (ADS)
Song, Wenping; Liu, Shijie; Tong, Xiaohua; Niu, Changling; Ye, Zhen; Zhang, Han; Jin, Yanmin
2018-04-01
ZY-3, launched in 2012, is China's first civilian high-resolution stereo mapping satellite. This paper analyzes the positioning errors of ZY-3 imagery and compensates for them to improve geo-positioning accuracy using different correction models, including attitude quaternion correction, attitude angle offset correction, and attitude angle linear correction. The experimental results reveal that systematic errors exist in the ZY-3 attitude observations and that the positioning accuracy can be improved by attitude correction with the aid of ground control. There is no significant difference between the results of the attitude quaternion correction method and the attitude angle correction method. However, the attitude angle offset correction model produced steadier improvements than the linear correction model when only limited ground control points are available for a single scene.
Berwid, Olga G.; Halperin, Jeffrey M.; Johnson, Ray E.; Marks, David J.
2013-01-01
Background Attention-Deficit/Hyperactivity Disorder has been associated with deficits in self-regulatory cognitive processes, some of which are thought to lie at the heart of the disorder. Slowing of reaction times (RTs) for correct responses following errors made during decision tasks has been interpreted as an indication of intact self-regulatory functioning and has been shown to be attenuated in school-aged children with ADHD. This study attempted to examine whether ADHD symptoms are associated with an early-emerging deficit in post-error slowing. Method A computerized two-choice RT task was administered to an ethnically diverse sample of preschool-aged children classified as either ‘control’ (n = 120) or ‘hyperactive/inattentive’ (HI; n = 148) using parent- and teacher-rated ADHD symptoms. Analyses were conducted to determine whether HI preschoolers exhibit a deficit in this self-regulatory ability. Results HI children exhibited reduced post-error slowing relative to controls on the trials selected for analysis. Supplementary analyses indicated that this may have been due to a reduced proportion of trials following errors on which HI children slowed rather than to a reduction in the absolute magnitude of slowing on all trials following errors. Conclusions High levels of ADHD symptoms in preschoolers may be associated with a deficit in error processing as indicated by post-error slowing. The results of supplementary analyses suggest that this deficit is perhaps more a result of failures to perceive errors than of difficulties with executive control. PMID:23387525
About problematic peculiarities of Fault Tolerance digital regulation organization
NASA Astrophysics Data System (ADS)
Rakov, V. I.; Zakharova, O. V.
2018-05-01
Solutions are offered in three directions to the problems of assessing the serviceability of regulation loops and of preventing situations in which it is lost. The first direction is the development of methods for representing the regulation loop as a union of diffuse components, together with algorithmic tools for building serviceability predicates both for the individual components and for the regulation loop as a whole. The second direction is the development of methods for fault-tolerant redundancy in the complex assessment of the current values of the control actions, the closure errors, and the regulated parameters. The third direction is the development of methods for comparing the processes of change of the control actions, closure errors, and regulated parameters with their standard models or their neighborhoods. This direction allows one to develop methods and algorithmic tools aimed at preventing the loss of serviceability and effectiveness not only of a separate digital regulator, but of the whole fault-tolerant regulation complex.
Data quality assurance and control in cognitive research: Lessons learned from the PREDICT-HD study.
Westervelt, Holly James; Bernier, Rachel A; Faust, Melanie; Gover, Mary; Bockholt, H Jeremy; Zschiegner, Roland; Long, Jeffrey D; Paulsen, Jane S
2017-09-01
We discuss the strategies employed in data quality control and quality assurance for the cognitive core of Neurobiological Predictors of Huntington's Disease (PREDICT-HD), a long-term observational study of over 1,000 participants with prodromal Huntington disease. In particular, we provide details regarding the training and continual evaluation of cognitive examiners, methods for error corrections, and strategies to minimize errors in the data. We present five important lessons learned to help other researchers avoid certain assumptions that could potentially lead to inaccuracies in their cognitive data. Copyright © 2017 John Wiley & Sons, Ltd.
Design considerations for case series models with exposure onset measurement error.
Mohammed, Sandra M; Dalrymple, Lorien S; Sentürk, Damla; Nguyen, Danh V
2013-02-28
The case series model allows for estimation of the relative incidence of events, such as cardiovascular events, within a pre-specified time window after an exposure, such as an infection. The method requires only cases (individuals with events) and controls for all fixed/time-invariant confounders. The measurement error case series model extends the original case series model to handle imperfect data, where the timing of an infection (exposure) is not known precisely. In this work, we propose a method for power/sample size determination for the measurement error case series model. Extensive simulation studies are used to assess the accuracy of the proposed sample size formulas. We also examine the magnitude of the relative loss of power due to exposure onset measurement error, compared with the ideal situation where the time of exposure is measured precisely. To facilitate the design of case series studies, we provide publicly available web-based tools for determining power/sample size for both the measurement error case series model as well as the standard case series model. Copyright © 2012 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Liu, Xing-fa; Cen, Ming
2007-12-01
The neural network system error correction method is more precise than the least-squares and spherical harmonic function correction methods, and its accuracy mainly depends on the structure of the network. Analysis and simulation show that both the BP and the RBF neural network correction methods achieve high correction accuracy; for small training samples, the RBF method is preferable to the BP method when training speed and network scale are taken into account.
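For illustration, a minimal RBF-network correction can be fitted by least squares with Gaussian units centered at the training points. The synthetic error surface, kernel width, and sample sizes below are assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)
train = rng.uniform(0, 1, (30, 2))                   # (az, el), normalized
err = 0.1 * np.sin(3 * train[:, 0]) * train[:, 1]    # sampled "system error"

def rbf_design(X, centers, sigma=0.2):
    """Gaussian RBF design matrix: one unit per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Fit output weights by linear least squares (centers fixed at training points).
w, *_ = np.linalg.lstsq(rbf_design(train, train), err, rcond=None)

test = rng.uniform(0, 1, (200, 2))
pred = rbf_design(test, train) @ w                   # predicted correction
true = 0.1 * np.sin(3 * test[:, 0]) * test[:, 1]
print("rms correction residual:", np.sqrt(np.mean((pred - true) ** 2)))
```

With fixed centers the RBF fit reduces to one linear solve, which is why it trains much faster than backpropagation on small samples.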
14 CFR 35.23 - Propeller control system.
Code of Federal Regulations, 2013 CFR
2013-01-01
... between operating modes, performs the functions defined by the applicant throughout the declared operating... system imbedded software must be designed and implemented by a method approved by the Administrator that... software errors. (d) The propeller control system must be designed and constructed so that the failure or...
14 CFR 35.23 - Propeller control system.
Code of Federal Regulations, 2012 CFR
2012-01-01
... between operating modes, performs the functions defined by the applicant throughout the declared operating... system imbedded software must be designed and implemented by a method approved by the Administrator that... software errors. (d) The propeller control system must be designed and constructed so that the failure or...
14 CFR 35.23 - Propeller control system.
Code of Federal Regulations, 2014 CFR
2014-01-01
... between operating modes, performs the functions defined by the applicant throughout the declared operating... system imbedded software must be designed and implemented by a method approved by the Administrator that... software errors. (d) The propeller control system must be designed and constructed so that the failure or...
Improving the quality of marine geophysical track line data: Along-track analysis
NASA Astrophysics Data System (ADS)
Chandler, Michael T.; Wessel, Paul
2008-02-01
We have examined 4918 track line geophysics cruises archived at the U.S. National Geophysical Data Center (NGDC) using comprehensive error checking methods. Each cruise was checked for observation outliers, excessive gradients, metadata consistency, and general agreement with satellite altimetry-derived gravity and predicted bathymetry grids. Thresholds for error checking were determined empirically through inspection of histograms for all geophysical values, gradients, and differences with gridded data sampled along ship tracks. Robust regression was used to detect systematic scale and offset errors found by comparing ship bathymetry and free-air anomalies to the corresponding values from global grids. We found many recurring error types in the NGDC archive, including poor navigation, inappropriately scaled or offset data, excessive gradients, and extended offsets in depth and gravity when compared to global grids. While ~5-10% of bathymetry and free-air gravity records fail our conservative tests, residual magnetic errors may exceed twice this proportion. These errors hinder the effective use of the data and may lead to mistakes in interpretation. To enable the removal of gross errors without over-writing original cruise data, we developed an errata system that concisely reports all errors encountered in a cruise. With such errata files, scientists may share cruise corrections, thereby preventing redundant processing. We have implemented these quality control methods in the modified MGD77 supplement to the Generic Mapping Tools software suite.
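The scale/offset screening can be sketched as a robust (Huber) regression of ship depths on grid depths sampled along track; a slope far from 1 or a large intercept flags a systematic error. The synthetic fathoms-as-meters mistake and flagging thresholds below are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(4)
grid = rng.uniform(1000, 5000, 800)               # grid-predicted depths (m)
ship = grid / 1.8288 + rng.normal(0, 40, 800)     # fathoms archived as meters
ship[rng.choice(800, 20, replace=False)] += 2000  # a few gross outliers

fit = HuberRegressor().fit(grid.reshape(-1, 1), ship)   # robust to the outliers
slope, offset = fit.coef_[0], fit.intercept_
print(f"slope = {slope:.3f}, offset = {offset:.1f} m")
if abs(slope - 1) > 0.02 or abs(offset) > 50:
    print("flag: systematic scale/offset error (here, a fathom/meter mix-up)")
```

Robust regression matters here precisely because the same cruises that carry scale errors also tend to carry gross outliers that would corrupt an ordinary least-squares fit.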
Catastrophic photometric redshift errors: Weak-lensing survey requirements
Bernstein, Gary; Huterer, Dragan
2010-01-11
We study the sensitivity of weak lensing surveys to the effects of catastrophic redshift errors - cases where the true redshift is misestimated by a significant amount. To compute the biases in cosmological parameters, we adopt an efficient linearized analysis where the redshift errors are directly related to shifts in the weak lensing convergence power spectra. We estimate the number N_spec of unbiased spectroscopic redshifts needed to determine the catastrophic error rate well enough that biases in cosmological parameters are below statistical errors of weak lensing tomography. While the straightforward estimate of N_spec is ~10^6, we find that using only the photometric redshifts with z ≤ 2.5 leads to a drastic reduction in N_spec to ~30,000 while negligibly increasing statistical errors in dark energy parameters. Therefore, the size of spectroscopic survey needed to control catastrophic errors is similar to that previously deemed necessary to constrain the core of the z_s - z_p distribution. We also study the efficacy of the recent proposal to measure redshift errors by cross-correlation between the photo-z and spectroscopic samples. We find that this method requires ~10% a priori knowledge of the bias and stochasticity of the outlier population, and is also easily confounded by lensing magnification bias. In conclusion, the cross-correlation method is therefore unlikely to supplant the need for a complete spectroscopic redshift survey of the source population.
Performance and evaluation of real-time multicomputer control systems
NASA Technical Reports Server (NTRS)
Shin, K. G.
1983-01-01
New performance measures, detailed examples, modeling of error detection process, performance evaluation of rollback recovery methods, experiments on FTMP, and optimal size of an NMR cluster are discussed.
Brown, Julie; Elkington, Jane; Hall, Alexandra; Keay, Lisa; Charlton, Judith L; Hunter, Kate; Koppel, Sjaan; Hayen, Andrew; Bilston, Lynne E
2018-03-07
With long-standing and widespread high rates of errors in child restraint use, there is a need to identify effective methods to address this problem. Information supplied with products at the point of sale may be a potentially efficient delivery point for such a countermeasure. The aim of this study is to establish whether product materials developed using a consumer-driven approach reduce errors in restraint use among purchasers of new child restraint systems. A cluster randomised controlled trial (cRCT) will be conducted. Retail stores (n=22) in the greater Sydney area will be randomised into intervention sites (n=11) and control sites (n=11), stratified by geographical and socioeconomic indicators. Participants (n=836) will enter the study on purchase of a restraint. Outcome measures are errors in installation of the restraint as observed by a trained researcher during a 6-month follow-up home assessment, and adjustment checks made by the parent when the child is placed into the restraint (observed using naturalistic methods). Process evaluation measures will also be collected during the home visit. An intention-to-treat approach will be used for all analyses. Correct use and adjustment checks made by the parent will be compared between control and intervention groups using a logistic regression model. The number of installation errors between groups will be compared using Poisson regression. This cRCT will determine the effectiveness of targeted, consumer-driven information on actual error rates in use of restraints. More broadly, it may provide a best practice model for developing safety product information. ACTRN12617001252303p; Pre-results.
Postural control model interpretation of stabilogram diffusion analysis
NASA Technical Reports Server (NTRS)
Peterka, R. J.
2000-01-01
Collins and De Luca [Collins JJ, De Luca CJ (1993) Exp Brain Res 95: 308-318] introduced a new method known as stabilogram diffusion analysis that provides a quantitative statistical measure of the apparently random variations of center-of-pressure (COP) trajectories recorded during quiet upright stance in humans. This analysis generates a stabilogram diffusion function (SDF) that summarizes the mean square COP displacement as a function of the time interval between COP comparisons. SDFs have a characteristic two-part form that suggests the presence of two different control regimes: a short-term open-loop control behavior and a longer-term closed-loop behavior. This paper demonstrates that a very simple closed-loop control model of upright stance can generate realistic SDFs. The model consists of an inverted pendulum body with torque applied at the ankle joint. This torque includes a random disturbance torque and a control torque. The control torque is a function of the deviation (error signal) between the desired upright body position and the actual body position, and is generated in proportion to the error signal, the derivative of the error signal, and the integral of the error signal [i.e. a proportional, integral and derivative (PID) neural controller]. The control torque is applied with a time delay representing conduction, processing, and muscle activation delays. Variations in the PID parameters and the time delay generate variations in SDFs that mimic real experimental SDFs. This model analysis allows one to interpret experimentally observed changes in SDFs in terms of variations in neural controller and time delay parameters rather than in terms of open-loop versus closed-loop behavior.
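The following Python sketch reproduces the flavor of this model — an inverted pendulum with delayed PID ankle torque driven by a white disturbance torque — and computes the mean-square displacement curve (the SDF). All parameter values are illustrative assumptions, not Peterka's fitted values, and the sway angle is used as a stand-in for the COP.

```python
import numpy as np

def simulate_sway(Kp=1200.0, Kd=350.0, Ki=50.0, delay=0.15,
                  T=60.0, dt=0.001, noise=10.0, seed=0):
    """Inverted-pendulum stance with delayed PID ankle torque."""
    I, m, g, h = 70.0, 76.0, 9.81, 0.9         # inertia, mass, CoM height
    rng = np.random.default_rng(seed)
    n, d = int(T / dt), int(delay / dt)
    theta = np.zeros(n)                        # body sway angle (rad)
    omega = np.zeros(n)                        # angular velocity
    integ = 0.0
    for k in range(1, n):
        j = max(k - 1 - d, 0)                  # delayed feedback index
        integ += theta[j] * dt
        Tc = -(Kp * theta[j] + Kd * omega[j] + Ki * integ)
        Td = noise * rng.standard_normal() / np.sqrt(dt)  # white torque
        acc = (m * g * h * theta[k - 1] + Tc + Td) / I
        omega[k] = omega[k - 1] + acc * dt
        theta[k] = theta[k - 1] + omega[k] * dt
    return theta

def sdf(x, dt, max_lag=5.0, step=20):
    """Mean-square displacement versus time interval (the SDF)."""
    lags = np.arange(1, int(max_lag / dt), step)
    return lags * dt, np.array([np.mean((x[l:] - x[:-l]) ** 2)
                                for l in lags])

lag, msd = sdf(simulate_sway(), dt=0.001)
print("SDF at %.3f s: %.2e   at %.3f s: %.2e"
      % (lag[0], msd[0], lag[-1], msd[-1]))
```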
Error rates and resource overheads of encoded three-qubit gates
NASA Astrophysics Data System (ADS)
Takagi, Ryuji; Yoder, Theodore J.; Chuang, Isaac L.
2017-10-01
A non-Clifford gate is required for universal quantum computation, and, typically, this is the most error-prone and resource-intensive logical operation on an error-correcting code. Small, single-qubit rotations are popular choices for this non-Clifford gate, but certain three-qubit gates, such as Toffoli or controlled-controlled-Z (ccz), are equivalent options that are also more suited for implementing some quantum algorithms, for instance, those with coherent classical subroutines. Here, we calculate error rates and resource overheads for implementing logical ccz with pieceable fault tolerance, a nontransversal method for implementing logical gates. We provide a comparison with a nonlocal magic-state scheme on a concatenated code and a local magic-state scheme on the surface code. We find the pieceable fault-tolerance scheme particularly advantaged over magic states on concatenated codes and in certain regimes over magic states on the surface code. Our results suggest that pieceable fault tolerance is a promising candidate for fault tolerance in a near-future quantum computer.
Zhang, Xiaoying; Liu, Songhuai; Yang, Degang; Du, Liangjie; Wang, Ziyuan
2016-08-01
[Purpose] The purpose of this study was to examine the immediate effects of therapeutic keyboard music playing on the finger function of subjects' hands through measurements of the joint position error test, surface electromyography, probe reaction time, and writing time. [Subjects and Methods] Ten subjects were divided randomly into experimental and control groups. The experimental group used therapeutic keyboard music playing and the control group used grip training. All subjects were assessed and evaluated by the joint position error test, surface electromyography, probe reaction time, and writing time. [Results] After accomplishing therapeutic keyboard music playing and grip training, surface electromyography of the two groups showed no significant change, but joint position error test, probe reaction time, and writing time obviously improved. [Conclusion] These results suggest that therapeutic keyboard music playing is an effective and novel treatment for improving joint position error test scores, probe reaction time, and writing time, and it should be promoted widely in clinics.
Local position control: A new concept for control of manipulators
NASA Technical Reports Server (NTRS)
Kelly, Frederick A.
1988-01-01
Resolved motion rate control is currently one of the most frequently used methods of manipulator control. It is used in the Space Shuttle remote manipulator system (RMS) and in prosthetic devices. Position control is predominantly used in locating the end-effector of an industrial manipulator along a path with prescribed timing. In industrial applications, resolved motion rate control is inappropriate since position error accumulates, because velocity is the control variable. In some applications this property is an advantage rather than a disadvantage: it may be more important for motion to end as soon as the input command is removed than to reduce the position error to zero. Local position control is a new concept for manipulator control which retains the important properties of resolved motion rate control but reduces the drift. Local position control can be considered a generalization of resolved position and resolved rate control; it places both control schemes on a common mathematical basis.
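A toy one-dimensional example makes the drift argument concrete: under pure rate control, a small bias in the commanded velocity integrates into an ever-growing position error, while a position loop rejects it. The proportional loop below is a generic stand-in, not Kelly's specific local position control law, and the bias value is invented.

```python
import numpy as np

def track(mode, T=10.0, dt=0.01, bias=0.02):
    t = np.arange(0.0, T, dt)
    x_ref = np.sin(t)                  # desired end-effector coordinate
    v_ref = np.cos(t)                  # its rate
    x = 0.0
    err = np.empty(t.size)
    for k in range(t.size):
        if mode == "rate":             # integrate the corrupted rate command
            v = v_ref[k] + bias
        else:                          # a position loop rejects the bias
            v = v_ref[k] + bias + 5.0 * (x_ref[k] - x)
        x += v * dt
        err[k] = x_ref[k] - x
    return err

for mode in ("rate", "position"):
    e = track(mode)
    print(f"{mode:8s} final error: {e[-1]:+.4f}")
```

The rate-controlled error grows linearly as -bias*t, while the position loop settles near bias/Kp.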
Khanesar, Mojtaba Ahmadieh; Kayacan, Erdal; Reyhanoglu, Mahmut; Kaynak, Okyay
2015-04-01
A novel type-2 fuzzy membership function (MF) in the form of an ellipse has recently been proposed in the literature; its parameters that represent uncertainty are decoupled from the parameters that determine its center and support. This property has enabled its proposers to make an analytical comparison of the noise rejection capabilities of type-1 fuzzy logic systems with their type-2 counterparts. In this paper, a sliding mode control theory-based learning algorithm is proposed for an interval type-2 fuzzy logic system that benefits from elliptic type-2 fuzzy MFs. The learning is based on the feedback error learning method; not only is the stability of the learning proved, but the stability of the overall system is also shown by adding an additional component to the control scheme to ensure robustness. In order to test the efficiency and efficacy of the proposed learning and control algorithm, the trajectory tracking problem of a magnetic rigid spacecraft is studied. The simulation results show that the proposed control algorithm gives better performance in terms of a smaller steady-state error and a faster transient response as compared to conventional control algorithms.
Albrecht, Bjoern; Brandeis, Daniel; Uebel, Henrik; Heinrich, Hartmut; Mueller, Ueli C.; Hasselhorn, Marcus; Steinhausen, Hans-Christoph; Rothenberger, Aribert; Banaschewski, Tobias
2008-01-01
Background: Attention deficit/hyperactivity disorder (ADHD) is a very common and highly heritable child psychiatric disorder associated with dysfunctions in fronto-striatal networks that control attention and response organisation. The aim of this study was to investigate whether features of action monitoring related to dopaminergic functions represent endophenotypes, i.e., brain functions on the pathway from genes and environmental risk factors to behaviour. Methods: Action monitoring and error processing, as indicated by behavioural and electrophysiological parameters during a flanker task, were examined in boys with ADHD combined type according to DSM-IV (N=68), their nonaffected siblings (N=18), and healthy controls with no known family history of ADHD (N=22). Results: Boys with ADHD displayed slower and more variable reaction times. Error negativity (Ne) was smaller in boys with ADHD compared to healthy controls, while nonaffected siblings displayed intermediate amplitudes, following a linear model predicted by genetic concordance. The three groups did not differ on error positivity (Pe). N2 amplitude enhancement due to conflict (incongruent flankers) was reduced in the ADHD group. Nonaffected siblings also displayed intermediate N2 enhancement. Conclusions: Converging evidence from behavioural and ERP findings suggests that action monitoring and initial error processing, both related to dopaminergically modulated functions of the anterior cingulate cortex, might be an endophenotype related to ADHD. PMID:18339358
Mesh refinement strategy for optimal control problems
NASA Astrophysics Data System (ADS)
Paiva, L. T.; Fontes, F. A. C. C.
2013-10-01
Direct methods are becoming the most used technique to solve nonlinear optimal control problems. Regular time meshes having equidistant spacing are frequently used. However, in some cases these meshes cannot cope accurately with nonlinear behavior. One way to improve the solution is to select a new mesh with a greater number of nodes. Another way involves adaptive mesh refinement, in which the mesh nodes have non-equidistant spacing, allowing non-uniform node collocation. In the method presented in this paper, a time-mesh refinement strategy based on the local error is developed. After computing a solution on a coarse mesh, the local error is evaluated, which gives information about the subintervals of the time domain where refinement is needed. This procedure is repeated until the local error reaches a user-specified threshold. The technique is applied to solve the car-like vehicle problem aiming at minimum consumption. The approach developed in this paper leads to results with greater accuracy and yet with lower overall computational time as compared to using a time mesh with equidistant spacing.
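The refinement loop itself is generic; the sketch below applies it to a scalar ODE with a forward-Euler transcription and a step-doubling local error estimate. The tolerance, test equation, and bisection rule are illustrative assumptions rather than the paper's collocation setup.

```python
import numpy as np

def refine_mesh(f, x0, t0, t1, tol=1e-3, max_pass=10):
    """Bisect the cells of a time mesh wherever the local error is large."""
    mesh = np.linspace(t0, t1, 11)                 # coarse initial mesh
    for _ in range(max_pass):
        x = x0
        err = np.zeros(mesh.size - 1)
        for i in range(mesh.size - 1):
            h = mesh[i + 1] - mesh[i]
            full = x + h * f(mesh[i], x)           # one Euler step
            half = x + 0.5 * h * f(mesh[i], x)     # two half steps
            half = half + 0.5 * h * f(mesh[i] + 0.5 * h, half)
            err[i] = abs(full - half)              # local error estimate
            x = half
        bad = err > tol
        if not bad.any():
            return mesh
        mids = 0.5 * (mesh[:-1] + mesh[1:])[bad]   # bisect offending cells
        mesh = np.sort(np.concatenate([mesh, mids]))
    return mesh

mesh = refine_mesh(lambda t, x: -5.0 * (x - np.cos(t)), 1.0, 0.0, 1.0)
print(mesh.size, "nodes; smallest step", np.diff(mesh).min())
```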
NASA Astrophysics Data System (ADS)
Peckerar, Martin C.; Marrian, Christie R.
1995-05-01
Standard matrix inversion methods of e-beam proximity correction are compared with a variety of pseudoinverse approaches based on gradient descent. It is shown that the gradient descent methods can be modified using 'regularizers' (terms added to the cost function minimized during gradient descent). This modification solves the 'negative dose' problem in a mathematically sound way. Different techniques are contrasted using a weighted error measure approach. It is shown that the regularization approach leads to the highest quality images. In some cases, ignoring negative doses yields results which are worse than employing an uncorrected dose file.
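A minimal sketch of the regularized approach, assuming a Gaussian blur matrix K as the proximity model: gradient descent on a Tikhonov-regularized least-squares cost, with a projection that clips negative doses to zero. The kernel width, regularizer weight lam, and step size are illustrative, not the paper's values.

```python
import numpy as np

n = 64
xs = np.arange(n)
K = np.exp(-0.5 * ((xs[:, None] - xs[None, :]) / 3.0) ** 2)
K /= K.sum(axis=1, keepdims=True)                 # normalized blur kernel

target = np.zeros(n)
target[20:30] = 1.0                               # desired exposure pattern
target[40:44] = 1.0

lam, step = 1e-3, 0.5
d = np.zeros(n)                                   # dose profile
for _ in range(2000):
    grad = K.T @ (K @ d - target) + lam * d       # regularized cost gradient
    d = np.clip(d - step * grad, 0.0, None)       # project out negative dose
print("residual:", np.linalg.norm(K @ d - target))
```

The clip step is what replaces the mathematically unsound practice of simply ignoring negative doses: the descent iterates stay feasible while still minimizing the regularized cost.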
NASA Astrophysics Data System (ADS)
Ikeura, Takuro; Nozaki, Takayuki; Shiota, Yoichi; Yamamoto, Tatsuya; Imamura, Hiroshi; Kubota, Hitoshi; Fukushima, Akio; Suzuki, Yoshishige; Yuasa, Shinji
2018-04-01
Using macro-spin modeling, we studied the reduction in the write error rate (WER) of voltage-induced dynamic magnetization switching by enhancing the effective thermal stability of the free layer using a voltage-controlled magnetic anisotropy change. Marked reductions in WER can be achieved by introducing reverse bias voltage pulses both before and after the write pulse. This procedure suppresses the thermal fluctuations of magnetization in the initial and final states. The proposed reverse bias method can offer a new way of improving the writing stability of voltage-driven spintronic devices.
A new measurement method of profile tolerance for the LAMOST focal plane
NASA Astrophysics Data System (ADS)
Zhou, Zengxiang; Jin, Yi; Zhai, Chao; Xing, Xiaozheng
2008-07-01
Several methods have been used to measure the profile tolerance of the LAMOST Focal Plane Plate. One of them uses a CMM (Coordinate Measuring Machine) to measure points on the small Focal Plane Plate and determine whether those points lie within the tolerance zone. This process has a shortcoming: the measured point positions on the Focal Plane Plate are not the actual installation locations of the optical fiber positioning system. In order to eliminate these principle errors, a measuring mandrel is inserted into the unit-holes, with the fit between mandrel and hole controlled to high precision. The center of a precise target ball placed on the measuring mandrel is then measured by CMM. Finally, a spherical surface is fitted to the measured ball-center points and the profile tolerance of the Focal Plane Plate is analyzed. This process corresponds more closely to the actual installation locations of the optical fiber positioning system, and the resulting profile tolerance judgment provides reference data for maintaining out-of-tolerance unit-holes on the Focal Plane Plate. However, when the measuring mandrel is inserted into a unit-hole, manufacturing errors in the mandrel and target ball, as well as assembly errors, influence the measurement. In this paper, an impact evaluation of the intermediate process with all these errors is carried out through experiments. The experimental results show that the target ball and measuring mandrel have little influence on the profile tolerance measurement, and that the method offers clear advantages over previously used measuring methods.
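The final sphere-fitting step can be posed as a linear least-squares problem. The sketch below fits a sphere to synthetic ball-center points; the 20 m radius is merely a plausible stand-in for the LAMOST focal surface, not a documented value.

```python
import numpy as np

def fit_sphere(pts):
    """pts: (N, 3) measured center points; returns (center, radius).

    Uses |x|^2 = 2 x.c + (r^2 - |c|^2), which is linear in c and
    k = r^2 - |c|^2, so one lstsq call suffices.
    """
    A = np.hstack([2.0 * pts, np.ones((pts.shape[0], 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

rng = np.random.default_rng(1)
true_c, true_r = np.array([0.0, 0.0, 20000.0]), 20000.0   # mm, assumed
u = rng.normal(size=(500, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
pts = true_c + true_r * u + rng.normal(scale=0.01, size=(500, 3))
c, r = fit_sphere(pts)
print("center error:", np.linalg.norm(c - true_c),
      "radius error:", abs(r - true_r))
```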
Lobach, Iryna; Mallick, Bani; Carroll, Raymond J
2011-01-01
Case-control studies are widely used to detect gene-environment interactions in the etiology of complex diseases. Many variables that are of interest to biomedical researchers are difficult to measure on an individual level, e.g. nutrient intake, cigarette smoking exposure, long-term toxic exposure. Measurement error causes bias in parameter estimates, thus masking key features of data and leading to loss of power and spurious/masked associations. We develop a Bayesian methodology for the analysis of case-control studies in which measurement error is present in an environmental covariate and the genetic variable has missing data. This approach offers several advantages. It allows prior information to enter the model to make estimation and inference more precise. The environmental covariates measured exactly are modeled completely nonparametrically. Further, information about the probability of disease can be incorporated in the estimation procedure to improve the quality of parameter estimates, which cannot be done in conventional case-control studies. A unique feature of the procedure under investigation is that the analysis is based on a pseudo-likelihood function; therefore, conventional Bayesian techniques may not be technically correct. We propose an approach using Markov Chain Monte Carlo sampling as well as a computationally simple method based on an asymptotic posterior distribution. Simulation experiments demonstrated that our method produces parameter estimates that are nearly unbiased even for small sample sizes. An application of our method is illustrated using a population-based case-control study of the association of calcium intake with the risk of colorectal adenoma development.
NASA Technical Reports Server (NTRS)
Knox, C. E.
1978-01-01
Navigation error data from these flights are presented in a format utilizing three independent axes - horizontal, vertical, and time. The navigation position estimate error term and the autopilot flight technical error term are combined to form the total navigation error in each axis. This method of error presentation allows comparisons to be made between other 2-, 3-, or 4-D navigation systems and allows experimental or theoretical determination of the navigation error terms. Position estimate error data are presented with the navigation system position estimate based on dual DME radio updates that are smoothed with inertial velocities, dual DME radio updates that are smoothed with true airspeed and magnetic heading, and inertial velocity updates only. The normal mode of navigation with dual DME updates that are smoothed with inertial velocities resulted in a mean error of 390 m with a standard deviation of 150 m in the horizontal axis; a mean error of 1.5 m low with a standard deviation of less than 11 m in the vertical axis; and a mean error as low as 252 m with a standard deviation of 123 m in the time axis.
Comparative evaluation of user interfaces for robot-assisted laser phonomicrosurgery.
Dagnino, Giulio; Mattos, Leonardo S; Becattini, Gabriele; Dellepiane, Massimo; Caldwell, Darwin G
2011-01-01
This research investigates the impact of three different control devices and two visualization methods on the precision, safety and ergonomics of a new medical robotic system prototype for assistive laser phonomicrosurgery. This system allows the user to remotely control the surgical laser beam using either a flight simulator type joystick, a joypad, or a pen display system in order to improve the traditional surgical setup composed by a mechanical micromanipulator coupled with a surgical microscope. The experimental setup and protocol followed to obtain quantitative performance data from the control devices tested are fully described here. This includes sets of path following evaluation experiments conducted with ten subjects with different skills, for a total of 700 trials. The data analysis method and experimental results are also presented, demonstrating an average 45% error reduction when using the joypad and up to 60% error reduction when using the pen display system versus the standard phonomicrosurgery setup. These results demonstrate the new system can provide important improvements in terms of surgical precision, ergonomics and safety. In addition, the evaluation method presented here is shown to support an objective selection of control devices for this application.
Blue, Elizabeth Marchani; Sun, Lei; Tintle, Nathan L.; Wijsman, Ellen M.
2014-01-01
When analyzing family data, we dream of perfectly informative data, even whole genome sequences (WGS) for all family members. Reality intervenes, and we find next-generation sequence (NGS) data have error, and are often too expensive or impossible to collect on everyone. Genetic Analysis Workshop 18 groups “Quality Control” and “Dropping WGS through families using GWAS framework” focused on finding, correcting, and using errors within the available sequence and family data, developing methods to infer and analyze missing sequence data among relatives, and testing for linkage and association with simulated blood pressure. We found that single nucleotide polymorphisms, NGS, and imputed data are generally concordant, but that errors are particularly likely at rare variants, homozygous genotypes, within regions with repeated sequences or structural variants, and within sequence data imputed from unrelateds. Admixture complicated identification of cryptic relatedness, but information from Mendelian transmission improved error detection and provided an estimate of the de novo mutation rate. Both genotype and pedigree errors had an adverse effect on subsequent analyses. Computationally fast rules-based imputation was accurate, but could not cover as many loci or subjects as more computationally demanding probability-based methods. Incorporating population-level data into pedigree-based imputation methods improved results. Observed data outperformed imputed data in association testing, but imputed data were also useful. We discuss the strengths and weaknesses of existing methods, and suggest possible future directions. Topics include improving communication between those performing data collection and analysis, establishing thresholds for and improving imputation quality, and incorporating error into imputation and analytical models. PMID:25112184
Why Does a Method That Fails Continue To Be Used: The Answer
Templeton, Alan R.
2009-01-01
It has been claimed that hundreds of researchers use nested clade phylogeographic analysis (NCPA) based on what the method promises rather than requiring objective validation of the method. The supposed failure of NCPA is based upon the argument that validating it by using positive controls ignored type I error, and that computer simulations have shown a high type I error. The first argument is factually incorrect: the previously published validation analysis fully accounted for both type I and type II errors. The simulations that indicate a 75% type I error rate have serious flaws and only evaluate outdated versions of NCPA. These outdated type I error rates fall precipitously when the 2003 version of single locus NCPA is used or when the 2002 multi-locus version of NCPA is used. It is shown that the treewise type I errors in single-locus NCPA can be corrected to the desired nominal level by a simple statistical procedure, and that multilocus NCPA reconstructs a simulated scenario used to discredit NCPA with 100% accuracy. Hence, NCPA is not a failed method at all, but rather has been validated both by actual data and by simulated data in a manner that satisfies the published criteria given by its critics. The critics have come to different conclusions because they have focused on the pre-2002 versions of NCPA and have failed to take into account the extensive developments in NCPA since 2002. Hence, researchers can choose to use NCPA based upon objective critical validation that shows that NCPA delivers what it promises. PMID:19335340
Assessment of the Derivative-Moment Transformation method for unsteady-load estimation
NASA Astrophysics Data System (ADS)
Mohebbian, Ali; Rival, David
2011-11-01
It is often difficult, if not impossible, to measure the aerodynamic or hydrodynamic forces on a moving body. For this reason, a classical control-volume technique is typically applied to extract the unsteady forces instead. However, measuring the acceleration term within the volume of interest using PIV can be limited by optical access, reflections, and shadows. Therefore, in this study an alternative approach, termed the Derivative-Moment Transformation (DMT) method, is introduced and tested on a synthetic data set produced using numerical simulations. The test case involves the unsteady loading of a flat plate in a two-dimensional, laminar periodic gust. The results suggest that the DMT method can accurately predict the acceleration term so long as appropriate spatial and temporal resolutions are maintained. The major deficiency was found to be the determination of pressure in the wake. The effect of control-volume size was investigated, suggesting that smaller domains work best by minimizing the error associated with the pressure field. When the control-volume size is increased, the number of calculations necessary for the pressure-gradient integration increases, in turn substantially increasing the error propagation.
A data-driven modeling approach to stochastic computation for low-energy biomedical devices.
Lee, Kyong Ho; Jang, Kuk Jin; Shoeb, Ali; Verma, Naveen
2011-01-01
Low-power devices that can detect clinically relevant correlations in physiologically-complex patient signals can enable systems capable of closed-loop response (e.g., controlled actuation of therapeutic stimulators, continuous recording of disease states, etc.). In ultra-low-power platforms, however, hardware error sources are becoming increasingly limiting. In this paper, we present how data-driven methods, which allow us to accurately model physiological signals, also allow us to effectively model and overcome prominent hardware error sources with nearly no additional overhead. Two applications, EEG-based seizure detection and ECG-based arrhythmia-beat classification, are synthesized to a logic-gate implementation, and two prominent error sources are introduced: (1) SRAM bit-cell errors and (2) logic-gate switching errors ('stuck-at' faults). Using patient data from the CHB-MIT and MIT-BIH databases, performance similar to error-free hardware is achieved even for very high fault rates (up to 0.5 for SRAMs and 7 × 10^-2 for logic) that cause computational bit error rates as high as 50%.
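The fault-injection methodology can be illustrated with a toy classifier: quantize trained weights into int8 "SRAM" words, flip the stored bits at rate p_flip, and measure the accuracy that survives. Everything here (data, model, fault rates) is synthetic and illustrative; it mimics the experimental procedure, not the paper's EEG/ECG systems.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 16))
w_true = rng.normal(size=16)
y = (X @ w_true > 0).astype(int)

w, lr = np.zeros(16), 0.1                          # train a logistic model
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w += lr * X.T @ (y - p) / len(y)

def accuracy_with_faults(w, p_flip):
    q = np.clip(np.round(w * 16), -128, 127).astype(np.int8)  # int8 "SRAM"
    bits = np.unpackbits(q.view(np.uint8))
    flips = rng.random(bits.size) < p_flip          # random bit-cell faults
    q_bad = np.packbits(bits ^ flips).view(np.int8)
    yhat = (X @ (q_bad.astype(float) / 16) > 0).astype(int)
    return (yhat == y).mean()

for p_flip in (0.0, 1e-3, 1e-2, 0.1):
    print(p_flip, accuracy_with_faults(w, p_flip))
```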
Error mechanism analyses of an ultra-precision stage for high speed scan motion over a large stroke
NASA Astrophysics Data System (ADS)
Wang, Shaokai; Tan, Jiubin; Cui, Jiwen
2015-02-01
The Reticle Stage (RS) is designed to perform scan motion at high speed with nanometer-scale accuracy over a large stroke. Compared with the allowable scan accuracy of a few nanometers, errors caused by any internal or external disturbance are critical and must not be ignored. In this paper, the RS is first introduced in terms of mechanical structure, forms of motion, and control method. On that basis, the mechanisms by which disturbances transfer to the final servo-related error in the scan direction are analyzed, including feedforward error, coupling between the large stroke stage (LS) and the short stroke stage (SS), and movement of the measurement reference. In particular, different forms of coupling between SS and LS are discussed in detail. After the theoretical analysis, the contributions of these disturbances to the final error are simulated numerically. The residual positioning error caused by feedforward error in the acceleration process is about 2 nm after settling time, that caused by the coupling between SS and LS about 2.19 nm, and that caused by movement of the measurement reference about 0.6 nm.
On parameters identification of computational models of vibrations during quiet standing of humans
NASA Astrophysics Data System (ADS)
Barauskas, R.; Krušinskienė, R.
2007-12-01
Vibration of the center of pressure (COP) of the human body on the base of support during quiet standing is a widely used clinical measurement, which provides useful information about the physical and health condition of an individual. In this work, vibrations of the COP of a human body in the forward-backward direction during still standing are generated using a controlled inverted pendulum (CIP) model with a single degree of freedom (dof), supplied with a proportional, integral and differential (PID) controller that represents the behavior of the central nervous system, and excited by a cumulative disturbance vibration generated within the body due to breathing or other physical processes. The identification of the model and disturbance parameters is an important stage in creating a close-to-reality computational model able to evaluate features of the disturbance. The aim of this study is to present a CIP model parameter identification approach based on the information captured by the time series of the COP signal. The identification procedure is based on the minimization of an error function. The error function is formulated in terms of the time laws of computed and experimentally measured COP vibrations. As an alternative, the error function is formulated in terms of the stabilogram diffusion function (SDF). The minimization of the error functions is carried out by employing methods based on sensitivity functions of the error with respect to model and excitation parameters. The sensitivity functions are obtained using variational techniques. The inverse dynamics approach has been employed in order to establish the properties of the disturbance time laws ensuring satisfactory coincidence of measured and computed COP vibration laws. The main difficulty of the investigated problem is encountered during the model validation stage: generally, neither the PID controller parameter set nor the disturbance time law is known in advance. In this work, an error function formulated in terms of the time derivative of the disturbance torque has been proposed in order to obtain the PID controller parameters, as well as the reference time law of the disturbance. The disturbance torque is calculated from experimental data using the inverse dynamics approach. Experiments presented in this study revealed that the disturbance torque vibrations and PID controller parameters identified by the method may be qualified as feasible in humans. The presented approach may easily be extended to structural models with any number of dof or higher structural complexity.
Multiple Hypothesis Testing for Experimental Gingivitis Based on Wilcoxon Signed Rank Statistics
Preisser, John S.; Sen, Pranab K.; Offenbacher, Steven
2011-01-01
Dental research often involves repeated multivariate outcomes on a small number of subjects for which there is interest in identifying outcomes that exhibit change in their levels over time as well as to characterize the nature of that change. In particular, periodontal research often involves the analysis of molecular mediators of inflammation for which multivariate parametric methods are highly sensitive to outliers and deviations from Gaussian assumptions. In such settings, nonparametric methods may be favored over parametric ones. Additionally, there is a need for statistical methods that control an overall error rate for multiple hypothesis testing. We review univariate and multivariate nonparametric hypothesis tests and apply them to longitudinal data to assess changes over time in 31 biomarkers measured from the gingival crevicular fluid in 22 subjects whereby gingivitis was induced by temporarily withholding tooth brushing. To identify biomarkers that can be induced to change, multivariate Wilcoxon signed rank tests for a set of four summary measures based upon area under the curve are applied for each biomarker and compared to their univariate counterparts. Multiple hypothesis testing methods with choice of control of the false discovery rate or strong control of the family-wise error rate are examined. PMID:21984957
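The two families of corrections mentioned — strong family-wise error control and false discovery rate control — are easy to state in code. Below are the standard Holm (FWER) and Benjamini-Hochberg (FDR) procedures applied to a synthetic vector of 31 p-values; these are the generic textbook procedures, not necessarily the exact variants used in the paper.

```python
import numpy as np

def holm(p, alpha=0.05):
    """Step-down Holm procedure: strong family-wise error control."""
    order = np.argsort(p)
    m = len(p)
    reject = np.zeros(m, dtype=bool)
    for rank, i in enumerate(order):
        if p[i] > alpha / (m - rank):   # thresholds alpha/m, alpha/(m-1), ...
            break
        reject[i] = True
    return reject

def benjamini_hochberg(p, q=0.05):
    """Step-up Benjamini-Hochberg procedure: FDR control."""
    order = np.argsort(p)
    m = len(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = below.nonzero()[0].max()    # largest index passing its threshold
        reject[order[:k + 1]] = True
    return reject

rng = np.random.default_rng(2)
p = np.r_[rng.uniform(size=25), 1e-4 * np.ones(6)]   # 31 "biomarkers"
print("Holm rejects:", holm(p).sum(),
      " BH rejects:", benjamini_hochberg(p).sum())
```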
NASA Technical Reports Server (NTRS)
Lukyanov, S. S.
1983-01-01
This paper investigates the use of solar radiation pressure for spacecraft motion control in the vicinity of a collinear libration point of the planar restricted three-body problem. Control is realized by changing the solar sail area while keeping its orientation fixed. The influence of trajectory errors and control execution errors is taken into account. A method is developed for estimating the solar sail size necessary to keep the spacecraft in the vicinity of the collinear libration point for a given time with a given probability. The main control parameters are calculated for several examples involving libration points of the Sun-Earth and Earth-Moon systems.
Variable Structure PID Control to Prevent Integrator Windup
NASA Technical Reports Server (NTRS)
Hall, C. E.; Hodel, A. S.; Hung, J. Y.
1999-01-01
PID controllers are frequently used to control systems requiring zero steady-state error while maintaining requirements for settling time and robustness (gain/phase margins). PID controllers suffer significant loss of performance due to short-term integrator windup when used in systems with actuator saturation. We examine several existing and proposed methods for the prevention of integrator windup in both continuous and discrete time implementations.
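One widely used remedy in this family is back-calculation, in which the integrator is bled off whenever the actuator saturates. The sketch below compares a saturated step response with and without back-calculation on a first-order plant, using a PI loop for simplicity; the gains, plant, and anti-windup coefficient Kt are illustrative assumptions, not values from the paper.

```python
import numpy as np

def step_response(Kt, Kp=4.0, Ki=2.0, umax=1.0, T=20.0, dt=0.01):
    """PI control of a first-order plant with actuator saturation.
    Kt is the back-calculation gain; Kt=0 disables anti-windup."""
    n = int(T / dt)
    y, integ = 0.0, 0.0
    out = np.empty(n)
    for k in range(n):
        e = 1.0 - y                              # unit step setpoint
        u_raw = Kp * e + Ki * integ
        u = np.clip(u_raw, -umax, umax)          # actuator saturation
        # back-calculation: bleed the integrator while saturated
        integ += (e + Kt * (u - u_raw) / Ki) * dt
        y += (-y + 2.0 * u) * dt / 0.5           # plant: gain 2, tau 0.5 s
        out[k] = y
    return out

for Kt in (0.0, 5.0):
    y = step_response(Kt)
    print(f"Kt={Kt}: overshoot {y.max() - 1.0:.3f}")
```

With Kt=0 the integrator charges throughout the saturated transient and produces a large overshoot; a nonzero Kt keeps it near the value the saturated actuator can actually deliver.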
Dual-loop model of the human controller
NASA Technical Reports Server (NTRS)
Hess, R. A.
1978-01-01
A dual-loop model of the human controller in single-axis compensatory tracking tasks is introduced. This model possesses an inner-loop closure that involves feeding back that portion of controlled element output rate that is due to control activity. A novel feature of the model is the explicit appearance of the human's internal representation of the manipulator-controlled element dynamics in the inner loop. The sensor inputs to the human controller are assumed to be system error and control force. The former can be sensed via visual, aural, or tactile displays, whereas the latter is assumed to be sensed in kinesthetic fashion. A set of general adaptive characteristics for the model is hypothesized, including a method for selecting simplified internal models of the manipulator-controlled element dynamics. It is demonstrated that the model can produce controller describing functions that closely approximate those measured in four laboratory tracking tasks in which the controlled element dynamics vary considerably in terms of ease of control. An empirically derived expression for the normalized injected error remnant spectrum is introduced.
Analysis of tractable distortion metrics for EEG compression applications.
Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando
2012-07-01
Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as target parameter in EEG compression allows both clinicians and scientists to infer whether coding error is clinically acceptable or not at no cost for the compression ratio.
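Both criteria are one-liners, which is part of the argument: RMSE is expressed directly in signal units (microvolts) and can be compared against clinical noise limits, whereas PRD is relative. Definitions of PRD vary slightly across the compression literature; the mean-removed variant is used here, and the surrogate EEG is synthetic.

```python
import numpy as np

def prd(x, x_rec):
    """Percentage root-mean-square difference (mean-removed variant)."""
    return 100.0 * np.linalg.norm(x - x_rec) / np.linalg.norm(x - x.mean())

def rmse(x, x_rec):
    """Root-mean-square error, in the signal's own units."""
    return np.sqrt(np.mean((x - x_rec) ** 2))

rng = np.random.default_rng(0)
eeg = np.cumsum(rng.normal(size=2048)) * 0.5          # surrogate EEG, in uV
rec = eeg + rng.normal(scale=2.0, size=eeg.size)      # coder noise, ~2 uV RMS
print(f"PRD = {prd(eeg, rec):.2f} %   RMSE = {rmse(eeg, rec):.2f} uV")
```

The same reconstruction error always yields the same RMSE in microvolts, while its PRD depends on the amplitude of the particular record, which is why PRD alone cannot certify compliance with clinical noise guidelines.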
Stable modeling based control methods using a new RBF network.
Beyhan, Selami; Alci, Musa
2010-10-01
This paper presents a novel model with radial basis functions (RBFs), which is applied successively for online stable identification and control of nonlinear discrete-time systems. First, the proposed model is utilized for direct inverse modeling of the plant to generate the control input, where it is assumed that inverse plant dynamics exist. Second, it is employed for system identification to generate a sliding-mode control input. Finally, the network is employed to tune PID (proportional + integral + derivative) controller parameters automatically. The adaptive learning rate (ALR), which is employed in the gradient descent (GD) method, provides global convergence of the modeling errors. Using the Lyapunov stability approach, the boundedness of the tracking errors and the system parameters is shown both theoretically and in real time. To show the superiority of the new model with RBFs, its tracking results are compared with the results of a conventional sigmoidal multi-layer perceptron (MLP) neural network and the new model with sigmoid activation functions. To demonstrate the real-time capability of the new model, the proposed network is employed for online identification and control of a cascaded parallel two-tank liquid-level system. Even in the presence of large disturbances, the proposed model with RBFs generates a suitable control input to track the reference signal better than the other methods, in both simulations and real time.
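As a flavor of the modeling core, here is a generic online RBF identifier with a normalized (adaptive) learning rate of the NLMS type; it is a stand-in for, not a reproduction of, the paper's specific ALR and its Lyapunov-based stability argument.

```python
import numpy as np

class RBFModel:
    def __init__(self, centers, width=1.0):
        self.c = centers                       # (m, d) fixed RBF centers
        self.w = np.zeros(len(centers))        # output-layer weights
        self.s = width

    def phi(self, x):
        """Gaussian basis activations for one input sample."""
        return np.exp(-np.sum((x - self.c) ** 2, axis=1) / (2 * self.s ** 2))

    def update(self, x, y):
        """One online GD step with a normalized learning rate."""
        h = self.phi(x)
        err = y - h @ self.w
        eta = 1.0 / (1e-3 + h @ h)             # adaptive rate keeps step safe
        self.w += eta * err * h
        return err

# identify y = sin(x0) + 0.5*x1 from a stream of random samples
rng = np.random.default_rng(0)
model = RBFModel(centers=rng.uniform(-2, 2, size=(40, 2)))
for _ in range(5000):
    x = rng.uniform(-2, 2, size=2)
    e = model.update(x, np.sin(x[0]) + 0.5 * x[1])
print("final instantaneous error:", abs(e))
```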
Inverse solutions for electrical impedance tomography based on conjugate gradients methods
NASA Astrophysics Data System (ADS)
Wang, M.
2002-01-01
A multistep inverse solution for two-dimensional electric field distribution is developed to deal with the nonlinear inverse problem of electric field distribution in relation to its boundary condition and the problem of divergence due to errors introduced by the ill-conditioned sensitivity matrix and the noise produced by electrode modelling and instruments. This solution is based on a normalized linear approximation method where the change in mutual impedance is derived from the sensitivity theorem and a method of error vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.
Power/Sample Size Calculations for Assessing Correlates of Risk in Clinical Efficacy Trials
Gilbert, Peter B.; Janes, Holly E.; Huang, Yunda
2016-01-01
In a randomized controlled clinical trial that assesses treatment efficacy, a common objective is to assess the association of a measured biomarker response endpoint with the primary study endpoint in the active treatment group, using a case-cohort, case-control, or two-phase sampling design. Methods for power and sample size calculations for such biomarker association analyses typically do not account for the level of treatment efficacy, precluding interpretation of the biomarker association results in terms of biomarker effect modification of treatment efficacy, with detriment that the power calculations may tacitly and inadvertently assume that the treatment harms some study participants. We develop power and sample size methods accounting for this issue, and the methods also account for inter-individual variability of the biomarker that is not biologically relevant (e.g., due to technical measurement error). We focus on a binary study endpoint and on a biomarker subject to measurement error that is normally distributed or categorical with two or three levels. We illustrate the methods with preventive HIV vaccine efficacy trials, and include an R package implementing the methods. PMID:27037797
Multi-template tensor-based morphometry: Application to analysis of Alzheimer's disease
Koikkalainen, Juha; Lötjönen, Jyrki; Thurfjell, Lennart; Rueckert, Daniel; Waldemar, Gunhild; Soininen, Hilkka
2012-01-01
In this paper methods for using multiple templates in tensor-based morphometry (TBM) are presented and compared to the conventional single-template approach. TBM analysis requires non-rigid registrations, which are often subject to registration errors. When using multiple templates and, therefore, multiple registrations, it can be assumed that the registration errors are averaged and eventually compensated. Four different methods are proposed for multi-template TBM. The methods were evaluated using magnetic resonance (MR) images of healthy controls, patients with stable or progressive mild cognitive impairment (MCI), and patients with Alzheimer's disease (AD) from the ADNI database (N=772). The performance of TBM features in classifying images was evaluated both quantitatively and qualitatively. Classification results show that the multi-template methods are statistically significantly better than the single-template method. The overall classification accuracy was 86.0% for the classification of control and AD subjects, and 72.1% for the classification of stable and progressive MCI subjects. The statistical group-level difference maps produced using multi-template TBM were smoother, formed larger continuous regions, and had larger t-values than the maps obtained with single-template TBM. PMID:21419228
Efficient Variational Quantum Simulator Incorporating Active Error Minimization
NASA Astrophysics Data System (ADS)
Li, Ying; Benjamin, Simon C.
2017-04-01
One of the key applications for quantum computers will be the simulation of other quantum systems that arise in chemistry, materials science, etc., in order to accelerate the process of discovery. It is important to ask the following question: Can this simulation be achieved using near-future quantum processors, of modest size and under imperfect control, or must it await the more distant era of large-scale fault-tolerant quantum computing? Here, we propose a variational method involving closely integrated classical and quantum coprocessors. We presume that all operations in the quantum coprocessor are prone to error. The impact of such errors is minimized by boosting them artificially and then extrapolating to the zero-error case. In comparison to a more conventional optimized Trotterization technique, we find that our protocol is efficient and appears to be fundamentally more robust against error accumulation.
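The boost-and-extrapolate idea can be illustrated classically: estimate an observable at several artificially amplified noise levels and extrapolate the fit back to zero noise (Richardson extrapolation). In the sketch below the "quantum" expectation value is mocked by a function whose systematic error grows linearly with the boost factor; the error model and numbers are invented for illustration.

```python
import numpy as np

def noisy_expectation(scale, rng, true_value=0.73, shots=200000):
    """Mock expectation value whose systematic error grows with 'scale'."""
    biased = true_value * (1.0 - 0.15 * scale)     # boosted systematic error
    return biased + rng.normal(scale=1.0 / np.sqrt(shots))  # shot noise

rng = np.random.default_rng(1)
scales = np.array([1.0, 2.0, 3.0])                 # artificially boosted levels
vals = np.array([noisy_expectation(s, rng) for s in scales])
coeff = np.polyfit(scales, vals, 1)                # linear model in the scale
print("raw: %.4f   extrapolated: %.4f   true: 0.7300"
      % (vals[0], np.polyval(coeff, 0.0)))
```

The raw estimate at scale 1 retains the full systematic error, while the extrapolation to scale 0 recovers the true value up to shot noise, which is the essence of the protocol's robustness to error accumulation.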
Ing, Alex; Schwarzbauer, Christian
2014-01-01
Functional connectivity has become an increasingly important area of research in recent years. At a typical spatial resolution, approximately 300 million connections link each voxel in the brain with every other. This pattern of connectivity is known as the functional connectome. Connectivity is often compared between experimental groups and conditions. Standard methods used to control the type 1 error rate are likely to be insensitive when comparisons are carried out across the whole connectome, due to the huge number of statistical tests involved. To address this problem, two new cluster based methods – the cluster size statistic (CSS) and cluster mass statistic (CMS) – are introduced to control the family wise error rate across all connectivity values. These methods operate within a statistical framework similar to the cluster based methods used in conventional task based fMRI. Both methods are data driven, permutation based and require minimal statistical assumptions. Here, the performance of each procedure is evaluated in a receiver operator characteristic (ROC) analysis, utilising a simulated dataset. The relative sensitivity of each method is also tested on real data: BOLD (blood oxygen level dependent) fMRI scans were carried out on twelve subjects under normal conditions and during the hypercapnic state (induced through the inhalation of 6% CO2 in 21% O2 and 73% N2). Both CSS and CMS detected significant changes in connectivity between normal and hypercapnic states. A family wise error correction carried out at the individual connection level exhibited no significant changes in connectivity. PMID:24906136
A study of attitude control concepts for precision-pointing non-rigid spacecraft
NASA Technical Reports Server (NTRS)
Likins, P. W.
1975-01-01
Attitude control concepts for use onboard structurally nonrigid spacecraft that must be pointed with great precision are examined. The task of determining the eigenproperties of a system of linear time-invariant equations (in terms of hybrid coordinates) representing the attitude motion of a flexible spacecraft is discussed. Literal characteristics are developed for the associated eigenvalues and eigenvectors of the system. A method is presented for determining the poles and zeros of the transfer function describing the attitude dynamics of a flexible spacecraft characterized by hybrid coordinate equations. Alterations are made to linear regulator and observer theory to accommodate modeling errors. The results show that a model error vector, which evolves from an error system, can be added to a reduced system model, estimated by an observer, and used by the control law to render the system less sensitive to uncertain magnitudes and phase relations of truncated modes and external disturbance effects. A hybrid coordinate formulation using assumed mode shapes, rather than the usual finite element approach, is provided.
Quantifying errors in trace species transport modeling.
Prather, Michael J; Zhu, Xin; Strahan, Susan E; Steenrod, Stephen D; Rodriguez, Jose M
2008-12-16
One expectation when computationally solving an Earth system model is that a correct answer exists, that with adequate physical approximations and numerical methods our solutions will converge to that single answer. With such hubris, we performed a controlled numerical test of the atmospheric transport of CO2 using 2 models known for accurate transport of trace species. Resulting differences were unexpectedly large, indicating that in some cases, scientific conclusions may err because of lack of knowledge of the numerical errors in tracer transport models. By doubling the resolution, thereby reducing numerical error, both models show some convergence to the same answer. Now, under realistic conditions, we identify a practical approach for finding the correct answer and thus quantifying the advection error.
Rakkiyappan, R; Sakthivel, N; Cao, Jinde
2015-06-01
This study examines the exponential synchronization of complex dynamical networks with control packet loss and additive time-varying delays. Additionally, a sampled-data controller with time-varying sampling period is considered and is assumed to switch between m different values in a random way with given probability. Then, a novel Lyapunov-Krasovskii functional (LKF) with triple integral terms is constructed and, by using Jensen's inequality and the reciprocally convex approach, sufficient conditions under which the dynamical network is exponentially mean-square stable are derived. When applying Jensen's inequality to partition double integral terms in the derivation of linear matrix inequality (LMI) conditions, a new kind of linear combination of positive functions weighted by the inverses of squared convex parameters appears. In order to handle such a combination, an effective method is introduced by extending the lower bound lemma. To design the sampled-data controller, the synchronization error system is represented as a switched system. Based on the derived LMI conditions and the average dwell-time method, sufficient conditions for the synchronization of the switched error system are derived in terms of LMIs. Finally, a numerical example is employed to show the effectiveness of the proposed methods.
NASA Astrophysics Data System (ADS)
Zhang, Wenzeng; Chen, Nian; Wang, Bin; Cao, Yipeng
2005-01-01
The rocket engine is a core component of aerospace transportation and thrust systems, and its research and development are very important for national defense, aviation, and aerospace. A novel vision sensor is developed for error detection in arc length control and seam tracking during precise pulsed TIG welding of the extension of a rocket engine jet tube. The vision sensor has several advantages, including high imaging quality, compactness, and multiple functions. The optical, mechanical, and circuit designs of the vision sensor are described in detail. Using the mirror image of the tungsten electrode in the weld pool, a novel method is proposed to detect the arc length and the seam tracking error of the tungsten electrode relative to the center line of the joint seam from a single weld image. A calculation model for the method is derived from the geometric relations among the tungsten electrode, the weld pool, the mirror image of the electrode in the weld pool, and the joint seam. New procedures for detecting the arc length and the seam tracking error are given. Based on an analysis of the experimental results, a systematic-error correction method based on a linear function is developed to improve the detection precision of the arc length and the seam tracking error. Experimental results show that the final precision of the system reaches 0.1 mm in detecting the arc length and the seam tracking error of the tungsten electrode relative to the center line of the joint seam.
Gradient ascent pulse engineering approach to CNOT gates in donor electron spin quantum computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, D.-B.; Goan, H.-S.
2008-11-07
In this paper, we demonstrate how gradient ascent pulse engineering (GRAPE) optimal control methods can be implemented on donor electron spin qubits in semiconductors with an architecture complementary to the original Kane's proposal. We focus on the high fidelity controlled-NOT (CNOT) gate and we explicitly find the digitized control sequences for a controlled-NOT gate by optimizing its fidelity using the effective, reduced donor electron spin Hamiltonian with external controls over the hyperfine A and exchange J interactions. We then simulate the CNOT-gate sequence with the full spin Hamiltonian and find that it has an error of 10^-6, which is below the error threshold of 10^-4 required for fault-tolerant quantum computation. Also, the CNOT gate operation time of 100 ns is 3 times faster than the 297 ns of the proposed global control scheme.
NASA Technical Reports Server (NTRS)
Vandervelde, W. E.; Carignan, C. R.
1982-01-01
The degree of controllability of a large space structure is found by a four step procedure: (1) finding the minimum control energy for driving the system from a given initial state to the origin in the prescribed time; (2) finding the region of initial state which can be driven to the origin with constrained control energy and time using optimal control strategy; (3) scaling the axes so that a unit displacement in every direction is equally important to control; and (4) finding the linear measurement of the weighted "volume" of the ellipsoid in the equicontrol space. For observability, the error covariance must be reduced toward zero using measurements optimally, and the criterion must be standardized by the magnitude of tolerable errors. The results obtained using these methods are applied to the vibration modes of a free-free beam.
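Step (1) has a standard closed form through the finite-horizon controllability Gramian. The sketch below computes the minimum energy needed to drive an initial state to the origin for a single lightly damped flexible mode, using scipy for the matrix exponential; the system matrices are illustrative, and a simple midpoint quadrature stands in for more careful Gramian integration.

```python
import numpy as np
from scipy.linalg import expm

def min_energy_to_origin(A, B, x0, T, n=2000):
    """Minimum energy to drive x0 to the origin in time T:
    E = z' W(T)^{-1} z with z = e^{AT} x0 and
    W(T) = int_0^T e^{As} B B' e^{A's} ds (controllability Gramian)."""
    dt = T / n
    W = np.zeros((A.shape[0], A.shape[0]))
    for k in range(n):                        # midpoint quadrature
        E = expm(A * (k + 0.5) * dt)
        W += E @ B @ B.T @ E.T * dt
    z = expm(A * T) @ x0
    return z @ np.linalg.solve(W, z)

# one lightly damped flexible mode with a force actuator
A = np.array([[0.0, 1.0], [-4.0, -0.02]])
B = np.array([[0.0], [1.0]])
print("min energy:", min_energy_to_origin(A, B, np.array([1.0, 0.0]), T=5.0))
```

Steps (3) and (4) then amount to rescaling the state axes and summarizing the volume of the resulting energy ellipsoid, whose principal axes follow from the eigendecomposition of W.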
Wang, Fei-Yue; Jin, Ning; Liu, Derong; Wei, Qinglai
2011-01-01
In this paper, we study the finite-horizon optimal control problem for discrete-time nonlinear systems using the adaptive dynamic programming (ADP) approach. The idea is to use an iterative ADP algorithm to obtain the optimal control law which makes the performance index function close to the greatest lower bound of all performance indices within an ε-error bound. The optimal number of control steps can also be obtained by the proposed ADP algorithms. A convergence analysis of the proposed ADP algorithms in terms of performance index function and control policy is made. In order to facilitate the implementation of the iterative ADP algorithms, neural networks are used for approximating the performance index function, computing the optimal control policy, and modeling the nonlinear system. Finally, two simulation examples are employed to illustrate the applicability of the proposed method.
Panunzio, Michele F.; Antoniciello, Antonietta; Pisano, Alessandra; Rosa, Giovanna
2007-01-01
With respect to food safety, many works have studied the effectiveness of the self-monitoring plans of food companies, designed using the Hazard Analysis and Critical Control Point (HACCP) method. On the other hand, in-depth research has not been conducted concerning the adherence of these plans to HACCP standards. In our research, we evaluated 116 self-monitoring plans adopted by food companies located in the territory of the Local Health Authority (LHA) of Foggia, Italy. The general errors (terminology, philosophy and redundancy) and the specific errors (transversal plan, critical limits, hazard specificity, and lack of procedures) were standardized. Concerning the general errors, terminological errors pertain to half the plans examined, 47% include superfluous elements and 60% have repetitive subjects. With regard to the specific errors, 77% of the plans examined contained specific errors. The evaluation has pointed out the lack of comprehension of the HACCP system by the food companies and has allowed the Servizio di Igiene degli Alimenti e della Nutrizione (Food and Nutrition Health Service), in its capacity as a control body, to intervene with the companies in order to improve the design of HACCP plans. PMID:17911662
NASA Astrophysics Data System (ADS)
Jin, N.; Yang, F.; Shang, S. Y.; Tao, T.; Liu, J. S.
2016-08-01
Given the limitations of the low voltage ride through (LVRT) technology of traditional photovoltaic inverters, this paper proposes an LVRT control method based on model current predictive control (MCPC). This method can effectively improve the photovoltaic inverter's output characteristics and response speed. In the MCPC method designed for the photovoltaic grid-connected inverter, the sum of the absolute values of the errors between the predicted and reference currents is adopted as the cost function, following the model predictive control approach. According to the MCPC, the optimal space voltage vector is selected. The photovoltaic inverter automatically switches between two control modes, prioritizing active or reactive power according to the operating state, which effectively improves the inverter's LVRT capability. Simulation and experimental results prove that the proposed method is correct and effective.
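The vector-selection core of such an MCPC scheme is compact enough to sketch: predict the next-step current for each of the eight two-level switching vectors and keep the one minimizing the summed absolute current error. The circuit parameters and operating point below are illustrative assumptions, not values from the paper.

```python
import numpy as np

Vdc, R, L, dt = 700.0, 0.1, 5e-3, 5e-5            # illustrative values
# the eight two-level switching states mapped to alpha-beta voltages
sw = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)])
clarke = (2 / 3) * np.array([[1.0, -0.5, -0.5],
                             [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2]])
vecs = (clarke @ (sw.T * Vdc)).T                  # (8, 2) candidate voltages

def best_vector(i_ab, e_ab, i_ref):
    """One MCPC step: forward-Euler current prediction per vector."""
    i_pred = i_ab + (dt / L) * (vecs - e_ab - R * i_ab)
    cost = np.abs(i_pred - i_ref).sum(axis=1)     # sum of |current errors|
    k = int(np.argmin(cost))
    return k, i_pred[k]

k, i_next = best_vector(np.array([5.0, 0.0]),     # measured current (A)
                        np.array([311.0, 0.0]),   # grid EMF, alpha-beta (V)
                        np.array([10.0, 0.0]))    # current set point (A)
print("vector", k, "predicted current", i_next)
```

Because the cost is re-evaluated every sampling period, the reference can be switched between active- and reactive-priority current set points without altering the selection logic, which is what makes the mode switching described above straightforward.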
Tests of Independence for Ordinal Data Using Bootstrap.
ERIC Educational Resources Information Center
Chan, Wai; Yung, Yiu-Fai; Bentler, Peter M.; Tang, Man-Lai
1998-01-01
Two bootstrap tests are proposed to test the independence hypothesis in a two-way cross table. Monte Carlo studies are used to compare the traditional asymptotic test with these bootstrap methods, and the bootstrap methods are found superior in two ways: control of Type I error and statistical power. (SLD)
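The two bootstrap tests in the paper are specific to its framework; the sketch below shows the generic idea of a parametric bootstrap of the chi-square statistic under independence, resampling tables from the product of the estimated marginals. The example counts are invented.

```python
import numpy as np

def chi2_stat(t):
    """Pearson chi-square statistic of a two-way table."""
    n = t.sum()
    exp = np.outer(t.sum(axis=1), t.sum(axis=0)) / n
    mask = exp > 0
    return (((t - exp) ** 2)[mask] / exp[mask]).sum()

def bootstrap_independence_p(table, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    table = np.asarray(table, dtype=float)
    n = int(table.sum())
    # H0 cell probabilities: product of the estimated row and column marginals
    p0 = np.outer(table.sum(axis=1), table.sum(axis=0)).ravel() / n**2
    stat_obs = chi2_stat(table)
    stats = np.array([
        chi2_stat(rng.multinomial(n, p0).reshape(table.shape).astype(float))
        for _ in range(n_boot)
    ])
    return float((stats >= stat_obs).mean())   # bootstrap p-value

# invented 3x3 ordinal table
print(bootstrap_independence_p(np.array([[20, 10, 5], [12, 15, 9], [6, 11, 18]])))
```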
First Language Composition Pedagogy in the Second Language Classroom: A Reassessment.
ERIC Educational Resources Information Center
Ross, Steven; And Others
1988-01-01
Evaluated the effectiveness of using native language (Japanese) based writing methods in English as a second language (ESL) classrooms. The methods compared included sentence combining and structural grammar instruction with journal writing, controlled composition writing with feedback on surface error, and peer reformulation. Journal writing, but…
Lenderink, Albert W.; Widdershoven, Jos W. M. G.; van den Bemt, Patricia M. L. A.
2010-01-01
Objective Heart failure patients are regularly admitted to hospital and frequently use multiple medications. Besides intentional changes in pharmacotherapy, unintentional changes may occur during hospitalisation. The aim of this study was to investigate the effect of a clinical pharmacist discharge service on medication discrepancies and prescription errors in patients with heart failure. Setting A general teaching hospital in Tilburg, the Netherlands. Method An open randomized intervention study was performed comparing an intervention group with a control group receiving regular care from doctors and nurses. The clinical pharmacist discharge service consisted of reviewing the discharge medication, communicating prescribing errors to the cardiologist, giving patients information, preparing a written overview of the discharge medication and communicating this medication to both the community pharmacist and the general practitioner. Within 6 weeks after discharge all patients were routinely scheduled to visit the outpatient clinic, and medication discrepancies were measured. Main outcome measure The primary endpoint was the frequency of prescription errors in the discharge medication and medication discrepancies after discharge combined. Results Forty-four patients were included in the control group and 41 in the intervention group. Sixty-eight percent of patients in the control group had at least one discrepancy or prescription error, against 39% in the intervention group (RR 0.57 (95% CI 0.37–0.88)). The percentage of medications with a discrepancy or prescription error was 14.6% in the control group and 6.1% in the intervention group (RR 0.42 (95% CI 0.27–0.66)). Conclusion This clinical pharmacist discharge service significantly reduces the risk of discrepancies and prescription errors in the medication of patients with heart failure in the 1st month after discharge. PMID:20809276
Errors in clinical laboratories or errors in laboratory medicine?
Plebani, Mario
2006-01-01
Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes in pre- and post-examination steps must be minimized to guarantee the total quality of laboratory services.
Knight, Stacey; Camp, Nicola J
2011-04-01
Current common wisdom posits that association analyses using family-based designs have inflated type 1 error rates (if relationships are ignored) and independent controls are more powerful than familial controls. We explore these suppositions. We show theoretically that family-based designs can have deflated type 1 error rates. Through simulation, we examine the validity and power of family designs for several scenarios: cases from randomly or selectively ascertained pedigrees; and familial or independent controls. Family structures considered are as follows: sibships, nuclear families, moderate-sized and extended pedigrees. Three methods were considered with the χ² test for trend: variance correction (VC), weighted (weights assigned to account for genetic similarity), and naïve (ignoring relatedness) as well as the Modified Quasi-likelihood Score (MQLS) test. Selectively ascertained pedigrees had similar levels of disease enrichment; random ascertainment had no such restriction. Data for 1,000 cases and 1,000 controls were created under the null and alternate models. The VC and MQLS methods were always valid. The naïve method was anti-conservative if independent controls were used and valid or conservative in designs with familial controls. The weighted association method was generally valid for independent controls, and was conservative for familial controls. With regard to power, independent controls were more powerful for small-to-moderate selectively ascertained pedigrees, but familial and independent controls were equivalent in the extended pedigrees and familial controls were consistently more powerful for all randomly ascertained pedigrees. These results suggest a more complex situation than previously assumed, which has important implications for study design and analysis. © 2011 Wiley-Liss, Inc.
Zhang, Bitao; Pi, YouGuo
2013-07-01
The traditional integer order proportional-integral-differential (IO-PID) controller is sensitive to parameter variations and/or external load disturbances of the permanent magnet synchronous motor (PMSM). A fractional order proportional-integral-differential (FO-PID) control scheme based on a robustness tuning method has been proposed to enhance robustness, but that robustness addresses only open-loop gain variation of the controlled plant. In this paper, an enhanced robust fractional order proportional-plus-integral (ERFOPI) controller based on a neural network is proposed. The control law of the ERFOPI controller acts on a fractional order implement function (FOIF) of the tracking error rather than on the tracking error directly, which, according to theoretical analysis, enhances the robust performance of the system. Tuning rules and approaches, based on phase margin, crossover frequency specification and robustness against gain variation, are introduced to obtain the parameters of the ERFOPI controller, and a neural network algorithm is used to adjust the parameters of the FOIF. Simulation and experimental results show that the method proposed in this paper not only achieves favorable tracking performance but is also robust against external load disturbances and parameter variations. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
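Not the ERFOPI law itself (the FOIF and its neural tuning are specific to the paper), but a generic fractional-order PI of the form u = Kp·e + Ki·D^(-λ)e, with the fractional integral approximated by a short-memory Grünwald-Letnikov expansion; gains and memory length are assumptions.

```python
import numpy as np

class FractionalOrderPI:
    """u = Kp*e + Ki * D^(-lam) e, Grunwald-Letnikov short-memory approximation."""
    def __init__(self, Kp, Ki, lam, h, n_hist=2000):
        self.Kp, self.Ki, self.lam, self.h = Kp, Ki, lam, h
        a = -lam                      # operator order (negative: fractional integral)
        w = np.empty(n_hist)
        w[0] = 1.0
        for j in range(1, n_hist):    # GL binomial weights: w_j = w_{j-1} (1 - (a+1)/j)
            w[j] = w[j - 1] * (1.0 - (a + 1.0) / j)
        self.w = w
        self.e_hist = []              # newest error first

    def update(self, e):
        self.e_hist.insert(0, e)
        self.e_hist = self.e_hist[:len(self.w)]
        # D^(-lam) e  ~  h^lam * sum_j w_j e(t - j h)
        frac_int = self.h**self.lam * np.dot(self.w[:len(self.e_hist)], self.e_hist)
        return self.Kp * e + self.Ki * frac_int

ctrl = FractionalOrderPI(Kp=1.2, Ki=0.8, lam=0.6, h=0.01)   # assumed gains
print(ctrl.update(1.0), ctrl.update(0.7))
```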
MRAC Revisited: Guaranteed Performance with Reference Model Modification
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmaje
2010-01-01
This paper presents a modification of the conventional model reference adaptive control (MRAC) architecture in order to achieve guaranteed transient performance in both the output and input signals of an uncertain system. The proposed modification is based on feeding the tracking error back to the reference model. It is shown that this approach guarantees tracking of a given command and of the ideal control signal (the one that would be designed if the system were known) not only asymptotically but also in transient, by a proper selection of the error feedback gain. The method prevents the generation of the high frequency oscillations that are unavoidable in conventional MRAC systems at large adaptation rates. The provided design guideline makes it possible to track a reference command of any magnitude from any initial position without re-tuning. The benefits of the method are demonstrated in simulations.
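A minimal scalar sketch of the modification described: the tracking error is fed back into the reference model, x_m' = a_m·x_m + b_m·r + k_e·(x - x_m), alongside a standard gradient adaptive law. Plant parameters, gains, and the square-wave command are assumptions; the sign of the control effectiveness b is assumed known and positive.

```python
import numpy as np

dt = 1e-3
a, b = 1.0, 3.0            # unknown (to the controller) unstable plant: x' = a x + b u
a_m, b_m = -4.0, 4.0       # reference model
k_e, gamma = 20.0, 50.0    # error feedback gain into the model, adaptation rate

x = x_m = 0.0
kx = kr = 0.0              # adaptive feedback/feedforward gains
for k in range(20000):
    r = 1.0 if (k * dt) % 4.0 < 2.0 else -1.0    # square-wave command
    e = x - x_m
    u = kx * x + kr * r
    x += dt * (a * x + b * u)                    # plant
    x_m += dt * (a_m * x_m + b_m * r + k_e * e)  # reference model with error feedback
    kx += dt * (-gamma * e * x)                  # gradient laws (sign(b) > 0 assumed)
    kr += dt * (-gamma * e * r)
print(kx, kr)   # should approach the ideal gains (a_m - a)/b and b_m/b
```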
New decoding methods of interleaved burst error-correcting codes
NASA Astrophysics Data System (ADS)
Nakano, Y.; Kasahara, M.; Namekawa, T.
1983-04-01
A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high burst error correction capability with less decoding delay. By generalizing this method, a probabilistic method of multiple (m-fold) burst error correction is obtained. After estimating the burst error positions using the syndrome correlation of the subcodes, which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and thereby corrects m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
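A toy illustration of the interleaving principle underlying these codes: a depth-m block interleaver spreads a length-m channel burst so that each subcode word sees at most one error, which a single-error-correcting subcode could then repair. The decoding methods of the paper are not reproduced here.

```python
def interleave(symbols, depth):
    """Write row-wise into depth rows, read column-wise."""
    width = len(symbols) // depth
    rows = [symbols[i * width:(i + 1) * width] for i in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]

def deinterleave(symbols, depth):
    width = len(symbols) // depth
    cols = [symbols[c * depth:(c + 1) * depth] for c in range(width)]
    return [cols[c][r] for r in range(depth) for c in range(width)]

data = list(range(12))            # three length-4 subcode words, back to back
tx = interleave(data, depth=3)
tx[4:7] = ['X'] * 3               # a burst of 3 channel errors
rx = deinterleave(tx, depth=3)
print(rx)                         # each length-4 word now holds at most one 'X'
```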
Mismeasurement and the resonance of strong confounders: correlated errors.
Marshall, J R; Hastrup, J L; Ross, J S
1999-07-01
Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.
Assessment of the derivative-moment transformation method for unsteady-load estimation
NASA Astrophysics Data System (ADS)
Mohebbian, Ali; Rival, David E.
2012-08-01
It is often difficult, if not impossible, to measure the aerodynamic or hydrodynamic forces on a moving body. For this reason, a classical control-volume technique is typically applied to extract the unsteady forces. However, measuring the acceleration term within the volume of interest using particle image velocimetry (PIV) can be limited by optical access, reflections, as well as shadows. Therefore, in this study, an alternative approach, termed the derivative-moment transformation (DMT) method, is introduced and tested on a synthetic data set produced using numerical simulations. The test case involves the unsteady loading of a flat plate in a two-dimensional, laminar periodic gust. The results suggest that the DMT method can accurately predict the acceleration term so long as appropriate spatial and temporal resolutions are maintained. The major deficiency, which is more dominant for the direction of drag, was found to be the determination of pressure and unsteady terms in the wake. The effect of control-volume size was investigated, suggesting that larger domains work best by minimizing the associated error in the determination of the pressure field. When decreasing the control-volume size, wake vortices, which produce high gradients across the control surfaces, are found to substantially increase the level of error. On the other hand, it was shown that for large control volumes, and with realistic spatial resolution, the accuracy of the DMT method would also suffer. Therefore, a delicate compromise is required when selecting control-volume size in future experiments.
NASA Astrophysics Data System (ADS)
Paul, Prakash
2009-12-01
The finite element method (FEM) is used to solve three-dimensional electromagnetic scattering and radiation problems. Finite element (FE) solutions of this kind contain two main types of error: discretization error and boundary error. Discretization error depends on the number of free parameters used to model the problem, and on how effectively these parameters are distributed throughout the problem space. To reduce the discretization error, the polynomial order of the finite elements is increased, either uniformly over the problem domain or selectively in those areas with the poorest solution quality. Boundary error arises from the condition applied to the boundary that is used to truncate the computational domain. To reduce the boundary error, an iterative absorbing boundary condition (IABC) is implemented. The IABC starts with an inexpensive boundary condition and gradually improves the quality of the boundary condition as the iteration continues. An automatic error control (AEC) is implemented to balance the two types of error. With the AEC, the boundary condition is improved when the discretization error has fallen to a low enough level to make this worth doing. The AEC has these characteristics: (i) it uses a very inexpensive truncation method initially; (ii) it allows the truncation boundary to be very close to the scatterer/radiator; (iii) it puts more computational effort on the parts of the problem domain where it is most needed; and (iv) it can provide as accurate a solution as needed depending on the computational price one is willing to pay. To further reduce the computational cost, disjoint scatterers and radiators that are relatively far from each other are bounded separately and solved using a multi-region method (MRM), which leads to savings in computational cost. A simple analytical way to decide whether the MRM or the single region method will be computationally cheaper is also described. To validate the accuracy and savings in computation time, different shaped metallic and dielectric obstacles (spheres, ogives, cube, flat plate, multi-layer slab etc.) are used for the scattering problems. For the radiation problems, waveguide excited antennas (horn antenna, waveguide with flange, microstrip patch antenna) are used. Using the AEC the peak reduction in computation time during the iteration is typically a factor of 2, compared to the IABC using the same element orders throughout. In some cases, it can be as high as a factor of 4.
Improving laboratory data entry quality using Six Sigma.
Elbireer, Ali; Le Chasseur, Julie; Jackson, Brooks
2013-01-01
Makerere University in Uganda provides clinical laboratory support to over 70 clients in Uganda. With increased volume, manual data entry errors have steadily increased, prompting laboratory managers to employ the Six Sigma method to evaluate and reduce their problems. The purpose of this paper is to describe how laboratory data entry quality was improved by using Six Sigma. The Six Sigma Quality Improvement (QI) project team followed a sequence of steps, starting with defining project goals, measuring data entry errors to assess current performance, analyzing data and determining data-entry error root causes. Finally the team implemented changes and control measures to address the root causes and to maintain improvements. Establishing the Six Sigma project required considerable resources, and maintaining the gains requires additional personnel time and dedicated resources. After initiating the Six Sigma project, there was a 60.5 percent reduction in data entry errors, from 423 errors a month (i.e. 4.34 Six Sigma) in the first month down to an average of 166 errors/month (i.e. 4.65 Six Sigma) over 12 months. The team estimated the average cost of identifying and fixing a data entry error to be $16.25 per error. Thus, reducing errors by an average of 257 per month over one year saved the laboratory an estimated $50,115 a year. The Six Sigma QI project provides a replicable framework for Ugandan laboratory staff and other resource-limited organizations to promote a quality environment. Laboratory staff can deliver excellent care at lower cost by applying QI principles. This innovative QI method of reducing data entry errors in medical laboratories may improve clinical workflow processes and generate cost savings across the health care continuum.
NASA Astrophysics Data System (ADS)
Lin, Kyaw Kyaw; Soe, Aung Kyaw; Thu, Theint Theint
2008-10-01
This research work investigates a Self-Tuning Proportional Derivative (PD) type Fuzzy Logic Controller (STPDFLC) for a two-link robot system. The proposed scheme adjusts the output Scaling Factor (SF) on-line by fuzzy rules according to the current trend of the robot. The rule base for tuning the output scaling factor is defined on the error (e) and the change in error (de). The scheme is also based on the fact that the controller always tries to manipulate the process input. The rules are in the familiar if-then format. All membership functions for the controller inputs (e and de) and the controller output (UN) are defined on the common interval [-1,1], whereas the membership functions for the gain updating factor (α) are defined on [0,1]. Among the various methods for calculating the crisp output of the system, the Center of Gravity (COG) method is used in this application because of the better results it gives. Performances of the proposed STPDFLC are compared with those of the corresponding conventional PD-type Fuzzy Logic Controller (PDFLC). The proposed scheme shows remarkably improved performance over its conventional counterpart, especially under parameter variation (payload). The two-link analysis results are simulated, and the simulation results are illustrated using MATLAB® programming.
NASA Astrophysics Data System (ADS)
Huber, Matthew S.; Ferriãre, Ludovic; Losiak, Anna; Koeberl, Christian
2011-09-01
Planar deformation features (PDFs) in quartz, one of the most commonly used diagnostic indicators of shock metamorphism, are planes of amorphous material that follow crystallographic orientations, and can thus be distinguished from non-shock-induced fractures in quartz. The process of indexing data for PDFs from universal-stage measurements has traditionally been performed using a manual graphical method, a time-consuming process in which errors can easily be introduced. A mathematical method and computer algorithm, which we call the Automated Numerical Index Executor (ANIE) program for indexing PDFs, was produced, and is presented here. The ANIE program is more accurate and faster than the manual graphical determination of Miller-Bravais indices, as it allows control of the exact error used in the calculation and removal of human error from the process.
An adaptive reentry guidance method considering the influence of blackout zone
NASA Astrophysics Data System (ADS)
Wu, Yu; Yao, Jianyao; Qu, Xiangju
2018-01-01
Reentry guidance has been researched as a popular topic because it is critical for a successful flight. Because the existing guidance methods do not take into account the navigation error accumulated by the Inertial Navigation System (INS) in the blackout zone, this paper proposes an adaptive reentry guidance method to obtain the optimal reentry trajectory quickly with the objective of minimum aerodynamic heating rate. The terminal error in position and attitude can also be reduced with the proposed method. In this method, the whole reentry guidance task is divided into two phases, i.e., the trajectory updating phase and the trajectory planning phase. In the first phase, the idea of model predictive control (MPC) is used, and the receding optimization procedure ensures the optimal trajectory over the next few seconds. In the trajectory planning phase, after the vehicle has flown out of the blackout zone, the optimal reentry trajectory is obtained by online planning to adapt to the navigation information. An effective swarm intelligence algorithm, i.e., the pigeon-inspired optimization (PIO) algorithm, is applied to obtain the optimal reentry trajectory in both phases. Compared to the trajectory updating method alone, the proposed method reduces the terminal error by about 30% considering both position and attitude; in particular, the terminal height error is almost eliminated. Besides, the PIO algorithm performs better than the particle swarm optimization (PSO) algorithm in both the trajectory updating and the trajectory planning phases.
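A minimal sketch of the PIO map-and-compass operator used as a generic parameter optimizer; the landmark operator, the reentry dynamics, and the heating-rate cost are omitted, and the quadratic cost below is a placeholder assumption.

```python
import numpy as np

def pio_map_compass(cost, dim, n_pigeons=30, n_iter=100, R=0.3, lo=-1.0, hi=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n_pigeons, dim))        # pigeon positions
    V = np.zeros_like(X)                             # velocities
    gbest = X[np.argmin([cost(x) for x in X])].copy()
    for t in range(1, n_iter + 1):
        # map-and-compass operator: V <- V e^(-R t) + rand * (gbest - X)
        V = V * np.exp(-R * t) + rng.random((n_pigeons, dim)) * (gbest - X)
        X = np.clip(X + V, lo, hi)
        costs = np.array([cost(x) for x in X])
        i = int(costs.argmin())
        if costs[i] < cost(gbest):
            gbest = X[i].copy()
    return gbest

# placeholder cost standing in for the heating-rate objective of a candidate trajectory
print(pio_map_compass(lambda p: float(np.sum((p - 0.5) ** 2)), dim=4))
```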
NASA Astrophysics Data System (ADS)
Lawrence, Kurt C.; Park, Bosoon; Windham, William R.; Mao, Chengye; Poole, Gavin H.
2003-03-01
A method to calibrate a pushbroom hyperspectral imaging system for "near-field" applications in agricultural and food safety has been demonstrated. The method consists of a modified geometric control point correction applied to the focal plane array (FPA) to remove smile and keystone distortion from the system. Once the FPA correction was applied, single wavelength and distance calibrations were used to describe all points on the FPA. Finally, a percent reflectance calibration, applied on a pixel-by-pixel basis, was used for accurate measurements with the hyperspectral imaging system. The method was demonstrated with a stationary prism-grating-prism, pushbroom hyperspectral imaging system. For the system described, wavelength and distance calibrations reduced the wavelength errors to <0.5 nm and the distance errors to <0.01 mm (across the entrance slit width). The pixel-by-pixel percent reflectance calibration, which was performed at all wavelengths with dark current and 99% reflectance calibration-panel measurements, was verified with measurements on a certified gradient Spectralon panel with values ranging from about 14% to 99% reflectance, with errors generally less than 5% at the mid-wavelength measurements. Results from the calibration method indicate the hyperspectral imaging system has a usable range between 420 nm and 840 nm. Outside this range, errors increase significantly.
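A minimal sketch of the pixel-by-pixel percent reflectance calibration described: dark-current and 99%-panel images convert raw counts to reflectance independently at each pixel. The array names and the clipping guard are assumptions.

```python
import numpy as np

def to_reflectance(raw, dark, white, panel_reflectance=0.99):
    """Pixel-by-pixel conversion of raw counts to percent reflectance."""
    denom = np.clip(white.astype(float) - dark, 1e-9, None)  # guard dead pixels
    return panel_reflectance * (raw - dark) / denom

# raw, dark and white would be (lines, samples, bands) cubes from the imager
```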
Control and prediction components of movement planning in stuttering vs. nonstuttering adults
Daliri, Ayoub; Prokopenko, Roman A.; Flanagan, J. Randall; Max, Ludo
2014-01-01
Purpose Stuttering individuals show speech and nonspeech sensorimotor deficiencies. To perform accurate movements, the sensorimotor system needs to generate appropriate control signals and correctly predict their sensory consequences. Using a reaching task, we examined the integrity of these control and prediction components, separately, for movements unrelated to the speech motor system. Method Nine stuttering and nine nonstuttering adults made fast reaching movements to visual targets while sliding an object under the index finger. To quantify control, we determined initial direction error and end-point error. To quantify prediction, we calculated the correlation between vertical and horizontal forces applied to the object—an index of how well vertical force (preventing slip) anticipated direction-dependent variations in horizontal force (moving the object). Results Directional and end-point error were significantly larger for the stuttering group. Both groups performed similarly in scaling vertical force with horizontal force. Conclusions The stuttering group's reduced reaching accuracy suggests limitations in generating control signals for voluntary movements, even for non-orofacial effectors. Typical scaling of vertical force with horizontal force suggests an intact ability to predict the consequences of planned control signals. Stuttering may be associated with generalized deficiencies in planning control signals rather than predicting the consequences of those signals. PMID:25203459
Multiple Cognitive Control Effects of Error Likelihood and Conflict
Brown, Joshua W.
2010-01-01
Recent work on cognitive control has suggested a variety of performance monitoring functions of the anterior cingulate cortex, such as errors, conflict, error likelihood, and others. Given the variety of monitoring effects, a corresponding variety of control effects on behavior might be expected. This paper explores whether conflict and error likelihood produce distinct cognitive control effects on behavior, as measured by response time. A change signal task (Brown & Braver, 2005) was modified to include conditions of likely errors due to tardy as well as premature responses, in conditions with and without conflict. The results discriminate between competing hypotheses of independent vs. interacting conflict and error likelihood control effects. Specifically, the results suggest that the likelihood of premature vs. tardy response errors can lead to multiple distinct control effects, which are independent of cognitive control effects driven by response conflict. As a whole, the results point to the existence of multiple distinct cognitive control mechanisms and challenge existing models of cognitive control that incorporate only a single control signal. PMID:19030873
NASA Astrophysics Data System (ADS)
Courchesne, Samuel
Knowledge of the dynamic characteristics of a fixed-wing UAV is necessary to design flight control laws and to build a high-quality flight simulator. The basic features of a flight mechanics model include the mass and inertia properties and the major aerodynamic terms; obtaining them involves a complex process combining various numerical analysis techniques and experimental procedures. This thesis focuses on the analysis of estimation techniques applied to the problem of estimating stability and control derivatives from flight test data provided by an experimental UAV. To achieve this objective, a modern identification methodology (Quad-M) is used to coordinate the processing tasks from multidisciplinary fields, such as parameter estimation, modeling, instrumentation, the definition of flight maneuvers, and validation. The system under study is a nonlinear six-degree-of-freedom model with a linear aerodynamic model. Time domain techniques are used for the identification of the drone. First, the equation error method is used to determine the structure of the aerodynamic model. Thereafter, the output error method and the filter error method are used to estimate the values of the aerodynamic coefficients. Matlab scripts for parameter estimation obtained from the American Institute of Aeronautics and Astronautics (AIAA) are used and modified as necessary to achieve the desired results. A considerable effort in this part of the research is devoted to the design of experiments, including the onboard data acquisition system and the definition of flight maneuvers. The flight tests were conducted under stable flight conditions and with low atmospheric disturbance. The identification results showed that the filter error method is the most effective for estimating the parameters of the drone, owing to the presence of both process and measurement noise. The aerodynamic coefficients are validated using a numerical analysis based on the vortex method. In addition, a simulation model incorporating the estimated parameters is used to compare the simulated behavior with the measured states. Finally, good agreement between the results is demonstrated despite a limited amount of flight data. Keywords: drone, identification, estimation, nonlinear, flight test, system, aerodynamic coefficient.
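A minimal sketch of the equation error method mentioned above: since it reduces to linear regression, measured state derivatives can be regressed onto candidate states and controls to estimate aerodynamic derivatives. The signals, true coefficients, and noise levels below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
alpha = rng.normal(0.0, 0.05, N)    # angle of attack [rad]
q = rng.normal(0.0, 0.10, N)        # pitch rate [rad/s]
de = rng.normal(0.0, 0.10, N)       # elevator deflection [rad]

theta_true = np.array([-4.0, -1.2, -6.0])            # assumed pitch-moment derivatives
X = np.column_stack([alpha, q, de])                  # regressor matrix
qdot = X @ theta_true + rng.normal(0.0, 0.02, N)     # "measured" derivative + noise

theta_hat, *_ = np.linalg.lstsq(X, qdot, rcond=None) # equation-error estimate
print(theta_hat)
```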
NASA Astrophysics Data System (ADS)
Luy, N. T.
2018-04-01
The design of distributed cooperative H∞ optimal controllers for multi-agent systems is a major challenge when the agents' models are uncertain multi-input and multi-output nonlinear systems in strict-feedback form in the presence of external disturbances. In this paper, first, the distributed cooperative H∞ optimal tracking problem is transformed into the control of the cooperative tracking error dynamics in affine form. Second, control schemes and online algorithms are proposed via adaptive dynamic programming (ADP) and the theory of zero-sum differential graphical games. The schemes use only one neural network (NN) for each agent, instead of the three typically required in ADP, to reduce computational complexity and to avoid choosing initial NN weights for stabilising controllers. It is shown that, despite not using knowledge of the cooperative internal dynamics, the proposed algorithms not only approximate the Nash equilibrium values but also guarantee that all signals, such as the NN weight approximation errors and the cooperative tracking errors in the closed-loop system, are uniformly ultimately bounded. Finally, the effectiveness of the proposed method is shown by simulation results of an application to wheeled mobile multi-robot systems.
Acoustic sensor for real-time control for the inductive heating process
Kelley, John Bruce; Lu, Wei-Yang; Zutavern, Fred J.
2003-09-30
Disclosed is a system and method for providing closed-loop control of the heating of a workpiece by an induction heating machine, including generating an acoustic wave in the workpiece with a pulsed laser; optically measuring displacements of the surface of the workpiece in response to the acoustic wave; calculating a sub-surface material property by analyzing the measured surface displacements; creating an error signal by comparing an attribute of the calculated sub-surface material properties with a desired attribute; and reducing the error signal below an acceptable limit by adjusting, in real-time, as often as necessary, the operation of the inductive heating machine.
Convergence of fractional adaptive systems using gradient approach.
Gallegos, Javier A; Duarte-Mermoud, Manuel A
2017-07-01
Conditions for boundedness and convergence of the output error and the parameter error for various Caputo fractional order adaptive schemes based on the steepest descent method are derived in this paper. To this end, the concept of sufficiently exciting signals is introduced, characterized and related to the concept of persistently exciting signals used in the integer order case. An application is designed in adaptive indirect control of integer order systems, using fractional equations to adjust the parameters. This application is illustrated for a pole placement adaptive problem. Advantages of using fractional adjustment in adaptive control schemes are demonstrated experimentally. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boehnke, E McKenzie; DeMarco, J; Steers, J
2016-06-15
Purpose: To examine both the IQM’s sensitivity and false positive rate to varying MLC errors. By balancing these two characteristics, an optimal tolerance value can be derived. Methods: An un-modified SBRT Liver IMRT plan containing 7 fields was randomly selected as a representative clinical case. The active MLC positions for all fields were perturbed randomly from a square distribution of varying width (±1mm to ±5mm). These unmodified and modified plans were measured multiple times each by the IQM (a large area ion chamber mounted to a TrueBeam linac head). Measurements were analyzed relative to the initial, unmodified measurement. IQM readings are analyzed as a function of control points. In order to examine sensitivity to errors along a field’s delivery, each measured field was divided into 5 groups of control points, and the maximum error in each group was recorded. Since the plans have known errors, we compared how well the IQM is able to differentiate between unmodified and error plans. ROC curves and logistic regression were used to analyze this, independent of thresholds. Results: A likelihood-ratio Chi-square test showed that the IQM could significantly predict whether a plan had MLC errors, with the exception of the beginning and ending control points. Upon further examination, we determined there was ramp-up occurring at the beginning of delivery. Once the linac AFC was tuned, the subsequent measurements (relative to a new baseline) showed significant (p <0.005) abilities to predict MLC errors. Using the area under the curve, we show the IQM’s ability to detect errors increases with increasing MLC error (Spearman’s Rho=0.8056, p<0.0001). The optimal IQM count thresholds from the ROC curves are ±3%, ±2%, and ±7% for the beginning, middle 3, and end segments, respectively. Conclusion: The IQM has proven to be able to detect not only MLC errors, but also differences in beam tuning (ramp-up). Partially supported by the Susan Scott Foundation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hashii, Haruko, E-mail: haruko@pmrc.tsukuba.ac.jp; Hashimoto, Takayuki; Okawa, Ayako
2013-03-01
Purpose: Radiation therapy for cancer may be required for patients with implantable cardiac devices. However, the influence of secondary neutrons or scattered irradiation from high-energy photons (≥10 MV) on implantable cardioverter-defibrillators (ICDs) is unclear. This study was performed to examine this issue in 2 ICD models. Methods and Materials: ICDs were positioned around a water phantom under conditions simulating clinical radiation therapy. The ICDs were not irradiated directly. A control ICD was positioned 140 cm from the irradiation isocenter. Fractional irradiation was performed with 18-MV and 10-MV photon beams to give cumulative in-field doses of 600 Gy and 1600 Gy, respectively. Errors were checked after each fraction. Soft errors were defined as severe (change to safety back-up mode), moderate (memory interference, no changes in device parameters), and minor (slight memory change, undetectable by computer). Results: Hard errors were not observed. For the older ICD model, the incidences of severe, moderate, and minor soft errors at 18 MV were 0.75, 0.5, and 0.83/50 Gy at the isocenter. The corresponding data for 10 MV were 0.094, 0.063, and 0 /50 Gy. For the newer ICD model at 18 MV, these data were 0.083, 2.3, and 5.8 /50 Gy. Moderate and minor errors occurred at 18 MV in control ICDs placed 140 cm from the isocenter. The error incidences were 0, 1, and 0 /600 Gy at the isocenter for the newer model, and 0, 1, and 6 /600 Gy for the older model. At 10 MV, no errors occurred in control ICDs. Conclusions: ICD errors occurred more frequently at 18 MV irradiation, which suggests that the errors were mainly caused by secondary neutrons. Soft errors of ICDs were observed with high energy photon beams, but most were not critical in the newer model. These errors may occur even when the device is far from the irradiation field.
An active co-phasing imaging testbed with segmented mirrors
NASA Astrophysics Data System (ADS)
Zhao, Weirui; Cao, Genrui
2011-06-01
An active co-phasing imaging testbed with highly accurate optical adjustment and control at the nanometer scale was set up to validate algorithms for piston and tip-tilt error sensing and real-time adjustment. A modular design was adopted. The primary mirror was spherical and divided into three sub-mirrors. One of them was fixed and served as the reference segment; the others were each adjustable relative to the fixed segment in three degrees of freedom (piston, tip and tilt), using sensitive micro-displacement actuators with a range of 15 mm and a resolution of 3 nm. Two-dimensional dispersed fringe analysis was used to sense the piston error between adjacent segments over a range of 200 μm with a repeatability of 2 nm, and the tip-tilt error was obtained by centroid sensing. Co-phased imaging could be realized by correcting the errors measured above with the computer-driven micro-displacement actuators. The process of co-phasing error sensing and correction could be monitored in real time by a scrutiny module set in this testbed. A FISBA interferometer was introduced to evaluate the co-phasing performance, and finally a total residual surface error of about 50 nm rms was achieved.
NASA Astrophysics Data System (ADS)
He, Bin; Frey, Eric C.
2010-06-01
Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were linear in the shift for both the QSPECT and QPlanar methods. QPlanar was less sensitive to object definition perturbations than QSPECT, especially for dilation and erosion cases. Up to 1 voxel misregistration or misdefinition resulted in up to 8% error in organ activity estimates, with the largest errors for small or low uptake organs. Both types of VOI definition errors produced larger errors in activity estimates for small and low uptake organs (i.e. -7.5% to 5.3% for the left kidney) than for a large and high uptake organ (i.e. -2.9% to 2.1% for the liver). We observed that misregistration generally had larger effects than misdefinition, with errors ranging from -7.2% to 8.4%. The different imaging methods evaluated responded differently to the errors from misregistration and misdefinition. We found that QSPECT was more sensitive to misdefinition errors, but less sensitive to misregistration errors, as compared to the QPlanar method. Thus, sensitivity to VOI definition errors should be an important criterion in evaluating quantitative imaging methods.
Rong, Hao; Tian, Jin
2015-05-01
The study contributes to human reliability analysis (HRA) by proposing a method that focuses more on human error causality within a sociotechnical system, illustrating its rationality and feasibility with the case of a Minuteman (MM) III missile accident. Due to the complexity and dynamics within a sociotechnical system, previous analyses of accidents involving human and organizational factors clearly demonstrated that methods using a sequential accident model are inadequate for analyzing human error within a sociotechnical system. The system-theoretic accident model and processes (STAMP) was used to develop a universal framework of human error causal analysis. To elaborate the causal relationships and demonstrate the dynamics of human error, system dynamics (SD) modeling was conducted based on the framework. A total of 41 contributing factors, categorized into four types of human error, were identified through the STAMP-based analysis. All factors relate to a broad view of sociotechnical systems and are more comprehensive than the causation presented in the officially issued accident investigation report. Recommendations for both technical and managerial improvements to lower the risk of the accident are proposed. The interdisciplinary approach provides complementary support between system safety and human factors, and the integrated method based on STAMP and the SD model contributes effectively to HRA. The proposed method will be beneficial to HRA, risk assessment, and control of the MM III operating process, as well as other sociotechnical systems. © 2014, Human Factors and Ergonomics Society.
Self-calibration method without joint iteration for distributed small satellite SAR systems
NASA Astrophysics Data System (ADS)
Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan
2013-12-01
The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions under large position errors, since it requires a joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well regardless of position errors when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one, based on these two observations. In the modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, by using the zero Doppler bin data, the phase error estimation can be performed well independently of position errors. Finally, position errors are estimated based on a Taylor-series expansion. Meanwhile, the joint iteration between gain-phase error estimation and position error estimation is not required. Therefore, the problem of suboptimal convergence, which occurs in the conventional method, can be avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verify the effectiveness of the modified method.
NASA Astrophysics Data System (ADS)
Bergese, P.; Bontempi, E.; Depero, L. E.
2006-10-01
X-ray reflectivity (XRR) is a non-destructive, accurate and fast technique for evaluating film density. However, sample-goniometer alignment is a critical experimental factor and the overriding error source in XRR density determination. With commercial single-wavelength X-ray reflectometers, alignment is difficult to control and strongly depends on the operator. In the present work, the contribution of misalignment to the density evaluation error is discussed, and a novel procedure (named the XRR-density evaluation or XRR-DE method) to minimize the problem is presented. The method bypasses the alignment step by extrapolating the correct density value from appropriate non-specular XRR data sets. This procedure is operator independent and suitable for commercial single-wavelength X-ray reflectometers. To test the XRR-DE method, single crystals of TiO2 and SrTiO3 were used. In both cases the determined densities differed from the nominal ones by less than 5.5%. Thus, the XRR-DE method can be successfully applied to evaluate the density of thin films for which only optical reflectivity is currently used. A further advantage is that this method can be considered thickness independent.
Khammarnia, Mohammad; Sharifian, Roxana; Zand, Farid; Barati, Omid; Keshtkaran, Ali; Sabetian, Golnar; Shahrokh, Nasim; Setoodezadeh, Fatemeh
2017-01-01
Background: One way to reduce medical errors associated with physician orders is computerized physician order entry (CPOE) software. This study was conducted to compare prescription orders between 2 groups before and after CPOE implementation in a hospital. Methods: We conducted a before-after prospective study in 2 intensive care unit (ICU) wards (as intervention and control wards) in the largest tertiary public hospital in South of Iran during 2014 and 2016. All prescription orders were validated by a clinical pharmacist and an ICU physician. The rates of ordering errors in medical orders were compared before (manual ordering) and after implementation of the CPOE. A standard checklist was used for data collection. For the data analysis, SPSS Version 21, descriptive statistics, and analytical tests such as McNemar, chi-square, and logistic regression were used. Results: The CPOE significantly decreased 2 types of errors, illegible orders and failure to write the drug form, in the intervention ward compared to the control ward (p < 0.05); however, 2 other types of errors increased due to defects in the CPOE (p < 0.001). The use of CPOE decreased the prescription errors from 19% to 3% (p = 0.001), whereas no differences were observed in the control ward (p > 0.05). In addition, more errors occurred in the morning shift (p < 0.001). Conclusion: In general, the use of CPOE significantly reduced prescription errors. Nonetheless, more caution should be exercised in the use of this system, and its deficiencies should be resolved. Furthermore, it is recommended that CPOE be used to improve the quality of delivered services in hospitals. PMID:29445698
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbee, D; McCarthy, A; Galavis, P
Purpose: Errors found during initial physics plan checks frequently require replanning and reprinting, resulting in decreased departmental efficiency. Additionally, errors may be missed during physics checks, resulting in potential treatment errors or interruption. This work presents a process control created using the Eclipse Scripting API (ESAPI) that enables dosimetrists and physicists to detect potential errors in the Eclipse treatment planning system prior to performing any plan approvals or printing. Methods: Potential failure modes for five categories were generated based on available ESAPI (v11) patient object properties: Images, Contours, Plans, Beams, and Dose. An Eclipse script plugin (PlanCheck) was written in C# to check the errors most frequently observed clinically in each of the categories. The PlanCheck algorithms were devised to check technical aspects of plans, such as deliverability (e.g. minimum EDW MUs), in addition to ensuring that policies and procedures relating to planning were being followed. The effect on clinical workflow efficiency was measured by tracking the plan document error rate and plan revision/retirement rates in the Aria database over monthly intervals. Results: The numbers of potential failure modes the PlanCheck script is currently capable of checking, by category, are: Images (6), Contours (7), Plans (8), Beams (17), and Dose (4). Prior to implementation of the PlanCheck plugin, the observed error rates in errored plan documents and revised/retired plans in the Aria database were 20% and 22%, respectively. Error rates were seen to decrease gradually over time as adoption of the script improved. Conclusion: A process control created using the Eclipse scripting API enabled plan checks to occur within the planning system, resulting in a reduction in error rates and improved efficiency. Future work includes: initiating a full FMEA for the planning workflow, extending categories to include additional checks outside of ESAPI via Aria database queries, and eventual automated plan checks.
Study of Current Measurement Method Based on Circular Magnetic Field Sensing Array
Li, Zhenhua; Zhang, Siqiu; Wu, Zhengtian; Tao, Yuan
2018-01-01
Classic core-based instrument transformers are prone to magnetic saturation. This affects the measurement accuracy of such transformers and limits their applications in measuring large direct current (DC). Moreover, protection and control systems may exhibit malfunctions due to such measurement errors. This paper presents a more accurate method for current measurement based on a circular magnetic field sensing array. The proposed measurement approach utilizes multiple Hall sensors that are evenly distributed on a circle, and the average value of all Hall sensors is taken as the final measurement. The calculation model is established for the case of magnetic field interference from a parallel wire, and the simulation results show that the error decreases significantly when the number of Hall sensors n is greater than 8. The measurement error is less than 0.06% when the wire spacing is greater than 2.5 times the radius of the sensor array. A simulation study on an off-center primary conductor is conducted, and a Hall sensor compensation method is adopted to improve the accuracy. The simulation and test results indicate that the measurement error of the system is less than 0.1%. PMID:29734742
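A minimal simulation sketch of the measurement principle (geometry and currents are assumptions): n Hall sensors on a circle of radius r read the tangential field, and by the discrete form of Ampère's law the average reading times 2πr/μ0 recovers the enclosed current while largely cancelling the field of an external interfering wire.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def relative_error(I_meas=1000.0, I_interf=1000.0, d=0.25, r=0.1, n=8):
    ang = 2.0 * np.pi * np.arange(n) / n
    pts = r * np.column_stack([np.cos(ang), np.sin(ang)])   # sensor positions
    tang = np.column_stack([-np.sin(ang), np.cos(ang)])     # tangential directions
    # centered wire: tangential field mu0 I / (2 pi r) at every sensor
    B = MU0 * I_meas / (2.0 * np.pi * r) * np.ones(n)
    # interfering wire at (d, 0): add its tangential component at each sensor
    rel = pts - np.array([d, 0.0])
    dist2 = (rel ** 2).sum(axis=1)
    B_int = MU0 * I_interf / (2.0 * np.pi * dist2[:, None]) \
            * np.column_stack([-rel[:, 1], rel[:, 0]])
    B += (B_int * tang).sum(axis=1)
    I_est = B.mean() * 2.0 * np.pi * r / MU0                # discrete Ampere's law
    return abs(I_est - I_meas) / I_meas

print(relative_error(n=8), relative_error(n=16))
```

For the parameters above (equal currents, wire spacing 2.5 r), the n = 8 error comes out near the 0.06% level quoted in the abstract, and it drops rapidly as n grows.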
Using comparative genome analysis to identify problems in annotated microbial genomes.
Poptsova, Maria S; Gogarten, J Peter
2010-07-01
Genome annotation is a tedious task that is mostly done by automated methods; however, the accuracy of these approaches has been questioned since the beginning of the sequencing era. Genome annotation is a multilevel process, and errors can emerge at different stages: during sequencing, as a result of gene-calling procedures, and in the process of assigning gene functions. Missed or wrongly annotated genes differentially impact different types of analyses. Here we discuss and demonstrate how the methods of comparative genome analysis can refine annotations by locating missing orthologues. We also discuss possible reasons for errors and show that the second-generation annotation systems, which combine multiple gene-calling programs with similarity-based methods, perform much better than the first annotation tools. Since old errors may propagate to the newly sequenced genomes, we emphasize that the problem of continuously updating popular public databases is an urgent and unresolved one. Due to the progress in genome-sequencing technologies, automated annotation techniques will remain the main approach in the future. Researchers need to be aware of the existing errors in the annotation of even well-studied genomes, such as Escherichia coli, and consider additional quality control for their results.
Attitude control with realization of linear error dynamics
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Bach, Ralph E.
1993-01-01
An attitude control law is derived to realize linear unforced error dynamics with the attitude error defined in terms of rotation group algebra (rather than vector algebra). Euler parameters are used in the rotational dynamics model because they are globally nonsingular, but only the minimal three Euler parameters are used in the error dynamics model because they have no nonlinear mathematical constraints to prevent the realization of linear error dynamics. The control law is singular only when the attitude error angle is exactly pi rad about any eigenaxis, and a simple intuitive modification at the singularity allows the control law to be used globally. The forced error dynamics are nonlinear but stable. Numerical simulation tests show that the control law performs robustly for both initial attitude acquisition and attitude control.
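A minimal sketch of the error construction described: the attitude error is formed with the quaternion (Euler-parameter) group product rather than vector subtraction, and only its three-component vector part drives the control. The gains are assumptions, and the sign flip enforces the shorter rotation; the law is singular only at an error of exactly π rad (zero scalar part), mirroring the paper's observation.

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    pw, pv = p[0], p[1:]
    qw, qv = q[0], q[1:]
    return np.concatenate([[pw * qw - pv @ qv],
                           pw * qv + qw * pv + np.cross(pv, qv)])

def quat_conj(q):
    return np.concatenate([[q[0]], -q[1:]])

def attitude_control(q, q_des, omega, Kp=2.0, Kd=1.0):
    q_err = quat_mul(quat_conj(q_des), q)   # group-product error, not vector subtraction
    if q_err[0] < 0.0:
        q_err = -q_err                      # pick the shorter of the two rotations
    # q_err[0] == 0 corresponds to a pi-rad error, the only singular configuration
    return -Kp * q_err[1:] - Kd * omega     # torque command from the vector part
```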
NASA Astrophysics Data System (ADS)
Young, Kenneth C.; Cook, James J. H.; Oduko, Jennifer M.; Bosmans, Hilde
2006-03-01
European Guidelines for quality control in digital mammography specify minimum and achievable standards of image quality in terms of threshold contrast, based on readings of images of the CDMAM test object by human observers. However, this is time-consuming and has large inter-observer error. To overcome these problems a software program (CDCOM) is available to automatically read CDMAM images, but the optimal method of interpreting its output is not defined. This study evaluates methods of determining threshold contrast from the program output, and compares these to human readings for a variety of mammography systems. The methods considered are (A) simple thresholding, (B) psychometric curve fitting, (C) smoothing and interpolation, and (D) smoothing and psychometric curve fitting. Each method leads to similar threshold contrasts but with different reproducibility. Method (A) had relatively poor reproducibility, with a standard error in threshold contrast of 18.1 +/- 0.7%. This was reduced to 8.4% by using a contrast-detail curve fitting procedure. Method (D) had the best reproducibility, with an error of 6.7%, reducing to 5.1% with curve fitting. A panel of 3 human observers had an error of 4.4%, reduced to 2.9% by curve fitting. All automatic methods led to threshold contrasts that were lower than for humans. The ratio of human to program threshold contrasts varied with detail diameter and was 1.50 +/- .04 (sem) at 0.1 mm and 1.82 +/- .06 at 0.25 mm for method (D). There were good correlations between the threshold contrasts determined by humans and the automated methods.
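A minimal sketch of the psychometric-curve step in methods (B) and (D): the fraction of correctly detected discs is fitted against log contrast and the threshold is read off at 62.5% correct (the convention for the 4-alternative CDMAM cell). The functional form, data values, and threshold convention are assumptions, not CDCOM output.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(logc, c50, slope):
    # 4-alternative cell: guessing floor 0.25, saturating at 1.0
    return 0.25 + 0.75 / (1.0 + np.exp(-slope * (logc - c50)))

logc = np.log10([0.05, 0.08, 0.13, 0.20, 0.32, 0.50])   # disc contrasts (invented)
frac = np.array([0.28, 0.35, 0.55, 0.80, 0.95, 1.00])   # fraction detected (invented)
(c50, slope), _ = curve_fit(psychometric, logc, frac, p0=[-0.8, 5.0])

target = 0.625   # threshold convention: 62.5% correct
log_thr = c50 - np.log(0.75 / (target - 0.25) - 1.0) / slope
print(10 ** log_thr)   # threshold contrast
```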
Improved Beam Jitter Control Methods for High Energy Laser Systems
2009-12-01
Figure 16. The inner loop is a rate control loop composed of a gimbal, power amplifier, controller, and servo components (gyro, motor, and encoder). [...] System characterization experiments, 1. WFOV Control Loop, a. Resonance Frequency: random signals were applied to the power amplifier and the output [...] Loop Stabilization: by applying a disturbance to the input of the power amplifier and measuring torque error, one is able to determine the torque [...]
Signal-Detection Analyses of Conditional Discrimination and Delayed Matching-to-Sample Performance
ERIC Educational Resources Information Center
Alsop, Brent
2004-01-01
Quantitative analyses of stimulus control and reinforcer control in conditional discriminations and delayed matching-to-sample procedures often encounter a problem; it is not clear how to analyze data when subjects have not made errors. The present article examines two common methods for overcoming this problem. Monte Carlo simulations of…
NASA Technical Reports Server (NTRS)
Fromme, J. A.; Golberg, M. A.
1979-01-01
Lift interference effects are discussed based on Bland's (1968) integral equation. A mathematical existence theory is utilized for which convergence of the numerical method has been proved for general (square-integrable) downwashes. Airloads are computed using orthogonal airfoil polynomial pairs in conjunction with a collocation method which is numerically equivalent to Galerkin's method and complex least squares. Convergence exhibits exponentially decreasing error with the number n of collocation points for smooth downwashes, whereas errors are proportional to 1/n for discontinuous downwashes. The latter can be reduced to 1/n^(m+1) with mth-order Richardson extrapolation (with m = 2, hundredfold error reductions were obtained for only a 13% increase in computer time). Numerical results are presented showing acoustic resonance, as well as the effect of Mach number, ventilation, height-to-chord ratio, and mode shape on wind-tunnel interference. Excellent agreement with experiment is obtained in steady flow, and good agreement is obtained for unsteady flow.
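As a concrete illustration of the error-reduction step, the sketch below applies mth-order Richardson extrapolation to an artificial sequence with a 1/n leading error term; the refinement ratio, the test sequence, and the `richardson` helper are illustrative assumptions, not Bland's formulation.

```python
import numpy as np

def richardson(seq_fn, n0, m, r=2):
    """Eliminate the leading 1/n, 1/n^2, ... error terms from A(n).

    seq_fn : callable returning the approximation at n points
    n0     : base resolution
    m      : extrapolation order (m = 2 gave ~100x error reduction in the paper)
    r      : refinement ratio between successive resolutions
    """
    # Table column 0 holds A(n0 * r^i); column j cancels the 1/n^j term.
    T = [[seq_fn(n0 * r**i)] for i in range(m + 1)]
    for j in range(1, m + 1):
        for i in range(j, m + 1):
            T[i].append((r**j * T[i][j-1] - T[i-1][j-1]) / (r**j - 1))
    return T[m][m]

# Illustrative O(1/n) sequence converging to pi
approx = lambda n: np.pi + 1.0 / n - 0.5 / n**2
print(abs(approx(64) - np.pi))                   # raw error, ~1.5e-2
print(abs(richardson(approx, 64, 2) - np.pi))    # orders of magnitude smaller
```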
NASA Astrophysics Data System (ADS)
Yavari, Somayeh; Valadan Zoej, Mohammad Javad; Salehi, Bahram
2018-05-01
The procedure of selecting an optimum number and best distribution of ground control information is important in order to reach accurate and robust registration results. This paper proposes a new general procedure based on a Genetic Algorithm (GA) which is applicable to all kinds of features (point, line, and areal features); however, linear features, due to their unique characteristics, are of interest in this investigation. This method is called the Optimum number of Well-Distributed ground control Information Selection (OWDIS) procedure. Using this method, a population of binary chromosomes is randomly initialized. The ones indicate the presence of a pair of conjugate lines as a GCL and the zeros specify their absence. The chromosome length is set equal to the number of all conjugate lines. For each chromosome, the unknown parameters of a proper mathematical model can be calculated using the selected GCLs (the ones in each chromosome). Then, a limited number of Check Points (CPs) are used to evaluate the Root Mean Square Error (RMSE) of each chromosome as its fitness value. The procedure continues until a stopping criterion is reached. The number and position of ones in the best chromosome indicate the selected GCLs among all conjugate lines. To evaluate the proposed method, GeoEye and Ikonos images over different areas of Iran are used. Comparing the results obtained by the proposed method in a traditional RFM with those of conventional methods that use all conjugate lines as GCLs shows a fivefold accuracy improvement (to pixel-level accuracy), as well as the strength of the proposed method. To prevent over-parametrization errors in a traditional RFM due to the selection of a high number of improper correlated terms, an optimized line-based RFM is also proposed. The results show the superiority of the combination of the proposed OWDIS method with an optimized line-based RFM in terms of increasing the accuracy to better than 0.7 pixel, improving reliability, and reducing systematic errors. These results also demonstrate the high potential of linear features as reliable control features for reaching sub-pixel accuracy in registration applications.
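To make the chromosome-and-fitness machinery concrete, here is a minimal sketch of the selection loop under stated assumptions: a simple affine transform stands in for the paper's rational function model, synthetic point pairs stand in for conjugate lines, and the population size, mutation rate, and `fitness` helper are illustrative.

```python
import numpy as np
rng = np.random.default_rng(0)

# Toy stand-in: an affine map plays the role of the registration model;
# synthetic point pairs play the role of conjugate-line control information.
n_ctrl, n_check = 40, 15
src = rng.uniform(0, 1000, (n_ctrl + n_check, 2))
A_true = np.array([[1.001, 0.002], [-0.001, 0.999]])
dst = src @ A_true.T + [5.0, -3.0] + rng.normal(0, 0.3, src.shape)
ctrl, chk = slice(0, n_ctrl), slice(n_ctrl, None)

def fitness(mask):
    """RMSE on check points using only the selected control pairs."""
    if mask.sum() < 3:
        return 1e9
    X = np.hstack([src[ctrl][mask], np.ones((mask.sum(), 1))])
    P, *_ = np.linalg.lstsq(X, dst[ctrl][mask], rcond=None)
    Xc = np.hstack([src[chk], np.ones((n_check, 1))])
    return np.sqrt(np.mean((Xc @ P - dst[chk]) ** 2))

pop = rng.integers(0, 2, (30, n_ctrl)).astype(bool)   # binary chromosomes
for gen in range(60):
    scores = np.array([fitness(m) for m in pop])
    pop = pop[np.argsort(scores)]          # elitist sort by check-point RMSE
    for i in range(15, 30):                # refill via crossover + mutation
        a, b = pop[rng.integers(0, 15, 2)]
        cut = rng.integers(1, n_ctrl)
        child = np.concatenate([a[:cut], b[cut:]])
        pop[i] = child ^ (rng.random(n_ctrl) < 0.02)
best = pop[0]
print(best.sum(), "control pairs selected, check-point RMSE:", fitness(best))
```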
Feedback controlled optics with wavefront compensation
NASA Technical Reports Server (NTRS)
Breckenridge, William G. (Inventor); Redding, David C. (Inventor)
1993-01-01
The sensitivity model of a complex optical system obtained by linear ray tracing is used to compute a control gain matrix by imposing the mathematical condition for minimizing the total wavefront error at the optical system's exit pupil. The most recent deformations or error states of the controlled segments or optical surfaces of the system are then assembled as an error vector, and the error vector is transformed by the control gain matrix to produce the exact control variables which will minimize the total wavefront error at the exit pupil of the optical system. These exact control variables are then applied to the actuators controlling the various optical surfaces in the system causing the immediate reduction in total wavefront error observed at the exit pupil of the optical system.
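A minimal sketch of the gain-matrix idea, assuming a linear sensitivity model: if the wavefront perturbation responds linearly to actuator commands through a sensitivity matrix S, the gain matrix that minimizes the residual wavefront error in the least-squares sense is the negative pseudoinverse of S. The matrix sizes and the random S below are illustrative stand-ins for values that would come from linear ray tracing.

```python
import numpy as np
rng = np.random.default_rng(1)

# Linear sensitivity model: wavefront change = S @ (actuator commands).
n_wavefront, n_act = 120, 12
S = rng.normal(size=(n_wavefront, n_act))     # illustrative stand-in

# Gain matrix minimizing ||w + S u||^2: the least-squares solution is
# u = -pinv(S) w, so G = -pinv(S) maps error states to actuator commands.
G = -np.linalg.pinv(S)

w_error = rng.normal(size=n_wavefront)        # current exit-pupil error vector
u = G @ w_error                               # actuator commands
residual = w_error + S @ u
print(np.linalg.norm(w_error), "->", np.linalg.norm(residual))
```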
Efficiency degradation due to tracking errors for point focusing solar collectors
NASA Technical Reports Server (NTRS)
Hughes, R. O.
1978-01-01
An important parameter in the design of point focusing solar collectors is the intercept factor, which is a measure of efficiency and of the energy available for use in the receiver. Using statistical methods, an expression for the expected value of the intercept factor is derived for various configurations and control law implementations. The analysis assumes that a radially symmetric flux distribution (not necessarily Gaussian) is generated at the focal plane due to the sun's finite image and various reflector errors. The time-varying tracking errors are assumed to be uniformly distributed within the threshold limits, which allows the expected value to be calculated.
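The expectation being computed can be illustrated numerically. The sketch below assumes a Gaussian focal-plane flux (the derivation only requires radial symmetry) and tracking errors uniform within the threshold limits; all parameter values are illustrative.

```python
import numpy as np
rng = np.random.default_rng(2)

sigma = 1.0        # spread of the (here Gaussian) focal-plane flux
r_receiver = 2.5   # receiver aperture radius
threshold = 1.5    # tracking-error threshold limits (uniform within +/- this)

def intercept_factor(offset, n=20000):
    """Fraction of flux inside the aperture for a given pointing offset."""
    pts = rng.normal(0.0, sigma, (n, 2)) + np.array([offset, 0.0])
    return np.mean(np.hypot(pts[:, 0], pts[:, 1]) <= r_receiver)

# Expected value over tracking error uniformly distributed in the limits
offsets = rng.uniform(-threshold, threshold, 300)
phi = np.mean([intercept_factor(o) for o in offsets])
print("perfect tracking:", intercept_factor(0.0))
print("expected with tracking error:", phi)
```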
Context Specificity of Post-Error and Post-Conflict Cognitive Control Adjustments
Forster, Sarah E.; Cho, Raymond Y.
2014-01-01
There has been accumulating evidence that cognitive control can be adaptively regulated by monitoring for processing conflict as an index of online control demands. However, it is not yet known whether top-down control mechanisms respond to processing conflict in a manner specific to the operative task context or confer a more generalized benefit. While previous studies have examined the taskset-specificity of conflict adaptation effects, yielding inconsistent results, control-related performance adjustments following errors have been largely overlooked. This gap in the literature underscores recent debate as to whether post-error performance represents a strategic, control-mediated mechanism or a nonstrategic consequence of attentional orienting. In the present study, evidence of generalized control following both high conflict correct trials and errors was explored in a task-switching paradigm. Conflict adaptation effects were not found to generalize across tasksets, despite a shared response set. In contrast, post-error slowing effects were found to extend to the inactive taskset and were predictive of enhanced post-error accuracy. In addition, post-error performance adjustments were found to persist for several trials and across multiple task switches, a finding inconsistent with attentional orienting accounts of post-error slowing. These findings indicate that error-related control adjustments confer a generalized performance benefit and suggest dissociable mechanisms of post-conflict and post-error control. PMID:24603900
Effects of Age-Related Macular Degeneration on Driving Performance
Wood, Joanne M.; Black, Alex A.; Mallon, Kerry; Kwan, Anthony S.; Owsley, Cynthia
2018-01-01
Purpose To explore differences in driving performance of older adults with age-related macular degeneration (AMD) and age-matched controls, and to identify the visual determinants of driving performance in this population. Methods Participants included 33 older drivers with AMD (mean age [M] = 76.6 ± 6.1 years; better eye Age-Related Eye Disease Study grades: early [61%] and intermediate [39%]) and 50 age-matched controls (M = 74.6 ± 5.0 years). Visual tests included visual acuity, contrast sensitivity, visual fields, and motion sensitivity. On-road driving performance was assessed in a dual-brake vehicle by an occupational therapist (masked to drivers' visual status). Outcome measures included driving safety ratings (scale of 1–10, where higher values represented safer driving), types of driving behavior errors, locations at which errors were made, and number of critical errors (CE) requiring an instructor intervention. Results Drivers with AMD were rated as less safe than controls (4.8 vs. 6.2; P = 0.012); safety ratings were associated with AMD severity (early: 5.5 versus intermediate: 3.7), even after adjusting for age. Drivers with AMD had higher CE rates than controls (1.42 vs. 0.36, respectively; rate ratio 3.05, 95% confidence interval 1.47–6.36, P = 0.003) and exhibited more observation, lane keeping, and gap selection errors and made more errors at traffic light–controlled intersections (P < 0.05). Only motion sensitivity was significantly associated with driving safety in the AMD drivers (P = 0.005). Conclusions Drivers with early and intermediate AMD can exhibit impairments in their driving performance, particularly during complex driving situations; motion sensitivity was most strongly associated with driving performance. These findings have important implications for assessing the driving ability of older drivers with visual impairment. PMID:29340641
Chow, Gary C.C.; Yam, Timothy T.T.; Chung, Joanne W.Y.; Fong, Shirley S.M.
2017-01-01
Abstract Background: This single-blinded, three-armed randomized controlled trial aimed to compare the effects of postexercise ice-water immersion (IWI), room-temperature water immersion (RWI), and no water immersion on the balance performance and knee joint proprioception of amateur rugby players. Methods: Fifty-three eligible amateur rugby players (mean age ± standard deviation: 21.6 ± 2.9 years) were randomly assigned to the IWI group (5.3 °C), the RWI group (25.0 °C), or the no-immersion control group. The participants in each group underwent the same fatigue protocol followed by their allocated recovery intervention, which lasted for 1 minute. Measurements were taken before and after the fatigue-recovery intervention. The primary outcomes were the sensory organization test (SOT) composite equilibrium score (ES) and the condition-specific ES, which were measured using a computerized dynamic posturography machine. The secondary outcome was the knee joint repositioning error. Two-way repeated measures analysis of variance was used to test the effect of water immersion on each outcome variable. Results: There were no significant within- and between-group differences in the SOT composite ESs or the condition-specific ESs. However, there was a group-by-time interaction effect on the knee joint repositioning error: participants in the RWI group had lower errors over time, whereas those in the IWI and control groups had increased errors over time. The RWI group had a significantly lower error score than the IWI group at postintervention. Conclusion: One minute of postexercise IWI or RWI did not impair rugby players' sensory organization of balance control. RWI had a less detrimental effect on knee joint proprioception than IWI at postintervention. PMID:28207546
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inaba, Kensuke; Tamaki, Kiyoshi; Igeta, Kazuhiro
2014-12-04
In this study, we propose a method for generating cluster states of atoms in an optical lattice. By utilizing the quantum properties of Wannier orbitals, we create a tunable Ising interaction between atoms without inducing spin-exchange interactions. We investigate the causes of errors that occur during entanglement generation, and we propose an error-management scheme that allows us to create high-fidelity cluster states in a short time.
Chen, Gang; Song, Yongduan; Guan, Yanfeng
2018-03-01
This brief investigates the finite-time consensus tracking control problem for networked uncertain mechanical systems on digraphs. A new terminal sliding-mode-based cooperative control scheme is developed to guarantee that the tracking errors converge to an arbitrarily small bound around zero in finite time. All the networked systems can have different dynamics and all the dynamics are unknown. A neural network is used at each node to approximate the local unknown dynamics. The control schemes are implemented in a fully distributed manner. The proposed control method eliminates some limitations in the existing terminal sliding-mode-based consensus control methods and extends the existing analysis methods to the case of directed graphs. Simulation results on networked robot manipulators are provided to show the effectiveness of the proposed control algorithms.
On the application of photogrammetry to the fitting of jawbone-anchored bridges.
Strid, K G
1985-01-01
Misfit between a jawbone-anchored bridge and the abutments in the patient's jaw may result in, for example, fixture fracture. To achieve improved alignment, the bridge base could be prepared in a numerically-controlled tooling machine using measured abutment coordinates as primary data. For each abutment, the measured values must comprise the coordinates of a reference surface as well as the spatial orientation of the fixture/abutment longitudinal axis. Stereophotogrammetry was assumed to be the measuring method of choice. To assess its potential, a lower-jaw model with accurately positioned signals was stereophotographed and the films were measured in a stereocomparator. Model-space coordinates, computed from the image coordinates, were compared to the known signal coordinates. The root-mean-square error in position was determined to be 0.03-0.08 mm, with the maximum individual error amounting to 0.12 mm, whereas the r.m.s. error in axis direction was found to be 0.5-1.5 degrees with a maximum individual error of 1.8 degrees. These errors are of the same order as can be achieved by careful impression techniques. The method could be useful, but because of its complexity, stereophotogrammetry is not recommended as a standard procedure.
A Simple Method to Improve Autonomous GPS Positioning for Tractors
Gomez-Gil, Jaime; Alonso-Garcia, Sergio; Gómez-Gil, Francisco Javier; Stombaugh, Tim
2011-01-01
Error is always present in the GPS guidance of a tractor along a desired trajectory. One way to reduce GPS guidance error is by improving the tractor positioning. The most commonly used ways to do this are either by employing more precise GPS receivers and differential corrections or by employing GPS together with some other local positioning systems such as electronic compasses or Inertial Navigation Systems (INS). However, both are complex and expensive solutions. In contrast, this article presents a simple and low cost method to improve tractor positioning when only a GPS receiver is used as the positioning sensor. The method is based on placing the GPS receiver ahead of the tractor, and on applying kinematic laws of tractor movement, or a geometric approximation, to obtain the midpoint position and orientation of the tractor rear axle more precisely. This precision improvement is produced by the fusion of the GPS data with tractor kinematic control laws. Our results reveal that the proposed method effectively reduces the GPS guidance error along a straight trajectory. PMID:22163917
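A minimal sketch of the geometric-approximation variant, under stated assumptions: heading is estimated from successive antenna fixes, and the rear-axle midpoint is obtained by shifting the antenna position back along the heading by the mounting offset. The offset, noise levels, and `rear_axle_from_gps` helper are illustrative, not the article's exact formulation.

```python
import numpy as np

def rear_axle_from_gps(fixes, d_offset):
    """Geometric approximation: heading from successive antenna fixes,
    rear-axle midpoint = antenna position shifted back by d_offset.

    fixes    : (N, 2) array of antenna positions (e.g., local ENU metres)
    d_offset : distance the antenna is mounted ahead of the rear axle
    """
    fixes = np.asarray(fixes, dtype=float)
    step = np.diff(fixes, axis=0)
    heading = np.arctan2(step[:, 1], step[:, 0])     # heading per segment
    unit = np.column_stack([np.cos(heading), np.sin(heading)])
    return fixes[1:] - d_offset * unit, heading

# Antenna 2 m ahead of the rear axle, noisy fixes along a straight pass
rng = np.random.default_rng(3)
true_path = np.column_stack([np.linspace(0, 50, 101), np.zeros(101)])
antenna = true_path + [2.0, 0.0] + rng.normal(0, 0.05, true_path.shape)
rear, _ = rear_axle_from_gps(antenna, 2.0)
print("mean cross-track error (m):", np.abs(rear[:, 1]).mean())
```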
Position calibration of a 3-DOF hand-controller with hybrid structure
NASA Astrophysics Data System (ADS)
Zhu, Chengcheng; Song, Aiguo
2017-09-01
A hand-controller is a human-robot interactive device, which measures the 3-DOF (Degree of Freedom) position of the human hand and sends it as a command to control robot movement. The device also receives 3-DOF force feedback from the robot and applies it to the human hand. Thus, the precision of 3-DOF position measurements is a key performance factor for hand-controllers. However, when using a hybrid type 3-DOF hand-controller, various errors occur that are considered to originate from machining and assembly variations within the device. This paper presents a calibration method that improves the position tracking accuracy of hybrid type hand-controllers by determining the actual size of the hand-controller parts. By re-measuring and re-calibrating this kind of hand-controller, the actual size of the key parts that cause errors is determined. Modifying the formula parameters with the actual sizes obtained in the calibration process improves the end position tracking accuracy of the device.
Multi-Objective Control Optimization for Greenhouse Environment Using Evolutionary Algorithms
Hu, Haigen; Xu, Lihong; Wei, Ruihua; Zhu, Bingkun
2011-01-01
This paper investigates the issue of tuning the Proportional Integral and Derivative (PID) controller parameters for a greenhouse climate control system using an Evolutionary Algorithm (EA) based on multiple performance measures such as good static-dynamic performance specifications and the smoothness of the control process. A model of nonlinear thermodynamic laws between numerous system variables affecting the greenhouse climate is formulated. The proposed tuning scheme is tested for greenhouse climate control by minimizing the integrated time square error (ITSE) and the control increment or rate in a simulation experiment. The results show that by tuning the gain parameters the controllers can achieve good control performance through step responses such as small overshoot, fast settling time, short rise time, and small steady-state error. Moreover, the scheme can be applied to tuning systems with different properties, such as strong interactions among variables, nonlinearities, and conflicting performance criteria. The results indicate that multi-objective optimization algorithms are an effective and promising tuning approach for complex greenhouse production. PMID:22163927
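The heart of such a tuning loop is the cost evaluation for each candidate gain set. The sketch below computes an ITSE cost plus a control-increment penalty for a PID loop around a first-order plant; the plant, weights, and `itse_cost` helper are illustrative stand-ins for the paper's greenhouse model and EA.

```python
import numpy as np

def itse_cost(kp, ki, kd, dt=0.1, T=60.0, tau=8.0, gain=1.5, w_du=0.05):
    """Integrated time-squared error plus a control-increment penalty for a
    PID loop around the first-order plant y' = (gain*u - y)/tau (illustrative)."""
    y = 0.0; integ = 0.0; e_prev = 1.0; u_prev = 0.0; cost = 0.0
    for k in range(int(T / dt)):
        t = k * dt
        e = 1.0 - y                          # unit step set point
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        cost += t * e**2 * dt + w_du * (u - u_prev)**2 * dt
        y += dt * (gain * u - y) / tau       # plant update (Euler step)
        e_prev, u_prev = e, u
    return cost

# An EA would minimize this cost over (kp, ki, kd); compare two candidates:
print(itse_cost(2.0, 0.2, 1.0), itse_cost(6.0, 0.8, 2.0))
```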
New nonlinear control algorithms for multiple robot arms
NASA Technical Reports Server (NTRS)
Tarn, T. J.; Bejczy, A. K.; Yun, X.
1988-01-01
Multiple coordinated robot arms are modeled by considering the arms as closed kinematic chains and as a force-constrained mechanical system working on the same object simultaneously. In both formulations, a novel dynamic control method is discussed. It is based on feedback linearization and a simultaneous output decoupling technique. By applying a nonlinear feedback and a nonlinear coordinate transformation, the complicated model of the multiple robot arms in either formulation is converted into a linear and output decoupled system. Linear system control theory and optimal control theory are used to design robust controllers in the task space. The first formulation has the advantage of automatically handling the coordination and load distribution among the robot arms. In the second formulation, it was found that choosing a general output equation makes it possible to superimpose the position and velocity error feedback simultaneously with the force-torque error feedback in the task space.
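The essence of the feedback-linearization step can be shown on a toy system. The sketch below applies a computed-torque law to a single-link arm so that the closed loop obeys linear, decoupled error dynamics; the single-link model and gains are illustrative assumptions, far simpler than the paper's multi-arm formulations.

```python
import numpy as np

# Single-link arm: I*qdd + b*qd + m*g*l*sin(q) = tau (illustrative parameters)
I, b, mgl = 1.2, 0.4, 3.0

def control(q, qd, q_des, qd_des, qdd_des, kp=25.0, kd=10.0):
    """Computed-torque law: cancel the nonlinear dynamics, then impose the
    linear error dynamics e'' + kd*e' + kp*e = 0."""
    v = qdd_des + kd * (qd_des - qd) + kp * (q_des - q)
    return I * v + b * qd + mgl * np.sin(q)

dt, q, qd = 1e-3, 0.0, 0.0
for k in range(4000):
    t = k * dt
    q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)
    tau = control(q, qd, q_des, qd_des, qdd_des)
    qdd = (tau - b * qd - mgl * np.sin(q)) / I   # plant dynamics
    qd += qdd * dt; q += qd * dt
print("final tracking error:", q_des - q)
```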
NASA Astrophysics Data System (ADS)
Castillo, Carlos; Pérez, Rafael
2017-04-01
The assessment of gully erosion volumes is essential for the quantification of soil losses derived from this relevant degradation process. Traditionally, 2D and 3D approaches have been applied for this purpose (Casalí et al., 2006). Although innovative 3D approaches have recently been proposed for gully volume quantification, a renewed interest can be found in the literature regarding the useful information that cross-section analysis still provides in gully erosion research. Moreover, methods based on 2D approaches can be the most cost-effective option in many situations, such as preliminary studies with low accuracy requirements or surveys under time or budget constraints. The main aim of this work is to examine the key factors controlling volume error variability in 2D gully assessment by means of a stochastic experiment involving a Monte Carlo analysis over synthetic gully profiles, in order to 1) contribute to a better understanding of the drivers and magnitude of the uncertainty of 2D gully erosion surveys and 2) provide guidelines for optimal survey designs. Owing to the stochastic properties of error generation in 2D volume assessment, a statistical approach was followed to generate a large and significant set of gully reach configurations and to evaluate quantitatively the influence of the main factors controlling the uncertainty of the volume assessment. For this purpose, a simulation algorithm in Matlab® code was written, involving the following stages: generation of synthetic gully area profiles with different degrees of complexity (characterized by the cross-section variability); simulation of field measurements characterized by a survey intensity and the precision of the measurement method; and quantification of the volume error uncertainty as a function of the key factors. In this communication we present the relationships between volume error and the studied factors and propose guidelines for 2D field surveys based on the minimal survey densities required to achieve a given accuracy, given the cross-sectional variability of a gully and the measurement method applied. References: Casalí, J., Loizu, J., Campo, M.A., De Santisteban, L.M., Alvarez-Mozos, J., 2006. Accuracy of methods for field assessment of rill and ephemeral gully erosion. Catena 67, 128-138. doi:10.1016/j.catena.2006.03.005
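The structure of such a stochastic experiment can be sketched in a few lines. Below, synthetic cross-sectional area profiles are "surveyed" at a given station spacing with a given measurement precision, and the spread of the relative volume error is reported; the profile generator, parameter values, and `volume_error_sd` helper are illustrative assumptions (the original used Matlab).

```python
import numpy as np
rng = np.random.default_rng(4)

def volume_error_sd(cv=0.4, spacing=10.0, meas_cv=0.05, length=200.0, n_rep=2000):
    """Std. dev. of the relative volume error when a gully reach is surveyed
    every `spacing` metres with a method of relative precision `meas_cv`;
    `cv` sets the cross-sectional variability of the synthetic area profile."""
    x = np.arange(0.0, length, 1.0)
    errs = np.empty(n_rep)
    for i in range(n_rep):
        # smooth random cross-sectional area profile (moving-average noise)
        a = np.convolve(rng.lognormal(0.0, cv, x.size), np.ones(15) / 15, "same")
        v_true = a.sum()                                  # 1 m reference spacing
        xs = np.arange(0.0, length, spacing)              # survey stations
        a_meas = np.interp(xs, x, a) * (1 + rng.normal(0, meas_cv, xs.size))
        v_est = np.sum((a_meas[:-1] + a_meas[1:]) / 2) * spacing  # trapezoid rule
        errs[i] = (v_est - v_true) / v_true
    return errs.std()

for s in (5.0, 10.0, 20.0, 40.0):
    print(f"station spacing {s:5.1f} m -> relative volume error sd "
          f"{volume_error_sd(spacing=s):.3f}")
```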
Yong-Feng Gao; Xi-Ming Sun; Changyun Wen; Wei Wang
2017-07-01
This paper is concerned with the problem of adaptive tracking control for a class of uncertain nonlinear systems with nonsymmetric input saturation and immeasurable states. A radial basis function neural network (NN) is employed to approximate the unknown functions, and an NN state observer is designed to estimate the immeasurable states. To analyze the effect of input saturation, an auxiliary system is employed. With the aid of the adaptive backstepping technique, an adaptive tracking control approach is developed. Under the proposed adaptive tracking controller, the boundedness of all the signals in the closed-loop system is achieved. Moreover, distinct from most of the existing references, the tracking error can be bounded by an explicit function of the design parameters and the saturation input error. Finally, an example is given to show the effectiveness of the proposed method.
Neighboring extremal optimal control design including model mismatch errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, T.J.; Hull, D.G.
1994-11-01
The mismatch control technique that is used to simplify model equations of motion in order to determine analytic optimal control laws is extended using neighboring extremal theory. The first variation optimal control equations are linearized about the extremal path to account for perturbations in the initial state and the final constraint manifold. A numerical example demonstrates that the tuning procedure inherent in the mismatch control method increases the performance of the controls to the level of a numerically-determined piecewise-linear controller.
Markov Jump-Linear Performance Models for Recoverable Flight Control Computers
NASA Technical Reports Server (NTRS)
Zhang, Hong; Gray, W. Steven; Gonzalez, Oscar R.
2004-01-01
Single event upsets in digital flight control hardware induced by atmospheric neutrons can reduce system performance and possibly introduce a safety hazard. One method currently under investigation to help mitigate the effects of these upsets is NASA Langley's Recoverable Computer System. In this paper, a Markov jump-linear model is developed for a recoverable flight control system, which will be validated using data from future experiments with simulated and real neutron environments. The method of tracking error analysis and the plan for the experiments are also described.
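To illustrate the modeling idea, the sketch below simulates a Markov jump-linear system in which the closed-loop dynamics switch between a nominal mode and a degraded/recovering mode according to a Markov chain, and estimates the mean-square tracking error; the mode matrices and transition probabilities are illustrative assumptions, not the paper's identified model.

```python
import numpy as np
rng = np.random.default_rng(5)

# Two modes: 0 = nominal closed loop, 1 = degraded/recovering (illustrative)
A = [np.array([[0.95, 0.10], [0.00, 0.90]]),
     np.array([[1.00, 0.10], [0.00, 0.99]])]
P = np.array([[0.995, 0.005],      # P[i, j] = Pr(next mode = j | mode = i)
              [0.200, 0.800]])     # upsets are rare, recovery is fast

def simulate(n=5000):
    x = np.array([1.0, 0.0]); mode = 0
    cost = 0.0
    for _ in range(n):
        cost += x @ x                        # tracking-error energy
        x = A[mode] @ x + rng.normal(0, 0.01, 2)
        mode = rng.choice(2, p=P[mode])      # Markov mode transition
    return cost / n

print("mean-square tracking error:", np.mean([simulate() for _ in range(10)]))
```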
NASA Astrophysics Data System (ADS)
Karpenko, S. S.; Zybin, E. Yu; Kosyanchuk, V. V.
2018-02-01
In this paper we design a nonparametric method for failure detection and localization in the aircraft control system that uses only the measurements of the control signals and the aircraft states. It requires no a priori information about the aircraft model parameters, training, or statistical calculations, and is based on algebraic solvability conditions for the aircraft model identification problem. This makes it possible to significantly increase the efficiency of the detection and localization problem solution by completely eliminating errors associated with aircraft model uncertainties.
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Bakri, F. A.; Mashor, M. Y.; Sharun, S. M.; Bibi Sarpinah, S. N.; Abu Bakar, Z.
2016-10-01
This study proposes an adaptive fuzzy controller for the attitude control system (ACS) of the Innovative Satellite (InnoSAT) based on a direct action type structure. In order to study new methods for satellite attitude control, this paper presents three controller structures: Fuzzy PI, Fuzzy PD and conventional Fuzzy PID. The objective of this work is to compare the time response and tracking performance among the three different controller structures. The controller parameters were tuned on-line by an adjustment mechanism, similar in form to a PID error term, that minimizes the error between the actual output and the model-reference output. This paper also presents Model Reference Adaptive Control (MRAC) as a control scheme for time-varying systems where the performance specifications are given in terms of the reference model. All the controllers were tested using the InnoSAT system under operating conditions such as disturbance, varying gain, measurement noise and time delay. In conclusion, among all the considered DA-type structures, the AFPID controller was observed to be the best, since it outperformed the other controllers under most conditions.
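A minimal sketch of such an on-line adjustment mechanism, assuming the classic MIT-rule setting: a first-order plant with an unknown input gain, a reference model with the same dynamics, and an adjustable feedforward gain driven by the error between the actual and model outputs. The plant constants and adaptation gain are illustrative, not InnoSAT values.

```python
import numpy as np

# Plant y' = -a*y + b*u with unknown input gain b; reference model
# ym' = -a*ym + a*r. The ideal feedforward gain is theta* = a/b.
a, b, gamma, dt = 2.0, 0.5, 2.0, 1e-3
y = ym = theta = 0.0
for k in range(60000):
    t = k * dt
    r = np.sign(np.sin(0.5 * t))        # square-wave reference command
    u = theta * r                        # adjustable feedforward control
    e = y - ym                           # error between actual and model output
    theta += dt * (-gamma * e * ym)      # MIT-rule adjustment mechanism
    y += dt * (-a * y + b * u)           # plant (Euler step)
    ym += dt * (-a * ym + a * r)         # reference model
print("adapted gain:", theta, "ideal:", a / b)
```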
Linearizing feedforward/feedback attitude control
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Bach, Ralph E.
1991-01-01
An approach to attitude control theory is introduced in which a linear form is postulated for the closed-loop rotation error dynamics, and then the exact control law required to realize it is derived. The nonminimal (four-component) quaternion form is used to represent attitude because it is globally nonsingular, but the minimal (three-component) quaternion form is used for the attitude error because it has no nonlinear constraints to prevent the rotational error dynamics from being linearized, and the definition of the attitude error is based on quaternion algebra. This approach produces an attitude control law that linearizes the closed-loop rotational error dynamics exactly, without any attitude singularities, even if the control errors become large.
Method to improve the blade tip-timing accuracy of fiber bundle sensor under varying tip clearance
NASA Astrophysics Data System (ADS)
Duan, Fajie; Zhang, Jilong; Jiang, Jiajia; Guo, Haotian; Ye, Dechao
2016-01-01
Blade vibration measurement based on the blade tip-timing method has become an industry-standard procedure. Fiber bundle sensors are widely used for tip-timing measurement. However, variation of the clearance between the sensor and the blade introduces a tip-timing error in fiber bundle sensors due to the change in signal amplitude. This article presents methods, based on software and on hardware, to reduce the error caused by tip clearance change. The software method utilizes both the rising and falling edges of the tip-timing signal to determine the blade arrival time, and a calibration process suitable for asymmetric tip-timing signals is presented. The hardware method uses an automatic gain control circuit to stabilize the signal amplitude. Experiments are conducted and the results prove that both methods can effectively reduce the impact of tip clearance variation on the blade tip-timing and improve the accuracy of measurements.
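The software method's core idea can be demonstrated on a synthetic pulse: a single rising-edge threshold crossing shifts as the pulse amplitude changes with tip clearance, while the midpoint of the rising- and falling-edge crossings is far less sensitive. The Gaussian pulse, threshold, and `arrival_time` helper are illustrative assumptions (the paper's calibration for asymmetric signals is not reproduced).

```python
import numpy as np

def arrival_time(t, v, thresh):
    """Blade arrival estimate: midpoint of the rising- and falling-edge
    threshold crossings (linear interpolation), cancelling first-order
    amplitude effects."""
    above = v >= thresh
    i_r = np.argmax(above)                       # first sample above threshold
    i_f = len(v) - 1 - np.argmax(above[::-1])    # last sample above threshold
    t_r = np.interp(thresh, [v[i_r - 1], v[i_r]], [t[i_r - 1], t[i_r]])
    t_f = np.interp(thresh, [v[i_f + 1], v[i_f]], [t[i_f + 1], t[i_f]])
    return 0.5 * (t_r + t_f), t_r

t = np.linspace(0, 1, 2001)
for amp in (1.0, 0.6):               # tip-clearance change scales the amplitude
    pulse = amp * np.exp(-((t - 0.5) ** 2) / (2 * 0.02 ** 2))
    mid, rising = arrival_time(t, pulse, thresh=0.3)
    print(f"amp {amp}: rising-edge time {rising:.4f}, midpoint {mid:.4f}")
```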
Huang, Yunda; Huang, Ying; Moodie, Zoe; Li, Sue; Self, Steve
2014-01-01
Summary In biomedical research such as the development of vaccines for infectious diseases or cancer, measures from the same assay are often collected from multiple sources or laboratories. Measurement error that may vary between laboratories needs to be adjusted for when combining samples across laboratories. We incorporate such adjustment in comparing and combining independent samples from different labs via integration of external data, collected on paired samples from the same two laboratories. We propose: 1) normalization of individual level data from two laboratories to the same scale via the expectation of true measurements conditioning on the observed; 2) comparison of mean assay values between two independent samples in the Main study accounting for inter-source measurement error; and 3) sample size calculations of the paired-sample study so that hypothesis testing error rates are appropriately controlled in the Main study comparison. Because the goal is not to estimate the true underlying measurements but to combine data on the same scale, our proposed methods do not require that the true values for the error-prone measurements are known in the external data. Simulation results under a variety of scenarios demonstrate satisfactory finite sample performance of our proposed methods when measurement errors vary. We illustrate our methods using real ELISpot assay data generated by two HIV vaccine laboratories. PMID:22764070
Method and Apparatus for Evaluating the Visual Quality of Processed Digital Video Sequences
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
2002-01-01
A Digital Video Quality (DVQ) apparatus and method that incorporate a model of human visual sensitivity to predict the visibility of artifacts. The DVQ method and apparatus are used for the evaluation of the visual quality of processed digital video sequences and for adaptively controlling the bit rate of the processed digital video sequences without compromising the visual quality. The DVQ apparatus minimizes the required amount of memory and computation. The input to the DVQ apparatus is a pair of color image sequences: an original (R) non-compressed sequence, and a processed (T) sequence. Both sequences (R) and (T) are sampled, cropped, and subjected to color transformations. The sequences are then subjected to blocking and discrete cosine transformation, and the results are transformed to local contrast. The next step is a time filtering operation which implements the human sensitivity to different time frequencies. The results are converted to threshold units by dividing each discrete cosine transform coefficient by its respective visual threshold. At the next stage the two sequences are subtracted to produce an error sequence. The error sequence is subjected to a contrast masking operation, which also depends upon the reference sequence (R). The masked errors can be pooled in various ways to illustrate the perceptual error over various dimensions, and the pooled error can be converted to a visual quality measure.
A General Linear Method for Equating with Small Samples
ERIC Educational Resources Information Center
Albano, Anthony D.
2015-01-01
Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…
USDA-ARS?s Scientific Manuscript database
We proposed a method to estimate the error variance among non-replicated genotypes, thus to estimate the genetic parameters by using replicated controls. We derived formulas to estimate sampling variances of the genetic parameters. Computer simulation indicated that the proposed methods of estimatin...
Correction of a Technical Error in the Golf Swing: Error Amplification Versus Direct Instruction.
Milanese, Chiara; Corte, Stefano; Salvetti, Luca; Cavedon, Valentina; Agostini, Tiziano
2016-01-01
Performance errors drive motor learning for many tasks. The authors' aim was to determine which of two strategies, method of amplification of error (MAE) or direct instruction (DI), would be more beneficial for error correction during a full golf swing with a driver. Thirty-four golfers were randomly assigned to one of three training conditions (MAE, DI, and control). Participants were tested in a practice session in which each golfer performed 7 pretraining trials, 6 training-intervention trials, and 7 posttraining trials, and in a retention test after 1 week. An optoelectronic motion capture system was used to measure the kinematic parameters of each golfer's performance. Results showed that MAE is an effective strategy for correcting technical errors, leading to a rapid improvement in performance. These findings could have practical implications for sport psychology and physical education because, while practice is obviously necessary for improving learning, the efficacy of the learning process is essential in enhancing learners' motivation and sport enjoyment.
Optimal analytic method for the nonlinear Hasegawa-Mima equation
NASA Astrophysics Data System (ADS)
Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle
2014-05-01
The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10^(-15) are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.
A method of treating the non-grey error in total emittance measurements
NASA Technical Reports Server (NTRS)
Heaney, J. B.; Henninger, J. H.
1971-01-01
In techniques for the rapid determination of total emittance, the sample is generally exposed to surroundings that are at a different temperature than the sample's surface. When the infrared spectral reflectance of the surface is spectrally selective, these techniques introduce an error into the total emittance values. Surfaces of aluminum overcoated with oxides of various thicknesses fall into this class. Because they are often used as temperature control coatings on satellites, their emittances must be accurately known. The magnitude of the error was calculated for Alzak and silicon oxide-coated aluminum and was shown to be dependent on the thickness of the oxide coating. The results demonstrate that, because the magnitude of the error is thickness-dependent, it is generally impossible or impractical to eliminate it by calibrating the measuring device.
Deep data fusion method for missile-borne inertial/celestial system
NASA Astrophysics Data System (ADS)
Zhang, Chunxi; Chen, Xiaofei; Lu, Jiazhen; Zhang, Hao
2018-05-01
Strap-down inertial-celestial integrated navigation systems have the advantages of autonomy and high precision and are very useful for ballistic missiles. The star sensor installation error and the inertial measurement error have a great influence on system performance. Based on deep data fusion, this paper establishes measurement equations that include the star sensor installation error and proposes a deep fusion filter method. Simulations including misalignment error, star sensor installation error, and IMU error are analyzed. The simulation results indicate that the deep fusion method can estimate the star sensor installation error and the IMU error, while restraining the misalignment errors caused by instrument errors.
Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.
Gupta, Rajarshi
2016-05-01
Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality aware compression method for single lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, one of two independent quality criteria, namely a bit rate control (BRC) or an error control (EC) criterion, was set to select the optimal principal components, eigenvectors and their quantization level to achieve the desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 records of MIT-BIH Arrhythmia data and 60 normal and 30 diagnostic ECG records from the PTB Diagnostic ECG database (ptbdb), all at 1 kHz sampling. For BRC with a CR threshold of 40, an average Compression Ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV, respectively, were obtained. For mitdb record 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality controlled ECG compression.
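A skeleton of PCA-based beat compression with an error-control criterion is sketched below: aligned beats are decomposed with an SVD, and principal components are added until the PRDN falls under a limit. The synthetic beats, the 5% limit, and the `prdn` helper are illustrative; quantization and the delta/Huffman encoding stage are omitted.

```python
import numpy as np
rng = np.random.default_rng(6)

# Synthetic aligned "beats": a fixed template plus small shape variation,
# standing in for pre-processed single-lead ECG beats sampled at 1 kHz.
n_beats, n_samp = 200, 600
t = np.linspace(0, 1, n_samp)
template = np.exp(-((t - 0.3) ** 2) / 0.001) - 0.2 * np.exp(-((t - 0.5) ** 2) / 0.01)
variation = rng.normal(size=(n_beats, n_samp)) * np.exp(-((t - 0.4) ** 2) / 0.05)
beats = template + 0.05 * variation

mean = beats.mean(axis=0)
U, s, Vt = np.linalg.svd(beats - mean, full_matrices=False)   # PCA via SVD

def prdn(orig, rec):
    """Percentage root-mean-squared difference, normalized."""
    return 100 * np.linalg.norm(orig - rec) / np.linalg.norm(orig - orig.mean())

# Error-control criterion: keep the fewest components with PRDN under 5%
for k in range(1, len(s) + 1):
    rec = mean + U[:, :k] * s[:k] @ Vt[:k]
    if prdn(beats, rec) < 5.0:
        break
print(f"{k} of {len(s)} components meet the 5% PRDN error-control limit")
```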
Pérez-Cebrián, M; Font-Noguera, I; Doménech-Moral, L; Bosó-Ribelles, V; Romero-Boyero, P; Poveda-Andrés, J L
2011-01-01
To assess the efficacy of a new quality control strategy based on daily randomised sampling and monitoring of a Sentinel Surveillance System (SSS) medication cart, in order to identify medication errors and their origin at different levels of the process. Prospective quality control study with one year follow-up. A SSS medication cart was randomly selected once a week and double-checked before dispensing medication. Medication errors were recorded before it was taken to the relevant hospital ward. Information concerning complaints after receiving medication and 24-hour monitoring were also noted. Type and origin error data were assessed by a Unit Dose Quality Control Group, which proposed relevant improvement measures. Thirty-four SSS carts were assessed, including 5130 medication lines and 9952 dispensed doses, corresponding to 753 patients. Ninety erroneous lines (1.8%) and 142 mistaken doses (1.4%) were identified at the Pharmacy Department. The most frequent error was dose duplication (38%) and its main cause was inappropriate management and forgetfulness (69%). Fifty medication complaints (6.6% of patients) were mainly due to new treatment at admission (52%), and 41 (0.8% of all medication lines) did not completely match the prescription (0.6% lines) as recorded by the Pharmacy Department. Thirty-seven (4.9% of patients) medication complaints due to changes at admission and 32 matching errors (0.6% of medication lines) were recorded. The main cause was also inappropriate management and forgetfulness (24%). The simultaneous recording of incidences due to complaints and new medication coincided in 33.3% of cases. In addition, 433 (4.3%) of dispensed doses were returned to the Pharmacy Department. After the Unit Dose Quality Control Group conducted their feedback analysis, 64 improvement measures for Pharmacy Department nurses, 37 for pharmacists, and 24 for the hospital ward were introduced. The SSS programme has proven to be useful as a quality control strategy for identifying Unit Dose Distribution System errors at the initial, intermediate and final stages of the process, improving the involvement of the Pharmacy Department and ward nurses. Copyright © 2009 SEFH. Published by Elsevier Espana. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duncan, M. G.
The suitability of several temperature measurement schemes for an irradiation creep experiment is examined. It is found that the specimen resistance can be used to measure and control the sample temperature if compensated for resistance drift due to radiation and annealing effects. A modified Kelvin bridge is presented that allows compensation for resistance drift by periodically checking the sample resistance at a controlled ambient temperature. A new phase-insensitive method for detecting the bridge error signals is presented. The phase-insensitive detector is formed by averaging the magnitude of two bridge voltages. Although this method is substantially less sensitive to stray reactances in the bridge than conventional phase-sensitive detectors, it is sensitive to the gain stability and linearity of the rectifier circuits. Accuracy limitations of rectifier circuits are examined both theoretically and experimentally in great detail. Both hand analyses and computer simulations of rectifier errors are presented. Finally, the design of a temperature control system based on sample resistance measurement is presented. The prototype is shown to control a 316 stainless steel sample to within a 0.15 °C short-term (10 s) and a 0.03 °C long-term (10 min) standard deviation at temperatures between 150 and 700 °C. The phase-insensitive detector typically contributes less than 10 ppm peak resistance measurement error (0.04 °C at 700 °C for 316 stainless steel or 0.005 °C at 150 °C for zirconium).
Automated optimal coordination of multiple-DOF neuromuscular actions in feedforward neuroprostheses.
Lujan, J Luis; Crago, Patrick E
2009-01-01
This paper describes a new method for designing feedforward controllers for multiple-muscle, multiple-DOF, motor system neural prostheses. The design process is based on experimental measurement of the forward input/output properties of the neuromechanical system and numerical optimization of stimulation patterns to meet muscle coactivation criteria, thus resolving the muscle redundancy (i.e., overcontrol) and coupled-DOF problems inherent in neuromechanical systems. We designed feedforward controllers to control the isometric forces at the tip of the thumb in two directions during stimulation of three thumb muscles as a model system. We tested the method experimentally in ten able-bodied individuals and one patient with spinal cord injury. Good control of isometric force in both DOFs was observed, with rms errors less than 10% of the force range in seven experiments and statistically significant correlations between the actual and target forces in all ten experiments. Systematic bias and slope errors were observed in a few experiments, likely due to neuromuscular fatigue. Overall, the tests demonstrated the ability of a general design approach to satisfy both control and coactivation criteria in multiple-muscle, multiple-axis neuromechanical systems, and it is applicable to a wide range of neuromechanical systems and stimulation electrodes.
Virtual Sensor for Kinematic Estimation of Flexible Links in Parallel Robots
Cabanes, Itziar; Mancisidor, Aitziber; Pinto, Charles
2017-01-01
The control of flexible link parallel manipulators is still an open area of research, endpoint trajectory tracking being one of the main challenges in this type of robot. The flexibility and deformations of the limbs make the estimation of the Tool Centre Point (TCP) position a challenging task. Authors have proposed different approaches to estimate this deformation and deduce the location of the TCP. However, most of these approaches require expensive measurement systems or the use of integration methods with high computational cost. This work presents a novel approach based on a virtual sensor which can precisely estimate not only the deformation of the flexible links in control applications (less than 2% error), but also its derivatives (less than 6% error in velocity and 13% error in acceleration), according to simulation results. The validity of the proposed Virtual Sensor is tested in a Delta Robot, where the position of the TCP is estimated based on the Virtual Sensor measurements with less than 0.03% error in comparison with the flexible approach developed in ADAMS Multibody Software. PMID:28832510
Two Cultures in Modern Science and Technology: For Safety and Validity Does Medicine Have to Update?
Becker, Robert E
2016-01-11
Two different scientific cultures go unreconciled in modern medicine. Each culture accepts that scientific knowledge and technologies are vulnerable to and easily invalidated by methods and conditions of acquisition, interpretation, and application. How these vulnerabilities are addressed separates the 2 cultures and potentially explains medicine's difficulties eradicating errors. A traditional culture, dominant in medicine, leaves error control in the hands of individual and group investigators and practitioners. A competing modern scientific culture accepts errors as inevitable, pernicious, and pervasive sources of adverse events throughout medical research and patient care, too malignant for individuals or groups to control. Error risks to the validity of scientific knowledge and safety in patient care require systemwide programming able to support a culture in medicine grounded in tested, continually updated, widely promulgated, and uniformly implemented standards of practice for research and patient care. Experiences from successes in other sciences and industries strongly support the need for leadership from the Institute of Medicine's recommended Center for Patient Safety within the Federal Executive branch of government.
Moreira, Maria E; Hernandez, Caleb; Stevens, Allen D; Jones, Seth; Sande, Margaret; Blumen, Jason R; Hopkins, Emily; Bakes, Katherine; Haukoos, Jason S
2015-08-01
The Institute of Medicine has called on the US health care system to identify and reduce medical errors. Unfortunately, medication dosing errors remain commonplace and may result in potentially life-threatening outcomes, particularly for pediatric patients when dosing requires weight-based calculations. Novel medication delivery systems that may reduce dosing errors resonate with national health care priorities. Our goal was to evaluate novel, prefilled medication syringes labeled with color-coded volumes corresponding to the weight-based dosing of the Broselow Tape, compared with conventional medication administration, in simulated pediatric emergency department (ED) resuscitation scenarios. We performed a prospective, block-randomized, crossover study in which 10 emergency physician and nurse teams managed 2 simulated pediatric arrest scenarios in situ, using either prefilled, color-coded syringes (intervention) or conventional drug administration methods (control). The ED resuscitation room and the intravenous medication port were video recorded during the simulations. Data were extracted from video review by blinded, independent reviewers. Median time to delivery of all doses for the conventional and color-coded delivery groups was 47 seconds (95% confidence interval [CI] 40 to 53 seconds) and 19 seconds (95% CI 18 to 20 seconds), respectively (difference=27 seconds; 95% CI 21 to 33 seconds). With the conventional method, 118 doses were administered, with 20 critical dosing errors (17%); with the color-coded method, 123 doses were administered, with 0 critical dosing errors (difference=17%; 95% CI 4% to 30%). A novel color-coded, prefilled syringe decreased time to medication administration and significantly reduced critical dosing errors by emergency physician and nurse teams during simulated pediatric ED resuscitations. Copyright © 2015 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
Fast spacecraft adaptive attitude tracking control through immersion and invariance design
NASA Astrophysics Data System (ADS)
Wen, Haowei; Yue, Xiaokui; Li, Peng; Yuan, Jianping
2017-10-01
This paper presents a novel non-certainty-equivalence adaptive control method for the attitude tracking control problem of spacecraft with inertia uncertainties. The proposed immersion and invariance (I&I) based adaptation law provides a more direct and flexible approach to circumvent the limitations of the basic I&I method without employing any filter signal. By virtue of the adaptation high-gain equivalence property derived from the proposed adaptive method, the closed-loop adaptive system with a low adaptation gain can recover the high-adaptation-gain performance of the filter-based I&I method, and the resulting control torque demands during the initial transient have been significantly reduced. A special feature of this method is that the convergence of the parameter estimation error is observably improved by utilizing an adaptation gain matrix instead of a single adaptation gain value. Numerical simulations are presented to highlight the various benefits of the proposed method compared with the certainty-equivalence-based control method and filter-based I&I control schemes.
Feng, Jianyuan; Turksoy, Kamuran; Samadi, Sediqeh; Hajizadeh, Iman; Littlejohn, Elizabeth; Cinar, Ali
2017-12-01
Supervision and control systems rely on signals from sensors to receive information to monitor the operation of a system and adjust manipulated variables to achieve the control objective. However, sensor performance is often limited by working conditions, and sensors may also be subjected to interference by other devices. Many different types of sensor errors, such as outliers, missing values, drifts and corruption with noise, may occur during process operation. A hybrid online sensor error detection and functional redundancy system is developed to detect errors in online signals and replace erroneous or missing values with model-based estimates. The proposed hybrid system relies on two techniques, an outlier-robust Kalman filter (ORKF) and a locally-weighted partial least squares (LW-PLS) regression model, which leverage the advantages of automatic measurement error elimination with ORKF and data-driven prediction with LW-PLS. The system includes a nominal angle analysis (NAA) method to distinguish between signal faults and large changes in sensor values caused by real dynamic changes in process operation. The performance of the system is illustrated with clinical data from continuous glucose monitoring (CGM) sensors of people with type 1 diabetes. More than 50,000 CGM sensor errors were added to the original CGM signals from 25 clinical experiments, and the performance of the error detection and functional redundancy algorithms was analyzed. The results indicate that the proposed system can successfully detect most erroneous signals and substitute them with reasonable estimates computed by the functional redundancy system.
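This is not the authors' exact ORKF, but a common robustification conveys the idea: inflate the measurement variance whenever the normalized innovation is implausibly large, so isolated spikes barely move the estimate. The sketch below uses a scalar random-walk model for a CGM-like signal; the gate, noise levels, and `robust_kf` helper are illustrative assumptions.

```python
import numpy as np
rng = np.random.default_rng(7)

def robust_kf(z, q=0.5, r=4.0, gate=3.0):
    """Scalar Kalman filter with innovation gating: measurements whose
    normalized innovation exceeds `gate` get an inflated variance, so
    outliers are discounted (one robustification, not the paper's ORKF)."""
    x, p = z[0], 10.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                               # predict (random-walk state)
        nu = zk - x                             # innovation
        s = p + r
        r_k = r if nu**2 / s <= gate**2 else r * nu**2 / (s * gate**2)
        kgain = p / (p + r_k)
        x = x + kgain * nu                      # update
        p = (1 - kgain) * p
        out[k] = x
    return out

# CGM-like signal with spikes standing in for sensor errors
t = np.arange(300)
truth = 120 + 30 * np.sin(2 * np.pi * t / 150)
z = truth + rng.normal(0, 2.0, t.size)
z[[60, 61, 200]] += [80, -60, 100]              # isolated outliers
est = robust_kf(z)
print("RMSE raw:", np.sqrt(np.mean((z - truth) ** 2)),
      "filtered:", np.sqrt(np.mean((est - truth) ** 2)))
```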
Generalized Fourier analyses of the advection-diffusion equation - Part I: one-dimensional domains
NASA Astrophysics Data System (ADS)
Christon, Mark A.; Martinez, Mario J.; Voth, Thomas E.
2004-07-01
This paper presents a detailed multi-methods comparison of the spatial errors associated with finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. The errors are reported in terms of non-dimensional phase and group speed, discrete diffusivity, artificial diffusivity, and grid-induced anisotropy. It is demonstrated that Fourier analysis provides an automatic process for separating the discrete advective operator into its symmetric and skew-symmetric components and characterizing the spectral behaviour of each operator. For each of the numerical methods considered, asymptotic truncation error and resolution estimates are presented for the limiting cases of pure advection and pure diffusion. It is demonstrated that streamline upwind Petrov-Galerkin and its control-volume finite element analogue, the streamline upwind control-volume method, produce both an artificial diffusivity and a concomitant phase speed adjustment in addition to the usual semi-discrete artifacts observed in the phase speed, group speed and diffusivity. The Galerkin finite element method and its streamline upwind derivatives are shown to exhibit super-convergent behaviour in terms of phase and group speed when a consistent mass matrix is used in the formulation. In contrast, the CVFEM method and its streamline upwind derivatives yield strictly second-order behaviour. In Part II of this paper, we consider two-dimensional semi-discretizations of the advection-diffusion equation and also assess the effects of grid-induced anisotropy observed in the non-dimensional phase speed and the discrete and artificial diffusivities. Although this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common analysis framework. Published in 2004 by John Wiley & Sons, Ltd.
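The type of calculation being automated can be shown in a few lines: substituting a Fourier mode into a semi-discretization yields the symbol of each discrete operator, from which the non-dimensional phase speed and discrete diffusivity follow. The sketch below uses second-order central differences as an illustrative scheme; it is not one of the paper's finite element or control-volume discretizations.

```python
import numpy as np

# Semi-discrete advection-diffusion u_t + c u_x = nu u_xx on a uniform grid,
# second-order central differences. With the Fourier mode u_j = exp(i*k*x_j)
# and h = 1: central d/dx -> i*sin(theta), central d2/dx2 -> -(2 - 2*cos(theta)),
# where theta = k*h is the non-dimensional wavenumber.
theta = np.linspace(1e-6, np.pi, 400)

phase_speed = np.sin(theta) / theta                 # c*/c, ideal value is 1
diffusivity = (2 - 2 * np.cos(theta)) / theta**2    # nu*/nu, ideal value is 1

print("c*/c   at kh=pi/4:", np.interp(np.pi / 4, theta, phase_speed))
print("nu*/nu at kh=pi/4:", np.interp(np.pi / 4, theta, diffusivity))
print("c*/c   at kh=pi  :", phase_speed[-1])   # the odd-even mode does not advect
```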
GOES I/M image navigation and registration
NASA Technical Reports Server (NTRS)
Fiorello, J. L., Jr.; Oh, I. H.; Kelly, K. A.; Ranne, L.
1989-01-01
Image Navigation and Registration (INR) is the system that will be used on future Geostationary Operational Environmental Satellite (GOES) missions to locate and register radiometric imagery data. It consists of a semiclosed loop system with a ground-based segment that generates coefficients to perform image motion compensation (IMC). The IMC coefficients are uplinked to the satellite-based segment, where they are used to adjust for the displacement of the imagery data due to movement of the imaging instrument line-of-sight. The flight dynamics aspects of the INR system are discussed in terms of the attitude and orbit determination, attitude pointing, and attitude and orbit control needed to perform INR. The modeling used in the determination of orbit and attitude is discussed, along with the method of on-orbit control used in the INR system and various factors that affect stability. Also discussed are potential error sources inherent in the INR system and the operational methods of compensating for these errors.
Resource allocation for error resilient video coding over AWGN using optimization approach.
An, Cheolhong; Nguyen, Truong Q
2008-12-01
The number of slices for error-resilient video coding is jointly optimized with 802.11a-like medium access control and physical layers, which employ automatic repeat request and rate-compatible punctured convolutional codes over an additive white Gaussian noise channel, together with the channel time allocation for time-division multiple access. For error-resilient video coding, the relation between the number of slices and coding efficiency is analyzed and formulated as a mathematical model. This model is applied to the joint optimization problem, and the problem is solved by a convex optimization method such as the primal-dual decomposition method. We compare the performance of a video communication system that uses the optimal number of slices with one that codes a picture as a single slice. Numerical examples show that the end-to-end distortion of the utility functions can be significantly reduced with the optimal number of slices per picture, especially at low signal-to-noise ratio.
Prescribers' expectations and barriers to electronic prescribing of controlled substances
Kim, Meelee; McDonald, Ann; Kreiner, Peter; Kelleher, Stephen J; Blackman, Michael B; Kaufman, Peter N; Carrow, Grant M
2011-01-01
Objective To better understand barriers associated with the adoption and use of electronic prescribing of controlled substances (EPCS), a practice recently established by US Drug Enforcement Administration regulation. Materials and methods Prescribers of controlled substances affiliated with a regional health system were surveyed regarding current electronic prescribing (e-prescribing) activities, current prescribing of controlled substances, and expectations and barriers to the adoption of EPCS. Results 246 prescribers (a response rate of 64%) represented a range of medical specialties; 43.1% of these prescribers were current users of e-prescribing for non-controlled substances. Reported issues with controlled substances included errors, pharmacy call-backs, and diversion; most prescribers expected EPCS to address many of these problems, specifically to reduce medical errors, improve work flow and efficiency of practice, help identify prescription diversion or misuse, and improve patient treatment management. Prescribers expected, however, that it would be disruptive to practice, and over one-third of respondents reported that carrying a security authentication token at all times would be so burdensome as to discourage adoption. Discussion Although adoption of e-prescribing has been shown to dramatically reduce medication errors, challenges to efficient processes and errors still persist from the prescriber's perspective, and these may interfere with the adoption of EPCS. Most prescribers regarded EPCS security measures as a small or moderate inconvenience (other than carrying a security token), with advantages outweighing the burden. Conclusion Prescribers are optimistic about the potential for EPCS to improve practice, but view certain security measures as a burden and potential barrier. PMID:21946239
Yang, Huiliao; Jiang, Bin; Yang, Hao; Liu, Hugh H T
2018-04-01
A distributed cooperative control strategy is proposed to make networked nonlinear 3-DOF helicopters achieve attitude synchronization in the presence of actuator faults and saturations. Based on robust adaptive control, the proposed method can both compensate for the uncertain partial loss of control effectiveness and deal with system uncertainties. To address the actuator saturation problem, the control scheme is designed to ensure that the saturation constraint on the actuation is not violated during operation, in spite of the actuator faults. It is shown that, with the proposed control strategy, both the tracking errors of the leading helicopter and the attitude synchronization errors of each following helicopter remain bounded in the presence of faulty actuators and actuator saturations. Moreover, the state responses of the entire group do not exceed the predesigned performance functions, which are totally independent of the underlying interaction topology. Simulation results illustrate the effectiveness of the proposed control scheme. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Comparison of frequency-domain and time-domain rotorcraft vibration control methods
NASA Technical Reports Server (NTRS)
Gupta, N. K.
1984-01-01
Active control of rotor-induced vibration in rotorcraft has received significant attention recently. Two classes of techniques have been proposed. The more developed approach works with harmonic analysis of measured time histories and is called the frequency-domain approach. The more recent approach computes the control input directly using the measured time history data and is called the time-domain approach. The report summarizes the results of a theoretical investigation to compare the two approaches. Five specific areas were addressed: (1) techniques to derive models needed for control design (system identification methods), (2) robustness with respect to errors, (3) transient response, (4) susceptibility to noise, and (5) implementation difficulties. The system identification methods are more difficult for the time-domain models. The time-domain approach is more robust (e.g., has higher gain and phase margins) than the frequency-domain approach. It might thus be possible to avoid doing real-time system identification in the time-domain approach by storing models at a number of flight conditions. The most significant error source is the variation in open-loop vibrations caused by pilot inputs, maneuvers or gusts. The implementation requirements are similar except that the time-domain approach can be much simpler to implement if real-time system identification were not necessary.
Model-free adaptive speed control on travelling wave ultrasonic motor
NASA Astrophysics Data System (ADS)
Di, Sisi; Li, Huafeng
2018-01-01
This paper introduces a new data-driven control (DDC) method for the speed control of an ultrasonic motor (USM). The model-free adaptive control (MFAC) strategy is presented in terms of its principles, algorithms, and parameter selection. To verify the efficiency of the proposed method, a speed-frequency-time model, which contains all the measurable nonlinearities and uncertainties based on experimental data, was established for simulation to mimic the USM operating system. Furthermore, the model was identified using the particle swarm optimization (PSO) method. Then, the control of the simulated system using MFAC was evaluated under different expectations in terms of overshoot, rise time, and steady-state error. Finally, the MFAC results were compared with those of proportional-integral-derivative (PID) control to demonstrate its advantages in controlling a general random system.
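For readers unfamiliar with MFAC, the sketch below shows one step of the standard compact-form dynamic-linearization variant (after Hou and Jin): a pseudo-partial-derivative (PPD) estimate is updated from input/output increments and then used in the control law. The gains and reset rule here are illustrative assumptions, not the tuning used in the paper.

```python
import numpy as np

def mfac_step(u_prev, du_prev, y, dy, y_ref, phi,
              eta=0.5, mu=1.0, rho=0.6, lam=1.0, phi0=0.5):
    """One step of compact-form dynamic-linearization MFAC (sketch).
    du_prev and dy are the previous input and output increments."""
    # Projection-type update of the pseudo-partial-derivative estimate
    phi = phi + eta * du_prev / (mu + du_prev**2) * (dy - phi * du_prev)
    if abs(phi) < 1e-5 or np.sign(phi) != np.sign(phi0):
        phi = phi0                       # reset to keep the estimate usable
    # Control increment driven by the tracking error
    du = rho * phi / (lam + phi**2) * (y_ref - y)
    return u_prev + du, du, phi
```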
An Elimination Method of Temperature-Induced Linear Birefringence in a Stray Current Sensor
Xu, Shaoyi; Li, Wei; Xing, Fangfang; Wang, Yuqiao; Wang, Ruilin; Wang, Xianghui
2017-01-01
In this work, an elimination method for the temperature-induced linear birefringence (TILB) in a stray current sensor is proposed using a cylindrical spiral fiber (CSF), which produces a large amount of circular birefringence to eliminate the TILB based on the geometric rotation effect. First, the differential equations that describe the polarization evolution of the CSF element are derived, and the output error model is built using the Jones matrix calculus. Then, an accurate search method is proposed to obtain the key parameters of the CSF, including the length of the cylindrical silica rod and the number of curve spirals; the optimized results are 302 mm and 11, respectively. Moreover, an effective factor is proposed to analyze the elimination of the TILB; it should be greater than 7.42 to meet the output error requirement of no more than 0.5%. Finally, temperature experiments are conducted to verify the feasibility of the elimination method. The results indicate that the output error caused by the TILB can be controlled to less than 0.43% with this elimination method over the range from −20 °C to 40 °C. PMID:28282953
Error image aware content restoration
NASA Astrophysics Data System (ADS)
Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee
2015-12-01
As TV resolution has increased significantly, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard of quality has posed a new challenge in today's context, where the tape-based process has transitioned to a file-based process: the transition necessitated digitizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), a familiar tool for quality control agents.
Instrumental variables vs. grouping approach for reducing bias due to measurement error.
Batistatou, Evridiki; McNamee, Roseanne
2008-01-01
Attenuation of the exposure-response relationship due to exposure measurement error is often encountered in epidemiology. Given that error cannot be totally eliminated, bias correction methods of analysis are needed. Many methods require more than one exposure measurement per person to be made, but the 'group mean OLS method,' in which subjects are grouped into several a priori defined groups followed by ordinary least squares (OLS) regression on the group means, can be applied with one measurement. An alternative approach is to use an instrumental variable (IV) method in which both the single error-prone measure and an IV are used in IV analysis. In this paper we show that the 'group mean OLS' estimator is equal to an IV estimator with the group mean used as IV, but that the variance estimators for the two methods are different. We derive a simple expression for the bias in the common estimator, which is a simple function of group size, reliability, and contrast of exposure between groups, and show that the bias can be very small when group size is large. We compare this method with a new proposal (the group mean ranking method), also applicable with a single exposure measurement, in which the IV is the rank of the group means. When there are two independent exposure measurements per subject, we propose a new IV method (EVROS IV) and compare it with Carroll and Stefanski's (CS IV) proposal in which the second measure is used as an IV; the new IV estimator combines aspects of the 'group mean' and 'CS' strategies. All methods are evaluated in terms of bias, precision and root mean square error via simulations and a dataset from occupational epidemiology. The 'group mean ranking method' does not offer much improvement over the 'group mean method.' Compared with the 'CS' method, the 'EVROS' method is less affected by low reliability of exposure. We conclude that the group IV methods we propose may provide a useful way to handle mismeasured exposures in epidemiology, with or without replicate measurements. Our findings may also have implications for the use of aggregate variables in epidemiology to control for unmeasured confounding.
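A compact sketch of the paper's central identity, that the group-mean OLS slope equals an IV slope with the group mean of the error-prone exposure as instrument, might look as follows; the function name and the cov-ratio form of the IV estimator are illustrative.

```python
import numpy as np

def group_mean_iv_slope(y, x, groups):
    """IV slope with each subject's group mean of x as instrument w:
    beta_hat = cov(w, y) / cov(w, x). Per the paper, this point
    estimate equals OLS regression on the group means."""
    y, x, groups = map(np.asarray, (y, x, groups))
    w = np.array([x[groups == g].mean() for g in groups])  # instrument
    w_c, x_c, y_c = w - w.mean(), x - x.mean(), y - y.mean()
    return (w_c @ y_c) / (w_c @ x_c)
```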
NASA Astrophysics Data System (ADS)
Sachau, D.; Jukkert, S.; Hövelmann, N.
2016-08-01
This paper presents the development and experimental validation of an ANC (active noise control) system designed for a particular application in the exhaust line of a submarine, in which tonal components of the exhaust noise in the frequency band from 75 Hz to 120 Hz are reduced by more than 30 dB. The ANC system is based on the feedforward leaky FxLMS algorithm. The observability of the sound pressure in the standing-wave field is ensured by using two error microphones. A noninvasive online plant identification method is used to increase the robustness of the controller, and it is extended with a time-varying convergence gain to improve performance in the presence of slight errors in the frequency of the reference signal.
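The core of such a controller is the leaky FxLMS update; a single-channel sketch is shown below with made-up filter length and step size. In the actual system the reference is tonal, there are two error microphones, and the secondary path is identified online rather than assumed known, so this is only the generic building block.

```python
import numpy as np

def leaky_fxlms(x, d, s_hat, L=32, mu=5e-3, leak=1e-4):
    """Single-channel leaky FxLMS sketch. x: reference signal,
    d: disturbance at the error mic, s_hat: secondary-path impulse
    response estimate (assumed equal to the true path here;
    requires len(s_hat) <= L). Returns the residual error history."""
    w = np.zeros(L)                                # control filter
    xf = np.convolve(x, s_hat)[:len(x)]            # filtered reference
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(L, len(x)):
        y[n] = w @ x[n-L+1:n+1][::-1]              # anti-noise output
        y_mic = s_hat @ y[n-len(s_hat)+1:n+1][::-1]  # through sec. path
        e[n] = d[n] - y_mic                        # residual at error mic
        w = (1 - mu*leak) * w + mu * e[n] * xf[n-L+1:n+1][::-1]
    return e
```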
Fast and robust control of two interacting spins
NASA Astrophysics Data System (ADS)
Yu, Xiao-Tong; Zhang, Qi; Ban, Yue; Chen, Xi
2018-06-01
Rapid preparation, manipulation, and correction of spin states with high fidelity are requisite for quantum information processing and quantum computing. In this paper, we propose a fast and robust approach for controlling two spins with Heisenberg and Ising interactions. By using the concept of shortcuts to adiabaticity, we first inverse design the driving magnetic fields for achieving fast spin flip or generating the entangled Bell state, and further optimize them with respect to the error and fluctuation. In particular, the designed shortcut protocols can efficiently suppress the unwanted transition or control error induced by anisotropic antisymmetric Dzyaloshinskii-Moriya exchange. Several examples and comparisons are illustrated, showing the advantages of our methods. Finally, we emphasize that the results can be naturally extended to multiple interacting spins and other quantum systems in an analogous fashion.
NASA Astrophysics Data System (ADS)
Zhang, Fan; Liu, Pinkuan
2018-04-01
In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and error sources are initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most effective factor that affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.
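A minimal sketch of B-spline-based geometric error compensation, of the kind compared in the paper, is given below: fit a spline to errors measured at calibration points and subtract the predicted error from each commanded position. The calibration numbers and function names are invented for illustration.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Hypothetical calibration: positioning error measured at 13 points
cal_pos = np.linspace(0.0, 300.0, 13)                   # mm
cal_err = 2e-3 * np.sin(cal_pos / 40.0) + 1e-3          # mm (made up)
err_spline = make_interp_spline(cal_pos, cal_err, k=3)  # cubic B-spline

def compensated_command(target_mm):
    """Command the stage to the target minus its predicted error."""
    return target_mm - err_spline(target_mm)

print(compensated_command(150.0))
```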
Lobach, Iryna; Fan, Ruzong; Carroll, Raymond J.
2011-01-01
With the advent of dense single nucleotide polymorphism genotyping, population-based association studies have become the major tools for identifying human disease genes and for fine gene mapping of complex traits. We develop a genotype-based approach for association analysis of case-control studies of gene-environment interactions in the case when environmental factors are measured with error and genotype data are available on multiple genetic markers. To directly use the observed genotype data, we propose two genotype-based models: genotype effect and additive effect models. Our approach offers several advantages. First, the proposed risk functions can directly incorporate the observed genotype data while modeling the linkage disequilibrium information in the regression coefficients, thus eliminating the need to infer haplotype phase. Compared with the haplotype-based approach, an estimating procedure based on the proposed methods can be much simpler and significantly faster. In addition, there is no potential risk due to haplotype phase estimation. Further, by fitting the proposed models, it is possible to analyze the risk alleles/variants of complex diseases, including their dominant or additive effects. To model measurement error, we adopt the pseudo-likelihood method of Lobach et al. [2008]. Performance of the proposed method is examined using simulation experiments. An application of our method is illustrated using a population-based case-control study of the association of calcium intake with the risk of colorectal adenoma development. PMID:21031455
Proximity Navigation of Highly Constrained Spacecraft
NASA Technical Reports Server (NTRS)
Scarritt, S.; Swartwout, M.
2007-01-01
Bandit is a 3-kg automated spacecraft in development at Washington University in St. Louis. Bandit's primary mission is to demonstrate proximity navigation, including docking, around a 25-kg student-built host spacecraft. However, because of extreme constraints in mass, power and volume, traditional sensing and actuation methods are not available. In particular, Bandit carries only 8 fixed-magnitude cold-gas thrusters to control its 6 DOF motion. Bandit lacks true inertial sensing, and the ability to sense position relative to the host has error bounds that approach the size of the Bandit itself. Some of the navigation problems are addressed through an extremely robust, error-tolerant soft dock. In addition, we have identified a control methodology that performs well in this constrained environment: behavior-based velocity potential functions, which use a minimum-seeking method similar to Lyapunov functions. We have also adapted the discrete Kalman filter for use on Bandit for position estimation and have developed a similar measurement vs. propagation weighting algorithm for attitude estimation. This paper provides an overview of Bandit and describes the control and estimation approach. Results using our 6DOF flight simulator are provided, demonstrating that these methods show promise for flight use.
NASA Technical Reports Server (NTRS)
Larson, T. J.; Ehernberger, L. J.
1985-01-01
The flight test technique described uses controlled survey runs to determine horizontal atmospheric pressure variations and systematic altitude errors that result from space positioning measurements. The survey data can be used not only for improved air data calibrations, but also for studies of atmospheric structure and space positioning accuracy performance. The examples presented cover a wide range of radar tracking conditions for both subsonic and supersonic flight to an altitude of 42,000 ft.
A cascaded coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Lin, S.
1985-01-01
A cascaded coding scheme for error control was investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are studied which seem to be quite suitable for satellite down-link error control.
Syndromic surveillance for health information system failures: a feasibility study
Ong, Mei-Sing; Magrabi, Farah; Coiera, Enrico
2013-01-01
Objective To explore the applicability of a syndromic surveillance method to the early detection of health information technology (HIT) system failures. Methods A syndromic surveillance system was developed to monitor a laboratory information system at a tertiary hospital. Four indices were monitored: (1) total laboratory records being created; (2) total records with missing results; (3) average serum potassium results; and (4) total duplicated tests on a patient. The goal was to detect HIT system failures causing: data loss at the record level; data loss at the field level; erroneous data; and unintended duplication of data. Time-series models of the indices were constructed, and statistical process control charts were used to detect unexpected behaviors. The ability of the models to detect HIT system failures was evaluated using simulated failures, each lasting for 24 h, with error rates ranging from 1% to 35%. Results In detecting data loss at the record level, the model achieved a sensitivity of 0.26 when the simulated error rate was 1%, while maintaining a specificity of 0.98. Detection performance improved with increasing error rates, achieving a perfect sensitivity when the error rate was 35%. In the detection of missing results, erroneous serum potassium results and unintended repetition of tests, perfect sensitivity was attained when the error rate was as small as 5%. Decreasing the error rate to 1% resulted in a drop in sensitivity to 0.65–0.85. Conclusions Syndromic surveillance methods can potentially be applied to monitor HIT systems, to facilitate the early detection of failures. PMID:23184193
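As a stripped-down stand-in for the monitoring pipeline described (the study first models the time series, then applies statistical process control), the sketch below flags observations on which a monitored index leaves a baseline three-sigma band; the function name and baseline length are illustrative.

```python
import numpy as np

def shewhart_alarms(series, baseline_len=30, k=3.0):
    """Indices of observations outside mean +/- k*sigma of a baseline
    window. Real deployments model trend and seasonality first."""
    s = np.asarray(series, dtype=float)
    mu = s[:baseline_len].mean()
    sigma = s[:baseline_len].std(ddof=1)
    return np.where(np.abs(s - mu) > k * sigma)[0]
```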
Modified fast frequency acquisition via adaptive least squares algorithm
NASA Technical Reports Server (NTRS)
Kumar, Rajendra (Inventor)
1992-01-01
A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating, with a numerically controlled oscillator, a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal by least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator so as to drive the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real-time confidence measure of the accuracy of the estimates: the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.
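The open-loop core of steps (3)-(4), estimating a tone's amplitude and phase from quadrature correlations against the reference, can be sketched as follows; a full tracker would feed the phase estimate back to the NCO (step 5). The closed-form I/Q solution is the least-squares answer for a single tone at the reference frequency; names are illustrative.

```python
import numpy as np

def quadrature_ls_estimate(s, f_ref, fs):
    """Amplitude/phase of a tone near f_ref via quadrature correlation.
    For s[n] = A*cos(2*pi*f_ref*n/fs + phi), least squares gives
    I = A*cos(phi) and Q = A*sin(phi)."""
    n = np.arange(len(s))
    theta = 2 * np.pi * f_ref * n / fs
    I = 2 * np.mean(s * np.cos(theta))      # in-phase correlation
    Q = -2 * np.mean(s * np.sin(theta))     # quadrature correlation
    return np.hypot(I, Q), np.arctan2(Q, I)  # amplitude, phase

# e.g. a 50 Hz tone sampled at 1 kHz:
fs = 1000.0
t = np.arange(2000) / fs
amp, ph = quadrature_ls_estimate(1.5*np.cos(2*np.pi*50*t + 0.3), 50.0, fs)
```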
NASA Astrophysics Data System (ADS)
Wu, Lifu; Qiu, Xiaojun; Burnett, Ian S.; Guo, Yecai
2015-08-01
Hybrid feedforward and feedback structures are useful for active noise control (ANC) applications where the noise can only be partially obtained with reference sensors. The traditional method uses the secondary signals of both the feedforward and feedback structures to synthesize a reference signal for the feedback structure in the hybrid structure. However, this approach introduces coupling between the feedforward and feedback structures, and parameter changes in one structure affect the other during adaptation, so that the feedforward and feedback structures must be optimized simultaneously in practical ANC system design. Two methods are investigated in this paper to remove such coupling effects. The first is a simplified method that uses the error signal directly as the reference signal in the feedback structure; the second generates the reference signal for the feedback structure using only the secondary signal from the feedback structure and uses the generated reference signal as the error signal of the feedforward structure. Because the two decoupling methods can optimize the feedforward and feedback structures separately, they provide more flexibility in the design and optimization of the adaptive filters in practical ANC applications.
Bai, Mingsian R; Wen, Jheng-Ciang; Hsu, Hoshen; Hua, Yi-Hsin; Hsieh, Yu-Hao
2014-10-01
A sound reconstruction system is proposed for audio reproduction with extended sweet spot and reduced reflections. An equivalent source method (ESM)-based sound field synthesis (SFS) approach, with the aid of dark zone minimization is adopted in the study. Conventional SFS that is based on the free-field assumption suffers from synthesis error due to boundary reflections. To tackle the problem, the proposed system utilizes convex optimization in designing array filters with both reproduction performance and acoustic contrast taken into consideration. Control points are deployed in the dark zone to minimize the reflections from the walls. Two approaches are employed to constrain the pressure and velocity in the dark zone. Pressure matching error (PME) and acoustic contrast (AC) are used as performance measures in simulations and experiments for a rectangular loudspeaker array. Perceptual Evaluation of Audio Quality (PEAQ) is also used to assess the audio reproduction quality. The results show that the pressure-constrained (PC) method yields better acoustic contrast, but poorer reproduction performance than the pressure-velocity constrained (PVC) method. A subjective listening test also indicates that the PVC method is the preferred method in a live room.
Adaptive control system for pulsed megawatt klystrons
Bolie, Victor W.
1992-01-01
The invention provides an arrangement for reducing waveform errors, such as errors in phase or amplitude, in output pulses produced by pulsed power output devices such as klystrons. It does so by generating an error voltage representing the extent of error still present in the trailing edge of the previous output pulse, using the error voltage to provide a stored control voltage, and applying the stored control voltage to the pulsed power output device to limit the extent of error in the leading edge of the next output pulse.
NASA Astrophysics Data System (ADS)
Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei
This study analyzes the absolute stability of P and PD type fuzzy logic control systems with both certain and uncertain linear plants. The stability analysis covers the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibria of the error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady-state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability based on Lur'e systems is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. Unlike previous works, our absolute stability analysis of fuzzy control systems is given with respect to a non-zero reference input and an uncertain linear plant via the parametric robust Popov criterion. Moreover, a fuzzy current-controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is examined from the viewpoint of various equilibrium points in the simulation example. Finally, comparisons are given to show the effectiveness of the analysis method.
Error Recovery in the Time-Triggered Paradigm with FTT-CAN.
Marques, Luis; Vasconcelos, Verónica; Pedreiras, Paulo; Almeida, Luís
2018-01-11
Data networks are naturally prone to interference that can corrupt messages, leading to performance degradation or even to critical failure of the corresponding distributed system. To improve the resilience of critical systems, time-triggered networks are frequently used, based on communication schedules defined at design time. These networks offer prompt error detection but slow error recovery that can only be compensated with bandwidth overprovisioning. In contrast, the Flexible Time-Triggered (FTT) paradigm uses online traffic scheduling, which enables a compromise between error detection and recovery that can achieve timely recovery with a fraction of the needed bandwidth. This article presents a new method to recover transmission errors in a time-triggered Controller Area Network (CAN) network based on the Flexible Time-Triggered paradigm, namely FTT-CAN. The method is based on using a server (traffic shaper) to regulate the retransmission of corrupted or omitted messages. We show how to design the server to simultaneously: (1) meet a predefined reliability goal, when considering worst-case error recovery scenarios bounded probabilistically by a Poisson process that models the fault arrival rate; and (2) limit the direct and indirect interference in the message set, preserving overall system schedulability. Extensive simulations with multiple scenarios, based on practical and randomly generated systems, show a reduction of two orders of magnitude in the average bandwidth taken by the proposed error recovery mechanism, when compared with traditional approaches available in the literature based on adding extra pre-defined transmission slots.
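The server-based regulation idea can be pictured with the toy model below: a fixed retransmission budget is replenished every elementary cycle, bounding the interference that error recovery can inject, while queued errors wait at most a few cycles. Class and method names are invented; sizing the budget against the Poisson fault model is the paper's actual design step and is not shown here.

```python
from collections import deque

class RetransmissionServer:
    """Toy traffic-shaper model: at most `budget` retransmissions per
    elementary cycle, replenished at each cycle start (illustrative)."""
    def __init__(self, budget):
        self.budget = budget
        self.credits = budget
        self.pending = deque()

    def new_cycle(self):
        self.credits = self.budget          # replenish the budget

    def report_error(self, msg_id):
        self.pending.append(msg_id)         # corrupted/omitted message

    def retransmissions(self):
        out = []
        while self.pending and self.credits > 0:
            out.append(self.pending.popleft())
            self.credits -= 1
        return out                          # ids to resend this cycle
```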
NASA Astrophysics Data System (ADS)
Jia, Mei-Hui; Wang, Cheng-Lin; Ren, Bin
2017-07-01
The stress, strain, and vibration characteristics of rotor parts change significantly under high acceleration, and manufacturing error is one of the most important reasons; however, little research on this problem has been carried out. Taking a rotor with an acceleration of 150,000 g as the object, the effects of manufacturing errors on rotor mechanical properties and dynamic characteristics are examined through selection of the key affecting factors. By establishing the force balance equation of an infinitesimal rotor element, a theoretical stress-calculation model based on the slice method is proposed, and a formula for the rotor stress at any point is derived. A finite element model (FEM) of a rotor with holes is established with manufacturing errors. The changes in the stresses and strains of the rotor with parallelism and symmetry errors are analyzed, which verifies the validity of the theoretical model. A pre-stressed modal analysis is performed based on the aforementioned static analysis, and the key dynamic characteristics are analyzed. The results demonstrate that, as the parallelism and symmetry errors increase, the equivalent stresses and strains of the rotor increase slowly and linearly; the highest growth rate does not exceed 4%, the maximum change rate of natural frequency is 0.1%, and the rotor vibration mode is not significantly affected. The FEM construction method for a rotor with manufacturing errors can be used for quantitative research on rotor characteristics, which will assist in the active control of rotor component reliability under high acceleration.
Measuring Diagnoses: ICD Code Accuracy
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-01-01
Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999
Reduction in Chemotherapy Mixing Errors Using Six Sigma: Illinois CancerCare Experience.
Heard, Bridgette; Miller, Laura; Kumar, Pankaj
2012-03-01
Chemotherapy mixing errors (CTMRs), although rare, have serious consequences. Illinois CancerCare is a large practice with multiple satellite offices. The goal of this study was to reduce the number of CTMRs using Six Sigma methods. A Six Sigma team consisting of five participants (registered nurses and pharmacy technicians [PTs]) was formed. The team had 10 hours of Six Sigma training in the DMAIC (ie, Define, Measure, Analyze, Improve, Control) process. Measurement of errors started from the time the CT order was verified by the PT to the time of CT administration by the nurse. Data collection included retrospective error tracking software, system audits, and staff surveys. Root causes of CTMRs included inadequate knowledge of CT mixing protocol, inconsistencies in checking methods, and frequent changes in staffing of clinics. Initial CTMRs (n = 33,259) constituted 0.050%, with 77% of these errors affecting patients. The action plan included checklists, education, and competency testing. The postimplementation error rate (n = 33,376, annualized) over a 3-month period was reduced to 0.019%, with only 15% of errors affecting patients. Initial Sigma was calculated at 4.2; this process resulted in the improvement of Sigma to 5.2, representing a 100-fold reduction. Financial analysis demonstrated a reduction in annualized loss of revenue (administration charges and drug wastage) from $11,537.95 (Medicare Average Sales Price) before the start of the project to $1,262.40. The Six Sigma process is a powerful technique in the reduction of CTMRs.
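For reference, the conventional conversion from a long-run defect rate to a "sigma level" adds a 1.5-sigma shift to the normal quantile, as sketched below. Applied to the reported error rates it gives values in the same region as, though not identical to, the paper's 4.2 and 5.2, which presumably reflect the study's own defect-opportunity accounting; the conversion shown is the generic textbook one.

```python
from scipy.stats import norm

def sigma_level(defect_rate, shift=1.5):
    """Short-term sigma level from a long-run defect rate, using the
    conventional 1.5-sigma shift (illustrative convention)."""
    return norm.ppf(1.0 - defect_rate) + shift

print(sigma_level(0.00050))   # pre-intervention rate  -> about 4.8
print(sigma_level(0.00019))   # post-intervention rate -> about 5.1
```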
Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy
Cohen, E. A. K.; Ober, R. J.
2014-01-01
We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise, a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs, this is an errors-in-variables problem and linear least squares is inappropriate; the correct method is generalized least squares. To allow for point-dependent errors, the equivalence of a generalized maximum likelihood and a heteroscedastic generalized least squares model is established, allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise, where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity), we provide closed-form solutions to the estimators and derive their distributions. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE), believed to be useful especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distributions of the TRE and LRE are themselves Gaussian, and the parameterized distributions are derived. The results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show the asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data. PMID:24634573
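For the scalar-multiple covariance model, the GLS estimator reduces to a weighted least squares fit; a sketch for a 2-D affine transform is below, with per-point weights inversely proportional to the covariance scale (e.g. localization variance decreasing with photon count). The function and variable names are illustrative.

```python
import numpy as np

def wls_affine(src, dst, cov_scales):
    """Fit a 2-D affine transform mapping src -> dst control points by
    weighted least squares, weight_i = 1/cov_scales[i] (the case where
    each CP covariance is a scalar multiple of a common matrix)."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])       # design rows [x, y, 1]
    W = np.diag(1.0 / np.asarray(cov_scales, dtype=float))
    # Normal equations: (A^T W A) T = A^T W dst, T is the 3x2 transform
    T = np.linalg.solve(A.T @ W @ A, A.T @ W @ dst)
    return T                                     # apply as [x, y, 1] @ T
```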
DOE Office of Scientific and Technical Information (OSTI.GOV)
Somayaji, Anil B.; Amai, Wendy A.; Walther, Eleanor A.
This report describes the successful extension of artificial immune systems from the domain of computer security to the domain of real-time control systems for robotic vehicles. A biologically-inspired computer immune system was added to the control system of two different mobile robots. As an additional layer in a multi-layered approach, the immune system is complementary to traditional error detection and error handling techniques; this can be thought of as biologically-inspired defense in depth. We demonstrated that an immune system can be added with very little application developer effort, resulting in little to no performance impact. The methods described here are extensible to any system that processes a sequence of data through a software interface.
Verbal suppression and strategy use: a role for the right lateral prefrontal cortex?
Robinson, Gail A; Cipolotti, Lisa; Walker, David G; Biggs, Vivien; Bozzali, Marco; Shallice, Tim
2015-04-01
Verbal initiation, suppression and strategy generation/use are cognitive processes widely held to be supported by the frontal cortex. The Hayling Test was designed to tap these cognitive processes within the same sentence completion task. There are few studies specifically investigating the neural correlates of the Hayling Test but it has been primarily used to detect frontal lobe damage. This study investigates the components of the Hayling Test in a large sample of patients with unselected focal frontal (n = 60) and posterior (n = 30) lesions. Patients and controls (n = 40) matched for education, age and sex were administered the Hayling Test as well as background cognitive tests. The standard Hayling Test clinical measures (initiation response time, suppression response time, suppression errors and overall score), composite errors scores and strategy-based responses were calculated. Lesions were analysed by classical frontal/posterior subdivisions as well as a finer-grained frontal localization method and a specific contrast method that is somewhat analogous to voxel-based lesion mapping methods. Thus, patients with right lateral, left lateral and superior medial lesions were compared to controls and patients with right lateral lesions were compared to all other patients. The results show that all four standard Hayling Test clinical measures are sensitive to frontal lobe damage although only the suppression error and overall scores were specific to the frontal region. Although all frontal patients produced blatant suppression errors, a specific right lateral frontal effect was revealed for producing errors that were subtly wrong. In addition, frontal patients overall produced fewer correct responses indicative of developing an appropriate strategy but only the right lateral group showed a significant deficit. This problem in strategy attainment and implementation could explain, at least in part, the suppression error impairment. Contrary to previous studies there was no specific frontal effect for verbal initiation. Overall, our results support a role for the right lateral frontal region in verbal suppression and, for the first time, in strategy generation/use. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Position control of an industrial robot using fractional order controller
NASA Astrophysics Data System (ADS)
Clitan, Iulia; Muresan, Vlad; Abrudean, Mihail; Clitan, Andrei; Miron, Radu
2017-02-01
This paper presents the design of a control structure that ensures no overshoot in the movement of an industrial robot used for the evacuation of round steel blocks from inside a rotary hearth furnace. First, a mathematical model of the positioning system is derived from a set of experimental data; the paper then focuses on obtaining a PID type controller, using the relay method for tuning, in order to obtain a stable closed-loop system. The controller parameters are further tuned through computer simulation, by trial and error, to achieve the imposed set of performances for the positioning of the industrial robot. Finally, a fractional-order PID controller is obtained to improve the control signal variation so that it fits within the unified current range of 4 to 20 mA.
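The relay tuning step mentioned has a standard closed form (Åström–Hägglund): a relay of amplitude d induces a limit cycle of amplitude a and period Tu, from which the ultimate gain is Ku = 4d/(πa), and rules such as Ziegler–Nichols then give the PID parameters. A sketch, with the ZN rule as an assumed choice and made-up experiment data:

```python
import math

def relay_pid(d, a, Tu):
    """PID gains from a relay experiment: ultimate gain Ku = 4d/(pi*a),
    then classical Ziegler-Nichols PID rules (one common choice)."""
    Ku = 4.0 * d / (math.pi * a)
    return 0.6 * Ku, Tu / 2.0, Tu / 8.0        # Kp, Ti, Td

Kp, Ti, Td = relay_pid(d=1.0, a=0.2, Tu=4.0)   # hypothetical values
```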
NASA Technical Reports Server (NTRS)
Joiner, J.; Dee, D. P.
1998-01-01
One of the outstanding problems in data assimilation has been and continues to be how best to utilize satellite data while balancing the tradeoff between accuracy and computational cost. A number of weather prediction centers have recently achieved remarkable success in improving their forecast skill by changing the method by which satellite data are assimilated into the forecast model from the traditional approach of assimilating retrievals to the direct assimilation of radiances in a variational framework. The operational implementation of such a substantial change in methodology involves a great number of technical details, e.g., pertaining to quality control procedures, systematic error correction techniques, and tuning of the statistical parameters in the analysis algorithm. Although there are clear theoretical advantages to the direct radiance assimilation approach, it is not obvious at all to what extent the improvements that have been obtained so far can be attributed to the change in methodology, or to various technical aspects of the implementation. The issue is of interest because retrieval assimilation retains many practical and logistical advantages which may become even more significant in the near future when increasingly high-volume data sources become available. The central question we address here is: how much improvement can we expect from assimilating radiances rather than retrievals, all other things being equal? We compare the two approaches in a simplified one-dimensional theoretical framework, in which problems related to quality control and systematic error correction are conveniently absent. By assuming a perfect radiative transfer model and perfect knowledge of radiance and background error covariances, we are able to formulate a nonlinear local error analysis for each assimilation method. Direct radiance assimilation is optimal in this idealized context, while the traditional method of assimilating retrievals is suboptimal because it ignores the cross-covariances between background errors and retrieval errors. We show that interactive retrieval assimilation (where the same background used for assimilation is also used in the retrieval step) is equivalent to direct assimilation of radiances with suboptimal analysis weights. We illustrate and extend these theoretical arguments with several one-dimensional assimilation experiments, where we estimate vertical atmospheric profiles using simulated data from both the High-resolution InfraRed Sounder 2 (HIRS2) and the future Atmospheric InfraRed Sounder (AIRS).
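The one-dimensional comparison has a compact linear-Gaussian core: the optimal (direct radiance) analysis is the BLUE update sketched below, while retrieval assimilation corresponds to replacing the gain with a suboptimal one that ignores background-retrieval error cross-covariances. The sketch assumes a linear observation operator, unlike the nonlinear local analysis in the text; the function name is illustrative.

```python
import numpy as np

def radiance_analysis(xb, B, y, H, R):
    """Optimal 1-D analysis: x_a = x_b + K (y - H x_b) with
    K = B H^T (H B H^T + R)^{-1} (linear observation operator H)."""
    S = H @ B @ H.T + R                     # innovation covariance
    return xb + B @ H.T @ np.linalg.solve(S, y - H @ xb)
```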
NASA Astrophysics Data System (ADS)
Tang, T. F.; Chong, S. H.
2017-06-01
This paper presents a practical controller design method for ultra-precision positioning of pneumatic artificial muscle actuator stages. Pneumatic artificial muscle (PAM) actuators are safe to use and have numerous advantages that have brought them into wide application. However, PAM exhibits strong nonlinear characteristics that lead to low controllability and limit its application, and in practice the nonlinear characteristics of a PAM mechanism are difficult and time-consuming to model precisely. The purpose of the present study is to clarify a practical controller design method that emphasizes a simple design procedure, does not require plant parameter modeling, and yet demonstrates ultra-precision positioning performance for a PAM-driven stage. The practical control approach adopts continuous motion nominal characteristic trajectory following (CM NCTF) control as the feedback controller. The constructed PAM-driven stage has a low damping characteristic, which causes severe residual vibration that deteriorates the motion accuracy of the system; therefore, acceleration feedback compensation to the plant is proposed to increase the damping characteristic. The effectiveness of the proposed controller was verified experimentally and compared with a classical PI controller in point-to-point motion. The experimental results prove that the CM NCTF controller demonstrates better positioning performance, with smaller motion error than the PI controller. Overall, the CM NCTF controller successfully reduces the motion error to 3 µm, 88.7% smaller than that of the PI controller.
Image registration assessment in radiotherapy image guidance based on control chart monitoring.
Xia, Wenyao; Breen, Stephen L
2018-04-01
Image guidance with cone beam computed tomography in radiotherapy can ensure the precision and accuracy of patient positioning prior to treatment delivery. During the image guidance process, operators must expend considerable effort evaluating the image guidance quality before correcting a patient's position. This work proposes an image registration assessment method based on control chart monitoring to reduce that effort. Using a control chart plotted from the daily registration scores of each patient, the proposed method can quickly detect both alignment errors and image quality inconsistency, and can therefore provide a clear guideline for operators to identify unacceptable image quality and unacceptable image registration with minimal effort. Experimental results on a clinical database of 10 patients undergoing prostate radiotherapy demonstrate that the proposed method can quickly identify out-of-control signals and find special causes of out-of-control registration events.
Nonlinear Tracking Control of a Conductive Supercoiled Polymer Actuator.
Luong, Tuan Anh; Cho, Kyeong Ho; Song, Min Geun; Koo, Ja Choon; Choi, Hyouk Ryeol; Moon, Hyungpil
2018-04-01
Artificial muscle actuators made from commercial nylon fishing lines have recently been introduced and shown to be a new type of actuator with high performance. However, these actuators also exhibit significant nonlinearities, which make them difficult to control, especially in precise trajectory-tracking applications. In this article, we present a nonlinear mathematical model of a conductive supercoiled polymer (SCP) actuator driven by Joule heating for model-based feedback control. Our efforts include modeling the hysteresis behavior of the actuator. Based on the nonlinear model, we design a sliding mode controller for SCP actuator-driven manipulators. The system with the proposed control law is proven to be asymptotically stable using Lyapunov theory. The control performance of the proposed method is evaluated experimentally and compared with that of a proportional-integral-derivative (PID) controller on one-degree-of-freedom SCP actuator-driven manipulators. Experimental results show that the proposed controller outperforms the PID controller: the tracking errors are nearly 10 times smaller, and the controller is more robust to external disturbances such as sensor noise and actuator modeling error.
JPEG2000 encoding with perceptual distortion control.
Liu, Zhen; Karam, Lina J; Watson, Andrew B
2006-07-01
In this paper, a new encoding approach is proposed to control the JPEG2000 encoding in order to reach a desired perceptual quality. The new method is based on a vision model that incorporates various masking effects of human visual perception and a perceptual distortion metric that takes spatial and spectral summation of individual quantization errors into account. Compared with the conventional rate-based distortion minimization JPEG2000 encoding, the new method provides a way to generate consistent quality images at a lower bit rate.
Second order sliding mode control for a quadrotor UAV.
Zheng, En-Hui; Xiong, Jing-Jing; Luo, Ji-Liang
2014-07-01
A method based on second order sliding mode control (2-SMC) is proposed to design controllers for a small quadrotor UAV. For the switching sliding manifold design, the selection of the coefficients of the switching sliding manifold is in general a sophisticated issue because the coefficients are nonlinear. In this work, in order to perform the position and attitude tracking control of the quadrotor perfectly, the dynamical model of the quadrotor is divided into two subsystems, i.e., a fully actuated subsystem and an underactuated subsystem. For the former, a sliding manifold is defined by combining the position and velocity tracking errors of one state variable, i.e., the sliding manifold has two coefficients. For the latter, a sliding manifold is constructed via a linear combination of position and velocity tracking errors of two state variables, i.e., the sliding manifold has four coefficients. In order to further obtain the nonlinear coefficients of the sliding manifold, Hurwitz stability analysis is used to the solving process. In addition, the flight controllers are derived by using Lyapunov theory, which guarantees that all system state trajectories reach and stay on the sliding surfaces. Extensive simulation results are given to illustrate the effectiveness of the proposed control method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
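As a reference point for the 2-SMC machinery discussed, the snippet below implements the generic super-twisting law, a standard second-order sliding mode algorithm; it is offered as background, since the paper constructs its own sliding manifolds for the fully actuated and underactuated subsystems. Gains and step size are illustrative.

```python
import numpy as np

def super_twisting_step(s, v, k1=1.5, k2=1.1, dt=1e-3):
    """One Euler step of the super-twisting 2-SMC law:
    u = -k1*sqrt(|s|)*sign(s) + v,  v_dot = -k2*sign(s),
    where s is the sliding variable and v the integral state."""
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
    v_next = v - k2 * np.sign(s) * dt
    return u, v_next
```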
ERIC Educational Resources Information Center
Nevitt, Jonathan; Hancock, Gregory R.
2001-01-01
Evaluated the bootstrap method under varying conditions of nonnormality, sample size, model specification, and number of bootstrap samples drawn from the resampling space. Results for the bootstrap suggest the resampling-based method may be conservative in its control over model rejections, thus having an impact on the statistical power associated…
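As context for the resampling machinery evaluated, a naive nonparametric bootstrap of a fit statistic looks like the sketch below; SEM practice typically uses the Bollen–Stine transformation so that resamples reflect the null model, which bears on the conservativeness the study examines. Function and variable names are illustrative.

```python
import numpy as np

def bootstrap_pvalue(stat_fn, data, n_boot=2000, seed=0):
    """Proportion of bootstrap statistics at least as extreme as the
    observed one (naive nonparametric bootstrap; illustrative only).
    data: ndarray whose rows are observations."""
    rng = np.random.default_rng(seed)
    obs = stat_fn(data)
    n = len(data)
    boot = np.array([stat_fn(data[rng.integers(0, n, n)])
                     for _ in range(n_boot)])
    return float(np.mean(boot >= obs))
```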