Analysis technique for controlling system wavefront error with active/adaptive optics
NASA Astrophysics Data System (ADS)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
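The core of such a controller is a linear least-squares solve against the actuator influence matrix. A minimal sketch, with an invented random influence matrix standing in for a model such as one derived in SigFit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear optics model: wavefront error sampled at 200 points
# responds linearly to 12 actuator commands (all numbers illustrative).
n_pts, n_act = 200, 12
influence = rng.normal(size=(n_pts, n_act))      # actuator influence functions
disturbance = influence @ rng.normal(size=n_act) + 0.01 * rng.normal(size=n_pts)

# Least-squares actuator commands that minimize residual RMS WFE.
cmd, *_ = np.linalg.lstsq(influence, -disturbance, rcond=None)
residual = disturbance + influence @ cmd

rms_before = np.sqrt(np.mean(disturbance**2))
rms_after = np.sqrt(np.mean(residual**2))
print(rms_before, rms_after)
```

The residual RMS is the fitting-error estimate the abstract mentions: whatever part of the disturbance lies outside the span of the influence functions cannot be corrected.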
Coding for reliable satellite communications
NASA Technical Reports Server (NTRS)
Gaarder, N. T.; Lin, S.
1986-01-01
This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection; and (5) error control techniques for ring and star networks.
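PLACEHOLDER-SKIPPED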
Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Deng, H.; Lin, S.
1984-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high-speed decoding techniques for extended single- and double-error-correcting RS codes are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to form the error locator polynomial and solve for its roots.
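The direct-from-syndrome idea can be illustrated for a single-symbol-error code over GF(16); the field, check-matrix rows, and injected error below are illustrative choices, not the specific codes of the paper:

```python
# GF(16) arithmetic via log/antilog tables, primitive polynomial x^4 + x + 1.
PRIM = 0b10011
EXP = [0] * 30
LOG = [0] * 16
x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= PRIM
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

n = 15
codeword = [0] * n          # the all-zero word is a codeword of any linear code
received = codeword[:]
received[9] ^= 0b0110       # inject a single symbol error at position 9

# Syndromes from check-matrix rows [1, 1, ..., 1] and [1, a, a^2, ...]:
S0 = 0
S1 = 0
for i, r in enumerate(received):
    S0 ^= r
    S1 ^= gf_mul(r, EXP[i % 15])

# Single error: S0 is the error value and S1/S0 = a^i gives the location,
# read directly from the syndromes with no error-locator polynomial.
err_loc = (LOG[S1] - LOG[S0]) % 15
err_val = S0
received[err_loc] ^= err_val
print(err_loc, err_val, received == codeword)
```

The point of the technique is visible here: location and value come from two table lookups and a subtraction of logarithms, with no iterative polynomial solving.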
NASA Technical Reports Server (NTRS)
Moes, Timothy R.; Smith, Mark S.; Morelli, Eugene A.
2003-01-01
Near real-time stability and control derivative extraction is required to support flight demonstration of Intelligent Flight Control System (IFCS) concepts being developed by NASA, academia, and industry. Traditionally, flight maneuvers would be designed and flown to obtain stability and control derivative estimates using a postflight analysis technique. The goal of the IFCS concept is to be able to modify the control laws in real time for an aircraft that has been damaged in flight. In some IFCS implementations, real-time parameter identification (PID) of the stability and control derivatives of the damaged aircraft is necessary for successfully reconfiguring the control system. This report investigates the usefulness of Prescribed Simultaneous Independent Surface Excitations (PreSISE) to provide data for rapidly obtaining estimates of the stability and control derivatives. Flight test data were analyzed using both equation-error and output-error PID techniques. The equation-error PID technique is known as Fourier Transform Regression (FTR) and is a frequency-domain real-time implementation. Selected results were compared with a time-domain output-error technique. The real-time equation-error technique combined with the PreSISE maneuvers provided excellent derivative estimation in the longitudinal axis. However, the PreSISE maneuvers as presently defined were not adequate for accurate estimation of the lateral-directional derivatives.
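A time-domain equation-error sketch of the same idea (the paper's FTR implementation works in the frequency domain): regress measured state derivatives on states and inputs via least squares. The one-state pitch model and derivative values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01
Mq_true, Md_true = -2.0, -8.0   # assumed pitch-damping and control derivatives

# Simulate pitch-rate response to a square-wave control-surface excitation.
N = 2000
delta = np.sign(np.sin(2 * np.pi * 0.5 * dt * np.arange(N)))
q = np.zeros(N)
for k in range(N - 1):
    qdot = Mq_true * q[k] + Md_true * delta[k]
    q[k + 1] = q[k] + dt * qdot

# Equation-error estimation: regress measured qdot on states and inputs.
qdot_meas = np.diff(q) / dt + 0.05 * rng.normal(size=N - 1)
X = np.column_stack([q[:-1], delta[:-1]])
est, *_ = np.linalg.lstsq(X, qdot_meas, rcond=None)
print(est)   # estimates of [Mq, Md]
```

Because the regression is linear in the unknown derivatives, it can be updated sample-by-sample, which is what makes equation-error methods attractive for near real-time use.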
A neural fuzzy controller learning by fuzzy error propagation
NASA Technical Reports Server (NTRS)
Nauck, Detlef; Kruse, Rudolf
1992-01-01
In this paper, we describe a procedure to integrate techniques for the adaptation of membership functions in a linguistic-variable-based fuzzy control environment by using neural network learning principles. This is an extension of our previous work. We solve this problem by defining a fuzzy error that is propagated back through the architecture of our fuzzy controller. According to this fuzzy error and the strength of its antecedent, each fuzzy rule determines its share of the error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. In this way we obtain an unsupervised learning technique that enables a fuzzy controller to adapt to a control task knowing only the global state and the fuzzy error.
Reed Solomon codes for error control in byte organized computer memory systems
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.
1984-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAM's are organized in 32Kx8 bit-bytes. Byte oriented codes such as Reed Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special decoding techniques for extended single-and-double-error-correcting RS codes which are capable of high speed operation are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
Error control for reliable digital data transmission and storage systems
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Deng, R. H.
1985-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAM's are organized in 32Kx8 bit-bytes. Byte oriented codes such as Reed Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. In this paper we present some special decoding techniques for extended single-and-double-error-correcting RS codes which are capable of high speed operation. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial. Two codes are considered: (1) a d_min = 4 single-byte-error-correcting (SBEC), double-byte-error-detecting (DBED) RS code; and (2) a d_min = 6 double-byte-error-correcting (DBEC), triple-byte-error-detecting (TBED) RS code.
Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting
NASA Technical Reports Server (NTRS)
Deng, Robert H.; Costello, Daniel J., Jr.
1987-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256 K bit DRAM's are organized in 32 K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
Transmission and storage of medical images with patient information.
Acharya U, Rajendra; Subbanna Bhat, P; Kumar, Sathish; Min, Lim Choo
2003-07-01
Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. The text data are encrypted before being interleaved with the images to ensure greater security, and graphical signals are likewise interleaved with the image. Two types of error-control coding techniques are proposed to enhance the reliability of transmission and storage of medical images interleaved with patient information. Transmission and storage scenarios are simulated with and without error control coding, and a qualitative as well as quantitative interpretation of the resulting reliability enhancement is provided for commonly used error control codes such as repetition codes and the (7,4) Hamming code.
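A toy sketch of the interleaving step, using least-significant-bit embedding and a simple XOR in place of the paper's unspecified cipher (both of these are assumptions, not the authors' scheme):

```python
import numpy as np

def interleave(image, text, key=0x5A):
    # XOR "encryption" is a stand-in for the paper's unspecified cipher.
    bits = np.unpackbits(np.frombuffer(text.encode(), np.uint8) ^ key)
    flat = image.flatten()                  # flatten() copies; input untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits   # overwrite LSBs
    return flat.reshape(image.shape)

def extract(image, n_chars, key=0x5A):
    bits = image.flatten()[: n_chars * 8] & 1
    return (np.packbits(bits) ^ key).tobytes().decode()

img = np.full((64, 64), 128, dtype=np.uint8)    # dummy 8-bit "medical image"
stego = interleave(img, "ID:1234;Name:Doe")
recovered = extract(stego, len("ID:1234;Name:Doe"))
print(recovered)
```

Each pixel changes by at most one grey level, which is why LSB interleaving is visually benign; an error-control code would then be applied on top of the embedded bits before transmission.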
Multisensor systems today and tomorrow: Machine control, diagnosis and thermal compensation
NASA Astrophysics Data System (ADS)
Nunzio, D'Addea
2000-05-01
Multisensor techniques that deal with control of tribology test rig and with diagnosis and thermal error compensation of machine tools are the starting point for some consideration about the use of these techniques as in fuzzy and neural net systems. The author comes to conclusion that anticipatory systems and multisensor techniques will have in the next future a great improvement and a great development mainly in the thermal error compensation of machine tools.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a given desired bit error rate. The use of concatenated coding, e.g., an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
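The 16-bit CRC in question uses the CCITT generator polynomial; a minimal bitwise sketch:

```python
def crc16_ccitt(data: bytes, poly=0x1021, init=0xFFFF) -> int:
    # Bitwise CRC-16 with the CCITT generator x^16 + x^12 + x^5 + 1,
    # the 16-bit code recommended by CCSDS for error detection.
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = b"123456789"
print(hex(crc16_ccitt(frame)))              # 0x29b1, the standard check value
print(crc16_ccitt(b"123456788") == crc16_ccitt(frame))   # False: error caught
```

In practice a table-driven byte-at-a-time variant is used for speed, but the bit-serial form above matches the shift-register hardware description.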
Active Control of Inlet Noise on the JT15D Turbofan Engine
NASA Technical Reports Server (NTRS)
Smith, Jerome P.; Hutcheson, Florence V.; Burdisso, Ricardo A.; Fuller, Chris R.
1999-01-01
This report presents the key results obtained by the Vibration and Acoustics Laboratories at Virginia Tech over the year from November 1997 to December 1998 on the Active Noise Control of Turbofan Engines research project funded by NASA Langley Research Center. The concept of implementing active noise control techniques with fuselage-mounted error sensors is investigated both analytically and experimentally. The analytical part of the project involves the continued development of an advanced modeling technique to provide prediction and design guidelines for application of active noise control techniques to large, realistic high bypass engines of the type on which active control methods are expected to be applied. Results from the advanced analytical model are presented that show the effectiveness of the control strategies, and the analytical results presented for fuselage error sensors show good agreement with the experimentally observed results and provide additional insight into the control phenomena. Additional analytical results are presented for active noise control used in conjunction with a wavenumber sensing technique. The experimental work is carried out on a running JT15D turbofan jet engine in a test stand at Virginia Tech. The control strategy used in these tests was the feedforward Filtered-X LMS algorithm. The control inputs were supplied by single and multiple circumferential arrays of acoustic sources equipped with neodymium iron cobalt magnets mounted upstream of the fan. The reference signal was obtained from an inlet mounted eddy current probe. The error signals were obtained from a number of pressure transducers flush-mounted in a simulated fuselage section mounted in the engine test cell. The active control methods are investigated when implemented with the control sources embedded within the acoustically absorptive material on a passively-lined inlet. 
The experimental results show that the combination of active control techniques with fuselage-mounted error sensors and passive control techniques is an effective means of reducing radiated noise from turbofan engines. Strategic selection of the location of the error transducers is shown to be effective for reducing the radiation towards particular directions in the farfield. An analytical model is used to predict the behavior of the control system and to guide the experimental design configurations, and the analytical results presented show good agreement with the experimentally observed results.
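The feedforward Filtered-X LMS update used in these tests can be sketched as follows; the primary- and secondary-path impulse responses are invented stand-ins for the acoustic paths between fan, control sources, and fuselage error sensors:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20000
x = rng.normal(size=N)                    # reference (e.g. inlet probe signal)
primary = np.array([0.8, -0.4, 0.2])      # assumed primary-path response
secondary = np.array([0.5, 0.3])          # assumed secondary-path response

d = np.convolve(x, primary)[:N]           # disturbance at the error sensor
xf = np.convolve(x, secondary)[:N]        # "filtered-x": reference through S(z)
w = np.zeros(8)                           # adaptive control filter weights
xbuf = np.zeros(8)
fxbuf = np.zeros(8)
ybuf = np.zeros(len(secondary))
mu = 0.002
e = np.zeros(N)

for n in range(N):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
    y = w @ xbuf                          # control-source drive signal
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e[n] = d[n] + secondary @ ybuf        # residual at the error microphone
    fxbuf = np.roll(fxbuf, 1); fxbuf[0] = xf[n]
    w -= mu * e[n] * fxbuf                # FxLMS gradient-descent update

print(np.mean(e[:500]**2), np.mean(e[-500:]**2))
```

Filtering the reference through a model of the secondary path is what keeps the gradient estimate aligned when the control signal reaches the error sensor through its own acoustic path.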
Small, J R
1993-01-01
This paper is a study of the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic error are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and that, under all conditions studied, the fitting method outperformed the graph method, even under conditions where the assumptions underlying the fitted function did not hold. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
Linear quadratic stochastic control of atomic hydrogen masers.
Koppang, P; Leland, R
1999-01-01
Data are given showing the results of using the linear quadratic Gaussian (LQG) technique to steer remote hydrogen masers to Coordinated Universal Time (UTC) as given by the United States Naval Observatory (USNO) via two-way satellite time transfer and the Global Positioning System (GPS). Data also are shown from the results of steering a hydrogen maser to the real-time USNO mean. A general overview of the theory behind the LQG technique also is given. LQG control is a technique that uses Kalman filtering to estimate time and frequency errors used as input into a control calculation. A discrete frequency steer is calculated by minimizing a quadratic cost function that is dependent on both the time and frequency errors and the control effort. Different penalties, chosen by the designer, are assessed by the controller as the time and frequency errors and control effort vary from zero. With this feature, controllers can be designed to force the time and frequency differences between two standards to zero, either more or less aggressively depending on the application.
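A minimal sketch of the loop on a two-state clock model (time error and frequency error): an LQ gain from Riccati iteration plus a Kalman filter driving discrete frequency steers. All noise levels and penalty weights below are illustrative, not USNO values:

```python
import numpy as np

rng = np.random.default_rng(3)
tau = 1.0
A = np.array([[1.0, tau], [0.0, 1.0]])   # [time error; frequency error] model
B = np.array([[0.0], [1.0]])             # a steer is a discrete frequency step
Qc = np.diag([1.0, 0.1])                 # penalties on time and frequency error
Rc = np.array([[0.5]])                   # penalty on control effort

# Discrete LQR gain by iterating the Riccati recursion to convergence.
P = Qc.copy()
for _ in range(500):
    K = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)
    P = Qc + A.T @ P @ (A - B @ K)

# Kalman filter + LQ steer on a simulated maser with noisy time comparisons.
Qw = np.diag([1e-4, 1e-6])               # process noise (assumed levels)
Rv = np.array([[1e-2]])                  # time-transfer measurement noise
H = np.array([[1.0, 0.0]])               # only the time error is measured
x = np.array([[5.0], [0.5]])             # initial time/frequency offsets
xhat = np.zeros((2, 1))
Pk = np.eye(2)
for _ in range(200):
    u = -K @ xhat                                    # steer from the estimate
    x = A @ x + B @ u + rng.multivariate_normal([0, 0], Qw).reshape(2, 1)
    z = H @ x + rng.normal(scale=np.sqrt(Rv[0, 0]))
    xhat = A @ xhat + B @ u                          # predict
    Pk = A @ Pk @ A.T + Qw
    G = Pk @ H.T @ np.linalg.inv(H @ Pk @ H.T + Rv)  # Kalman gain
    xhat = xhat + G @ (z - H @ xhat)
    Pk = (np.eye(2) - G @ H) @ Pk

print(x[0, 0], x[1, 0])                  # steered close to zero
```

Raising Rc makes the steers gentler at the cost of slower convergence, which is exactly the designer-chosen aggressiveness trade-off the abstract describes.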
NASA Technical Reports Server (NTRS)
Carreno, Victor A.; Choi, G.; Iyer, R. K.
1990-01-01
A simulation study is described which predicts the susceptibility of an advanced control system to electrical transients resulting in logic errors, latched errors, error propagation, and digital upset. The system is based on a custom-designed microprocessor and it incorporates fault-tolerant techniques. The system under test and the method to perform the transient injection experiment are described. Results for 2100 transient injections are analyzed and classified according to charge level, type of error, and location of injection.
Data mining of air traffic control operational errors
DOT National Transportation Integrated Search
2006-01-01
In this paper we present the results of : applying data mining techniques to identify patterns and : anomalies in air traffic control operational errors (OEs). : Reducing the OE rate is of high importance and remains a : challenge in the aviation saf...
NASA Technical Reports Server (NTRS)
Buechler, W.; Tucker, A. G.
1981-01-01
Several methods were employed to detect both the occurrence and the source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in finding the primary cause of error in 98% of over 500 system dumps.
Cascade control of superheated steam temperature with neuro-PID controller.
Zhang, Jianhua; Zhang, Fenfang; Ren, Mifeng; Hou, Guolian; Fang, Fang
2012-11-01
In this paper, an improved cascade control methodology for superheated processes is developed, in which the primary PID controller is implemented by neural networks trained by minimizing an error entropy criterion. The entropy of the tracking error can be estimated recursively by utilizing a receding horizon window technique. The measurable disturbances in superheated processes are input to the neuro-PID controller in addition to the sequences of tracking error in the outer loop control system; hence, feedback control is combined with feedforward control in the proposed neuro-PID controller. The convergence condition of the neural networks is analyzed, and the implementation procedures of the proposed cascade control approach are summarized. Compared with a neuro-PID controller trained by minimizing a squared error criterion, the proposed controller minimizing the error entropy criterion may decrease fluctuations of the superheated steam temperature. A simulation example shows the advantages of the proposed method.
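The entropy criterion can be sketched with a Parzen-window estimate of the quadratic information potential over a window of tracking errors; since Renyi quadratic entropy is the negative log of this potential, maximizing it minimizes the error entropy. The kernel width and window below are illustrative choices:

```python
import numpy as np

def information_potential(errors, sigma=0.5):
    # Parzen estimate of the quadratic information potential V(e);
    # Renyi quadratic entropy is H2 = -log V, so maximizing V over the
    # receding-horizon window minimizes the entropy of the tracking error.
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]
    return float(np.mean(np.exp(-diff**2 / (4 * sigma**2))))

concentrated = np.full(50, 0.1)        # tightly clustered tracking errors
dispersed = np.linspace(-3, 3, 50)     # widely spread tracking errors
print(information_potential(concentrated), information_potential(dispersed))
```

A concentrated error window yields a higher potential (lower entropy) than a dispersed one, which is the property the training criterion exploits to suppress fluctuations rather than just mean-square error.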
NASA Technical Reports Server (NTRS)
Suit, W. T.; Cannaday, R. L.
1979-01-01
The longitudinal and lateral stability and control parameters for a high wing, general aviation, airplane are examined. Estimations using flight data obtained at various flight conditions within the normal range of the aircraft are presented. The estimations techniques, an output error technique (maximum likelihood) and an equation error technique (linear regression), are presented. The longitudinal static parameters are estimated from climbing, descending, and quasi steady state flight data. The lateral excitations involve a combination of rudder and ailerons. The sensitivity of the aircraft modes of motion to variations in the parameter estimates are discussed.
Model Reduction for Control System Design
NASA Technical Reports Server (NTRS)
Enns, D. F.
1985-01-01
An approach and a technique for effectively obtaining reduced order mathematical models of a given large order model for the purposes of synthesis, analysis and implementation of control systems is developed. This approach involves the use of an error criterion which is the H-infinity norm of a frequency weighted error between the full and reduced order models. The weightings are chosen to take into account the purpose for which the reduced order model is intended. A previously unknown error bound in the H-infinity norm for reduced order models obtained from internally balanced realizations was obtained. This motivated further development of the balancing technique to include the frequency dependent weightings. This resulted in the frequency weighted balanced realization and a new model reduction technique. Two approaches to designing reduced order controllers were developed. The first involves reducing the order of a high order controller with an appropriate weighting. The second involves linear quadratic Gaussian synthesis based on a reduced order model obtained with an appropriate weighting.
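The internally balanced realization underlying the technique (without the frequency weighting) can be sketched as follows; the random stable model is illustrative, and the final line checks the twice-the-tail H-infinity error bound mentioned above:

```python
import numpy as np

def lyap(A, Qm):
    # Solve A X + X A^T + Qm = 0 by Kronecker vectorization (small n only).
    n = A.shape[0]
    M = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
    X = np.linalg.solve(M, -Qm.flatten()).reshape(n, n)
    return 0.5 * (X + X.T)

rng = np.random.default_rng(4)
n, r = 6, 2
A = rng.normal(size=(n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)   # make A stable
B = rng.normal(size=(n, 1))
C = rng.normal(size=(1, n))

# Controllability and observability Gramians of the full-order model.
P = lyap(A, B @ B.T)
Q = lyap(A.T, C.T @ C)

# Square-root internal balancing: Hankel singular values and projections.
Lp = np.linalg.cholesky(P)
Lq = np.linalg.cholesky(Q)
U, hsv, Vt = np.linalg.svd(Lq.T @ Lp)
T = Lp @ Vt.T @ np.diag(hsv ** -0.5)     # right transformation
Ti = np.diag(hsv ** -0.5) @ U.T @ Lq.T   # left transformation
Ar, Br, Cr = (Ti @ A @ T)[:r, :r], (Ti @ B)[:r], (C @ T)[:, :r]

# Twice-the-tail bound on the H-infinity error of balanced truncation.
bound = 2.0 * hsv[r:].sum()
freqs = np.logspace(-2, 2, 200)
err = max(abs((C @ np.linalg.solve(1j * w * np.eye(n) - A, B)
               - Cr @ np.linalg.solve(1j * w * np.eye(r) - Ar, Br))[0, 0])
          for w in freqs)
print(err, bound)
```

The frequency-weighted variant developed in the report replaces the plain Gramians with weighted ones; the truncation and bound machinery is otherwise the same.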
Online Error Reporting for Managing Quality Control Within Radiology.
Golnari, Pedram; Forsberg, Daniel; Rosipko, Beverly; Sunshine, Jeffrey L
2016-06-01
Information technology systems within health care, such as picture archiving and communication system (PACS) in radiology, can have a positive impact on production but can also risk compromising quality. The widespread use of PACS has removed the previous feedback loop between radiologists and technologists. Instead of direct communication of quality discrepancies found for an examination, the radiologist submitted a paper-based quality-control report. A web-based issue-reporting tool can help restore some of the feedback loop and also provide possibilities for more detailed analysis of submitted errors. The purpose of this study was to evaluate the hypothesis that data from use of an online error reporting software for quality control can focus our efforts within our department. For the 372,258 radiologic examinations conducted during the 6-month period study, 930 errors (390 exam protocol, 390 exam validation, and 150 exam technique) were submitted, corresponding to an error rate of 0.25 %. Within the category exam protocol, technologist documentation had the highest number of submitted errors in ultrasonography (77 errors [44 %]), while imaging protocol errors were the highest subtype error for computed tomography modality (35 errors [18 %]). Positioning and incorrect accession had the highest errors in the exam technique and exam validation error category, respectively, for nearly all of the modalities. An error rate less than 1 % could signify a system with a very high quality; however, a more likely explanation is that not all errors were detected or reported. Furthermore, staff reception of the error reporting system could also affect the reporting rate.
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1981-01-01
A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.
Digital implementation of a laser frequency stabilisation technique in the telecommunications band
NASA Astrophysics Data System (ADS)
Jivan, Pritesh; van Brakel, Adriaan; Manuel, Rodolfo Martínez; Grobler, Michael
2016-02-01
Laser frequency stabilisation in the telecommunications band was realised using the Pound-Drever-Hall (PDH) error signal. The transmission spectrum of the Fabry-Perot cavity was used as opposed to the traditionally used reflected spectrum. A comparison was done using an analogue as well as a digitally implemented system. This study forms part of an initial step towards developing a portable optical time and frequency standard. The frequency discriminator used in the experimental setup was a fibre-based Fabry-Perot etalon. The phase-sensitive system made use of the optical heterodyne technique to detect changes in the phase of the system. A lock-in amplifier was used to filter and mix the input signals to generate the error signal. This error signal may then be used to generate a control signal via a PID controller. An error signal was realised at a wavelength of 1556 nm, which corresponds to an optical frequency of approximately 192.6 THz. An implementation of the analogue PDH technique yielded an error signal with a bandwidth of 6.134 GHz, while a digital implementation yielded a bandwidth of 5.774 GHz.
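The mix-and-filter role of the lock-in amplifier can be sketched numerically: multiply the noisy input by in-phase and quadrature references at the modulation frequency and low-pass (here, average). Signal level, noise, and frequencies are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
fs, f_mod = 100_000.0, 1_000.0          # sample rate, modulation frequency (Hz)
t = np.arange(200_000) / fs

# Weak modulated signal buried in noise, as seen by a photodetector.
amp_true, phase_true = 0.01, 0.3
signal = amp_true * np.sin(2 * np.pi * f_mod * t + phase_true)
noisy = signal + 0.2 * rng.normal(size=t.size)

# Lock-in detection: mix with quadrature references, then low-pass (average).
ref_i = np.sin(2 * np.pi * f_mod * t)
ref_q = np.cos(2 * np.pi * f_mod * t)
X = 2 * np.mean(noisy * ref_i)          # in-phase component
Y = 2 * np.mean(noisy * ref_q)          # quadrature component
amp_est = np.hypot(X, Y)
phase_est = np.arctan2(Y, X)
print(amp_est, phase_est)
```

The recovered phase is what carries the sign of the detuning in a PDH-style loop: fed to a PID controller, it steers the laser back toward the cavity resonance.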
Error Sources in Asteroid Astrometry
NASA Technical Reports Server (NTRS)
Owen, William M., Jr.
2000-01-01
Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.
Self-regulating proportionally controlled heating apparatus and technique
NASA Technical Reports Server (NTRS)
Strange, M. G. (Inventor)
1975-01-01
A self-regulating proportionally controlled heating apparatus and technique is provided wherein a single electrical resistance heating element having a temperature coefficient of resistance serves simultaneously as a heater and temperature sensor. The heating element is current-driven and the voltage drop across the heating element is monitored and a component extracted which is attributable to a change in actual temperature of the heating element from a desired reference temperature, so as to produce a resulting error signal. The error signal is utilized to control the level of the heater drive current and the actual heater temperature in a direction to reduce the noted temperature difference. The continuous nature of the process for deriving the error signal feedback information results in true proportional control of the heating element without the necessity for current-switching which may interfere with nearby sensitive circuits, and with no cyclical variation in the controlled temperature.
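A sketch of the resistance-as-sensor idea: a lumped thermal simulation in which the same element is driven, sensed through its voltage drop, and proportionally controlled. All physical constants are invented, and pure proportional control leaves the small steady-state offset visible in the result:

```python
# Assumed copper-like element: resistance rises linearly with temperature.
R0, alpha, T_ref = 10.0, 0.004, 20.0      # ohms, 1/degC, degC

def resistance(T):
    return R0 * (1 + alpha * (T - T_ref))

T_set = 80.0
R_set = resistance(T_set)                 # reference corresponds to this value
T, T_amb = 20.0, 20.0
C, k = 50.0, 0.5                          # thermal mass (J/degC), loss (W/degC)
I, dt = 1.0, 0.1
gain = 5.0                                # proportional gain on the error

for _ in range(20000):
    V = I * resistance(T)                 # the element is heater AND sensor
    error = R_set - V / I                 # deviation of sensed resistance
    I = min(max(1.0 + gain * error, 0.0), 5.0)   # proportional current drive
    P_heat = I**2 * resistance(T)
    T += dt * (P_heat - k * (T - T_amb)) / C

print(T)
```

Because the drive current varies continuously rather than switching on and off, there is no cyclical temperature ripple and no switching interference, which is the benefit claimed in the abstract.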
Evaluating the technique of using inhalation device in COPD and bronchial asthma patients.
Arora, Piyush; Kumar, Lokender; Vohra, Vikram; Sarin, Rohit; Jaiswal, Anand; Puri, M M; Rathee, Deepti; Chakraborty, Pitambar
2014-07-01
In asthma management, poor handling of inhalation devices and wrong inhalation technique are associated with decreased medication delivery and poor disease control. The key to overcome the drawbacks in inhalation technique is to make patients familiar with issues related to correct use and performance of these medical devices. The objective of this study was to evaluate and analyse technique of use of the inhalation device used by patients of COPD and Bronchial Asthma. A total of 300 cases of BA or COPD patients using different types of inhalation devices were included in this observational study. Data were captured using a proforma and were analysed using SPSS version 15.0. Out of total 300 enrolled patients, 247 (82.3%) made at least one error. Maximum errors observed in subjects using MDI (94.3%), followed by DPI (82.3%), MDI with Spacer (78%) while Nebulizer users (70%) made least number of errors (p = 0.005). Illiterate patients showed 95.2% error while post-graduate and professionals showed 33.3%. This difference was statistically significant (p < 0.001). Self-educated patients committed 100% error, while those trained by a doctor made 56.3% error. Majority of patients using inhalation devices made errors while using the device. Proper education to patients on correct usage may not only improve control of the symptoms of the disease but might also allow dose reduction in long term. Copyright © 2014 Elsevier Ltd. All rights reserved.
Historical shoreline mapping (I): improving techniques and reducing positioning errors
Thieler, E. Robert; Danforth, William W.
1994-01-01
A critical need exists among coastal researchers and policy-makers for a precise method to obtain shoreline positions from historical maps and aerial photographs. A number of methods that vary widely in approach and accuracy have been developed to meet this need. None of the existing methods, however, address the entire range of cartographic and photogrammetric techniques required for accurate coastal mapping. Thus, their application to many typical shoreline mapping problems is limited. In addition, no shoreline mapping technique provides an adequate basis for quantifying the many errors inherent in shoreline mapping using maps and air photos. As a result, current assessments of errors in air photo mapping techniques generally (and falsely) assume that errors in shoreline positions are represented by the sum of a series of worst-case assumptions about digitizer operator resolution and ground control accuracy. These assessments also ignore altogether other errors that commonly approach ground distances of 10 m. This paper provides a conceptual and analytical framework for improved methods of extracting geographic data from maps and aerial photographs. We also present a new approach to shoreline mapping using air photos that revises and extends a number of photogrammetric techniques. These techniques include (1) developing spatially and temporally overlapping control networks for large groups of photos; (2) digitizing air photos for use in shoreline mapping; (3) preprocessing digitized photos to remove lens distortion and film deformation effects; (4) simultaneous aerotriangulation of large groups of spatially and temporally overlapping photos; and (5) using a single-ray intersection technique to determine geographic shoreline coordinates and express the horizontal and vertical error associated with a given digitized shoreline. 
As long as historical maps and air photos are used in studies of shoreline change, there will be a considerable amount of error (on the order of several meters) present in shoreline position and rate-of- change calculations. The techniques presented in this paper, however, provide a means to reduce and quantify these errors so that realistic assessments of the technological noise (as opposed to geological noise) in geographic shoreline positions can be made.
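The coordinate-determination steps rest on least-squares fits over control networks. A minimal two-dimensional sketch, fitting an affine photo-to-ground transform from ground control points (all coordinates and the transform itself are invented, and a full photogrammetric solution would also model lens distortion and relief displacement):

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed true affine mapping from photo (pixel) to ground coordinates.
A_true = np.array([[0.5, 0.02], [-0.03, 0.5]])
t_true = np.array([1000.0, 2000.0])

photo = rng.uniform(0, 5000, size=(12, 2))           # control points on photo
ground = photo @ A_true.T + t_true + rng.normal(scale=0.5, size=(12, 2))

# Least-squares fit of the 6 affine parameters from the control network.
X = np.hstack([photo, np.ones((12, 1))])
params, *_ = np.linalg.lstsq(X, ground, rcond=None)
pred = X @ params
rmse = np.sqrt(np.mean((pred - ground) ** 2))
print(rmse)   # residual of the control-point fit, in ground units
```

The control-point residual (here about half a metre, set by the injected noise) is exactly the kind of quantified, per-shoreline error estimate the paper argues should replace worst-case error sums.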
Aircraft flight test trajectory control
NASA Technical Reports Server (NTRS)
Menon, P. K. A.; Walker, R. A.
1988-01-01
Two design techniques for linear flight test trajectory controllers (FTTCs) are described: eigenstructure assignment and the minimum error excitation technique. The two techniques are used to design FTTCs for an F-15 aircraft model for eight different maneuvers at thirty different flight conditions. An evaluation of the FTTCs is presented.
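The eigenvalue-placement half of eigenstructure assignment can be sketched with Ackermann's formula for a single-input model (the toy matrices below are illustrative, not the F-15 model; full eigenstructure assignment also shapes the eigenvectors, which needs multiple inputs):

```python
import numpy as np

def acker(A, b, poles):
    # Ackermann's formula for single-input pole placement.
    n = A.shape[0]
    ctrb = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    coeffs = np.poly(poles)              # desired characteristic polynomial
    phiA = sum(c * np.linalg.matrix_power(A, n - i)
               for i, c in enumerate(coeffs))
    last_row = np.zeros(n); last_row[-1] = 1.0
    return (np.linalg.solve(ctrb.T, last_row) @ phiA).reshape(1, -1)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # toy open-loop dynamics
b = np.array([[0.0], [1.0]])
K = acker(A, b, [-4.0, -5.0])              # place closed-loop poles
closed = np.linalg.eigvals(A - b @ K)
print(sorted(closed.real))
```

Placing the closed-loop poles at the maneuver-dependent locations is what lets one controller family cover many flight conditions.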
Floating-point system quantization errors in digital control systems
NASA Technical Reports Server (NTRS)
Phillips, C. L.
1973-01-01
The results are reported of research into the effects on system operation of signal quantization in a digital control system. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. As output, the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.
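The effect being analyzed can be sketched by truncating the mantissa after each filter operation and comparing against full precision; the first-order controller, its coefficients, and the word lengths are illustrative:

```python
import numpy as np

def quantize(v, mant_bits):
    # Round a value to the given number of mantissa bits: a crude model
    # of a short-wordlength floating-point digital controller.
    m, e = np.frexp(v)
    return np.ldexp(np.round(m * 2**mant_bits) / 2**mant_bits, e)

def run_filter(x, mant_bits=None):
    # First-order digital controller y[k] = a*y[k-1] + b*x[k].
    a, b = 0.95, 0.05
    y, out = 0.0, []
    for xk in x:
        y = a * y + b * xk
        if mant_bits is not None:
            y = quantize(y, mant_bits)   # quantize after each update
        out.append(y)
    return np.array(out)

x = np.sin(0.05 * np.arange(2000))
exact = run_filter(x)
err8 = np.sqrt(np.mean((run_filter(x, 8) - exact) ** 2))
err16 = np.sqrt(np.mean((run_filter(x, 16) - exact) ** 2))
print(err8, err16)
```

Comparing rms output error across word lengths and across algebraically equivalent programming forms is, in miniature, what the reported simulation program automates.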
Integrated Data and Control Level Fault Tolerance Techniques for Signal Processing Computer Design
Redinbo, G. Robert
1990-09-01
High-speed signal processing is an important application of … techniques and mathematical approaches will be expanded later to the situation where hardware errors and roundoff and quantization noise affect all … detect errors equal in number to the degree of g(X), the maximum permitted by the Singleton bound [13]. Real cyclic codes, primarily applicable to …
NASA Technical Reports Server (NTRS)
Migneault, Gerard E.
1987-01-01
Emulation techniques can be a solution to a difficulty that arises in the analysis of the reliability of guidance and control computer systems for future commercial aircraft. Described here is the difficulty: the lack of credibility of reliability estimates obtained by analytical modeling techniques. The difficulty is an unavoidable consequence of the following: (1) a reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Use of emulation techniques for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques is then discussed. Finally, several examples of the application of emulation techniques are described.
Development of an FAA-EUROCONTROL technique for the analysis of human error in ATM : final report.
DOT National Transportation Integrated Search
2002-07-01
Human error has been identified as a dominant risk factor in safety-oriented industries such as air traffic control (ATC). However, little is known about the factors leading to human errors in current air traffic management (ATM) systems. The first s...
Westerik, Janine A. M.; Carter, Victoria; Chrystyn, Henry; Burden, Anne; Thompson, Samantha L.; Ryan, Dermot; Gruffydd-Jones, Kevin; Haughney, John; Roche, Nicolas; Lavorini, Federico; Papi, Alberto; Infantino, Antonio; Roman-Rodriguez, Miguel; Bosnic-Anticevich, Sinthia; Lisspers, Karin; Ställberg, Björn; Henrichsen, Svein Høegh; van der Molen, Thys; Hutton, Catherine; Price, David B.
2016-01-01
Objective: Correct inhaler technique is central to effective delivery of asthma therapy. The study aim was to identify factors associated with serious inhaler technique errors and their prevalence among primary care patients with asthma using the Diskus dry powder inhaler (DPI). Methods: This was a historical, multinational, cross-sectional study (2011–2013) using the iHARP database, an international initiative that includes patient- and healthcare provider-reported questionnaires from eight countries. Patients with asthma were observed for serious inhaler errors by trained healthcare providers as predefined by the iHARP steering committee. Multivariable logistic regression, stepwise reduced, was used to identify clinical characteristics and asthma-related outcomes associated with ≥1 serious errors. Results: Of 3681 patients with asthma, 623 (17%) were using a Diskus (mean [SD] age, 51 [14]; 61% women). A total of 341 (55%) patients made ≥1 serious errors. The most common errors were the failure to exhale before inhalation, insufficient breath-hold at the end of inhalation, and inhalation that was not forceful from the start. Factors significantly associated with ≥1 serious errors included asthma-related hospitalization the previous year (odds ratio [OR] 2.07; 95% confidence interval [CI], 1.26–3.40); obesity (OR 1.75; 1.17–2.63); poor asthma control the previous 4 weeks (OR 1.57; 1.04–2.36); female sex (OR 1.51; 1.08–2.10); and no inhaler technique review during the previous year (OR 1.45; 1.04–2.02). Conclusions: Patients with evidence of poor asthma control should be targeted for a review of their inhaler technique even when using a device thought to have a low error rate. PMID:26810934
One way Doppler Extractor. Volume 2: Digital VCO technique
NASA Technical Reports Server (NTRS)
Nossen, E. J.; Starner, E. R.
1974-01-01
A feasibility analysis and trade-offs for a one-way Doppler extractor using digital VCO techniques are presented. The method of Doppler measurement involves the use of a digital phase lock loop; once this loop is locked to the incoming signal, the precise frequency and hence the Doppler component can be determined directly from the contents of the digital control register. The only serious error source is due to internally generated noise. Techniques are presented for minimizing this error source and achieving an accuracy of 0.01 Hz in a one-second averaging period. A number of digitally controlled oscillators were analyzed from a performance and complexity point of view. The most promising technique uses an arithmetic synthesizer as a digital waveform generator.
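The idea of reading frequency directly out of a locked loop's control register can be sketched with a minimal type-II digital PLL. The sample rate, loop gains, and frequency offset below are hypothetical, not the report's design parameters:

```python
import cmath
import math

def doppler_from_pll(samples, fs, f_nom, kp=0.05, ki=0.005):
    """Type-II digital PLL on complex baseband samples. After lock, the
    frequency control word holds the offset from f_nom -- the 'Doppler'."""
    phase, dfreq = 0.0, 0.0   # NCO phase (cycles) and frequency offset (cycles/sample)
    for s in samples:
        nco = cmath.exp(2j * math.pi * phase)
        err = cmath.phase(s * nco.conjugate()) / (2.0 * math.pi)  # phase error, cycles
        dfreq += ki * err                        # loop integrator (frequency register)
        phase += f_nom / fs + dfreq + kp * err   # advance the NCO
    return dfreq * fs                            # read the offset off the register, Hz

# Hypothetical tone: a nominal 1 kHz carrier shifted by 37.5 Hz of "Doppler"
fs, f_nom, doppler = 10000.0, 1000.0, 37.5
x = [cmath.exp(2j * math.pi * (f_nom + doppler) * k / fs) for k in range(5000)]
est = doppler_from_pll(x, fs, f_nom)
print(round(est, 1))   # ~37.5
```

Because the loop is type II, the frequency register settles to the true offset with zero steady-state error on a clean tone; noise then sets the residual, as the abstract notes.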
High-speed reference-beam-angle control technique for holographic memory drive
NASA Astrophysics Data System (ADS)
Yamada, Ken-ichiro; Ogata, Takeshi; Hosaka, Makoto; Fujita, Koji; Okuyama, Atsushi
2016-09-01
We developed a holographic memory drive for next-generation optical memory. In this study, we present the key technology for achieving a high-speed transfer rate for reproduction, that is, a high-speed control technique for the reference beam angle. In reproduction in a holographic memory drive, there is the issue that the optimum reference beam angle during reproduction varies owing to distortion of the medium. The distortion is caused by, for example, temperature variation, beam irradiation, and moisture absorption. Therefore, a reference-beam-angle control technique to position the reference beam at the optimum angle is crucial. We developed a new optical system that generates an angle-error-signal to detect the optimum reference beam angle. To achieve the high-speed control technique using the new optical system, we developed a new control technique called adaptive final-state control (AFSC) that adds a second control input to the first one derived from conventional final-state control (FSC) at the time of angle-error-signal detection. We established an actual experimental system employing AFSC to achieve moving control between each page (Page Seek) within 300 µs. In sequential multiple Page Seeks, we were able to realize positioning to the optimum angles of the reference beam that maximize the diffracted beam intensity. We expect that applying the new control technique to the holographic memory drive will enable a giga-bit/s-class transfer rate.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1994-01-01
When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
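The role of interleaving described above can be illustrated with a toy block interleaver. Only the depth-5 structure follows the scheme described; the codeword length and symbols are placeholders, not RS(255,223) data:

```python
def interleave(codewords):
    """Transmit column-by-column across codeword rows, so that a channel
    burst of length <= depth touches each codeword at most once."""
    return [row[i] for i in range(len(codewords[0])) for row in codewords]

def deinterleave(stream, depth):
    """Invert interleave(): transmitted symbol k came from codeword k % depth."""
    return [stream[r::depth] for r in range(depth)]

# Toy example: depth 5 (as in the scheme above), codeword length 8
depth, length = 5, 8
codewords = [[(r, c) for c in range(length)] for r in range(depth)]
tx = interleave(codewords)

# A burst wiping out 5 consecutive channel symbols...
for i in range(10, 15):
    tx[i] = None

# ...lands as a single erasure in each codeword after deinterleaving
rx = deinterleave(tx, depth)
print([row.count(None) for row in rx])   # [1, 1, 1, 1, 1]
```

This is the sense in which burst correction capability grows in proportion to interleave depth: each codeword's decoder only ever sees its 1/depth share of a burst.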
Quality assessment and control of finite element solutions
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Babuska, Ivo
1987-01-01
Status and some recent developments in the techniques for assessing the reliability of finite element solutions are summarized. Discussion focuses on a number of aspects including: the major types of errors in the finite element solutions; techniques used for a posteriori error estimation and the reliability of these estimators; the feedback and adaptive strategies for improving the finite element solutions; and postprocessing approaches used for improving the accuracy of stresses and other important engineering data. Also, future directions for research needed to make error estimation and adaptive improvement practical are identified.
NASA Technical Reports Server (NTRS)
Belcastro, Celeste M.; Fischl, Robert; Kam, Moshe
1992-01-01
This paper presents a strategy for dynamically monitoring digital controllers in the laboratory for susceptibility to electromagnetic disturbances that compromise control integrity. The integrity of digital control systems operating in harsh electromagnetic environments can be compromised by upsets caused by induced transient electrical signals. Digital system upset is a functional error mode that involves no component damage, can occur simultaneously in all channels of a redundant control computer, and is software dependent. The motivation for this work is the need to develop tools and techniques that can be used in the laboratory to validate and/or certify critical aircraft controllers operating in electromagnetically adverse environments that result from lightning, high-intensity radiated fields (HIRF), and nuclear electromagnetic pulses (NEMP). The detection strategy presented in this paper provides dynamic monitoring of a given control computer for degraded functional integrity resulting from redundancy management errors, control calculation errors, and control correctness/effectiveness errors. In particular, this paper discusses the use of Kalman filtering, data fusion, and statistical decision theory in monitoring a given digital controller for control calculation errors.
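A scalar sketch of the Kalman-filter side of such monitoring follows: the filter's normalized innovation is screened against a chi-square-style threshold, and an injected bias (standing in for a control calculation error) trips the flag. The system model, noise levels, and threshold are hypothetical, not the paper's:

```python
import random

def innovation_flags(measurements, a=0.9, q=0.01, r=0.04, threshold=9.0):
    """Scalar Kalman filter that flags samples whose normalized innovation
    nu^2/S exceeds `threshold` (~chi-square test with one degree of freedom)."""
    x, p, flags = 0.0, 1.0, []
    for z in measurements:
        x, p = a * x, a * a * p + q          # predict
        nu, s = z - x, p + r                 # innovation and its variance
        flags.append(nu * nu / s > threshold)
        k = p / s                            # update
        x, p = x + k * nu, (1.0 - k) * p
    return flags

random.seed(1)
clean = [random.gauss(0.0, 0.2) for _ in range(60)]
faulty = clean[:40] + [z + 3.0 for z in clean[40:]]   # upset injects a bias
flags = innovation_flags(faulty)
print(flags[40])   # True: the injected bias trips the threshold
```

In a multi-channel monitor, per-channel flags like these would feed the data fusion and statistical decision stages the abstract mentions.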
Floating-point system quantization errors in digital control systems
NASA Technical Reports Server (NTRS)
Phillips, C. L.; Vallely, D. P.
1978-01-01
This paper considers digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. A quantization error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. The program can be integrated into existing digital simulations of a system.
Anticipatory Neurofuzzy Control
NASA Technical Reports Server (NTRS)
Mccullough, Claire L.
1994-01-01
Technique of feedback control, called "anticipatory neurofuzzy control," developed for use in controlling flexible structures and other dynamic systems for which mathematical models of dynamics poorly known or unknown. Superior ability to act during operation to compensate for, and adapt to, errors in mathematical model of dynamics, changes in dynamics, and noise. Also offers advantage of reduced computing time. Hybrid of two older fuzzy-logic control techniques: standard fuzzy control and predictive fuzzy control.
NASA Astrophysics Data System (ADS)
Zhang, Menghua; Ma, Xin; Rong, Xuewen; Tian, Xincheng; Li, Yibin
2017-02-01
This paper exploits an error tracking control method for overhead crane systems for which the error trajectories for the trolley and the payload swing can be pre-specified. The proposed method does not require that the initial payload swing angle remain zero, whereas this requirement is usually assumed in conventional methods. The significant feature of the proposed method is its superior control performance as well as its strong robustness over different or uncertain rope lengths, payload masses, desired positions, initial payload swing angles, and external disturbances. Owing to the same attenuation behavior, the desired error trajectory for the trolley does not need to be reset for each traveling distance, which is easy to implement in practical applications. By converting the error tracking overhead crane dynamics to the objective system, we obtain the error tracking control law for arbitrary initial payload swing angles. Lyapunov techniques and LaSalle's invariance theorem are utilized to prove the convergence and stability of the closed-loop system. Simulation and experimental results are illustrated to validate the superior performance of the proposed error tracking control method.
Adaptive data rate control TDMA systems as a rain attenuation compensation technique
NASA Technical Reports Server (NTRS)
Sato, Masaki; Wakana, Hiromitsu; Takahashi, Takashi; Takeuchi, Makoto; Yamamoto, Minoru
1993-01-01
Rainfall attenuation has a severe effect on signal strength and impairs communication links for future mobile and personal satellite communications using Ka-band and millimeter wave frequencies. As rain attenuation compensation techniques, several methods such as uplink power control, site diversity, and adaptive control of data rate or forward error correction have been proposed. In this paper, we propose a TDMA system that can compensate for rain attenuation by adaptive control of transmission rates. To evaluate the performance of this TDMA terminal, we carried out three types of experiments: experiments using the Japanese CS-3 satellite with Ka-band transponders, in-house IF loop-back experiments, and computer simulations. Experimental results show that this TDMA system has advantages over conventional constant-rate TDMA systems, as a resource-sharing technique, in both bit error rate and the total TDMA burst lengths required for transmitting given information.
Clover: Compiler directed lightweight soft error resilience
Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...
2015-05-01
This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpointing. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experiment results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.
Integrator Windup Protection-Techniques and a STOVL Aircraft Engine Controller Application
NASA Technical Reports Server (NTRS)
KrishnaKumar, K.; Narayanaswamy, S.
1997-01-01
Integrators are included in the feedback loop of a control system to eliminate the steady state errors in the commanded variables. The integrator windup problem arises if the control actuators encounter operational limits before the steady state errors are driven to zero by the integrator. The typical effects of windup are large system oscillations, high steady state error, and a delayed system response following the windup. In this study, methods to prevent integrator windup are examined to provide Integrator Windup Protection (IWP) for an engine controller of a Short Take-Off and Vertical Landing (STOVL) aircraft. A unified performance index is defined to optimize the performance of the Conventional Anti-Windup (CAW) and the Modified Anti-Windup (MAW) methods. A modified Genetic Algorithm search procedure with stochastic parameter encoding is implemented to obtain the optimal parameters of the CAW scheme. The advantages and drawbacks of the CAW and MAW techniques are discussed and recommendations are made for the choice of the IWP scheme, given some characteristics of the system.
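One common anti-windup scheme, conditional integration, can be sketched as follows. The gains and limits are hypothetical, and this is a generic CAW-style illustration rather than the paper's exact formulation:

```python
def pi_step(error, integ, kp=2.0, ki=1.0, dt=0.01, u_min=-1.0, u_max=1.0):
    """One step of a PI controller with conditional-integration anti-windup:
    the integrator is frozen while the actuator is saturated and the error
    would push it further into saturation."""
    u_unsat = kp * error + ki * integ
    u = min(max(u_unsat, u_min), u_max)
    saturated = u != u_unsat
    if not (saturated and error * u_unsat > 0.0):
        integ += error * dt              # integrate only when it can still help
    return u, integ

# A large, persistent error saturates the actuator immediately
integ_aw, integ_naive = 0.0, 0.0
for _ in range(1000):
    _, integ_aw = pi_step(5.0, integ_aw)
    integ_naive += 5.0 * 0.01            # a naive integrator keeps winding up
print(integ_aw, integ_naive)             # 0.0 vs ~50: windup avoided
```

The wound-up naive integrator is exactly what produces the overshoot and delayed recovery described above once the error finally changes sign.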
Basheti, Iman A; Obeidat, Nathir M; Reddel, Helen K
2017-02-09
Inhaler technique can be corrected with training, but skills drop off quickly without repeated training. The aim of our study was to explore the effect of novel inhaler technique labels on the retention of correct inhaler technique. In this single-blind randomized parallel-group active-controlled study, clinical pharmacists enrolled asthma patients using controller medication by Accuhaler [Diskus] or Turbuhaler. Inhaler technique was assessed using published checklists (score 0-9). Symptom control was assessed by asthma control test. Patients were randomized into active (ACCa; THa) and control (ACCc; THc) groups. All patients received a "Show-and-Tell" inhaler technique counseling service. Active patients also received inhaler labels highlighting their initial errors. Baseline data were available for 95 patients, 68% females, mean age 44.9 (SD 15.2) years. Mean inhaler scores were ACCa:5.3 ± 1.0; THa:4.7 ± 0.9, ACCc:5.5 ± 1.1; THc:4.2 ± 1.0. Asthma was poorly controlled (mean ACT scores ACCa:13.9 ± 4.3; THa:12.1 ± 3.9; ACCc:12.7 ± 3.3; THc:14.3 ± 3.7). After training, all patients had correct technique (score 9/9). After 3 months, there was significantly less decline in inhaler technique scores for active than control groups (mean difference: Accuhaler -1.04 (95% confidence interval -1.92, -0.16, P = 0.022); Turbuhaler -1.61 (-2.63, -0.59, P = 0.003). Symptom control improved significantly, with no significant difference between active and control patients, but active patients used less reliever medication (active 2.19 (SD 1.78) vs. control 3.42 (1.83) puffs/day, P = 0.002). After inhaler training, novel inhaler technique labels improve retention of correct inhaler technique skills with dry powder inhalers. Inhaler technique labels represent a simple, scalable intervention that has the potential to extend the benefit of inhaler training on asthma outcomes. 
REMINDER LABELS IMPROVE INHALER TECHNIQUE: Personalized labels on asthma inhalers remind patients of correct technique and help improve symptoms over time. Iman Basheti at the Applied Science Private University in Jordan and co-workers trialed the approach of placing patient-specific reminder labels on dry-powder asthma inhalers to improve long-term technique. Poor asthma control is often exacerbated by patients making mistakes when using their inhalers. During the trial, 95 patients received inhaler training before being split into two groups: the control group received no further help, while the other group received individualized labels on their inhalers reminding them of their initial errors. After three months, 67% of patients with reminder labels retained correct technique compared to only 12% of controls. They also required less reliever medication and reported improved symptoms. This represents a simple, cheap way of tackling inhaler technique errors.
High-density force myography: A possible alternative for upper-limb prosthetic control.
Radmand, Ashkan; Scheme, Erik; Englehart, Kevin
2016-01-01
Several multiple degree-of-freedom upper-limb prostheses that have the promise of highly dexterous control have recently been developed. Inadequate controllability, however, has limited adoption of these devices. Introducing more robust control methods will likely result in higher acceptance rates. This work investigates the suitability of using high-density force myography (HD-FMG) for prosthetic control. HD-FMG uses a high-density array of pressure sensors to detect changes in the pressure patterns between the residual limb and socket caused by the contraction of the forearm muscles. In this work, HD-FMG outperforms the standard electromyography (EMG)-based system in detecting different wrist and hand gestures. With the arm in a fixed, static position, eight hand and wrist motions were classified with 0.33% error using the HD-FMG technique. Comparatively, classification errors in the range of 2.2%-11.3% have been reported in the literature for multichannel EMG-based approaches. As with EMG, position variation in HD-FMG can introduce classification error, but incorporating position variation into the training protocol reduces this effect. Channel reduction was also applied to the HD-FMG technique to decrease the dimensionality of the problem as well as the size of the sensorized area. We found that with informed, symmetric channel reduction, classification error could be decreased to 0.02%.
Robust dynamical decoupling for quantum computing and quantum memory.
Souza, Alexandre M; Alvarez, Gonzalo A; Suter, Dieter
2011-06-17
Dynamical decoupling (DD) is a popular technique for protecting qubits from the environment. However, unless special care is taken, experimental errors in the control pulses used in this technique can destroy the quantum information instead of preserving it. Here, we investigate techniques for making DD sequences robust against different types of experimental errors while retaining good decoupling efficiency in a fluctuating environment. We present experimental data from solid-state nuclear spin qubits and introduce a new DD sequence that is suitable for quantum computing and quantum memory.
The failures of root canal preparation with hand ProTaper.
Bătăiosu, Marilena; Diaconu, Oana; Moraru, Iren; Dăguci, C; Tuculină, Mihaela; Dăguci, Luminiţa; Gheorghiţă, Lelia
2012-07-01
The failures of root canal preparation are due to anatomical deviations (canals in "C" or "S" shapes) and to technique errors. Technique errors usually arise during the root canal cleansing and shaping stage and result from deviation from the objectives of endodontic treatment. Our study examined technique errors occurring while preparing root canals with hand ProTaper. The study was performed in vitro on 84 extracted teeth (molars, premolars, incisors and canines). The root canals of these teeth were cleansed and shaped with hand ProTaper by the crown-down technique, with canal irrigation with NaOCl (2.5%). The preparation was controlled by X-ray. During root canal preparation, several failures were observed: root canal overinstrumentation, zipping and stripping phenomena, and discarded and/or fractured instruments. Hand ProTaper represents revolutionary progress in endodontic treatment, but deviation from the accepted rules of root canal instrumentation can lead to failure of endodontic treatment.
Feedback control laws for highly maneuverable aircraft
NASA Technical Reports Server (NTRS)
Garrard, William L.; Balas, Gary J.
1994-01-01
During the first half of the year, the investigators concentrated their efforts on completing the design of control laws for the longitudinal axis of the HARV. During the second half of the year they concentrated on the synthesis of control laws for the lateral-directional axes. The longitudinal control law design efforts can be briefly summarized as follows. Longitudinal control laws were developed for the HARV using mu synthesis design techniques coupled with dynamic inversion. An inner loop dynamic inversion controller was used to simplify the system dynamics by eliminating the aerodynamic nonlinearities and inertial cross coupling. Models of the errors resulting from uncertainties in the principal longitudinal aerodynamic terms were developed and included in the model of the HARV with the inner loop dynamic inversion controller. This resulted in an inner loop transfer function model which was an integrator with the modeling errors characterized as uncertainties in gain and phase. Outer loop controllers were then designed using mu synthesis to provide robustness to these modeling errors and give desired response to pilot inputs. Both pitch rate and angle of attack command following systems were designed. The following tasks have been accomplished for the lateral-directional controllers: inner and outer loop dynamic inversion controllers have been designed; an error model based on a linearized perturbation model of the inner loop system was derived; controllers for the inner loop system have been designed, using classical techniques, that control roll rate and Dutch roll response; the inner loop dynamic inversion and classical controllers have been implemented on the six degree of freedom simulation; and lateral-directional control allocation scheme has been developed based on minimizing required control effort.
Reliability issues in active control of large flexible space structures
NASA Technical Reports Server (NTRS)
Vandervelde, W. E.
1986-01-01
Efforts in this reporting period were centered on four research tasks: design of failure detection filters for robust performance in the presence of modeling errors, design of generalized parity relations for robust performance in the presence of modeling errors, design of failure sensitive observers using the geometric system theory of Wonham, and computational techniques for evaluation of the performance of control systems with fault tolerance and redundancy management.
Quantization-Based Adaptive Actor-Critic Tracking Control With Tracking Error Constraints.
Fan, Quan-Yong; Yang, Guang-Hong; Ye, Dan
2018-04-01
In this paper, the problem of adaptive actor-critic (AC) tracking control is investigated for a class of continuous-time nonlinear systems with unknown nonlinearities and quantized inputs. Different from the existing results based on reinforcement learning, the tracking error constraints are considered and new critic functions are constructed to improve the performance further. To ensure that the tracking errors keep within the predefined time-varying boundaries, a tracking error transformation technique is used to constitute an augmented error system. Specific critic functions, rather than the long-term cost function, are introduced to supervise the tracking performance and tune the weights of the AC neural networks (NNs). A novel adaptive controller with a special structure is designed to reduce the effect of the NN reconstruction errors, input quantization, and disturbances. Based on the Lyapunov stability theory, the boundedness of the closed-loop signals and the desired tracking performance can be guaranteed. Finally, simulations on two connected inverted pendulums are given to illustrate the effectiveness of the proposed method.
Synthesis of robust nonlinear autopilots using differential game theory
NASA Technical Reports Server (NTRS)
Menon, P. K. A.
1991-01-01
A synthesis technique for handling unmodeled disturbances in nonlinear control law synthesis was advanced using differential game theory. Two types of modeling inaccuracies can be included in the formulation. The first is a bias-type error, while the second is the scale-factor-type error in the control variables. The disturbances were assumed to satisfy an integral inequality constraint. Additionally, it was assumed that they act in such a way as to maximize a quadratic performance index. Expressions for optimal control and worst-case disturbance were then obtained using optimal control theory.
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Curlander, J. C.
1992-01-01
Estimation of the Doppler centroid ambiguity is a necessary element of the signal processing for SAR systems with large antenna pointing errors. Without proper resolution of the Doppler centroid estimation (DCE) ambiguity, the image quality will be degraded in the system impulse response function and the geometric fidelity. Two techniques for resolution of DCE ambiguity for the spaceborne SAR are presented; they include a brief review of the range cross-correlation technique and presentation of a new technique using multiple pulse repetition frequencies (PRFs). For SAR systems where other performance factors control selection of the PRFs, an algorithm is devised to resolve the ambiguity that uses PRFs of arbitrary numerical values. The performance of this multiple-PRF technique is analyzed based on a statistical error model. An example is presented that demonstrates that, for the Shuttle Imaging Radar-C (SIR-C) C-band SAR, the probability of correct ambiguity resolution is higher than 95 percent for antenna attitude errors as large as 3 deg.
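The multiple-PRF idea can be sketched as a consistency search over integer ambiguities: each baseband estimate is only known modulo its PRF, and the absolute centroid is the value both estimates agree on. The centroid and PRF values below are toy numbers, not SIR-C parameters:

```python
def resolve_ambiguity(f1, prf1, f2, prf2, max_harmonic=10, tol=1.0):
    """Find the absolute Doppler centroid (Hz) consistent with two estimates
    f1, f2 that are each only known modulo their respective PRFs."""
    for m in range(-max_harmonic, max_harmonic + 1):
        cand = f1 + m * prf1
        for n in range(-max_harmonic, max_harmonic + 1):
            if abs(cand - (f2 + n * prf2)) < tol:
                return cand
    return None   # no consistent hypothesis within the search range

# Hypothetical values: true centroid 3460 Hz, PRFs of 1400 Hz and 1550 Hz
true_fdc = 3460.0
prf1, prf2 = 1400.0, 1550.0
f1 = true_fdc % prf1   # 660 Hz baseband estimate from PRF 1
f2 = true_fdc % prf2   # 360 Hz baseband estimate from PRF 2
print(resolve_ambiguity(f1, prf1, f2, prf2))   # 3460.0
```

With noisy estimates, `tol` and the PRF spacing set how far apart ambiguity hypotheses stay distinguishable, which is where the statistical error model above comes in.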
A method of treating the non-grey error in total emittance measurements
NASA Technical Reports Server (NTRS)
Heaney, J. B.; Henninger, J. H.
1971-01-01
In techniques for the rapid determination of total emittance, the sample is generally exposed to surroundings that are at a different temperature than the sample's surface. When the infrared spectral reflectance of the surface is spectrally selective, these techniques introduce an error into the total emittance values. Surfaces of aluminum overcoated with oxides of various thicknesses fall into this class. Because they are often used as temperature control coatings on satellites, their emittances must be accurately known. The magnitude of the error was calculated for Alzak and silicon oxide-coated aluminum and was shown to be dependent on the thickness of the oxide coating. The results demonstrate that, because the magnitude of the error is thickness-dependent, it is generally impossible or impractical to eliminate it by calibrating the measuring device.
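The mechanism of this error can be illustrated by Planck-weighting a spectrally selective emittance at two different temperatures: a non-grey surface yields a different "total" emittance depending on which temperature does the weighting. The step-function surface below is an assumption for illustration, not measured Alzak or silicon oxide data:

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann (SI)

def planck(lam, temp):
    """Blackbody spectral radiance at wavelength lam (m), temperature temp (K)."""
    return (2.0 * H * C**2 / lam**5) / (math.exp(H * C / (lam * KB * temp)) - 1.0)

def total_emittance(eps, temp, lam_lo=1e-6, lam_hi=100e-6, n=5000):
    """Planck-weighted average of the spectral emittance eps(lam) at `temp`
    (midpoint-rule integration over the thermal infrared band)."""
    dl = (lam_hi - lam_lo) / n
    num = den = 0.0
    for i in range(n):
        lam = lam_lo + (i + 0.5) * dl
        w = planck(lam, temp) * dl
        num += eps(lam) * w
        den += w
    return num / den

# Hypothetical selective surface: low emittance below 8 um, high above
eps = lambda lam: 0.1 if lam < 8e-6 else 0.8

e_300 = total_emittance(eps, 300.0)   # weighted for a 300 K sample
e_500 = total_emittance(eps, 500.0)   # weighted as seen by 500 K surroundings
print(f"{e_300:.3f} vs {e_500:.3f}")  # the two totals differ markedly
```

For a grey surface the two weightings agree exactly, which is why the error discussed above appears only for spectrally selective coatings.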
ERIC Educational Resources Information Center
von Davier, Alina A.
2012-01-01
Maintaining comparability of test scores is a major challenge faced by testing programs that have almost continuous administrations. Among the potential problems are scale drift and rapid accumulation of errors. Many standard quality control techniques for testing programs, which can effectively detect and address scale drift for small numbers of…
Research on error control and compensation in magnetorheological finishing.
Dai, Yifan; Hu, Hao; Peng, Xiaoqiang; Wang, Jianmin; Shi, Feng
2011-07-01
Although magnetorheological finishing (MRF) is a deterministic finishing technology, the machining results always fall short of simulation precision in the actual process, and it cannot meet the precision requirements through a single treatment but only after several iterations. We investigate the reasons for this problem through simulations and experiments. Through controlling and compensating for the chief errors in the manufacturing procedure, such as removal function calculation error, positioning error of the removal function, and dynamic performance limitations of the CNC machine, the residual error convergence ratio (ratio of figure error before and after processing) in a single process is markedly increased, and higher figure precision is achieved. Finally, an improved technical process is presented based on this research, and the verification experiment is accomplished on the experimental device we developed. The part is a circular plane mirror of fused silica material, and the surface figure error is improved from the initial λ/5 [peak-to-valley (PV), λ=632.8 nm], λ/30 [root-mean-square (rms)] to the final λ/40 (PV), λ/330 (rms) through just one iteration in 4.4 min. Results show that a higher convergence ratio and processing precision can be obtained by adopting error control and compensation techniques in MRF.
Schultze, A E; Irizarry, A R
2017-02-01
Veterinary clinical pathologists are well positioned via education and training to assist in investigations of unexpected results or increased variation in clinical pathology data. Errors in testing and unexpected variability in clinical pathology data are sometimes referred to as "laboratory errors." These alterations may occur in the preanalytical, analytical, or postanalytical phases of studies. Most of the errors or variability in clinical pathology data occur in the preanalytical or postanalytical phases. True analytical errors occur within the laboratory and are usually the result of operator or instrument error. Analytical errors are often ≤10% of all errors in diagnostic testing, and the frequency of these types of errors has decreased in the last decade. Analytical errors and increased data variability may result from instrument malfunctions, inability to follow proper procedures, undetected failures in quality control, sample misidentification, and/or test interference. This article (1) illustrates several different types of analytical errors and situations within laboratories that may result in increased variability in data, (2) provides recommendations regarding prevention of testing errors and techniques to control variation, and (3) provides a list of references that describe and advise how to deal with increased data variability.
System identification for modeling for control of flexible structures
NASA Technical Reports Server (NTRS)
Mettler, Edward; Milman, Mark
1986-01-01
The major components of a design and operational flight strategy for flexible structure control systems are presented. In this strategy, an initial distributed-parameter control design is developed and implemented from available ground test data and on-orbit identification using sophisticated modeling and synthesis techniques. The reliability of this high-performance controller is directly linked to the accuracy of the parameters on which the design is based. Because uncertainties inevitably grow without system monitoring, maintaining the control system requires an active on-line system identification function to supply parameter updates and covariance information. Control laws can then be modified to improve performance when the error envelopes shrink. In terms of system safety and stability, the covariance information is as important as the parameter values themselves. If the on-line system ID function detects an increase in parameter error covariances, corresponding adjustments must be made in the control laws to increase robustness. If the error covariances exceed some threshold, an autonomous calibration sequence could be initiated to restore the error envelopes to an acceptable level.
Model reference tracking control of an aircraft: a robust adaptive approach
NASA Astrophysics Data System (ADS)
Tanyer, Ilker; Tatlicioglu, Enver; Zergeroglu, Erkan
2017-05-01
This work presents the design and corresponding analysis of a nonlinear robust adaptive controller for model reference tracking of an aircraft that has parametric uncertainties in its system matrices and additive state- and/or time-dependent nonlinear disturbance-like terms in its dynamics. Specifically, a robust integral of the sign of the error feedback term and an adaptive term are fused with a proportional integral controller. Lyapunov-based stability analysis techniques are utilised to prove global asymptotic convergence of the output tracking error. Extensive numerical simulations are presented to illustrate the performance of the proposed robust adaptive controller.
Finite-time control for nonlinear spacecraft attitude based on terminal sliding mode technique.
Song, Zhankui; Li, Hongxing; Sun, Kaibiao
2014-01-01
In this paper, a fast terminal sliding mode control (FTSMC) scheme with double closed loops is proposed for spacecraft attitude control. The FTSMC laws are included in both an inner control loop and an outer control loop. First, a fast terminal sliding surface (FTSS) is constructed, which drives the inner-loop and outer-loop tracking errors on the FTSS to converge to zero in finite time. Second, the FTSMC strategy is designed using Lyapunov's method to ensure the occurrence of the sliding motion in finite time, which retains a fast transient response and improves tracking accuracy. It is proved that FTSMC guarantees convergence of the tracking error in both the reaching and sliding phases. Finally, simulation results demonstrate the effectiveness of the proposed control scheme.
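The finite-time convergence idea behind a terminal sliding surface can be sketched for a double-integrator plant standing in for one attitude axis. This is a minimal illustration using a standard non-singular terminal sliding mode form, not the paper's double-loop FTSMC design; all gains and exponents below are arbitrary demonstration values:

```python
def sgn(x):
    return (x > 0) - (x < 0)

# Non-singular terminal sliding mode for a double integrator e'' = u.
# Surface: s = e + (1/beta) * |e'|^(p/q) * sgn(e'), with 1 < p/q < 2,
# so that once s = 0 is reached, e converges to zero in finite time.
beta, p, q, K = 1.0, 5.0, 3.0, 2.0
e, de = 1.0, 0.0          # initial tracking error and its rate
dt = 1e-3
for _ in range(8000):     # 8 s of simulation
    s = e + (1.0 / beta) * abs(de) ** (p / q) * sgn(de)
    # Equivalent control plus a switching term enforcing the reaching law
    u = -beta * (q / p) * abs(de) ** (2.0 - p / q) * sgn(de) - K * sgn(s)
    de += u * dt
    e += de * dt
```

After a short reaching phase the state slides along s = 0, where the error obeys e' = -beta^(q/p) |e|^(q/p) sgn(e) and reaches (numerically) zero in finite time rather than only asymptotically.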
Control techniques to improve Space Shuttle solid rocket booster separation
NASA Technical Reports Server (NTRS)
Tomlin, D. D.
1983-01-01
The present Space Shuttle's control system does not prevent the Orbiter's main engines from being in gimbal positions that are adverse to solid rocket booster separation. By eliminating the attitude error and attitude rate feedback just prior to solid rocket booster separation, the detrimental effects of the Orbiter's main engines can be reduced. In addition, if angular acceleration feedback is applied, the gimbal torques produced by the Orbiter's engines can reduce the detrimental effects of the aerodynamic torques. This paper develops these control techniques and compares the separation capability of the developed control systems. Currently, with the worst case initial conditions and each Shuttle system dispersion aligned in the worst direction (which is more conservative than will be experienced in flight), the solid rocket booster has an interference with the Shuttle's external tank of 30 in. Elimination of the attitude error and attitude rate feedback reduces that interference to 19 in. Substitution of angular acceleration feedback reduces the interference to 6 in. The two latter interferences can be eliminated by a less conservative analysis technique, that is, by using a root sum square of the system dispersions.
Kim, Myoung-Soo; Kim, Jung-Soon; Jung, In Sook; Kim, Young Hae; Kim, Ho Jung
2007-03-01
The purpose of this study was to develop and evaluate an error-reporting promoting program (ERPP) to systematically reduce the incidence rate of nursing errors in the operating room. A non-equivalent control group non-synchronized design was used. Twenty-six operating room nurses from one university hospital in Busan participated in this study. They were stratified into four groups according to their operating room experience and were allocated to the experimental and control groups using a matching method. The Mann-Whitney U test was used to analyze the differences in pre- and post-test incidence rates of nursing errors between the two groups. The incidence rate of nursing errors decreased significantly in the experimental group, from 28.4% at pre-test to 15.7%. By domain, the incidence rate decreased significantly in the experimental group in three domains ("compliance with aseptic technique," "management of documents," and "environmental management"), while it decreased in the control group, which used the ordinary error-reporting method. An error-reporting system makes it possible to share errors and learn from them. The ERPP was effective in reducing errors in recognition-related nursing activities. For more effective error prevention, risk management efforts should be applied across the whole health care system together with this program.
NASA Astrophysics Data System (ADS)
Schreiber, K. Ulrich; Kodet, Jan
2018-02-01
Highly precise time and stable reference frequencies are fundamental requirements for space geodesy. Satellite laser ranging (SLR) is one of these techniques, which differs from all other applications like Very Long Baseline Interferometry (VLBI), Global Navigation Satellite Systems (GNSS) and finally Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) by the fact that it is an optical two-way measurement technique. That means that there is no need for a clock synchronization process between both ends of the distance covered by the measurement technique. Under the assumption of isotropy for the speed of light, SLR establishes the only practical realization of the Einstein Synchronization process so far. Therefore it is a powerful time transfer technique. However, in order to transfer time between two remote clocks, it is also necessary to tightly control all possible signal delays in the ranging process. This paper discusses the role of time and frequency in SLR as well as the error sources before it addresses the transfer of time between ground and space. The need for improved signal-delay control led to a major redesign of the local time and frequency distribution at the Geodetic Observatory Wettzell. Closure measurements can now be used to identify and remove systematic errors in SLR measurements.
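The reason a two-way link needs no prior clock synchronization can be seen in a few lines of arithmetic: under the isotropy assumption (equal up- and down-link delays), both the clock offset and the path delay fall out of four timestamps. The numbers below are made-up illustration values, not Wettzell data:

```python
# Two-way time transfer between a ground clock and a space clock.
# Both quantities below are unknown to the participants.
offset = 3.2e-6        # s, space clock ahead of ground clock
tau = 1.3e-2           # s, one-way signal delay

t1 = 100.0                      # ground time: pulse transmitted
t2 = (t1 + tau) + offset        # space-clock time: pulse received
t3 = t2 + 1.0e-3                # space-clock time: reply transmitted
t4 = (t3 - offset) + tau        # ground time: reply received

# Assuming equal delays in both directions, the four timestamps alone
# recover both the path delay and the clock offset:
est_tau = ((t4 - t1) - (t3 - t2)) / 2.0
est_offset = ((t2 - t1) - (t4 - t3)) / 2.0
```

Any uncalibrated asymmetry between the two directions (e.g. different instrumental delays in transmit and receive paths) leaks directly into `est_offset`, which is why tight signal-delay control is the prerequisite for time transfer.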
The failures of root canal preparation with hand ProTaper
Bătăiosu, Marilena; Diaconu, Oana; Moraru, Iren; Dăguci, C.; Ţuculină, Mihaela; Dăguci, Luminiţa; Gheorghiţă, Lelia
2012-01-01
The failures of root canal preparation are due to some anatomical deviation (canal in “C” or “S”) and some technique errors. The technique errors are usually present in canal root cleansing and shaping stage and are the result of endodontic treatment objectives deviation. Objectives: Our study was made on technique errors while preparing the canal roots with hand ProTaper. Methodology: Our study was made “in vitro” on 84 extracted teeth (molars, premolars, incisors and canines). The canal root of these teeth were cleansed and shaped with hand ProTaper by crown-down technique and canal irrigation with NaOCl(2,5%). The dental preparation control was made by X-ray. Results: During canal root preparation some failures were observed like: canal root overinstrumentation, zipping and stripping phenomenon, discarded and/or fractured instruments. Conclusions: Hand ProTaper represents a revolutionary progress of endodontic treatment, but a deviation from accepted rules of canal root instrumentation can lead to failures of endodontic treatment. PMID:24778848
1986-06-01
nonlinear form and account for uncertainties in model parameters, structural simplifications of the model, and disturbances.
Error field optimization in DIII-D using extremum seeking control
NASA Astrophysics Data System (ADS)
Lanctot, M. J.; Olofsson, K. E. J.; Capella, M.; Humphreys, D. A.; Eidietis, N.; Hanson, J. M.; Paz-Soldan, C.; Strait, E. J.; Walker, M. L.
2016-07-01
DIII-D experiments have demonstrated a new real-time approach to tokamak error field control based on maximizing the toroidal angular momentum. This approach uses extremum seeking control theory to optimize the error field in real time without inducing instabilities. Slowly-rotating n = 1 fields (the dither), generated by external coils, are used to perturb the angular momentum, monitored in real time using a charge-exchange spectroscopy diagnostic. Simple signal processing of the rotation measurements extracts information about the rotation gradient with respect to the control coil currents. This information is used to converge the control coil currents to a point that maximizes the toroidal angular momentum. The technique is well suited for multi-coil, multi-harmonic error field optimizations in disruption-sensitive devices, as it does not require triggering locked tearing modes or plasma current disruptions. Control simulations highlight the importance of the initial search direction on the rate of convergence, and identify future algorithm upgrades that may allow more rapid convergence, projecting to convergence times in ITER on the order of tens of seconds.
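The core of extremum seeking, dither a control input, demodulate the measured response by the same dither to estimate a gradient, then climb it, can be sketched in scalar form. The quadratic map below is a toy stand-in for rotation versus coil current; all parameters are illustrative, not DIII-D values:

```python
import math

def extremum_seek(J, u0, a=0.1, w=5.0, k=2.0, h=0.5, dt=0.01, steps=40000):
    """Drive u toward a maximizer of the unknown map J by sinusoidal dithering.

    A washout (low-pass) state removes the DC part of the measurement;
    the residual, demodulated by the dither, approximates dJ/du.
    """
    u, y_lp = u0, J(u0)
    for i in range(steps):
        s = math.sin(w * i * dt)
        y = J(u + a * s)               # perturbed measurement
        y_lp += h * (y - y_lp) * dt    # slow average of y (washout state)
        u += k * (y - y_lp) * s * dt   # integrate the demodulated gradient
    return u

# Toy momentum map with its maximum at u = 1.3
u_star = extremum_seek(lambda u: 2.0 - (u - 1.3) ** 2, u0=0.0)
```

Averaged over a dither period, the update moves u at a rate proportional to k·a/2 times the local slope of J, so u climbs toward the maximizer without the controller ever knowing J explicitly.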
Passive quantum error correction of linear optics networks through error averaging
NASA Astrophysics Data System (ADS)
Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.
2018-02-01
We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof-of-principle examples, including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally, we discuss some of the potential uses of this scheme.
NASA Technical Reports Server (NTRS)
Gubarev, Mikhail V.; Kilaru, Kirenmayee; Ramsey, Brian D.
2009-01-01
We are investigating differential deposition as a way of correcting small figure errors inside full-shell grazing-incidence x-ray optics. The optics in our study are fabricated using the electroformed-nickel-replication technique, and the figure errors arise from fabrication errors in the mandrel, from which the shells are replicated, as well as errors induced during the electroforming process. Combined, these give sub-micron-scale figure deviations which limit the angular resolution of the optics to approx. 10 arcsec. Sub-micron figure errors can be corrected by selectively depositing (physical vapor deposition) material inside the shell. The requirements for this filler material are that it must not degrade the ultra-smooth surface finish necessary for efficient x-ray reflection (approx. 5 Å rms), and must not be highly stressed. In addition, a technique must be found to produce well controlled and defined beams within highly constrained geometries, as some of our mirror shells are less than 3 cm in diameter.
NASA Astrophysics Data System (ADS)
Laforest, Martin
Quantum information processing has been the subject of countless discoveries since the early 1990s. It is believed to be the way of the future for computation: using quantum systems permits one to perform computation exponentially faster than on a regular classical computer. Unfortunately, quantum systems that are not isolated do not behave well. They tend to lose their quantum nature due to the presence of the environment. If key information is known about the noise present in the system, methods such as quantum error correction have been developed in order to reduce the errors introduced by the environment during a given quantum computation. In order to harness the quantum world and implement the theoretical ideas of quantum information processing and quantum error correction, it is imperative to understand and quantify the noise present in the quantum processor and benchmark the quality of the control over the qubits. The usual techniques to estimate the noise or the control are based on quantum process tomography (QPT), which, unfortunately, demands an exponential amount of resources. This thesis presents work towards the characterization of noisy processes in an efficient manner. The protocols are developed from a purely abstract setting with no system-dependent variables. To circumvent the exponential nature of quantum process tomography, three different efficient protocols are proposed and experimentally verified. The first protocol uses the idea of quantum error correction to extract relevant parameters about a given noise model, namely the correlation between the dephasing of two qubits. Following that is a protocol using randomization and symmetrization to extract the probability that a given number of qubits are simultaneously corrupted in a quantum memory, regardless of the specifics of the error and which qubits are affected.
Finally, a last protocol, still using randomization ideas, is developed to estimate the average fidelity per computational gates for single and multi qubit systems. Even though liquid state NMR is argued to be unsuitable for scalable quantum information processing, it remains the best test-bed system to experimentally implement, verify and develop protocols aimed at increasing the control over general quantum information processors. For this reason, all the protocols described in this thesis have been implemented in liquid state NMR, which then led to further development of control and analysis techniques.
Tong, Shao Cheng; Li, Yong Ming; Zhang, Hua-Guang
2011-07-01
In this paper, two adaptive neural network (NN) decentralized output feedback control approaches are proposed for a class of uncertain nonlinear large-scale systems with immeasurable states and unknown time delays. Using NNs to approximate the unknown nonlinear functions, an NN state observer is designed to estimate the immeasurable states. By combining the adaptive backstepping technique with decentralized control design principle, an adaptive NN decentralized output feedback control approach is developed. In order to overcome the problem of "explosion of complexity" inherent in the proposed control approach, the dynamic surface control (DSC) technique is introduced into the first adaptive NN decentralized control scheme, and a simplified adaptive NN decentralized output feedback DSC approach is developed. It is proved that the two proposed control approaches can guarantee that all the signals of the closed-loop system are semi-globally uniformly ultimately bounded, and the observer errors and the tracking errors converge to a small neighborhood of the origin. Simulation results are provided to show the effectiveness of the proposed approaches.
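The role the NN plays in such schemes, a linear-in-weights approximator whose weights are adapted online to drive the error down, can be illustrated with a radial basis function network and a gradient (LMS) weight update. The target function, centers, and rates below are arbitrary stand-ins, not the paper's adaptation laws:

```python
import math

# Gaussian RBF network: f_hat(x) = sum_j w[j] * phi_j(x)
centers = [-2.0 + 0.5 * j for j in range(9)]   # centers spanning [-2, 2]
width = 0.5

def features(x):
    return [math.exp(-((x - c) / width) ** 2) for c in centers]

def unknown(x):                 # "unknown" nonlinearity to approximate
    return math.sin(x)

w = [0.0] * len(centers)
xs = [-2.0 + 0.1 * i for i in range(41)]
lr = 0.2
for _ in range(5000):           # online LMS sweeps over the data
    for x in xs:
        phi = features(x)
        err = unknown(x) - sum(wi * p for wi, p in zip(w, phi))
        for j in range(len(w)):
            w[j] += lr * err * phi[j]   # adaptation: weights follow the error

max_err = max(abs(unknown(x) - sum(wi * p for wi, p in zip(w, features(x))))
              for x in xs)
```

Because the network is linear in its weights, this update is a gradient step on the squared error, which is what makes Lyapunov-style boundedness arguments for the closed-loop adaptive system tractable.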
Monitoring robot actions for error detection and recovery
NASA Technical Reports Server (NTRS)
Gini, M.; Smith, R.
1987-01-01
Reliability is a serious problem in computer controlled robot systems. Although robots serve successfully in relatively simple applications such as painting and spot welding, their potential in areas such as automated assembly is hampered by programming problems. A program for assembling parts may be logically correct, execute correctly on a simulator, and even execute correctly on a robot most of the time, yet still fail unexpectedly in the face of real world uncertainties. Recovery from such errors is far more complicated than recovery from simple controller errors, since even expected errors can often manifest themselves in unexpected ways. Here, a novel approach is presented for improving robot reliability. Instead of anticipating errors, researchers use knowledge-based programming techniques so that the robot can autonomously exploit knowledge about its task and environment to detect and recover from failures. They describe preliminary experiments with a system they designed and constructed.
Detecting and Characterizing Semantic Inconsistencies in Ported Code
NASA Technical Reports Server (NTRS)
Ray, Baishakhi; Kim, Miryung; Person, Suzette J.; Rungta, Neha
2013-01-01
Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2004-12-01
Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload are quantitatively evaluated and their effects are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection for the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.
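The division of labor in a concatenated scheme, an inner code mopping up raw channel bit errors and an outer code correcting whatever leaks through, can be shown with a deliberately tiny stand-in: a Hamming(7,4) outer code and a rate-1/3 repetition inner code in place of the RS/RCPC pair, with a hand-crafted error pattern instead of a random channel:

```python
def hamming74_encode(d):              # d: 4 data bits -> 7-bit codeword
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):              # corrects any single bit error
    c = c[:]
    s = (c[0] ^ c[2] ^ c[4] ^ c[6]) \
        + 2 * (c[1] ^ c[2] ^ c[5] ^ c[6]) \
        + 4 * (c[3] ^ c[4] ^ c[5] ^ c[6])
    if s:
        c[s - 1] ^= 1                 # syndrome points at the flipped bit
    return [c[2], c[4], c[5], c[6]]

def inner_encode(bits):               # rate-1/3 repetition code
    return [b for b in bits for _ in range(3)]

def inner_decode(bits):               # majority vote per triple
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

message = [1, 0, 1, 1]
coded = inner_encode(hamming74_encode(message))   # 21 channel bits

# Channel: three scattered single flips (repaired by majority vote) plus a
# double flip in one triple, whose residual error the outer code corrects.
for i in (0, 7, 14, 9, 10):
    coded[i] ^= 1

decoded = hamming74_decode(inner_decode(coded))
```

The same structure scales up directly: RCPC codes replace the repetition code so the inner rate can be punctured to match channel conditions, and RS codes replace Hamming so bursts of residual symbol errors, rather than a single bit per block, can be corrected.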
NASA Technical Reports Server (NTRS)
Edwards, M. H.; Arvidson, R. E.; Guinness, E. A.
1984-01-01
The problem of displaying information on the seafloor morphology is attacked by utilizing digital image processing techniques to generate images for Seabeam data covering three young seamounts on the eastern flank of the East Pacific Rise. Errors in locations between crossing tracks are corrected by interactively identifying features and translating tracks relative to a control track. Spatial interpolation techniques using moving averages are used to interpolate between gridded depth values to produce images in shaded relief and color-coded forms. The digitally processed images clarify the structural control on seamount growth and clearly show the lateral extent of volcanic materials, including the distribution and fault control of subsidiary volcanic constructional features. The image presentations also clearly show artifacts related to both residual navigational errors and to depth or location differences that depend on ship heading relative to slope orientation in regions with steep slopes.
NASA Technical Reports Server (NTRS)
Callantine, Todd J.
2002-01-01
This report describes preliminary research on intelligent agents that make errors. Such agents are crucial to the development of novel agent-based techniques for assessing system safety. The agents extend an agent architecture derived from the Crew Activity Tracking System that has been used as the basis for air traffic controller agents. The report first reviews several error taxonomies. Next, it presents an overview of the air traffic controller agents, then details several mechanisms for causing the agents to err in realistic ways. The report presents a performance assessment of the error-generating agents, and identifies directions for further research. The research was supported by the System-Wide Accident Prevention element of the FAA/NASA Aviation Safety Program.
Development of a digital automatic control law for steep glideslope capture and flare
NASA Technical Reports Server (NTRS)
Halyo, N.
1977-01-01
A longitudinal digital guidance and control law for steep glideslopes using MLS (Microwave Landing System) data is developed for CTOL aircraft using modern estimation and control techniques. The control law covers the final approach phases of glideslope capture, glideslope tracking, and flare to touchdown for automatic landings under adverse weather conditions. The control law uses a constant gain Kalman filter to process MLS and body-mounted accelerometer data to form estimates of flight path errors and wind velocities including wind shear. The flight path error estimates and wind estimates are used for feedback in generating control surface commands. Results of a digital simulation of the aircraft dynamics and the guidance and control law are presented for various wind conditions.
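For a single deviation channel, a constant-gain Kalman filter reduces to fixed-gain predict/correct updates, the classic alpha-beta form below. This is an illustrative sketch only: the gains, dynamics, and noiseless measurement are demonstration choices, not the paper's MLS/accelerometer filter design:

```python
# Constant-gain (steady-state Kalman) tracker for one error channel:
# state = (deviation d, rate v), measurement z = d + noise.
alpha, beta = 0.3, 0.05    # fixed gains (offline steady-state solution)
dt = 0.1

d_est, v_est = 0.0, 0.0
true_d, true_v = 5.0, -0.8             # truth: constant-rate deviation
for k in range(200):
    true_d += true_v * dt
    z = true_d                         # noiseless measurement for clarity
    # predict: propagate the estimate with the assumed dynamics
    d_est += v_est * dt
    # correct: apply the fixed gains to the innovation
    r = z - d_est
    d_est += alpha * r
    v_est += (beta / dt) * r
```

Freezing the gains at their steady-state values trades a little transient optimality for a filter with no on-line covariance propagation, which is why constant-gain Kalman filters suited the flight computers of the era; note the rate estimate converges even though only position is measured.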
NASA Astrophysics Data System (ADS)
Straub, Jeremy
2016-05-01
Quality control is critical to manufacturing. Frequently, techniques are used to define object conformity bounds based on historical quality data. This paper considers techniques for bespoke and small-batch jobs that are not based on statistical models. These techniques also serve jobs where 100% validation is needed due to the mission- or safety-critical nature of particular parts. One issue with this type of system is alignment discrepancies between the generated model and the physical part. This paper discusses and evaluates techniques for characterizing and correcting alignment issues between the projected and perceived data sets to prevent errors attributable to misalignment.
Comparison of Predictive Modeling Methods of Aircraft Landing Speed
NASA Technical Reports Server (NTRS)
Diallo, Ousmane H.
2012-01-01
Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing necessary controller interventions for avoiding separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction model are used to build a multi-regression technique of the response surface equation (RSE). Data obtained from operations of a major airline for a passenger transport aircraft type at the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed error prediction by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model errors represents an over 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state-of-the-art.
Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay
2012-01-01
An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
Optimized method for manufacturing large aspheric surfaces
NASA Astrophysics Data System (ADS)
Zhou, Xusheng; Li, Shengyi; Dai, Yifan; Xie, Xuhui
2007-12-01
Aspheric optics are being used more and more widely in modern optical systems, due to their ability to correct aberrations, enhance image quality, enlarge the field of view and extend the range of effect, while reducing the weight and volume of the system. With the development of optical technology, there are ever more pressing requirements for large-aperture, high-precision aspheric surfaces. The original computer controlled optical surfacing (CCOS) technique cannot meet the challenge of precision and machining efficiency, a problem that has drawn considerable attention from researchers. To address the shortcomings of the original polishing process, an optimized method for manufacturing large aspheric surfaces is put forward. Subsurface damage (SSD), full-aperture errors and the full band of frequency errors are all controlled by this method. Shallower SSD can be obtained by using low-hardness tools and fine abrasive grains in the grinding process. For full-aperture error control, edge effects can be controlled by using smaller tools and an amended material-removal-function model. For control across the full frequency band, low-frequency errors can be corrected with the optimized material removal function, while medium- and high-frequency errors are suppressed using the uniform-removal principle. With this optimized method, the accuracy of a K9 glass paraboloid mirror reached 0.055 waves rms (where a wave is 0.6328 μm) in a short time. The results show that the optimized method can guide large aspheric surface manufacturing effectively.
Compensation for positioning error of industrial robot for flexible vision measuring system
NASA Astrophysics Data System (ADS)
Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui
2013-01-01
Robot positioning error is a major factor limiting the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a vision sensor. Existing compensation methods based on the robot's kinematic model have a significant limitation: they are not effective throughout the whole measuring space. A new compensation method for robot positioning error based on vision measuring techniques is presented. One approach sets global control points in the measured field and attaches an orientation camera to the vision sensor; the orientation camera measures the global control points to compute the transformation from the sensor's current position to the global coordinate system, and the robot's positioning error is compensated accordingly. Another approach places control points on the vision sensor and two large-field cameras behind the sensor; the three-dimensional coordinates of the control points are measured, and the pose and position of the sensor are computed in real time. Experimental results show an RMS spatial positioning error of 3.422 mm with the single-camera method and 0.031 mm with the dual-camera method. It is concluded that the single-camera algorithm needs further improvement for higher accuracy, while the accuracy of the dual-camera method is suitable for practical use.
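The transformation from the sensor's current position to the global coordinate system can be estimated from matched control points with the standard SVD-based (Kabsch) least-squares fit; a minimal sketch with synthetic points, not the paper's calibration pipeline:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ≈ R @ P + t,
    via the SVD-based (Kabsch) solution; P, Q are 3xN point sets."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Hypothetical control points and a known sensor pose to recover
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 6))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[0.5], [-0.2], [1.0]])
Q = R_true @ P + t_true
R_est, t_est = rigid_transform(P, Q)
```
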
Cheng, Long; Hou, Zeng-Guang; Tan, Min; Zhang, W J
2012-10-01
The trajectory tracking problem of a closed-chain five-bar robot is studied in this paper. Based on an error transformation function and the backstepping technique, an approximation-based tracking algorithm is proposed, which can guarantee the control performance of the robotic system in both the steady-state and transient phases. In particular, the overshoot, settling time, and final tracking error of the robotic system can all be adjusted by properly setting the parameters in the error transformation function. The radial basis function neural network (RBFNN) is used to compensate the complicated nonlinear terms in the closed-loop dynamics of the robotic system. The approximation error of the RBFNN is only required to be bounded, which simplifies the initial "trial-and-error" configuration of the neural network. Illustrative examples are given to verify the theoretical analysis and illustrate the effectiveness of the proposed algorithm. Finally, it is also shown that the proposed approximation-based controller can be simplified by a smart mechanical design of the closed-chain robot, which demonstrates the promise of the integrated design and control philosophy.
Maaoui-Ben Hassine, Ikram; Naouar, Mohamed Wissem; Mrabet-Bellaaj, Najiba
2016-05-01
In this paper, Model Predictive Control and Dead-beat predictive control strategies are proposed for the control of a PMSG-based wind energy system. The proposed MPC considers the model of the converter-based system to forecast the possible future behavior of the controlled variables. It selects the voltage vector to be applied that leads to a minimum error by minimizing a predefined cost function. The main features of the MPC are low current THD and robustness against parameter variations. The Dead-beat predictive control is based on the system model to compute the optimum voltage vector that ensures zero steady-state error. The optimum voltage vector is then applied through the Space Vector Modulation (SVM) technique. The main advantages of the Dead-beat predictive control are low current THD and constant switching frequency. The proposed control techniques are presented and detailed for the control of the back-to-back converter in a wind turbine system based on a PMSG. Simulation results (obtained in the Matlab-Simulink environment) and experimental results (obtained on a developed prototyping platform) are presented in order to show the performance of the considered control strategies. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
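A minimal finite-control-set MPC loop in the spirit described: enumerate the converter's voltage vectors, predict one step ahead with a simple current model, and apply the vector minimizing the cost. The model constants and reference are illustrative, not the paper's PMSG parameters.

```python
import numpy as np

# Discrete current model i[k+1] = a*i[k] + b*v[k] for an R-L load
# (hypothetical constants, not the PMSG model from the paper)
a, b = 0.95, 0.05
Vdc = 1.0
# The eight switch states of a two-level converter give six active
# voltage vectors plus two zero vectors in the alpha-beta plane.
vectors = [np.zeros(2)] + [
    (2 / 3) * Vdc * np.array([np.cos(k * np.pi / 3), np.sin(k * np.pi / 3)])
    for k in range(6)
] + [np.zeros(2)]

def mpc_select(i_now, i_ref):
    """Finite-control-set MPC: predict one step ahead for every vector
    and pick the one minimizing the current-error cost."""
    costs = [np.linalg.norm(i_ref - (a * i_now + b * v)) for v in vectors]
    return int(np.argmin(costs))

i = np.zeros(2)
i_ref = np.array([0.4, 0.0])
for _ in range(100):
    i = a * i + b * vectors[mpc_select(i, i_ref)]
```

Because the chosen vector is applied for a whole sample period, the switching frequency is variable; this is the trade-off against the constant-switching-frequency dead-beat plus SVM alternative the abstract mentions.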
Design of an optimal preview controller for linear discrete-time descriptor systems with state delay
NASA Astrophysics Data System (ADS)
Cao, Mengjuan; Liao, Fucheng
2015-04-01
In this paper, the linear discrete-time descriptor system with state delay is studied, and a design method for an optimal preview controller is proposed. First, by using the discrete lifting technique, the original system is transformed into a general descriptor system without state delay in form. Then, taking advantage of the first-order forward difference operator, we construct a descriptor augmented error system, including the state vectors of the lifted system, error vectors, and desired target signals. Rigorous mathematical proofs are given for the regularity, stabilisability, causal controllability, and causal observability of the descriptor augmented error system. Based on these, the optimal preview controller with preview feedforward compensation for the original system is obtained by using the standard optimal regulator theory of the descriptor system. The effectiveness of the proposed method is shown by numerical simulation.
Phase Retrieval for Radio Telescope and Antenna Control
NASA Technical Reports Server (NTRS)
Dean, Bruce
2011-01-01
Phase retrieval is a general term used in optics to describe the estimation of optical imperfections, or "aberrations." The purpose of this innovation is to apply phase retrieval to radio telescope and antenna control in the millimeter-wave band. The application of iterative-transform phase retrieval is made by approximating the incoherent subtraction process as a coherent propagation, a step earlier techniques did not take. This approximation reduces the noise in the data and allows a straightforward application of conventional phase retrieval techniques to radio telescope and antenna control. Thus, for systems utilizing both positive and negative polarity feeds, both surface and alignment errors can be assessed without the use of additional hardware or laser metrology. Knowledge of the antenna surface profile allows errors to be corrected at a given surface temperature and observing angle. In addition to imperfections of the antenna surface figure, the misalignment of multiple antennas operating in unison can reduce or degrade the signal-to-noise ratio of the received or broadcast signals. This technique also has application to the alignment of antenna array configurations.
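Iterative-transform phase retrieval itself is classical (the error-reduction / Gerchberg-Saxton scheme); a sketch on synthetic, self-consistent data is below. The aperture, aberration, and sampling are assumptions for illustration, and the recorded Fourier-magnitude error is non-increasing across iterations.

```python
import numpy as np

def gerchberg_saxton(mag_pupil, mag_focal, iters=200, seed=0):
    """Iterative-transform (error-reduction) phase retrieval: alternate
    between pupil and focal planes, replacing the magnitude in each
    plane with the measured one while keeping the current phase."""
    rng = np.random.default_rng(seed)
    field = mag_pupil * np.exp(1j * rng.uniform(-np.pi, np.pi, mag_pupil.shape))
    errs = []
    for _ in range(iters):
        F = np.fft.fft2(field)
        errs.append(np.linalg.norm(np.abs(F) - mag_focal))
        F = mag_focal * np.exp(1j * np.angle(F))          # focal-plane constraint
        field = np.fft.ifft2(F)
        field = mag_pupil * np.exp(1j * np.angle(field))  # pupil-plane constraint
    return np.angle(field), errs

# Synthetic, self-consistent test data (hypothetical aperture/aberration)
yy, xx = np.mgrid[-1:1:32j, -1:1:32j]
pupil = (xx**2 + yy**2 <= 1.0).astype(float)
true_phase = 0.5 * (xx**2 - yy**2)        # astigmatism-like aberration
focal_mag = np.abs(np.fft.fft2(pupil * np.exp(1j * true_phase)))
est_phase, errs = gerchberg_saxton(pupil, focal_mag)
```
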
Cascade Error Projection with Low Bit Weight Quantization for High Order Correlation Data
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Daud, Taher
1998-01-01
In this paper, we reinvestigate the solution to the chaotic time series prediction problem using a neural network approach. The nature of this problem is that the data sequences never repeat exactly; rather, they lie in a chaotic regime. However, past, present, and future data are correlated in high order. We use the Cascade Error Projection (CEP) learning algorithm to capture the high-order correlation between past and present data and predict future data under limited weight-quantization constraints. Such prediction supports timely estimation for intelligent control systems. Our earlier work showed that CEP can learn the 5-8 bit parity problem with 4 or more bits of weight quantization, and a color segmentation problem with 7 or more bits. In this paper, we demonstrate that chaotic time series can be learned and generalized well with as little as 4-bit weight quantization using round-off and truncation techniques. The results show that generalization suffers less as more weight-quantization bits become available, and that error surfaces with the round-off technique are more symmetric around zero than error surfaces with the truncation technique. This study suggests that CEP is an implementable learning technique for hardware consideration.
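The round-off versus truncation contrast can be demonstrated directly: on uniformly distributed weights, round-off error is roughly symmetric about zero while truncation (floor) error is one-sided. The 4-bit fixed-point format below is an assumed convention, not the paper's exact hardware format.

```python
import numpy as np

def quantize(w, bits, mode="round"):
    """Fixed-point quantization of weights in [-1, 1) with the given
    total bit width; 'trunc' uses floor, as in two's-complement
    truncation hardware."""
    scale = 2 ** (bits - 1)
    q = np.round(w * scale) if mode == "round" else np.floor(w * scale)
    return np.clip(q, -scale, scale - 1) / scale

rng = np.random.default_rng(1)
w = rng.uniform(-1, 1, 10000)
err_round = quantize(w, 4, "round") - w
err_trunc = quantize(w, 4, "trunc") - w
# Round-off error is roughly symmetric about zero, while truncation
# error is one-sided (biased negative).
```
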
2012-03-20
The Food and Drug Administration (FDA) is amending the packaging and labeling control provisions of the current good manufacturing practice (CGMP) regulations for human and veterinary drug products by limiting the application of special control procedures for the use of cut labeling to immediate container labels, individual unit cartons, or multiunit cartons containing immediate containers that are not packaged in individual unit cartons. FDA is also permitting the use of any automated technique, including differentiation by labeling size and shape, that physically prevents incorrect labeling from being processed by labeling and packaging equipment when cut labeling is used. This action is intended to protect consumers from labeling errors more likely to cause adverse health consequences, while eliminating the regulatory burden of applying the rule to labeling unlikely to reach or adversely affect consumers. This action is also intended to permit manufacturers to use a broader range of error prevention and labeling control techniques than permitted by current CGMPs.
Editing of EIA coded, numerically controlled, machine tool tapes
NASA Technical Reports Server (NTRS)
Weiner, J. M.
1975-01-01
Editing of numerically controlled (N/C) machine tool tapes (8-level paper tape) using an interactive graphic display processor is described. A rapid technique required for correcting production errors in N/C tapes was developed using the interactive text editor on the IMLAC PDS-ID graphic display system and two special programs resident on disk. The correction technique and special programs for processing N/C tapes coded to EIA specifications are discussed.
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1985-01-01
The application of the Generalized Likelihood Ratio technique to the detection and identification of aircraft control element failures has been evaluated in a linear digital simulation of the longitudinal dynamics of a B-737 aircraft. Simulation results show that the technique has potential but that the effects of wind turbulence and Kalman filter model errors are problems which must be overcome.
Design Of Combined Stochastic Feedforward/Feedback Control
NASA Technical Reports Server (NTRS)
Halyo, Nesim
1989-01-01
Methodology accommodates variety of control structures and design techniques. In methodology for combined stochastic feedforward/feedback control, main objectives of feedforward and feedback control laws seen clearly. Inclusion of error-integral feedback, dynamic compensation, rate-command control structure, and the like integral to methodology. Another advantage of methodology is flexibility to develop variety of techniques for design of feedback control with arbitrary structures: includes stochastic output feedback, multiconfiguration control, decentralized control, or frequency-domain and classical control methods. Control modes of system include capture and tracking of localizer and glideslope, crab, decrab, and flare. By use of recommended incremental implementation, control laws simulated on digital computer and connected with nonlinear digital simulation of aircraft and its systems.
Active Mirror Predictive and Requirements Verification Software (AMP-ReVS)
NASA Technical Reports Server (NTRS)
Basinger, Scott A.
2012-01-01
This software is designed to predict large active mirror performance at various stages in the fabrication lifecycle of the mirror. It was developed for 1-meter class powered mirrors for astronomical purposes, but is extensible to other geometries. The package accepts finite element model (FEM) inputs and laboratory measured data for large optical-quality mirrors with active figure control. It computes phenomenological contributions to the surface figure error using several built-in optimization techniques. These phenomena include stresses induced in the mirror by the manufacturing process and the support structure, the test procedure, high spatial frequency errors introduced by the polishing process, and other process-dependent deleterious effects due to light-weighting of the mirror. Then, depending on the maturity of the mirror, it either predicts the best surface figure error that the mirror will attain, or it verifies that the requirements for the error sources have been met once the best surface figure error has been measured. The unique feature of this software is that it ties together physical phenomenology with wavefront sensing and control techniques and various optimization methods including convex optimization, Kalman filtering, and quadratic programming to both generate predictive models and to do requirements verification. This software combines three distinct disciplines: wavefront control, predictive models based on FEM, and requirements verification using measured data in a robust, reusable code that is applicable to any large optics for ground and space telescopes. The software also includes state-of-the-art wavefront control algorithms that allow closed-loop performance to be computed. It allows for quantitative trade studies to be performed for optical systems engineering, including computing the best surface figure error under various testing and operating conditions. 
After the mirror manufacturing process and testing have been completed, the software package can be used to verify that the underlying requirements have been met.
Williams, Larry J; O'Boyle, Ernest H
2015-09-01
A persistent concern in the management and applied psychology literature is the effect of common method variance on observed relations among variables. Recent work (i.e., Richardson, Simmering, & Sturman, 2009) evaluated 3 analytical approaches to controlling for common method variance, including the confirmatory factor analysis (CFA) marker technique. Their findings indicated significant problems with this technique, especially with nonideal marker variables (those with theoretical relations with substantive variables). Based on their simulation results, Richardson et al. concluded that not correcting for method variance provides more accurate estimates than using the CFA marker technique. We reexamined the effects of using marker variables in a simulation study and found the degree of error in estimates of a substantive factor correlation was relatively small in most cases, and much smaller than the error associated with making no correction. Further, in instances in which the error was large, the correlations between the marker and substantive scales were higher than those found in organizational research with marker variables. We conclude that in most practical settings, the CFA marker technique yields parameter estimates close to their true values, and the criticisms made by Richardson et al. are overstated. (c) 2015 APA, all rights reserved.
Li, Le-Bao; Sun, Ling-Ling; Zhang, Sheng-Zhou; Yang, Qing-Quan
2015-09-01
A new control approach for speed tracking and synchronization of multiple motors is developed by incorporating an adaptive sliding mode control (ASMC) technique into a ring-coupling synchronization control structure. This control approach can stabilize speed tracking of each motor and synchronize its motion with the other motors' motion, so that speed tracking errors and synchronization errors converge to zero. Moreover, an adaptive law is exploited to estimate the unknown bound of uncertainty, which is obtained in the sense of the Lyapunov stability theorem to minimize the control effort and attenuate chattering. Performance comparisons with parallel control, relative coupling control and conventional PI control are investigated on a four-motor synchronization control system. Extensive simulation results show the effectiveness of the proposed control scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
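The ring-coupling structure can be sketched with four toy integrator "motors," each driven by its own tracking error plus the synchronization error to its ring neighbor. Plain proportional feedback stands in for the paper's adaptive sliding mode law here, to show the coupling structure only; all constants are illustrative.

```python
import numpy as np

# Four "motors" modeled as pure integrators with different gains
gains = np.array([1.0, 0.8, 1.2, 0.9])
ref = 10.0   # common speed reference

def step(speeds, k_track=0.5, k_sync=0.3, dt=0.05):
    """Ring-coupling control: each motor's input combines its own
    tracking error with the synchronization error to its neighbor."""
    track_err = ref - speeds
    sync_err = speeds - np.roll(speeds, -1)   # e_i = w_i - w_{i+1} (ring)
    return speeds + dt * gains * (k_track * track_err - k_sync * sync_err)

speeds = np.zeros(4)
for _ in range(2000):
    speeds = step(speeds)
track_err = ref - speeds
sync_err = speeds - np.roll(speeds, -1)
```
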
Measurement Techniques for Transmit Source Clock Jitter for Weak Serial RF Links
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Schlesinger, Adam M.
2010-01-01
Techniques for filtering clock jitter measurements are developed, in the context of controlling data modulation jitter on an RF carrier to accommodate low signal-to-noise ratio thresholds of high-performance error correction codes. Measurement artifacts from sampling are considered, and a tutorial on interpretation of direct readings is included.
Integration of system identification and robust controller designs for flexible structures in space
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Lew, Jiann-Shiun
1990-01-01
An approach is developed using experimental data to identify a reduced-order model and its model error for a robust controller design. There are three steps involved in the approach. First, an approximately balanced model is identified using the Eigensystem Realization Algorithm. Second, the model error is calculated and described in the frequency domain in terms of the H(infinity) norm. Third, a pole placement technique in combination with an H(infinity) control method is applied to design a controller for the considered system. A set of experimental data from an existing setup, namely the Mini-Mast system, is used to illustrate and verify the approach.
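The first step, identifying an approximately balanced model with the Eigensystem Realization Algorithm, can be sketched for a SISO system: build Hankel matrices of impulse-response (Markov) parameters, truncate the SVD, and read off (A, B, C). The test system below is synthetic, not Mini-Mast data.

```python
import numpy as np

def era(markov, order, rows=5, cols=5):
    """Eigensystem Realization Algorithm (SISO sketch): Hankel matrices
    from Markov parameters, SVD truncation, and an approximately
    balanced realization (A, B, C)."""
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order, :]
    sq = np.sqrt(s)
    A = (U / sq).T @ H1 @ (Vt.T / sq)   # S^-1/2 U' H1 V S^-1/2
    B = (sq[:, None] * Vt)[:, :1]       # first column of S^1/2 V'
    C = (U * sq)[:1, :]                 # first row of U S^1/2
    return A, B, C

# Markov parameters generated from a known 2nd-order system
A0 = np.array([[0.8, 0.2], [-0.1, 0.7]])
B0 = np.array([[1.0], [0.5]])
C0 = np.array([[1.0, 0.0]])
markov = [(C0 @ np.linalg.matrix_power(A0, k) @ B0).item() for k in range(12)]
A, B, C = era(markov, order=2)
```

With noise-free data of the true order, the realization reproduces the Markov parameters exactly; with experimental data the discarded singular values feed the model-error description of step two.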
NASA Technical Reports Server (NTRS)
Mohr, R. L.
1975-01-01
A set of four digital computer programs is presented which can be used to investigate the effects of instrumentation errors on the accuracy of aircraft and helicopter stability-and-control derivatives identified from flight test data. The programs assume that the differential equations of motion are linear and consist of small perturbations about a quasi-steady flight condition. It is also assumed that a Newton-Raphson optimization technique is used for identifying the estimates of the parameters. Flow charts and printouts are included.
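A toy version of the Newton-Raphson output-error identification the programs assume: fit the parameters of a scalar linear perturbation model by Gauss-Newton steps on numerically computed output sensitivities. The model and data are synthetic, not aircraft flight-test data, and the stability clip is a simplification.

```python
import numpy as np

def simulate(a, b, u, x0=0.0):
    """Linear perturbation model x[k+1] = a*x[k] + b*u[k]; returns outputs."""
    x, out = x0, []
    for uk in u:
        x = a * x + b * uk
        out.append(x)
    return np.array(out)

def output_error_fit(y, u, theta0, iters=15, h=1e-6):
    """Newton-Raphson (Gauss-Newton) output-error estimation: update the
    parameter vector using finite-difference output sensitivities."""
    theta = np.array(theta0, float)
    for _ in range(iters):
        y_hat = simulate(theta[0], theta[1], u)
        S = np.column_stack([
            (simulate(*(theta + h * np.eye(2)[j]), u) - y_hat) / h
            for j in range(2)
        ])
        theta = theta + np.linalg.lstsq(S, y - y_hat, rcond=None)[0]
        theta[0] = np.clip(theta[0], -0.95, 0.95)  # crude stability safeguard
    return theta

rng = np.random.default_rng(2)
u = rng.standard_normal(200)
y = simulate(0.8, 0.5, u)                    # noise-free "measured" response
theta_hat = output_error_fit(y, u, theta0=[0.4, 0.2])
```

Instrumentation-error studies like the one described would then corrupt `y` and `u` with modeled sensor errors and observe the bias and scatter of `theta_hat`.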
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
Bandwidth controller for phase-locked-loop
NASA Technical Reports Server (NTRS)
Brockman, Milton H. (Inventor)
1992-01-01
A phase locked loop utilizing digital techniques to control the closed loop bandwidth of the RF carrier phase locked loop in a receiver provides high sensitivity and a wide dynamic range for signal reception. After analog to digital conversion, a digital phase locked loop bandwidth controller provides phase error detection with automatic RF carrier closed loop tracking bandwidth control to accommodate several modes of transmission.
Li, Zhaoying; Zhou, Wenjie; Liu, Hao
2016-09-01
This paper addresses the nonlinear robust tracking controller design problem for hypersonic vehicles. This problem is challenging due to strong coupling between the aerodynamics and the propulsion system, and the uncertainties involved in the vehicle dynamics including parametric uncertainties, unmodeled model uncertainties, and external disturbances. By utilizing the feedback linearization technique, a linear tracking error system is established with prescribed references. For the linear model, a robust controller is proposed based on the signal compensation theory to guarantee that the tracking error dynamics is robustly stable. Numerical simulation results are given to show the advantages of the proposed nonlinear robust control method, compared to the robust loop-shaping control approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Dynamic performance of an aero-assist spacecraft - AFE
NASA Technical Reports Server (NTRS)
Chang, Ho-Pen; French, Raymond A.
1992-01-01
Dynamic performance of the Aero-assist Flight Experiment (AFE) spacecraft was investigated using a high-fidelity 6-DOF simulation model. Baseline guidance logic, control logic, and a strapdown navigation system to be used on the AFE spacecraft are also modeled in the 6-DOF simulation. During the AFE mission, uncertainties in the environment and the spacecraft are described by an error space which includes both correlated and uncorrelated error sources. The principal error sources modeled in this study include navigation errors, initial state vector errors, atmospheric variations, aerodynamic uncertainties, center-of-gravity off-sets, and weight uncertainties. The impact of the perturbations on the spacecraft performance is investigated using Monte Carlo repetitive statistical techniques. During the Solid Rocket Motor (SRM) deorbit phase, a target flight path angle of -4.76 deg at entry interface (EI) offers very high probability of avoiding SRM casing skip-out from the atmosphere. Generally speaking, the baseline designs of the guidance, navigation, and control systems satisfy most of the science and mission requirements.
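The Monte Carlo repetitive statistical technique can be sketched in a few lines: sample the perturbed entry condition and estimate the probability of an undesired outcome. The dispersion and the skip-out criterion below are illustrative assumptions, not the AFE study's models.

```python
import numpy as np

# Monte Carlo dispersion sketch about the -4.76 deg entry flight-path
# angle target; distribution and criterion are hypothetical.
rng = np.random.default_rng(4)
target_fpa = -4.76
n_runs = 10000
fpa = target_fpa + rng.normal(0.0, 0.15, n_runs)   # navigation/state errors
# Toy criterion: entries shallower than -4.3 deg are assumed to skip out
skip_out = fpa > -4.3
p_skip = skip_out.mean()
```
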
Artificial neural network implementation of a near-ideal error prediction controller
NASA Technical Reports Server (NTRS)
Mcvey, Eugene S.; Taylor, Lynore Denise
1992-01-01
A theory has been developed at the University of Virginia which explains the effects of including an ideal predictor in the forward loop of a linear error-sampled system. It has been shown that the presence of this ideal predictor tends to stabilize the class of systems considered. A prediction controller is merely a system which anticipates a signal or part of a signal before it actually occurs. It is understood that an exact prediction controller is physically unrealizable. However, in systems where the input tends to be repetitive or limited (i.e., not random), near-ideal prediction is possible. In order for the controller to act as a stability compensator, the predictor must be designed in a way that allows it to learn the expected error response of the system. In this way, an unstable system will become stable by including the predicted error in the system transfer function. Previous and current prediction controller developments include pattern recognition and fast-time simulation techniques applicable to the analysis of linear sampled-data systems. The use of pattern recognition techniques, along with a template matching scheme, has been proposed as one realizable type of near-ideal prediction. Since many, if not most, systems are repeatedly subjected to similar inputs, it was proposed that an adaptive mechanism be used to 'learn' the correct predicted error response. Once the system has learned the response of all the expected inputs, it is necessary only to recognize the type of input with a template matching mechanism and then to use the correct predicted error to drive the system. Suggested here is an alternate approach to the realization of a near-ideal error prediction controller, one designed using Neural Networks. Neural Networks are good at recognizing patterns such as system responses, and the back-propagation architecture makes use of a template matching scheme.
In using this type of error prediction, it is assumed that the system error responses are known for a particular input and modeled plant. These responses are used in the error prediction controller. An analysis was done on the general dynamic behavior that results from including a digital error predictor in a control loop, and the results were compared to those obtained with the near-ideal Neural Network error predictor. This analysis was done for second- and third-order systems.
Control of Systems With Slow Actuators Using Time Scale Separation
NASA Technical Reports Server (NTRS)
Stepanyan, Vehram; Nguyen, Nhan
2009-01-01
This paper addresses the problem of controlling a nonlinear plant with a slow actuator using singular perturbation method. For the known plant-actuator cascaded system the proposed scheme achieves tracking of a given reference model with considerably less control demand than would otherwise result when using conventional design techniques. This is the consequence of excluding the small parameter from the actuator dynamics via time scale separation. The resulting tracking error is within the order of this small parameter. For the unknown system the adaptive counterpart is developed based on the prediction model, which is driven towards the reference model by the control design. It is proven that the prediction model tracks the reference model with an error proportional to the small parameter, while the prediction error converges to zero. The resulting closed-loop system with all prediction models and adaptive laws remains stable. The benefits of the approach are demonstrated in simulation studies and compared to conventional control approaches.
A Test Reliability Analysis of an Abbreviated Version of the Pupil Control Ideology Form.
ERIC Educational Resources Information Center
Gaffney, Patrick V.
A reliability analysis was conducted of an abbreviated, 10-item version of the Pupil Control Ideology Form (PCI), using Cronbach's alpha (L. J. Cronbach, 1951) and the computation of the standard error of measurement. The PCI measures a teacher's orientation toward pupil control. Subjects were 168 preservice teachers from one private…
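The two quantities reported, Cronbach's alpha and the standard error of measurement, are short computations; a sketch on simulated data shaped like the study (168 respondents, 10 items) is below. The score distributions are assumptions, not the PCI data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_subjects, k_items) matrix of item scores."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

# Simulated data: a common true score plus independent item noise
rng = np.random.default_rng(3)
true_score = rng.normal(0.0, 1.0, 168)
items = true_score[:, None] + rng.normal(0.0, 0.8, (168, 10))
alpha = cronbach_alpha(items)
total = items.sum(axis=1)
sem = total.std(ddof=1) * np.sqrt(1 - alpha)   # standard error of measurement
```
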
Mesh refinement strategy for optimal control problems
NASA Astrophysics Data System (ADS)
Paiva, L. T.; Fontes, F. A. C. C.
2013-10-01
Direct methods are becoming the most widely used technique to solve nonlinear optimal control problems. Regular time meshes with equidistant spacing are frequently used. However, in some cases these meshes cannot cope accurately with nonlinear behavior. One way to improve the solution is to select a new mesh with a greater number of nodes. Another way involves adaptive mesh refinement, in which the mesh nodes have non-equidistant spacing, allowing non-uniform node collocation. In the method presented in this paper, a time-mesh refinement strategy based on the local error is developed. After computing a solution on a coarse mesh, the local error is evaluated, which indicates the subintervals of the time domain where refinement is needed. This procedure is repeated until the local error reaches a user-specified threshold. The technique is applied to solve a car-like vehicle problem aiming at minimum consumption. The approach developed in this paper leads to results with greater accuracy and yet lower overall computational time compared to using a time mesh with equidistant spacing.
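The refine-evaluate loop can be caricatured with a scalar integrand standing in for the solution of a control problem: bisect any subinterval whose local error estimate exceeds the threshold, and repeat. The trapezoid-versus-Simpson discrepancy below is an assumed stand-in for the paper's local error measure.

```python
import numpy as np

def refine_mesh(f, a, b, tol, max_iter=20):
    """Bisect subintervals whose local error estimate exceeds tol;
    a toy analogue of local-error-driven time-mesh refinement."""
    mesh = np.linspace(a, b, 11)
    for _ in range(max_iter):
        new_mesh, refined = [mesh[0]], False
        for lo, hi in zip(mesh[:-1], mesh[1:]):
            mid = 0.5 * (lo + hi)
            trap = 0.5 * (hi - lo) * (f(lo) + f(hi))
            simp = (hi - lo) / 6 * (f(lo) + 4 * f(mid) + f(hi))
            if abs(trap - simp) > tol:        # local error estimate
                new_mesh += [mid, hi]         # refine this subinterval
                refined = True
            else:
                new_mesh.append(hi)
        mesh = np.array(new_mesh)
        if not refined:
            break
    return mesh

f = lambda t: np.exp(-50 * (t - 0.3) ** 2)    # sharp feature near t = 0.3
mesh = refine_mesh(f, 0.0, 1.0, tol=1e-5)
```

Nodes concentrate where the integrand varies rapidly, which is the intended behavior: accuracy where needed without a uniformly fine, and hence expensive, mesh.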
A knowledge-based system design/information tool for aircraft flight control systems
NASA Technical Reports Server (NTRS)
Mackall, Dale A.; Allen, James G.
1991-01-01
Research aircraft have become increasingly dependent on advanced electronic control systems to accomplish program goals. These aircraft are integrating multiple disciplines to improve performance and satisfy research objectives. This integration is being accomplished through electronic control systems. Systems design methods and information management have become essential to program success. The primary objective of the system design/information tool for aircraft flight control is to help transfer flight control system design knowledge to the flight test community. By providing all of the design information and covering multiple disciplines in a structured, graphical manner, flight control systems can more easily be understood by the test engineers. This will provide the engineers with the information needed to thoroughly ground test the system and thereby reduce the likelihood of serious design errors surfacing in flight. The secondary objective is to apply structured design techniques to all of the design domains. By using the techniques in the top-level system design down through the detailed hardware and software designs, it is hoped that fewer design anomalies will result. The flight test experiences are reviewed of three highly complex, integrated aircraft programs: the X-29 forward swept wing; the advanced fighter technology integration (AFTI) F-16; and the highly maneuverable aircraft technology (HiMAT) program. Significant operating anomalies, and the design errors which caused them, are examined to help identify what functions a system design/information tool should provide to assist designers in avoiding errors.
Bardy, Fabrice; Dillon, Harvey; Van Dun, Bram
2014-04-01
Rapid presentation of stimuli in an evoked response paradigm can lead to overlap of multiple responses and consequently difficulties interpreting waveform morphology. This paper presents a deconvolution method allowing overlapping multiple responses to be disentangled. The deconvolution technique uses a least-squares error approach. A methodology is proposed to optimize the stimulus sequence associated with the deconvolution technique under low-jitter conditions. It controls the condition number of the matrices involved in recovering the responses. Simulations were performed using the proposed deconvolution technique. Multiple overlapping responses can be recovered perfectly in noiseless conditions. In the presence of noise, the amount of error introduced by the technique can be controlled a priori by the condition number of the matrix associated with the used stimulus sequence. The simulation results indicate the need for a minimum amount of jitter, as well as a sufficient number of overlap combinations, to obtain optimum results. An aperiodic model is recommended to improve reconstruction. We propose a deconvolution technique allowing multiple overlapping responses to be extracted and a method of choosing the stimulus sequence optimal for response recovery. This technique may allow audiologists, psychologists, and electrophysiologists to optimize their experimental designs involving rapidly presented stimuli, and to recover evoked overlapping responses. Copyright © 2013 International Federation of Clinical Neurophysiology. All rights reserved.
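The least-squares deconvolution and its condition-number-based error control can be sketched directly: stack shifted copies of a unit response into a stimulus matrix, fit by least squares, and read the error amplification off the matrix condition number. The onsets and response shape below are hypothetical, not the paper's optimized sequences.

```python
import numpy as np

def stimulus_matrix(onsets, resp_len, total_len):
    """Each column shifts a unit response to every stimulus onset; the
    least-squares fit of this matrix deconvolves overlapping responses."""
    M = np.zeros((total_len, resp_len))
    for t in onsets:
        for j in range(resp_len):
            if t + j < total_len:
                M[t + j, j] += 1.0
    return M

# Jittered (aperiodic) onsets keep the matrix well conditioned
onsets = [0, 13, 29, 41, 58, 66, 81, 95]
resp_len, total_len = 12, 110
true_resp = np.sin(np.linspace(0, np.pi, resp_len))
M = stimulus_matrix(onsets, resp_len, total_len)
recording = M @ true_resp                      # overlapping responses, noiseless
est_resp, *_ = np.linalg.lstsq(M, recording, rcond=None)
cond = np.linalg.cond(M)   # a priori bound on noise amplification
```

With noise added to `recording`, the recovery error scales with `cond`, which is why the paper optimizes the stimulus sequence to keep it small.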
Identification of Low Order Equivalent System Models From Flight Test Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
Identification of low order equivalent system dynamic models from flight test data was studied. Inputs were pilot control deflections, and outputs were aircraft responses, so the models characterized the total aircraft response including bare airframe and flight control system. Theoretical investigations were conducted and related to results found in the literature. Low order equivalent system modeling techniques using output error and equation error parameter estimation in the frequency domain were developed and validated on simulation data. It was found that some common difficulties encountered in identifying closed loop low order equivalent system models from flight test data could be overcome using the developed techniques. Implications for data requirements and experiment design were discussed. The developed methods were demonstrated using realistic simulation cases, then applied to closed loop flight test data from the NASA F-18 High Alpha Research Vehicle.
Zheng, Shiqi; Tang, Xiaoqi; Song, Bao; Lu, Shaowu; Ye, Bosheng
2013-07-01
In this paper, a stable adaptive PI control strategy based on the improved just-in-time learning (IJITL) technique is proposed for a permanent magnet synchronous motor (PMSM) drive. Firstly, the traditional JITL technique is improved. The new IJITL technique has less computational burden and is better suited than traditional JITL to online identification of the PMSM drive system, which has stringent real-time requirements. In this way, the PMSM drive system is identified by the IJITL technique, which provides information to an adaptive PI controller. Secondly, the adaptive PI controller is designed in the discrete-time domain and is composed of a PI controller and a supervisory controller. The PI controller is capable of automatically tuning the control gains online based on the gradient descent method, and the supervisory controller is developed to eliminate the effect of the approximation error introduced by the PI controller upon the system stability in the Lyapunov sense. Finally, experimental results on the PMSM drive system show accurate identification and favorable tracking performance. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
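The gradient-descent gain tuning can be caricatured on a first-order plant standing in for the identified motor model. The plant constants, step sizes, and cost are assumptions, and the gradients are computed numerically here rather than from the online-identified model as in the paper.

```python
import numpy as np

# Discrete first-order plant standing in for the identified motor model
a_p, b_p = 0.9, 0.2

def run(kp, ki, steps=120, ref=1.0):
    """Closed-loop step-response errors under a discrete PI controller."""
    x, integ, errs = 0.0, 0.0, []
    for _ in range(steps):
        e = ref - x
        integ += e
        x = a_p * x + b_p * (kp * e + ki * integ)
        errs.append(e)
    return np.array(errs)

def tune_gains(kp, ki, rounds=30, lr=0.02, h=1e-4):
    """Gradient-descent tuning of the PI gains on a squared-error cost,
    using normalized finite-difference gradients."""
    J = lambda p: float(np.sum(run(p[0], p[1]) ** 2))
    for _ in range(rounds):
        base = J((kp, ki))
        g = np.array([(J((kp + h, ki)) - base) / h,
                      (J((kp, ki + h)) - base) / h])
        kp, ki = np.array([kp, ki]) - lr * g / (np.linalg.norm(g) + 1e-9)
        kp, ki = max(kp, 0.0), max(ki, 0.0)   # keep gains non-negative
    return kp, ki

kp0, ki0 = 0.1, 0.01
kp1, ki1 = tune_gains(kp0, ki0)
```

The paper's supervisory controller would sit on top of such a loop to guarantee Lyapunov stability despite the approximation error; that layer is omitted here.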
Reliability and coverage analysis of non-repairable fault-tolerant memory systems
NASA Technical Reports Server (NTRS)
Cox, G. W.; Carroll, B. D.
1976-01-01
A method was developed for the construction of probabilistic state-space models for nonrepairable systems. Models were developed for several systems which achieved reliability improvement by means of error coding, modularized sparing, massive replication, and other fault-tolerant techniques. From these models, sets of reliability and coverage equations for the systems were derived. Comparative analyses of the systems were performed using these equation sets. In addition, the effects of varying subunit reliabilities on system reliability and coverage were described. The results of these analyses indicated that a significant gain in system reliability may be achieved by combinations of modularized sparing, error coding, and software error control. For sufficiently reliable system subunits, this gain may far exceed the reliability gain achieved by massive replication techniques, yet result in a considerable saving in system cost.
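The trade-off described above, sparing versus massive replication, can be illustrated with textbook closed-form reliability expressions. This is a generic sketch, not the report's state-space models; independent failures and (for the spare) the stated coverage are assumptions.

```python
# Closed-form reliability of three simple configurations as functions of
# unit reliability R (hedged textbook sketch, not the report's models).
def r_single(R):
    return R

def r_tmr(R):
    # triple modular redundancy: system survives if >= 2 of 3 units survive
    return 3 * R**2 * (1 - R) + R**3

def r_spare(R, c=1.0):
    # one spare with coverage c: primary survives, or its failure is
    # detected (prob. c) and the spare survives; spare treated as an
    # independent unit of the same reliability for simplicity
    return R + c * (1 - R) * R
```

For highly reliable subunits (e.g. R = 0.9) the single covered spare already beats TMR, echoing the report's conclusion that sparing can outperform massive replication at lower cost.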
Reliability of System Identification Techniques to Assess Standing Balance in Healthy Elderly
Maier, Andrea B.; Aarts, Ronald G. K. M.; van Gerven, Joop M. A.; Arendzen, J. Hans; Schouten, Alfred C.; Meskers, Carel G. M.; van der Kooij, Herman
2016-01-01
Objectives System identification techniques have the potential to assess the contribution of the underlying systems involved in standing balance by applying well-known disturbances. We investigated the reliability of standing balance parameters obtained with multivariate closed-loop system identification techniques. Methods In twelve healthy elderly participants, balance tests were performed twice a day on three days. Body sway was measured during two minutes of standing with eyes closed, and the Balance test Room (BalRoom) was used to apply four disturbances simultaneously: two sensory disturbances, to the proprioceptive and the visual system, and two mechanical disturbances, applied at the leg and trunk segments. Using system identification techniques, sensitivity functions of the sensory disturbances and the neuromuscular controller were estimated. Based on generalizability theory (G theory), systematic errors and sources of variability were assessed using linear mixed models, and reliability was assessed by computing indexes of dependability (ID), standard error of measurement (SEM), and minimal detectable change (MDC). Results A systematic error was found between the first and second trials in the sensitivity functions. No systematic error was found in the neuromuscular controller or body sway. The reliability of 15 of the 25 parameters and of body sway was moderate to excellent when the results of two trials on three days were averaged. It was predicted that to reach excellent reliability on a single day for 7 of the 25 parameters, at least seven trials must be averaged. Conclusion This study shows that system identification techniques are a promising method to assess the underlying systems involved in standing balance in the elderly. However, most of the parameters do not appear to be reliable unless a large number of trials are collected across multiple days. 
To reach an excellent reliability in one third of the parameters, a training session for participants is needed and at least seven trials of two minutes must be performed on one day. PMID:26953694
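The reliability indices named in this abstract follow standard formulas. The sketch below is our own illustration using the common definitions SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM, plus the generalizability-theory projection for the dependability of an n-trial mean; the numeric inputs are assumptions, not the study's data.

```python
import math

# Standard reliability indices (hedged sketch; formulas are the common
# textbook definitions, not reproduced from the paper's analysis).
def sem(sd, icc):
    # standard error of measurement from between-subject SD and reliability
    return sd * math.sqrt(1.0 - icc)

def mdc95(sd, icc):
    # minimal detectable change at 95% confidence (two measurements)
    return 1.96 * math.sqrt(2.0) * sem(sd, icc)

def dependability_of_mean(icc, n):
    # G-theory-style projection: averaging n trials shrinks the relative
    # error variance by a factor of n (Spearman-Brown form)
    return icc * n / (1.0 + (n - 1) * icc)
```

For instance, a parameter with single-trial dependability 0.5 projects to 0.875 when seven trials are averaged, which is the flavor of prediction the abstract reports.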
ERIC Educational Resources Information Center
Fouladi, Rachel T.
2000-01-01
Provides an overview of standard and modified normal theory and asymptotically distribution-free covariance and correlation structure analysis techniques and details Monte Carlo simulation results on Type I and Type II error control. Demonstrates through the simulation that robustness and nonrobustness of structure analysis techniques vary as a…
Performance capabilities of a JPL dual-arm advanced teleoperation system
NASA Technical Reports Server (NTRS)
Szakaly, Z. F.; Bejczy, A. K.
1991-01-01
The system comprises: (1) two PUMA 560 robot arms, each equipped with the latest JPL-developed smart hands, which contain 3-D force/moment and grasp force sensors; (2) two general purpose force reflecting hand controllers; (3) an NS32016 microprocessor-based distributed computing system together with JPL-developed universal motor controllers; (4) graphics display of sensor data; (5) capabilities for time delay experiments; and (6) automatic data recording capabilities. Several different types of control modes are implemented on this system using different feedback control techniques. Some of the control modes and the related feedback control techniques are described, and the achievable control performance for tracking position and force trajectories is reported. The interaction between position and force trajectory tracking is illustrated. The best performance is obtained by using a novel, task space error feedback technique.
Abdelkarim, Noha; Mohamed, Amr E; El-Garhy, Ahmed M; Dorrah, Hassen T
2016-01-01
The two-coupled distillation column process is a physically complicated system in many aspects. Specifically, the nested interrelationship between system inputs and outputs constitutes one of the significant challenges in system control design. Mostly, such a process is decoupled into several input/output pairings (loops), so that a single controller can be assigned to each loop. In the frame of this research, the Brain Emotional Learning Based Intelligent Controller (BELBIC) forms the control structure for each decoupled loop. The paper's main objective is to develop a parameterization technique for the decoupling and control schemes which ensures robust control behavior. In this regard, the novel optimization technique Bacterial Swarm Optimization (BSO) is utilized to minimize the summation of the integral time-weighted squared errors (ITSEs) over all control loops. This optimization technique is a hybrid of the Particle Swarm and Bacterial Foraging algorithms. According to the simulation results, the hybridized technique achieves a low computational burden together with high decoupling and control accuracy. Moreover, the behavior analysis of the proposed BELBIC shows a remarkable improvement in time-domain behavior and robustness over the conventional PID controller.
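The fitness that the hybrid BSO tuner minimizes, the summation of ITSEs over all loops, can be written directly from its definition. This discrete-time version is our own sketch, with the sampling grid as an assumption:

```python
# Discrete-time ITSE cost (hedged sketch of the fitness, not the authors'
# code): integral of t * e(t)^2 approximated on a uniform sampling grid.
def itse(errors, dt):
    """Sum of t_k * e_k^2 * dt over one loop's sampled error signal."""
    return sum(k * dt * e * e * dt for k, e in enumerate(errors))

def total_itse(loops, dt):
    """The optimizer's objective: summed ITSE over all control loops."""
    return sum(itse(e, dt) for e in loops)
```

The time weighting t·e² penalizes errors that persist late in the response, which is why ITSE-tuned loops favor fast settling over small initial transients.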
Transient fault behavior in a microprocessor: A case study
NASA Technical Reports Server (NTRS)
Duba, Patrick
1989-01-01
An experimental analysis is described which studies the susceptibility of a microprocessor-based jet engine controller to upsets caused by current and voltage transients. A design automation environment is described which allows the run-time injection of transients and the tracing of their impact from the device level to the pin level. The resulting error data are categorized by the charge levels of the injected transients, by location, and by their potential to cause logic upsets, latched errors, and pin errors. The results show a 3-picocoulomb threshold, below which the transients have little impact. An Arithmetic and Logic Unit transient is most likely to result in logic upsets and pin errors (i.e., to impact the external environment). Transients in the countdown unit are potentially serious since they can result in latched errors, thus causing latent faults. Suggestions to protect the processor against these errors, by incorporating internal error detection and transient suppression techniques, are also made.
A VLF-based technique in applications to digital control of nonlinear hybrid multirate systems
NASA Astrophysics Data System (ADS)
Vassilyev, Stanislav; Ulyanov, Sergey; Maksimkin, Nikolay
2017-01-01
In this paper, a technique for rigorous analysis and design of nonlinear multirate digital control systems on the basis of the reduction method and sublinear vector Lyapunov functions is proposed. The control system model under consideration incorporates continuous-time dynamics of the plant and discrete-time dynamics of the controller and takes into account uncertainties of the plant, bounded disturbances, nonlinear characteristics of sensors and actuators. We consider a class of multirate systems where the control update rate is slower than the measurement sampling rates and periodic non-uniform sampling is admitted. The proposed technique does not use the preliminary discretization of the system, and, hence, allows one to eliminate the errors associated with the discretization and improve the accuracy of analysis. The technique is applied to synthesis of digital controller for a flexible spacecraft in the fine stabilization mode and decentralized controller for a formation of autonomous underwater vehicles. Simulation results are provided to validate the good performance of the designed controllers.
Transient Faults in Computer Systems
NASA Technical Reports Server (NTRS)
Masson, Gerald M.
1993-01-01
A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.
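The certification-trail idea can be made concrete with the classic sorting example: the first execution records a trail (here, a permutation), and a second, cheaper execution certifies the output against it, so a transient fault that corrupts either run produces a disagreement. This is a generic illustration of the approach, not the project's implementation.

```python
# Certification-trail sketch (generic illustration): first run produces the
# answer plus a trail; the checker certifies the answer in linear time.
def sort_with_trail(xs):
    trail = sorted(range(len(xs)), key=lambda i: xs[i])  # permutation trail
    return [xs[i] for i in trail], trail

def check_with_trail(xs, result, trail):
    # the trail must be a permutation of the indices...
    if sorted(trail) != list(range(len(xs))):
        return False
    # ...the permuted input must reproduce the claimed result...
    if [xs[i] for i in trail] != result:
        return False
    # ...and the result must be non-decreasing
    return all(result[i] <= result[i + 1] for i in range(len(result) - 1))
```

The checker never re-sorts, so its coverage does not depend on an error model: any output corrupted by a transient fault fails one of the three checks.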
Optimal interpolation analysis of leaf area index using MODIS data
Gu, Yingxin; Belair, Stephane; Mahfouf, Jean-Francois; Deblonde, Godelieve
2006-01-01
A simple data analysis technique for vegetation leaf area index (LAI) using Moderate Resolution Imaging Spectroradiometer (MODIS) data is presented. The objective is to generate LAI data that is appropriate for numerical weather prediction. A series of techniques and procedures which includes data quality control, time-series data smoothing, and simple data analysis is applied. The LAI analysis is an optimal combination of the MODIS observations and derived climatology, depending on their associated errors σo and σc. The “best estimate” LAI is derived from a simple three-point smoothing technique combined with a selection of maximum LAI (after data quality control) values to ensure a higher quality. The LAI climatology is a time smoothed mean value of the “best estimate” LAI during the years of 2002–2004. The observation error is obtained by comparing the MODIS observed LAI with the “best estimate” of the LAI, and the climatological error is obtained by comparing the “best estimate” of LAI with the climatological LAI value. The LAI analysis is the result of a weighting between these two errors. Demonstration of the method described in this paper is presented for the 15-km grid of Meteorological Service of Canada (MSC)'s regional version of the numerical weather prediction model. The final LAI analyses have a relatively smooth temporal evolution, which makes them more appropriate for environmental prediction than the original MODIS LAI observation data. They are also more realistic than the LAI data currently used operationally at the MSC which is based on land-cover databases.
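The analysis step described above reduces, per grid point and date, to a scalar optimal interpolation: a variance-weighted mean of the MODIS observation and the climatology. The sketch below is our own rendering of that weighting, with symbols matching the abstract's σo and σc:

```python
# Scalar optimal interpolation of LAI (hedged sketch of the weighting
# described in the abstract, not the MSC operational code).
def lai_analysis(lai_obs, lai_clim, sigma_o, sigma_c):
    # weight on climatology grows with observation error variance
    w = sigma_o**2 / (sigma_o**2 + sigma_c**2)
    return w * lai_clim + (1.0 - w) * lai_obs
```

When the two error estimates are equal the analysis is the plain average; as the observation error shrinks, the analysis is drawn toward the MODIS value, which is what gives the analyzed time series its smooth evolution.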
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the in-flight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. 
This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostics, controls, and life-usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computational burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
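The "theoretical mean-squared estimation error of the Kalman filter" that the search routine minimizes can be computed by iterating the discrete Riccati recursion to steady state and taking the trace of the error covariance. The toy two-state model below is our own illustration of scoring two candidate sensor/tuner configurations this way; all matrices are assumptions, not an engine model.

```python
import numpy as np

# Score a candidate configuration by the steady-state Kalman error
# covariance trace (hedged toy, not the NASA tuner-selection routine).
def steady_state_mse(A, C, Q, R, iters=500):
    n = A.shape[0]
    P = np.eye(n)
    for _ in range(iters):
        Pp = A @ P @ A.T + Q                               # predict
        K = Pp @ C.T @ np.linalg.inv(C @ Pp @ C.T + R)     # Kalman gain
        P = (np.eye(n) - K @ C) @ Pp                       # update
    return np.trace(P)   # theoretical mean-squared estimation error

A = np.diag([0.9, 0.8])          # assumed stable two-state dynamics
Q = 0.01 * np.eye(2)             # process noise covariance
R = np.array([[0.1]])            # single-sensor noise covariance
mse_sensor1 = steady_state_mse(A, np.array([[1.0, 0.0]]), Q, R)
mse_sensor2 = steady_state_mse(A, np.array([[0.0, 1.0]]), Q, R)
```

An iterative search over candidate configurations would simply keep the one with the smallest trace, which is the selection criterion the abstract describes.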
Gauthier, Philippe-Aubert; Berry, Alain; Woszczyk, Wieslaw
2005-02-01
This paper describes the simulations and results obtained when applying optimal control to progressive sound-field reproduction (mainly for audio applications) over an area using multiple monopole loudspeakers. The model simulates a reproduction system that operates either in free field or in a closed space approaching a typical listening room, and is based on optimal control in the frequency domain. This rather simple approach is chosen for the purpose of physical investigation, especially in terms of sensing microphones and reproduction loudspeakers configurations. Other issues of interest concern the comparison with wave-field synthesis and the control mechanisms. The results suggest that in-room reproduction of sound field using active control can be achieved with a residual normalized squared error significantly lower than open-loop wave-field synthesis in the same situation. Active reproduction techniques have the advantage of automatically compensating for the room's natural dynamics. For the considered cases, the simulations show that optimal control results are not sensitive (in terms of reproduction error) to wall absorption in the reproduction room. A special surrounding configuration of sensors is introduced for a sensor-free listening area in free field.
Cheng, Ching-Min; Hwang, Sheue-Ling
2015-03-01
This paper outlines the human error identification (HEI) techniques that currently exist to assess latent human errors. Many formal error identification techniques have existed for years, but few have been validated to cover latent human error analysis in different domains. This study considers many possible error modes and influential factors, including external error modes, internal error modes, psychological error mechanisms, and performance shaping factors, and integrates several execution procedures and frameworks of HEI techniques. The case study in this research was the operational process of changing chemical cylinders in a factory. In addition, the integrated HEI method was used to assess the operational processes and the system's reliability. It was concluded that the integrated method is a valuable aid to develop much safer operational processes and can be used to predict human error rates on critical tasks in the plant. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Suppressing relaxation in superconducting qubits by quasiparticle pumping.
Gustavsson, Simon; Yan, Fei; Catelani, Gianluigi; Bylander, Jonas; Kamal, Archana; Birenbaum, Jeffrey; Hover, David; Rosenberg, Danna; Samach, Gabriel; Sears, Adam P; Weber, Steven J; Yoder, Jonilyn L; Clarke, John; Kerman, Andrew J; Yoshihara, Fumiki; Nakamura, Yasunobu; Orlando, Terry P; Oliver, William D
2016-12-23
Dynamical error suppression techniques are commonly used to improve coherence in quantum systems. They reduce dephasing errors by applying control pulses designed to reverse erroneous coherent evolution driven by environmental noise. However, such methods cannot correct for irreversible processes such as energy relaxation. We investigate a complementary, stochastic approach to reducing errors: Instead of deterministically reversing the unwanted qubit evolution, we use control pulses to shape the noise environment dynamically. In the context of superconducting qubits, we implement a pumping sequence to reduce the number of unpaired electrons (quasiparticles) in close proximity to the device. A 70% reduction in the quasiparticle density results in a threefold enhancement in qubit relaxation times and a comparable reduction in coherence variability. Copyright © 2016, American Association for the Advancement of Science.
A Survey of Terrain Modeling Technologies and Techniques
2007-09-01
ERDC/TEC TR-08-2. Abstract: Test planning, rehearsal, and distributed test events for Future Combat Systems (FCS) require terrain models. [Remainder of record is figure-caption residue: distributions of elevation error versus distance for five lines of control points, with DSM (original data) shown as blue circles and DTM (bare earth, processed by Intermap) as red squares.]
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1990-01-01
An expurgated upper bound on the event error probability of trellis-coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d_free of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d_free growth rate, respectively, can be obtained from the cutoff rate R_0 of the transmission channel by a simple geometric construction, making R_0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.
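For a concrete feel for the cutoff rate's role, R_0 for binary antipodal signaling over an AWGN channel with soft decisions has the standard closed form R_0 = 1 − log2(1 + e^(−Es/N0)), via the Bhattacharyya parameter. The snippet below evaluates this textbook expression; it is an illustration of the quantity, not the paper's geometric construction.

```python
import math

# Cutoff rate of the binary-input soft-decision AWGN channel
# (standard textbook formula, stated here as a hedged illustration).
def cutoff_rate_bpsk_awgn(es_over_n0):
    # Bhattacharyya parameter for antipodal signals is exp(-Es/N0)
    return 1.0 - math.log2(1.0 + math.exp(-es_over_n0))
```

R_0 rises monotonically from 0 bits at 0 SNR toward 1 bit at high SNR, and it is this single channel parameter from which both bounds in the paper are constructed.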
Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin
2017-11-01
In this paper, a backstepping-based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to part of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that with the proposed distributed adaptive control scheme, global uniform boundedness of all closed-loop signals and asymptotic output consensus tracking can be achieved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Development of an adaptive hp-version finite element method for computational optimal control
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Warner, Michael S.
1994-01-01
In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.
Wang, Yingyang; Hu, Jianbo
2018-05-19
An improved prescribed performance controller is proposed for the longitudinal model of an air-breathing hypersonic vehicle (AHV) subject to uncertain dynamics and input nonlinearity. Different from traditional non-affine models, which require the non-affine functions to be differentiable, this paper utilizes a semi-decomposed non-affine model whose non-affine functions are locally semi-bounded and possibly non-differentiable. A new error transformation combined with novel prescribed performance functions is proposed to bypass the complex deductions caused by conventional error constraint approaches and to circumvent high-frequency chattering in the control inputs. On the basis of the backstepping technique, an improved prescribed performance controller with low structural and computational complexity is designed. The methodology keeps the altitude and velocity tracking errors within transient and steady-state performance envelopes and presents excellent robustness against uncertain dynamics and dead-zone input nonlinearity. Simulation results demonstrate the efficacy of the proposed method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
A Real-Time Position-Locating Algorithm for CCD-Based Sunspot Tracking
NASA Technical Reports Server (NTRS)
Taylor, Jaime R.
1996-01-01
NASA Marshall Space Flight Center's (MSFC) EXperimental Vector Magnetograph (EXVM) polarimeter measures the sun's vector magnetic field. These measurements are taken to improve understanding of the sun's magnetic field, in hopes of better predicting solar flares. Part of the procedure for the EXVM requires image motion stabilization over a period of a few minutes. A high-speed tracker can be used to reduce image motion produced by wind loading on the EXVM, fluctuations in the atmosphere, and other vibrations. The tracker consists of two elements, an image motion detector and a control system. The image motion detector determines the image movement from one frame to the next and sends an error signal to the control system. The ground-based application of reducing image motion due to atmospheric fluctuations requires error determination at a rate of at least 100 Hz. An error determination rate of 1 kHz would be desirable to ensure that higher-rate image motion is reduced and to increase control system stability. Two algorithms typically used for tracking are presented. These algorithms are examined for their applicability to tracking sunspots, specifically their accuracy when only one column and one row of CCD pixels are used. To examine the accuracy of this method, two techniques are used. One involves moving a sunspot image a known distance with computer software, then applying the particular algorithm to see how accurately it determines this movement. The second involves using a rate table to control the object motion, then applying the algorithms to see how accurately each determines the actual motion. Results from these two techniques are presented.
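The single-row/single-column accuracy question studied above can be prototyped with an intensity-centroid tracker: since a sunspot is dark, each profile is weighted by its darkness, and the centroid shift between frames becomes the error signal. This is our own sketch of the idea, not the EXVM tracker.

```python
import numpy as np

# Sub-pixel motion estimate from one CCD row and one column
# (hedged sketch of the centroid approach, not the flight code).
def centroid(profile):
    w = profile.max() - profile          # darkness weight: sunspot is dark
    idx = np.arange(len(profile))
    return (idx * w).sum() / w.sum()

def track(row_ref, col_ref, row_new, col_new):
    # centroid shifts along the row and column give the (dx, dy) error
    dx = centroid(row_new) - centroid(row_ref)
    dy = centroid(col_new) - centroid(col_ref)
    return dx, dy                        # error signal to the control loop
```

Because only one row and one column are processed per frame, the per-update cost is linear in the detector width, which is what makes kHz-rate error determination plausible.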
Image Registration: A Necessary Evil
NASA Technical Reports Server (NTRS)
Bell, James; McLachlan, Blair; Hermstad, Dexter; Trosin, Jeff; George, Michael W. (Technical Monitor)
1995-01-01
Registration of test and reference images is a key component of nearly all PSP data reduction techniques. This is done to ensure that a test image pixel viewing a particular point on the model is ratioed by the reference image pixel which views the same point. Typically registration is needed to account for model motion due to differing airloads when the wind-off and wind-on images are taken. Registration is also necessary when two cameras are used for simultaneous acquisition of data from a dual-frequency paint. This presentation will discuss the advantages and disadvantages of several different image registration techniques. In order to do so, it is necessary to propose both an accuracy requirement for image registration and a means for measuring the accuracy of a particular technique. High contrast regions in the unregistered images are most sensitive to registration errors, and it is proposed that these regions be used to establish the error limits for registration. Once this is done, the actual registration error can be determined by locating corresponding points on the test and reference images, and determining how well a particular registration technique matches them. An example of this procedure is shown for three transforms used to register images of a semispan model. Thirty control points were located on the model. A subset of the points were used to determine the coefficients of each registration transform, and the error with which each transform aligned the remaining points was determined. The results indicate the general superiority of a third-order polynomial over other candidate transforms, as well as showing how registration accuracy varies with number of control points. Finally, it is proposed that image registration may eventually be done away with completely. As more accurate image resection techniques and more detailed model surface grids become available, it will be possible to map raw image data onto the model surface accurately. 
Intensity ratio data can then be obtained by a "model surface ratio," rather than an image ratio. The problems and advantages of this technique will be discussed.
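The control-point procedure described above, fitting a registration transform on a subset of points and scoring alignment error on the held-out ones, can be sketched with an ordinary least-squares polynomial fit. The third-order basis and the point counts below are illustrative assumptions, not the wind-tunnel data.

```python
import numpy as np

# Polynomial image-registration transform fit by least squares on control
# points (hedged sketch of the procedure, not the PSP reduction code).
def poly_terms(x, y, order=3):
    # full bivariate basis of total degree <= order
    return np.array([x**i * y**j
                     for i in range(order + 1)
                     for j in range(order + 1 - i)])

def fit_transform(src, dst, order=3):
    A = np.array([poly_terms(x, y, order) for x, y in src])
    cx, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return cx, cy

def apply_transform(cx, cy, pts, order=3):
    A = np.array([poly_terms(x, y, order) for x, y in pts])
    return np.column_stack([A @ cx, A @ cy])
```

Fitting on a subset of the control points and measuring residuals on the remainder is exactly the accuracy-assessment scheme the presentation proposes; varying the subset size shows how registration accuracy depends on the number of control points.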
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1994-01-01
Brief summaries of research in the following areas are presented: (1) construction of optimum geometrically uniform trellis codes; (2) a statistical approach to constructing convolutional code generators; and (3) calculating the exact performance of a convolutional code.
Mathias, Patrick C; Turner, Emily H; Scroggins, Sheena M; Salipante, Stephen J; Hoffman, Noah G; Pritchard, Colin C; Shirts, Brian H
2016-03-01
To apply techniques for ancestry and sex computation from next-generation sequencing (NGS) data as an approach to confirm sample identity and detect sample processing errors. We combined a principal component analysis method with k-nearest neighbors classification to compute the ancestry of patients undergoing NGS testing. By combining this calculation with X chromosome copy number data, we determined the sex and ancestry of patients for comparison with self-report. We also modeled the sensitivity of this technique in detecting sample processing errors. We applied this technique to 859 patient samples with reliable self-report data. Our k-nearest neighbors ancestry screen had an accuracy of 98.7% for patients reporting a single ancestry. Visual inspection of principal component plots was consistent with self-report in 99.6% of single-ancestry and mixed-ancestry patients. Our model demonstrates that approximately two-thirds of potential sample swaps could be detected in our patient population using this technique. Patient ancestry can be estimated from NGS data incidentally sequenced in targeted panels, enabling an inexpensive quality control method when coupled with patient self-report. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
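A minimal version of the screen, PCA followed by k-nearest-neighbor majority vote, is sketched below. It is our own toy: the feature vectors, cluster positions, and ancestry labels such as "EUR"/"EAS" are hypothetical, it is not the paper's pipeline, and it omits the X-chromosome sex check.

```python
import numpy as np

# PCA (via SVD) + kNN majority vote (hedged toy of the ancestry screen).
def pca_fit(X, n_components=2):
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    comps = Vt[:n_components]                 # principal axes
    return mean, comps, (X - mean) @ comps.T  # projected reference set

def knn_classify(query, mean, comps, ref_proj, ref_labels, k=5):
    q = (query - mean) @ comps.T              # project query into PC space
    order = np.argsort(np.linalg.norm(ref_proj - q, axis=1))
    votes = [ref_labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)   # majority label
```

Comparing the predicted label against patient self-report, as the paper does, turns this incidental computation into a cheap sample-swap detector.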
Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity
NASA Technical Reports Server (NTRS)
Baker, John G.; Van Meter, James R.
2005-01-01
A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.
Parallel processing spacecraft communication system
NASA Technical Reports Server (NTRS)
Bolotin, Gary S. (Inventor); Donaldson, James A. (Inventor); Luong, Huy H. (Inventor); Wood, Steven H. (Inventor)
1998-01-01
An uplink controlling assembly speeds data processing using a special parallel codeblock technique. A correct start sequence initiates processing of a frame. Two possible start sequences can be used, and the one which is used determines whether data polarity is inverted or non-inverted. Processing continues until uncorrectable errors are found. The frame ends by intentionally sending a block with an uncorrectable error. Each of the codeblocks in the frame has a channel ID. Each channel ID can be separately processed in parallel. This obviates the problem of waiting for error correction processing. If that channel number is zero, however, it indicates that the frame of data represents a critical command only. That data is handled in a special way, independent of the software. Otherwise, the processed data is further handled using special double buffering techniques to avoid problems from overrun. When overrun does occur, the system takes action to lose only the oldest data.
NASA Technical Reports Server (NTRS)
Birchenough, A. G.
1975-01-01
A digital speed control that can be combined with a proportional analog controller is described. The stability and transient response of the analog controller were retained and combined with the long-term accuracy of a crystal-controlled integral controller. A relatively simple circuit was developed by using phase-locked-loop techniques and total error storage. The integral digital controller will maintain speed control accuracy equal to that of the crystal reference oscillator.
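The combination described, an analog proportional term plus a crystal-referenced integral term built on total error storage, can be sketched as a discrete-time control loop. The plant model, gains, and step count below are illustrative assumptions, not values from the report:

```python
def simulate(steps=500, setpoint=100.0, kp=0.05, ki=0.01):
    """Discrete-time sketch: the proportional term gives fast transient
    response, while the accumulated ('total') error term, referenced to
    the setpoint, drives the long-term speed error to zero."""
    speed = 0.0
    total_error = 0.0  # plays the role of the stored phase error
    for _ in range(steps):
        error = setpoint - speed
        total_error += error
        speed += kp * error + ki * total_error  # unit-gain first-order plant
    return speed

final_speed = simulate()  # settles at the 100.0 setpoint
```

The proportional path alone would leave a steady-state offset under load; the integral of the error, like the phase-locked loop's accumulated phase difference against the crystal reference, removes it.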
Zaafouri, Abderrahmen; Regaya, Chiheb Ben; Azza, Hechmi Ben; Châari, Abdelkader
2016-01-01
This paper presents a modified structure for backstepping nonlinear control of the induction motor (IM) fitted with an adaptive backstepping speed observer. The control design is based on the backstepping technique, complemented by the introduction of integral tracking error action to improve its robustness. Unlike other research performed on backstepping control with integral action, the control law developed in this paper does not increase the number of system states, so as not to increase the complexity of the differential-equation resolution. The digital simulation and experimental results show the effectiveness of the proposed control compared to the conventional PI control. The analysis of the results shows the robustness of the adaptive control to load disturbances, speed variation, and low-speed operation. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Selecting a software development methodology. [of digital flight control systems
NASA Technical Reports Server (NTRS)
Jones, R. E.
1981-01-01
The state-of-the-art analytical techniques for the development and verification of digital flight control software are studied, and a practical, designer-oriented development and verification methodology is produced. The effectiveness of the analytic techniques chosen for the development and verification methodology is assessed both technically and financially. Technical assessments analyze the error preventing and detecting capabilities of the chosen technique in all of the pertinent software development phases. Financial assessments describe the cost impact of using the techniques, specifically, the cost of implementing and applying the techniques as well as the realizable cost savings. Both the technical and financial assessments are quantitative where possible. In the case of techniques which cannot be quantitatively assessed, qualitative judgements are expressed about the effectiveness and cost of the techniques. The reasons why quantitative assessments are not possible will be documented.
NASA Technical Reports Server (NTRS)
Gordon, Steven C.
1993-01-01
Spacecraft in orbit near libration point L1 in the Sun-Earth system are excellent platforms for research concerning solar effects on the terrestrial environment. One spacecraft mission launched in 1978 used an L1 orbit for nearly 4 years, and future L1 orbital missions are also being planned. Orbit determination and station-keeping are, however, required for these orbits. In particular, orbit determination error analysis may be used to compute the state uncertainty after a predetermined tracking period; the predicted state uncertainty levels then will impact the control costs computed in station-keeping simulations. Error sources, such as solar radiation pressure and planetary mass uncertainties, are also incorporated. For future missions, there may be some flexibility in the type and size of the spacecraft's nominal trajectory, but different orbits may produce varying error analysis and station-keeping results. The nominal path, for instance, can be (nearly) periodic or distinctly quasi-periodic. A periodic 'halo' orbit may be constructed to be significantly larger than a quasi-periodic 'Lissajous' path; both may meet mission requirements, but the required control costs for these orbits may differ. Also for this spacecraft tracking and control simulation problem, experimental design methods can be used to determine the most significant uncertainties. That is, these methods can determine the error sources in the tracking and control problem that most impact the control cost (output); they also produce an equation that gives the approximate functional relationship between the error inputs and the output.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batista, Antonio J. N.; Santos, Bruno; Fernandes, Ana
The data acquisition and control instrumentation cubicles room of the ITER tokamak will be irradiated with neutrons during fusion reactor operation. A Virtex-6 FPGA from Xilinx (XC6VLX365T-1FFG1156C) is used on the ATCA-IO-PROCESSOR board, included in the ITER Catalog of I and C products - Fast Controllers. The Virtex-6 is a re-programmable logic device where the configuration is stored in Static RAM (SRAM), functional data are stored in dedicated Block RAM (BRAM), and functional state logic in Flip-Flops. Single Event Upsets (SEU) due to the ionizing radiation of neutrons cause soft errors: unintended changes (bit-flips) to the values stored in state elements of the FPGA. SEU monitoring and, where possible, soft-error repair were explored in this work. An FPGA built-in Soft Error Mitigation (SEM) controller detects and corrects soft errors in the FPGA configuration memory. Novel SEU sensors with Error Correction Code (ECC) detect and repair the BRAM memories. Proper management of SEU can increase the reliability and availability of control instrumentation hardware for nuclear applications. The results of the tests performed using the SEM controller and the BRAM SEU sensors are presented for a Virtex-6 FPGA (XC6VLX240T-1FFG1156C) irradiated with neutrons from the Portuguese Research Reactor (RPI), a 1 MW nuclear fission reactor operated by IST in the neighborhood of Lisbon. Results show that the proposed SEU mitigation technique is able to repair the majority of the detected SEU errors in the configuration and BRAM memories. (authors)
NASA Astrophysics Data System (ADS)
Wootton, James R.; Loss, Daniel
2018-05-01
The repetition code is an important primitive for the techniques of quantum error correction. Here we implement repetition codes of at most 15 qubits on the 16 qubit ibmqx3 device. Each experiment is run for a single round of syndrome measurements, achieved using the standard quantum technique of using ancilla qubits and controlled operations. The size of the final syndrome is small enough to allow for lookup table decoding using experimentally obtained data. The results show strong evidence that the logical error rate decays exponentially with code distance, as is expected and required for the development of fault-tolerant quantum computers. The results also give insight into the nature of noise in the device.
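For a distance-d repetition code under independent bit flips with probability p, majority-vote decoding fails only when more than half the bits flip, which is why the logical error rate decays exponentially with distance for p < 1/2. A small sketch of this exact binomial calculation (the p = 0.1 value and distances are illustrative, not the device's measured rates):

```python
from math import comb

def logical_error_prob(d, p):
    """Exact probability that majority-vote decoding of a distance-d
    repetition code fails, given independent bit-flip probability p
    (d odd): more than d/2 of the physical bits must flip."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range((d + 1) // 2, d + 1))

# Logical error rate falls exponentially with distance whenever p < 1/2.
rates = {d: logical_error_prob(d, 0.1) for d in (3, 5, 7)}
```

The lookup-table decoding in the experiment generalizes this majority vote: each observed syndrome is mapped to the correction that was most likely in the experimentally gathered training data.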
A knowledge-based system design/information tool for aircraft flight control systems
NASA Technical Reports Server (NTRS)
Mackall, Dale A.; Allen, James G.
1989-01-01
Research aircraft have become increasingly dependent on advanced control systems to accomplish program goals. These aircraft are integrating multiple disciplines to improve performance and satisfy research objectives. This integration is being accomplished through electronic control systems. Because of the number of systems involved and the variety of engineering disciplines, systems design methods and information management have become essential to program success. The primary objective of the system design/information tool for aircraft flight control systems is to help transfer flight control system design knowledge to the flight test community. By providing all of the design information and covering multiple disciplines in a structured, graphical manner, flight control systems can more easily be understood by the test engineers. This will provide the engineers with the information needed to thoroughly ground test the system and thereby reduce the likelihood of serious design errors surfacing in flight. The secondary objective is to apply structured design techniques to all of the design domains. By using the techniques in the top level system design down through the detailed hardware and software designs, it is hoped that fewer design anomalies will result. The flight test experiences of three highly complex, integrated aircraft programs are reviewed: the X-29 forward-swept wing, the advanced fighter technology integration (AFTI) F-16, and the highly maneuverable aircraft technology (HiMAT) program. Significant operating anomalies, and the design errors which caused them, are examined to help identify what functions a system design/information tool should provide to assist designers in avoiding errors.
Intervention strategies for the management of human error
NASA Technical Reports Server (NTRS)
Wiener, Earl L.
1993-01-01
This report examines the management of human error in the cockpit. The principles probably apply as well to other applications in the aviation realm (e.g. air traffic control, dispatch, weather, etc.) as well as other high-risk systems outside of aviation (e.g. shipping, high-technology medical procedures, military operations, nuclear power production). Management of human error is distinguished from error prevention. It is a more encompassing term, which includes not only the prevention of error, but also a means of preventing an error, once made, from adversely affecting system output. Such techniques include: traditional human factors engineering, improvement of feedback and feedforward of information from system to crew, 'error-evident' displays which make erroneous input more obvious to the crew, trapping of errors within a system, goal-sharing between humans and machines (also called 'intent-driven' systems), paperwork management, and behaviorally based approaches, including procedures, standardization, checklist design, training, cockpit resource management, etc. Fifteen guidelines for the design and implementation of intervention strategies are included.
Error field optimization in DIII-D using extremum seeking control
Lanctot, M. J.; Olofsson, K. E. J.; Capella, M.; ...
2016-06-03
A closed-loop error field control algorithm is implemented in the Plasma Control System of the DIII-D tokamak and used to identify optimal control currents during a single plasma discharge. The algorithm, based on established extremum seeking control theory, exploits the link in tokamaks between maximizing the toroidal angular momentum and minimizing deleterious non-axisymmetric magnetic fields. Slowly rotating n = 1 fields (the dither), generated by external coils, are used to perturb the angular momentum, monitored in real-time using a charge-exchange spectroscopy diagnostic. Simple signal processing of the rotation measurements extracts information about the rotation gradient with respect to the control coil currents. This information is used to converge the control coil currents to a point that maximizes the toroidal angular momentum. The technique is well-suited for multi-coil, multi-harmonic error field optimizations in disruption sensitive devices as it does not require triggering locked tearing modes or plasma current disruptions. Control simulations highlight the importance of the initial search direction on the rate of the convergence, and identify future algorithm upgrades that may allow more rapid convergence, projecting to convergence times in ITER on the order of tens of seconds.
NASA Astrophysics Data System (ADS)
Courchesne, Samuel
Knowledge of the dynamic characteristics of a fixed-wing UAV is necessary to design flight control laws and to build a high-quality flight simulator. The basic features of a flight mechanics model include the mass and inertia properties and the major aerodynamic terms, which are obtained through a complex process involving various numerical analysis techniques and experimental procedures. This thesis focuses on estimation techniques applied to the problem of estimating stability and control derivatives from flight test data provided by an experimental UAV. To achieve this objective, a modern identification methodology (Quad-M) is used to coordinate the processing tasks from multidisciplinary fields, such as modeling, parameter estimation, instrumentation, the definition of flight maneuvers, and validation. The system under study is a nonlinear model with six degrees of freedom and a linear aerodynamic model. Time-domain techniques are used for identification of the drone. The first technique, the equation error method, is used to determine the structure of the aerodynamic model. Thereafter, the output error method and the filter error method are used to estimate the values of the aerodynamic coefficients. The Matlab scripts for parameter estimation obtained from the American Institute of Aeronautics and Astronautics (AIAA) are used and modified as necessary to achieve the desired results. A substantial part of this research is devoted to the design of experiments, including the onboard data acquisition system and the definition of flight maneuvers. The flight tests were conducted under stable flight conditions and with low atmospheric disturbance. Nevertheless, the identification results showed that the filter error method is the most effective for estimating the parameters of the drone, due to the presence of process and measurement noise. The aerodynamic coefficients are validated using a numerical vortex-method analysis.
In addition, a simulation model incorporating the estimated parameters is used to compare simulated and measured state trajectories. Finally, good agreement between the results is demonstrated despite a limited amount of flight data. Keywords: drone, identification, estimation, nonlinear, flight test, system, aerodynamic coefficient.
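The equation error method mentioned above amounts to linear regression of measured outputs on measured states to recover aerodynamic derivatives. A hedged sketch on synthetic data (the pitch-moment model, true coefficients, and noise level are invented for illustration, not the thesis's flight data):

```python
import random

def least_squares(X, y):
    """Ordinary least squares via the normal equations (fine for tiny,
    well-conditioned problems like this one)."""
    n = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for i in range(n):              # Gaussian elimination
        for j in range(i + 1, n):
            f = xtx[j][i] / xtx[i][i]
            for c in range(n):
                xtx[j][c] -= f * xtx[i][c]
            xty[j] -= f * xty[i]
    theta = [0.0] * n
    for i in reversed(range(n)):    # back substitution
        theta[i] = (xty[i] - sum(xtx[i][j] * theta[j]
                                 for j in range(i + 1, n))) / xtx[i][i]
    return theta

# Synthetic 'flight data': pitch moment Cm = Cm0 + Cm_alpha*alpha + Cm_q*q
# plus measurement noise; the true coefficients are invented.
random.seed(0)
true = [0.05, -0.8, -12.0]
rows, cm = [], []
for _ in range(500):
    alpha, q = random.uniform(-0.2, 0.2), random.uniform(-0.1, 0.1)
    rows.append([1.0, alpha, q])
    cm.append(true[0] + true[1] * alpha + true[2] * q + random.gauss(0, 0.01))
theta = least_squares(rows, cm)  # recovers the coefficients closely
```

The output error and filter error methods refine this by propagating the model dynamics and, in the filter error case, accounting for process noise, which is why the thesis found the filter error method most effective.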
A technique for designing active control systems for astronomical telescope mirrors
NASA Technical Reports Server (NTRS)
Howell, W. E.; Creedon, J. F.
1973-01-01
The problem of designing a control system to achieve and maintain the required surface accuracy of the primary mirror of a large space telescope was considered. Control over the mirror surface is obtained through the application of a corrective force distribution by actuators located on the rear surface of the mirror. The design procedure is an extension of a modal control technique developed for distributed parameter plants with known eigenfunctions to include plants whose eigenfunctions must be approximated by numerical techniques. Instructions are given for constructing the mathematical model of the system, and a design procedure is developed for use with typical numerical data in selecting the number and location of the actuators. Examples of actuator patterns and their effect on various errors are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S; Suh, T; Park, S
2015-06-15
Purpose: The dose-related effects of patient setup errors on biophysical indices were evaluated for conventional wedge (CW) and field-in-field (FIF) whole breast irradiation techniques. Methods: The treatment plans for 10 patients receiving whole left breast irradiation were retrospectively selected. Radiobiological and physical effects caused by dose variations were evaluated by shifting the isocenters and gantry angles of the treatment plans. Dose-volume histograms of the planning target volume (PTV), heart, and lungs were generated, and conformity index (CI), homogeneity index (HI), tumor control probability (TCP), and normal tissue complication probability (NTCP) were determined. Results: For the “isocenter shift plan” with posterior direction, the D95 of the PTV decreased by approximately 15%, and the TCP of the PTV decreased by approximately 50% for the FIF technique and by 40% for the CW; however, the NTCPs of the lungs and heart increased by about 13% and 1%, respectively, for both techniques. Increasing the gantry angle decreased the TCPs of the PTV by 24.4% (CW) and by 34% (FIF). The NTCPs for the two techniques differed by only 3%. In the case of CW, the CIs and HIs were much higher than those of FIF in all cases, a significant difference between the two techniques (p<0.01). According to our results, however, the FIF technique was more sensitive to setup errors than CW in biophysical terms. Conclusions: The radiobiology-based analysis can detect significant dosimetric errors and can provide a practical patient quality assurance method guided by radiobiological and physical effects.
NASA Technical Reports Server (NTRS)
Coon, Craig R.; Cardullo, Frank M.; Zaychik, Kirill B.
2014-01-01
The ability to develop highly advanced simulators is a critical need that can significantly impact the aerospace industry. The aerospace industry is advancing at an ever-increasing pace, and flight simulators must keep up with this development. In order to address both current problems and potential advancements in flight simulator techniques, several aspects of the current control law technology of the National Aeronautics and Space Administration (NASA) Langley Research Center's Cockpit Motion Facility (CMF) motion base simulator were examined. Preliminary investigation of linear models based upon hardware data was conducted to ensure that the most accurate models are used. This research identified both system improvements in the bandwidth and more reliable linear models. Advancements in the compensator design were developed and verified through multiple techniques. The position error rate feedback, the acceleration feedback, and the force feedback were all analyzed in the heave direction using the nonlinear model of the hardware. Improvements were made using the position error rate feedback technique. The acceleration feedback compensator also provided noteworthy improvement, while attempts at implementing a force feedback compensator proved unsuccessful.
Fisher, Moria E; Huang, Felix C; Wright, Zachary A; Patton, James L
2014-01-01
Manipulation of error feedback has been of great interest to recent studies in motor control and rehabilitation. Typically, motor adaptation is shown as a change in performance with a single scalar metric for each trial, yet such an approach might overlook details about how error evolves through the movement. We believe that statistical distributions of movement error through the extent of the trajectory can reveal unique patterns of adaptation and possibly offer clues to how the motor system processes information about error. This paper describes different possible ordinate domains, focusing on representations in time and state-space, used to quantify reaching errors. We hypothesized that the domain with the lowest amount of variability would lead to a predictive model of reaching error with the highest accuracy. Here we showed that errors represented in a time domain demonstrate the least variance and allow for the most accurate predictive model of reaching errors. These predictive models will give rise to more specialized methods of robotic feedback and improve previous techniques of error augmentation.
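The idea of building a pointwise predictive model from per-trial error distributions indexed by time can be sketched as follows. The bell-shaped error profile, noise levels, and trial counts are invented stand-ins, not the study's reaching data:

```python
import math
import random
import statistics

random.seed(1)

def trial_errors(n=50):
    """One simulated reaching trial: lateral error sampled at n normalized
    time points, modeled as a bell-shaped bias plus sensor noise."""
    bias = random.gauss(1.0, 0.1)
    return [bias * math.sin(math.pi * i / (n - 1)) + random.gauss(0, 0.02)
            for i in range(n)]

trials = [trial_errors() for _ in range(200)]
# Pointwise mean across trials, indexed by time: a simple predictive model.
mean_curve = [statistics.fmean(col) for col in zip(*trials)]
# The model explains most of a fresh trial's error.
fresh = trial_errors()
residual = statistics.fmean((e - m) ** 2 for e, m in zip(fresh, mean_curve))
raw = statistics.fmean(e * e for e in fresh)
```

The paper's comparison repeats this construction with the error indexed by different ordinate domains (time versus state-space position) and asks which indexing leaves the smallest residual variance.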
Masking technique for coating thickness control on large and strongly curved aspherical optics.
Sassolas, B; Flaminio, R; Franc, J; Michel, C; Montorio, J-L; Morgado, N; Pinard, L
2009-07-01
We discuss a method to control the coating thickness deposited onto large and strongly curved optics by ion beam sputtering. The technique uses an original design of the mask used to screen part of the sputtered materials. A first multielement mask is calculated from the measured two-dimensional coating thickness distribution. Then, by means of an iterative process, the final mask is designed. By using such a technique, it has been possible to deposit layers of tantalum pentoxide having a high thickness gradient onto a curved substrate 500 mm in diameter. Residual errors in the coating thickness profile are below 0.7%.
NASA Astrophysics Data System (ADS)
Gao, Gang; Wang, Jinzhi; Wang, Xianghua
2017-05-01
This paper investigates fault-tolerant control (FTC) for feedback linearisable systems (FLSs) and its application to an aircraft. To ensure desired transient and steady-state behaviours of the tracking error under actuator faults, the dynamic effect caused by the actuator failures on the error dynamics of a transformed model is analysed, and three control strategies are designed. The first FTC strategy is proposed as a robust controller, which relies on the explicit information about several parameters of the actuator faults. To eliminate the need for these parameters and the input chattering phenomenon, the robust control law is later combined with the adaptive technique to generate the adaptive FTC law. Next, the adaptive control law is further improved to achieve the prescribed performance under more severe input disturbance. Finally, the proposed control laws are applied to an air-breathing hypersonic vehicle (AHV) subject to actuator failures, which confirms the effectiveness of the proposed strategies.
Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bokanowski, Olivier, E-mail: boka@math.jussieu.fr; Picarelli, Athena, E-mail: athena.picarelli@inria.fr; Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr
2015-02-15
This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied, leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachability analysis are included to illustrate our approach.
Correlation and registration of ERTS multispectral imagery. [by a digital processing technique
NASA Technical Reports Server (NTRS)
Bonrud, L. O.; Henrikson, P. J.
1974-01-01
Examples of automatic digital processing demonstrate the feasibility of registering one ERTS multispectral scanner (MSS) image with another obtained on a subsequent orbit, and automatic matching, correlation, and registration of MSS imagery with aerial photography (multisensor correlation) is demonstrated. Excellent correlation was obtained with patch sizes exceeding 16 pixels square. Qualities which lead to effective control point selection are distinctive features, good contrast, and constant feature characteristics. Results of the study indicate that more than 300 degrees of freedom are required to register two standard ERTS-1 MSS frames covering 100 by 100 nautical miles to an accuracy of 0.6 pixel mean radial displacement error. An automatic strip processing technique demonstrates 600 to 1200 degrees of freedom over a quarter frame of ERTS imagery. Registration accuracies in the range of 0.3 pixel to 0.5 pixel mean radial error were confirmed by independent error analysis. Accuracies in the range of 0.5 pixel to 1.4 pixel mean radial error were demonstrated by semi-automatic registration over small geographic areas.
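Patch matching of the kind used for control-point registration is typically driven by normalized cross-correlation. A one-dimensional sketch (the 'image' row and patch are toy values; the real MSS registration correlates two-dimensional patches of 16+ pixels square):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length, non-constant
    signals; 1.0 indicates a perfect (affine) match."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

def best_shift(image, patch):
    """Slide the patch along a 1-D 'image' row; return the offset where
    the correlation peaks."""
    pw = len(patch)
    scores = [ncc(image[s:s + pw], patch) for s in range(len(image) - pw + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

row = [3, 5, 9, 2, 8, 7, 1, 4, 6, 0]
patch = [8, 7, 1]  # copy of row[4:7], so the peak falls at offset 4
```

Mean subtraction and normalization make the match insensitive to the brightness and contrast differences between passes or sensors, which is what allows the multisensor correlation described above.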
NASA Technical Reports Server (NTRS)
Stutzman, W. L.; Smith, W. T.
1990-01-01
Surface errors on parabolic reflector antennas degrade the overall performance of the antenna. Space antenna structures are difficult to build, deploy and control. They must maintain a nearly perfect parabolic shape in a harsh environment and must be lightweight. Electromagnetic compensation for surface errors in large space reflector antennas can be used to supplement mechanical compensation. Electromagnetic compensation for surface errors in large space reflector antennas has been the topic of several research studies. Most of these studies try to correct the focal plane fields of the reflector near the focal point and, hence, compensate for the distortions over the whole radiation pattern. An alternative approach to electromagnetic compensation is presented. The proposed technique uses pattern synthesis to compensate for the surface errors. The pattern synthesis approach uses a localized algorithm in which pattern corrections are directed specifically towards portions of the pattern requiring improvement. The pattern synthesis technique does not require knowledge of the reflector surface. It uses radiation pattern data to perform the compensation.
Diffraction-based overlay metrology for double patterning technologies
NASA Astrophysics Data System (ADS)
Dasari, Prasad; Korlahalli, Rahul; Li, Jie; Smith, Nigel; Kritsun, Oleg; Volkman, Cathy
2009-03-01
The extension of optical lithography to 32nm and beyond is made possible by Double Patterning Techniques (DPT) at critical levels of the process flow. The ease of DPT implementation is hindered by the increased significance of critical dimension uniformity and overlay errors. Diffraction-based overlay (DBO) has been shown to be an effective metrology solution for accurate determination of the overlay errors associated with double patterning [1, 2] processes. In this paper we will report its use in litho-freeze-litho-etch (LFLE) and spacer double patterning technology (SDPT), which are pitch splitting solutions that reduce the significance of overlay errors. Since the control of overlay between various mask/level combinations is critical for fabrication, precise and accurate assessment of errors by advanced metrology techniques such as spectroscopic diffraction based overlay (DBO) and traditional image-based overlay (IBO) using advanced target designs will be reported. A comparison between DBO, IBO, and CD-SEM measurements will be reported. A discussion of TMU requirements for 32nm technology and TMU performance data of LFLE and SDPT targets by different overlay approaches will be presented.
Error budgeting single and two qubit gates in a superconducting qubit
NASA Astrophysics Data System (ADS)
Chen, Z.; Chiaro, B.; Dunsworth, A.; Foxen, B.; Neill, C.; Quintana, C.; Wenner, J.; Martinis, John M.; Google Quantum Hardware Team
Superconducting qubits have shown promise as a platform for both error corrected quantum information processing and demonstrations of quantum supremacy. High fidelity quantum gates are crucial to achieving both of these goals, and superconducting qubits have demonstrated two qubit gates exceeding 99% fidelity. In order to improve gate fidelity further, we must understand the remaining sources of error. In this talk, I will demonstrate techniques for quantifying the contributions of control, decoherence, and leakage to gate error, for both single and two qubit gates. I will also discuss the near term outlook for achieving quantum supremacy using a gate-based approach in superconducting qubits. This work is supported by Google Inc. and by the National Science Foundation Graduate Research Fellowship under Grant No. DGE 1605114.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
ERIC Educational Resources Information Center
Hand, Michael L.
1990-01-01
Use of the bootstrap resampling technique (BRT) is assessed in its application to resampling analysis associated with measurement of payment allocation errors by federally funded Family Assistance Programs. The BRT is applied to a food stamp quality control database in Oregon. This analysis highlights the outlier-sensitivity of the…
Neighboring extremal optimal control design including model mismatch errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, T.J.; Hull, D.G.
1994-11-01
The mismatch control technique that is used to simplify model equations of motion in order to determine analytic optimal control laws is extended using neighboring extremal theory. The first variation optimal control equations are linearized about the extremal path to account for perturbations in the initial state and the final constraint manifold. A numerical example demonstrates that the tuning procedure inherent in the mismatch control method increases the performance of the controls to the level of a numerically-determined piecewise-linear controller.
A Radial Basis Function Approach to Financial Time Series Analysis
1993-12-01
including efficient methods for parameter estimation and pruning, a pointwise prediction error estimator, and a methodology for controlling the "data... collection of practical techniques to address these issues for a modeling methodology, Radial Basis Function networks. These techniques include efficient... methodology often then amounts to a careful consideration of the interplay between model complexity and reliability. These will be recurrent themes
Electrical termination techniques
NASA Technical Reports Server (NTRS)
Oakey, W. E.; Schleicher, R. R.
1976-01-01
A technical review of high reliability electrical terminations for electronic equipment was made. Seven techniques were selected from this review for further investigation, experimental work, and preliminary testing. From the preliminary test results, four techniques were selected for final testing and evaluation. These four were: (1) induction soldering, (2) wire wrap, (3) percussive arc welding, and (4) resistance welding. Of these four, induction soldering was selected as the best technique in terms of minimizing operator errors, controlling temperature and time, minimizing joint contamination, and ultimately producing a reliable, uniform, and reusable electrical termination.
Efficient Variational Quantum Simulator Incorporating Active Error Minimization
NASA Astrophysics Data System (ADS)
Li, Ying; Benjamin, Simon C.
2017-04-01
One of the key applications for quantum computers will be the simulation of other quantum systems that arise in chemistry, materials science, etc., in order to accelerate the process of discovery. It is important to ask the following question: Can this simulation be achieved using near-future quantum processors, of modest size and under imperfect control, or must it await the more distant era of large-scale fault-tolerant quantum computing? Here, we propose a variational method involving closely integrated classical and quantum coprocessors. We presume that all operations in the quantum coprocessor are prone to error. The impact of such errors is minimized by boosting them artificially and then extrapolating to the zero-error case. In comparison to a more conventional optimized Trotterization technique, we find that our protocol is efficient and appears to be fundamentally more robust against error accumulation.
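The boost-and-extrapolate idea can be illustrated with a toy zero-noise extrapolation: measure an observable at several artificially amplified error rates, fit a low-order polynomial, and evaluate the fit at zero error. This is a minimal sketch; the linear decay model and the numbers below are invented for illustration and are not taken from the paper.

```python
import numpy as np

def extrapolate_to_zero_error(noise_scales, expectations, order=1):
    """Fit a polynomial to expectation values measured at boosted
    noise levels, then evaluate it at zero noise."""
    coeffs = np.polyfit(noise_scales, expectations, order)
    return np.polyval(coeffs, 0.0)

# Toy model: an observable whose measured value decays linearly with
# the error rate; the true (zero-error) value is 1.0.
scales = np.array([1.0, 2.0, 3.0])   # artificially boosted error rates
measured = 1.0 - 0.05 * scales       # simulated noisy expectation values
estimate = extrapolate_to_zero_error(scales, measured)
```

With exactly linear decay the fit recovers the zero-error value; in practice the extrapolation order and the noise-boosting factors must be chosen to balance bias against statistical error.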
NASA Astrophysics Data System (ADS)
Cui, Guozeng; Xu, Shengyuan; Ma, Qian; Li, Yongmin; Zhang, Zhengqiang
2018-05-01
In this paper, the problem of prescribed performance distributed output consensus for higher-order non-affine nonlinear multi-agent systems with unknown dead-zone input is investigated. Fuzzy logic systems are utilised to identify the unknown nonlinearities. By introducing prescribed performance, the transient and steady-state performance of the synchronisation errors is guaranteed. Based on Lyapunov stability theory and the dynamic surface control technique, a new distributed consensus algorithm for non-affine nonlinear multi-agent systems is proposed, which ensures cooperative uniform ultimate boundedness of all signals in the closed-loop systems and enables the output of each follower to synchronise with the leader within a predefined bounded error. Finally, simulation examples are provided to demonstrate the effectiveness of the proposed control scheme.
Patterned wafer geometry grouping for improved overlay control
NASA Astrophysics Data System (ADS)
Lee, Honggoo; Han, Sangjun; Woo, Jaeson; Park, Junbeom; Song, Changrock; Anis, Fatima; Vukkadala, Pradeep; Jeon, Sanghuck; Choi, DongSub; Huang, Kevin; Heo, Hoyoung; Smith, Mark D.; Robinson, John C.
2017-03-01
Process-induced overlay errors originating outside the litho cell, including those caused by non-uniform wafer stress, have become a significant contributor to the overlay error budget. Previous studies have shown the correlation between process-induced stress and overlay and the opportunity for improvement in process control, including the use of patterned wafer geometry (PWG) metrology to reduce stress-induced overlay signatures. A key challenge of volume semiconductor manufacturing is to improve not only the magnitude of these signatures, but also the wafer-to-wafer variability. This work involves a novel technique of using PWG metrology to provide improved litho control by wafer-level grouping based on incoming process-induced overlay, relevant for both 3D NAND and DRAM. Examples shown in this study are from 19 nm DRAM manufacturing.
Wang, Minlin; Ren, Xuemei; Chen, Qiang
2018-01-01
The multi-motor servomechanism (MMS) is a multivariable, highly coupled and nonlinear system, which makes the controller design challenging. In this paper, an adaptive robust H-infinity control scheme is proposed to achieve both the load tracking and multi-motor synchronization of MMS. This control scheme consists of two parts: a robust tracking controller and a distributed synchronization controller. The robust tracking controller is constructed by incorporating a neural network (NN) K-filter observer into the dynamic surface control, while the distributed synchronization controller is designed by combining the mean deviation coupling control strategy with the distributed technique. The proposed control scheme has several merits: 1) by using the mean deviation coupling synchronization control strategy, the tracking controller and the synchronization controller can be designed individually without any coupling problem; 2) the immeasurable states and unknown nonlinearities are handled by an NN K-filter observer, where the number of NN weights is largely reduced by using the minimal learning parameter technique; 3) the H-infinity performance of the tracking error and synchronization error is guaranteed by introducing a robust term into the tracking controller and the synchronization controller, respectively. The stability of the tracking and synchronization control systems is analyzed by Lyapunov theory. Simulation and experimental results based on a four-motor servomechanism are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
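The mean deviation coupling strategy referenced above can be illustrated with a minimal sketch: each motor's synchronization error is its deviation from the mean position of all motors, kept separate from its tracking error against the load reference. The motor positions and reference below are hypothetical.

```python
import numpy as np

def mean_deviation_errors(positions, reference):
    """Tracking and synchronization errors under mean deviation coupling:
    tracking error is measured against the common reference, while the
    synchronization error is each motor's deviation from the mean of
    all motor positions."""
    positions = np.asarray(positions, dtype=float)
    tracking_errors = positions - reference
    sync_errors = positions - positions.mean()
    return tracking_errors, sync_errors

# Four motors tracking a common load reference of 1.0 rad
track, sync = mean_deviation_errors([0.98, 1.02, 1.00, 1.04], 1.0)
```

A useful property of this decomposition is that the synchronization errors always sum to zero, which is what lets the two controllers be designed independently.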
NASA Technical Reports Server (NTRS)
Ostroff, A. J.
1973-01-01
Some of the major difficulties associated with large orbiting astronomical telescopes are the cost of manufacturing the primary mirror to precise tolerances and the maintaining of diffraction-limited tolerances while in orbit. One successfully demonstrated approach for minimizing these problem areas is the technique of actively deforming the primary mirror by applying discrete forces to the rear of the mirror. A modal control technique, as applied to active optics, has previously been developed and analyzed. The modal control technique represents the plant to be controlled in terms of its eigenvalues and eigenfunctions which are estimated via numerical approximation techniques. The report includes an extension of previous work using the modal control technique and also describes an optimal feedback controller. The equations for both control laws are developed in state-space differential form and include such considerations as stability, controllability, and observability. These equations are general and allow the incorporation of various mode-analyzer designs; two design approaches are presented. The report also includes a technique for placing actuator and sensor locations at points on the mirror based upon the flexibility matrix of the uncontrolled or unobserved modes of the structure. The locations selected by this technique are used in the computer runs which are described. The results are based upon three different initial error distributions, two mode-analyzer designs, and both the modal and optimal control laws.
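The controllability consideration mentioned above is classically checked with the Kalman rank test on the state-space matrices. Below is a minimal sketch; the two-mode mirror model and single-actuator placement are invented for illustration, not taken from the report.

```python
import numpy as np

def is_controllable(A, B):
    """Kalman rank test: (A, B) is controllable iff the controllability
    matrix [B, AB, ..., A^(n-1)B] has full row rank n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    C = np.hstack(blocks)
    return bool(np.linalg.matrix_rank(C) == n)

# Hypothetical single structural mode of a deformable mirror,
# driven by one discrete force actuator.
A = np.array([[0.0, 1.0],
              [-4.0, -0.2]])   # lightly damped second-order mode
B = np.array([[0.0],
              [1.0]])          # force enters through the acceleration
```

The same construction with the observability matrix (swapping A for A-transpose and B for C-transpose) gives the observability test used when placing sensors.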
NASA Astrophysics Data System (ADS)
Gueddana, Amor; Attia, Moez; Chatta, Rihab
2015-03-01
In this work, we study the error sources standing behind the non-perfect linear optical quantum components composing a non-deterministic quantum CNOT gate model, which performs the CNOT function with a success probability of 4/27 and uses a double encoding technique to represent photonic qubits at the control and the target. We generalize this model to an abstract probabilistic CNOT version and determine the realizability limits depending on a realistic range of the errors. Finally, we discuss physical constraints allowing the implementation of the Asymmetric Partially Polarizing Beam Splitter (APPBS), which is at the heart of correctly realizing the CNOT function.
Inui, Hiroshi; Taketomi, Shuji; Tahara, Keitarou; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae
2017-03-01
Bone cutting errors can cause malalignment of unicompartmental knee arthroplasties (UKA). Although the extent of tibial malalignment due to horizontal cutting errors has been well reported, there is a lack of studies evaluating malalignment as a consequence of keel cutting errors, particularly in the Oxford UKA. The purpose of this study was to examine keel cutting errors during Oxford UKA placement using a navigation system and to clarify whether two different tibial keel cutting techniques would have different error rates. The alignment of the tibial cut surface after a horizontal osteotomy and the surface of the tibial trial component was measured with a navigation system. Cutting error was defined as the angular difference between these measurements. The following two techniques were used: the standard "pushing" technique in 83 patients (group P) and a modified "dolphin" technique in 41 patients (group D). In all 123 patients studied, the mean absolute keel cutting error was 1.7° and 1.4° in the coronal and sagittal planes, respectively. In group P, there were 22 outlier patients (27 %) in the coronal plane and 13 (16 %) in the sagittal plane. Group D had three outlier patients (8 %) in the coronal plane and none (0 %) in the sagittal plane. Significant differences were observed in the outlier ratio of these techniques in both the sagittal (P = 0.014) and coronal (P = 0.008) planes. Our study demonstrated overall keel cutting errors of 1.7° in the coronal plane and 1.4° in the sagittal plane. The "dolphin" technique was found to significantly reduce keel cutting errors on the tibial side. This technique will be useful for accurate component positioning and therefore improve the longevity of Oxford UKAs. Retrospective comparative study, Level III.
NASA Astrophysics Data System (ADS)
Rose, Michael Benjamin
A novel trajectory and attitude control and navigation analysis tool for powered ascent is developed. The tool is capable of rapid trade-space analysis and is designed to ultimately reduce turnaround time for launch vehicle design, mission planning, and redesign work. It is streamlined to quickly determine trajectory and attitude control dispersions, propellant dispersions, orbit insertion dispersions, and navigation errors and their sensitivities to sensor errors, actuator execution uncertainties, and random disturbances. The tool is developed by applying both Monte Carlo and linear covariance analysis techniques to a closed-loop, launch vehicle guidance, navigation, and control (GN&C) system. The nonlinear dynamics and flight GN&C software models of a closed-loop, six-degree-of-freedom (6-DOF), Monte Carlo simulation are formulated and developed. The nominal reference trajectory (NRT) for the proposed lunar ascent trajectory is defined and generated. The Monte Carlo truth models and GN&C algorithms are linearized about the NRT, the linear covariance equations are formulated, and the linear covariance simulation is developed. The performance of the launch vehicle GN&C system is evaluated using both Monte Carlo and linear covariance techniques and their trajectory and attitude control dispersion, propellant dispersion, orbit insertion dispersion, and navigation error results are validated and compared. Statistical results from linear covariance analysis are generally within 10% of Monte Carlo results, and in most cases the differences are less than 5%. This is an excellent result given the many complex nonlinearities that are embedded in the ascent GN&C problem. Moreover, the real value of this tool lies in its speed, where the linear covariance simulation is 1036.62 times faster than the Monte Carlo simulation. 
Although the application and results presented are for a lunar, single-stage-to-orbit (SSTO), ascent vehicle, the tools, techniques, and mathematical formulations that are discussed are applicable to ascent on Earth or other planets as well as other rocket-powered systems such as sounding rockets and ballistic missiles.
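The core comparison in the abstract, propagating a dispersion covariance matrix directly versus estimating it from Monte Carlo samples, can be sketched for a toy discrete-time linear system. The dynamics, noise levels, and horizon below are invented and far simpler than the paper's 6-DOF ascent model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear system x_{k+1} = A x_k + w_k, w_k ~ N(0, Q)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
Q = np.diag([1e-4, 1e-4])      # process-noise covariance
P0 = np.diag([1e-2, 1e-2])     # initial dispersion covariance

# Linear covariance analysis: one matrix recursion per step
P = P0.copy()
for _ in range(50):
    P = A @ P @ A.T + Q

# Monte Carlo: propagate many random samples, then estimate covariance
N = 20000
X = rng.multivariate_normal(np.zeros(2), P0, size=N)
for _ in range(50):
    X = X @ A.T + rng.multivariate_normal(np.zeros(2), Q, size=N)
P_mc = np.cov(X.T)
```

The covariance recursion costs one pass regardless of sample count, which is the source of the large speedup the paper reports; the Monte Carlo estimate converges to the same dispersions only as the sample size grows.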
Applications of optical sensing for laser cutting and drilling.
Fox, Mahlen D T; French, Paul; Peters, Chris; Hand, Duncan P; Jones, Julian D C
2002-08-20
Any reliable automated production system must include process control and monitoring techniques. Two laser processing techniques potentially lending themselves to automation are percussion drilling and cutting. For drilling we investigate the performance of a modification of a nonintrusive optical focus control system we previously developed for laser welding, which exploits the chromatic aberrations of the processing optics to determine focal error. We further developed this focus control system for closed-loop control of laser cutting. We show that an extension of the technique can detect deterioration in cut quality, and we describe practical trials carried out on different materials using both oxygen and nitrogen assist gas. We base our techniques on monitoring the light generated by the process, captured nonintrusively by the effector optics and processed remotely from the workpiece. We describe the relationship between the temporal and the chromatic modulation of the detected light and process quality and show how the information can be used as the basis of a process control system.
Liu, Zongcheng; Dong, Xinmin; Xue, Jianping; Li, Hongbo; Chen, Yong
2016-09-01
This brief addresses the adaptive control problem for a class of pure-feedback systems with nonaffine functions possibly being nondifferentiable. Without using the mean value theorem, the difficulty of the control design for pure-feedback systems is overcome by modeling the nonaffine functions appropriately. With the help of neural network approximators, an adaptive neural controller is developed by combining the dynamic surface control (DSC) and minimal learning parameter (MLP) techniques. The key features of our approach are that, first, the restrictive assumptions on the partial derivative of nonaffine functions are removed, second, the DSC technique is used to avoid "the explosion of complexity" in the backstepping design, and the number of adaptive parameters is reduced significantly using the MLP technique, third, smooth robust compensators are employed to circumvent the influences of approximation errors and disturbances. Furthermore, it is proved that all the signals in the closed-loop system are semiglobal uniformly ultimately bounded. Finally, the simulation results are provided to demonstrate the effectiveness of the designed method.
NASA Technical Reports Server (NTRS)
Larson, T. J.; Ehernberger, L. J.
1985-01-01
The flight test technique described uses controlled survey runs to determine horizontal atmospheric pressure variations and systematic altitude errors that result from space positioning measurements. The survey data can be used not only for improved air data calibrations, but also for studies of atmospheric structure and space positioning accuracy performance. The examples presented cover a wide range of radar tracking conditions for both subsonic and supersonic flight to an altitude of 42,000 ft.
Mitchell, W G; Chavez, J M; Baker, S A; Guzman, B L; Azen, S P
1990-07-01
Maturation of sustained attention was studied in a group of 52 hyperactive elementary school children and 152 controls using a microcomputer-based test formatted to resemble a video game. In nonhyperactive children, both simple and complex reaction time decreased with age, as did variability of response time. Omission errors were extremely infrequent on simple reaction time and decreased with age on the more complex tasks. Commission errors had an inconsistent relationship with age. Hyperactive children were slower, more variable, and made more errors on all segments of the game than did controls. Both motor speed and calculated mental speed were slower in hyperactive children, with greater discrepancy for responses directed to the nondominant hand, suggesting that a selective right hemisphere deficit may be present in hyperactives. A summary score (number of individual game scores above the 95th percentile) of 4 or more detected 60% of hyperactive subjects with a false positive rate of 5%. Agreement with the Matching Familiar Figures Test was 75% in the hyperactive group.
Writing executable assertions to test flight software
NASA Technical Reports Server (NTRS)
Mahmood, A.; Andrews, D. M.; Mccluskey, E. J.
1984-01-01
An executable assertion is a logical statement about the variables or a block of code. If there is no error during execution, the assertion statement evaluates to true. Executable assertions can be used for dynamic testing of software: they can be employed for validation during the design phase, and for exception handling and error detection during the operation phase. The present investigation is concerned with the problem of writing executable assertions, taking into account the use of assertions for testing flight software. The digital flight control system and the flight control software are discussed. The considered system provides autopilot and flight director modes of operation for automatic and manual control of the aircraft during all phases of flight. Attention is given to techniques for writing and using assertions to test flight software, an experimental setup to test flight software, and language features to support efficient use of assertions.
Robust preview control for a class of uncertain discrete-time systems with time-varying delay.
Li, Li; Liao, Fucheng
2018-02-01
This paper proposes a concept of robust preview tracking control for uncertain discrete-time systems with time-varying delay. Firstly, a model transformation is employed for an uncertain discrete system with time-varying delay. Then, auxiliary variables related to the system state and input are introduced to derive an augmented error system that includes future information on the reference signal. This leads to the tracking problem being transformed into a regulator problem. Finally, for the augmented error system, a sufficient condition for asymptotic stability is derived and the preview controller design method is proposed based on the scaled small gain theorem and linear matrix inequality (LMI) technique. The method proposed in this paper not only overcomes the difficulty of applying the difference operator to the time-varying matrices but also simplifies the structure of the augmented error system. A numerical simulation example also illustrates the effectiveness of the results presented in the paper. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Luijten, Maartje; Machielsen, Marise W.J.; Veltman, Dick J.; Hester, Robert; de Haan, Lieuwe; Franken, Ingmar H.A.
2014-01-01
Background Several current theories emphasize the role of cognitive control in addiction. The present review evaluates neural deficits in the domains of inhibitory control and error processing in individuals with substance dependence and in those showing excessive addiction-like behaviours. The combined evaluation of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) findings in the present review offers unique information on neural deficits in addicted individuals. Methods We selected 19 ERP and 22 fMRI studies using stop-signal, go/no-go or Flanker paradigms based on a search of PubMed and Embase. Results The most consistent findings in addicted individuals relative to healthy controls were lower N2, error-related negativity and error positivity amplitudes as well as hypoactivation in the anterior cingulate cortex (ACC), inferior frontal gyrus and dorsolateral prefrontal cortex. These neural deficits, however, were not always associated with impaired task performance. With regard to behavioural addictions, some evidence has been found for similar neural deficits; however, studies are scarce and results are not yet conclusive. Differences among the major classes of substances of abuse were identified and involve stronger neural responses to errors in individuals with alcohol dependence versus weaker neural responses to errors in other substance-dependent populations. Limitations Task design and analysis techniques vary across studies, thereby reducing comparability among studies and the potential of clinical use of these measures. Conclusion Current addiction theories were supported by identifying consistent abnormalities in prefrontal brain function in individuals with addiction. An integrative model is proposed, suggesting that neural deficits in the dorsal ACC may constitute a hallmark neurocognitive deficit underlying addictive behaviours, such as loss of control. PMID:24359877
Forecasting in foodservice: model development, testing, and evaluation.
Miller, J L; Thompson, P A; Orabella, M M
1991-05-01
This study was designed to develop, test, and evaluate mathematical models appropriate for forecasting menu-item production demand in foodservice. Data were collected from residence and dining hall foodservices at Ohio State University. Objectives of the study were to collect, code, and analyze the data; develop and test models using actual operation data; and compare forecasting results with current methods in use. Customer count was forecast using deseasonalized simple exponential smoothing. Menu-item demand was forecast by multiplying the count forecast by a predicted preference statistic. Forecasting models were evaluated using mean squared error, mean absolute deviation, and mean absolute percentage error techniques. All models were more accurate than current methods. A broad spectrum of forecasting techniques could be used by foodservice managers with access to a personal computer and spreadsheet and database-management software. The findings indicate that mathematical forecasting techniques may be effective in foodservice operations to control costs, increase productivity, and maximize profits.
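The forecasting pipeline described above can be sketched in a few lines: simple exponential smoothing for the customer count, a preference fraction for menu-item demand, and a mean absolute percentage error check. The deseasonalization step is omitted here, and the counts and preference value are hypothetical.

```python
def simple_exponential_smoothing(series, alpha):
    """One-step-ahead forecasts: F_{t+1} = alpha*y_t + (1-alpha)*F_t,
    initialized with the first observation."""
    forecasts = [series[0]]
    for y in series[:-1]:
        forecasts.append(alpha * y + (1 - alpha) * forecasts[-1])
    return forecasts

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

counts = [420, 435, 410, 450, 440, 430]   # hypothetical daily customer counts
fc = simple_exponential_smoothing(counts, alpha=0.3)
err = mape(counts, fc)

# Menu-item demand = count forecast * predicted preference statistic
demand_tomorrow = fc[-1] * 0.25           # assumed 25% choose this item
```

Mean squared error and mean absolute deviation, the other two criteria the study used, follow the same pattern with different per-period error terms.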
Dosimetric effects of patient rotational setup errors on prostate IMRT treatments
NASA Astrophysics Data System (ADS)
Fu, Weihua; Yang, Yong; Li, Xiang; Heron, Dwight E.; Saiful Huq, M.; Yue, Ning J.
2006-10-01
The purpose of this work is to determine dose delivery errors that could result from systematic rotational setup errors (ΔΦ) for prostate cancer patients treated with three-phase sequential boost IMRT. In order to implement this, different rotational setup errors around three Cartesian axes were simulated for five prostate patients and dosimetric indices, such as dose-volume histogram (DVH), tumour control probability (TCP), normal tissue complication probability (NTCP) and equivalent uniform dose (EUD), were employed to evaluate the corresponding dosimetric influences. Rotational setup errors were simulated by adjusting the gantry, collimator and horizontal couch angles of treatment beams and the dosimetric effects were evaluated by recomputing the dose distributions in the treatment planning system. Our results indicated that, for prostate cancer treatment with the three-phase sequential boost IMRT technique, the rotational setup errors do not have significant dosimetric impacts on the cumulative plan. Even in the worst-case scenario with ΔΦ = 3°, the prostate EUD varied within 1.5% and TCP decreased about 1%. For seminal vesicle, slightly larger influences were observed. However, EUD and TCP changes were still within 2%. The influence on sensitive structures, such as rectum and bladder, is also negligible. This study demonstrates that the rotational setup error degrades the dosimetric coverage of target volume in prostate cancer treatment to a certain degree. However, the degradation was not significant for the three-phase sequential boost prostate IMRT technique and for the margin sizes used in our institution.
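The EUD index used above is commonly computed as the generalized EUD, a power-law average over the dose-volume histogram; a large negative exponent penalizes cold spots in targets, a large positive one penalizes hot spots in organs at risk. The sketch below uses invented DVH bins, not the study's patient data.

```python
def generalized_eud(doses, volumes, a):
    """Generalized equivalent uniform dose:
    EUD = (sum_i v_i * D_i**a) ** (1/a), with v_i the fractional
    volume receiving dose D_i."""
    total = sum(volumes)
    return sum((v / total) * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)

# Hypothetical DVH bins for a target volume (doses in Gy)
doses = [74.0, 76.0, 78.0]
volumes = [0.2, 0.6, 0.2]
eud_target = generalized_eud(doses, volumes, a=-10)
```

By construction the gEUD of a uniform dose distribution equals that dose, and any nonuniform distribution falls between the minimum and maximum bin doses.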
Topographic analysis of individual activation patterns in medial frontal cortex in schizophrenia
Stern, Emily R.; Welsh, Robert C.; Fitzgerald, Kate D.; Taylor, Stephan F.
2009-01-01
Individual variability in the location of neural activations poses a unique problem for neuroimaging studies employing group averaging techniques to investigate the neural bases of cognitive and emotional functions. This may be especially challenging for studies examining patient groups, which often have limited sample sizes and increased intersubject variability. In particular, medial frontal cortex (MFC) dysfunction is thought to underlie performance monitoring dysfunction among patients with schizophrenia, yet previous studies using group averaging to compare schizophrenic patients to controls have yielded conflicting results. To examine individual activations in MFC associated with two aspects of performance monitoring, interference and error processing, functional magnetic resonance imaging (fMRI) data were acquired while 17 patients with schizophrenia and 21 healthy controls performed an event-related version of the multi-source interference task. Comparisons of averaged data revealed few differences between the groups. By contrast, topographic analysis of individual activations for errors showed that control subjects exhibited activations spanning across both posterior and anterior regions of MFC while patients primarily activated posterior MFC, possibly reflecting an impaired emotional response to errors in schizophrenia. This discrepancy between topographic and group-averaged results may be due to the significant dispersion among individual activations, particularly among healthy controls, highlighting the importance of considering intersubject variability when interpreting the medial frontal response to error commission. PMID:18819107
Energy and Quality-Aware Multimedia Signal Processing
NASA Astrophysics Data System (ADS)
Emre, Yunus
Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture level and algorithm-level techniques that reduce energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating in scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low frequency subband coefficients and smaller values for high frequency subband coefficients. Next, we present use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications such as FIR filter and transform computation. We also present a novel sum of absolute difference (SAD) scheme that is based on most significant bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected. 
Such a scheme is highly effective in reducing the energy consumption of motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combination of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for Discrete Cosine Transform shows on average, 33% to 46% reduction in energy consumption while incurring only 0.5dB to 1.5dB loss in PSNR.
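One plausible reading of the MSB-based SAD scheme is to truncate the low-order bits of each pixel before the absolute-difference sum, shrinking the datapath width at the cost of a bounded approximation error. The sketch below uses that interpretation with made-up pixel values; it is not the dissertation's exact architecture.

```python
def sad(block_a, block_b):
    """Exact sum of absolute differences between two pixel blocks."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def truncated_sad(block_a, block_b, drop_bits):
    """Approximate SAD keeping only the most significant bits of each
    pixel: the low `drop_bits` bits are truncated before differencing,
    and the result is rescaled to the original dose of magnitude."""
    return sum(abs((a >> drop_bits) - (b >> drop_bits))
               for a, b in zip(block_a, block_b)) << drop_bits

# Hypothetical 8-bit pixel values from two candidate blocks
a = [120, 64, 200, 33]
b = [118, 70, 190, 40]
exact = sad(a, b)
approx = truncated_sad(a, b, drop_bits=2)
```

Because most absolute differences in motion estimation are small, the per-pixel truncation error rarely changes which candidate block wins the SAD comparison, which is the observation the abstract leans on.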
Application of Lamendin's adult dental aging technique to a diverse skeletal sample.
Prince, Debra A; Ubelaker, Douglas H
2002-01-01
Lamendin et al. (1) proposed a technique to estimate age at death for adults by analyzing single-rooted teeth. They expressed age as a function of two factors: translucency of the tooth root and periodontosis (gingival regression). In their study, they analyzed 306 single-rooted teeth that were extracted at autopsy from 208 individuals of known age at death, all of whom were considered as having a French ancestry. Their sample consisted of 135 males, 73 females, 198 whites, and 10 blacks. The sample ranged in age from 22 to 90 years of age. By using a simple formula (A = 0.18 x P + 0.42 x T + 25.53, where A = Age in years, P = Periodontosis height x 100/root height, and T = Transparency height x 100/root height), Lamendin et al. were able to estimate age at death with a mean error of +/- 10 years on their working sample and +/- 8.4 years on a forensic control sample. Lamendin found this technique to work well with a French population, but did not test it outside of that sample area. This study tests the accuracy of this adult aging technique on a more diverse skeletal population, the Terry Collection housed at the Smithsonian's National Museum of Natural History. Our sample consists of 400 teeth from 94 black females, 72 white females, 98 black males, and 95 white males, ranging from 25 to 99 years. Lamendin's technique was applied to this sample to test its applicability to a population not of French origin. Providing results from a diverse skeletal population will aid in establishing the validity of this method to be used in forensic cases, its ideal purpose. Our results suggest that Lamendin's method estimates age fairly accurately outside of the French sample yielding a mean error of 8.2 years, standard deviation 6.9 years, and standard error of the mean 0.34 years. In addition, when ancestry and sex are accounted for, the mean errors are reduced for each group (black females, white females, black males, and white males). Lamendin et al. 
reported an inter-observer error of 9+/-1.8 and 10+/-2 years from two independent observers. Forty teeth were randomly remeasured from the Terry Collection in order to assess intra-observer error. From this retest, an intra-observer error of 6.5 years was detected.
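Lamendin's regression as quoted in the abstract is straightforward to apply directly; the tooth measurements below are hypothetical.

```python
def lamendin_age(periodontosis_height, transparency_height, root_height):
    """Lamendin et al. adult age estimate from a single-rooted tooth:
    A = 0.18*P + 0.42*T + 25.53, where P and T are the periodontosis
    and root-transparency heights expressed as percentages of the
    total root height."""
    p = periodontosis_height * 100.0 / root_height
    t = transparency_height * 100.0 / root_height
    return 0.18 * p + 0.42 * t + 25.53

# Hypothetical measurements, all in mm
age = lamendin_age(periodontosis_height=3.0,
                   transparency_height=6.0,
                   root_height=14.0)
```

The study's per-group mean errors suggest that, in practice, such an estimate should be reported with an interval of roughly plus or minus 8 years rather than as a point value.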
Yong-Feng Gao; Xi-Ming Sun; Changyun Wen; Wei Wang
2017-07-01
This paper is concerned with the problem of adaptive tracking control for a class of uncertain nonlinear systems with nonsymmetric input saturation and immeasurable states. The radial basis function of neural network (NN) is employed to approximate unknown functions, and an NN state observer is designed to estimate the immeasurable states. To analyze the effect of input saturation, an auxiliary system is employed. By the aid of adaptive backstepping technique, an adaptive tracking control approach is developed. Under the proposed adaptive tracking controller, the boundedness of all the signals in the closed-loop system is achieved. Moreover, distinct from most of the existing references, the tracking error can be bounded by an explicit function of design parameters and saturation input error. Finally, an example is given to show the effectiveness of the proposed method.
Application of identification techniques to remote manipulator system flight data
NASA Technical Reports Server (NTRS)
Shepard, G. D.; Lepanto, J. A.; Metzinger, R. W.; Fogel, E.
1983-01-01
This paper addresses the application of identification techniques to flight data from the Space Shuttle Remote Manipulator System (RMS). A description of the remote manipulator, including structural and control system characteristics, sensors, and actuators is given. A brief overview of system identification procedures is presented, and the practical aspects of implementing system identification algorithms are discussed. In particular, the problems posed by desampling rate, numerical error, and system nonlinearities are considered. Simulation predictions of damping, frequency, and system order are compared with values identified from flight data to support an evaluation of RMS structural and control system models. Finally, conclusions are drawn regarding the application of identification techniques to flight data obtained from a flexible space structure.
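A minimal example of the kind of identification the paper applies: fitting a discrete ARX model to input/output data by least squares. The second-order system below is invented for illustration and is far simpler than the RMS dynamics; real flight data would also require handling the sampling-rate, numerical-error, and nonlinearity issues the paper discusses.

```python
import numpy as np

def identify_second_order(u, y):
    """Least-squares fit of the ARX model
    y_k = a1*y_{k-1} + a2*y_{k-2} + b1*u_{k-1}
    from input/output records u, y."""
    Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])  # regressors
    theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
    return theta  # [a1, a2, b1]

# Simulate a known stable second-order system, then recover its parameters
rng = np.random.default_rng(1)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.5 * u[k - 1]
theta = identify_second_order(u, y)
```

With noise-free data the estimate is exact; with measurement noise the same fit is biased, which is one reason flight-data identification typically moves to instrumental-variable or prediction-error methods.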
Focus control enhancement and on-product focus response analysis methodology
NASA Astrophysics Data System (ADS)
Kim, Young Ki; Chen, Yen-Jen; Hao, Xueli; Samudrala, Pavan; Gomez, Juan-Manuel; Mahoney, Mark O.; Kamalizadeh, Ferhad; Hanson, Justin K.; Lee, Shawn; Tian, Ye
2016-03-01
With decreasing CDOF (Critical Depth Of Focus) for 20/14 nm technology and beyond, focus errors are becoming increasingly critical for on-product performance. Current on-product focus control techniques in high volume manufacturing are limited: it is difficult to define measurable focus error and optimize the focus response on product with existing methods, due to the lack of credible focus measurement methodologies. Next to developments in the imaging and focus control capability of scanners and general tool stability maintenance, on-product focus control improvements are also required to meet on-product imaging specifications. In this paper, we discuss focus monitoring, wafer (edge) fingerprint correction, and on-product focus budget analysis through a diffraction-based focus (DBF) measurement methodology. Several examples will be presented showing better focus response and control on product wafers. Also, a method will be discussed for a focus interlock automation system on product for a high volume manufacturing (HVM) environment.
New nonlinear control algorithms for multiple robot arms
NASA Technical Reports Server (NTRS)
Tarn, T. J.; Bejczy, A. K.; Yun, X.
1988-01-01
Multiple coordinated robot arms are modeled by considering the arms as closed kinematic chains and as a force-constrained mechanical system working on the same object simultaneously. In both formulations, a novel dynamic control method is discussed. It is based on feedback linearization and simultaneous output decoupling technique. By applying a nonlinear feedback and a nonlinear coordinate transformation, the complicated model of the multiple robot arms in either formulation is converted into a linear and output decoupled system. The linear system control theory and optimal control theory are used to design robust controllers in the task space. The first formulation has the advantage of automatically handling the coordination and load distribution among the robot arms. In the second formulation, it was found that by choosing a general output equation it became possible simultaneously to superimpose the position and velocity error feedback with the force-torque error feedback in the task space.
Fuzzy Adaptive Output Feedback Control of Uncertain Nonlinear Systems With Prescribed Performance.
Zhang, Jin-Xi; Yang, Guang-Hong
2018-05-01
This paper investigates the tracking control problem for a family of strict-feedback systems in the presence of unknown nonlinearities and immeasurable system states. A low-complexity adaptive fuzzy output feedback control scheme is proposed, based on a backstepping method. In the control design, a fuzzy adaptive state observer is first employed to estimate the unmeasured states. Then, a novel error transformation approach together with a new modification mechanism is introduced to guarantee the finite-time convergence of the output error to a predefined region and ensure the closed-loop stability. Compared with the existing methods, the main advantages of our approach are that: 1) without using extra command filters or auxiliary dynamic surface control techniques, the problem of explosion of complexity can still be addressed and 2) the design procedures are independent of the initial conditions. Finally, two practical examples are performed to further illustrate the above theoretic findings.
Fast and error-resilient coherent control in an atomic vapor
NASA Astrophysics Data System (ADS)
He, Yizun; Wang, Mengbing; Zhao, Jian; Qiu, Liyang; Wang, Yuzhuo; Fang, Yami; Zhao, Kaifeng; Wu, Saijun
2017-04-01
Nanosecond chirped pulses from an optical arbitrary waveform generator are applied to both invert and coherently split the D1 line population of potassium vapor within a laser focal volume of 2×10⁵ μm³. The inversion fidelity of f>96%, mainly limited by spontaneous emission during the nanosecond pulse, is inferred from both probe light transmission and superfluorescence emission. The nearly perfect inversion is uniformly achieved for laser intensity varying over an order of magnitude, and is tolerant to detuning errors of more than 1000 times the D1 transition linewidth. We further demonstrate enhanced intensity error resilience with multiple chirped pulses and ``universal composite pulses''. This fast and robust coherent control technique should find wide applications in the fields of quantum optics, laser cooling, and atom interferometry. This work is supported by the National Key Research Program of China under Grant No. 2016YFA0302000, and NNSFC under Grant No. 11574053.
Addressee Errors in ATC Communications: The Call Sign Problem
NASA Technical Reports Server (NTRS)
Monan, W. P.
1983-01-01
Communication errors involving aircraft call signs were portrayed in reports of 462 hazardous incidents voluntarily submitted to the ASRS during an approximately four-year period. These errors resulted in confusion, disorder, and uncoordinated traffic conditions and produced the following types of operational anomalies: altitude deviations, wrong-way headings, aborted takeoffs, go-arounds, runway incursions, missed crossing altitude restrictions, descents toward high terrain, and traffic conflicts in flight and on the ground. Analysis of the report set resulted in identification of five categories of errors involving call signs: (1) faulty radio usage techniques, (2) call sign loss or smearing due to frequency congestion, (3) confusion resulting from similar-sounding call signs, (4) airmen misses of call signs leading to failures to acknowledge or read back, and (5) controller failures regarding confirmation of acknowledgements or readbacks. These error categories are described in detail, and several associated hazard-mitigating measures that might be taken are considered.
Analysis of RDSS positioning accuracy based on RNSS wide area differential technique
NASA Astrophysics Data System (ADS)
Xing, Nan; Su, RanRan; Zhou, JianHua; Hu, XiaoGong; Gong, XiuQiang; Liu, Li; He, Feng; Guo, Rui; Ren, Hui; Hu, GuangMing; Zhang, Lei
2013-10-01
The BeiDou Navigation Satellite System (BDS) provides a Radio Navigation Service System (RNSS) as well as a Radio Determination Service System (RDSS). RDSS users obtain positioning by responding to Master Control Center (MCC) inquiries via signals transmitted through a GEO satellite transponder. The positioning result can be calculated with an elevation constraint by the MCC. The primary error sources affecting RDSS positioning accuracy are the RDSS signal transceiver delay, atmospheric transmission delay, and GEO satellite position error. During GEO orbit maneuvers, poor orbit forecast accuracy significantly impacts RDSS services. A real-time 3-D orbital correction method based on the wide-area differential technique is proposed to correct the orbital error. Results from observations show that the method can successfully improve positioning precision during orbital maneuvers, independent of the RDSS reference station. This improvement can reach up to 50%. Accurate calibration of the RDSS signal transceiver delay and a digital elevation map may play a critical role in high-precision RDSS positioning services.
An automatic tooth preparation technique: A preliminary study
NASA Astrophysics Data System (ADS)
Yuan, Fusong; Wang, Yong; Zhang, Yaopeng; Sun, Yuchun; Wang, Dangxiao; Lyu, Peijun
2016-04-01
The aim of this study is to validate the feasibility and accuracy of a new automatic tooth preparation technique in dental healthcare. An automatic tooth preparation robotic device with three-dimensional motion planning software was developed, which controlled an ultra-short pulse laser (USPL) beam (wavelength 1,064 nm, pulse width 15 ps, output power 30 W, and repeat frequency rate 100 kHz) to complete the tooth preparation process. A total of 15 freshly extracted human intact first molars were collected and fixed into a phantom head, and the target preparation shapes of these molars were designed using customised computer-aided design (CAD) software. The accuracy of tooth preparation was evaluated using the Geomagic Studio and Imageware software, and the preparation time of each tooth was recorded. Compared with the target preparation shape, the average shape error of the 15 prepared molars was 0.05-0.17 mm, the preparation depth error of the occlusal surface was approximately 0.097 mm, and the error of the convergence angle was approximately 1.0°. The average preparation time was 17 minutes. These results validated the accuracy and feasibility of the automatic tooth preparation technique.
Novel neural control for a class of uncertain pure-feedback systems.
Shen, Qikun; Shi, Peng; Zhang, Tianping; Lim, Cheng-Chew
2014-04-01
This paper is concerned with the problem of adaptive neural tracking control for a class of uncertain pure-feedback nonlinear systems. Using the implicit function theorem and the backstepping technique, a practical robust adaptive neural control scheme is proposed to guarantee that the tracking error converges to an adjustable neighborhood of the origin by choosing appropriate design parameters. In contrast to conventional Lyapunov-based design techniques, an alternative Lyapunov function is constructed for the development of the control law and learning algorithms. Differing from the existing results in the literature, the control scheme does not need to compute the derivatives of virtual control signals at each step of the backstepping design procedure. Furthermore, the scheme requires only the desired trajectory and its first derivative rather than its first n derivatives. In addition, a useful property of the radial basis function, which is used in the control design, is explored. Simulation results illustrate the effectiveness of the proposed techniques.
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
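The theoretical covariance the abstract refers to, (HᵀWH)⁻¹ for a linear weighted least squares problem, can be checked empirically by Monte Carlo: the sample covariance of the estimation errors over many noisy realizations matches it when the noise model is correct, and deviates when unmodeled errors are present. The matrices and noise levels below are invented for the sketch; this is not Frisbee's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear measurement model y = H x + v, with weight matrix W = R^{-1}
x_true = np.array([1.0, -2.0])
H = rng.normal(size=(50, 2))
sigma = 0.3
W = np.eye(50) / sigma**2

# Theoretical WLS state error covariance and estimator gain
P_theory = np.linalg.inv(H.T @ W @ H)
G = P_theory @ H.T @ W                 # gain mapping y -> x_hat

# Sample covariance of the actual estimation errors over many realizations
errors = []
for _ in range(2000):
    y = H @ x_true + rng.normal(scale=sigma, size=50)
    errors.append(G @ y - x_true)
P_emp = np.cov(np.array(errors).T)

print("theoretical variances:", np.diag(P_theory))
print("empirical variances:  ", np.diag(P_emp))
```

When the simulated noise is deliberately made larger than the σ assumed in W, the empirical variances exceed the theoretical ones, which is exactly the mismatch an empirical covariance is meant to expose.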
On-Error Training (Book Excerpt).
ERIC Educational Resources Information Center
Fukuda, Ryuji
1985-01-01
This excerpt from "Managerial Engineering: Techniques for Improving Quality and Productivity in the Workplace" describes the development, objectives, and use of On-Error Training (OET), a method which trains workers to learn from their errors. Also described is New Joharry's Window, a performance-error data analysis technique used in…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Somayaji, Anil B.; Amai, Wendy A.; Walther, Eleanor A.
This report describes the successful extension of artificial immune systems from the domain of computer security to the domain of real-time control systems for robotic vehicles. A biologically inspired computer immune system was added to the control system of two different mobile robots. As an additional layer in a multi-layered approach, the immune system is complementary to traditional error detection and error handling techniques. This can be thought of as biologically inspired defense in depth. We demonstrated that an immune system can be added with very little application developer effort, resulting in little to no performance impact. The methods described here are extensible to any system that processes a sequence of data through a software interface.
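Sequence-based artificial immune systems of this kind typically build a database of short windows of normal behavior ("self") and flag windows never seen during normal operation. The sketch below is a simplified illustration of that idea with made-up command names, not the Sandia implementation:

```python
# Sliding-window "self" database over normal command sequences
WINDOW = 3

def build_self(normal_trace, window=WINDOW):
    """Collect every length-`window` subsequence seen in normal operation."""
    return {tuple(normal_trace[i:i + window])
            for i in range(len(normal_trace) - window + 1)}

def anomaly_count(trace, self_db, window=WINDOW):
    """Number of windows in the trace never seen during normal operation."""
    windows = (tuple(trace[i:i + window])
               for i in range(len(trace) - window + 1))
    return sum(1 for w in windows if w not in self_db)

normal = ["read", "plan", "move", "read", "plan", "move", "read", "stop"]
self_db = build_self(normal)

print(anomaly_count(["read", "plan", "move", "read"], self_db))  # familiar
print(anomaly_count(["move", "move", "move", "read"], self_db))  # novel
```

A control loop would raise an alarm (or switch to a safe mode) when the anomaly count over a recent window exceeds a threshold, complementing conventional error detection exactly as the report describes.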
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Gupta, N. K.; Hansen, R. S.
1978-01-01
An integrated approach to rotorcraft system identification is described. This approach consists of sequential application of (1) data filtering to estimate states of the system and sensor errors, (2) model structure estimation to isolate significant model effects, and (3) parameter identification to quantify the coefficients of the model. An input design algorithm is described which can be used to design control inputs which maximize parameter estimation accuracy. Details of each aspect of the rotorcraft identification approach are given. Examples of both simulated and actual flight data processing are given to illustrate each phase of processing. The procedure is shown to provide a means of calibrating sensor errors in flight data, quantifying high-order state variable models from the flight data, and consequently computing related stability and control design models.
Li, Zhijun; Su, Chun-Yi
2013-09-01
In this paper, adaptive neural network control is investigated for single-master-multiple-slaves teleoperation in consideration of time delays and input dead-zone uncertainties for multiple mobile manipulators carrying a common object in a cooperative manner. Firstly, concise dynamics of teleoperation systems consisting of a single master robot, multiple coordinated slave robots, and the object are developed in the task space. To handle asymmetric time-varying delays in communication channels and unknown asymmetric input dead zones, the nonlinear dynamics of the teleoperation system are transformed into two subsystems through feedback linearization: local master or slave dynamics including the unknown input dead zones and delayed dynamics for the purpose of synchronization. Then, a model reference neural network control strategy based on linear matrix inequalities (LMI) and adaptive techniques is proposed. The developed control approach ensures that the defined tracking errors converge to zero whereas the coordination internal force errors remain bounded and can be made arbitrarily small. Throughout this paper, stability analysis is performed via explicit Lyapunov techniques under specific LMI conditions. The proposed adaptive neural network control scheme is robust against motion disturbances, parametric uncertainties, time-varying delays, and input dead zones, which is validated by simulation studies.
NASA Technical Reports Server (NTRS)
Sutliff, Daniel L.; Remington, Paul J.; Walker, Bruce E.
2003-01-01
A test program to demonstrate simplification of Active Noise Control (ANC) systems relative to standard techniques was performed on the NASA Glenn Active Noise Control Fan from May through September 2001. The target mode was the m = 2 circumferential mode generated by the rotor-stator interaction at 2BPF. Seven radials (combined inlet and exhaust) were present at this condition. Several different error-sensing strategies were implemented. Integration of the error sensors with passive treatment was investigated. These were: (i) an in-duct linear axial array, (ii) an in-duct steering array, (iii) a pylon-mounted array, and (iv) a near-field boom array. The effect of incorporating passive treatment was investigated, as well as reducing the actuator count. These simplified systems were compared to a fully specified ANC system. Modal data acquired using the Rotating Rake are presented for a range of corrected fan rpm. Simplified control has been demonstrated to be possible but requires a well-known and dominant mode signature. The documented results herein are part III of a three-part series of reports with the same base title. Parts I and II document the control system and error-sensing design and implementation.
NASA Astrophysics Data System (ADS)
Zubarev, A. E.; Nadezhdina, I. E.; Brusnikin, E. S.; Karachevtseva, I. P.; Oberst, J.
2016-09-01
The new technique for generation of coordinate control point networks based on photogrammetric processing of heterogeneous planetary images (obtained at different time, scale, with different illumination or oblique view) is developed. The technique is verified with the example for processing the heterogeneous information obtained by remote sensing of Ganymede by the spacecraft Voyager-1, -2 and Galileo. Using this technique the first 3D control point network for Ganymede is formed: the error of the altitude coordinates obtained as a result of adjustment is less than 5 km. The new control point network makes it possible to obtain basic geodesic parameters of the body (axes size) and to estimate forced librations. On the basis of the control point network, digital terrain models (DTMs) with different resolutions are generated and used for mapping the surface of Ganymede with different levels of detail (Zubarev et al., 2015b).
NASA Astrophysics Data System (ADS)
Zait, Eitan; Ben-Zvi, Guy; Dmitriev, Vladimir; Oshemkov, Sergey; Pforr, Rainer; Hennig, Mario
2006-05-01
Intra-field CD variation is, besides OPC errors, a main contributor to the total CD variation budget in IC manufacturing. It is caused mainly by mask CD errors. In advanced memory device manufacturing, the minimum features are close to the resolution limit, resulting in large mask error enhancement factors and hence large intra-field CD variations. Consequently, tight CD control (CDC) of the mask features is required, which significantly increases the cost of the mask and hence the litho process costs. Alternatively, there is a search for techniques (1) which allow improving the intra-field CD control for a given moderate mask and scanner imaging performance. Recently a new technique (2) has been proposed which is based on correcting the printed CD by applying shading elements generated in the substrate bulk of the mask by ultrashort pulsed laser exposure. The blank transmittance across a feature is controlled by changing the density of light-scattering pixels. The technique has been demonstrated to be very successful in correcting intra-field CD variations caused by the mask and the projection system (2). A key application criterion of this technique in device manufacturing is the stability of the absorbing pixels against the DUV light irradiation applied during mask projection in scanners. This paper describes the procedures and results of such an investigation. To do this with acceptable effort, a special experimental setup was chosen, allowing an evaluation within a reasonable time. A 193 nm excimer laser with a pulse duration of 25 ns was used for blank irradiation. An accumulated dose equivalent to 100,000 300 mm wafer exposures was applied to half-tone PSM mask areas with and without CDC shadowing elements. This allows the discrimination of effects appearing in treated and untreated glass regions.
Several intensities have been investigated to define an acceptable threshold intensity to avoid glass compaction or generation of color centers in the glass. The impact of the irradiation on the mask transmittance of both areas has been studied by measurements of the printed CD on wafer using a wafer scanner before and after DUV irradiation.
Empirical State Error Covariance Matrix for Batch Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joe
2015-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
Mekid, Samir; Vacharanukul, Ketsaya
2006-01-01
To achieve dynamic error compensation in CNC machine tools, a non-contact laser probe capable of dimensional measurement of a workpiece while it is being machined has been developed and is presented in this paper. The measurements are automatically fed back to the machine controller for intelligent error compensation. Based on a well-resolved laser Doppler technique and real-time data acquisition, the probe delivers a very promising dimensional accuracy of a few microns over a range of 100 mm. The developed optical measuring apparatus employs a differential laser Doppler arrangement allowing acquisition of information from the workpiece surface. In addition, the measurements are traceable to standards of frequency, allowing higher precision.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estep, Donald
2015-11-30
This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.
Improving estimates of streamflow characteristics by using Landsat-1 imagery
Hollyday, Este F.
1976-01-01
Imagery from the first Earth Resources Technology Satellite (renamed Landsat-1) was used to discriminate physical features of drainage basins in an effort to improve equations used to estimate streamflow characteristics at gaged and ungaged sites. Records of 20 gaged basins in the Delmarva Peninsula of Maryland, Delaware, and Virginia were analyzed for 40 statistical streamflow characteristics. Equations relating these characteristics to basin characteristics were obtained by a technique of multiple linear regression. A control group of equations contains basin characteristics derived from maps. An experimental group of equations contains basin characteristics derived from maps and imagery. Characteristics from imagery were forest, riparian (streambank) vegetation, water, and combined agricultural and urban land use. These basin characteristics were isolated photographically by techniques of film-density discrimination. The area of each characteristic in each basin was measured photometrically. Comparison of equations in the control group with corresponding equations in the experimental group reveals that for 12 out of 40 equations the standard error of estimate was reduced by more than 10 percent. As an example, the standard error of estimate of the equation for the 5-year recurrence-interval flood peak was reduced from 46 to 32 percent. Similarly, the standard error of the equation for the mean monthly flow for September was reduced from 32 to 24 percent, the standard error for the 7-day, 2-year recurrence low flow was reduced from 136 to 102 percent, and the standard error for the 3-day, 2-year flood volume was reduced from 30 to 12 percent. It is concluded that data from Landsat imagery can substantially improve the accuracy of estimates of some streamflow characteristics at sites in the Delmarva Peninsula.
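The comparison described above, a "control" regression on map-derived basin characteristics versus an "experimental" regression that adds an imagery-derived variable, can be sketched with synthetic data. The variable names, coefficients, and noise level below are invented; the point is only how the residual standard error of estimate is computed and compared:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20  # 20 gaged basins, as in the study

# Hypothetical basin characteristics: a map-derived drainage area and an
# imagery-derived forest fraction, with a flow statistic depending on both
area = rng.uniform(10.0, 200.0, n)
forest = rng.uniform(0.0, 1.0, n)
flow = 2.0 * area - 50.0 * forest + rng.normal(scale=5.0, size=n)

def std_error_of_estimate(X, y):
    """Residual standard error of an OLS fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    dof = len(y) - A.shape[1]          # degrees of freedom
    return np.sqrt(resid @ resid / dof)

se_control = std_error_of_estimate(area.reshape(-1, 1), flow)
se_experimental = std_error_of_estimate(np.column_stack([area, forest]), flow)
print(f"control SE: {se_control:.2f}, with imagery variable: {se_experimental:.2f}")
```

Because the imagery variable genuinely explains part of the variation, adding it reduces the standard error of estimate, which is the effect the study measured for 12 of its 40 equations.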
Development of an evolutionary simulator and an overall control system for intelligent wheelchair
NASA Astrophysics Data System (ADS)
Imai, Makoto; Kawato, Koji; Hamagami, Tomoki; Hirata, Hironori
The goal of this research is to develop an intelligent wheelchair (IWC) system which aids safe indoor mobility for elderly and disabled people, with a new conceptual architecture which realizes autonomy, cooperativeness, and collaboration behavior. In order to develop the IWC system in a real environment, we need design tools and a flexible architecture. In particular, as the most significant ones, this paper describes two key techniques: an evolutionary simulation and an overall control mechanism. The evolutionary simulation technique corrects the error between the virtual environment in a simulator and the real one during the learning of an IWC agent, and coevolves with the agent. The overall control mechanism is implemented with subsumption architecture, which is employed in an autonomous robot controller. By using these techniques in both simulations and experiments, we confirm that our IWC system acquires autonomy, cooperativeness, and collaboration behavior efficiently.
A linear shift-invariant image preprocessing technique for multispectral scanner systems
NASA Technical Reports Server (NTRS)
Mcgillem, C. D.; Riemer, T. E.
1973-01-01
A linear shift-invariant image preprocessing technique is examined which requires no specific knowledge of any parameter of the original image and which is sufficiently general to allow the effective radius of the composite imaging system to be arbitrarily shaped and reduced, subject primarily to the noise power constraint. In addition, the size of the point-spread function of the preprocessing filter can be arbitrarily controlled, thus minimizing truncation errors.
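A one-dimensional sketch of such a preprocessing filter is a regularized inverse of the imaging system's point-spread function, where the regularization constant plays the role of the noise-power constraint. The PSF width, signal, and constant below are invented for illustration:

```python
import numpy as np

n = 128
x = np.zeros(n)
x[40] = 1.0
x[80] = 1.0                                   # two point sources

# Gaussian point-spread function of the imaging system, centered at index 0
psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
psf /= psf.sum()
psf = np.roll(psf, -n // 2)

H = np.fft.fft(psf)
blurred = np.real(np.fft.ifft(np.fft.fft(x) * H))

# Preprocessing filter: regularized inverse, G = H* / (|H|^2 + lam).
# lam acts as the noise-power constraint: larger lam suppresses the
# noise-amplifying high frequencies at the cost of less sharpening.
lam = 1e-3
G = np.conj(H) / (np.abs(H) ** 2 + lam)
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * G))

print("blurred peak:", blurred.max(), "restored peak:", restored.max())
```

The composite system (blur followed by the filter) has a narrower effective point-spread radius than the blur alone, which is the design freedom the abstract describes, traded off against noise amplification via `lam`.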
Simultaneous calibrations of Voyager celestial and inertial attitude control systems in flight
NASA Technical Reports Server (NTRS)
Jahanshahi, M. H.
1982-01-01
A mathematical description of the data reduction technique used to simultaneously calibrate the Voyager celestial and inertial attitude control subsystems is given. It is shown that knowledge of the spacecraft limit cycle motion, as measured by the celestial and the inertial sensors, is adequate to result in the estimates of a selected number of errors which adversely affect the spacecraft attitude knowledge.
NASA Technical Reports Server (NTRS)
Belcastro, Celeste M.
1989-01-01
Control systems for advanced aircraft, especially those with relaxed static stability, will be critical to flight and will, therefore, have very high reliability specifications which must be met for adverse as well as nominal operating conditions. Adverse conditions can result from electromagnetic disturbances caused by lightning, high energy radio frequency transmitters, and nuclear electromagnetic pulses. Tools and techniques must be developed to verify the integrity of the control system in adverse operating conditions. The most difficult and elusive perturbations to computer-based control systems caused by an electromagnetic environment (EME) are functional error modes that involve no component damage. These error modes are collectively known as upset, can occur simultaneously in all of the channels of a redundant control system, and are software dependent. A methodology is presented for performing upset tests on a multichannel control system, and considerations are discussed for the design of upset tests to be conducted in the lab on fault tolerant control systems operating in a closed loop with a simulated plant.
Induction motor speed control using varied duty cycle terminal voltage via PI controller
NASA Astrophysics Data System (ADS)
Azwin, A.; Ahmed, S.
2018-03-01
This paper deals with a PI speed controller for the three-phase induction motor using the PWM technique. The PWM-generated signal is utilized for a voltage source inverter with an optimal duty cycle on a simplified induction motor model. A control algorithm for generating the PWM control signal is developed. The obtained results show that the steady-state error and overshoot of the developed system remain within limits under different speed and load conditions. The robustness of the control performance shows potential for induction motor performance improvement.
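The zero steady-state error that a PI speed loop provides can be seen in a minimal discrete-time simulation. All parameter values below are illustrative, not taken from the paper, and the plant is a generic first-order motor model with inverter saturation neglected for clarity:

```python
# Minimal discrete-time PI speed loop on a first-order motor model
dt, t_end = 0.001, 2.0
a, b = 5.0, 40.0           # plant: dw/dt = -a*w + b*u
kp, ki = 2.0, 20.0         # PI gains
ref = 100.0                # speed setpoint, rad/s

w, integ = 0.0, 0.0
for _ in range(int(t_end / dt)):
    err = ref - w
    integ += err * dt                 # integral of the speed error
    u = kp * err + ki * integ         # PI control law (voltage command)
    w += dt * (-a * w + b * u)        # forward-Euler plant update

print(f"final speed: {w:.2f} rad/s (setpoint {ref})")
```

The integral term drives the error to zero despite the plant needing a nonzero steady-state input (u_ss = a·ref/b here); a proportional-only controller would leave a permanent offset.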
Robust Control Analysis of Hydraulic Turbine Speed
NASA Astrophysics Data System (ADS)
Jekan, P.; Subramani, C.
2018-04-01
An effective control strategy for the hydro-turbine governor in a real-time scenario is the objective of this paper. Considering the complex dynamic characteristics and the uncertainty of the hydro-turbine governor model, and taking the static and dynamic performance of the governing system as the ultimate goal, the designed logic combines classical PID control theory with artificial intelligence to obtain the desired output. The controller uses variable control techniques; therefore, its parameters can be adaptively adjusted according to information about the control error signal.
Realizable optimal control for a remotely piloted research vehicle. [stability augmentation
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1980-01-01
The design of a control system using the linear-quadratic regulator (LQR) control law theory for time invariant systems in conjunction with an incremental gradient procedure is presented. The incremental gradient technique reduces the full-state feedback controller design, generated by the LQR algorithm, to a realizable design. With a realizable controller, the feedback gains are based only on the available system outputs instead of being based on the full-state outputs. The design is for a remotely piloted research vehicle (RPRV) stability augmentation system. The design includes methods for accounting for noisy measurements, discrete controls with zero-order-hold outputs, and computational delay errors. Results from simulation studies of the response of the RPRV to a step in the elevator and frequency analysis techniques are included to illustrate these abnormalities and their influence on the controller design.
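The full-state LQR gain that such a design starts from can be computed by iterating the discrete-time Riccati difference equation to steady state. The RPRV model itself is not given in the abstract, so a double-integrator plant and the weights below are stand-ins:

```python
import numpy as np

# Discretized double integrator as a stand-in plant
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([1.0, 0.1])        # state weighting
R = np.array([[0.1]])          # control weighting

# Iterate P <- Q + A^T P (A - B K) with K = (R + B^T P B)^{-1} B^T P A
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# Full-state feedback u = -K x; the closed loop must be stable
eigs = np.linalg.eigvals(A - B @ K)
print("LQR gain:", K, "closed-loop |eig|:", np.abs(eigs))
```

The incremental gradient step the abstract describes would then restrict this full-state K to the measurable outputs; the sketch stops at the full-state gain, which is the LQR algorithm's direct product.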
Ariyama, Kaoru; Kadokura, Masashi; Suzuki, Tadanao
2008-01-01
Techniques to determine the geographic origin of foods have been developed for various agricultural and fishery products, and they have used various principles. Some of these techniques are already in use for checking the authenticity of the labeling. Many are based on multielement analysis and chemometrics. We have developed such a technique to determine the geographic origin of onions (Allium cepa L.). This technique, which determines whether an onion is from outside Japan, is designed for onions labeled as having a geographic origin of Hokkaido, Hyogo, or Saga, the main onion production areas in Japan. However, estimations of discrimination errors for this technique have not been fully conducted; they have been limited to those for discrimination models and do not include analytical errors. Interlaboratory studies were conducted to estimate the analytical errors of the technique. Four collaborators each determined 11 elements (Na, Mg, P, Mn, Zn, Rb, Sr, Mo, Cd, Cs, and Ba) in 4 test materials of fresh and dried onions. Discrimination errors in this technique were estimated by summing (1) individual differences within lots, (2) variations between lots from the same production area, and (3) analytical errors. The discrimination errors for onions from Hokkaido, Hyogo, and Saga were estimated to be 2.3, 9.5, and 8.0%, respectively. Those for onions from abroad in determinations targeting Hokkaido, Hyogo, and Saga were estimated to be 28.2, 21.6, and 21.9%, respectively.
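The chemometric discrimination step underlying such techniques can be sketched as a Mahalanobis-distance classifier on multielement profiles. The element means, covariance, and class labels below are entirely synthetic (two elements instead of the paper's eleven); only the structure of the error-rate estimate mirrors the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic element-concentration means for two hypothetical production areas
mean_a, mean_b = np.array([3.0, 1.0]), np.array([1.0, 2.5])
cov = np.array([[0.3, 0.05], [0.05, 0.2]])   # shared within-area covariance

cov_inv = np.linalg.inv(cov)

def classify(x):
    """Assign a sample to the area with the smaller Mahalanobis distance."""
    d_a = (x - mean_a) @ cov_inv @ (x - mean_a)
    d_b = (x - mean_b) @ cov_inv @ (x - mean_b)
    return "A" if d_a < d_b else "B"

# Estimate the discrimination error rate on fresh samples from area A
test_a = rng.multivariate_normal(mean_a, cov, 1000)
err = sum(classify(x) != "A" for x in test_a) / 1000
print(f"estimated discrimination error for area A: {err:.1%}")
```

The paper's total discrimination error additionally folds in between-lot variation and the analytical errors measured in the interlaboratory study; this sketch covers only the within-lot classification part.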
Loop shaping design for tracking performance in machine axes.
Schinstock, Dale E; Wei, Zhouhong; Yang, Tao
2006-01-01
A modern interpretation of classical loop shaping control design methods is presented in the context of tracking control for linear motor stages. Target applications include noncontacting machines such as laser cutters and markers, water jet cutters, and adhesive applicators. The methods are directly applicable to the common PID controller and are pertinent to many electromechanical servo actuators other than linear motors. In addition to explicit design techniques a PID tuning algorithm stressing the importance of tracking is described. While the theory behind these techniques is not new, the analysis of their application to modern systems is unique in the research literature. The techniques and results should be important to control practitioners optimizing PID controller designs for tracking and in comparing results from classical designs to modern techniques. The methods stress high-gain controller design and interpret what this means for PID. Nothing in the methods presented precludes the addition of feedforward control methods for added improvements in tracking. Laboratory results from a linear motor stage demonstrate that with large open-loop gain very good tracking performance can be achieved. The resultant tracking errors compare very favorably to results from similar motions on similar systems that utilize much more complicated controllers.
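The high-gain point can be illustrated with a toy discrete-time simulation (a pure-mass plant standing in for a linear motor stage; the PID gains are hypothetical, not the paper's tuning):

```python
def simulate_pid(kp, ki, kd, dt=1e-3, T=2.0, mass=1.0):
    """Track a ramp reference r(t) = t with a PID force on a pure mass,
    returning the largest tracking error seen after the transient (t > T/2)."""
    x = v = integ = prev_e = 0.0
    max_err_late = 0.0
    for i in range(int(T / dt)):
        t = i * dt
        e = t - x                           # ramp reference minus position
        integ += e * dt
        deriv = (e - prev_e) / dt
        prev_e = e
        u = kp * e + ki * integ + kd * deriv
        v += (u / mass) * dt                # semi-implicit Euler integration
        x += v * dt
        if t > T / 2:
            max_err_late = max(max_err_late, abs(e))
    return max_err_late

low = simulate_pid(kp=100.0, ki=200.0, kd=20.0)
high = simulate_pid(kp=10000.0, ki=20000.0, kd=200.0)   # larger loop gain
```

With the higher-gain (still well-damped) loop, the residual tracking error is orders of magnitude smaller, mirroring the paper's emphasis on large open-loop gain.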
MRAC Control with Prior Model Knowledge for Asymmetric Damaged Aircraft
Zhang, Jing
2015-01-01
This paper develops a novel state-tracking multivariable model reference adaptive control (MRAC) technique utilizing prior knowledge of plant models to recover the control performance of an aircraft with asymmetric structural damage. A modification of the linear model representation is given. With prior knowledge of structural damage, a polytope linear parameter varying (LPV) model is derived to cover all concerned damage conditions. An MRAC method is developed for the polytope model, for which stability and asymptotic error convergence are theoretically proved. The proposed technique reduces the number of parameters to be adapted, decreasing computational cost and requiring less input information. The method is validated by simulations on the NASA generic transport model (GTM) with damage. PMID:26180839
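The core MRAC mechanism the abstract builds on can be sketched in the scalar case (this is the textbook Lyapunov-style adaptation law, not the paper's polytope LPV design; plant and adaptation numbers are hypothetical):

```python
import math

def mrac_demo(a=2.0, b=1.0, am=-4.0, bm=4.0, gamma=10.0, dt=1e-3, T=20.0):
    """Scalar MRAC sketch: plant xdot = a*x + b*u with 'a' unknown to the
    controller; reference model xmdot = am*xm + bm*r; control u = kx*x + kr*r
    with gradient adaptation of (kx, kr), assuming b > 0."""
    x = xm = e = 0.0
    kx = kr = 0.0
    for i in range(int(T / dt)):
        r = math.sin(i * dt)          # persistently exciting reference
        e = x - xm                    # model-tracking error
        u = kx * x + kr * r
        kx -= gamma * e * x * dt      # adaptation laws drive e toward 0
        kr -= gamma * e * r * dt
        x += (a * x + b * u) * dt
        xm += (am * xm + bm * r) * dt
    return abs(e), kx, kr

final_err, kx, kr = mrac_demo()
```

The ideal gains here are kx* = (am - a)/b = -6 and kr* = bm/b = 4; with a persistently exciting reference the adapted gains drift toward those values while the tracking error decays.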
Dynamic learning from adaptive neural network control of a class of nonaffine nonlinear systems.
Dai, Shi-Lu; Wang, Cong; Wang, Min
2014-01-01
This paper studies the problem of learning from adaptive neural network (NN) control of a class of nonaffine nonlinear systems in uncertain dynamic environments. In the control design process, a stable adaptive NN tracking control design technique is proposed for the nonaffine nonlinear systems with a mild assumption by combining a filtered tracking error with the implicit function theorem, input-to-state stability, and the small-gain theorem. The proposed stable control design technique not only overcomes the difficulty in controlling nonaffine nonlinear systems but also relaxes constraint conditions of the considered systems. In the learning process, the partial persistent excitation (PE) condition of radial basis function NNs is satisfied during tracking control to a recurrent reference trajectory. Under the PE condition and an appropriate state transformation, the proposed adaptive NN control is shown to be capable of acquiring knowledge on the implicit desired control input dynamics in the stable control process and of storing the learned knowledge in memory. Subsequently, an NN learning control design technique that effectively exploits the learned knowledge without re-adapting to the controller parameters is proposed to achieve closed-loop stability and improved control performance. Simulation studies are performed to demonstrate the effectiveness of the proposed design techniques.
Accurate fluid force measurement based on control surface integration
NASA Astrophysics Data System (ADS)
Lentink, David
2018-01-01
Nonintrusive 3D fluid force measurements are still challenging to conduct accurately for freely moving animals, vehicles, and deforming objects. Two techniques address this: 3D particle image velocimetry (PIV) and a new technique, the aerodynamic force platform (AFP). Both rely on the control volume integral for momentum; whereas PIV requires numerical integration of flow fields, the AFP performs the integration mechanically based on rigid walls that form the control surface. The accuracy of both PIV and AFP measurements based on the control surface integration is thought to hinge on determining the unsteady body force associated with the acceleration of the volume of displaced fluid. Here, I introduce a set of non-dimensional error ratios to show which fluid and body parameters make the error negligible. The unsteady body force is insignificant in all conditions where the average density of the body is much greater than the density of the fluid, e.g., in gas. Whenever a strongly deforming body experiences significant buoyancy and acceleration, the error is significant. Remarkably, this error can be entirely corrected for with an exact factor provided that the body has a sufficiently homogeneous density or acceleration distribution, which is common in liquids. The correction factor for omitting the unsteady body force, 1 - ρf/(ρb + ρf) = ρb/(ρb + ρf), depends only on the fluid, ρf, and body, ρb, density. Whereas these straightforward solutions work even at the liquid-gas interface in a significant number of cases, they do not work for generalized bodies undergoing buoyancy in combination with appreciable body density inhomogeneity, volume change (PIV), or volume rate-of-change (PIV and AFP). In these less common cases, the 3D body shape needs to be measured and resolved in time and space to estimate the unsteady body force.
The analysis shows that the unsteady body force can be accounted for in a straightforward manner, enabling non-intrusive and accurate fluid force determination in most applications.
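A small numeric sketch of a density-ratio correction factor of the form ρb/(ρb + ρf), which as described above depends only on the body and fluid densities; the densities used are illustrative:

```python
def correction_factor(rho_body, rho_fluid):
    """Density-ratio correction factor 1 - rho_f/(rho_b + rho_f),
    which equals rho_b/(rho_b + rho_f)."""
    return 1.0 - rho_fluid / (rho_body + rho_fluid)

bird_in_air = correction_factor(rho_body=1000.0, rho_fluid=1.2)      # ~1: negligible
fish_in_water = correction_factor(rho_body=1000.0, rho_fluid=1000.0)  # 0.5: large
```

The two cases mirror the abstract's point: for a dense body in gas the factor is essentially 1 (no correction needed), while for a near-neutrally-buoyant body in liquid it deviates strongly from 1.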
Self-Calibration Approach for Mixed Signal Circuits in Systems-on-Chip
NASA Astrophysics Data System (ADS)
Jung, In-Seok
MOSFET scaling has served industry very well for a few decades by providing improvements in transistor performance, power, and cost. However, modern systems-on-chip (SOCs) incur high test complexity and cost due to several issues, such as limited pin count and the integration of analog and digital mixed-signal circuits. Self-calibration is therefore a promising method to improve yield and reduce manufacturing cost by simplifying the test process, because process variation effects can be addressed by means of a self-calibration technique. Since prior published calibration techniques were developed for specific target applications, they are not easily reused for other applications. To solve the aforementioned issues, this dissertation proposes several novel self-calibration design techniques for mixed-signal circuits, applied to an analog-to-digital converter (ADC) to reduce mismatch error and improve performance. ADCs are essential components in SOCs, and the proposed self-calibration approach also compensates for process variations. The proposed approach targets the successive approximation (SA) ADC. First, the offset error of the comparator in the SA-ADC is reduced by enabling a capacitor array at the input nodes for better matching. In addition, auxiliary capacitors for each capacitor of the DAC in the SA-ADC are controlled by a synthesized digital controller to minimize the mismatch error of the DAC. Since the proposed technique is applied in the foreground, the power overhead in the SA-ADC case is minimal because the calibration circuit is deactivated during normal operation. Another benefit of the proposed technique is that the offset voltage of the comparator is continuously adjusted at every one-bit decision step, because not only the inherent offset voltage of the comparator but also the mismatch of the DAC are compensated simultaneously.
The synthesized digital calibration control circuit operates in foreground mode, and the controller has been highly optimized for low power and better performance with a simplified structure. In addition, to increase the sampling clock frequency of the proposed self-calibration approach, a novel variable clock period method is proposed. To achieve high-speed SAR operation, the variable clock period technique is used to reduce not only peak current but also die area. The technique removes wasted conversion time and readily extends the SAR operation speed. To verify and demonstrate the proposed techniques, a prototype charge-redistribution SA-ADC with the proposed self-calibration is implemented in a 130 nm standard CMOS process. The prototype circuit's silicon area is 0.0715 mm² and it consumes 4.62 mW from a 1.2 V power supply.
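The SAR conversion loop underlying this work can be sketched as an ideal binary search (an uncalibrated sketch with a perfect DAC; real capacitor mismatch and comparator offset are precisely what the proposed calibration corrects):

```python
def sar_convert(vin, vref=1.2, bits=8):
    """Ideal successive-approximation conversion: binary-search the DAC
    code from MSB to LSB, one comparator decision per bit."""
    code = 0
    for bit in range(bits - 1, -1, -1):
        trial = code | (1 << bit)
        vdac = vref * trial / (1 << bits)   # ideal DAC level for this trial
        if vin >= vdac:                     # comparator keeps the trial bit
            code = trial
    return code

mid_code = sar_convert(0.6)   # half of the 1.2 V reference -> mid-scale code
```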
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Cabral, Hermano A.; He, Jiali
1997-01-01
Bootstrap Hybrid Decoding (BHD) (Jelinek and Cocke, 1971) is a coding/decoding scheme that adds extra redundancy to a set of convolutionally encoded codewords and uses this redundancy to provide reliability information to a sequential decoder. Theoretical results indicate that the bit error rate (BER) performance of BHD is close to that of Turbo-codes, without some of their drawbacks. In this report we study the use of the Multiple Stack Algorithm (MSA) (Chevillat and Costello, Jr., 1977) as the underlying sequential decoding algorithm in BHD, which makes possible an iterative version of BHD.
Errors in the Extra-Analytical Phases of Clinical Chemistry Laboratory Testing.
Zemlin, Annalise E
2018-04-01
The total testing process consists of various phases from the pre-preanalytical to the post-postanalytical phase, the so-called brain-to-brain loop. With improvements in analytical techniques and efficient quality control programmes, most laboratory errors now occur in the extra-analytical phases. There has been recent interest in these errors with numerous publications highlighting their effect on service delivery, patient care and cost. This interest has led to the formation of various working groups whose mission is to develop standardized quality indicators which can be used to measure the performance of service of these phases. This will eventually lead to the development of external quality assessment schemes to monitor these phases in agreement with ISO15189:2012 recommendations. This review focuses on potential errors in the extra-analytical phases of clinical chemistry laboratory testing, some of the studies performed to assess the severity and impact of these errors and processes that are in place to address these errors. The aim of this review is to highlight the importance of these errors for the requesting clinician.
NASA Astrophysics Data System (ADS)
Ewan, B. C. R.; Ireland, S. N.
2000-12-01
Acoustic pyrometry uses the temperature dependence of sound speed in materials to measure temperature. This is normally achieved by measuring the transit time for a sound signal over a known path length and applying the material's relation between temperature and velocity to extract an "average" temperature. Sources of error associated with the measurement of mean transit time when implementing the technique in gases are discussed, one of the principal causes being background noise in typical industrial environments. A number of transmitted signal and processing strategies applicable in this area are examined, and the expected error in mean transit time associated with each technique is quantified. Transmitted signals included pulses, pure frequencies, chirps, and pseudorandom binary sequences (PRBS), while processing involved edge detection and correlation. Errors arise through the misinterpretation of the positions of edge arrivals or correlation peaks due to instantaneous deviations associated with background noise, and these become more severe as signal-to-noise amplitude ratios decrease. Population errors in the mean transit time are estimated for the different measurement strategies, and it is concluded that PRBS transmission combined with correlation can provide the lowest errors when operating in high-noise environments. The operation of an instrument based on PRBS transmitted signals is described and test results under controlled noise conditions are presented. These confirm the value of the strategy and demonstrate that measurements can be made with signal-to-noise amplitude ratios down to 0.5.
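The PRBS-plus-correlation strategy can be sketched as follows (LFSR taps, delay in samples, and the noise level are all hypothetical; a real instrument would convert the recovered delay to a transit time and then to temperature):

```python
import random

def prbs(length, taps=(7, 6), nbits=7, seed=0b1010101):
    """±1 pseudorandom binary sequence from a small Fibonacci LFSR."""
    state, out = seed, []
    for _ in range(length):
        bit = ((state >> (taps[0] - 1)) ^ (state >> (taps[1] - 1))) & 1
        state = ((state << 1) | bit) & ((1 << nbits) - 1)
        out.append(1.0 if bit else -1.0)
    return out

def estimate_delay(tx, rx):
    """Pick the lag that maximizes the cross-correlation of tx with rx."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(rx) - len(tx) + 1):
        c = sum(t * rx[lag + i] for i, t in enumerate(tx))
        if c > best_val:
            best_lag, best_val = lag, c
    return best_lag

random.seed(1)
tx = prbs(127)
true_delay = 40                                   # transit time, in samples
rx = [0.0] * true_delay + tx + [0.0] * 30
rx = [s + random.gauss(0.0, 0.8) for s in rx]     # heavy background noise
delay = estimate_delay(tx, rx)
```

Even with noise amplitude comparable to the signal, the sharp PRBS correlation peak recovers the transit delay, which is the property the abstract exploits.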
NASA Astrophysics Data System (ADS)
Hoang, Phong V.; Konyakhin, Igor A.
2017-06-01
Autocollimators are widely used for angular measurements in instrument-making and the manufacture of elements of optical systems (wedges, prisms, plane-parallel plates) to check their shape parameters (rectilinearity, parallelism and planarity) and retrieve their optical parameters (curvature radii, measure and test their flange focusing). Autocollimator efficiency is due to the high sensitivity of the autocollimation method to minor rotations of the reflecting control element or the controlled surface itself. We consider using quaternions to optimize reflector parameters during autocollimation measurements as compared to the matrix technique. Mathematical model studies have demonstrated that the orthogonal positioning of the two basic unchanged directions of the tetrahedral reflector of the autocollimator is optimal by the criterion of reducing measurement errors when the axis of actual rotation is in a bisecting position towards them. Results of running quaternion computer models are presented, which yielded conditions for diminishing measurement errors provided a priori information is available on the position of the rotation axis. A practical technique is considered for synthesizing the parameters of the tetrahedral reflector that employs the newly retrieved relationships. Following the relationships found between the angles of the tetrahedral reflector and the angles of the parameters of its initial orientation, an applied technique was developed to synthesize the control element for autocollimation measurements in case a priori information is available on the axis of actual rotation during monitoring measurements of shaft or pipeline deformation.
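The quaternion machinery used in such models reduces to the rotation q v q*; a minimal sketch (generic quaternion rotation, not the paper's reflector model):

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(v, axis, angle):
    """Rotate vector v about a unit axis by angle (radians) via q v q*."""
    s = math.sin(angle / 2.0)
    q = (math.cos(angle / 2.0), axis[0]*s, axis[1]*s, axis[2]*s)
    qc = (q[0], -q[1], -q[2], -q[3])      # conjugate
    w = qmul(qmul(q, (0.0, *v)), qc)
    return w[1:]

v = rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)  # x-axis -> y-axis
```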
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. 
The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
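The histogram technique's spurious contribution can be reproduced on synthetic data (this sketch uses a made-up one-parameter model, not the DNS dataset): the optimal estimator is the bin-conditional mean, and coarse bins inflate the apparent irreducible error above the true noise variance.

```python
import math, random

def irreducible_error(phi, q, nbins):
    """Histogram-style optimal-estimator analysis: bin the input parameter,
    use the bin-conditional mean of q as the estimator, and return the mean
    squared residual as the irreducible-error estimate."""
    lo, hi = min(phi), max(phi)
    w = (hi - lo) / nbins
    bins = [[] for _ in range(nbins)]
    for p, val in zip(phi, q):
        i = min(int((p - lo) / w), nbins - 1)
        bins[i].append(val)
    sse, n = 0.0, 0
    for vals in bins:
        if vals:
            m = sum(vals) / len(vals)
            sse += sum((v - m) ** 2 for v in vals)
            n += len(vals)
    return sse / n

random.seed(0)
phi = [random.random() for _ in range(20000)]
q = [math.sin(6.283 * p) + random.gauss(0.0, 0.1) for p in phi]  # true irreducible variance = 0.01
coarse = irreducible_error(phi, q, nbins=20)    # wide bins add a spurious contribution
fine = irreducible_error(phi, q, nbins=200)     # closer to the true 0.01
```

With a single input parameter the bias shrinks as bins narrow; the abstract's point is that for multiple input parameters the histogram approach cannot be refined this way in practice.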
ERIC Educational Resources Information Center
Vanderslice, Ralph
The technique of "voiceprint identification" has been invested with a myth of infallibility, largely by means of a specious analogy with fingerprints. The refusal of its chief proponent to submit to a properly controlled test of his ability, coupled with the inability of observers in independent studies to get comparably low error rates,…
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1988-01-01
During the period December 1, 1987 through May 31, 1988, progress was made in the following areas: construction of Multi-Dimensional Bandwidth Efficient Trellis Codes with MPSK modulation; performance analysis of Bandwidth Efficient Trellis Coded Modulation schemes; and performance analysis of Bandwidth Efficient Trellis Codes on Fading Channels.
Coding for reliable satellite communications
NASA Technical Reports Server (NTRS)
Lin, S.
1984-01-01
Several error control coding techniques for reliable satellite communications were investigated to find algorithms for fast decoding of Reed-Solomon codes in terms of dual basis. The decoding of the (255,223) Reed-Solomon code, which is used as the outer code in the concatenated TDRSS decoder, was of particular concern.
Error assessment of local tie vectors in space geodesy
NASA Astrophysics Data System (ADS)
Falkenberg, Jana; Heinkelmann, Robert; Schuh, Harald
2014-05-01
For the computation of the ITRF, the data of the geometric space-geodetic techniques on co-location sites are combined. The combination increases the redundancy and offers the possibility to utilize the strengths of each technique while mitigating their weaknesses. To enable the combination of co-located techniques, each technique needs to have a well-defined geometric reference point. The linking of the geometric reference points enables the combination of the technique-specific coordinates into a multi-technique site coordinate. The vectors between these reference points are called "local ties". The realization of local ties is usually achieved by local surveys of the distances and/or angles between the reference points. Identified temporal variations of the reference points are considered in the local tie determination only indirectly by assuming a mean position. Finally, the local ties measured in the local surveying network are to be transformed into the ITRF, the global geocentric equatorial coordinate system of the space-geodetic techniques. The current IERS procedure for the combination of the space-geodetic techniques includes the local tie vectors with an error floor of three millimeters plus a distance-dependent component. This error floor, however, significantly underestimates the real uncertainty of local tie determination. To fulfill the GGOS goals of 1 mm position and 0.1 mm/yr velocity accuracy, local tie accuracy at the sub-mm level will be mandatory, which is currently not achievable. To assess the local tie effects on ITRF computations, investigations of the error sources will be done to realistically assess and account for them. Hence, a reasonable estimate of all the included errors of the various local ties is needed. An appropriate estimate could also improve the separation of local tie errors from technique-specific error contributions to uncertainties and thus help assess the accuracy of space-geodetic techniques.
Our investigations concern the simulation of the error contribution of each component of the local tie definition and determination. A closer look into the models of reference point definition, of accessibility, of measurement, and of transformation is necessary to properly model the error of the local tie. The effect of temporal variations on the local ties will be studied as well. The transformation of the local survey into the ITRF can be assumed to be the largest error contributor, in particular the orientation of the local surveying network to the ITRF.
Slew maneuvers on the SCOLE Laboratory Facility
NASA Technical Reports Server (NTRS)
Williams, Jeffrey P.
1987-01-01
The Spacecraft Control Laboratory Experiment (SCOLE) was conceived to provide a physical test bed for the investigation of control techniques for large flexible spacecraft. The control problems studied are slewing maneuvers and pointing operations. The slew is defined as a minimum time maneuver to bring the antenna line-of-sight (LOS) pointing to within an error limit of the pointing target. The second objective is to rotate about the LOS within the 0.02 degree error limit. The SCOLE problem is defined as two design challenges: control laws for a mathematical model of a large antenna attached to the Space Shuttle by a long flexible mast; and a control scheme on a laboratory representation of the structure modelled on the control laws. Control sensors and actuators are typical of those which the control designer would have to deal with on an actual spacecraft. Computational facilities consist of microcomputer based central processing units with appropriate analog interfaces for implementation of the primary control system, and the attitude estimation algorithm. Preliminary results of some slewing control experiments are given.
Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis
NASA Technical Reports Server (NTRS)
Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.
2017-01-01
This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
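The kind of bound such a tool certifies can be sketched by hand for one expression (this is a standard first-order rounding analysis, not PRECiSA itself): computing a*b + c in binary64 incurs two roundings, and exact rational arithmetic verifies the bound.

```python
import sys
from fractions import Fraction

U = sys.float_info.epsilon / 2   # unit roundoff u for binary64

def roundoff_bound(a, b, c):
    """First-order bound for the expression a*b + c evaluated in floating
    point: |fl(fl(a*b) + c) - (a*b + c)| <= |a*b|*u*(1 + u) + (|a*b| + |c|)*u."""
    ab = abs(a * b)
    return ab * U * (1 + U) + (ab + abs(c)) * U

a, b, c = 0.1, 0.3, 0.7
computed = a * b + c
exact = Fraction(a) * Fraction(b) + Fraction(c)   # exact rational arithmetic
true_err = abs(Fraction(computed) - exact)
bound = roundoff_bound(a, b, c)
```

`Fraction(x)` converts a float to the exact rational it stores, so `true_err` is the genuine round-off error of the evaluation, which the symbolic bound must dominate.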
Design of an MSAT-X mobile transceiver and related base and gateway stations
NASA Technical Reports Server (NTRS)
Fang, Russell J. F.; Bhaskar, Udaya; Hemmati, Farhad; Mackenthun, Kenneth M.; Shenoy, Ajit
1987-01-01
This paper summarizes the results of a design study of the mobile transceiver, base station, and gateway station for NASA's proposed Mobile Satellite Experiment (MSAT-X). Major ground segment system design issues such as frequency stability control, modulation method, linear predictive coding vocoder algorithm, and error control technique are addressed. The modular and flexible transceiver design is described in detail, including the core, RF/IF, modem, vocoder, forward error correction codec, amplitude-companded single sideband, and input/output modules, as well as the flexible interface. Designs for a three-carrier base station and a 10-carrier gateway station are also discussed, including the interface with the controllers and with the public-switched telephone networks at the gateway station. Functional specifications are given for the transceiver, the base station, and the gateway station.
Self-referenced locking of optical coherence by single-detector electronic-frequency tagging
NASA Astrophysics Data System (ADS)
Shay, T. M.; Benham, Vincent; Spring, Justin; Ward, Benjamin; Ghebremichael, F.; Culpepper, Mark A.; Sanchez, Anthony D.; Baker, J. T.; Pilkington, D.; Berdine, Richard
2006-02-01
We report a novel coherent beam combining technique. This is the first actively phase locked optical fiber array that eliminates the need for a separate reference beam. In addition, only a single photodetector is required. The far-field central spot of the array is imaged onto the photodetector to produce the phase control loop signals. Each leg of the fiber array is phase modulated with a separate RF frequency, thus tagging the optical phase shift for each leg by a separate RF frequency. The optical phase errors for the individual array legs are separated in the electronic domain. In contrast with previous active phase locking techniques, in our system the reference beam is spatially overlapped with all the RF modulated fiber leg beams onto a single detector. The phase shift between the optical wave in the reference leg and in each RF modulated leg is measured separately in the electronic domain, and the phase error signal is fed back to the LiNbO3 phase modulator for that leg to minimize the phase error for that leg relative to the reference leg. The advantages of this technique are 1) the elimination of the reference beam and beam combination optics and 2) the electronic separation of the phase error signals without any degradation of the phase locking accuracy. We will present the first theoretical model for self-referenced LOCSET and describe experimental results for a 3 x 3 array.
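The frequency-tagging idea can be sketched numerically (a highly idealized model with unit-amplitude legs; the tag frequencies, dither depth, and phases are hypothetical): the single detector sees the combined intensity, and synchronous demodulation at each tag frequency yields a signal whose sign tracks that leg's phase error.

```python
import math

TWO_PI = 2 * math.pi

def phase_error_signals(phases, tag_hz, beta=0.3, T=1.0, n=20000):
    """Each leg is phase-dithered at its own tag frequency; demodulating
    the single-detector intensity at tag k recovers a signal proportional
    (to first order) to -sin(phi_k) relative to the unmodulated reference."""
    dt = T / n
    sigs = [0.0] * len(phases)
    for i in range(n):
        t = i * dt
        re, im = 1.0, 0.0                         # reference leg at phase 0
        for ph, f in zip(phases, tag_hz):
            arg = ph + beta * math.sin(TWO_PI * f * t)
            re += math.cos(arg)
            im += math.sin(arg)
        intensity = re * re + im * im             # single photodetector
        for k, f in enumerate(tag_hz):
            sigs[k] += intensity * math.sin(TWO_PI * f * t) * dt
    return sigs

s1, s2 = phase_error_signals(phases=(0.2, -0.2), tag_hz=(50.0, 70.0))
```

The leg with a positive phase error produces a negative demodulated signal and vice versa, which is the sign a feedback loop would use to drive each phase modulator.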
Head-mounted active noise control system with virtual sensing technique
NASA Astrophysics Data System (ADS)
Miyazaki, Nobuhiro; Kajikawa, Yoshinobu
2015-03-01
In this paper, we apply a virtual sensing technique to a head-mounted active noise control (ANC) system we have already proposed. The proposed ANC system can reduce narrowband noise while improving the noise reduction ability at the desired locations. A head-mounted ANC system based on an adaptive feedback structure can reduce noise with periodicity or narrowband components. However, since quiet zones are formed only at the locations of error microphones, an adequate noise reduction cannot be achieved at the locations where error microphones cannot be placed such as near the eardrums. A solution to this problem is to apply a virtual sensing technique. A virtual sensing ANC system can achieve higher noise reduction at the desired locations by measuring the system models from physical sensors to virtual sensors, which will be used in the online operation of the virtual sensing ANC algorithm. Hence, we attempt to achieve the maximum noise reduction near the eardrums by applying the virtual sensing technique to the head-mounted ANC system. However, it is impossible to place the microphone near the eardrums. Therefore, the system models from physical sensors to virtual sensors are estimated using the Head And Torso Simulator (HATS) instead of human ears. Some simulation, experimental, and subjective assessment results demonstrate that the head-mounted ANC system with virtual sensing is superior to that without virtual sensing in terms of the noise reduction ability at the desired locations.
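The adaptive-filtering core of such ANC systems can be sketched with a plain LMS canceller (a generic sketch, not the paper's virtual-sensing algorithm; the noise path coefficients are hypothetical):

```python
import math, random

def lms_cancel(mu=0.01, taps=4, n=5000):
    """LMS sketch: adapt a short FIR filter so its anti-noise output
    cancels a tonal noise component observed at the error sensor."""
    random.seed(2)
    secondary = [0.5, 0.3, 0.1, 0.05]   # hypothetical noise path to the sensor
    w = [0.0] * taps
    buf = [0.0] * taps
    errs = []
    for i in range(n):
        x = math.sin(0.3 * i) + 0.1 * random.gauss(0.0, 1.0)  # reference noise
        buf = [x] + buf[:-1]
        d = sum(h * s for h, s in zip(secondary, buf))   # noise at error sensor
        y = sum(wi * s for wi, s in zip(w, buf))         # adaptive anti-noise
        e = d - y                                        # residual at sensor
        w = [wi + mu * e * s for wi, s in zip(w, buf)]   # LMS weight update
        errs.append(e * e)
    early = sum(errs[:500]) / 500
    late = sum(errs[-500:]) / 500
    return early, late

early, late = lms_cancel()
```

Virtual sensing extends this idea by adapting against an estimate of the error at a location where no physical microphone can be placed.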
Automated vehicle guidance using discrete reference markers. [road surface steering techniques
NASA Technical Reports Server (NTRS)
Johnston, A. R.; Assefi, T.; Lai, J. Y.
1979-01-01
Techniques for providing steering control for an automated vehicle using discrete reference markers fixed to the road surface are investigated analytically. Either optical or magnetic approaches can be used for the sensor, which generates a measurement of the lateral offset of the vehicle path at each marker to form the basic data for steering control. Possible mechanizations of sensor and controller are outlined. Techniques for handling certain anomalous conditions, such as a missing marker or loss of acquisition, and special maneuvers, such as U-turns and switching, are briefly discussed. A general analysis of the vehicle dynamics and the discrete control system is presented using the state variable formulation. Noise in both the sensor measurement and in the steering servo is accounted for. An optimal controller is simulated on a general purpose computer, and the resulting plots of vehicle path are presented. Parameters representing a small multipassenger tram were selected, and the simulation runs show response to an erroneous sensor measurement and acquisition following large initial path errors.
Comparison of frequency-domain and time-domain rotorcraft vibration control methods
NASA Technical Reports Server (NTRS)
Gupta, N. K.
1984-01-01
Active control of rotor-induced vibration in rotorcraft has received significant attention recently. Two classes of techniques have been proposed. The more developed approach works with harmonic analysis of measured time histories and is called the frequency-domain approach. The more recent approach computes the control input directly using the measured time history data and is called the time-domain approach. The report summarizes the results of a theoretical investigation to compare the two approaches. Five specific areas were addressed: (1) techniques to derive models needed for control design (system identification methods), (2) robustness with respect to errors, (3) transient response, (4) susceptibility to noise, and (5) implementation difficulties. The system identification methods are more difficult for the time-domain models. The time-domain approach is more robust (e.g., has higher gain and phase margins) than the frequency-domain approach. It might thus be possible to avoid doing real-time system identification in the time-domain approach by storing models at a number of flight conditions. The most significant error source is the variation in open-loop vibrations caused by pilot inputs, maneuvers or gusts. The implementation requirements are similar except that the time-domain approach can be much simpler to implement if real-time system identification were not necessary.
NASA Astrophysics Data System (ADS)
Tan, Ying; Dai, Daoxin
2018-05-01
Silicon microring resonators (MRRs) are very popular for many applications because of the advantages of footprint compactness, easy scalability, and functional versatility. Ultra-compact silicon MRRs with box-like spectral responses are realized with a very large free-spectral range (FSR) by introducing bent directional couplers. The measured box-like spectral response has an FSR of >30 nm. Permanent wavelength-alignment techniques for MRRs are also presented, including the laser-induced local-oxidation technique as well as the local-etching technique. With these techniques, one can finely control the permanent wavelength shift, which is large enough to compensate for the random wavelength variation due to random fabrication errors.
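The FSR scaling behind such ultra-compact rings follows the standard relation FSR = λ²/(n_g·L); the group index and circumference below are illustrative, not the paper's device values:

```python
def fsr_nm(wavelength_nm, group_index, circumference_um):
    """Free-spectral range of a ring resonator: FSR = lambda^2 / (n_g * L)."""
    lam = wavelength_nm * 1e-9
    L = circumference_um * 1e-6
    return lam * lam / (group_index * L) * 1e9   # result in nm

# Hypothetical silicon ring: ~18 um circumference, group index ~4.3 at 1550 nm
fsr = fsr_nm(1550.0, 4.3, 18.0)
```

The calculation shows why an FSR above 30 nm requires a circumference of only tens of micrometers, i.e., an ultra-compact ring.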
Deflectometry using portable devices
NASA Astrophysics Data System (ADS)
Butel, Guillaume P.; Smith, Greg A.; Burge, James H.
2015-02-01
Deflectometry is a powerful metrology technique that uses off-the-shelf equipment to achieve nanometer-level accuracy in surface measurements. However, there has been no portable device to quickly measure eyeglasses, lenses, or mirrors. We present a new, entirely portable deflectometry technique that runs on any Android™ smartphone with a front-facing camera. Our technique overcomes issues specific to portable devices, such as screen nonlinearity and automatic gain control. We demonstrate our application by measuring an amateur telescope mirror and simulating a measurement of the flawed Hubble Space Telescope primary mirror. Our technique can, in less than 1 min, measure surface errors with an accuracy of up to 50 nm RMS, simply using a smartphone.
Warrick, Jonathan; Ritchie, Andy; Adelman, Gabrielle; Adelman, Ken; Limber, Patrick W.
2017-01-01
Oblique aerial photograph surveys are commonly used to document coastal landscapes. Here it is shown that adequate overlap may exist in these photographic records to develop topographic models with Structure-from-Motion (SfM) photogrammetric techniques. Using photographs of Fort Funston, California, from the California Coastal Records Project, imagery was combined with ground control points in a four-dimensional analysis that produced topographic point clouds of the study area’s cliffs for 5 years spanning 2002 to 2010. Uncertainty was assessed by comparing point clouds with airborne LIDAR data, and these uncertainties were related to the number and spatial distribution of ground control points used in the SfM analyses. With six or more ground control points, the root mean squared errors between the SfM and LIDAR data were less than 0.30 m (minimum = 0.18 m), and the mean systematic error was less than 0.10 m. The SfM results had several benefits over traditional airborne LIDAR in that they included point coverage on vertical-to-overhanging sections of the cliff and resulted in 10–100 times greater point densities. Time series of the SfM results revealed topographic changes, including landslides, rock falls, and the erosion of landslide talus along the Fort Funston beach. Thus, it was concluded that SfM photogrammetric techniques with historical oblique photographs allow for the extraction of useful quantitative information for mapping coastal topography and measuring coastal change. The new techniques presented here are likely applicable to many photograph collections and problems in the earth sciences.
Feng, Jianyuan; Turksoy, Kamuran; Samadi, Sediqeh; Hajizadeh, Iman; Littlejohn, Elizabeth; Cinar, Ali
2017-12-01
Supervision and control systems rely on signals from sensors to monitor the operation of a process and adjust manipulated variables to achieve the control objective. However, sensor performance is often limited by working conditions, and sensors may also be subject to interference from other devices. Many different types of sensor errors, such as outliers, missing values, drifts, and corruption with noise, may occur during process operation. A hybrid online sensor error detection and functional redundancy system is developed to detect errors in online signals and replace erroneous or missing values with model-based estimates. The proposed hybrid system relies on two techniques, an outlier-robust Kalman filter (ORKF) and a locally-weighted partial least squares (LW-PLS) regression model, which leverage the advantages of automatic measurement error elimination with ORKF and data-driven prediction with LW-PLS. The system includes a nominal angle analysis (NAA) method to distinguish between signal faults and large changes in sensor values caused by real dynamic changes in process operation. The performance of the system is illustrated with clinical data from continuous glucose monitoring (CGM) sensors worn by people with type 1 diabetes. More than 50,000 CGM sensor errors were added to original CGM signals from 25 clinical experiments, and the performance of the error detection and functional redundancy algorithms was analyzed. The results indicate that the proposed system can successfully detect most erroneous signals and substitute them with reasonable estimates computed by the functional redundancy system.
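The detect-and-replace idea can be illustrated with a much simpler filter than the paper's ORKF/LW-PLS system. The sketch below (all signal shapes, gains, and thresholds are assumptions for illustration) gates a scalar Kalman filter on the normalized innovation and substitutes the model prediction when a sample looks like an outlier:

```python
import numpy as np

# Hedged sketch of the idea (not the paper's ORKF): a scalar random-walk Kalman
# filter that gates measurements on the normalized innovation and substitutes
# the model prediction when a sample looks like an outlier, a simple form of
# the detect-and-replace functional redundancy described above.
rng = np.random.default_rng(3)
n = 300
truth = 100 + 20 * np.sin(np.arange(n) / 30.0)   # synthetic "glucose" trend
meas = truth + rng.normal(0, 1.0, n)
meas[[50, 120, 200]] += 40.0                     # injected sensor outliers

x, P = meas[0], 4.0                              # initial state and variance
Q, R, gate = 0.5, 1.0, 3.0                       # tuning values (assumed)
clean = np.empty(n)
for k in range(n):
    P = P + Q                                    # random-walk prediction step
    innov = meas[k] - x
    S = P + R                                    # innovation variance
    if abs(innov) / np.sqrt(S) > gate:           # flagged as outlier:
        clean[k] = x                             # hold the model estimate
    else:
        K = P / S                                # Kalman gain
        x = x + K * innov
        P = (1 - K) * P
        clean[k] = x
print(np.max(np.abs(clean - truth)))  # the 40-unit spikes no longer dominate
```

The gate threshold plays the role of the error-detection logic; the held estimate plays the role of the functional-redundancy substitute.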
Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Smith, Mark S.
2008-01-01
Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
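The core of the frequency-domain equation-error method can be sketched for a scalar linear model (the model, parameter values, and excitation below are illustrative assumptions, not the F-15B implementation): since x_dot = a·x + b·u also holds between Fourier transforms, the derivatives a and b follow from complex least squares restricted to the low-frequency band, which filters out wide-band measurement noise:

```python
import numpy as np

# Hedged sketch of frequency-domain equation-error estimation on an assumed
# scalar model x_dot = a*x + b*u: the state equation also relates the Fourier
# transforms, Xdot(f) = a*X(f) + b*U(f), so [a, b] follow from complex least
# squares over a selected frequency band.
rng = np.random.default_rng(0)
dt, T = 0.02, 40.0
t = np.arange(0.0, T, dt)
u = np.sin(0.5 * t) + 0.5 * np.sin(1.3 * t)   # multi-sine excitation input
a_true, b_true = -2.0, 1.5                    # "stability derivatives" (assumed)
x = np.zeros_like(t)
for k in range(len(t) - 1):                   # simple Euler simulation
    x[k + 1] = x[k] + dt * (a_true * x[k] + b_true * u[k])

xm = x + rng.normal(0.0, 0.01, x.size)        # noisy state measurement
xdot = np.gradient(xm, dt)                    # numerically differentiated state

# Transform to the frequency domain and keep only the low-frequency band,
# which rejects the wide-band noise amplified by differentiation.
f = np.fft.rfftfreq(len(t), dt)
band = (f > 0.02) & (f < 0.5)
Xd, X, U = (np.fft.rfft(s)[band] for s in (xdot, xm, u))

A = np.column_stack([X, U])                   # equation-error regressors
theta, *_ = np.linalg.lstsq(A, Xd, rcond=None)
a_hat, b_hat = theta.real
print(a_hat, b_hat)                           # close to -2.0 and 1.5
```

Solving the same regression on a growing data window is what makes the method usable in real time.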
Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Smith, Mark S.
2010-01-01
Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors, prediction cases, and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
NASA Technical Reports Server (NTRS)
Rao, T. R. N.; Seetharaman, G.; Feng, G. L.
1996-01-01
With the development of new advanced instruments for remote sensing applications, sensor data will be generated at a rate that not only requires increased onboard processing and storage capability but also imposes demands on the space-to-ground communication link and the ground data management-communication system. Data compression and error control codes provide viable means to alleviate these demands. Two types of data compression have been studied by many researchers in the area of information theory: lossless techniques that guarantee full reconstruction of the data, and lossy techniques that generally give a higher data compaction ratio but incur some distortion in the reconstructed data. To satisfy the many science disciplines that NASA supports, lossless data compression has become a primary focus of the technology development. When transmitting data obtained by any lossless compression scheme, it is very important to use an error-control code. For a long time, convolutional codes have been widely used in satellite telecommunications. To transmit the data obtained by the Rice algorithm more efficiently, the a posteriori probability (APP) of each decoded bit is needed. A relevant algorithm has been proposed for this purpose that minimizes the bit error probability in decoding linear block and convolutional codes and provides the APP for each decoded bit. However, recent results on iterative decoding of 'Turbo codes' turn conventional wisdom on its head and suggest fundamentally new techniques.
During the past several months of this research, the following approaches have been developed: (1) a new lossless data compression algorithm, which is much better than the extended Rice algorithm for various types of sensor data, (2) a new approach to determine the generalized Hamming weights of the algebraic-geometric codes defined by a large class of curves in high-dimensional spaces, (3) some efficient improved geometric Goppa codes for disk memory systems and high-speed mass memory systems, and (4) a tree-based approach for data compression using dynamic programming.
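The Rice algorithm mentioned above belongs to the Golomb-Rice family of codes. A textbook baseline version is sketched below (the extended and adaptive variants studied in this work add preprocessing and parameter selection not shown here; this is only the basic unary-quotient-plus-remainder code):

```python
# Hedged sketch of basic Rice (Golomb power-of-two) coding, the family behind
# the Rice algorithm mentioned above. The parameter k and the bit-string
# representation are textbook simplifications, not NASA's extended variant.
def rice_encode(value, k):
    """Encode a non-negative integer as unary quotient + k-bit remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits, k):
    """Decode one codeword; return (value, remaining bits)."""
    q = bits.index("0")                       # unary part ends at first '0'
    r = int(bits[q + 1 : q + 1 + k], 2)       # k-bit binary remainder
    return (q << k) | r, bits[q + 1 + k :]

samples = [3, 0, 7, 12, 2, 5]
stream = "".join(rice_encode(s, 2) for s in samples)
decoded, rest = [], stream
while rest:
    v, rest = rice_decode(rest, 2)
    decoded.append(v)
print(decoded == samples)  # True: the stream round-trips losslessly
```

Small residuals get short codewords, which is why the code works well after a decorrelating predictor.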
Landing Technique and Performance in Youth Athletes After a Single Injury-Prevention Program Session
Root, Hayley; Trojian, Thomas; Martinez, Jessica; Kraemer, William; DiStefano, Lindsay J.
2015-01-01
Context Injury-prevention programs (IPPs) performed as season-long warm-ups improve injury rates, performance outcomes, and jump-landing technique. However, concerns regarding program adoption exist. Identifying the acute benefits of using an IPP compared with other warm-ups may encourage IPP adoption. Objective To examine the immediate effects of 3 warm-up protocols (IPP, static warm-up [SWU], or dynamic warm-up [DWU]) on jump-landing technique and performance measures in youth athletes. Design Randomized controlled clinical trial. Setting Gymnasiums. Patients or Other Participants Sixty male and 29 female athletes (age = 13 ± 2 years, height = 162.8 ± 12.6 cm, mass = 37.1 ± 13.5 kg) volunteered to participate in a single session. Intervention(s) Participants were stratified by age, sex, and sport and then were randomized into 1 protocol: IPP, SWU, or DWU. The IPP consisted of dynamic flexibility, strengthening, plyometric, and balance exercises and emphasized proper technique. The SWU consisted of jogging and lower extremity static stretching. The DWU consisted of dynamic lower extremity flexibility exercises. Participants were assessed for landing technique and performance measures immediately before (PRE) and after (POST) completing their warm-ups. Main Outcome Measure(s) One rater graded each jump-landing trial using the Landing Error Scoring System. Participants performed a vertical jump, long jump, shuttle run, and jump-landing task in randomized order. The averages of all jump-landing trials and performance variables were used to calculate 1 composite score for each variable at PRE and POST. Change scores were calculated (POST − PRE) for all measures. Separate 1-way (group) analyses of variance were conducted for each dependent variable (α < .05). Results No differences were observed among groups for any performance measures (P > .05). 
The Landing Error Scoring System scores improved after the IPP (change = −0.40 ± 1.24 errors) compared with the DWU (0.27 ± 1.09 errors) and SWU (0.43 ± 1.35 errors; P = .04). Conclusions An IPP did not impair sport performance and may have reduced injury risk, which supports the use of these programs before sport activity. PMID:26523663
Intelligent Tracking Control for a Class of Uncertain High-Order Nonlinear Systems.
Zhao, Xudong; Shi, Peng; Zheng, Xiaolong; Zhang, Jianhua
2016-09-01
This brief is concerned with the problem of intelligent tracking control for a class of high-order nonlinear systems with completely unknown nonlinearities. An intelligent adaptive control algorithm is presented by combining the adaptive backstepping technique with the neural networks' approximation ability. It is shown that the practical output tracking performance of the system is achieved using the proposed state-feedback controller under two mild assumptions. In particular, by introducing a parameter in the derivations, the tracking error between the time-varying target signal and the output can be reduced via tuning the controller design parameters. Moreover, in order to solve the problem of overparameterization, which is a common issue in adaptive control design, a controller with one adaptive law is also designed. Finally, simulation results are given to show the effectiveness of the theoretical approaches and the potential of the proposed new design techniques.
Wang, Chenliang; Wen, Changyun; Hu, Qinglei; Wang, Wei; Zhang, Xiuyu
2018-06-01
This paper is devoted to distributed adaptive containment control for a class of nonlinear multiagent systems with input quantization. By employing a matrix factorization and a novel matrix normalization technique, some assumptions involving control gain matrices in existing results are relaxed. By fusing the techniques of sliding mode control and backstepping control, a two-step design method is proposed to construct controllers and, with the aid of neural networks, all system nonlinearities are allowed to be unknown. Moreover, a linear time-varying model and a similarity transformation are introduced to circumvent the obstacle brought by quantization, and the controllers need no information about the quantizer parameters. The proposed scheme is able to ensure the boundedness of all closed-loop signals and steer the containment errors into an arbitrarily small residual set. The simulation results illustrate the effectiveness of the scheme.
Make no mistake—errors can be controlled*
Hinckley, C
2003-01-01
Traditional quality control methods identify "variation" as the enemy. However, the control of variation by itself can never achieve the remarkably low non-conformance rates of world-class quality leaders. Because the control of variation does not achieve the highest levels of quality, an inordinate focus on these techniques obscures key quality improvement opportunities and results in unnecessary pain and suffering for patients, and in embarrassment, litigation, and loss of revenue for healthcare providers. Recent experience has shown that mistakes are the most common cause of problems in health care, as in other industrial environments. Excessive product and process complexity contributes to both excessive variation and unnecessary mistakes. The best methods for controlling variation, mistakes, and complexity are each a form of mistake proofing. Using these mistake-proofing techniques, virtually every mistake and non-conformance can be controlled at a fraction of the cost of traditional quality control methods. PMID:14532368
Fuzzy Adaptive Control Design and Discretization for a Class of Nonlinear Uncertain Systems.
Zhao, Xudong; Shi, Peng; Zheng, Xiaolong
2016-06-01
In this paper, tracking control problems are investigated for a class of uncertain nonlinear systems in lower triangular form. First, a state-feedback controller is designed by using the adaptive backstepping technique and the universal approximation ability of fuzzy logic systems. During the design procedure, a method with less computation is developed by constructing one maximum adaptive parameter. Furthermore, adaptive controllers for systems with nonsymmetric dead-zone are also designed. Then, a sampled-data control scheme is presented to discretize the obtained continuous-time controller by using the forward Euler method. It is shown that both the proposed continuous and discrete controllers ensure that the system output tracks the target signal with a small bounded error and that the other closed-loop signals remain bounded. Two simulation examples are presented to verify the effectiveness and applicability of the proposed new design techniques.
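The forward Euler discretization step can be sketched on a toy adaptive design (the scalar plant, gains, and reference below are illustrative assumptions, not the paper's fuzzy controller): each continuous-time differential equation, for both the plant state and the adaptive parameter, is replaced by x[k+1] = x[k] + T·f(x[k]):

```python
# Minimal sketch of forward Euler discretization of a continuous-time adaptive
# controller (plant, gains, and reference are illustrative assumptions, not
# the paper's fuzzy design): plant x_dot = a*x + u with unknown a, control
# u = -theta_hat*x - k*e, adaptive law theta_hat_dot = gamma*e*x, e = x - r.
a_true = 1.0            # unknown plant parameter
k_fb, gamma = 2.0, 5.0  # feedback and adaptation gains
r = 1.0                 # constant reference signal
T = 0.005               # sampling period of the discretized controller
x, theta_hat = 0.0, 0.0
for _ in range(int(20.0 / T)):          # 20 s of simulated time
    e = x - r
    u = -theta_hat * x - k_fb * e       # certainty-equivalence control law
    x_next = x + T * (a_true * x + u)   # forward Euler step of the plant
    theta_hat += T * gamma * e * x      # forward Euler step of the adaptive law
    x = x_next
print(abs(x - r))   # small bounded tracking error, as in the continuous design
```

With a sufficiently small T the discrete closed loop inherits the continuous design's stability, which is the point of the sampled-data analysis.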
How scientific experiments are designed: Problem solving in a knowledge-rich, error-rich environment
NASA Astrophysics Data System (ADS)
Baker, Lisa M.
While theory formation and the relation between theory and data has been investigated in many studies of scientific reasoning, researchers have focused less attention on reasoning about experimental design, even though the experimental design process makes up a large part of real-world scientists' reasoning. The goal of this thesis was to provide a cognitive account of the scientific experimental design process by analyzing experimental design as problem-solving behavior (Newell & Simon, 1972). Three specific issues were addressed: the effect of potential error on experimental design strategies, the role of prior knowledge in experimental design, and the effect of characteristics of the space of alternate hypotheses on alternate hypothesis testing. A two-pronged in vivo/in vitro research methodology was employed, in which transcripts of real-world scientific laboratory meetings were analyzed as well as undergraduate science and non-science majors' design of biology experiments in the psychology laboratory. It was found that scientists use a specific strategy to deal with the possibility of error in experimental findings: they include "known" control conditions in their experimental designs both to determine whether error is occurring and to identify sources of error. The known controls strategy had not been reported in earlier studies with science-like tasks, in which participants' responses to error had consisted of replicating experiments and discounting results. With respect to prior knowledge: scientists and undergraduate students drew on several types of knowledge when designing experiments, including theoretical knowledge, domain-specific knowledge of experimental techniques, and domain-general knowledge of experimental design strategies. Finally, undergraduate science students generated and tested alternates to their favored hypotheses when the space of alternate hypotheses was constrained and searchable. 
This result may help explain findings of confirmation bias in earlier studies using science-like tasks, in which characteristics of the alternate hypothesis space may have made it unfeasible for participants to generate and test alternate hypotheses. In general, scientists and science undergraduates were found to engage in a systematic experimental design process that responded to salient features of the problem environment, including the constant potential for experimental error, availability of alternate hypotheses, and access to both theoretical knowledge and knowledge of experimental techniques.
Error Analysis System for Spacecraft Navigation Using the Global Positioning System (GPS)
NASA Technical Reports Server (NTRS)
Truong, S. H.; Hart, R. C.; Hartman, K. R.; Tomcsik, T. L.; Searl, J. E.; Bernstein, A.
1997-01-01
The Flight Dynamics Division (FDD) at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) is currently developing improved space-navigation filtering algorithms to use the Global Positioning System (GPS) for autonomous real-time onboard orbit determination. In connection with a GPS technology demonstration on the Small Satellite Technology Initiative (SSTI)/Lewis spacecraft, FDD analysts and programmers have teamed with the GSFC Guidance, Navigation, and Control Branch to develop the GPS Enhanced Orbit Determination Experiment (GEODE) system. The GEODE system consists of a Kalman filter operating as a navigation tool for estimating the position, velocity, and additional states required to accurately navigate the orbiting Lewis spacecraft by using astrodynamic modeling and GPS measurements from the receiver. A parallel effort at the FDD is the development of a GPS Error Analysis System (GEAS) that will be used to analyze and improve navigation filtering algorithms during development phases and during in-flight calibration. For GEAS, the Kalman filter theory is extended to estimate the errors in position, velocity, and other error states of interest. The estimation of errors in physical variables at regular intervals will allow the time, cause, and effect of navigation system weaknesses to be identified. In addition, by modeling a sufficient set of navigation system errors, a system failure that causes an observed error anomaly can be traced and accounted for. The GEAS software is formulated using Object Oriented Design (OOD) techniques implemented in the C++ programming language on a Sun SPARC workstation. Phase 1 of this effort is the development of a basic system to be used to evaluate navigation algorithms implemented in the GEODE system. This paper presents the GEAS mathematical methodology, systems and operations concepts, and software design and implementation.
Results from the use of the basic system to evaluate navigation algorithms implemented on GEODE are also discussed. In addition, recommendations for generalization of GEAS functions and for new techniques to optimize the accuracy and control of the GPS autonomous onboard navigation are presented.
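The kind of Kalman filtering that underlies GEODE and GEAS can be illustrated with a minimal 1-D constant-velocity example (all matrices and noise levels below are assumptions for the sketch, not the flight software):

```python
import numpy as np

# Illustrative sketch (not the GEODE/GEAS implementation): a 1-D
# constant-velocity Kalman filter estimating position and velocity from
# noisy position fixes, the basic predict/update cycle such navigation
# filters are built on.
rng = np.random.default_rng(1)
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # we measure position only
Q = 1e-4 * np.eye(2)                     # process noise covariance (assumed)
R = np.array([[4.0]])                    # measurement noise variance (2 m sigma)

true_x = np.array([0.0, 1.0])            # true position 0 m, velocity 1 m/s
x = np.zeros(2)                          # filter state estimate
P = 10.0 * np.eye(2)                     # initial estimate covariance
for _ in range(200):
    true_x = F @ true_x                  # propagate the true trajectory
    z = H @ true_x + rng.normal(0.0, 2.0, size=1)
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
print(abs(x[1] - 1.0))                   # velocity error shrinks well below
                                         # the 2 m per-fix measurement noise
```

An error-analysis system in the GEAS spirit would augment the state with additional error terms and track their covariances over time.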
Some aspects of robotics calibration, design and control
NASA Technical Reports Server (NTRS)
Tawfik, Hazem
1990-01-01
The main objective is to introduce techniques in the areas of testing and calibration, design, and control of robotic systems. A statistical technique is described that analyzes a robot's performance and provides a quantitative three-dimensional evaluation of its repeatability, accuracy, and linearity. Based on this analysis, a corrective action should be taken to compensate for any existing errors and enhance the robot's overall accuracy and performance. A comparison between commercially available robotics simulation software packages (SILMA, IGRIP) and that of the Kennedy Space Center (ROBSIM) is also included. These computer codes simulate the kinematics and dynamics patterns of various robot arm geometries to help the design engineer in sizing and building the robot manipulator and control system. A brief discussion of an adaptive control algorithm is provided.
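The repeatability/accuracy analysis can be sketched with simple statistics on repeated visits to a commanded point (the target, bias, and scatter below are hypothetical, and the definitions follow the usual ISO 9283-style conventions rather than the paper's exact formulation):

```python
import numpy as np

# Simple sketch (illustrative statistics, not the paper's full 3-D analysis):
# estimating a robot's accuracy (systematic error) and repeatability (scatter)
# from repeated visits to a commanded point. All numbers are hypothetical.
rng = np.random.default_rng(5)
commanded = np.array([500.0, 200.0, 300.0])   # commanded pose, mm
# simulated visits: a fixed bias plus random scatter around the mean pose
visits = commanded + np.array([1.5, -0.8, 0.4]) + rng.normal(0, 0.2, (30, 3))

mean_pose = visits.mean(axis=0)
accuracy = np.linalg.norm(mean_pose - commanded)   # systematic offset, mm
dists = np.linalg.norm(visits - mean_pose, axis=1)
repeatability = dists.mean() + 3 * dists.std()     # spread about the mean, mm
print(accuracy, repeatability)
```

The corrective action mentioned above targets the systematic part (accuracy), since the random part (repeatability) sets the floor that calibration cannot remove.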
Fabrication of five-level ultraplanar micromirror arrays by flip-chip assembly
NASA Astrophysics Data System (ADS)
Michalicek, M. Adrian; Bright, Victor M.
2001-10-01
This paper reports a detailed study of the fabrication of various piston, torsion, and cantilever style micromirror arrays using a novel, simple, and inexpensive flip-chip assembly technique. Several rectangular and polar arrays were commercially prefabricated in the MUMPs process and then flip-chip bonded to form advanced micromirror arrays where adverse effects typically associated with surface micromachining were removed. These arrays were bonded by directly fusing the MUMPs gold layers with no complex preprocessing. The modules were assembled using a computer-controlled, custom-built flip-chip bonding machine. Topographically opposed bond pads were designed to correct for slight misalignment errors during bonding and typically result in less than 2 micrometers of lateral alignment error. Although flip-chip micromirror performance is briefly discussed, the means used to create these arrays is the focus of the paper. A detailed study of flip-chip process yield is presented which describes the primary failure mechanisms for flip-chip bonding. Studies of alignment tolerance, bonding force, stress concentration, module planarity, bonding machine calibration techniques, prefabrication errors, and release procedures are presented in relation to specific observations in process yield. Ultimately, the standard thermo-compression flip-chip assembly process remains a viable technique to develop highly complex prototypes of advanced micromirror arrays.
Intracavity adaptive optics. 1: Astigmatism correction performance.
Spinhirne, J M; Anafi, D; Freeman, R H; Garcia, H R
1981-03-15
A detailed experimental study has been conducted on adaptive optical control methodologies inside a laser resonator. A comparison is presented of several optimization techniques using a multidither zonal coherent optical adaptive technique system within a laser resonator for the correction of astigmatism. A dramatic performance difference is observed when optimizing on beam quality compared with optimizing on power-in-the-bucket. Experimental data are also presented on proper selection criteria for dither frequencies when controlling phase front errors. The effects of hardware limitations and design considerations on the performance of the system are presented, and general conclusions and physical interpretations on the results are made when possible.
Multi-interface Level Sensors and New Development in Monitoring and Control of Oil Separators
Bukhari, Syed Faisal Ahmed; Yang, Wuqiang
2006-01-01
In the oil industry, huge savings may be made if suitable multi-interface level measurement systems are employed for effectively monitoring crude oil separators and efficiently controlling their operation. A number of techniques, e.g. externally mounted displacers, differential pressure transmitters, and capacitance rod devices, have been developed to measure the separation process with gas, oil, water, and other components. Because of the unavailability of suitable multi-interface level measurement systems, oil separators are currently operated by trial and error. In this paper some conventional techniques that have been used for level measurement in industry, as well as new developments, are discussed.
Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B
2016-05-01
The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by the classical calibration method was improved to 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders and demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique and reveals further advantages. For example, the angular scales of SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG); therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error separation of the HPSAG and the autocollimator, detailed investigations of error sources were carried out. Apart from determining the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method offers the unique opportunity to characterize other error sources, such as errors due to temperature drift in long-term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for improving them, and for specifying precautions to be taken during measurements.
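The shearing principle can be demonstrated numerically in a highly simplified form (a one-step shear on synthetic error curves; nothing below reproduces the HPSAG procedure): comparing a measurement with a sheared repeat cancels the reference-device error, and the resulting first differences integrate back to the unit-under-test error curve up to a free constant:

```python
import numpy as np

# Minimal numerical sketch of the shearing principle (one-step shear on
# synthetic error curves; scales and shapes are illustrative assumptions).
# Measurement model: M = A - E, where A is the autocollimator error curve and
# E the angle-generator error curve. Shearing A by one step and differencing
# cancels E entirely, with no external angle standard involved.
rng = np.random.default_rng(4)
n = 64
A = np.cumsum(rng.normal(0, 0.1, n))   # synthetic autocollimator error curve
A -= A.mean()
E = np.cumsum(rng.normal(0, 0.1, n))   # synthetic angle-generator error curve
E -= E.mean()

M0 = A - E                             # measurement at the nominal position
M1 = np.roll(A, -1) - E                # repeat with the device sheared 1 step
D = M1 - M0                            # = A(i+1) - A(i): E has cancelled

A_rec = np.concatenate([[0.0], np.cumsum(D[:-1])])   # integrate differences
A_rec -= A_rec.mean() - A.mean()                     # fix the free constant
E_rec = A_rec - M0                                   # generator error follows
print(np.max(np.abs(A_rec - A)))       # both error curves are recovered
```

Real shearing measurements use many shear steps and least-squares combination, but the error-cancellation mechanism is the one shown here.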
Apply network coding for H.264/SVC multicasting
NASA Astrophysics Data System (ADS)
Wang, Hui; Kuo, C.-C. Jay
2008-08-01
In a packet erasure network environment, video streaming benefits from error control in two ways to achieve graceful degradation. The first approach is application-level (or link-level) forward error correction (FEC) to provide erasure protection. The second is error concealment at the decoder end to compensate for lost packets. A large amount of research has been done in these two areas. More recently, network coding (NC) techniques have been proposed for efficient data multicast over networks. It was shown in our previous work that multicast video streaming benefits from NC through throughput improvement. An algebraic model is given to analyze the performance in this work. By exploiting the linear combination of video packets along nodes in a network and the SVC video format, the system achieves path diversity automatically and enables efficient video delivery to heterogeneous receivers over packet erasure channels. The application of network coding can protect video packets against the erasure network environment. However, the rank deficiency problem of random linear network coding makes error concealment inefficient. It is shown by computer simulation that the proposed NC video multicast scheme enables heterogeneous receivers to obtain video according to their capacity constraints, but special design is needed to improve video transmission performance when applying network coding.
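The rank-deficiency problem mentioned above can be made concrete with a small simulation (GF(2) is assumed here for simplicity; practical schemes often use GF(2^8), where the problem is much milder): a receiver can decode a generation of k packets only when the k×k matrix of received coding coefficients has full rank over the field:

```python
import numpy as np

# Illustrative sketch of the rank-deficiency problem in random linear network
# coding, here over GF(2) for simplicity (practical schemes often use GF(2^8),
# where full rank is far more likely): a receiver can decode a generation of
# k source packets only when its k x k matrix of received random coding
# coefficients has full rank over the field.
rng = np.random.default_rng(2)

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]     # move pivot row into place
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]                 # eliminate the column mod 2
        rank += 1
    return rank

k, trials = 4, 2000
full = sum(gf2_rank(rng.integers(0, 2, (k, k))) == k for _ in range(trials))
print(full / trials)   # near 0.31: with exactly k random coded packets over
                       # GF(2), decoding often fails, so extra packets or a
                       # larger field are needed
```

This is why the abstract notes that error concealment interacts badly with random linear coding: a rank-deficient generation loses all of its packets at once rather than one at a time.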
Digital halftoning methods for selectively partitioning error into achromatic and chromatic channels
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.
1990-01-01
A method is described for reducing the visibility of artifacts arising in the display of quantized color images on CRT displays. The method is based on the differential spatial sensitivity of the human visual system to chromatic and achromatic modulations. Because the visual system has the highest spatial and temporal acuity for the luminance component of an image, a technique which will reduce luminance artifacts at the expense of introducing high-frequency chromatic errors is sought. A method based on controlling the correlations between the quantization errors in the individual phosphor images is explored. The luminance component is greatest when the phosphor errors are positively correlated, and is minimized when the phosphor errors are negatively correlated. The greatest effect of the correlation is obtained when the intensity quantization step sizes of the individual phosphors have equal luminances. For the ordered dither algorithm, a version of the method can be implemented by simply inverting the matrix of thresholds for one of the color components.
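The negative-correlation idea for ordered dither can be sketched directly (a 4×4 Bayer matrix on a flat mid-gray patch; this illustrates the principle, not the paper's exact algorithm): inverting the threshold matrix for one channel makes the two channels' quantization errors anti-correlated, pushing the error out of the luminance component and into the chromatic one:

```python
import numpy as np

# Sketch of the idea (illustrative, not the paper's exact algorithm): ordered
# dither of two "phosphor" channels with a 4x4 Bayer matrix, where inverting
# the threshold matrix for one channel negatively correlates the two
# channels' quantization errors.
bayer4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def ordered_dither(img, thresholds):
    """Binary ordered dither: tile the threshold matrix and compare."""
    h, w = img.shape
    t = np.tile(thresholds, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (img > t).astype(float)

img = np.full((8, 8), 0.5)                         # flat mid-gray test patch
ch_a = ordered_dither(img, bayer4)                 # channel A: normal matrix
ch_b = ordered_dither(img, 1.0 - 1.0/16 - bayer4)  # channel B: inverted matrix

err_a, err_b = ch_a - img, ch_b - img
corr = np.mean(err_a * err_b) / (err_a.std() * err_b.std())
print(corr)   # negative: the summed (luminance-like) error cancels pixelwise,
              # while the difference (chromatic) error grows
```

On this flat patch the inverted matrix makes channel B the exact complement of channel A, so the correlation is -1 and the sum of the two channels is spatially uniform.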
NASA Astrophysics Data System (ADS)
Tedd, B. L.; Strangeways, H. J.; Jones, T. B.
1985-11-01
Systematic ionospheric tilts (SITs) at midlatitudes and the diurnal variation of bearing error for different transmission paths are examined. An explanation of the diurnal variations of bearing error, based on the dependence of ionospheric tilt on solar zenith angle and on plasma transport processes, is presented. The effects of vertical ion drift and the momentum transfer of neutral winds are investigated. During the daytime, transmission heights are low and photochemical processes control SITs; at night, however, transmissions reflect at greater heights, and spatial and temporal variations of plasma transport processes influence SITs. An HF ray tracing technique that uses a three-dimensional ionospheric model based on predictions to simulate SIT-induced bearing errors is described; poor correlation with experimental data is observed, and the causes for this are studied. A second model, based on measured vertical-sounder data, is proposed. This second model is applicable for predicting bearing error over a range of transmission paths and correlates well with experimental data.
A surface code quantum computer in silicon
Hill, Charles D.; Peretz, Eldad; Hile, Samuel J.; House, Matthew G.; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y.; Hollenberg, Lloyd C. L.
2015-01-01
The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel—posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310
Continuous performance measurement in flight systems. [sequential control model
NASA Technical Reports Server (NTRS)
Connelly, E. M.; Sloan, N. A.; Zeskind, R. M.
1975-01-01
The desired response of many man-machine control systems can be formulated as the solution to an optimal control synthesis problem in which the cost index is given and the resulting optimal trajectories correspond to the desired trajectories of the man-machine system. Optimal control synthesis provides the reference criteria and the significance of error information required for performance measurement. The synthesis procedure described provides a continuous performance measure (CPM) that is independent of the mechanism generating the control action. The technique therefore provides a meaningful method for online evaluation of the operator's control capability in terms of total man-machine performance.
Adaptive control of an exoskeleton robot with uncertainties on kinematics and dynamics.
Brahmi, Brahim; Saad, Maarouf; Ochoa-Luna, Cristobal; Rahman, Mohammad H
2017-07-01
In this paper, we propose a new adaptive control technique based on nonlinear sliding mode control (JSTDE) that takes kinematic and dynamic uncertainties into account. The approach is applied to an exoskeleton robot with uncertain kinematics and dynamics. The adaptation design is based on Time Delay Estimation (TDE). The proposed strategy does not require well-defined dynamic and kinematic models of the robot. The update laws are designed using a Lyapunov function to solve the adaptation problem systematically, proving closed-loop stability and ensuring asymptotic convergence of the output tracking errors. Experimental results show the effectiveness and feasibility of the JSTDE technique in dealing with the unknown, varying nonlinear dynamics and kinematics of the exoskeleton.
Steffen, Michael; Curtis, Sean; Kirby, Robert M; Ryan, Jennifer K
2008-01-01
Streamline integration of fields produced by computational fluid mechanics simulations is a commonly used tool for the investigation and analysis of fluid flow phenomena. Integration is often accomplished by applying ordinary differential equation (ODE) integrators whose error characteristics are predicated on the smoothness of the field through which the streamline is being integrated, a smoothness that is not available at the inter-element level of finite volume and finite element data. Adaptive error control techniques are often used to ameliorate the challenge posed by inter-element discontinuities. As the root of the difficulty is the discontinuous nature of the data, we present a complementary approach: applying smoothness-enhancing accuracy-conserving filters to the data prior to streamline integration. We investigate whether such an approach applied to uniform quadrilateral discontinuous Galerkin (high-order finite volume) data can be used to augment current adaptive error control approaches, and we discuss and demonstrate through numerical example the computational trade-offs exhibited when one applies such a strategy.
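The adaptive error control mentioned above can be illustrated with a minimal step-doubling sketch: one full step is compared with two half steps, and the step size shrinks until they agree. The vector field, tolerances, and names here are illustrative assumptions, not the paper's data or method.

```python
import numpy as np

def velocity(p):
    # Hypothetical smooth 2D field (rigid-body rotation): streamlines are circles.
    x, y = p
    return np.array([-y, x])

def rk4_step(f, p, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(p)
    k2 = f(p + 0.5 * h * k1)
    k3 = f(p + 0.5 * h * k2)
    k4 = f(p + h * k3)
    return p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_streamline(f, p0, t_end, h=0.1, tol=1e-8):
    """Adaptive error control by step doubling: compare one full step with
    two half steps and halve h until their difference is below tol."""
    p, t = np.asarray(p0, float), 0.0
    path = [p]
    while t < t_end:
        h = min(h, t_end - t)
        while True:
            full = rk4_step(f, p, h)
            half = rk4_step(f, rk4_step(f, p, h / 2), h / 2)
            if np.linalg.norm(full - half) < tol or h < 1e-6:
                break
            h *= 0.5  # estimated local error too large: shrink the step
        p, t = half, t + h
        path.append(p)
    return np.array(path)
```

For the rotational field, integrating from (1, 0) over a half period should return a point near (-1, 0) with the radius conserved; discontinuous Galerkin data would instead trip the error estimator at element boundaries, which is the problem the filtering approach targets.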
On the primary variable switching technique for simulating unsaturated-saturated flows
NASA Astrophysics Data System (ADS)
Diersch, H.-J. G.; Perrochet, P.
Primary variable switching appears to be a promising numerical technique for variably saturated flows. While the standard pressure-based form of the Richards equation can suffer from poor mass-balance accuracy, the mixed form, with its improved conservative properties, can have convergence difficulties for dry initial conditions; variable switching can overcome most of these numerical problems. The paper deals with variable switching for finite elements in two and three dimensions. The technique is incorporated in both an adaptive, error-controlled predictor-corrector one-step Newton (PCOSN) iteration strategy and a target-based full Newton (TBFN) iteration scheme, which behave differently with respect to accuracy and solution effort. Additionally, a simplified upstream weighting technique is used. Compared with conventional approaches, the primary variable switching technique represents a fast and robust strategy for unsaturated problems with dry initial conditions. Its impact is studied over a wide range of mostly two-dimensional and partly difficult-to-solve problems (infiltration, drainage, perched water table, capillary barrier) for which comparable results are available. It is shown that the TBFN iteration is an effective but error-prone procedure: TBFN sacrifices temporal accuracy in favor of accelerated convergence if aggressive time step sizes are chosen.
Executable assertions and flight software
NASA Technical Reports Server (NTRS)
Mahmood, A.; Andrews, D. M.; Mccluskey, E. J.
1984-01-01
Executable assertions are used to test flight control software. The techniques used for testing flight software, however, differ from those used to test other kinds of software because of the redundant nature of flight software. An experimental setup for testing flight software using executable assertions is described. Techniques for writing and using executable assertions to test flight software are presented. The error detection capability of assertions is studied, and many examples of assertions are given. The issues of placement and complexity of assertions, and the language features needed to support their efficient use, are discussed.
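An executable assertion, as opposed to a debug-only assert, runs in the deployed code and records violations without halting the control loop. The sketch below is a minimal illustration of the idea; the decorator, the control law, and the limit values are all hypothetical, not taken from the report.

```python
def executable_assertion(check, label):
    """Wrap a control routine with a runtime assertion. A violation is
    logged rather than raised, mimicking flight software that must not stop."""
    def wrap(fn):
        def inner(*args, log=None, **kw):
            out = fn(*args, **kw)
            if not check(out):
                # Record the violation; if no log is supplied, discard it.
                (log if log is not None else []).append(label)
            return out
        return inner
    return wrap

@executable_assertion(lambda cmd: -25.0 <= cmd <= 25.0, "elevator limit")
def elevator_command(pitch_error, gain=3.0):
    # Hypothetical proportional control law; names are illustrative only.
    return gain * pitch_error
```

A reasonable command passes silently, while an out-of-range command is returned unchanged but flagged in the log, which is the kind of error-detection behavior the report studies.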
Autonomous vehicle navigation utilizing fuzzy controls concepts for a next generation wheelchair.
Hansen, J D; Barrett, S F; Wright, C H G; Wilcox, M
2008-01-01
Three different positioning techniques were investigated to create an autonomous vehicle that could accurately navigate towards a goal: Global Positioning System (GPS), compass dead reckoning, and Ackerman steering. Each technique utilized a fuzzy logic controller that maneuvered a four-wheel car towards a target. The reliability and the accuracy of the navigation methods were investigated by modeling the algorithms in software and implementing them in hardware. To implement the techniques in hardware, positioning sensors were interfaced to a remote control car and a microprocessor. The microprocessor utilized the sensor measurements to orient the car with respect to the target. Next, a fuzzy logic control algorithm adjusted the front wheel steering angle to minimize the difference between the heading and bearing. After minimizing the heading error, the car maintained a straight steering angle along its path to the final destination. The results of this research can be used to develop applications that require precise navigation. The design techniques can also be implemented on alternate platforms such as a wheelchair to assist with autonomous navigation.
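A fuzzy steering controller of the kind described can be sketched with triangular membership functions and centroid defuzzification. The rule set, angles, and gains below are illustrative assumptions, not the authors' tuned controller.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steering(heading_error_deg):
    """Map heading error (deg) to a front-wheel steering angle (deg) by
    centroid defuzzification over three illustrative rules."""
    rules = [  # (rule firing strength, steering-angle consequent)
        (tri(heading_error_deg, -90, -45, 0), -30.0),  # target to the left: steer left
        (tri(heading_error_deg, -45, 0, 45), 0.0),     # on course: steer straight
        (tri(heading_error_deg, 0, 45, 90), 30.0),     # target to the right: steer right
    ]
    num = sum(w * angle for w, angle in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Intermediate errors blend adjacent rules, so the steering command varies smoothly between straight and full lock, which is what lets the car converge on its bearing without bang-bang oscillation.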
A-posteriori error estimation for second order mechanical systems
NASA Astrophysics Data System (ADS)
Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter
2012-06-01
One important issue in the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. Where safety questions are concerned, knowledge of the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first-order systems is extended to error estimation for mechanical second-order systems. Due to the special second-order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that it is independent of the reduction technique used; it can therefore be applied with moment-matching-based, Gramian-matrix-based, or modal model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.
Periodic Methods for Controlling a Satellite in Formation
2002-03-01
Prior work used linearized techniques to study relative position errors within a satellite cluster [19, 24], with dynamics based on the Clohessy-Wiltshire equations and their near-circular reference-orbit assumptions. This work develops a dynamics model by solving the time-periodic, linearized system using Floquet theory, which is more accurate than the Clohessy-Wiltshire solutions used in previous studies.
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Takeshita, Oscar Y.; Cabral, Hermano A.; He, Jiali; White, Gregory S.
1997-01-01
Turbo coding using iterative SOVA decoding and M-ary differentially coherent or non-coherent modulation can provide an effective coding modulation solution: (1) Energy efficient with relatively simple SOVA decoding and small packet lengths, depending on BEP required; (2) Low number of decoding iterations required; and (3) Robustness in fading with channel interleaving.
Tetrazolium chloride as an indicator of pine pollen germinability
Stanton A. Cook; Robert G. Stanley
1960-01-01
Controlled pollination in forest tree breeding requires pollen of known germination capacity. Methods of determining pollen viability include germination in a hanging drop, in a moist atmosphere, on agar gel, or in a sugar solution (DUFFIELD, 1954; DILLON et al., 1957). Errors commonly arise in the application of these techniques because maximum...
Survey of Header Compression Techniques
NASA Technical Reports Server (NTRS)
Ishac, Joseph
2001-01-01
This report provides a summary of several different header compression techniques: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) the header compression techniques in RFC 2507 and RFC 2508. The methodology for compression and error correction in these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. If that return path does not exist, then neither Van Jacobson's scheme nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). Under link conditions of low delay and low error, all of the schemes perform as expected; however, based on their methodologies, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer increased loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high-delay environments by avoiding delta encoding between packets, so loss propagation is avoided; however, SCPS is still affected by an increased BER, since the lack of delta encoding results in larger header sizes. The schemes in RFC 2507 and RFC 2508 perform well for non-TCP connections in poor conditions. RFC 2507 improves on Van Jacobson's performance for TCP connections through various techniques, but still suffers under poor link conditions; it also offers the ability to send TCP data without delta encoding, similar to SCPS. ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy checks) to headers and improves the compression schemes, providing better tolerance of conditions with a high BER.
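The delta-encoding idea underlying Van Jacobson-style compression, and the loss propagation it causes, can be shown in a few lines. This is a schematic sketch with invented field names, not any RFC's actual wire format.

```python
def compress_header(header, context):
    """Delta-encode a header against the shared context (Van Jacobson style):
    transmit only the fields that changed since the last packet. If a packet
    is lost, the two contexts desynchronize, which is why losses propagate
    until the contexts are refreshed."""
    delta = {k: v for k, v in header.items() if context.get(k) != v}
    context.update(header)
    return delta

def decompress_header(delta, context):
    """Rebuild the full header by applying the delta to the receiver context."""
    context.update(delta)
    return dict(context)
```

The first packet carries every field; once the contexts agree, a typical packet only needs to carry the fields that changed (here just the sequence number), which is where the compression gain comes from.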
NASA Astrophysics Data System (ADS)
Husain, Riyasat; Ghodke, A. D.
2017-08-01
Estimation and correction of optics errors in an operational storage ring is vital to achieving the design performance. For this task, the most suitable and widely used technique, linear optics from closed orbit (LOCO), is employed at almost all storage-ring-based synchrotron radiation sources. In this technique, errors in the quadrupole strengths, beam position monitor (BPM) gains, orbit-corrector calibration factors, etc., are obtained from a fit to the orbit response matrix. To correct the optics, suitable changes in the quadrupole strengths are applied through the driving currents of the quadrupole power supplies to achieve the desired optics. The LOCO code has been used at the Indus-2 storage ring for the first time. The estimation of linear beam optics errors and their correction, using the installed quadrupole power supplies to minimize the distortion of linear beam-dynamical parameters, is discussed. After the optics correction, the performance of the storage ring improved in terms of better beam injection/accumulation, reduced beam loss during energy ramping, and longer beam lifetime. The correction is also useful in controlling the leakage of the orbit bumps required for machine studies or for commissioning new beamlines.
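At its core, the LOCO-style response-matrix fit is a linear least-squares problem: the deviation of the measured response matrix from the model is fit against known sensitivities to each quadrupole strength. The toy dimensions, random sensitivities, and noiseless "measurement" below are illustrative assumptions, not Indus-2 data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bpm, n_corr, n_quad = 6, 5, 3

# Hypothetical machine model: the orbit response matrix depends linearly on
# the quadrupole strength errors dk through known sensitivities dM/dk.
M0 = rng.normal(size=(n_bpm, n_corr))            # ideal response matrix
dM = rng.normal(size=(n_quad, n_bpm, n_corr))    # dM/dk for each quadrupole
dk_true = np.array([1e-3, -2e-3, 5e-4])          # unknown gradient errors

M_meas = M0 + np.tensordot(dk_true, dM, axes=1)  # simulated "measured" matrix

# LOCO-style fit: flatten the matrices and solve the overdetermined linear
# system J dk = vec(M_meas - M0) in the least-squares sense.
J = dM.reshape(n_quad, -1).T                     # (n_bpm*n_corr, n_quad) Jacobian
dk_fit, *_ = np.linalg.lstsq(J, (M_meas - M0).ravel(), rcond=None)
```

The fitted strengths would then be applied with opposite sign through the quadrupole power supplies; a real fit also includes BPM gains and corrector calibrations as extra columns of the Jacobian and iterates because the dependence is only approximately linear.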
Space, time, and the third dimension (model error)
Moss, Marshall E.
1979-01-01
The space-time tradeoff of hydrologic data collection (the ability to substitute spatial coverage for temporal extension of records or vice versa) is controlled jointly by the statistical properties of the phenomena that are being measured and by the model that is used to meld the information sources. The control exerted on the space-time tradeoff by the model and its accompanying errors has seldom been studied explicitly. The technique, known as Network Analyses for Regional Information (NARI), permits such a study of the regional regression model that is used to relate streamflow parameters to the physical and climatic characteristics of the drainage basin. The NARI technique shows that model improvement is a viable and sometimes necessary means of improving regional data collection systems. Model improvement provides an immediate increase in the accuracy of regional parameter estimation and also increases the information potential of future data collection. Model improvement, which can only be measured in a statistical sense, cannot be quantitatively estimated prior to its achievement; thus an attempt to upgrade a particular model entails a certain degree of risk on the part of the hydrologist.
NASA Astrophysics Data System (ADS)
Peckerar, Martin C.; Marrian, Christie R.
1995-05-01
Standard matrix inversion methods of e-beam proximity correction are compared with a variety of pseudoinverse approaches based on gradient descent. It is shown that the gradient descent methods can be modified using 'regularizers' (terms added to the cost function minimized during gradient descent). This modification solves the 'negative dose' problem in a mathematically sound way. Different techniques are contrasted using a weighted error measure approach. It is shown that the regularization approach leads to the highest quality images. In some cases, ignoring negative doses yields results which are worse than employing an uncorrected dose file.
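The regularization idea can be sketched as gradient descent on a least-squares cost with an added penalty that discourages negative doses. The penalty form, blur matrix, and step sizes below are illustrative stand-ins, not the paper's exact cost function or proximity model.

```python
import numpy as np

def regularized_dose_fit(A, target, beta=50.0, lr=0.005, iters=4000):
    """Gradient descent on ||A d - target||^2 + beta * ||min(d, 0)||^2.
    The second term is a regularizer added to the cost function that
    penalizes negative doses, in the spirit of the approach described."""
    d = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ d - target) + 2.0 * beta * np.minimum(d, 0.0)
        d -= lr * grad
    return d

# Hypothetical 1D proximity model: each pixel's exposure bleeds 30% into
# its immediate neighbors.
n = 5
A = np.eye(n) + 0.3 * (np.eye(n, k=1) + np.eye(n, k=-1))
target = np.array([0.0, 1.0, 1.0, 1.0, 0.0])

d_unconstrained = np.linalg.solve(A, target)   # exact inverse: negative doses
d_regularized = regularized_dose_fit(A, target)
```

The direct matrix inversion demands physically impossible negative doses at the pattern edges, while the regularized descent trades a small residual error for a dose profile that stays essentially nonnegative, which is the "negative dose" problem the abstract refers to.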
Performance Sensitivity Studies on the PIAA Implementation of the High-Contrast Imaging Testbed
NASA Technical Reports Server (NTRS)
Sidick, Erkin; Lou, John; Shaklan, Stuart; Levine, Marie
2010-01-01
This slide presentation reviews sensitivity studies on the Phase-Induced Amplitude Apodization (PIAA), or pupil-mapping, implementation of the High-Contrast Imaging Testbed (HCIT). PIAA is a promising technique for high-dynamic-range stellar coronagraphy. The presentation reports on an investigation of the effects of phase and rigid-body errors of various optics on the narrowband contrast performance of the PIAA/HCIT hybrid system. The results show that the two-step wavefront control method utilizing two DMs is quite effective in compensating for the effects of realistic phase and rigid-body errors of the various optics.
Robust control for a biaxial servo with time delay system based on adaptive tuning technique.
Chen, Tien-Chi; Yu, Chih-Hsien
2009-07-01
A robust control method for synchronizing a biaxial servo system motion is proposed in this paper. A new network based cross-coupled control and adaptive tuning techniques are used together to cancel out the skew error. The conventional fixed gain PID cross-coupled controller (CCC) is replaced with the adaptive cross-coupled controller (ACCC) in the proposed control scheme to maintain biaxial servo system synchronization motion. Adaptive-tuning PID (APID) position and velocity controllers provide the necessary control actions to maintain synchronization while following a variable command trajectory. A delay-time compensator (DTC) with an adaptive controller was augmented to set the time delay element, effectively moving it outside the closed loop, enhancing the stability of the robust controlled system. This scheme provides strong robustness with respect to uncertain dynamics and disturbances. The simulation and experimental results reveal that the proposed control structure adapts to a wide range of operating conditions and provides promising results under parameter variations and load changes.
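The cross-coupled control concept can be reduced to a few lines: each axis corrects its own tracking error plus a share of the synchronization (skew) error between axes. This is a simplified proportional sketch of the general idea, with invented gains, not the paper's adaptive PID/network-based controller.

```python
def biaxial_step(x, y, ref_x, ref_y, kp=0.8, kc=0.5):
    """One step of a cross-coupled P controller: the lagging axis is driven
    harder and the leading axis is relaxed, so the skew error is cancelled
    while both axes still track their references."""
    ex, ey = ref_x - x, ref_y - y
    skew = ex - ey            # synchronization (skew) error between the axes
    ux = kp * ex + kc * skew  # positive skew: x lags, push x harder
    uy = kp * ey - kc * skew  # ... and ease off y
    return ux, uy
```

With equal errors the coupling term vanishes and each axis sees a plain proportional command; when one axis falls behind, the correction is redistributed toward it, which is what keeps a biaxial contour synchronized.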
DiStefano, Lindsay J; Padua, Darin A; DiStefano, Michael J; Marshall, Stephen W
2009-03-01
Background: Anterior cruciate ligament (ACL) injury prevention programs show promising results in changing movement; however, little information exists regarding whether a program designed for an individual's movements may be effective or how baseline movements may affect outcomes. Hypotheses: A program designed to change specific movements would be more effective than a "one-size-fits-all" program; the greatest improvement would be observed among individuals with the most baseline error; and subjects of different ages and sexes would respond similarly. Study Design: Randomized controlled trial; Level of evidence, 1. Methods: One hundred seventy-three youth soccer players from 27 teams were randomly assigned to a generalized or stratified program. Subjects were videotaped during jump-landing trials before and after the program and were assessed using the Landing Error Scoring System (LESS), a valid clinical movement-analysis tool; a high LESS score indicates more errors. Generalized players performed the same exercises, while stratified players performed exercises to correct their initial movement errors. Change scores were compared between groups of varying baseline errors, ages, sexes, and programs. Results: Subjects with the highest baseline LESS score improved the most (95% CI, -3.4 to -2.0). High school subjects (95% CI, -1.7 to -0.98) improved their technique more than pre-high school subjects (95% CI, -1.0 to -0.4). There was no difference between the programs or sexes. Conclusion: Players with the greatest amount of movement errors experienced the most improvement. A program's effectiveness may be enhanced if this population is targeted.
Technological Advancements and Error Rates in Radiation Therapy Delivery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margalit, Danielle N., E-mail: dmargalit@partners.org; Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA; Chen, Yu-Hui
2011-11-15
Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. 
There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.
Pimkumwong, Narongrit; Wang, Ming-Shyan
2018-02-01
This paper presents an alternative control method for the three-phase induction motor: direct torque control based on the constant-voltage-per-frequency control technique. The method uses the magnitudes of the stator-flux and torque errors to generate the stator voltage and phase-angle references for controlling the induction motor with constant voltage per frequency. Instead of hysteresis comparators and an optimum switching table, PI controllers and the space-vector modulation technique are used to reduce torque and stator-flux ripples and achieve a constant switching frequency. Moreover, coordinate transformations are not required. To implement this control method, a full-order observer is used to estimate the stator flux and to overcome the drift and saturation problems of a pure integrator. The feedback gains are designed in a simple manner to improve the convergence of the stator-flux estimation, especially in the low-speed range, and the conditions necessary to maintain stability in the feedback-gain design are introduced. Simulation and experimental results show accurate and stable operation of the introduced estimator and good dynamic response of the proposed control method.
How is the weather? Forecasting inpatient glycemic control
Saulnier, George E; Castro, Janna C; Cook, Curtiss B; Thompson, Bithika M
2017-01-01
Aim: Apply methods of damped trend analysis to forecast inpatient glycemic control. Method: Observed and calculated point-of-care blood glucose data trends were determined over 62 weeks. Mean absolute percent error was used to calculate differences between observed and forecasted values. Comparisons were drawn between model results and linear regression forecasting. Results: The forecasted mean glucose trends observed during the first 24 and 48 weeks of projections compared favorably to the results provided by linear regression forecasting. However, in some scenarios, the damped trend method changed inferences compared with linear regression. In all scenarios, mean absolute percent error values remained below the 10% accepted by demand industries. Conclusion: Results indicate that forecasting methods historically applied within demand industries can project future inpatient glycemic control. Additional study is needed to determine if forecasting is useful in the analyses of other glucometric parameters and, if so, how to apply the techniques to quality improvement. PMID:29134125
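A damped-trend forecast of the kind applied here is Holt's linear method with a damping parameter phi that geometrically shrinks the projected trend. The smoothing constants and the toy glucose series below are illustrative assumptions, not the study's fitted parameters or data.

```python
def damped_trend_forecast(series, alpha=0.5, beta=0.3, phi=0.9, horizon=4):
    """Holt's linear exponential smoothing with a damped trend: the h-step
    forecast adds the trend scaled by phi + phi^2 + ... + phi^h, so long-range
    projections flatten out instead of extrapolating linearly."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (prev_level + phi * trend)
        trend = beta * (level - prev_level) + (1 - beta) * phi * trend
    return [level + trend * sum(phi ** j for j in range(1, h + 1))
            for h in range(1, horizon + 1)]

def mape(actual, forecast):
    """Mean absolute percent error between observed and forecasted values."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)
```

On a steadily rising toy series the one- and two-step forecasts continue the trend but at a damped rate, and the MAPE against the "observed" continuation stays well under the 10% threshold the abstract cites for demand industries.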
NASA Astrophysics Data System (ADS)
Lin, Tsung-Chih
2010-12-01
In this paper, a novel direct adaptive interval type-2 fuzzy-neural tracking controller, equipped with a sliding mode and the Lyapunov synthesis approach, is proposed to handle training data corrupted by noise or rule uncertainties for SISO nonlinear systems subject to external disturbances. By employing adaptive fuzzy-neural control theory, update laws are derived for approximating the uncertain nonlinear dynamical system. Meanwhile, the sliding mode control method and the Lyapunov stability criterion are incorporated into the adaptive fuzzy-neural control scheme so that the derived controller is robust with respect to unmodeled dynamics, external disturbances, and approximation errors. In comparison with conventional methods, the advocated approach not only guarantees closed-loop stability but also ensures that the output tracking error of the overall system converges to zero asymptotically, without prior knowledge of the upper bound of the lumped uncertainty. Furthermore, the chattering effect of the control input is substantially reduced by the proposed technique. Finally, a simulation example is given to illustrate the performance of the proposed method.
The Neural-fuzzy Thermal Error Compensation Controller on CNC Machining Center
NASA Astrophysics Data System (ADS)
Tseng, Pai-Chung; Chen, Shen-Len
The geometric errors and structural thermal deformation are factors that influence the machining accuracy of a Computer Numerical Control (CNC) machining center. Researchers have therefore paid attention to thermal error compensation technologies for CNC machine tools, and some real-time error compensation techniques have been successfully demonstrated in both laboratories and industrial sites; the compensation results still need to be enhanced. In this research, neural-fuzzy theory is used to derive a thermal prediction model. An IC-type thermometer detects the temperature variation of the heat sources, and the thermal drifts are measured online by a touch-triggered probe with a standard bar. A thermal prediction model is then derived by neural-fuzzy theory from the temperature variation and the thermal drifts. A Graphic User Interface (GUI) system is also built to provide a user-friendly operation interface, implemented with Inprise C++ Builder. The experimental results show that the thermal prediction model developed by the neural-fuzzy methodology can improve machining accuracy from 80 µm to 3 µm. Compared with multi-variable linear regression analysis, the compensation accuracy is improved from ±10 µm to ±3 µm.
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng
2006-12-01
An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and the squared-error variation, denoted by (e²(n), Δe²(n)), into a forgetting factor λ(n). For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using a fuzzy-inference-controlled step size μ(n). This receiver provides both fast convergence/tracking capability and small steady-state misadjustment compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms, respectively, by at least 3 dB and 1.5 dB in bit-error rate (BER) for multipath fading channels.
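The variable step-size principle behind the fuzzy-controlled LMS receiver is that a large error should produce a large step (fast tracking) and a small error a small step (low misadjustment). The sketch below replaces the fuzzy inference with a crude numeric mapping and applies it to a toy channel-identification problem; the channel, mapping, and parameters are all assumptions for illustration.

```python
import numpy as np

def vss_lms(x, d, taps=4, mu_min=0.01, mu_max=0.5, gamma=0.5):
    """Normalized LMS whose step size is driven by the squared error: a
    numeric stand-in for the fuzzy mapping of (e^2, delta e^2) to mu(n)."""
    w = np.zeros(taps)
    mu = mu_max
    sq_errors = []
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # regressor, most recent sample first
        e = d[n] - w @ u
        # Large squared error -> step grows toward mu_max (fast tracking);
        # small squared error -> step decays toward mu_min (low misadjustment).
        mu = float(np.clip(gamma * mu + (1 - gamma) * min(e * e, 1.0) * mu_max,
                           mu_min, mu_max))
        w += mu * e * u / (u @ u + 1e-9)  # normalized LMS update
        sq_errors.append(e * e)
    return w, sq_errors

rng = np.random.default_rng(42)
h = np.array([0.5, -0.2, 0.1, 0.05])      # hypothetical channel to identify
x = rng.normal(size=600)
d = np.convolve(x, h)[:len(x)]            # noiseless desired signal
w_hat, sq_errors = vss_lms(x, d)
```

Early on the error is large, the step stays big, and the weights move quickly; as the error collapses, the step shrinks toward mu_min, so the final weight vector settles tightly on the channel taps.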
Waffle mode error in the AEOS adaptive optics point-spread function
NASA Astrophysics Data System (ADS)
Makidon, Russell B.; Sivaramakrishnan, Anand; Roberts, Lewis C., Jr.; Oppenheimer, Ben R.; Graham, James R.
2003-02-01
Adaptive optics (AO) systems have improved astronomical imaging capabilities significantly over the last decade and have the potential to revolutionize the kinds of science done with 4-5 m class ground-based telescopes. Provided sufficient detailed study and analysis, existing AO systems can be improved beyond their original specified error budgets. Indeed, modeling AO systems has been a major activity in the past decade: sources of noise in the atmosphere and the wavefront sensing (WFS) control loop have received a great deal of attention, and many detailed and sophisticated control-theoretic and numerical models predicting AO performance already exist. In terms of AO system performance improvements, however, wavefront reconstruction (WFR) and wavefront calibration techniques have commanded relatively little attention. We elucidate the nature of some of these reconstruction problems and demonstrate their existence in data from the AEOS AO system. We simulate the AO correction of AEOS in the I-band and show that the magnitude of the 'waffle mode' error in the AEOS reconstructor is considerably larger than expected. We suggest ways of reducing the magnitude of this error and, in doing so, open up ways of understanding how wavefront reconstruction might handle bad actuators and partially illuminated WFS subapertures.
A dynamic system matching technique for improving the accuracy of MEMS gyroscopes
NASA Astrophysics Data System (ADS)
Stubberud, Peter A.; Stubberud, Stephen C.; Stubberud, Allen R.
2014-12-01
A classical MEMS gyro transforms angular rates into electrical values through Euler's equations of angular rotation. Production models of a MEMS gyroscope will have manufacturing errors in the coefficients of the differential equations. The output signal of a production gyroscope will be corrupted by noise, with a major component of the noise due to the manufacturing errors. As is the case of the components in an analog electronic circuit, one way of controlling the variability of a subsystem is to impose extremely tight control on the manufacturing process so that the coefficient values are within some specified bounds. This can be expensive and may even be impossible as is the case in certain applications of micro-electromechanical (MEMS) sensors. In a recent paper [2], the authors introduced a method for combining the measurements from several nominally equal MEMS gyroscopes using a technique based on a concept from electronic circuit design called dynamic element matching [1]. Because the method in this paper deals with systems rather than elements, it is called a dynamic system matching technique (DSMT). The DSMT generates a single output by randomly switching the outputs of several, nominally identical, MEMS gyros in and out of the switch output. This has the effect of 'spreading the spectrum' of the noise caused by the coefficient errors generated in the manufacture of the individual gyros. A filter can then be used to eliminate that part of the spread spectrum that is outside the pass band of the gyro. A heuristic analysis in that paper argues that the DSMT can be used to control the effects of the random coefficient variations. In a follow-on paper [4], a simulation of a DSMT indicated that the heuristics were consistent. In this paper, analytic expressions of the DSMT noise are developed which confirm that the earlier conclusions are valid. 
These expressions include the various DSMT design parameters and, therefore, can be used as design tools for DSMT systems.
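The switching-and-filtering idea can be illustrated numerically. The following Python sketch is hypothetical (the gyro count, error sigma, and moving-average filter are illustrative assumptions, not the paper's parameters): randomly selecting among several gyros with fixed scale-factor errors spreads the error spectrum, and low-pass filtering leaves roughly the ensemble-mean error rather than any single gyro's error.

```python
import random
import statistics

random.seed(0)

def dsmt_output(true_rate, scale_errors, n_samples, window):
    """Randomly switch among gyros whose fixed scale-factor errors differ,
    then low-pass filter the switched stream with a moving average."""
    switched = [true_rate * (1.0 + random.choice(scale_errors))
                for _ in range(n_samples)]
    return [statistics.fmean(switched[i:i + window])
            for i in range(n_samples - window + 1)]

true_rate = 1.0
# eight nominally identical gyros, each with a random 5%-sigma scale error
scale_errors = [random.gauss(0.0, 0.05) for _ in range(8)]

filtered = dsmt_output(true_rate, scale_errors, n_samples=2000, window=200)
# the residual bias approaches the ensemble mean of the individual errors
residual = statistics.fmean(filtered) - true_rate
```

The filtered output's spread is much smaller than any individual gyro's scale error, which is the spectrum-spreading effect the abstract describes.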
Single-chip microcomputer control system for a diffraction grating ruling engine
NASA Astrophysics Data System (ADS)
Wang, Xiaolin; Zhang, Yuhua; Yang, Houmin; Guo, Du
1998-08-01
A control system for a diffraction grating ruling engine, built around an 8031 single-chip microcomputer, has been developed; its hardware and software are presented in this paper. Techniques such as a program-controlled amplifier, interference-fringe subdivision, and stepped control of motor velocity are adopted to improve the control accuracy. With this control system, gratings of 8 different spacings can be ruled; the positioning precision of the diffraction grating ruling engine (sigma) is 3.6 nm, and the maximum positioning error is less than 14.6 nm.
APA national audit of pediatric opioid infusions.
Morton, Neil S; Errera, Agata
2010-02-01
A prospective audit of neonates, infants, and children receiving opioid infusion techniques managed by pediatric acute pain teams from across the United Kingdom and Eire was undertaken over a period of 17 months. The aim was to determine the incidence, nature, and severity of serious clinical incidents (SCIs) associated with the techniques of continuous opioid infusion, patient-controlled analgesia, and nurse-controlled analgesia in patients aged 0-18. The audit was funded by the Association of Paediatric Anaesthetists (APA) and performed by the acute pain services of 18 centers throughout the United Kingdom. Data were submitted weekly via a web-based return form designed by the Document Capture Company that documented data on all patients receiving opioid infusions and any SCIs. Eight categories of SCI were identified in advance, and the reported SCIs were graded in terms of severity: Grade 1 (death/permanent harm); Grade 2 (harm, but full recovery, and resulting in termination of the technique or needing significant intervention); Grade 3 (potential but no actual harm). Data were collected over a period of 17 months (25/06/07-25/11/08) and stored on a secure server for analysis. Forty-six SCIs were reported in 10,726 opioid infusion techniques. One Grade 1 incident (1 : 10,726) of cardiac arrest occurred and was associated with aspiration pneumonitis and the underlying neurological condition, neurocutaneous melanosis. Twenty-eight Grade 2 incidents (1 : 383) were reported, of which half were respiratory depression. The seventeen Grade 3 incidents (1 : 631) were all drug errors because of programming or prescribing errors and were all reported by one center. The overall incidence of 1 : 10,000 of serious harm with opioid infusion techniques in children is comparable to the risks with pediatric epidural infusions and central blocks identified by two recent UK national audits (1,2).
Avoidable factors were identified including prescription and pump programming errors, use of concurrent sedatives or opioids by different routes and overgenerous dosing in infants. Early respiratory depression in patients with specific risk factors, such as young age, neurodevelopmental, respiratory, or cardiac comorbidities, who are receiving nurse-controlled analgesia or continuous opioid infusion suggests that closer monitoring for at least 2 h is needed for these cases. As a result of this audit, we can provide parents with better information on relative risks to help the process of informed consent.
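The quoted "1 in N" incidence figures follow directly from the raw counts in the abstract. A minimal arithmetic check:

```python
# counts taken from the audit abstract
techniques = 10726
incidents = {"grade_1": 1, "grade_2": 28, "grade_3": 17}

# express each incidence as "1 in N" techniques, as reported
one_in_n = {grade: round(techniques / count)
            for grade, count in incidents.items()}
# grade_1 -> 1:10726, grade_2 -> ~1:383, grade_3 -> ~1:631
```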
Rapid fabrication of miniature lens arrays by four-axis single point diamond machining
McCall, Brian; Tkaczyk, Tomasz S.
2013-01-01
A novel method for fabricating lens arrays and other non-rotationally symmetric free-form optics is presented. This is a diamond machining technique using 4 controlled axes of motion – X, Y, Z, and C. As in 3-axis diamond micro-milling, a diamond ball endmill is mounted to the work spindle of a 4-axis ultra-precision computer numerical control (CNC) machine. Unlike 3-axis micro-milling, the C-axis is used to hold the cutting edge of the tool in contact with the lens surface for the entire cut. This allows the feed rates to be doubled compared to the current state of the art of micro-milling while producing an optically smooth surface with very low surface form error and exceptionally low radius error. PMID:23481813
A technique for evaluating the application of the pin-level stuck-at fault model to VLSI circuits
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Finelli, George B.
1987-01-01
Accurate fault models are required to conduct the experiments defined in validation methodologies for highly reliable fault-tolerant computers (e.g., computers with a probability of failure of 10 to the -9 for a 10-hour mission). Described is a technique by which a researcher can evaluate the capability of the pin-level stuck-at fault model to simulate true error behavior symptoms in very large scale integrated (VLSI) digital circuits. The technique is based on a statistical comparison of the error behavior resulting from faults applied at the pin-level of and internal to a VLSI circuit. As an example of an application of the technique, the error behavior of a microprocessor simulation subjected to internal stuck-at faults is compared with the error behavior which results from pin-level stuck-at faults. The error behavior is characterized by the time between errors and the duration of errors. Based on this example data, the pin-level stuck-at fault model is found to deliver less than ideal performance. However, with respect to the class of faults which cause a system crash, the pin-level, stuck-at fault model is found to provide a good modeling capability.
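The comparison described rests on simple statistics of the observed error behavior. A hypothetical sketch (the timestamps below are invented, not the paper's data) of characterizing fault responses by the time between errors:

```python
import statistics

def inter_error_times(timestamps):
    """Times between successive observed errors."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

# hypothetical error timestamps (seconds) from two fault-injection campaigns
pin_level = inter_error_times([0.0, 1.1, 2.0, 3.2, 4.1])
internal = inter_error_times([0.0, 0.7, 1.9, 2.4, 3.9])

# compare the two populations' central tendency and spread
delta_mean = abs(statistics.fmean(pin_level) - statistics.fmean(internal))
delta_spread = abs(statistics.stdev(pin_level) - statistics.stdev(internal))
```

A small difference in these statistics would suggest the pin-level model reproduces the internal-fault error behavior well; a large one would not.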
Odaira, Chikayuki; Kobayashi, Takuya; Kondo, Hisatomo
2016-01-01
An impression technique called optical impression, which uses an intraoral scanner, has attracted attention in digital dentistry. This study aimed to evaluate the accuracy of the optical impression by comparing a virtual model reproduced by an intraoral scanner with a working cast made by the conventional silicone impression technique. Two implants were placed on a master model. Working casts made of plaster were fabricated from the master model by silicone impression. The distance between the ball abutments and the angulation between the healing abutments of 5 mm and 7 mm height on the master model were measured using a Computer Numerical Control Coordinate Measuring Machine (CNCCMM) as the control. Working casts were then measured using the CNCCMM, and virtual models, via stereolithography data of the master model, were measured with three-dimensional analysis software. The distance between ball abutments of the master model was 9634.9 ± 1.2 μm. The mean trueness values of the Lava COS and the working casts were 64.5 μm and 22.5 μm greater, respectively, than that of the control. The mean precision values of the Lava COS and the working casts were 15.6 μm and 13.5 μm, respectively. In the case of a 5-mm-height healing abutment, the mean angulation error of the Lava COS was greater than that of the working cast, resulting in significant differences in trueness and precision. However, in the case of a 7-mm-height abutment, the mean angulation errors of the Lava COS and the working cast were not significantly different in trueness or precision. Therefore, distance errors of the optical impression were slightly greater than those of the conventional impression. Moreover, the trueness and precision of the angulation error could be improved in the optical impression by using longer healing abutments. In the near future, the development of information technology could enable improvement in the accuracy of the optical impression with intraoral scanners. PMID:27706225
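Trueness and precision as used in such studies can be computed as follows. This sketch assumes the usual ISO 5725-style definitions (trueness: deviation of the measurement mean from a reference; precision: spread of repeated measurements), and the measurement values are hypothetical, not the study's data:

```python
import statistics

def trueness_and_precision(measurements, reference):
    """Trueness: deviation of the measurement mean from the reference.
    Precision: sample standard deviation of repeated measurements."""
    trueness = abs(statistics.fmean(measurements) - reference)
    precision = statistics.stdev(measurements)
    return trueness, precision

# hypothetical repeated inter-abutment distances (micrometres);
# the reference stands in for the CNCCMM measurement of the master model
reference = 9634.9
scans = [9699.1, 9701.3, 9698.0, 9700.2, 9696.9]
trueness, precision = trueness_and_precision(scans, reference)
```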
NASA Astrophysics Data System (ADS)
Edalati, L.; Khaki Sedigh, A.; Aliyari Shooredeli, M.; Moarefianpour, A.
2018-02-01
This paper deals with the design of adaptive fuzzy dynamic surface control for uncertain strict-feedback nonlinear systems with asymmetric time-varying output constraints in the presence of input saturation. To approximate the unknown nonlinear functions and overcome the problem of explosion of complexity, a fuzzy logic system is combined with dynamic surface control in the backstepping design technique. To ensure satisfaction of the output constraints, an asymmetric time-varying barrier Lyapunov function (BLF) is used. Moreover, by applying the minimal learning parameter technique, the number of online parameter updates for each subsystem is reduced to two. Semi-global uniform ultimate boundedness (SGUUB) of all the closed-loop signals, with appropriate tracking error convergence, is thereby guaranteed. The effectiveness of the proposed control is demonstrated by two simulation examples.
Cecconi, Maurizio; Rhodes, Andrew; Poloniecki, Jan; Della Rocca, Giorgio; Grounds, R Michael
2009-01-01
Bland-Altman analysis is used for assessing agreement between two measurements of the same clinical variable. In the field of cardiac output monitoring, its results, in terms of bias and limits of agreement, are often difficult to interpret, leading clinicians to use a cutoff of 30% in the percentage error in order to decide whether a new technique may be considered a good alternative. This percentage error of +/- 30% arises from the assumption that the commonly used reference technique, intermittent thermodilution, has a precision of +/- 20% or less. The combination of two precisions of +/- 20% equates to a total error of +/- 28.3%, which is commonly rounded up to +/- 30%. Thus, finding a percentage error of less than +/- 30% should equate to the new tested technique having an error similar to the reference, which therefore should be acceptable. In a worked example in this paper, we discuss the limitations of this approach, in particular with regard to the situation in which the reference technique may be either more or less precise than would normally be expected. This can lead to inappropriate conclusions being drawn from data acquired in validation studies of new monitoring technologies. We conclude that it is not acceptable to present comparison studies quoting percentage error as an acceptability criterion without reporting the precision of the reference technique.
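The ±28.3% figure quoted above is the quadrature (root-sum-square) combination of the two methods' precisions. A minimal check, including the converse question of what test-method precision still satisfies a 30% criterion against a ±20% reference:

```python
import math

def combined_percentage_error(precision_reference, precision_test):
    """Errors of two independent methods add in quadrature."""
    return math.hypot(precision_reference, precision_test)

# two methods, each precise to +/-20%, combine to ~+/-28.3%
total = combined_percentage_error(20.0, 20.0)

# the precision a test method may have and still meet a 30% overall
# criterion against a +/-20% reference
allowed_test = math.sqrt(30.0**2 - 20.0**2)
```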
Spatial Assessment of Model Errors from Four Regression Techniques
Lianjun Zhang; Jeffrey H. Gove; Jeffrey H. Gove
2005-01-01
Forest modelers have attempted to account for the spatial autocorrelations among trees in growth and yield models by applying alternative regression techniques such as linear mixed models (LMM), generalized additive models (GAM), and geographically weighted regression (GWR). However, the model errors are commonly assessed using average errors across the entire study...
Technique for positioning hologram for balancing large data capacity with fast readout
NASA Astrophysics Data System (ADS)
Shimada, Ken-ichi; Hosaka, Makoto; Yamazaki, Kazuyoshi; Onoe, Shinsuke; Ide, Tatsuro
2017-09-01
The technical difficulty of balancing large data capacity with a high data transfer rate in holographic data storage systems (HDSSs) is significantly high because of tight tolerances for physical perturbation. From a system margin perspective in terabyte-class HDSSs, the positioning error of a holographic disc should be within about 10 µm to ensure high readout quality. Furthermore, fine control of the positioning should be accomplished within a time frame of about 10 ms for a high data transfer rate of the Gbps class, while a conventional method based on servo control of spindle or sled motors can rarely satisfy the requirement. In this study, a new compensation method for the effect of positioning error, which precisely controls the positioning of a Nyquist aperture instead of a holographic disc, has been developed. The method relaxes the markedly low positional tolerance of a holographic disc. Moreover, owing to the markedly light weight of the aperture, positioning control within the required time frame becomes feasible.
Robust Optimal Adaptive Control Method with Large Adaptive Gain
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2009-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time delay margin.
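The baseline scheme the modification builds on is standard model-reference adaptive control (MRAC). A scalar Python sketch of that baseline follows; the plant, gains, and step sizes are illustrative assumptions, and the optimal control modification term itself is not reproduced here. It shows the mechanism the abstract refers to: a large adaptive gain drives the tracking error down quickly.

```python
# scalar MRAC: plant x' = a_p*x + u, reference model xm' = a_m*xm + r,
# control u = k*x + r, adaptive law k' = -gamma*e*x with e = x - xm;
# the ideal gain is k* = a_m - a_p
a_p, a_m = 1.0, -2.0       # unstable plant, stable reference model
gamma = 50.0               # large adaptive gain -> fast adaptation
dt, steps = 0.001, 10_000
x = xm = k = 0.0
r = 1.0                    # step reference command

for _ in range(steps):
    e = x - xm
    u = k * x + r
    x += dt * (a_p * x + u)        # Euler-integrated plant
    xm += dt * (a_m * xm + r)      # Euler-integrated reference model
    k += dt * (-gamma * e * x)     # Lyapunov-based gradient update

# after adaptation the gain approaches k* = a_m - a_p
```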
Event-Based Sensing and Control for Remote Robot Guidance: An Experimental Case
Santos, Carlos; Martínez-Rey, Miguel; Santiso, Enrique
2017-01-01
This paper describes the theoretical and practical foundations for remote control of a mobile robot for nonlinear trajectory tracking using an external localisation sensor. It constitutes a classical networked control system, whereby event-based techniques for both control and state estimation contribute to efficient use of communications and reduce sensor activity. Measurement requests are dictated by an event-based state estimator by setting an upper bound to the estimation error covariance matrix. The rest of the time, state prediction is carried out with the Unscented transformation. This prediction method makes it possible to select the appropriate instants at which to perform actuations on the robot so that guidance performance does not degrade below a certain threshold. Ultimately, we obtained a combined event-based control and estimation solution that drastically reduces communication accesses. The magnitude of this reduction is set according to the tracking error margin of a P3-DX robot following a nonlinear trajectory, remotely controlled with a mini PC and whose pose is detected by a camera sensor. PMID:28878144
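The covariance-bound trigger can be illustrated with a scalar random-walk estimator. This is a hypothetical sketch, not the paper's Unscented-transformation formulation: the estimator predicts at every step but requests a measurement (and performs an update) only when the predicted error variance exceeds a bound, which is what reduces sensor and communication activity.

```python
def event_based_estimate(measurements, q, r, p_max):
    """Scalar predict/update loop: predict every step, but request a
    measurement (and update) only when the error variance passes p_max."""
    x, p = 0.0, 0.0
    requests = 0
    for z in measurements:
        p += q                      # prediction inflates uncertainty
        if p > p_max:               # event trigger on the covariance bound
            k = p / (p + r)         # Kalman gain
            x += k * (z - x)
            p *= 1.0 - k
            requests += 1
    return x, p, requests

# a constant true state observed 100 times; far fewer than 100
# measurement requests are actually issued
x, p, requests = event_based_estimate([1.0] * 100, q=0.01, r=0.1, p_max=0.045)
```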
Design, test, and evaluation of three active flutter suppression controllers
NASA Technical Reports Server (NTRS)
Adams, William M., Jr.; Christhilf, David M.; Waszak, Martin R.; Mukhopadhyay, Vivek; Srinathkumar, S.
1992-01-01
Three control law design techniques for flutter suppression are presented. Each technique uses multiple control surfaces and/or sensors. The first method uses traditional tools (such as pole/zero loci and Nyquist diagrams) for producing a controller that has minimal complexity and which is sufficiently robust to handle plant uncertainty. The second procedure uses linear combinations of several accelerometer signals and dynamic compensation to synthesize the modal rate of the critical mode for feedback to the distributed control surfaces. The third technique starts with a minimum-energy linear quadratic Gaussian controller, iteratively modifies intensity matrices corresponding to input and output noise, and applies controller order reduction to achieve a low-order, robust controller. The resulting designs were implemented digitally and tested subsonically on the active flexible wing wind-tunnel model in the Langley Transonic Dynamics Tunnel. Only the traditional pole/zero loci design was sufficiently robust to errors in the nominal plant to successfully suppress flutter during the test. The traditional pole/zero loci design provided simultaneous suppression of symmetric and antisymmetric flutter with a 24-percent increase in attainable dynamic pressure. Posttest analyses are shown which illustrate the problems encountered with the other laws.
Direct discretization of planar div-curl problems
NASA Technical Reports Server (NTRS)
Nicolaides, R. A.
1989-01-01
A control volume method is proposed for planar div-curl systems. The method is independent of potential and least squares formulations, and works directly with the div-curl system. The novelty of the technique lies in its use of a single local vector field component and two control volumes rather than the other way around. A discrete vector field theory comes quite naturally from this idea and is developed. Error estimates are proved for the method, and other ramifications investigated.
Reducing Wrong Patient Selection Errors: Exploring the Design Space of User Interface Techniques
Sopan, Awalin; Plaisant, Catherine; Powsner, Seth; Shneiderman, Ben
2014-01-01
Wrong patient selection errors are a major issue for patient safety; from ordering medication to performing surgery, the stakes are high. Widespread adoption of Electronic Health Record (EHR) and Computerized Provider Order Entry (CPOE) systems makes patient selection using a computer screen a frequent task for clinicians. Careful design of the user interface can help mitigate the problem by helping providers recall their patients’ identities, accurately select their names, and spot errors before orders are submitted. We propose a catalog of twenty seven distinct user interface techniques, organized according to a task analysis. An associated video demonstrates eighteen of those techniques. EHR designers who consider a wider range of human-computer interaction techniques could reduce selection errors, but verification of efficacy is still needed. PMID:25954415
NASA Technical Reports Server (NTRS)
Puliafito, E.; Bevilacqua, R.; Olivero, J.; Degenhardt, W.
1992-01-01
The formal retrieval error analysis of Rodgers (1990) allows the quantitative determination of such retrieval properties as measurement error sensitivity, resolution, and inversion bias. This technique was applied to five numerical inversion techniques and two nonlinear iterative techniques used for the retrieval of middle atmospheric constituent concentrations from limb-scanning millimeter-wave spectroscopic measurements. It is found that the iterative methods have better vertical resolution, but are slightly more sensitive to measurement error than constrained matrix methods. The iterative methods converge to the exact solution, whereas two of the matrix methods under consideration have an explicit constraint, the sensitivity of the solution to the a priori profile. Tradeoffs of these retrieval characteristics are presented.
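The core quantities of that formal analysis are the retrieval gain matrix G and the averaging kernel A = GK, whose rows describe vertical resolution and whose departure from the identity measures sensitivity to the a priori. A toy two-level sketch follows; the forward model K and damped-least-squares regularisation are hypothetical choices for illustration, in plain Python with no libraries:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# hypothetical forward model K (state -> measurements) and a damped
# least-squares gain G = (K^T K + lam*I)^-1 K^T
K = [[1.0, 0.4],
     [0.2, 1.0]]
lam = 0.1
KtK = matmul(transpose(K), K)
M = [[KtK[0][0] + lam, KtK[0][1]],
     [KtK[1][0], KtK[1][1] + lam]]
G = matmul(inv2(M), transpose(K))

# averaging kernel: rows near identity => good resolution;
# diagonal below 1 => the retrieval leans on the a priori
A = matmul(G, K)
```

With lam = 0 the kernel would reduce to the identity (an exact, but noise-sensitive, retrieval); increasing lam trades resolution for measurement-error suppression, which is the tradeoff the abstract discusses.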
Quaternion error-based optimal control applied to pinpoint landing
NASA Astrophysics Data System (ADS)
Ghiglino, Pablo
Accurate control for pinpoint planetary landing - i.e., the goal of achieving landing errors on the order of 100 m for unmanned missions - is a complex problem that has been tackled in different ways in the available literature. Among other challenges, this kind of control is also affected by the well-known trade-off in UAV control: for complex underlying models the control is sub-optimal, while optimal control is applied to simplified models. The goal of this research has been the development of new control algorithms able to tackle these challenges, and the result is two novel optimal control algorithms, namely OQTAL and HEX2OQTAL. These controllers share three key properties that are thoroughly proven and shown in this thesis: stability, accuracy and adaptability. Stability is rigorously demonstrated for both controllers. Accuracy is shown by comparing these novel controllers with other industry-standard algorithms in several different scenarios: there is a gain in accuracy of at least 15% for each controller, and in many cases much more than that. A new tuning algorithm based on swarm heuristics optimisation was also developed as part of this research in order to tune, in an online manner, the standard Proportional-Integral-Derivative (PID) controllers used for benchmarking. Finally, adaptability of these controllers can be seen as a combination of four elements: mathematical model extensibility, cost matrix tuning, reduced computation time required, and no prior knowledge of the navigation or guidance strategies needed. Further simulations of real planetary landing trajectories have shown that these controllers are capable of achieving landing errors on the order of pinpoint landing requirements, making them not only very precise UAV controllers, but also potential candidates for pinpoint landing unmanned missions.
Design issues for a reinforcement-based self-learning fuzzy controller
NASA Technical Reports Server (NTRS)
Yen, John; Wang, Haojin; Dauherity, Walter
1993-01-01
Fuzzy logic controllers have some often-cited advantages over conventional techniques such as PID control: easy implementation, accommodation of natural language, the ability to cover a wider range of operating conditions, and others. One major obstacle that hinders their broader application is the lack of a systematic way to develop and modify the rules; as a result, the creation and modification of fuzzy rules often depends on trial and error or pure experimentation. One proposed approach to address this issue is the self-learning fuzzy logic controller (SFLC), which uses reinforcement learning techniques to learn the desirability of states and to adjust the consequent part of fuzzy control rules accordingly. Due to the different dynamics of the controlled processes, the performance of a self-learning fuzzy controller is highly contingent on its design. The design issue has not received sufficient attention. The issues related to the design of an SFLC for application to chemical processes are discussed, and its performance is compared with that of PID and self-tuning fuzzy logic controllers.
A framework for software fault tolerance in real-time systems
NASA Technical Reports Server (NTRS)
Anderson, T.; Knight, J. C.
1983-01-01
A classification scheme for errors and a technique for the provision of software fault tolerance in cyclic real-time systems are presented. The technique requires that the process structure of a system be represented by a synchronization graph, which is used by an executive as a specification of the relative times at which the processes will communicate during execution. Communication between concurrent processes is severely limited and may only take place between processes engaged in an exchange. A history of error occurrences is maintained by an error handler. When an error is detected, the error handler classifies it using the error history information and then initiates appropriate recovery action.
Ammari, Maha Al; Sultana, Khizra; Yunus, Faisal; Ghobain, Mohammed Al; Halwan, Shatha M. Al
2016-01-01
Objectives: To assess the proportion of critical errors committed while demonstrating the inhaler technique in hospitalized patients diagnosed with asthma and chronic obstructive pulmonary disease (COPD). Methods: This cross-sectional observational study was conducted in 47 asthmatic and COPD patients using inhaler devices. The study took place at King Abdulaziz Medical City, Riyadh, Saudi Arabia between September and December 2013. Two pharmacists independently assessed inhaler technique with a validated checklist. Results: Seventy percent of patients made at least one critical error while demonstrating their inhaler technique, and the mean number of critical errors per patient was 1.6. Most patients used metered dose inhaler (MDI), and 73% of MDI users and 92% of dry powder inhaler users committed at least one critical error. Conclusion: Inhaler technique in hospitalized Saudi patients was inadequate. Health care professionals should understand the importance of reassessing and educating patients on a regular basis for inhaler technique, recommend the use of a spacer when needed, and regularly assess and update their own inhaler technique skills. PMID:27146622
Dichroic beamsplitter for high energy laser diagnostics
LaFortune, Kai N [Livermore, CA; Hurd, Randall [Tracy, CA; Fochs, Scott N [Livermore, CA; Rotter, Mark D [San Ramon, CA; Hackel, Lloyd [Livermore, CA
2011-08-30
Wavefront control techniques are provided for the alignment and performance optimization of optical devices. A Shack-Hartmann wavefront sensor can be used to measure the wavefront distortion, and a control system generates a feedback error signal to optics inside the device to correct the wavefront. The system can be calibrated with a low-average-power probe laser. An optical element is provided to couple the optical device to a diagnostic/control package in a way that optimizes both the output power of the optical device and the coupling of the probe light into the diagnostics.
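A Shack-Hartmann sensor measures local wavefront slopes, from which the wavefront shape, and hence a feedback error signal, is reconstructed. A one-dimensional illustrative sketch (assumed geometry and numbers; real systems use a two-dimensional least-squares reconstructor over the lenslet grid):

```python
def reconstruct_wavefront_1d(slopes, spacing):
    """Cumulatively integrate subaperture slopes into wavefront heights
    (the unmeasurable piston term is fixed at zero)."""
    w = [0.0]
    for s in slopes:
        w.append(w[-1] + s * spacing)
    return w

def feedback_errors(wavefront, target):
    """Per-point error signal driving the corrective optics."""
    return [t - wi for wi, t in zip(wavefront, target)]

# a uniform tilt measured across three subapertures, 0.5 units apart,
# corrected toward a flat target wavefront
w = reconstruct_wavefront_1d([1.0, 1.0, 1.0], spacing=0.5)
errs = feedback_errors(w, target=[0.0, 0.0, 0.0, 0.0])
```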
Intelligent voltage control strategy for three-phase UPS inverters with output LC filter
NASA Astrophysics Data System (ADS)
Jung, J. W.; Leu, V. Q.; Dang, D. Q.; Do, T. D.; Mwasilu, F.; Choi, H. H.
2015-08-01
This paper presents a supervisory fuzzy neural network control (SFNNC) method for a three-phase inverter of uninterruptible power supplies (UPSs). The proposed voltage controller comprises a fuzzy neural network control (FNNC) term and a supervisory control term. The FNNC term is deliberately employed to estimate the uncertain terms, and the supervisory control term is designed based on the sliding mode technique to stabilise the system dynamic errors. To improve the learning capability, the FNNC term incorporates an online parameter training methodology, using the gradient descent method and Lyapunov stability theory. Besides, a linear load current observer that estimates the load currents is used to exclude the load current sensors. The proposed SFNN controller and the observer are robust to the filter inductance variations, and their stability analyses are described in detail. The experimental results obtained on a prototype UPS test bed with a TMS320F28335 DSP are presented to validate the feasibility of the proposed scheme. Verification results demonstrate that the proposed control strategy can achieve smaller steady-state error and lower total harmonic distortion when subjected to nonlinear or unbalanced loads compared to the conventional sliding mode control method.
NASA Astrophysics Data System (ADS)
Bertin, Stephane; Friedrich, Heide; Delmas, Patrice; Chan, Edwin; Gimel'farb, Georgy
2015-03-01
Grain-scale monitoring of fluvial morphology is important for the evaluation of river system dynamics. Significant progress in remote sensing and computer performance allows rapid high-resolution data acquisition; however, applications in fluvial environments remain challenging. Even in a controlled environment, such as a laboratory, the extensive acquisition workflow is prone to the propagation of errors in digital elevation models (DEMs). This is valid for both of the common surface recording techniques: digital stereo photogrammetry and terrestrial laser scanning (TLS). The optimisation of the acquisition process, an effective way to reduce the occurrence of errors, is generally limited by the use of commercial software. Therefore, the removal of evident blunders during post-processing is regarded as standard practice, although this may introduce new errors. This paper presents a detailed evaluation of a digital stereo-photogrammetric workflow developed for fluvial hydraulic applications. The introduced workflow is user-friendly and can be adapted to various close-range measurements: imagery is acquired with two Nikon D5100 cameras and processed using non-proprietary "on-the-job" calibration and dense scanline-based stereo matching algorithms. Novel ground truth evaluation studies were designed to identify the DEM errors, which resulted from a combination of calibration errors, inaccurate image rectifications and stereo-matching errors. To ensure optimum DEM quality, we show that systematic DEM errors must be minimised by ensuring a good distribution of control points throughout the image format during calibration. DEM quality is then largely dependent on the imagery utilised. We evaluated the open access multi-scale Retinex algorithm to facilitate the stereo matching, and quantified its influence on DEM quality. Occlusions, inherent to any roughness element, are still a major limiting factor to DEM accuracy.
We show that a careful selection of the camera-to-object and baseline distance reduces errors in occluded areas and that realistic ground truths help to quantify those errors.
Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant
Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar
2015-01-01
Background A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedures in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, the PTW was considered as a goal and the detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied to estimate human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in the PTW system. Some suggestions to reduce the likelihood of errors, especially regarding modification of the performance shaping factors and dependencies among tasks, are provided. PMID:27014485
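The core SPAR-H calculation described in the abstract can be sketched in a few lines (a minimal sketch: the multiplier values in the example are invented for illustration; the method's published nominal HEPs are 0.01 for diagnosis and 0.001 for action tasks):

```python
def spar_h_hep(nhep, psf_multipliers):
    """Human error probability (HEP) per the SPAR-H method.

    nhep: nominal HEP (0.01 for diagnosis tasks, 0.001 for action tasks).
    psf_multipliers: performance shaping factor (PSF) multipliers.
    With three or more negative PSFs (multiplier > 1), SPAR-H applies an
    adjustment factor so the result remains a valid probability.
    """
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    negative = sum(1 for m in psf_multipliers if m > 1)
    if negative >= 3:
        return (nhep * composite) / (nhep * (composite - 1) + 1)
    return min(nhep * composite, 1.0)

# Hypothetical diagnosis task: high stress (x2), incomplete procedures (x5)
print(spar_h_hep(0.01, [2, 5]))   # 0.1
```

With many negative PSFs the adjustment factor keeps the product from exceeding 1, which is why quantified HEPs such as the study's 0.11 remain interpretable as probabilities.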
Applications of optical measurement technology in pollution gas monitoring at thermal power plants
NASA Astrophysics Data System (ADS)
Wang, Jian; Yu, Dahai; Ye, Huajun; Yang, Jianhu; Ke, Liang; Han, Shuanglai; Gu, Haitao; Chen, Yingbin
2011-11-01
This paper presents the use of advanced optical measurement techniques to implement stack gas emission monitoring and process control. A system is designed to conduct online measurement of SO2/NOX and mercury emissions from stacks and of slipping NH3 in the de-nitrification process. The system consists of an SO2/NOX monitoring subsystem, a mercury monitoring subsystem, and an NH3 monitoring subsystem. The SO2/NOX monitoring subsystem is developed based on the ultraviolet differential optical absorption spectroscopy (UV-DOAS) technique. With this technique, a linearity error of less than +/-1% F.S. is achieved, and the measurement errors resulting from optical path contamination and light fluctuation are removed. Moreover, this subsystem employs in situ extraction and a hot-wet line sampling technique to significantly reduce SO2 loss due to condensation and to protect the gas pipeline from corrosion. The mercury monitoring subsystem measures the concentration of elemental mercury (Hg0), oxidized mercury (Hg2+), and total gaseous mercury (HgT) in the flue gas exhaust. Measurement of Hg with a low detection limit (0.1 μg/m3) and high sensitivity is realized using the cold vapor atomic fluorescence spectroscopy (CVAFS) technique. This subsystem is also equipped with an inertial-separation sampling technique to prevent the gas pipeline from clogging and to reduce speciated mercury measurement error. The NH3 monitoring subsystem measures the concentration of slipping NH3 and thereby helps improve the efficiency of de-nitrification. NH3 concentrations as low as 0.1 ppm can be measured using the off-axis integrated cavity output spectroscopy (ICOS) and tunable diode laser absorption spectroscopy (TDLAS) techniques. The problem of trace NH3 sampling loss is solved by heating the gas pipelines during measurement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koepferl, Christine M.; Robitaille, Thomas P.; Dale, James E., E-mail: koepferl@usm.lmu.de
We use a large data set of realistic synthetic observations (produced in Paper I of this series) to assess how observational techniques affect the measurement of physical properties of star-forming regions. In this part of the series (Paper II), we explore the reliability of the measured total gas mass, dust surface density and dust temperature maps derived from modified blackbody fitting of synthetic Herschel observations. From our pixel-by-pixel analysis of the measured dust surface density and dust temperature we find a worrisome error spread, especially close to star formation sites and low-density regions, where for those "contaminated" pixels the surface densities can be under- or overestimated by up to three orders of magnitude. In light of this, we recommend treating the pixel-based results from this technique with caution in regions with active star formation. In regions of high background, typical of the inner Galactic plane, we are not able to recover reliable surface density maps of individual synthetic regions, since low-mass regions are lost in the far-infrared background. When measuring the total gas mass of regions in moderate background, we find that modified blackbody fitting works well (absolute error: +9%; −13%) up to 10 kpc distance (errors increase with distance). Commonly, the initial images are convolved to the largest common beam size, which smears contaminated pixels over large areas. The resulting information loss makes this commonly used technique less verifiable, as χ² values can no longer be used as a quality indicator of a fitted pixel. Our control measurements of the total gas mass (without the step of convolution to the largest common beam size) produce similar results (absolute error: +20%; −7%) while having much lower median errors, especially for the high-mass stellar feedback phase. 
In upcoming papers (Paper III; Paper IV) of this series we test the reliability of the measured star formation rate with direct and indirect techniques.
Passive Sensor Integration for Vehicle Self-Localization in Urban Traffic Environment †
Gu, Yanlei; Hsu, Li-Ta; Kamijo, Shunsuke
2015-01-01
This research proposes an accurate vehicular positioning system which can achieve lane-level performance in urban canyons. Multiple passive sensors, including Global Navigation Satellite System (GNSS) receivers, onboard cameras and inertial sensors, are integrated in the proposed system. As the main source for localization, the GNSS technique suffers from Non-Line-Of-Sight (NLOS) propagation and multipath effects in urban canyons. This paper proposes to employ a novel GNSS positioning technique in the integration, which reduces the multipath and NLOS effects by using a 3D building map. In addition, the inertial sensor can describe the vehicle motion, but suffers from drift as time increases. This paper develops vision-based lane detection, which is first used to control the drift of the inertial sensor. Moreover, lane keeping and changing behaviors are extracted from the lane detection function and further reduce the lateral positioning error of the proposed localization system. We evaluate the integrated localization system in a challenging urban scenario. The experiments demonstrate that the proposed method achieves sub-meter accuracy with respect to mean positioning error. PMID:26633420
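The reason a periodic lane-detection fix bounds inertial drift can be shown with a minimal 1-D sketch (drift rate, gain and rates are invented here, not the paper's filter design):

```python
def dead_reckon(lateral, drift_rate, dt):
    """Inertial lateral estimate accumulates drift_rate metres per second."""
    return lateral + drift_rate * dt

def lane_correct(estimate, lane_measurement, gain=0.8):
    """Blend the drifting estimate toward the camera's lane measurement."""
    return estimate + gain * (lane_measurement - estimate)

truth = 0.0                            # vehicle stays centred in its lane
est = 0.0
for step in range(100):                # 10 s of 10 Hz inertial updates
    est = dead_reckon(est, drift_rate=0.05, dt=0.1)
    if step % 10 == 9:                 # camera lane fix at 1 Hz
        est = lane_correct(est, lane_measurement=truth)
print(abs(est - truth) < 0.1)          # drift stays bounded: True
```

Without the corrections the error grows without bound (0.5 m over the run); with them it settles near a small fixed point set by the drift rate and gain.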
Adaptive x-ray optics development at AOA-Xinetics
NASA Astrophysics Data System (ADS)
Lillie, Charles F.; Cavaco, Jeff L.; Brooks, Audrey D.; Ezzo, Kevin; Pearson, David D.; Wellman, John A.
2013-05-01
Grazing-incidence optics for X-ray applications require extremely smooth surfaces with precise mirror figures to provide well focused beams and small image spot sizes for astronomical telescopes and laboratory test facilities. The required precision has traditionally been achieved by time-consuming grinding and polishing of thick substrates with frequent pauses for precise metrology to check the mirror figure. More recently, substrates with high quality surface finish and figures have become available at reasonable cost, and techniques have been developed to mechanically adjust the figure of these traditionally polished substrates for ground-based applications. The beam-bending techniques currently in use are mechanically complex, however, with little control over mid-spatial frequency errors. For several years AOA-Xinetics has been developing techniques for shaping grazing incidence optics with surface-normal and surface-parallel electrostrictive lead magnesium niobate (PMN) actuators bonded to mirror substrates. These actuators are highly reliable; exhibit little to no hysteresis, aging or creep; and can be closely spaced to correct low and mid-spatial frequency errors in a compact package. In this paper we discuss recent development of adaptive x-ray optics at AOA-Xinetics.
Adaptive x-ray optics development at AOA-Xinetics
NASA Astrophysics Data System (ADS)
Lillie, Charles F.; Pearson, David D.; Cavaco, Jeffrey L.; Plinta, Audrey D.; Wellman, John A.
2012-10-01
Grazing-incidence optics for X-ray applications require extremely smooth surfaces with precise mirror figures to provide well focused beams and small image spot sizes for astronomical telescopes and laboratory test facilities. The required precision has traditionally been achieved by time-consuming grinding and polishing of thick substrates with frequent pauses for precise metrology to check the mirror figure. More recently, substrates with high quality surface finish and figures have become available at reasonable cost, and techniques have been developed to mechanically adjust the figure of these traditionally polished substrates for ground-based applications. The beam-bending techniques currently in use are mechanically complex, however, with little control over mid-spatial frequency errors. For several years AOA-Xinetics has been developing techniques for shaping grazing incidence optics with surface-normal and surface-parallel electrostrictive lead magnesium niobate (PMN) actuators bonded to mirror substrates. These actuators are highly reliable; exhibit little to no hysteresis, aging or creep; and can be closely spaced to correct low and mid-spatial frequency errors in a compact package. In this paper we discuss recent development of adaptive x-ray optics at AOA-Xinetics.
Symbolic Analysis of Concurrent Programs with Polymorphism
NASA Technical Reports Server (NTRS)
Rungta, Neha Shyam
2010-01-01
The current trend of multi-core and multi-processor computing is causing a paradigm shift from inherently sequential to highly concurrent and parallel applications. Certain thread interleavings, data input values, or combinations of both often cause errors in the system. Systematic verification techniques such as explicit state model checking and symbolic execution are extensively used to detect errors in such systems [7, 9]. Explicit state model checking enumerates possible thread schedules and input data values of a program in order to check for errors [3, 9]. To partially mitigate the state space explosion from data input values, symbolic execution techniques substitute data input values with symbolic values [5, 7, 6]. Explicit state model checking and symbolic execution techniques used in conjunction with exhaustive search techniques such as depth-first search are unable to detect errors in medium to large-sized concurrent programs because the number of behaviors caused by data and thread non-determinism is extremely large. We present an overview of abstraction-guided symbolic execution for concurrent programs that detects errors manifested by a combination of thread schedules and data values [8]. The technique generates a set of key program locations relevant in testing the reachability of the target locations. The symbolic execution is then guided along these locations in an attempt to generate a feasible execution path to the error state. This allows the execution to focus in parts of the behavior space more likely to contain an error.
Probabilistic/Fracture-Mechanics Model For Service Life
NASA Technical Reports Server (NTRS)
Watkins, T., Jr.; Annis, C. G., Jr.
1991-01-01
Computer program makes probabilistic estimates of lifetime of engine and components thereof. Developed to fill need for more accurate life-assessment technique that avoids errors in estimated lives and provides for statistical assessment of levels of risk created by engineering decisions in designing system. Implements mathematical model combining techniques of statistics, fatigue, fracture mechanics, nondestructive analysis, life-cycle cost analysis, and management of engine parts. Used to investigate effects of such engine-component life-controlling parameters as return-to-service intervals, stresses, capabilities for nondestructive evaluation, and qualities of materials.
NASA Astrophysics Data System (ADS)
Beavis, Andrew W.; Ward, James W.
2014-03-01
Purpose: In recent years there has been interest in using computer simulation within medical training. The VERT (Virtual Environment for Radiotherapy Training) system is a "flight simulator" for radiation oncology professionals, wherein fundamental concepts, techniques and problematic scenarios can be safely investigated. Methods: The system provides detailed simulations of several Linacs and the ability to display DICOM treatment plans. Patients can be mis-positioned with 'set-up errors' which can be explored visually, dosimetrically and using IGRT. Similarly, a variety of Linac calibration and configuration parameters can be altered manually or randomly via controlled errors in the simulated 3D Linac and its component parts. The implications of these can be investigated by following through a treatment scenario or by using QC devices available within a Physics software module. Results: One resultant exercise is a systematic mis-calibration of 'lateral laser height' by 2 mm. The offset in patient alignment is easily identified using IGRT and then corrected by reference to the 'in-room monitor'. The dosimetric implication is demonstrated to be 0.4% by setting up a dosimetry phantom by the lasers (and ignoring TSD information). Finally, the need for recalibration can be shown with the laser alignment phantom or by reference to the front pointer. Conclusions: The VERT system provides a realistic environment for training and enhancing understanding of radiotherapy concepts and techniques. Linac error conditions can be explored in this context and valuable experience gained in a controlled manner in a compressed period of time.
On decentralized adaptive full-order sliding mode control of multiple UAVs.
Xiang, Xianbo; Liu, Chao; Su, Housheng; Zhang, Qin
2017-11-01
In this study, a novel decentralized adaptive full-order sliding mode control framework is proposed for the robust synchronized formation motion of multiple unmanned aerial vehicles (UAVs) subject to system uncertainty. First, a full-order sliding mode surface is designed in a decentralized manner to incorporate both the individual position tracking error and the synchronized formation error while the UAV group is engaged in building a desired geometric pattern in three-dimensional space. Second, a decentralized virtual plant controller is constructed which allows the embedded low-pass filter to attain the chattering-free property of the sliding mode controller. In addition, a robust adaptive technique is integrated into the decentralized chattering-free sliding mode control design in order to handle unknown bounded uncertainties, without requiring a priori knowledge of bounds on the system uncertainties as assumed in conventional chattering-free control methods. Subsequently, the robustness and stability of the decentralized full-order sliding mode control of multiple UAVs are established. Numerical simulation results illustrate the effectiveness of the proposed control framework in achieving robust 3D formation flight of the multi-UAV system. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
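The interplay of tracking and synchronization errors in a sliding variable can be sketched for two 1-D double-integrator "UAVs" (a toy sketch: the dynamics, gains and boundary-layer saturation below are all invented simplifications, not the paper's full-order chattering-free design):

```python
def sliding_variable(e, de, sync_err, c=2.0, k=1.0):
    """s = de + c*e + k*sync_err: tracking error plus formation coupling."""
    return de + c * e + k * sync_err

def control(e, de, sync_err, eta=5.0):
    """Reaching law driving s to zero; sign() is replaced by a boundary-layer
    saturation to mimic the chattering-free property the abstract claims."""
    s = sliding_variable(e, de, sync_err)
    return -eta * max(-1.0, min(1.0, s / 0.1))

dt = 0.001
pos, vel, ref = [0.5, 0.2], [0.0, 0.0], [0.0, 1.0]   # desired offsets 0 and 1
for _ in range(20000):                                # 20 s, forward Euler
    e = [pos[i] - ref[i] for i in range(2)]
    sync = [e[0] - e[1], e[1] - e[0]]                 # synchronization errors
    for i in range(2):
        vel[i] += control(e[i], vel[i], sync[i]) * dt
        pos[i] += vel[i] * dt
print(all(abs(pos[i] - ref[i]) < 0.05 for i in range(2)))   # True
```

On the sliding manifold s = 0, the sum and difference of the two tracking errors decay exponentially, so both agents reach their formation offsets together rather than independently.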
One way Doppler extractor. Volume 1: Vernier technique
NASA Technical Reports Server (NTRS)
Blasco, R. W.; Klein, S.; Nossen, E. J.; Starner, E. R.; Yanosov, J. A.
1974-01-01
A feasibility analysis, trade-offs, and implementation for a One Way Doppler Extraction system are discussed. A Doppler error analysis shows that quantization error is a primary source of Doppler measurement error. Several competing extraction techniques are compared and a Vernier technique is developed which obtains high Doppler resolution with low speed logic. Parameter trade-offs and sensitivities for the Vernier technique are analyzed, leading to a hardware design configuration. A detailed design, operation, and performance evaluation of the resulting breadboard model is presented which verifies the theoretical performance predictions. Performance tests have verified that the breadboard is capable of extracting Doppler, on an S-band signal, to an accuracy of less than 0.02 Hertz for a one second averaging period. This corresponds to a range rate error of no more than 3 millimeters per second.
NASA Astrophysics Data System (ADS)
GonzáLez, Pablo J.; FernáNdez, José
2011-10-01
Interferometric Synthetic Aperture Radar (InSAR) is a reliable technique for measuring crustal deformation. However, despite its long application to geophysical problems, its error estimation has been largely overlooked. Currently, the largest problem with InSAR is still atmospheric propagation error, which is why multitemporal interferometric techniques using series of interferograms have been successfully developed. However, none of the standard multitemporal interferometric techniques, namely PS and SB (Persistent Scatterers and Small Baselines, respectively), provides an estimate of its precision. Here, we present a method to compute reliable estimates of the precision of the deformation time series. We implement it for the SB multitemporal interferometric technique (a favorable technique for natural terrains, the most usual target of geophysical applications). We describe the method, which uses a properly weighted scheme that allows us to compute estimates for all interferogram pixels, enhanced by a Monte Carlo resampling technique that properly propagates the interferogram errors (variance-covariances) into the unknown parameters (estimated errors of the displacements). We apply the multitemporal error estimation method to Lanzarote Island (Canary Islands), where no active magmatic activity has been reported in the last decades. We detect deformation around Timanfaya volcano (lengthening of the line of sight, i.e., subsidence), where the last eruption occurred in 1730-1736. The deformation closely follows the surface temperature anomalies, indicating that magma crystallization (cooling and contraction) of the 300-year-old shallow magmatic body under Timanfaya volcano is still ongoing.
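The essence of the approach described above, least-squares inversion of an interferogram network followed by Monte Carlo resampling of the observation errors, can be sketched on a toy three-acquisition network (all values invented; the actual method uses full variance-covariance weighting over all pixels):

```python
import random

# Small-baseline network: acquisitions 0, 1, 2; interferogram (i, j)
# observes d[j] - d[i] plus noise. Numbers are invented for illustration.
pairs = [(0, 1), (1, 2), (0, 2)]
truth = [0.0, 2.0, 5.0]            # displacement (mm) relative to epoch 0
sigma = 0.3                        # assumed per-interferogram noise (mm)

def solve(obs):
    """Least squares for (d1, d2) with d0 = 0 and equal weights.
    Normal matrix A^T A = [[2, -1], [-1, 2]]; inverse = [[2, 1], [1, 2]]/3."""
    b1 = obs[0] - obs[1]           # (A^T y)[0]
    b2 = obs[1] + obs[2]           # (A^T y)[1]
    return ((2 * b1 + b2) / 3.0, (b1 + 2 * b2) / 3.0)

random.seed(1)
obs = [truth[j] - truth[i] + random.gauss(0, sigma) for i, j in pairs]
est = solve(obs)

# Monte Carlo resampling: perturb the observations with their assumed
# noise, re-solve, and use the spread as the error bar of each epoch.
samples = [solve([o + random.gauss(0, sigma) for o in obs]) for _ in range(500)]
mean2 = sum(s[1] for s in samples) / 500
std_d2 = (sum((s[1] - mean2) ** 2 for s in samples) / 500) ** 0.5
# std_d2 approximates the 1-sigma error bar of the epoch-2 displacement
```

The resampled spread recovers the analytic standard deviation sigma·sqrt(2/3), which is exactly the kind of per-pixel precision estimate standard SB processing does not report.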
NASA Technical Reports Server (NTRS)
Gomez, Susan F.; Hood, Laura; Panneton, Robert J.; Saunders, Penny E.; Adkins, Antha; Hwu, Shian U.; Lu, Ba P.
1996-01-01
Two computational techniques are used to calculate differential phase errors on Global Positioning System (GPS) carrier wave phase measurements due to certain multipath-producing objects. The first is a rigorous computational electromagnetics technique called the Geometric Theory of Diffraction (GTD); the other is a simple ray tracing method. The GTD technique has been used successfully to predict microwave propagation characteristics by taking into account the dominant multipath components due to reflections and diffractions from scattering structures. The ray tracing technique only solves for reflected signals. The results from the two techniques are compared to GPS differential carrier phase measurements taken on the ground using a GPS receiver in the presence of typical International Space Station (ISS) interference structures. The calculations produced using the GTD code matched the measured results better than those of the ray tracing technique. The agreement was good, demonstrating that phase errors due to multipath can be modeled and characterized using the GTD technique, and to a lesser fidelity using the DECAT technique. However, some discrepancies were observed. Most of the discrepancies occurred at lower elevations and were due either to phase center deviations of the antenna, the background multipath environment, or the receiver itself. Selected measured and predicted differential carrier phase error results are presented and compared. Results indicate that reflections and diffractions caused by the multipath producers located near the GPS antennas can produce phase shifts of greater than 10 mm, and as high as 95 mm. It should be noted that the field test configuration was meant to simulate typical ISS structures, but the two environments are not identical. The GTD and DECAT techniques have been used to calculate phase errors due to multipath on the ISS configuration to quantify the expected attitude determination errors.
Measuring Memory and Attention to Preview in Motion.
Jagacinski, Richard J; Hammond, Gordon M; Rizzi, Emanuele
2017-08-01
Objective Use perceptual-motor responses to perturbations to reveal the spatio-temporal detail of memory for the recent past and attention to preview when participants track a winding roadway. Background Memory of the recently passed roadway can be inferred from feedback control models of the participants' manual movement patterns. Similarly, attention to preview of the upcoming roadway can be inferred from feedforward control models of manual movement patterns. Method Perturbation techniques were used to measure these memory and attention functions. Results In a laboratory tracking task, the bandwidth of lateral roadway deviations was found to primarily influence memory for the past roadway rather than attention to preview. A secondary auditory/verbal/vocal memory task resulted in higher velocity error and acceleration error in the tracking task but did not affect attention to preview. Attention to preview was affected by the frequency pattern of sinusoidal perturbations of the roadway. Conclusion Perturbation techniques permit measurement of the spatio-temporal span of memory and attention to preview that affect tracking a winding roadway. They also provide new ways to explore goal-directed forgetting and spatially distributed attention in the context of movement. More generally, these techniques provide sensitive measures of individual differences in cognitive aspects of action. Application Models of driving behavior and assessment of driving skill may benefit from more detailed spatio-temporal measurement of attention to preview.
NASA Astrophysics Data System (ADS)
Tellaeche, A.; Arana, R.; Ibarguren, A.; Martínez-Otzeta, J. M.
Exhaustive quality control is becoming very important in the globalized world market. One example where quality control becomes critical is percussion cap mass production. These elements must achieve a minimum tolerance deviation in their fabrication. This paper outlines a machine vision development using a 3D camera for the inspection of the whole production of percussion caps. This system presents multiple problems, such as metallic reflections in the percussion caps, high-speed movement of the system, and mechanical errors and irregularities in percussion cap placement. Due to these problems, it is impossible to solve the problem by traditional image processing methods; hence, machine learning algorithms have been tested to provide a feasible classification of the possible errors present in the percussion caps.
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Ratnayake, Nalin A.
2010-01-01
As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will make use of distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. Research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique and validating this technique through simulation and flight test of the X-48B aircraft. The X-48B aircraft is an 8.5 percent-scale hybrid wing body aircraft demonstrator designed by The Boeing Company (Chicago, Illinois, USA), built by Cranfield Aerospace Limited (Cranfield, Bedford, United Kingdom) and flight tested at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California, USA). Based on data from flight test maneuvers performed at Dryden Flight Research Center, aerodynamic parameter estimation was performed using linear regression and output error techniques. An input design technique that uses temporal separation for de-correlation of control surfaces is proposed, and simulation and flight test results are compared with the aerodynamic database. This paper will present a method to determine individual control surface aerodynamic derivatives.
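The benefit of temporally separated inputs for de-correlating control-surface effects can be illustrated with a toy linear model (the coefficients and noise level below are invented; the actual work applied linear regression and output error methods to flight data):

```python
import random
random.seed(0)

def fit_two_coeffs(u1, u2, y):
    """Ordinary least squares for y = c1*u1 + c2*u2 via 2x2 normal equations."""
    s11 = sum(a * a for a in u1)
    s22 = sum(a * a for a in u2)
    s12 = sum(a * b for a, b in zip(u1, u2))
    b1 = sum(a * c for a, c in zip(u1, y))
    b2 = sum(a * c for a, c in zip(u2, y))
    det = s11 * s22 - s12 * s12        # near zero when inputs are correlated
    return ((s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det)

C1, C2, n = 0.8, -0.3, 200             # "true" control derivatives (invented)
# Temporally separated design: surface 2 moves only after surface 1 is done,
# so the two input histories are uncorrelated and the regression is well posed.
u1 = [1.0] * (n // 2) + [0.0] * (n // 2)
u2 = [0.0] * (n // 2) + [-1.0] * (n // 2)
y = [C1 * a + C2 * b + random.gauss(0, 0.05) for a, b in zip(u1, u2)]
c1, c2 = fit_two_coeffs(u1, u2, y)
print(abs(c1 - C1) < 0.05 and abs(c2 - C2) < 0.05)   # True
```

Had both surfaces been driven simultaneously with identical inputs, det would vanish and the individual derivatives would be unidentifiable — the correlation problem the paper's input design addresses.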
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to the Japanese Yen swaps market and the US dollar yield market.
Hardware implementation of Lorenz circuit systems for secure chaotic communication applications.
Chen, Hsin-Chieh; Liau, Ben-Yi; Hou, Yi-You
2013-02-18
This paper presents the synchronization between the master and slave Lorenz chaotic systems by slide mode controller (SMC)-based technique. A proportional-integral (PI) switching surface is proposed to simplify the task of assigning the performance of the closed-loop error system in sliding mode. Then, extending the concept of equivalent control and using some basic electronic components, a secure communication system is constructed. Experimental results show the feasibility of synchronizing two Lorenz circuits via the proposed SMC.
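A toy version of master-slave Lorenz synchronization can be sketched as follows; for brevity the controller here is plain error feedback with a saturated switching term rather than the paper's PI-surface SMC, and all gains are invented:

```python
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Classic Lorenz vector field."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def sat(v, width=0.05):
    """Saturation in place of sign(): a standard chattering-reduction device."""
    return max(-1.0, min(1.0, v / width))

dt = 0.001
master = [1.0, 1.0, 1.0]
slave = [5.0, -4.0, 20.0]                 # starts far from the master
for _ in range(20000):                    # 20 time units, forward Euler
    fm, fs = lorenz(master), lorenz(slave)
    for i in range(3):
        e = slave[i] - master[i]
        u = -60.0 * e - 5.0 * sat(e)      # feedback plus switching term
        master[i] += fm[i] * dt
        slave[i] += (fs[i] + u) * dt
err = sum((slave[i] - master[i]) ** 2 for i in range(3)) ** 0.5
print(err < 0.01)                         # synchronized: True
```

Once the error is driven to zero, both circuits follow the same chaotic trajectory, which is the property the secure-communication scheme relies on.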
Simulation techniques for estimating error in the classification of normal patterns
NASA Technical Reports Server (NTRS)
Whitsitt, S. J.; Landgrebe, D. A.
1974-01-01
Methods of efficiently generating and classifying samples with specified multivariate normal distributions are discussed. Conservative confidence tables for sample sizes are given for selective sampling. Simulation results are compared with classified training data. Techniques for comparing error and separability measures for two normal patterns are investigated and used to display the relationship between the error and the Chernoff bound.
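The comparison the abstract describes — simulated classification error for normal patterns against the Chernoff bound — reduces, in the simplest univariate equal-variance case, to a few lines (parameters invented for illustration):

```python
import math, random
random.seed(7)

m0, m1, s = 0.0, 2.0, 1.0            # two unit-variance normal classes
threshold = (m0 + m1) / 2.0          # Bayes boundary for equal priors

# Simulated error rate: draw labelled samples, classify by the threshold.
n = 100000
errors = 0
for _ in range(n):
    if random.random() < 0.5:
        x = random.gauss(m0, s); errors += x > threshold
    else:
        x = random.gauss(m1, s); errors += x <= threshold
simulated = errors / n

# Chernoff (Bhattacharyya, s = 1/2) bound for equal covariances:
# P_e <= 0.5 * exp(-(m1 - m0)^2 / (8 * s^2))
bound = 0.5 * math.exp(-((m1 - m0) ** 2) / (8 * s * s))
print(simulated <= bound)            # True: the bound holds
```

Here the true Bayes error is about 0.159 while the bound is about 0.303, illustrating how conservative the Chernoff-type bound is — the gap the paper's comparisons display.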
ERIC Educational Resources Information Center
Alamri, Bushra; Fawzi, Hala Hassan
2016-01-01
Error correction has been one of the core areas in the field of English language teaching. It is "seen as a form of feedback given to learners on their language use" (Amara, 2015). Many studies investigated the use of different techniques to correct students' oral errors. However, only a few focused on students' preferences and attitude…
NASA Astrophysics Data System (ADS)
Zhang, F. H.; Wang, S. F.; An, C. H.; Wang, J.; Xu, Q.
2017-06-01
Large-aperture potassium dihydrogen phosphate (KDP) crystals are widely used in the laser path of inertial confinement fusion (ICF) systems. The most common method of manufacturing half-meter KDP crystals is ultra-precision fly cutting. When processing KDP crystals by ultra-precision fly cutting, the dynamic characteristics of the fly cutting machine and fluctuations in the fly cutting environment are translated into surface errors at different spatial frequency bands. These machining errors should be suppressed effectively to guarantee that KDP crystals meet the full-band machining accuracy specified in the evaluation index. In this study, the anisotropic machinability of KDP crystals and the causes of typical surface errors in ultra-precision fly cutting of the material are investigated. The structures of the fly cutting machine and existing processing parameters are optimized to improve the machined surface quality. The findings are theoretically and practically important in the development of high-energy laser systems in China.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Able, Charles M., E-mail: cable@wfubmc.edu; Bright, Megan; Frizzell, Bart
Purpose: Statistical process control (SPC) is a quality control method used to ensure that a process is well controlled and operates with little variation. This study determined whether SPC was a viable technique for evaluating the proper operation of a high-dose-rate (HDR) brachytherapy treatment delivery system. Methods and Materials: A surrogate prostate patient was developed using Vyse ordnance gelatin. A total of 10 metal oxide semiconductor field-effect transistors (MOSFETs) were placed from prostate base to apex. Computed tomography guidance was used to accurately position the first detector in each train at the base. The plan consisted of 12 needles with 129 dwell positions delivering a prescribed peripheral dose of 200 cGy. Sixteen accurate treatment trials were delivered as planned. Subsequently, a number of treatments were delivered with errors introduced, including wrong patient, wrong source calibration, wrong connection sequence, single needle displaced inferiorly 5 mm, and entire implant displaced 2 mm and 4 mm inferiorly. Two process behavior charts (PBC), an individual and a moving range chart, were developed for each dosimeter location. Results: There were 4 false positives resulting from 160 measurements from 16 accurately delivered treatments. For the inaccurately delivered treatments, the PBC indicated that measurements made at the periphery and apex (regions of high-dose gradient) were much more sensitive to treatment delivery errors. All errors introduced were correctly identified by either the individual or the moving range PBC in the apex region. Measurements at the urethra and base were less sensitive to errors. Conclusions: SPC is a viable method for assessing the quality of HDR treatment delivery. Further development is necessary to determine the most effective dose sampling, to ensure reproducible evaluation of treatment delivery accuracy.
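The individual and moving-range charts used in the study can be sketched as follows (the dose readings are invented; only the standard I-MR control-limit constants 2.66 and 3.267 are real SPC factors):

```python
def imr_limits(baseline):
    """Control limits for an I-MR chart pair. The constant 2.66 = 3/d2
    (d2 = 1.128 for moving ranges of size 2) and the MR upper limit factor
    3.267 (D4 for n = 2) are the standard SPC chart constants."""
    mean = sum(baseline) / len(baseline)
    mrs = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    mr_bar = sum(mrs) / len(mrs)
    return (mean - 2.66 * mr_bar, mean + 2.66 * mr_bar), 3.267 * mr_bar

# 16 accurate deliveries measured at one MOSFET location (cGy, invented)
baseline = [201, 199, 202, 198, 200, 201, 199, 200,
            202, 198, 201, 200, 199, 201, 200, 199]
(lcl, ucl), mr_ucl = imr_limits(baseline)

def out_of_control(reading, last_reading):
    """Flag a delivery outside the I limits or with an excessive range."""
    return reading < lcl or reading > ucl or abs(reading - last_reading) > mr_ucl

print(out_of_control(188, baseline[-1]))   # displaced implant -> flagged: True
print(out_of_control(200, baseline[-1]))   # normal delivery -> False
```

The baseline treatments set the limits; any subsequent reading outside them (like the 188 cGy reading above) signals a delivery error, mirroring how the study flagged displaced implants at high-gradient dosimeter positions.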
NASA Astrophysics Data System (ADS)
Zhu, Jing; Wang, Xingshu; Wang, Jun; Dai, Dongkai; Xiong, Hao
2016-10-01
Former studies have proved that the attitude error in a single-axis rotation INS/GPS integrated system tracks the high frequency component of the deflections of the vertical (DOV) with a fixed delay and tracking error. This paper analyses the influence of the nominal process noise covariance matrix Q on the tracking error as well as on the response delay, and proposes a Q-adjusting technique to obtain an attitude error that tracks the DOV more closely. Simulation results show that different settings of Q lead to different response delays and tracking errors; there exists an optimal Q which leads to a minimum tracking error and a comparatively short response delay; and for systems of different accuracy, different Q-adjusting strategies should be adopted. In this way, the accuracy of DOV estimation using the attitude error as the observation can be improved. According to the simulation results, the DOV estimation accuracy after applying the Q-adjusting technique is improved by approximately 23% and 33%, respectively, compared to that of the Earth model EGM2008 and the direct attitude difference method.
Use of graph theory measures to identify errors in record linkage.
Randall, Sean M; Boyd, James H; Ferrante, Anna M; Bauer, Jacqueline K; Semmens, James B
2014-07-01
Ensuring high linkage quality is important in many record linkage applications. Current methods for ensuring quality are manual and resource intensive. This paper seeks to determine the effectiveness of graph theory techniques in identifying record linkage errors. A range of graph theory techniques was applied to two linked datasets, with known truth sets. The ability of graph theory techniques to identify groups containing errors was compared to a widely used threshold setting technique. This methodology shows promise; however, further investigations into graph theory techniques are required. The development of more efficient and effective methods of improving linkage quality will result in higher quality datasets that can be delivered to researchers in shorter timeframes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
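One simple graph-theoretic check of this kind (a sketch of the general idea, not the paper's specific measures) treats records as nodes and accepted links as edges, then flags groups that are not fully linked, since a sparse component suggests a chain of erroneous links joining distinct people:

```python
from collections import defaultdict

def components(edges):
    """Connected components of the linkage graph via depth-first search."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def suspicious(comp, edges):
    """A complete (fully linked) group is internally consistent; a
    sparse one suggests a linkage error chaining records together."""
    k = len(comp)
    internal = sum(1 for a, b in edges if a in comp and b in comp)
    return internal < k * (k - 1) // 2   # missing links -> review

links = [(1, 2), (2, 3), (1, 3),   # consistent triangle
         (4, 5), (5, 6)]           # chain: 4 and 6 never matched directly
for comp in components(links):
    print(sorted(comp), suspicious(comp, links))
```

More refined graph measures (density, bridges, centrality) follow the same pattern: compute a per-group statistic and route anomalous groups to clerical review.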
Considerations for pattern placement error correction toward 5nm node
NASA Astrophysics Data System (ADS)
Yaegashi, Hidetami; Oyama, Kenichi; Hara, Arisa; Natori, Sakurako; Yamauchi, Shohei; Yamato, Masatoshi; Koike, Kyohei; Maslow, Mark John; Timoshkov, Vadim; Kiers, Ton; Di Lorenzo, Paolo; Fonseca, Carlos
2017-03-01
Multi-patterning has been widely adopted in high-volume manufacturing as a 193 nm immersion (193-i) extension, and it has become a realistic solution for nanometer-order scaling. In fact, it is a key technology for single-directional (1D) layout design [1] in logic devices, and it has become a major option for further scaling in SAQP. The requirement for patterning fidelity control is becoming ever more severe; stochastic fluctuation, as well as line edge roughness (LER), must be observed at the microscopic scale. In our previous work, such atomic-order controllability was shown to be viable with a complementary etching and deposition technique [2]. Overlay issues form a major portion of yield management; therefore, an entire solution is keenly needed, including alignment accuracy on the scanner and detectability on overlay measurement instruments. As edge placement error (EPE) is defined as the gap between the design pattern and the contour of the actual pattern edge, pattern registration at the single-process level must also be considered. Complementary patterning to fabricate 1D layouts does mitigate some process restrictions; however, the multiple process steps, typified by LELE with 193-i, are a burden on yield management and affordability. Recent progress in EUV technology is remarkable, and it is a major potential solution to these complicated technical issues. EUV offers a robust resolution limit and is definitely a strong scaling driver for process simplification. On the other hand, its stochastic variation, such as shot noise due to limited light source power, must be resolved with additional complementary techniques. In this work, we examined nanometer-order CD and profile control of EUV resist patterns and report excellent results.
Behavioural and neural basis of anomalous motor learning in children with autism.
Marko, Mollie K; Crocetti, Deana; Hulst, Thomas; Donchin, Opher; Shadmehr, Reza; Mostofsky, Stewart H
2015-03-01
Autism spectrum disorder is a developmental disorder characterized by deficits in social and communication skills and repetitive and stereotyped interests and behaviours. Although not part of the diagnostic criteria, individuals with autism experience a host of motor impairments, potentially due to abnormalities in how they learn motor control throughout development. Here, we used behavioural techniques to quantify motor learning in autism spectrum disorder, and structural brain imaging to investigate the neural basis of that learning in the cerebellum. Twenty children with autism spectrum disorder and 20 typically developing control subjects, aged 8-12, made reaching movements while holding the handle of a robotic manipulandum. In random trials the reach was perturbed, resulting in errors that were sensed through vision and proprioception. The brain learned from these errors and altered the motor commands on the subsequent reach. We measured learning from error as a function of the sensory modality of that error, and found that children with autism spectrum disorder outperformed typically developing children when learning from errors that were sensed through proprioception, but underperformed typically developing children when learning from errors that were sensed through vision. Previous work had shown that this learning depends on the integrity of a region in the anterior cerebellum. Here we found that the anterior cerebellum, extending into lobule VI, and parts of lobule VIII were smaller than normal in children with autism spectrum disorder, with a volume that was predicted by the pattern of learning from visual and proprioceptive errors. We suggest that the abnormal patterns of motor learning in children with autism spectrum disorder, showing an increased sensitivity to proprioceptive error and a decreased sensitivity to visual error, may be associated with abnormalities in the cerebellum. © The Author (2015). 
Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
MRI-guided prostate focal laser ablation therapy using a mechatronic needle guidance system
NASA Astrophysics Data System (ADS)
Cepek, Jeremy; Lindner, Uri; Ghai, Sangeet; Davidson, Sean R. H.; Trachtenberg, John; Fenster, Aaron
2014-03-01
Focal therapy of localized prostate cancer is receiving increased attention due to its potential for providing effective cancer control in select patients with minimal treatment-related side effects. Magnetic resonance imaging (MRI)-guided focal laser ablation (FLA) therapy is an attractive modality for such an approach. In FLA therapy, accurate placement of laser fibers is critical to ensuring that the full target volume is ablated. In practice, error in needle placement is invariably present due to pre- to intra-procedure image registration error, needle deflection, prostate motion, and variability in interventionalist skill. In addition, some of these sources of error are difficult to control, since the available workspace and patient positions are restricted within a clinical MRI bore. In an attempt to take full advantage of the utility of intraprocedure MRI, while minimizing error in needle placement, we developed an MRI-compatible mechatronic system for guiding needles to the prostate for FLA therapy. The system has been used to place interstitial catheters for MRI-guided FLA therapy in eight subjects in an ongoing Phase I/II clinical trial. Data from these cases has provided quantification of the level of uncertainty in needle placement error. To relate needle placement error to clinical outcome, we developed a model for predicting the probability of achieving complete focal target ablation for a family of parameterized treatment plans. Results from this work have enabled the specification of evidence-based selection criteria for the maximum target size that can be confidently ablated using this technique, and quantify the benefit that may be gained with improvements in needle placement accuracy.
Adaptive Flight Control Design with Optimal Control Modification on an F-18 Aircraft Model
NASA Technical Reports Server (NTRS)
Burken, John J.; Nguyen, Nhan T.; Griffin, Brian J.
2010-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly; however, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations of standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient robustness. A damping term (ν) is added in the modification to increase damping as needed. Simulations were conducted on a damaged F-18 aircraft (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) with both the standard baseline dynamic inversion controller and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model.
NASA Technical Reports Server (NTRS)
Yen, John; Wang, Haojin; Daugherity, Walter C.
1992-01-01
Fuzzy logic controllers have some often-cited advantages over conventional techniques such as PID control, including easier implementation, accommodation to natural language, and the ability to cover a wider range of operating conditions. One major obstacle that hinders the broader application of fuzzy logic controllers is the lack of a systematic way to develop and modify their rules; as a result the creation and modification of fuzzy rules often depends on trial and error or pure experimentation. One of the proposed approaches to address this issue is a self-learning fuzzy logic controller (SFLC) that uses reinforcement learning techniques to learn the desirability of states and to adjust the consequent part of its fuzzy control rules accordingly. Due to the different dynamics of the controlled processes, the performance of a self-learning fuzzy controller is highly contingent on its design. The design issue has not received sufficient attention. The issues related to the design of a SFLC for application to a petrochemical process are discussed, and its performance is compared with that of a PID and a self-tuning fuzzy logic controller.
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
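A minimal sketch of the idea, using a hand-rolled interval type rather than INTLAB: each measured quantity carries its uncertainty as guaranteed [lo, hi] bounds, and ordinary arithmetic on the intervals propagates the error bounds automatically through any formula. The plate dimensions below are hypothetical.

```python
class Interval:
    """Minimal interval type for automatic error analysis."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))
    def __repr__(self):
        return f"[{self.lo:g}, {self.hi:g}]"

# Area of a plate measured as 10 +/- 0.1 by 5 +/- 0.1 (hypothetical)
w = Interval(9.9, 10.1)
h = Interval(4.9, 5.1)
print(w * h)   # an interval guaranteed to enclose every possible area
```

Unlike first-order error propagation, the resulting interval is a rigorous enclosure, at the cost of sometimes being conservative for complicated formulas with repeated variables.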
McEvoy, Peter M; Graville, Rachel; Hayes, Sarra; Kane, Robert T; Foster, Jonathan K
2017-09-01
The first aim of this study was to compare attention manipulation techniques deriving from metacognitive therapy (the Attention Training Technique; ATT) and mindfulness-based approaches (Mindfulness-Based Progressive Muscle Relaxation, MB-PMR) to a thought wandering control (TWC) condition, in terms of their impact on anxiety and four mechanisms: distancing, present-focused attention, uncontrollability and dangerousness metacognitive beliefs, and cognitive flexibility (Stroop task). The second aim was to test indirect effects of the techniques on anxiety via the mechanism measures. High trait anxious participants (N = 81, M age = 23.60, SD age = 7.66, 80% female) were randomized to receive ATT, MB-PMR, or the TWC condition. Measures of cognitive and somatic anxiety, distancing, present-focused attention, metacognitive beliefs, and cognitive flexibility were administered before or after the attention manipulation task. Compared to the TWC group, ATT and MB-PMR were associated with greater changes on cognitive (but not somatic) anxiety, present-focused attention, metacognitive beliefs, and uncorrected errors for threat-related words on the Stroop task. The pattern of means was similar for distancing, but this did not reach statistical significance, and Stroop speed increased equally for all conditions. Indirect effects models revealed significant effects of condition on state anxiety via distancing, metacognitive beliefs, and present-focused attention, but not via Stroop errors. ATT and MB-PMR were associated with changes on anxiety and the mechanism measures, suggesting that the mechanisms of change may be more similar than different across these techniques. Copyright © 2017. Published by Elsevier Ltd.
Indaram, Maanasa; VanderVeen, Deborah K
2018-01-01
Advances in surgical techniques allow implantation of intraocular lenses (IOL) with cataract extraction, even in young children. However, there are several challenges unique to the pediatric population that result in greater degrees of postoperative refractive error compared to adults. Literature review of the techniques and outcomes of pediatric cataract surgery with IOL implantation. Pediatric cataract surgery is associated with several sources of postoperative refractive error. These include planned refractive error based on age or fellow eye status, loss of accommodation, and unexpected refractive errors due to inaccuracies in biometry technique, use of IOL power formulas based on adult normative values, and late refractive changes due to unpredictable eye growth. Several factors can preclude the achievement of optimal refractive status following pediatric cataract extraction with IOL implantation. There is a need for new technology to reduce postoperative refractive surprises and address refractive adjustment in a growing eye.
Empirical performance of interpolation techniques in risk-neutral density (RND) estimation
NASA Astrophysics Data System (ADS)
Bahaludin, H.; Abdullah, M. H.
2017-03-01
The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial, and smoothing spline. The results for the LOOCV pricing error show that interpolation using a fourth-order polynomial provides the best fit to option prices, as it has the lowest error value.
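The LOOCV selection step can be sketched as follows. Each observation is held out in turn, the curve is refit on the rest, and the squared error at the held-out point is accumulated; the interpolation scheme with the lowest average error wins. The smile data below are synthetic, not the study's option prices, and only the two polynomial candidates are compared.

```python
import numpy as np

def loocv_error(x, y, degree):
    """Leave-one-out cross-validated fitting error for a polynomial
    interpolation of the option-implied curve (toy data below)."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        coef = np.polyfit(x[mask], y[mask], degree)
        errs.append((np.polyval(coef, x[i]) - y[i]) ** 2)
    return float(np.mean(errs))

# Hypothetical implied-volatility smile over log-moneyness
x = np.linspace(-0.2, 0.2, 15)
y = 0.2 + 0.5 * x**2 + 5.0 * x**4

best = min([2, 4], key=lambda d: loocv_error(x, y, d))
print("selected degree:", best)
```

Because the synthetic smile has quartic curvature, the fourth-order polynomial attains the lower LOOCV error, mirroring the study's finding.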
Online measurement of bead geometry in GMAW-based additive manufacturing using passive vision
NASA Astrophysics Data System (ADS)
Xiong, Jun; Zhang, Guangjun
2013-11-01
Additive manufacturing based on gas metal arc welding is an advanced technique for depositing fully dense components at low cost. Despite this fact, techniques to achieve accurate control and automation of the process have not yet been fully developed. Online measurement of the deposited bead geometry is a key problem for reliable control. In this work a passive vision-sensing system, comprising two cameras and composite filtering techniques, was proposed for real-time detection of the bead height and width during deposition of thin walls. The nozzle-to-top-surface distance was monitored to eliminate accumulated height errors during the multi-layer deposition process. Various image processing algorithms were applied and discussed for extracting feature parameters. A calibration procedure was presented for the monitoring system. Validation experiments confirmed the effectiveness of the online measurement system for bead geometry in layered additive manufacturing.
Yeom, Jeong Seon; Jung, Bang Chul; Jin, Hu
2018-01-01
In this paper, we propose a novel low-complexity multi-user superposition transmission (MUST) technique for 5G downlink networks, which allows multiple cell-edge users to be multiplexed with a single cell-center user. We call the proposed technique diversity-controlled MUST technique since the cell-center user enjoys the frequency diversity effect via signal repetition over multiple orthogonal frequency division multiplexing (OFDM) sub-carriers. We assume that a base station is equipped with a single antenna but users are equipped with multiple antennas. In addition, we assume that the quadrature phase shift keying (QPSK) modulation is used for users. We mathematically analyze the bit error rate (BER) of both cell-edge users and cell-center users, which is the first theoretical result in the literature to the best of our knowledge. The mathematical analysis is validated through extensive link-level simulations. PMID:29439413
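A rough illustration of the diversity effect at the heart of this technique: the sketch below simulates QPSK over independent Rayleigh-faded OFDM sub-carriers, with and without signal repetition combined by maximal-ratio combining (MRC) at the receiver. All parameters are illustrative, and this is only the cell-center user's repetition/diversity path, not the full MUST superposition scheme analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
snr = 10 ** (10 / 10)                 # 10 dB average SNR per sub-carrier

# Gray-mapped QPSK: one bit per I/Q rail
bits = rng.integers(0, 2, size=(n, 2))
syms = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

def ber(reps):
    """BER with `reps`-fold repetition over independent Rayleigh
    sub-carriers and maximal-ratio combining at the receiver."""
    h = (rng.standard_normal((n, reps)) +
         1j * rng.standard_normal((n, reps))) / np.sqrt(2)
    noise = (rng.standard_normal((n, reps)) +
             1j * rng.standard_normal((n, reps))) / np.sqrt(2 * snr)
    r = h * syms[:, None] + noise
    z = (np.conj(h) * r).sum(axis=1)          # MRC combining
    est = np.stack([z.real < 0, z.imag < 0], axis=1)
    return float(np.mean(est != bits))

print(ber(2) < ber(1))   # frequency diversity improves the BER
```

Repetition over two sub-carriers yields diversity order two, so the simulated BER drops markedly relative to single-carrier transmission at the same per-carrier SNR.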
Generation of digitized microfluidic filling flow by vent control.
Yoon, Junghyo; Lee, Eundoo; Kim, Jaehoon; Han, Sewoon; Chung, Seok
2017-06-15
Quantitative microfluidic point-of-care testing has been translated into clinical applications to support a prompt decision on patient treatment. A nanointerstice-driven filling technique has been developed to realize the fast and robust filling of microfluidic channels with liquid samples, but it has failed to provide a consistent filling time owing to the wide variation in liquid viscosity, resulting in an increase in quantification errors. There is a strong demand for simple and quick flow control to ensure accurate quantification, without a serious increase in system complexity. A new control mechanism employing two-beam refraction and one solenoid valve was developed and found to successfully generate digitized filling flow, completely free from errors due to changes in viscosity. The validity of digitized filling flow was evaluated by the immunoassay, using liquids with a wide range of viscosity. This digitized microfluidic filling flow is a novel approach that could be applied in conventional microfluidic point-of-care testing. Copyright © 2016 Elsevier B.V. All rights reserved.
Computer Controlled Optical Surfacing With Orbital Tool Motion
NASA Astrophysics Data System (ADS)
Jones, Robert A.
1985-10-01
Asymmetric aspheric optical surfaces are very difficult to fabricate using classical techniques and laps the same size as the workpiece. Opticians can produce such surfaces by grinding and polishing, using small laps with orbital tool motion. However, hand correction is a time consuming process unsuitable for large optical elements. Itek has developed Computer Controlled Optical Surfacing (CCOS) for fabricating such aspheric optics. Automated equipment moves a nonrotating orbiting tool slowly over the workpiece surface. The process corrects low frequency surface errors by figuring. The velocity of the tool assembly over the workpiece surface is purposely varied. Since the amount of material removal is proportional to the polishing or grinding time, accurate control over material removal is achieved. The removal of middle and high frequency surface errors is accomplished by pad smoothing. For a soft pad material, the pad will compress to fit the workpiece surface producing greater pressure and more removal at the surface high areas. A harder pad will ride on only the high regions resulting in removal only for those locations.
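The figuring principle, removal proportional to polishing time, can be sketched directly: given a measured figure-error map and a tool removal rate, the required dwell time at each point follows, and the tool traverse velocity is set inversely to it. The error values and removal rate below are hypothetical, not Itek's process data.

```python
def dwell_times(error_nm, removal_rate_nm_s):
    """Dwell time at each surface point so that material removed
    (rate x time) matches the measured figure error."""
    return [e / removal_rate_nm_s for e in error_nm]

def tool_velocity(step_mm, dwell_s):
    """Slower traverse where more removal is needed (v = step / t)."""
    return [step_mm / t for t in dwell_s]

# Hypothetical low-frequency figure error along one raster line (nm)
error = [40.0, 80.0, 120.0, 80.0, 40.0]
t = dwell_times(error, removal_rate_nm_s=2.0)   # seconds per point
print(tool_velocity(1.0, t))                     # mm/s, slowest at the peak
```

In a real CCOS run the removal footprint of the orbiting tool must also be deconvolved from the error map before the dwell-time solve; the sketch treats the footprint as a point for clarity.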
Active vibration control of a single-stage spur gearbox
NASA Astrophysics Data System (ADS)
Dogruer, C. U.; Pirsoltan, Abbas K.
2017-02-01
The dynamic transmission error between the driving and driven gears of a gear mechanism with a torsional mode is induced by periodic time-varying mesh stiffness. In this study, to minimize the adverse effect of this time-varying mesh stiffness, a nonlinear controller which adjusts the torque acting on the driving gear is proposed. The basic approach is to modulate the input torque so that it compensates for the periodic change in mesh stiffness. It is assumed that the gears are assembled with high precision, and the gearbox is analyzed with finite element software to calculate the mesh stiffness curve. Thus, the change in mesh stiffness, which is inherently nonlinear, can be predicted and canceled by a feed-forward loop. The remaining linear dynamics are then controlled by pole-placement techniques. Under these premises, it is claimed that any acceleration and velocity profile of the input shaft can be tracked accurately. Thereby, the dynamic transmission error is kept to the minimum possible value, and a spur gearbox that does not emit much noise and vibration is designed.
The interval testing procedure: A general framework for inference in functional data analysis.
Pini, Alessia; Vantini, Simone
2016-09-01
We introduce in this work the Interval Testing Procedure (ITP), a novel inferential technique for functional data. The procedure can be used to test different functional hypotheses, e.g., distributional equality between two or more functional populations, equality of mean function of a functional population to a reference. ITP involves three steps: (i) the representation of data on a (possibly high-dimensional) functional basis; (ii) the test of each possible set of consecutive basis coefficients; (iii) the computation of the adjusted p-values associated to each basis component, by means of a new strategy here proposed. We define a new type of error control, the interval-wise control of the family wise error rate, particularly suited for functional data. We show that ITP is provided with such a control. A simulation study comparing ITP with other testing procedures is reported. ITP is then applied to the analysis of hemodynamical features involved with cerebral aneurysm pathology. ITP is implemented in the fdatest R package. © 2016, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Altug, Erdinc
Our work proposes a vision-based stabilization and output tracking control method for a model helicopter. This is part of our effort to produce a rotorcraft-based autonomous Unmanned Aerial Vehicle (UAV). Due to the desired maneuvering ability, a four-rotor helicopter has been chosen as the testbed. In previous research on flying vehicles, vision is usually used as a secondary sensor. Unlike previous research, our goal is to use visual feedback as the main sensor, which is responsible not only for detecting where the ground objects are but also for helicopter localization. A novel two-camera method has been introduced for estimating the full six-degrees-of-freedom (DOF) pose of the helicopter. This two-camera system consists of a pan-tilt ground camera and an onboard camera. The pose estimation algorithm is compared through simulation to other methods, such as the four-point and stereo methods, and is shown to be less sensitive to feature detection errors. Helicopters are highly unstable flying vehicles; although this is good for agility, it makes control harder. To build an autonomous helicopter, two methods of control are studied---one using a series of mode-based, feedback linearizing controllers and the other using a back-stepping control law. Various simulations with 2D and 3D models demonstrate the implementation of these controllers. We also show global convergence of the 3D quadrotor controller even with large calibration errors or the presence of large errors on the image plane. Finally, we present initial flight experiments in which the proposed pose estimation algorithm and nonlinear control techniques have been implemented on a remote-controlled helicopter. The helicopter was restricted with a tether to vertical and yaw motions and limited x and y translations.
Error protection capability of space shuttle data bus designs
NASA Technical Reports Server (NTRS)
Proch, G. E.
1974-01-01
Error protection as a means of assuring the reliability of digital data communications is discussed. The need for error protection on the space shuttle data bus system has been recognized and specified as a hardware requirement. The error protection techniques of particular concern are those designed into the Shuttle Main Engine Interface (MEI) and the Orbiter Multiplex Interface Adapter (MIA). The techniques and circuit design details proposed for this hardware are analyzed in this report to determine their error protection capability. The capability is calculated in terms of the probability of an undetected word error. Calculated results are reported for a noise environment that ranges from the nominal noise level stated in the hardware specifications to burst levels which may occur in extreme or anomalous conditions.
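The undetected-word-error probability can be illustrated with a deliberately simplified detection model: a single parity bit over a hypothetical 29-bit word (the MEI/MIA designs analyzed in the report are more elaborate). A single parity check catches any odd number of bit errors, so a word is accepted in error exactly when an even, nonzero number of bits flip; summing the binomial terms gives the figure of merit as a function of the bit error probability.

```python
from math import comb

def p_undetected_parity(n, p):
    """Probability that a noisy n-bit word passes a single parity
    check: an even, nonzero number of bit errors goes undetected."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(2, n + 1, 2))

# Hypothetical 28-bit word + parity, swept across noise levels
for p in (1e-6, 1e-4, 1e-2):
    print(f"p_bit={p:g}  P_undetected={p_undetected_parity(29, p):.3e}")
```

At low bit error rates the k=2 term dominates, so the undetected-error probability scales as roughly C(29,2) p^2; stronger codes push the first undetectable pattern to higher weights and shrink this figure accordingly.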
Fuzzy-Estimation Control for Improvement Microwave Connection for Iraq Electrical Grid
NASA Astrophysics Data System (ADS)
Hoomod, Haider K.; Radi, Mohammed
2018-05-01
The demand for broadband wireless services (internet, radio broadcast, TV, etc.) is increasing day by day, while the communication channels that carry them are limited and error-prone, so it is necessary to exploit the good part of the available bandwidth. In this paper, we propose an estimation technique that estimates channel availability at the current and next moment, so that the error in the bandwidth channel is known and data transfer through the channel can be controlled. The proposed estimation is based on a combination of the least mean squares (LMS) algorithm, the standard Kalman filter, and a modified Kalman filter. The estimated channel error is used as a control parameter in fuzzy rules to adjust the rate and size of data sent through the network channel, and to rearrange the priorities of the buffered data (workstation control parameters, texts, phone calls, images, and camera video) in the worst cases of channel error. The proposed system is designed to manage data communication through the channels connecting the Iraqi electrical grid stations. The results show that the modified Kalman filter gives the best performance in time and noise estimation (0.1109 at 5% noise estimation to 0.3211 at 90% noise estimation), and that the packet loss rate is reduced by ratios from 35% to 385%.
NASA Astrophysics Data System (ADS)
Luo, Jianjun; Wei, Caisheng; Dai, Honghua; Yuan, Jianping
2018-03-01
This paper focuses on robust adaptive control for a class of uncertain nonlinear systems subject to input saturation and external disturbance, with guaranteed predefined tracking performance. To reduce the limitations of the classical predefined performance control method in the presence of unknown initial tracking errors, a novel predefined performance function with time-varying design parameters is first proposed. Then, aiming at reducing the complexity of nonlinear approximations, only two least-squares support vector machine (LS-SVM) based approximators with two design parameters are required, through a norm-form transformation of the original system. Further, a novel LS-SVM-based adaptive constrained control scheme is developed under the time-varying predefined performance using the backstepping technique. Therein, to avoid the tedious analysis and repeated differentiation of virtual control laws in the backstepping technique, a simple and robust finite-time-convergent differentiator is devised to extract only the first-order derivative at each step in the presence of external disturbance. In this sense, the inherent demerit of the backstepping technique, the "explosion of terms" brought about by the recursive virtual controller design, is conquered. Moreover, an auxiliary system is designed to compensate for control saturation. Finally, three groups of numerical simulations are employed to validate the effectiveness of the newly developed differentiator and the proposed adaptive constrained control scheme.
NASA Astrophysics Data System (ADS)
Xia, Huanxiong; Xiang, Dong; Yang, Wang; Mou, Peng
2014-12-01
The low-temperature plasma technique is one of the critical techniques in the IC manufacturing process, used in steps such as etching and thin-film deposition, and plasma uniformity greatly impacts process quality, so designing for plasma uniformity control is very important but difficult. It is hard to finely and flexibly regulate the spatial distribution of the plasma in the chamber by controlling the discharge parameters or modifying the structure in zero-dimensional space; such measures can only adjust the overall level of the process factors. In view of this problem, a segmented non-uniform dielectric module design solution is proposed for regulating the plasma profile in a CCP chamber. The solution achieves refined and flexible regulation of the plasma profile in the radial direction by configuring the relative permittivity and the width of each segment. To solve this design problem, a novel simulation-based auto-design approach is proposed, which can automatically design a positional sequence with multiple independent variables so that the output target profile of the parameterized simulation model approximates the one the user presets. This approach employs the idea of a quasi-closed-loop control system and works in an iterative mode. It starts from initial values of the design variable sequences and predicts better sequences via feedback of the profile error between the output target profile and the expected one. It stops only when the profile error is narrowed to within the preset tolerance.
Validating a UAV artificial intelligence control system using an autonomous test case generator
NASA Astrophysics Data System (ADS)
Straub, Jeremy; Huber, Justin
2013-05-01
The validation of safety-critical applications, such as autonomous UAV operations in an environment which may include human actors, is an ill-posed problem. To build confidence in the autonomous control technology, numerous scenarios must be considered. This paper expands upon previous work, related to autonomous testing of robotic control algorithms in a two-dimensional plane, to evaluate the suitability of similar techniques for validating artificial intelligence control in three dimensions, where a minimum level of airspeed must be maintained. The results of human-conducted testing are compared to this automated testing in terms of error detection, speed, and testing cost.
NASA Astrophysics Data System (ADS)
Jiang, Shengqin; Lu, Xiaobo; Cai, Guoliang; Cai, Shuiming
2017-12-01
This paper focuses on the cluster synchronisation problem of coupled complex networks with uncertain disturbances under an adaptive fixed-time control strategy. To begin with, complex dynamical networks with community structure which are subject to uncertain disturbances are taken into account. Then, a novel adaptive control strategy combined with fixed-time techniques is proposed to drive the nodes in the communities to their desired states within a settling time. In addition, the stability of the complex error systems is theoretically proved based on the Lyapunov stability theorem. At last, two examples are presented to verify the effectiveness of the proposed adaptive fixed-time control.
Adiabatic leakage elimination operator in an experimental framework
NASA Astrophysics Data System (ADS)
Wang, Zhao-Ming; Byrd, Mark S.; Jing, Jun; Wu, Lian-Ao
2018-06-01
Adiabatic evolution is used in a variety of quantum information processing tasks. However, the elimination of errors is not as well developed as it is for circuit model processing. Here, we present a strategy to improve the performance of a quantum adiabatic process by adding leakage elimination operators (LEOs) to the evolution. These are a sequence of pulse controls acting in an adiabatic subspace to eliminate errors by suppressing unwanted transitions. Using the Feshbach P-Q partitioning technique, we obtain an analytical solution for a set of pulse controls. The effectiveness of the LEO is independent of the specific form of the pulse but depends on the average frequency of the control function. By observing that the evolution of the target eigenstate is governed by a periodic function appearing in the integral of the control function, we show that control parameters can be chosen in such a way that the instantaneous eigenstates of the system are unchanged, yet a speedup can be achieved by suppressing transitions. Furthermore, we give the exact expression of the control function for a counter unitary transformation to be used in experiments, which provides a clear physical meaning for the LEO, aiding in its implementation.
A genetic algorithms approach for altering the membership functions in fuzzy logic controllers
NASA Technical Reports Server (NTRS)
Shehadeh, Hana; Lea, Robert N.
1992-01-01
Through previous work, a fuzzy control system was developed to perform translational and rotational control of a space vehicle. This problem was then re-examined to determine the effectiveness of genetic algorithms in fine-tuning the controller. This paper explains the problems associated with the design of this fuzzy controller and offers a technique for tuning fuzzy logic controllers. A fuzzy logic controller is a rule-based system that uses fuzzy linguistic variables to model human rule-of-thumb approaches to control actions within a given system. This 'fuzzy expert system' features rules that direct the decision process and membership functions that convert the linguistic variables into the precise numeric values used for system control. Defining the fuzzy membership functions is the most time-consuming aspect of the controller design, and a single change in the membership functions can significantly alter the performance of the controller. The membership functions can be defined by trial and error to create a highly tuned controller, but this approach is time consuming and requires a great deal of knowledge from human experts. In order to shorten development time, an iterative procedure was developed for altering the membership functions to create a tuned set that used a minimal amount of fuel for velocity-vector approach and station-keeping maneuvers. Genetic algorithms, search techniques used for optimization, were utilized to solve this problem.
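A minimal sketch of the genetic-algorithm tuning loop described above, assuming a stand-in quadratic cost in place of the fuel consumption computed from simulated maneuvers; the population size, operators, and the assumed optimum are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
ideal = np.array([-1.0, 0.0, 1.0])            # assumed optimal membership centers
cost = lambda c: np.sum((c - ideal) ** 2)     # stand-in for fuel consumed

pop = rng.normal(0.0, 2.0, size=(30, 3))      # initial population of center vectors
for gen in range(200):
    fit = np.array([cost(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:10]]       # selection: keep the fittest third
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(3) < 0.5, a, b)   # uniform crossover
        child = child + rng.normal(0.0, 0.1, size=3)  # mutation
        kids.append(child)
    pop = np.vstack([parents, np.array(kids)])        # elitism: parents survive

best = pop[np.argmin([cost(ind) for ind in pop])]
```

Because the parents are carried over unmutated, the best cost never increases, and the population drifts toward the low-cost (low-fuel) membership settings.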
Ahmed, Ashik; Al-Amin, Rasheduzzaman; Amin, Ruhul
2014-01-01
This paper proposes the design of a Static Synchronous Series Compensator (SSSC)-based damping controller to enhance the stability of a Single Machine Infinite Bus (SMIB) system by means of the Invasive Weed Optimization (IWO) technique. A conventional PI controller is used as the SSSC damping controller, taking the rotor speed deviation as input. The damping controller parameters are tuned with IWO against a time-integral-of-absolute-error cost function. The performance of the IWO-based controller is compared to that of a Particle Swarm Optimization (PSO)-based controller. Time-domain simulation results are presented, and the performance of the controllers under different loading conditions and fault scenarios is studied to illustrate the effectiveness of the IWO-based design approach.
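The ITAE tuning objective above can be sketched directly; the first-order decay below is an invented stand-in for the simulated rotor-speed-deviation response of the SMIB system:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)
error = np.exp(-0.8 * t)                        # placeholder speed-deviation response
dt = t[1] - t[0]
itae = float(np.sum(t * np.abs(error)) * dt)    # ITAE: integral of t * |e(t)|
```

The time weighting `t` penalizes error that persists late in the response, which is why ITAE favors fast, well-damped tunings; the optimizer (IWO or PSO) would evaluate this cost for each candidate parameter set.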
Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2015-11-01
The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reductions in spatial and temporal scales at higher Reynolds numbers lead to notable changes in the optimal pressure evaluation parameters. The effect of smaller scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
Error analysis of multi-needle Langmuir probe measurement technique.
Barjatya, Aroh; Merritt, William
2018-04-01
Multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.
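For illustration only (not the authors' analysis code): the standard m-NLP procedure exploits the orbit-motion-limited result that the squared collected current grows linearly with bias voltage, so a linear fit of I² against V across the needles yields the electron density from the square root of the slope. The constant K, the bias set, and the density below are hypothetical:

```python
import numpy as np

K = 1.0e-14                              # hypothetical instrument/geometry constant
V = np.array([2.5, 4.0, 5.5, 10.0])      # needle bias voltages [V]
ne_true = 1.0e11                         # assumed electron density [m^-3]
I2 = K * ne_true**2 * V                  # synthetic squared needle currents

slope, _ = np.polyfit(V, I2, 1)          # I^2 is linear in V in the OML regime
ne = np.sqrt(slope / K)                  # density recovered from the slope
```

Because only the slope enters, offsets common to all needles (such as the unknown plasma potential) cancel, which is the attraction of the technique; the paper's point is that the constant of proportionality is where the large errors hide.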
Reducing Formation-Keeping Maneuver Costs for Formation Flying Satellites in Low-Earth Orbit
NASA Technical Reports Server (NTRS)
Hamilton, Nicholas
2001-01-01
Several techniques are used to synthesize the formation-keeping control law for a three-satellite formation in low-earth orbit. The objective is to minimize maneuver cost and position tracking error. Initial reductions are found for a one-satellite case by tuning the state-weighting matrix within the linear-quadratic-Gaussian framework. Further savings come from adjusting the maneuver interval. Scenarios examined include cases with and without process noise. These results are then applied to a three-satellite formation. For both the one-satellite and three-satellite cases, increasing the maneuver interval yields a decrease in maneuver cost and an increase in position tracking error. A maneuver interval of 8-10 minutes provides a good trade-off between maneuver cost and position tracking error. An analysis of the closed-loop poles with respect to varying maneuver intervals explains the effectiveness of the chosen maneuver interval.
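The state-weighting knob discussed above can be sketched on an invented double-integrator model of a single satellite; the steady-state gain comes from iterating the discrete Riccati recursion in plain NumPy (a stand-in for a library LQR solver), with Q trading position tracking error against maneuver (control) effort:

```python
import numpy as np

dt = 60.0                                  # one-minute maneuver steps (illustrative)
A = np.array([[1.0, dt], [0.0, 1.0]])      # toy along-track double integrator
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([1.0, 0.1])                    # state weighting: the tuning knob
R = np.array([[1.0e4]])                    # penalizes maneuver (control) effort

P = Q.copy()
for _ in range(10000):                     # backward discrete Riccati iteration
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P_new = Q + A.T @ P @ (A - B @ K)
    if np.max(np.abs(P_new - P)) < 1e-9:
        P = P_new
        break
    P = P_new

Acl = A - B @ K                            # closed-loop dynamics under u = -K x
```

Raising the position entry of Q tightens tracking at the cost of more control effort; lengthening `dt` mimics the paper's maneuver-interval trade-off.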
Reactive navigation for autonomous guided vehicle using neuro-fuzzy techniques
NASA Astrophysics Data System (ADS)
Cao, Jin; Liao, Xiaoqun; Hall, Ernest L.
1999-08-01
A neuro-fuzzy control method for navigation of an Autonomous Guided Vehicle (AGV) robot is described. Robot navigation is defined as guiding a mobile robot to a desired destination or along a desired path in an environment characterized by terrain and a set of distinct objects, such as obstacles and landmarks. A robot's autonomous navigation ability and road-following precision are mainly influenced by its control strategy and real-time control performance. Neural network and fuzzy logic control techniques can improve the real-time control performance of mobile robots because of their robustness and error tolerance. For a mobile robot to navigate automatically and rapidly, an important factor is identifying and classifying the robot's current perceptual environment. In this paper, a new approach to identifying and classifying features of the current perceptual environment, based on a classifying neural network and a neuro-fuzzy algorithm, is presented. The significance of this work lies in the development of a new method for mobile robot navigation.
High efficiency and simple technique for controlling mechanisms by EMG signals
NASA Astrophysics Data System (ADS)
Dugarte, N.; Álvarez, A.; Balacco, J.; Mercado, G.; Gonzalez, A.; Dugarte, E.; Javier, F.; Ceballos, G.; Olivares, A.
2016-04-01
This article reports the development of a simple and efficient system that allows control of mechanisms through electromyography (EMG) signals. The novelty of this instrument lies in the individual control of each motion vector of the mechanism through independent electronic circuits. Each electronic circuit positions a motor according to the intensity of the captured EMG signal; this action defines movement along one mechanical axis, measured from an initial point, as muscle tension increases. The final displacement of the mechanism depends on the individual's ability to control the levels of muscle tension at different body parts. The design was demonstrated on a robotic arm in which each degree of freedom is handled by a specific microcontroller that responds to signals taken from a defined muscle. The biophysical interaction between the person and the final positioning of the robotic arm is used as feedback. Preliminary tests showed that the control operates with minimal positioning-error margins, and constant use of the system by the same operator showed that the person adapts and progressively improves the control technique.
NASA Technical Reports Server (NTRS)
Broussard, J. R.; Halyo, N.
1984-01-01
This report contains the development of a digital outer-loop three dimensional radio navigation (3-D RNAV) flight control system for a small commercial jet transport. The outer-loop control system is designed using optimal stochastic limited state feedback techniques. Options investigated using the optimal limited state feedback approach include integrated versus hierarchical control loop designs, 20 samples per second versus 5 samples per second outer-loop operation and alternative Type 1 integration command errors. Command generator tracking techniques used in the digital control design enable the jet transport to automatically track arbitrary curved flight paths generated by waypoints. The performance of the design is demonstrated using detailed nonlinear aircraft simulations in the terminal area, frequency domain multi-input sigma plots, frequency domain single-input Bode plots and closed-loop poles. The response of the system to a severe wind shear during a landing approach is also presented.
An interplanetary targeting and orbit insertion maneuver design technique
NASA Technical Reports Server (NTRS)
Hintz, G. R.
1980-01-01
The paper describes a tradeoff in selecting a planetary encounter aimpoint and a spacecraft propulsive maneuver strategy in the Pioneer Venus Orbiter Mission. The method uses parametric data spanning a region of acceptable targeting aimpoints in the delivery space, together with geometric considerations. Real-time maneuver adjustments accounted for known attitude control errors, orbit determination updates, and late changes in the targeting specification.
NASA Astrophysics Data System (ADS)
Howie, Philip V.
1993-04-01
The MD Explorer is an eight-seat twin-turbine helicopter being developed using integrated product definition (IPD) team methodology. New techniques include the NOTAR antitorque system for directional control, a composite fuselage, an all-composite bearingless main rotor, and digital cockpit displays. Three-dimensional CAD models are the basis of the entire Explorer design; solid models provide vendors with design clarification, removing many of the usual drawing-interpretation errors.
Long, Lijun; Zhao, Jun
2017-07-01
In this paper, the problem of adaptive neural output-feedback control is addressed for a class of multi-input multi-output (MIMO) switched uncertain nonlinear systems with unknown control gains. Neural networks (NNs) are used to approximate unknown nonlinear functions. In order to avoid the conservativeness caused by adopting a common observer for all subsystems, an MIMO NN switched observer is designed to estimate unmeasurable states. A new switched observer-based adaptive neural control technique is then developed by exploiting the classical average dwell time (ADT) method together with the backstepping method and the Nussbaum gain technique. It effectively handles the obstacle posed by the coexistence of multiple Nussbaum-type function terms, and it improves the classical ADT method, since the exponential decline property of Lyapunov functions for individual subsystems is no longer satisfied. It is shown that the proposed technique guarantees semiglobal uniform ultimate boundedness of all signals in the closed-loop system under a class of switching signals with ADT, with the tracking errors converging to a small neighborhood of the origin. The effectiveness of the approach is illustrated by its application to a two-inverted-pendulum system.
NASA Technical Reports Server (NTRS)
Simpson, C. A.
1985-01-01
In the present study of the responses of pairs of pilots to aircraft warning classification tasks using an isolated word, speaker-dependent speech recognition system, the induced stress was manipulated by means of different scoring procedures for the classification task and by the inclusion of a competitive manual control task. Both speech patterns and recognition accuracy were analyzed, and recognition errors were recorded by type for an isolated word speaker-dependent system and by an offline technique for a connected word speaker-dependent system. While errors increased with task loading for the isolated word system, there was no such effect for task loading in the case of the connected word system.
Terminal sliding mode tracking control for a class of SISO uncertain nonlinear systems.
Chen, Mou; Wu, Qing-Xian; Cui, Rong-Xin
2013-03-01
In this paper, terminal sliding mode tracking control is proposed for uncertain single-input single-output (SISO) nonlinear systems with unknown external disturbance. For the unmeasured disturbance, a terminal sliding mode disturbance observer is presented; the observer guarantees that the disturbance approximation error converges to zero in finite time. Based on the output of the designed disturbance observer, terminal sliding mode tracking control is presented for uncertain SISO nonlinear systems. Subsequently, terminal sliding mode tracking control is developed using the disturbance observer technique for the uncertain SISO nonlinear system with control singularity and unknown non-symmetric input saturation. The effects of the control singularity and unknown input saturation are combined with the external disturbance, which is approximated using the disturbance observer. Under the proposed terminal sliding mode tracking control techniques, finite-time convergence of all closed-loop signals is guaranteed via Lyapunov analysis. Numerical simulation results are given to illustrate the effectiveness of the proposed terminal sliding mode tracking control. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Structural power flow measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falter, K.J.; Keltie, R.F.
Previous investigations of structural power flow through beam-like structures resulted in some unexplained anomalies in the calculated data. In order to develop structural power flow measurement as a viable technique for machine tool design, the causes of these anomalies needed to be found; once found, techniques for eliminating the errors could be developed. Error sources were found in the experimental apparatus itself as well as in the instrumentation. Although flexural waves are the carriers of power in the experimental apparatus, at some frequencies longitudinal waves were excited, which were picked up by the accelerometers and altered the power measurements. Errors were found in the phase and gain response of the sensors and amplifiers used for measurement. A transfer function correction technique was employed to compensate for these instrumentation errors.
NASA Technical Reports Server (NTRS)
1981-01-01
Presentations of a conference on the use of ruggedized minicomputers are summarized. The following topics are discussed: (1) the role of minicomputers in the development and/or certification of commercial or military airplanes in both the United States and Europe; (2) generalized software error detection techniques; (3) real time software development tools; (4) a redundancy management research tool for aircraft navigation/flight control sensors; (5) extended memory management techniques using a high order language; and (6) some comments on establishing a system maintenance scheme. Copies of presentation slides are also included.
Multi-Level Adaptive Techniques (MLAT) for singular-perturbation problems
NASA Technical Reports Server (NTRS)
Brandt, A.
1978-01-01
The multilevel (multigrid) adaptive technique, a general strategy of solving continuous problems by cycling between coarser and finer levels of discretization is described. It provides very fast general solvers, together with adaptive, nearly optimal discretization schemes. In the process, boundary layers are automatically either resolved or skipped, depending on a control function which expresses the computational goal. The global error decreases exponentially as a function of the overall computational work, in a uniform rate independent of the magnitude of the singular-perturbation terms. The key is high-order uniformly stable difference equations, and uniformly smoothing relaxation schemes.
Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection
Kenny, Avi; Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J
2017-01-01
Background The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. Objective We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term “validation relaxation.” Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. Methods We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of “required” constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. Results The aggregate error rate was 1.60% (125/7817). 
Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. Conclusions A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. PMID:28821474
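The error-rate arithmetic reported above can be reproduced from the abstract's own figures; the odds-ratio roll-forward below assumes the fitted per-day OR applies uniformly from the day-0 rate:

```python
errors, possible = 125, 7817
crude = errors / possible                      # aggregate error rate: about 1.60%

p0, or_day = 0.023, 0.969                      # day-0 rate and fitted per-day odds ratio
odds = (p0 / (1.0 - p0)) * or_day ** 45        # odds after 45 days of application use
p45 = odds / (1.0 + odds)                      # back to a probability: about 0.6%
```

Converting through the odds scale rather than multiplying probabilities directly is what the logistic-regression OR implies, and it reproduces the reported decline from 2.3% to 0.6%.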
Robust leader-follower formation tracking control of multiple underactuated surface vessels
NASA Astrophysics Data System (ADS)
Peng, Zhou-hua; Wang, Dan; Lan, Wei-yao; Sun, Gang
2012-09-01
This paper is concerned with the formation control problem of multiple underactuated surface vessels moving in a leader-follower formation. The formation is achieved by having the follower track a virtual target defined relative to the leader. A robust adaptive target tracking law is proposed using neural network and backstepping techniques. The advantage of the proposed control scheme is that the uncertain nonlinear dynamics caused by Coriolis/centripetal forces, nonlinear damping, unmodeled hydrodynamics, and disturbances from the environment can be compensated by online learning. Based on Lyapunov analysis, the proposed controller guarantees that the tracking errors converge to a small neighborhood of the origin. Simulation results demonstrate the effectiveness of the control strategy.
Preliminary experiments on active control of fan noise from a turbofan engine
NASA Technical Reports Server (NTRS)
Thomas, R. H.; Burdisso, R. A.; Fuller, C. R.; O'Brien, W. F.
1993-01-01
In the preliminary experiments reported here, active acoustic sources positioned around the circumference of a turbofan engine were used to control the fan noise radiated forward through the inlet. The main objective was to demonstrate the potential of active techniques to alleviate the noise pollution that will be produced by the next generation of larger engines. A reduction of up to 19 dB in the radiation directivity was demonstrated in a zone that encompasses a 30-deg angle, near the error sensor, while spillover effects were observed toward the lateral direction. The simultaneous control of two tones was also demonstrated using two identical controllers in a parallel control configuration.
NASA Astrophysics Data System (ADS)
Sandoz, J.-P.; Steenaart, W.
1984-12-01
The nonuniform sampling digital phase-locked loop (DPLL) with sequential loop filter, in which the correction sizes are controlled by the accumulated differences of two additional phase comparators, is graphically analyzed. In the absence of noise and frequency drift, the analysis gives some physical insight into the acquisition and tracking behavior. Taking noise into account, a mathematical model is derived and a random walk technique is applied to evaluate the rms phase error and the mean acquisition time. Experimental results confirm the appropriate simplifying hypotheses used in the numerical analysis. Two related performance measures defined in terms of the rms phase error and the acquisition time for a given SNR are used. These measures provide a common basis for comparing different digital loops and, to a limited extent, also with a first-order linear loop. Finally, the behavior of a modified DPLL under frequency deviation in the presence of Gaussian noise is tested experimentally and by computer simulation.
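A toy first-order, binary-quantized DPLL conveys the random-walk behavior analyzed above: at each sample the loop steps its phase estimate by a fixed correction whose sign comes from a noisy phase comparison, so the steady-state phase error performs a bounded random walk. Step size and noise level are invented, and this simplified loop omits the sequential filter and the two additional phase comparators of the paper's design:

```python
import numpy as np

rng = np.random.default_rng(2)
true_phase = 0.7                 # incoming-signal phase (arbitrary units)
est, step = 0.0, 0.01            # loop estimate and fixed correction size
errs = []
for k in range(20000):
    comparator = (true_phase - est) + rng.normal(0.0, 0.05)  # noisy phase comparison
    est += step * np.sign(comparator)                        # quantized correction
    if k > 5000:                                             # past acquisition
        errs.append(true_phase - est)
rms = float(np.sqrt(np.mean(np.square(errs))))
```

The rms phase error after acquisition and the number of steps needed to acquire are exactly the two performance measures the paper evaluates (there by a random-walk analysis rather than Monte Carlo).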
NASA Technical Reports Server (NTRS)
Bernacki, Bruce E.; Mansuripur, M.
1992-01-01
A commonly used tracking method on pre-grooved magneto-optical (MO) media is the push-pull technique, and the astigmatic method is a popular focus-error detection approach. These two methods are analyzed using DIFFRACT, a general-purpose scalar diffraction modeling program, to observe the effects on the error signals due to focusing lens misalignment, Seidel aberrations, and optical crosstalk (feedthrough) between the focusing and tracking servos. Using the results of the astigmatic/push-pull system as a basis for comparison, a novel focus/track-error detection technique that utilizes a ring toric lens is evaluated as well as the obscuration method (focus error detection only).
Fringe Capacitance Correction for a Coaxial Soil Cell
Pelletier, Mathew G.; Viera, Joseph A.; Schwartz, Robert C.; Lascano, Robert J.; Evett, Steven R.; Green, Tim R.; Wanjura, John D.; Holt, Greg A.
2011-01-01
Accurate measurement of moisture content is a prime requirement in hydrological, geophysical, and biogeochemical research, as well as for material characterization and process control. Within these areas, accurate measurement of surface area and bound water content is becoming increasingly important for answering fundamental questions ranging from characterization of cotton fiber maturity, to characterization of soil water content in soil water conservation research, to bio-plant water utilization, to chemical reactions and diffusion of ionic species across membranes in cells and in the dense suspensions that occur in surface films. One promising technique for meeting the demand for higher-accuracy water content measurements is electrical permittivity characterization of materials. This technique has enjoyed a strong following in the soil-science and geological communities through measurements of apparent permittivity via time-domain reflectometry (TDR), as well as in many process control applications. Recent research, however, indicates a need to increase accuracy beyond that available from traditional TDR. The most logical pathway is then a transition from TDR-based measurements to network analyzer measurements of absolute permittivity, which removes the adverse effects that high-surface-area soils and conductivity impart on measurements of apparent permittivity in traditional TDR applications. This research examines an observed experimental error for the coaxial probe, from which the modern TDR probe originated, hypothesized to be due to fringe capacitance. It provides an experimental and theoretical basis for the cause of the error and a technique by which to correct the system to remove this source of error.
To test this theory, a Poisson model of a coaxial cell was formulated to calculate the effective theoretical extra length caused by the fringe capacitance, which is then used to correct the experimental results. Experimental measurements utilizing differing coaxial cell diameters and probe lengths, upon correction with the Poisson-model-derived correction factor, all produce the same results, thereby lending support for an augmented technique for measurement of absolute permittivity. PMID:22346601
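The correction idea can be sketched numerically: fringe capacitance acts like a small extra electrical length dL common to every cell, so uncorrected permittivity estimates disagree across cell lengths, and the dL that makes cells of different physical lengths report the same permittivity is the correction. The numbers below are synthetic, not the paper's measurements:

```python
import numpy as np

c = 2.998e8                                    # speed of light [m/s]
L = np.array([0.05, 0.10, 0.20])               # physical cell lengths [m]
eps_true, dL_true = 9.0, 0.004                 # assumed permittivity, fringe length
tau = np.sqrt(eps_true) * (L + dL_true) / c    # synthetic propagation delays [s]

eps_raw = (c * tau / L) ** 2                   # uncorrected estimates disagree

# choose the dL that collapses the spread of the corrected estimates
grid = np.linspace(0.0, 0.01, 2001)
spread = [np.ptp((c * tau / (L + d)) ** 2) for d in grid]
dL = float(grid[int(np.argmin(spread))])
eps = float(np.mean((c * tau / (L + dL)) ** 2))
```

In the paper the extra length comes from the Poisson model rather than from this consistency search, but the end effect is the same: one correction factor brings all cell geometries into agreement.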
A Multigrid NLS-4DVar Data Assimilation Scheme with Advanced Research WRF (ARW)
NASA Astrophysics Data System (ADS)
Zhang, H.; Tian, X.
2017-12-01
The motions of the atmosphere have multiscale properties in space and/or time, and the background error covariance matrix (B) should thus contain error information at different correlation scales. To obtain an optimal analysis, the multigrid three-dimensional variational data assimilation scheme is used widely when sequentially correcting errors from large to small scales. However, introduction of the multigrid technique into four-dimensional variational data assimilation is not easy, due to its strong dependence on the adjoint model, which has extremely high computational costs in data coding, maintenance, and updating. In this study, the multigrid technique was introduced into the nonlinear least-squares four-dimensional variational assimilation (NLS-4DVar) method, which is an advanced four-dimensional ensemble-variational method that can be applied without invoking the adjoint models. The multigrid NLS-4DVar (MG-NLS-4DVar) scheme uses the number of grid points to control the scale, with doubling of this number when moving from a coarse to a finer grid. Furthermore, the MG-NLS-4DVar scheme not only retains the advantages of NLS-4DVar, but also sufficiently corrects multiscale errors to achieve a highly accurate analysis. The effectiveness and efficiency of the proposed MG-NLS-4DVar scheme were evaluated by several groups of observing system simulation experiments using the Advanced Research Weather Research and Forecasting Model. MG-NLS-4DVar outperformed NLS-4DVar, with a lower computational cost.
Tahoun, A H
2017-01-01
In this paper, the stabilization problem for uncertain chaotic systems subject to actuator saturation is investigated via an adaptive PID control method. The PID control parameters are auto-tuned adaptively via adaptive control laws. A multi-level augmented error is designed to account for the extra terms that appear due to the use of PID control and saturation. The proposed control technique uses both state-feedback and output-feedback methodologies. Based on Lyapunov's stability theory, new anti-windup adaptive controllers are proposed. Demonstrative examples with MATLAB simulations are studied. The simulation results show the efficiency of the proposed adaptive PID controllers. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
On the application of photogrammetry to the fitting of jawbone-anchored bridges.
Strid, K G
1985-01-01
Misfit between a jawbone-anchored bridge and the abutments in the patient's jaw may result in, for example, fixture fracture. To achieve improved alignment, the bridge base could be prepared in a numerically controlled tooling machine using measured abutment coordinates as primary data. For each abutment, the measured values must comprise the coordinates of a reference surface as well as the spatial orientation of the fixture/abutment longitudinal axis. Stereophotogrammetry was assumed to be the measuring method of choice. To assess its potential, a lower-jaw model with accurately positioned signals was stereophotographed and the films were measured in a stereocomparator. Model-space coordinates, computed from the image coordinates, were compared to the known signal coordinates. The root-mean-square error in position was determined to be 0.03-0.08 mm, with the maximum individual error amounting to 0.12 mm, whereas the r.m.s. error in axis direction was found to be 0.5-1.5 degrees with a maximum individual error of 1.8 degrees. These errors are of the same order as can be achieved by careful impression techniques. The method could be useful, but because of its complexity, stereophotogrammetry is not recommended as a standard procedure.
Counteracting structural errors in ensemble forecast of influenza outbreaks.
Pei, Sen; Shaman, Jeffrey
2017-10-13
For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate, are substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models.
Automatic-repeat-request error control schemes
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.; Miller, M. J.
1983-01-01
Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
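A pure-detection ARQ scheme of the kind surveyed here is easy to sketch: the sender appends a checksum to each frame, and the receiver requests retransmission whenever the check fails. The Python sketch below is illustrative, not from the report; the function name, the stop-and-wait discipline, and the use of CRC-32 as the error-detecting code are assumptions:

```python
import random
import zlib

def send_with_arq(payload: bytes, flip_prob: float, rng: random.Random,
                  max_tries: int = 100):
    """Stop-and-wait ARQ sketch: append a CRC-32, send over a bit-flipping
    channel, and retransmit until the receiver's CRC check passes."""
    frame = payload + zlib.crc32(payload).to_bytes(4, "big")
    for attempt in range(1, max_tries + 1):
        # channel: independently flip each bit with probability flip_prob
        received = bytes(
            b ^ sum(1 << i for i in range(8) if rng.random() < flip_prob)
            for b in frame
        )
        data, crc = received[:-4], received[-4:]
        if zlib.crc32(data).to_bytes(4, "big") == crc:
            return data, attempt          # receiver ACKs the clean frame
        # receiver NAKs: corruption detected, sender retransmits
    raise RuntimeError("retry limit exceeded")
```

Virtually error-free delivery follows from the detection code: a corrupted frame slips through only if the flipped bits happen to leave the CRC consistent, an event with probability near 2^-32 per frame for CRC-32.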
NASA Technical Reports Server (NTRS)
Alexander, Tiffaney Miller
2017-01-01
Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Safety within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and classified in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factors Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors, assess their impact on human error events, and predict the human error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle-related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.
NASA Technical Reports Server (NTRS)
Alexander, Tiffaney Miller
2017-01-01
Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Quality within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and classified in order to manage human error. This presentation provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factors Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors, assess their impact on human error events, and predict the human error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle-related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.
Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd
2016-08-01
Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
A variational regularization of Abel transform for GPS radio occultation
NASA Astrophysics Data System (ADS)
Wee, Tae-Kwon
2018-04-01
In the Global Positioning System (GPS) radio occultation (RO) technique, the inverse Abel transform of measured bending angle (Abel inversion, hereafter AI) is the standard means of deriving the refractivity. While concise and straightforward to apply, the AI accumulates and propagates the measurement error downward. The measurement error propagation is detrimental to the refractivity in lower altitudes. In particular, it builds up negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not incur the integration of error-possessing measurement and thus precludes the error propagation. The variational regularization (VR) proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's considered accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on measurement error characteristics and the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The competency of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors compared to AI. 
A noteworthy finding is that in the heights and areas where the measurement bias is supposedly small, VR follows AI very closely in the mean refractivity, departing from the first guess. In the lowest few kilometers, where AI produces a large negative refractivity bias, VR reduces the refractivity bias substantially with the aid of the background, which in this study is the operational forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF). It is concluded, based on the results presented in this study, that VR offers a definite advantage over AI in the quality of refractivity.
Biometrics based key management of double random phase encoding scheme using error control codes
NASA Astrophysics Data System (ADS)
Saini, Nirmala; Sinha, Aloka
2013-08-01
In this paper, an optical security system is proposed in which the key of the double random phase encoding technique is linked to the biometrics of the user to make it user specific. The error in recognition due to biometric variation is corrected by encoding the key using a BCH code. A user-specific shuffling key is used to increase the separation between the genuine and impostor Hamming distance distributions. This shuffling key is then further secured using RSA public key encryption to enhance the security of the system. An XOR operation is performed between the encoded key and the feature vector obtained from the biometrics. The RSA-encoded shuffling key and the data obtained from the XOR operation are stored in a token. The main advantage of the present technique is that key retrieval is possible only in the simultaneous presence of the token and the biometrics of the user, which not only authenticates the presence of the original input but also secures the key of the system. Computational experiments showed the effectiveness of the proposed technique for key retrieval in the decryption process by using the live biometrics of the user.
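The key-binding step described above follows the fuzzy-commitment pattern: XOR an error-correction-encoded key with the biometric feature vector, then undo the XOR with a fresh (noisy) biometric reading and decode. The sketch below substitutes a 5x repetition code for the paper's BCH code and omits the shuffling and RSA layers; all function names are illustrative:

```python
def rep_encode(bits, r=5):
    """Repeat each key bit r times (stand-in for BCH encoding)."""
    return [b for b in bits for _ in range(r)]

def rep_decode(bits, r=5):
    """Majority-vote each block of r bits (stand-in for BCH decoding)."""
    return [int(sum(bits[i * r:(i + 1) * r]) > r // 2)
            for i in range(len(bits) // r)]

def bind(key_bits, feature_bits, r=5):
    """Helper data stored in the token: codeword XOR biometric features."""
    return [c ^ f for c, f in zip(rep_encode(key_bits, r), feature_bits)]

def release(helper, noisy_feature_bits, r=5):
    """Recover the key from the token and a fresh, noisy biometric."""
    noisy_code = [h ^ f for h, f in zip(helper, noisy_feature_bits)]
    return rep_decode(noisy_code, r)
```

Because the XOR cancels the enrolled features up to the measurement noise, the decoder sees the key codeword plus a few flipped bits, and any noise pattern within the code's correction capacity (here, up to two flips per repetition block) still releases the exact key.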
Self-optimization and auto-stabilization of receiver in DPSK transmission system.
Jang, Y S
2008-03-17
We propose a self-optimization and auto-stabilization method for a 1-bit DMZI in DPSK transmission. Using the characteristics of eye patterns, the optical frequency transmittance of a 1-bit DMZI is thermally controlled to maximize the power difference between the constructive and destructive output ports. Unlike other techniques, this control method can be realized without additional components, making it simple and cost effective. Experimental results show that error-free performance is maintained when the carrier optical frequency variation is approximately 10% of the data rate.
Frequency division multiplex technique
NASA Technical Reports Server (NTRS)
Brey, H. (Inventor)
1973-01-01
A system for monitoring a plurality of condition responsive devices is described. It consists of a master control station and a remote station. The master control station transmits command signals, which include a parity signal, to the remote station; the remote station transmits the signals back to the master station so that they can be compared with the original signals to determine whether any transmission errors occurred. The system utilizes frequency sources that are 1.21 multiples of each other so that no linear combination of any harmonics will interfere with another frequency.
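The 1.21 spacing can be sanity-checked with exact rational arithmetic. Since 1.21 = 121/100 with 121 and 100 coprime, a harmonic n1 of one tone can coincide with a harmonic n2 of another only if a power of 121 divides one of the harmonic orders, which first happens at orders 121 and 100. A brief Python check (illustrative, not from the patent) confirms that all low-order harmonics are distinct:

```python
from fractions import Fraction

def harmonic_set(ratio=Fraction(121, 100), n_tones=4, max_order=20):
    """All harmonics n * ratio**k of a family of tones spaced by `ratio`.

    With ratio = 121/100 in lowest terms, n1 * ratio**a == n2 * ratio**b
    (a < b) would force 121**(b - a) to divide n1, so no collision can
    occur below harmonic order 121."""
    return [n * ratio**k
            for k in range(n_tones)
            for n in range(1, max_order + 1)]
```

Using `Fraction` avoids floating-point near-misses: the set comparison below is exact, so distinctness of the 80 low-order harmonics is established rather than merely suggested.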
Hua, Changchun; Zhang, Liuliu; Guan, Xinping
2017-01-01
This paper studies the problem of distributed output tracking consensus control for a class of high-order stochastic nonlinear multiagent systems with an unknown nonlinear dead-zone under a directed graph topology. Adaptive neural networks are used to approximate the unknown nonlinear functions, and a new inequality is used to deal with the completely unknown dead-zone input. The controllers are then designed based on the backstepping method and the dynamic surface control technique. It is strictly proved, based on Lyapunov stability theory, that the resulting closed-loop system is stable in probability in the sense of semiglobally uniform ultimate boundedness and that the tracking errors between the leader and the followers converge to a small residual set. Finally, two simulation examples are presented to show the effectiveness and advantages of the proposed techniques.
Active control of sound transmission through partitions composed of discretely controlled modules
NASA Astrophysics Data System (ADS)
Leishman, Timothy W.
This thesis provides a detailed theoretical and experimental investigation of active segmented partitions (ASPs) for the control of sound transmission. ASPs are physically segmented arrays of interconnected acoustically and structurally small modules that are discretely controlled using electronic controllers. Theoretical analyses of the thesis first address physical principles fundamental to ASP modeling and experimental measurement techniques. Next, they explore specific module configurations, primarily using equivalent circuits. Measured normal-incidence transmission losses and related properties of experimental ASPs are determined using plane wave tubes and the two-microphone transfer function technique. A scanning laser vibrometer is also used to evaluate distributed transmitting surface vibrations. ASPs have the inherent potential to provide excellent active sound transmission control (ASTC) through lightweight structures, using very practical control strategies. The thesis analyzes several unique ASP configurations and evaluates their abilities to produce high transmission losses via global minimization of normal transmitting surface vibrations. A novel dual diaphragm configuration is shown to employ this strategy particularly well. It uses an important combination of acoustical actuation and mechano-acoustical segmentation to produce exceptionally high transmission loss (e.g., 50 to 80 dB) over a broad frequency range, including lower audible frequencies. Such performance is shown to be comparable to that produced by much more massive partitions composed of thick layers of steel or concrete and sand. The configuration uses only simple localized error sensors and actuators, permitting effective use of independent single-channel controllers in a decentralized format. This work counteracts the commonly accepted notion that active vibration control of partitions is an ineffective means of controlling sound transmission.
With appropriate construction, actuation, and error sensing, ASPs can achieve high sound transmission loss through efficient global control of transmitting surface vibrations. This approach is applicable to a wide variety of source and receiving spaces, and to both near fields and far fields.
Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2012-01-01
This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.
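A scalar simulation conveys the flavor of model-reference adaptive control with a robustness modification. Caveat: the sketch below uses the classical sigma-modification as a well-known stand-in for the paper's optimal control modification term, and the plant, gains, and function name are all made up for illustration:

```python
import numpy as np

def mrac_sigma_step_response(a_m=-1.0, theta_star=2.0, gamma=10.0,
                             sigma=0.01, dt=1e-3, t_end=20.0):
    """Scalar model-reference adaptive control with sigma-modification
    (a robustness stand-in for the optimal control modification).

    Plant:      dx/dt  = a_m*x + theta_star*x + u   (theta_star unknown)
    Reference:  dxm/dt = a_m*xm + r
    Control:    u = r - theta_hat*x
    Adaptation: dtheta_hat/dt = gamma*(e*x - sigma*theta_hat), e = x - xm
    """
    x = xm = theta_hat = 0.0
    r = 1.0                              # step command
    n = int(t_end / dt)
    e_hist = np.empty(n)
    for k in range(n):                   # forward-Euler integration
        e = x - xm
        u = r - theta_hat * x
        x += dt * (a_m * x + theta_star * x + u)
        xm += dt * (a_m * xm + r)
        theta_hat += dt * gamma * (e * x - sigma * theta_hat)
        e_hist[k] = e
    return e_hist, theta_hat
```

In this toy problem the adaptive gain settles near the true uncertainty theta_star = 2, with a small residual tracking error proportional to sigma, mirroring the robustness/accuracy trade-off that the optimal control modification is designed to manage within an optimal control framework.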
String Stability of a Linear Formation Flight Control System
NASA Technical Reports Server (NTRS)
Allen, Michael J.; Ryan, Jack; Hanson, Curtis E.; Parle, James F.
2002-01-01
String stability analysis of an autonomous formation flight system was performed using linear and nonlinear simulations. String stability is a measure of how position errors propagate from one vehicle to another in a cascaded system. In the formation flight system considered here, each i-th aircraft uses information from itself and the preceding (i-1)-th aircraft to track a commanded relative position. A possible solution for meeting performance requirements with such a system is to allow string instability. This paper explores two results of string instability and outlines analysis techniques for string unstable systems. The three analysis techniques presented here are: linear, nonlinear formation performance, and ride quality. The linear technique was developed from a worst-case scenario and could be applied to the design of a string unstable controller. The nonlinear formation performance and ride quality analysis techniques both use nonlinear formation simulation. Three of the four formation-controller gain-sets analyzed in this paper were limited more by ride quality than by performance. Formations of up to seven aircraft in a cascaded formation could be used in the presence of light gusts with this string unstable system.
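String (in)stability can be checked directly from the error-propagation transfer function between successive vehicles. The sketch below uses the textbook double-integrator, predecessor-following PD example, not the paper's formation flight controller; the string is stable only if the gain stays at or below one at every frequency:

```python
import numpy as np

def spacing_error_gain(kp, kd, omega):
    """Magnitude of H(jw) = E_{i+1}/E_i for predecessor-following PD
    control of double-integrator vehicles:

        H(s) = (kd*s + kp) / (s**2 + kd*s + kp)

    String stability requires |H(jw)| <= 1 for all w; any peak above
    one means spacing errors amplify down the string."""
    s = 1j * np.asarray(omega)
    return np.abs((kd * s + kp) / (s**2 + kd * s + kp))
```

For pure relative-state feedback the low-frequency gain exceeds one regardless of the gains chosen, so peak position errors grow from vehicle to vehicle, the same string-unstable behavior the paper accommodates by limiting formation size and analyzing ride quality.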
Rejman, Marek
2013-01-01
The aim of this study was to analyze the error structure in propulsive movements with regard to its influence on monofin swimming speed. Random cycles performed by six swimmers were filmed during a progressive test (900 m). An objective method to estimate errors committed in the area of angular displacement of the feet and monofin segments was employed. The parameters were compared with a previously described model. Mutual dependences between the level of errors, stroke frequency, stroke length and amplitude in relation to swimming velocity were analyzed. The results showed that proper foot movements and the avoidance of errors arising at the distal part of the fin ensure the progression of swimming speed. An individual distribution of stroke parameters, consisting of optimally increasing stroke frequency to the maximal level that enables stabilization of stroke length, leads to the minimization of errors. Identification of key elements in the stroke structure based on the analysis of errors committed should aid in improving monofin swimming technique. Key points: The monofin swimming technique was evaluated through the prism of objectively defined errors committed by the swimmers. The dependences between the level of errors, stroke rate, stroke length and amplitude in relation to swimming velocity were analyzed. Optimally increasing stroke rate to the maximal possible level that enables the stabilization of stroke length leads to the minimization of errors. Proper foot movement and the avoidance of errors arising at the distal part of the fin provide for the progression of swimming speed. The key elements for improving monofin swimming technique, based on the analysis of errors committed, were designated. PMID:24149742
System reliability and recovery.
DOT National Transportation Integrated Search
1971-06-01
The paper exhibits a variety of reliability techniques applicable to future ATC data processing systems. Presently envisioned schemes for error detection, error interrupt and error analysis are considered, along with methods of retry, reconfiguration...
Error compensation for hybrid-computer solution of linear differential equations
NASA Technical Reports Server (NTRS)
Kemp, N. H.
1970-01-01
Z-transform technique compensates for digital transport delay and digital-to-analog hold. Method determines best values for compensation constants in multi-step and Taylor series projections. Technique also provides hybrid-calculation error compared to continuous exact solution, plus system stability properties.
Shawahna, Ramzi; Masri, Dina; Al-Gharabeh, Rawan; Deek, Rawan; Al-Thayba, Lama; Halaweh, Masa
2016-02-01
To develop and achieve formal consensus on a definition of medication administration errors and scenarios that should or should not be considered as medication administration errors in hospitalised patient settings. Medication administration errors occur frequently in hospitalised patient settings. Currently, there is no formal consensus on a definition of medication administration errors or scenarios that should or should not be considered as medication administration errors. This was a descriptive study using the Delphi technique. A panel of experts (n = 50) recruited from major hospitals, nursing schools and universities in Palestine took part in the study. Three Delphi rounds were followed to achieve consensus on a proposed definition of medication administration errors and a series of 61 scenarios representing potential medication administration error situations formulated into a questionnaire. In the first Delphi round, key contact nurses' views on medication administration errors were explored. In the second Delphi round, consensus was achieved to accept the proposed definition of medication administration errors and to include 36 (59%) scenarios and exclude 1 (1·6%) as medication administration errors. In the third Delphi round, consensus was achieved to include a further 14 (23%) and exclude 2 (3·3%) as medication administration errors, while the remaining eight (13·1%) were considered equivocal. Of the 61 scenarios included in the Delphi process, experts decided to include 50 scenarios as medication administration errors, exclude three scenarios and include or exclude eight scenarios depending on the individual clinical situation. Consensus on a definition and scenarios representing medication administration errors can be achieved using formal consensus techniques.
Researchers should be aware that using different definitions of medication administration errors, inclusion or exclusion of medication administration error situations could significantly affect the rate of medication administration errors reported in their studies. Consensual definitions and medication administration error situations can be used in future epidemiology studies investigating medication administration errors in hospitalised patient settings which may permit and promote direct comparisons of different studies. © 2015 John Wiley & Sons Ltd.
Age estimation from dental cementum incremental lines and periodontal disease.
Dias, P E M; Beaini, T L; Melani, R F H
2010-12-01
Age estimation by counting incremental lines in cementum, added to the average age of tooth eruption, is considered an accurate method by some authors, while others reject it, citing weak correlation between estimated and actual age. The aim of this study was to evaluate this technique and check the influence of periodontal disease on age estimates by analyzing both the number of cementum lines and the correlation between cementum thickness and actual age on freshly extracted teeth. Thirty-one undecalcified ground cross-sections of approximately 30 µm, from 25 teeth, were prepared, observed, photographed and measured. Images were enhanced by software and counts were made by one observer, with the results compared against two control observers. There was moderate correlation (r = 0.58) for the entire sample, with a mean error of 9.7 years. For teeth with periodontal pathologies, the correlation was 0.03 with a mean error of 22.6 years. For teeth without periodontal pathologies, the correlation was 0.74 with a mean error of 1.6 years. There was a correlation of 0.69 between cementum thickness and known age for the entire sample, 0.25 for teeth with periodontal problems and 0.75 for teeth without periodontal pathologies. The technique was reliable for periodontally sound teeth, but not for periodontally diseased teeth.
NASA Astrophysics Data System (ADS)
Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar
2017-11-01
Estimation of an unknown atmospheric release from a finite set of concentration measurements is an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and proceeds in three steps. First, an estimation of the point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and those predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Source estimation is then repeated with these modified adjoint functions to analyse the effect of the modification. The process is tested for two well-known inversion techniques, namely renormalization and least-squares. The proposed methodology and inversion techniques are evaluated for a real scenario using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement is observed in the source estimation after minimizing the representativity errors.
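The three-step procedure can be illustrated with a toy single-source, linear source-receptor model. The function names, the scalar-intensity setup, and the plain least-squares inversion are illustrative simplifications (the paper also employs a renormalization technique and estimates the source location):

```python
import numpy as np

def estimate_source(a, c):
    """Least-squares intensity q of a single point source whose
    source-receptor (adjoint) functions are `a`, given c ~ q * a."""
    return float(a @ c / (a @ a))

def regression_corrected_estimate(a, c):
    """Sketch of the representativity-error correction:
    (1) first-guess inversion,
    (2) regress measured concentrations on those predicted from the
        retrieved source,
    (3) fold the fitted slope/intercept into the adjoint functions
        and invert again."""
    q0 = estimate_source(a, c)                    # step 1
    pred = q0 * a
    slope, intercept = np.polyfit(pred, c, 1)     # step 2: c ~ slope*pred + b
    a_mod = slope * a                             # step 3: modified adjoints
    return estimate_source(a_mod, c - intercept)
```

In a synthetic case where the measurements are a scaled and offset version of the model prediction, the regression absorbs the representativity error and drives the post-fit residual to numerical zero, whereas the naive inversion leaves a systematic misfit.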
NASA Astrophysics Data System (ADS)
Khallaf, Haitham S.; Elfiqi, Abdulaziz E.; Shalaby, Hossam M. H.; Sampei, Seiichi; Obayya, Salah S. A.
2018-06-01
We investigate the performance of hybrid L-ary quadrature-amplitude-modulation multi-pulse pulse-position-modulation (LQAM-MPPM) techniques over an exponentiated Weibull (EW) fading free-space optical (FSO) channel, considering both weather and pointing-error effects. Upper-bound and approximate-tight upper-bound expressions for the bit-error rate (BER) of LQAM-MPPM techniques over EW FSO channels are obtained, taking into account the effects of fog, beam divergence, and pointing error. Setup block diagrams for both the transmitter and receiver of the LQAM-MPPM/FSO system are introduced and illustrated. The BER expressions are evaluated numerically, and the results reveal that the LQAM-MPPM technique outperforms ordinary LQAM and MPPM schemes under different fading levels and weather conditions. Furthermore, the effect of the modulation index is investigated; the results indicate that a modulation index greater than 0.4 is required to optimize system performance. Finally, pointing error introduces a large power penalty on LQAM-MPPM system performance. Specifically, at a BER of 10^-9, pointing error introduces power penalties of about 45 and 28 dB for receiver aperture sizes of DR = 50 and 200 mm, respectively.
Ownsworth, Tamara; Fleming, Jennifer; Tate, Robyn; Shum, David H K; Griffin, Janelle; Schmidt, Julia; Lane-Brown, Amanda; Kendall, Melissa; Chevignard, Mathilde
2013-11-05
Poor skills generalization poses a major barrier to successful outcomes of rehabilitation after traumatic brain injury (TBI). Error-based learning (EBL) is a relatively new intervention approach that aims to promote skills generalization by teaching people internal self-regulation skills, or how to anticipate, monitor and correct their own errors. This paper describes the protocol of a study that aims to compare the efficacy of EBL and errorless learning (ELL) for improving error self-regulation, behavioral competency, awareness of deficits and long-term outcomes after TBI. This randomized, controlled trial (RCT) has two arms (EBL and ELL); each arm entails 8 × 2 h training sessions conducted within the participants' homes. The first four sessions involve a meal preparation activity, and the final four sessions incorporate a multitasking errand activity. Based on a sample size estimate, 135 participants with severe TBI will be randomized into either the EBL or ELL condition. The primary outcome measure assesses error self-regulation skills on a task related to but distinct from training. Secondary outcomes include measures of self-monitoring and self-regulation, behavioral competency, awareness of deficits, role participation and supportive care needs. Assessments will be conducted at pre-intervention, post-intervention, and at 6-months post-intervention. This study seeks to determine the efficacy and long-term impact of EBL for training internal self-regulation strategies following severe TBI. In doing so, the study will advance theoretical understanding of the role of errors in task learning and skills generalization. EBL has the potential to reduce the length and costs of rehabilitation and lifestyle support because the techniques could enhance generalization success and lifelong application of strategies after TBI. ACTRN12613000585729.
Modeling and Control for Microgrids
NASA Astrophysics Data System (ADS)
Steenis, Joel
Traditional approaches to modeling microgrids include the behavior of each inverter operating in a particular network configuration and at a particular operating point. Such models quickly become computationally intensive for large systems. Similarly, traditional approaches to control do not use advanced methodologies and suffer from poor performance and limited operating range. In this document a linear model is derived for an inverter connected to the Thevenin equivalent of a microgrid. This model is then compared to a nonlinear simulation model and analyzed using the open and closed loop systems in both the time and frequency domains. The modeling error is quantified with emphasis on its use for controller design purposes. Control design examples are given using a Glover McFarlane controller, gain scheduled Glover McFarlane controller, and bumpless transfer controller which are compared to the standard droop control approach. These examples serve as a guide to illustrate the use of multi-variable modeling techniques in the context of robust controller design and show that gain scheduled MIMO control techniques can extend the operating range of a microgrid. A hardware implementation is used to compare constant gain droop controllers with Glover McFarlane controllers and shows a clear advantage of the Glover McFarlane approach.
ERIC Educational Resources Information Center
Wilson, Mark
This study investigates the accuracy of the Woodruff-Causey technique for estimating sampling errors for complex statistics. The technique may be applied when data are collected by using multistage clustered samples. The technique was chosen for study because of its relevance to the correct use of multivariate analyses in educational survey…
Utilization of a CRT display light pen in the design of feedback control systems
NASA Technical Reports Server (NTRS)
Thompson, J. G.; Young, K. R.
1972-01-01
A hierarchical structure of the interlinked programs was developed to provide a flexible computer-aided design tool. A graphical input technique and a data structure are considered which provide the capability of entering the control system model description into the computer in block diagram form. An information storage and retrieval system was developed to keep track of the system description, and analysis and simulation results, and to provide them to the correct routines for further manipulation or display. Error analysis and diagnostic capabilities are discussed, and a technique was developed to reduce a transfer function to a set of nested integrals suitable for digital simulation. A general, automated block diagram reduction procedure was set up to prepare the system description for the analysis routines.
Emerging Techniques for Field Device Security
Schwartz, Moses; Bechtel Corp.; Mulder, John; ...
2014-11-01
Critical infrastructure, such as electrical power plants and oil refineries, relies on embedded devices to control essential processes. State of the art security is unable to detect attacks on these devices at the hardware or firmware level. We provide an overview of the hardware used in industrial control system field devices, look at how these devices have been attacked, and discuss techniques and new technologies that may be used to secure them. We follow three themes: (1) Inspectability, the capability for an external arbiter to monitor the internal state of a device. (2) Trustworthiness, the degree to which a system will continue to function correctly despite disruption, error, or attack. (3) Diversity, the use of adaptive systems and complexity to make attacks more difficult by reducing the feasible attack surface.
Kreilinger, Alex; Hiebel, Hannah; Müller-Putz, Gernot R
2016-03-01
This work aimed to find and evaluate a new method for detecting errors in continuous brain-computer interface (BCI) applications. Instead of classifying errors on a single-trial basis, the new method was based on multiple events (MEs) analysis to increase the accuracy of error detection. In a BCI-driven car game, based on motor imagery (MI), discrete events were triggered whenever subjects collided with coins and/or barriers. Coins counted as correct events, whereas barriers were errors. This new method, termed ME method, combined and averaged the classification results of single events (SEs) and determined the correctness of MI trials, which consisted of event sequences instead of SEs. The benefit of this method was evaluated in an offline simulation. In an online experiment, the new method was used to detect erroneous MI trials. Such MI trials were discarded and could be repeated by the users. We found that, even with low SE error potential (ErrP) detection rates, feasible accuracies can be achieved when combining MEs to distinguish erroneous from correct MI trials. Online, all subjects reached higher scores with error detection than without, at the cost of longer times needed for completing the game. Findings suggest that ErrP detection may become a reliable tool for monitoring continuous states in BCI applications when combining MEs. This paper demonstrates a novel technique for detecting errors in online continuous BCI applications, which yields promising results even with low single-trial detection rates.
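The multiple-events idea above can be reduced to a few lines: rather than trusting one single-event error-potential classification, the signed outputs of all events in a trial are averaged and the whole trial is labeled from the mean. The function name, score convention, and threshold below are illustrative assumptions, not the paper's actual classifier.

```python
# Sketch of trial-level error detection from multiple noisy event scores.
# Positive scores indicate evidence of an error potential (an assumption).

def classify_trial(event_scores, threshold=0.0):
    """Label a motor-imagery trial from multiple event classifications."""
    mean_score = sum(event_scores) / len(event_scores)
    return "error" if mean_score > threshold else "correct"

# Individually unreliable events still yield a sensible trial-level decision:
print(classify_trial([0.4, -0.1, 0.3]))   # mean 0.2, above threshold
print(classify_trial([-0.5, 0.1, -0.2]))  # mean below threshold
```

Averaging several weak decisions is why the paper reports feasible trial accuracies even with low single-event detection rates.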
Sliding Mode Control (SMC) of Robot Manipulator via Intelligent Controllers
NASA Astrophysics Data System (ADS)
Kapoor, Neha; Ohri, Jyoti
2017-02-01
Despite extensive research, a key technical problem, namely the chattering of conventional, simple and robust SMC, remains a challenge to researchers and hence limits its practical application. However, newly developed soft-computing-based techniques can provide a solution. In order to combine the advantages of conventional and heuristic soft-computing-based control techniques, in this paper various commonly used intelligent techniques, namely neural networks, fuzzy logic and the adaptive neuro-fuzzy inference system (ANFIS), have been combined with the sliding mode controller (SMC). For validation, the proposed hybrid control schemes have been implemented for tracking a predefined trajectory by a robotic manipulator, incorporating structured and unstructured uncertainties in the system. After reviewing numerous papers, all the commonly occurring uncertainties, such as continuous disturbance, uniform random white noise, static friction (Coulomb friction and viscous friction) and dynamic friction (Dahl friction and LuGre friction), have been inserted in the system. Various performance indices, such as norm of tracking error, chattering in control input, norm of input torque, disturbance rejection and chattering rejection, have been used. Comparative results show that, with chattering almost eliminated, the intelligent SMC controllers are more efficient than simple SMC. It has also been observed from the results that the ANFIS-based controller has the best tracking performance with a reduced burden on the system. No paper in the literature has been found that includes all of these structured and unstructured uncertainties together for motion control of a robotic manipulator.
A functional approach to movement analysis and error identification in sports and physical education
Hossner, Ernst-Joachim; Schiebl, Frank; Göhner, Ulrich
2015-01-01
In a hypothesis-and-theory paper, a functional approach to movement analysis in sports is introduced. In this approach, contrary to classical concepts, it is no longer the “ideal” movement of elite athletes that is taken as a template for the movements produced by learners. Instead, movements are understood as the means to solve given tasks that, in turn, are defined by to-be-achieved task goals. A functional analysis comprises the steps of (1) recognizing constraints that define the functional structure, (2) identifying sub-actions that subserve the achievement of structure-dependent goals, (3) explicating modalities as specifics of the movement execution, and (4) assigning functions to actions, sub-actions and modalities. Regarding motor-control theory, a functional approach can be linked to a dynamical-system framework of behavioral shaping, to cognitive models of modular effect-related motor control as well as to explicit concepts of goal setting and goal achievement. Finally, it is shown that a functional approach is of particular help for sports practice in the context of structuring part practice, recognizing functionally equivalent task solutions, finding innovative technique alternatives, distinguishing errors from style, and identifying root causes of movement errors. PMID:26441717
Sample size determination in combinatorial chemistry.
Zhao, P L; Zambias, R; Bolognese, J A; Boulton, D; Chapman, K
1995-01-01
Combinatorial chemistry is gaining wide appeal as a technique for generating molecular diversity. Among the many combinatorial protocols, the split/recombine method is quite popular and particularly efficient at generating large libraries of compounds. In this process, polymer beads are equally divided into a series of pools and each pool is treated with a unique fragment; then the beads are recombined, mixed to uniformity, and redivided equally into a new series of pools for the subsequent couplings. The deviation from the ideal equimolar distribution of the final products is assessed by a special overall relative error, which is shown to be related to the Pearson statistic. Although the split/recombine sampling scheme is quite different from those used in analysis of categorical data, the Pearson statistic is shown to still follow a chi2 distribution. This result allows us to derive the required number of beads such that, with 99% confidence, the overall relative error is controlled to be less than a pregiven tolerable limit L1. In this paper, we also discuss another criterion, which determines the required number of beads so that, with 99% confidence, all individual relative errors are controlled to be less than a pregiven tolerable limit L2 (0 < L2 < 1). PMID:11607586
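The dependence of the overall relative error on bead count can be illustrated with a small simulation of one split step: beads are assigned to pools uniformly at random, and the root-mean-square relative deviation from the equimolar ideal is computed. The error definition and all numbers below are illustrative assumptions, not the paper's exact Pearson-based statistic, but they show the same qualitative scaling.

```python
# Toy split-step simulation: more beads per pool => smaller relative error,
# roughly like 1/sqrt(beads per pool). Illustrative stand-in for the paper's
# chi-squared-based sample-size argument.
import random

def overall_relative_error(n_beads, k_pools, rng):
    """RMS relative deviation from an ideal equal split of beads into pools."""
    counts = [0] * k_pools
    for _ in range(n_beads):
        counts[rng.randrange(k_pools)] += 1
    ideal = n_beads / k_pools
    return (sum((c - ideal) ** 2 for c in counts) / k_pools) ** 0.5 / ideal

rng = random.Random(0)
few = overall_relative_error(1_000, 20, rng)
many = overall_relative_error(100_000, 20, rng)
print(few, many)   # the error shrinks as the bead count grows
```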
NASA Technical Reports Server (NTRS)
Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)
2002-01-01
We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W(sub N) spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
Statistical analysis of AFE GN&C aeropass performance
NASA Technical Reports Server (NTRS)
Chang, Ho-Pen; French, Raymond A.
1990-01-01
Performance of the guidance, navigation, and control (GN&C) system used on the Aeroassist Flight Experiment (AFE) spacecraft has been studied with Monte Carlo techniques. The performance of the AFE GN&C is investigated with a 6-DOF numerical dynamic model which includes a Global Reference Atmospheric Model (GRAM) and a gravitational model with oblateness corrections. The study considers all the uncertainties due to the environment and the system itself. In the AFE's aeropass phase, perturbations on the system performance are caused by an error space which has over 20 dimensions of the correlated/uncorrelated error sources. The goal of this study is to determine, in a statistical sense, how much flight path angle error can be tolerated at entry interface (EI) and still have acceptable delta-V capability at exit to position the AFE spacecraft for recovery. Assuming there is fuel available to produce 380 ft/sec of delta-V at atmospheric exit, a 3-sigma standard deviation in flight path angle error of 0.04 degrees at EI would result in a 98-percent probability of mission success.
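The Monte Carlo argument above can be imitated with a toy model: draw entry flight-path-angle errors from a normal distribution with the quoted 3-sigma of 0.04 degrees, map each draw to a required exit delta-V through an assumed linear sensitivity, and count the fraction of cases inside the 380 ft/sec budget. The sensitivity and nominal delta-V values are illustrative assumptions; only the budget and the 3-sigma figure come from the abstract.

```python
# Toy Monte Carlo mission-success estimate. The mapping from flight path
# angle error to delta-V need (dv_nominal, dv_per_deg) is an assumption.
import random

def mission_success_rate(n_trials, sigma_deg=0.04 / 3, dv_budget=380.0,
                         dv_nominal=300.0, dv_per_deg=2000.0, seed=0):
    """Fraction of trials whose exit delta-V requirement stays within budget."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_trials):
        gamma_error = rng.gauss(0.0, sigma_deg)   # entry flight path angle error
        dv_needed = dv_nominal + dv_per_deg * abs(gamma_error)
        successes += dv_needed <= dv_budget
    return successes / n_trials

rate = mission_success_rate(100_000)
print(rate)
```

With these made-up sensitivities the budget is exceeded only beyond the 3-sigma angle error, so the success rate sits near the high-90s percent range, mirroring the structure (not the exact numbers) of the study's conclusion.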
Round-off error in long-term orbital integrations using multistep methods
NASA Technical Reports Server (NTRS)
Quinlan, Gerald D.
1994-01-01
Techniques for reducing round-off error are compared by testing them on high-order Stormer and symmetric multistep methods. The best technique for most applications is to write the equation in summed, function-evaluation form and to store the coefficients as rational numbers. A larger error reduction can be achieved by writing the equation in backward-difference form and performing some of the additions in extended precision, but this entails a larger central processing unit (CPU) cost.
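The extended-precision additions mentioned above can be imitated in ordinary double precision with compensated (Kahan) summation, which carries along the low-order bits that each addition would otherwise discard. This is an illustrative stand-in for the paper's technique, not its actual implementation.

```python
# Compensated (Kahan) summation: a software substitute for performing
# additions in extended precision.

def kahan_sum(values):
    """Sum values while compensating for lost low-order bits."""
    total = 0.0
    compensation = 0.0
    for v in values:
        y = v - compensation             # restore previously lost bits
        t = total + y
        compensation = (t - total) - y   # bits lost in this addition
        total = t
    return total

values = [0.1] * 10
naive = sum(values)              # accumulates rounding error
compensated = kahan_sum(values)
print(naive, compensated)
```

In a long orbital integration the same per-step rounding accumulates over millions of steps, which is why the paper's summed-form and extended-precision variants pay off.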
Modulation/demodulation techniques for satellite communications. Part 1: Background
NASA Technical Reports Server (NTRS)
Omura, J. K.; Simon, M. K.
1981-01-01
Basic characteristics of digital data transmission systems described include the physical communication links, the notion of bandwidth, FCC regulations, and performance measurements such as bit rates, bit error probabilities, throughputs, and delays. The error probability performance and spectral characteristics of various modulation/demodulation techniques commonly used or proposed for use in radio and satellite communication links are summarized. Forward error correction with block or convolutional codes is also discussed along with the important coding parameter, channel cutoff rate.
Kuldeep, B; Singh, V K; Kumar, A; Singh, G K
2015-01-01
In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design based on a hybrid of gradient based optimization and optimization of fractional derivative constraints is introduced. For the purpose of this work, recently proposed nature inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of QMF bank. The 2-channel QMF bank is also designed with particle swarm optimization (PSO) and artificial bee colony (ABC) nature inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norm of the error in the passband, stopband and transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient based optimization (Lagrange multiplier method) and nature inspired optimization (CS, MCS, WDO, PSO and ABC) and its usage for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the effectiveness of the proposed method. Results are also compared with the other existing algorithms, and it was found that the proposed method gives the best results in terms of peak reconstruction error and transition band error, while being comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower and higher order 2-channel QMF bank design. A comparative study of various nature inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
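The frequency-domain objective described above, squared response errors accumulated over the passband and stopband plus a term at the quadrature frequency, can be evaluated for a toy linear-phase prototype as follows. The band edges, grid density, error normalization, and example taps are all illustrative assumptions, not the paper's design parameters.

```python
# Grid-sampled band errors for an FIR prototype filter, in the spirit of the
# passband/stopband/quadrature objective described above.
import cmath
import math

def band_errors(taps, wp=0.4 * math.pi, ws=0.6 * math.pi, n_grid=200):
    """Return (passband, stopband, quadrature-frequency) squared errors."""
    def H(w):   # magnitude response of the FIR prototype
        return abs(sum(h * cmath.exp(-1j * w * k) for k, h in enumerate(taps)))
    grid = [math.pi * i / n_grid for i in range(n_grid + 1)]
    phi_p = sum((H(w) - 1.0) ** 2 for w in grid if w <= wp) / n_grid
    phi_s = sum(H(w) ** 2 for w in grid if w >= ws) / n_grid
    phi_t = (H(math.pi / 2.0) ** 2 - 0.5) ** 2   # power split at quadrature
    return phi_p, phi_s, phi_t

# the 2-tap half-band average hits the quadrature power split exactly:
phi_p, phi_s, phi_t = band_errors([0.5, 0.5])
print(phi_p, phi_s, phi_t)
```

An optimizer (gradient-based or nature-inspired) would adjust `taps` to drive a weighted sum of these three terms down, which is the structure of the hybrid method in the abstract.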
De Rosario, Helios; Page, Álvaro; Besa, Antonio
2017-09-06
The accurate location of the main axes of rotation (AoR) is a crucial step in many applications of human movement analysis. There are different formal methods to determine the direction and position of the AoR, whose performance varies across studies, depending on the pose and the source of errors. Most methods are based on minimizing squared differences between observed and modelled marker positions or rigid motion parameters, implicitly assuming independent and uncorrelated errors, but the largest error usually results from soft tissue artefacts (STA), which do not have such statistical properties and are not effectively cancelled out by such methods. However, with adequate methods it is possible to assume that STA only account for a small fraction of the observed motion and to obtain explicit formulas through differential analysis that relate STA components to the resulting errors in AoR parameters. In this paper such formulas are derived for three different functional calibration techniques (Geometric Fitting, mean Finite Helical Axis, and SARA), to explain why each technique behaves differently from the others, and to propose strategies to compensate for those errors. These techniques were tested with published data from a sit-to-stand activity, where the true axis was defined using bi-planar fluoroscopy. All the methods were able to estimate the direction of the AoR with an error of less than 5°, whereas there were errors in the location of the axis of 30-40 mm. Such location errors could be reduced to less than 17 mm by the methods based on equations that use rigid motion parameters (mean Finite Helical Axis, SARA) when the translation component was calculated using the three markers nearest to the axis. Copyright © 2017 Elsevier Ltd. All rights reserved.
Error analysis and system optimization of non-null aspheric testing system
NASA Astrophysics Data System (ADS)
Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo
2010-10-01
A non-null aspheric testing system, which employs a partial null lens (PNL) and the reverse iterative optimization reconstruction (ROR) technique, is proposed in this paper. Based on system modeling in ray tracing software, the parameters of each optical element are optimized, which makes the system modeling more precise. The systematic error of the non-null aspheric testing system is analyzed and can be categorized into two types: the error due to the surface parameters of the PNL in the system modeling, and the remainder, attributed to the non-null interferometer by the approach of error storage subtraction. Experimental results show that, after the systematic error is removed from the testing result of the non-null aspheric testing system, the aspheric surface is precisely reconstructed by the ROR technique, and accounting for the systematic error greatly increases the test accuracy of the non-null aspheric testing system.
Threat and error management for anesthesiologists: a predictive risk taxonomy
Ruskin, Keith J.; Stiegler, Marjorie P.; Park, Kellie; Guffey, Patrick; Kurup, Viji; Chidester, Thomas
2015-01-01
Purpose of review Patient care in the operating room is a dynamic interaction that requires cooperation among team members and reliance upon sophisticated technology. Most human factors research in medicine has been focused on analyzing errors and implementing system-wide changes to prevent them from recurring. We describe a set of techniques that has been used successfully by the aviation industry to analyze errors and adverse events and explain how these techniques can be applied to patient care. Recent findings Threat and error management (TEM) describes adverse events in terms of risks or challenges that are present in an operational environment (threats) and the actions of specific personnel that potentiate or exacerbate those threats (errors). TEM is a technique widely used in aviation, and can be adapted for the use in a medical setting to predict high-risk situations and prevent errors in the perioperative period. A threat taxonomy is a novel way of classifying and predicting the hazards that can occur in the operating room. TEM can be used to identify error-producing situations, analyze adverse events, and design training scenarios. Summary TEM offers a multifaceted strategy for identifying hazards, reducing errors, and training physicians. A threat taxonomy may improve analysis of critical events with subsequent development of specific interventions, and may also serve as a framework for training programs in risk mitigation. PMID:24113268
Metric Selection for Evaluation of Human Supervisory Control Systems
2009-12-01
finding a significant effect when there is none becomes more likely. The inflation of type I error due to multiple dependent variables can be handled ... with multivariate analysis techniques, such as Multivariate Analysis of Variance (MANOVA) (Johnson & Wichern, 2002). However, it should be noted that ... the few significant differences among many insignificant ones. The best way to avoid failure to identify significant differences is to design an
MQCC: Maximum Queue Congestion Control for Multipath Networks with Blockage
2015-10-19
higher error rates in wireless networks result in a great deal of “false” congestion indications, resulting in underutilization of the network [4] ... approaches that are relevant to lossy wireless networks. Multipath TCP (MPTCP) schemes [9], [10] explore the design and implementation of multipath ... attempts to “fix” TCP to work with lossy wireless networks using existing techniques. The authors have taken the view that because packet losses are
Aeronautical Decision Making for Student and Private Pilots.
1987-05-01
you learn to gain voluntary control over your body to achieve the relaxation response. In autogenic training, you learn to shut down many bodily ... Abstract: "Aviation accident data indicate that the majority of aircraft mishaps are due to judgment error. This training manual is part of a project to ... develop materials and techniques to help improve pilot decision making. Training programs using prototype versions of these materials have
Nonlinear truncation error analysis of finite difference schemes for the Euler equations
NASA Technical Reports Server (NTRS)
Klopfer, G. H.; Mcrae, D. S.
1983-01-01
It is pointed out that, in general, dissipative finite difference integration schemes have been found to be quite robust when applied to the Euler equations of gas dynamics. The present investigation considers a modified equation analysis of both implicit and explicit finite difference techniques as applied to the Euler equations. The analysis is used to identify those error terms which contribute most to the observed solution errors. A technique for analytically removing the dominant error terms is demonstrated, resulting in a greatly improved solution for the explicit Lax-Wendroff schemes. It is shown that the nonlinear truncation errors are quite large and distributed quite differently for each of the three conservation equations as applied to a one-dimensional shock tube problem.
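To make the class of explicit schemes analyzed above concrete, here is a single Lax-Wendroff update for the 1-D linear advection equation u_t + a u_x = 0 on a periodic grid. The grid size and Courant number below are illustrative assumptions; the paper applies the analogous analysis to the full Euler equations.

```python
# One Lax-Wendroff step for 1-D linear advection on a periodic grid.

def lax_wendroff_step(u, cfl):
    """Advance one step; cfl = a*dt/dx is the Courant number."""
    n = len(u)
    new = [0.0] * n
    for i in range(n):
        um, up = u[(i - 1) % n], u[(i + 1) % n]
        new[i] = (u[i]
                  - 0.5 * cfl * (up - um)                     # advection term
                  + 0.5 * cfl ** 2 * (up - 2.0 * u[i] + um))  # 2nd-order correction
    return new

# at cfl = 1 the scheme propagates the profile exactly one cell per step:
print(lax_wendroff_step([0.0, 1.0, 0.0, 0.0], 1.0))
```

The second-difference term is exactly the kind of truncation-error contribution the modified-equation analysis isolates; away from cfl = 1 it behaves as numerical dispersion/dissipation.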
On parameters identification of computational models of vibrations during quiet standing of humans
NASA Astrophysics Data System (ADS)
Barauskas, R.; Krušinskienė, R.
2007-12-01
Vibration of the center of pressure (COP) of human body on the base of support during quiet standing is a very popular clinical research, which provides useful information about the physical and health condition of an individual. In this work, vibrations of COP of a human body in forward-backward direction during still standing are generated using controlled inverted pendulum (CIP) model with a single degree of freedom (dof) supplied with proportional, integral and differential (PID) controller, which represents the behavior of the central neural system of a human body and excited by cumulative disturbance vibration, generated within the body due to breathing or any other physical condition. The identification of the model and disturbance parameters is an important stage while creating a close-to-reality computational model able to evaluate features of disturbance. The aim of this study is to present the CIP model parameters identification approach based on the information captured by time series of the COP signal. The identification procedure is based on an error function minimization. Error function is formulated in terms of time laws of computed and experimentally measured COP vibrations. As an alternative, error function is formulated in terms of the stabilogram diffusion function (SDF). The minimization of error functions is carried out by employing methods based on sensitivity functions of the error with respect to model and excitation parameters. The sensitivity functions are obtained by using the variational techniques. The inverse dynamic problem approach has been employed in order to establish the properties of the disturbance time laws ensuring the satisfactory coincidence of measured and computed COP vibration laws. The main difficulty of the investigated problem is encountered during the model validation stage. Generally, neither the PID controller parameter set nor the disturbance time law are known in advance. 
In this work, an error function formulated in terms of time derivative of disturbance torque has been proposed in order to obtain PID controller parameters, as well as the reference time law of the disturbance. The disturbance torque is calculated from experimental data using the inverse dynamic approach. Experiments presented in this study revealed that vibrations of disturbance torque and PID controller parameters identified by the method may be qualified as feasible in humans. Presented approach may be easily extended to structural models with any number of dof or higher structural complexity.
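A toy version of the CIP posture model above, a single-dof inverted pendulum whose sway angle is regulated by a PID controller standing in for the central neural system, can be simulated with forward-Euler steps. All physical constants and gains are illustrative assumptions, and the internal disturbance torque central to the paper is omitted for brevity.

```python
# Toy controlled-inverted-pendulum (CIP) simulation with a PID regulator
# on the sway angle. Constants and gains are illustrative.
import math

def simulate_cip(kp, ki, kd, theta0=0.05, dt=0.001, steps=2000,
                 g=9.81, length=1.0):
    """Return final |sway angle| after `steps` forward-Euler steps."""
    theta, omega, integral = theta0, 0.0, 0.0
    prev_error = theta0
    for _ in range(steps):
        error = theta                              # regulate sway angle to zero
        integral += error * dt
        derivative = (error - prev_error) / dt
        torque = -(kp * error + ki * integral + kd * derivative)
        alpha = (g / length) * math.sin(theta) + torque   # angular acceleration
        omega += alpha * dt
        theta += omega * dt
        prev_error = error
    return abs(theta)

# proportional gain exceeding g/l plus derivative damping keeps the sway small:
final_sway = simulate_cip(kp=50.0, ki=1.0, kd=10.0)
print(final_sway)
```

Identifying (kp, ki, kd) and the disturbance time law from measured COP traces is precisely the inverse problem the paper's sensitivity-function method addresses.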
An improved switching converter model. Ph.D. Thesis. Final Report
NASA Technical Reports Server (NTRS)
Shortt, D. J.
1982-01-01
The nonlinear modeling and analysis of dc-dc converters in the continuous mode and discontinuous mode was done by averaging and discrete sampling techniques. A model was developed by combining these two techniques. This model, the discrete average model, accurately predicts the envelope of the output voltage and is easy to implement in circuit and state variable forms. The proposed model is shown to be dependent on the type of duty cycle control. The proper selection of the power stage model, between average and discrete average, is largely a function of the error processor in the feedback loop. The accuracy of the measurement data taken by a conventional technique is affected by the conditions at which the data is collected.
Importance of Geosat orbit and tidal errors in the estimation of large-scale Indian Ocean variations
NASA Technical Reports Server (NTRS)
Perigaud, Claire; Zlotnicki, Victor
1992-01-01
To improve the estimate accuracy of large-scale meridional sea-level variations, Geosat ERM data on the Indian Ocean for a 26-month period were processed using two different techniques of orbit error reduction. The first technique removes an along-track polynomial of degree 1 over about 5000 km and the second technique removes an along-track once-per-revolution sine wave about 40,000 km. Results obtained show that the polynomial technique produces stronger attenuation of both the tidal error and the large-scale oceanic signal. After filtering, the residual difference between the two methods represents 44 percent of the total variance and 23 percent of the annual variance. The sine-wave method yields a larger estimate of annual and interannual meridional variations.
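The once-per-revolution correction above amounts to projecting the along-track residuals onto a sine, a cosine, and a constant, then subtracting the fit; on a uniformly sampled closed track these basis functions are discretely orthogonal, so the projections reduce to simple sums. The track length and error amplitudes below are illustrative assumptions, not Geosat values.

```python
# Remove a bias plus once-per-revolution sine wave from uniformly sampled
# along-track residuals, using orthogonal-basis projections.
import math

def remove_once_per_rev(heights, n_revs=1):
    """Subtract the best-fit bias + once-per-revolution sine wave."""
    n = len(heights)
    a = b = c = 0.0
    for i, h in enumerate(heights):
        phase = 2.0 * math.pi * n_revs * i / n
        a += h * math.sin(phase)
        b += h * math.cos(phase)
        c += h
    a *= 2.0 / n        # sine projection
    b *= 2.0 / n        # cosine projection
    c /= n              # mean (bias)
    return [h - (a * math.sin(2.0 * math.pi * n_revs * i / n)
                 + b * math.cos(2.0 * math.pi * n_revs * i / n) + c)
            for i, h in enumerate(heights)]

# a pure orbit-error signal (sine plus bias) is removed almost exactly:
track = [0.5 * math.sin(2.0 * math.pi * i / 100) + 0.2 for i in range(100)]
residual = remove_once_per_rev(track)
```

The paper's caution applies here too: any genuine oceanographic signal that resembles the fitted basis over the fitting interval is removed along with the error, which is why the short-arc polynomial attenuated more ocean signal than the 40,000 km sine.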
Song, Zhankui; Sun, Kaibiao
2014-01-01
A novel adaptive backstepping sliding mode control (ABSMC) law with a fuzzy monitoring strategy is proposed for the tracking control of a class of nonlinear mechanical systems. The proposed ABSMC scheme, combining sliding mode control and the backstepping technique, ensures that the sliding motion occurs in finite time and that the tracking-error trajectory converges to the equilibrium point. To obtain a better perturbation rejection property, an adaptive control law is employed to compensate for the lumped perturbation. Furthermore, we introduce a fuzzy monitoring strategy to improve adaptive capacity and soften the control signal. The convergence and stability of the proposed control scheme are proved by using Lyapunov's method. Finally, numerical simulations demonstrate the effectiveness of the proposed control scheme. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Model and experiments to optimize co-adaptation in a simplified myoelectric control system.
Couraud, M; Cattaert, D; Paclet, F; Oudeyer, P Y; de Rugy, A
2018-04-01
To compensate for a limb lost in an amputation, myoelectric prostheses use surface electromyography (EMG) from the remaining muscles to control the prosthesis. Despite considerable progress, myoelectric controls remain markedly different from the way we normally control movements, and require intense user adaptation. To overcome this, our goal is to explore concurrent machine co-adaptation techniques that were developed in the field of brain-machine interfaces and that are beginning to be used in myoelectric controls. We combined a simplified myoelectric control with a perturbation for which human adaptation is well characterized and modeled, in order to explore co-adaptation settings in a principled manner. First, we reproduced results obtained in a classical visuomotor rotation paradigm in our simplified myoelectric context, where we rotate the muscle pulling vectors used to reconstruct wrist force from EMG. Then, a model of human adaptation in response to directional error was used to simulate various co-adaptation settings, where perturbations and machine co-adaptation are both applied to the muscle pulling vectors. These simulations established that a relatively low gain of machine co-adaptation that minimizes final errors generates slow and incomplete adaptation, while higher gains increase the adaptation rate but also the errors, by amplifying noise. After experimental verification on real subjects, we tested a variable gain that combines the advantages of both, and implemented it with directionally tuned neurons similar to those used to model human adaptation. This enables machine co-adaptation to locally improve myoelectric control and to absorb more challenging perturbations. The simplified context used here made it possible to explore co-adaptation settings in both simulations and experiments, and to raise important considerations such as the need for a variable gain encoded locally.
The benefits and limits of extending this approach to more complex and functional myoelectric contexts are discussed.
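The trial-by-trial model of human adaptation referred to above is commonly written as a linear state-space update: an internal estimate is partially retained between trials and corrected by a fraction of the last directional error. The retention and learning-rate values below are illustrative assumptions, not the fitted parameters of the study.

```python
# Standard two-parameter state-space model of trial-by-trial adaptation
# to a constant perturbation (e.g. a rotation of the muscle pulling vectors).

def simulate_adaptation(perturbation, retention=0.98, learning_rate=0.2,
                        trials=100):
    """Directional error on each trial while adapting to a fixed perturbation."""
    estimate = 0.0
    errors = []
    for _ in range(trials):
        error = perturbation - estimate           # error experienced this trial
        errors.append(error)
        estimate = retention * estimate + learning_rate * error
    return errors

errors = simulate_adaptation(30.0)   # e.g. a 30-degree rotation of the mapping
print(errors[0], errors[-1])
```

Because retention is below one, the error plateaus short of zero; this residual asymptotic error is exactly what a machine co-adaptation gain can absorb, at the cost of amplifying noise if the gain is too high.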
Real-Time Stability and Control Derivative Extraction From F-15 Flight Data
NASA Technical Reports Server (NTRS)
Smith, Mark S.; Moes, Timothy R.; Morelli, Eugene A.
2003-01-01
A real-time, frequency-domain, equation-error parameter identification (PID) technique was used to estimate stability and control derivatives from flight data. This technique is being studied to support adaptive control system concepts currently being developed by NASA (National Aeronautics and Space Administration), academia, and industry. This report describes the basic real-time algorithm used for this study and implementation issues for onboard usage as part of an indirect-adaptive control system. A confidence measures system for automated evaluation of PID results is discussed. Results calculated using flight data from a modified F-15 aircraft are presented. Test maneuvers included pilot input doublets and automated inputs at several flight conditions. Estimated derivatives are compared to aerodynamic model predictions. Data indicate that the real-time PID used for this study performs well enough to be used for onboard parameter estimation. For suitable test inputs, the parameter estimates converged rapidly to sufficient levels of accuracy. The devised confidence measures used were moderately successful.
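Equation-error identification, the family to which the real-time algorithm above belongs, fits the measured state derivative as a linear function of states and inputs. The scalar time-domain sketch below regresses a finite-difference derivative on [x, u] for an assumed model x_dot = a*x + b*u; the true values, input sequence, and step size are made up for the demo (the flight algorithm works in the frequency domain on the aircraft equations of motion).

```python
# Equation-error least squares on a scalar model x_dot = a*x + b*u.

def identify_ab(x, u, dt):
    """Ordinary least-squares estimate of (a, b) in x_dot = a*x + b*u."""
    sxx = sxu = suu = sxd = sud = 0.0
    for i in range(len(x) - 1):
        xd = (x[i + 1] - x[i]) / dt               # finite-difference derivative
        sxx += x[i] * x[i]
        sxu += x[i] * u[i]
        suu += u[i] * u[i]
        sxd += x[i] * xd
        sud += u[i] * xd
    det = sxx * suu - sxu * sxu                   # 2x2 normal equations
    a = (sxd * suu - sxu * sud) / det
    b = (sxx * sud - sxu * sxd) / det
    return a, b

# data from a known system (a=-2, b=3) driven by a square-wave "doublet" input:
dt = 0.01
u = [1.0 if (i // 10) % 2 == 0 else -1.0 for i in range(200)]
x = [1.0]
for i in range(199):
    x.append(x[i] + dt * (-2.0 * x[i] + 3.0 * u[i]))
a_est, b_est = identify_ab(x, u, dt)
print(a_est, b_est)
```

The choice of a doublet-like input mirrors the report's test maneuvers: the regressors must be sufficiently excited or the normal equations become ill-conditioned, which is what the confidence-measures system guards against.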
Hou, Yi-You
2017-09-01
This article establishes evolutionary programming (EP) algorithm-based proportional-integral-derivative (PID) control methods to guarantee synchronization of the master and slave Rikitake chaotic systems. For PID synchronous control, the EP algorithm is used to find the optimal PID controller parameters kp, ki, kd by the integrated absolute error (IAE) method for the convergence conditions. In order to verify the system performance, basic electronic components, including operational amplifiers (OPAs), resistors, and capacitors, are used to implement the proposed Rikitake chaotic systems. Finally, the experimental results validate the proposed Rikitake chaotic synchronization approach. Copyright © 2017. Published by Elsevier Ltd.
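The EP-plus-IAE tuning loop described above can be sketched in miniature: candidate gains are mutated, each candidate is scored by the integrated absolute error of a simulated step response, and the best survives. The plant here is a first-order lag rather than the Rikitake system, the derivative term is omitted, and all hyperparameters are illustrative assumptions.

```python
# Toy evolutionary tuning of PI gains by minimizing the IAE of a simulated
# step response. Plant and hyperparameters are illustrative stand-ins.
import random

def iae_of_pi(kp, ki, dt=0.01, steps=500):
    """IAE of a PI-controlled first-order lag y' = -y + u tracking a unit step."""
    y = integral = iae = 0.0
    for _ in range(steps):
        error = 1.0 - y
        integral += error * dt
        u = kp * error + ki * integral
        y += dt * (-y + u)
        iae += abs(error) * dt
    return iae

def evolve_pi(generations=30, seed=1):
    """Mutate the gains and keep any mutation that lowers the IAE."""
    rng = random.Random(seed)
    best = (1.0, 0.1)
    best_cost = iae_of_pi(*best)
    for _ in range(generations):
        candidate = tuple(max(0.0, g + rng.gauss(0.0, 0.5)) for g in best)
        cost = iae_of_pi(*candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost

(kp, ki), cost = evolve_pi()
print(kp, ki, cost)
```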
Correcting for deformation in skin-based marker systems.
Alexander, E J; Andriacchi, T P
2001-03-01
A new technique is described that reduces error due to skin movement artifact in the opto-electronic measurement of in vivo skeletal motion. This work builds on a previously described point cluster technique marker set and estimation algorithm by extending the transformation equations to the general deformation case using a set of activity-dependent deformation models. Skin deformation during activities of daily living is modeled as consisting of a functional form defined over the observation interval (the deformation model) plus additive noise (modeling error). The method is described as an interval deformation technique. The method was tested using simulation trials with systematic and random components of deformation error introduced into marker position vectors. The technique was found to substantially outperform methods that require rigid-body assumptions. The method was tested in vivo on a patient fitted with an external fixation device (Ilizarov). Simultaneous measurements from markers placed on the Ilizarov device (fixed to bone) were compared to measurements derived from skin-based markers. The interval deformation technique reduced the errors in limb segment pose estimates by 33 and 25% compared to the classic rigid-body technique for position and orientation, respectively. This newly developed method has demonstrated that by accounting for the changing shape of the limb segment, a substantial improvement in the estimates of in vivo skeletal movement can be achieved.
NASA Astrophysics Data System (ADS)
Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory
2017-04-01
The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to other inversion techniques based on the principles of regularization, Bayesian inference, minimum norm, maximum entropy on the mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to the other techniques, with a practical choice of a priori information and error statistics, while eliminating the need for additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice for the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made using real measurements from a continuous point release conducted in the Fusion Field Trials, Dugway Proving Ground, Utah.
Improving Word Learning in Children Using an Errorless Technique
ERIC Educational Resources Information Center
Warmington, Meesha; Hitch, Graham J.; Gathercole, Susan E.
2013-01-01
The current experiment examined the relative advantage of an errorless learning technique over an errorful one in the acquisition of novel names for unfamiliar objects in typically developing children aged between 7 and 9 years. Errorless learning led to significantly better learning than did errorful learning. Processing speed and vocabulary…
Digital identification of cartographic control points
NASA Technical Reports Server (NTRS)
Gaskell, R. W.
1988-01-01
Techniques have been developed for the sub-pixel location of control points in satellite images returned by the Voyager spacecraft. The procedure uses digital imaging data in the neighborhood of the point to form a multipicture model of a piece of the surface. Comparison of this model with the digital image in each picture determines the control point locations to about a tenth of a pixel. At this level of precision, previously insignificant effects must be considered, including chromatic aberration, high level imaging distortions, and systematic errors due to navigation uncertainties. Use of these methods in the study of Jupiter's satellite Io has proven very fruitful.
DeLacy, Brendan G; Bandy, Alan R
2008-01-01
An atmospheric pressure ionization mass spectrometry/isotopically labeled standard (APIMS/ILS) method has been developed for the determination of carbon dioxide (CO2) concentration. Descriptions of the instrumental components, the ionization chemistry, and the statistics associated with the analytical method are provided. This method represents an alternative to the nondispersive infrared (NDIR) technique, which is currently used in the atmospheric community to determine atmospheric CO2 concentrations. The APIMS/ILS and NDIR methods exhibit a decreased sensitivity for CO2 in the presence of water vapor. Therefore, dryers, such as Nafion dryers, are used to remove water before detection. The APIMS/ILS method measures mixing ratios and demonstrates linearity and range in the presence or absence of a dryer. The NDIR technique, on the other hand, measures molar concentrations. The second half of this paper describes errors in molar concentration measurements that are caused by drying. An equation describing the errors was derived from the ideal gas law, the conservation of mass, and Dalton's law. The purpose of this derivation was to quantify errors in the NDIR technique that are caused by drying. Laboratory experiments were conducted to verify the errors in post-dryer CO2 concentration measurements created solely by the dryer. The laboratory experiments verified the theoretically predicted errors in the derived equations. There are numerous references in the literature that describe the use of a dryer in conjunction with the NDIR technique; however, these references do not address the errors that are caused by drying.
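The drying error described above can be illustrated with a short calculation. This is a reconstruction from the ideal gas law and Dalton's law under the assumption of constant pressure and temperature across the dryer, not the paper's exact equation:

```python
R = 8.314  # universal gas constant, J/(mol*K)

def ndir_drying_error(x_co2, x_h2o, pressure=101325.0, temp=298.15):
    """Relative error in an NDIR molar-concentration reading caused by
    removing water vapor before detection.  Assumes ideal-gas behavior,
    Dalton's law, and identical P and T upstream and downstream of the
    dryer (an illustrative reconstruction, not the paper's derivation)."""
    c_ambient = x_co2 * pressure / (R * temp)         # mol/m^3 in moist air
    x_co2_dry = x_co2 / (1.0 - x_h2o)                 # mole fraction after drying
    c_post_dryer = x_co2_dry * pressure / (R * temp)  # what the NDIR cell sees
    return (c_post_dryer - c_ambient) / c_ambient     # = x_h2o / (1 - x_h2o)
```

For a typical ambient water-vapor mole fraction of 2%, the post-dryer reading overstates the ambient CO2 molar concentration by about 2%, which is why the distinction between mixing ratios and molar concentrations matters here.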
Image data compression having minimum perceptual error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1995-01-01
A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
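The DCT-plus-quantization-matrix mechanics at the core of the method can be sketched as follows. The perceptual masking model itself is omitted and the quantization matrix Q is treated as a given input (an assumption), so this shows only the transform and quantization steps:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are basis functions)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1.0 / np.sqrt(2.0)
    return C * np.sqrt(2.0 / n)

def quantize_block(block, Q):
    """Forward DCT of one image block, quantize each coefficient by the
    corresponding entry of Q, then inverse-transform.  Coarser Q entries
    discard more levels in that frequency band."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    quantized = np.round(coeffs / Q) * Q
    return C.T @ quantized @ C
```

Because the DCT basis is orthonormal, the reconstruction error has exactly the same energy as the coefficient quantization error, which is what allows a perceptual model to shape the visible error by choosing Q entry by entry.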
The calculation of average error probability in a digital fibre optical communication system
NASA Astrophysics Data System (ADS)
Rugemalira, R. A. M.
1980-03-01
This paper deals with the problem of determining the average error probability in a digital fibre optical communication system in the presence of message-dependent inhomogeneous non-stationary shot noise, additive Gaussian noise, and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared. The Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity.
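The gap between bound-based and exact error-rate evaluation that motivates this comparison can be illustrated for the simplest case of pure additive Gaussian noise (an assumption; the paper's model also includes shot noise and intersymbol interference):

```python
import math

def q_function(x):
    """Exact Gaussian tail probability Q(x) = P(N(0,1) > x), i.e. the
    bit-error probability for antipodal signaling at amplitude-to-noise
    ratio x."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def chernoff_bound(x):
    """Chernoff bound on the Gaussian tail: Q(x) <= (1/2) exp(-x^2 / 2)."""
    return 0.5 * math.exp(-x * x / 2.0)
```

The Chernoff bound is always conservative (it overestimates the error probability by a growing factor at high signal-to-noise ratio), which is consistent with the abstract's observation that the characteristic function technique, being closer to the exact tail probability, predicts a higher receiver sensitivity.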
End-to-end Coronagraphic Modeling Including a Low-order Wavefront Sensor
NASA Technical Reports Server (NTRS)
Krist, John E.; Trauger, John T.; Unwin, Stephen C.; Traub, Wesley A.
2012-01-01
To evaluate space-based coronagraphic techniques, end-to-end modeling is necessary to simulate realistic fields containing speckles caused by wavefront errors. Real systems will suffer from pointing errors and thermal and motion-induced mechanical stresses that introduce time-variable wavefront aberrations that can reduce the field contrast. A low-order wavefront sensor (LOWFS) is needed to measure these changes at a sufficiently high rate to maintain the contrast level during observations. We implement here a LOWFS and corresponding low-order wavefront control subsystem (LOWFCS) in end-to-end models of a space-based coronagraph. Our goal is to be able to accurately duplicate the effect of the LOWFS+LOWFCS without explicitly evaluating the end-to-end model at numerous time steps.
Generation of dark hollow beam via coherent combination based on adaptive optics.
Zheng, Yi; Wang, Xiaohua; Shen, Feng; Li, Xinyang
2010-12-20
A novel method for generating a dark hollow beam (DHB) is proposed and studied both theoretically and experimentally. A coherent combination technique for laser arrays is implemented based on adaptive optics (AO). A beam arraying structure and an active segmented mirror are designed and described. Piston errors are extracted by a zero-order interference detection system with the help of a custom-made photodetector array. An algorithm called the extremum approach is adopted to calculate feedback control signals. A dynamic piston error is introduced via LiNbO3 to test the capability of the AO servo. In closed loop, a stable and clear DHB is obtained. The experimental results confirm the feasibility of the concept.
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented in real time onboard an aircraft.
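The frequency-domain equation-error idea can be sketched for a first-order system. In this illustration the Fourier sums are computed in one pass rather than updated recursively per sample, and the plant, input frequencies, and record length are all illustrative assumptions, not the paper's aircraft models:

```python
import numpy as np

# Simulate x' = a*x + b*u with known truth, then estimate (a, b) from
# the frequency-domain equation error:  jw*X(w) = a*X(w) + b*U(w).
dt, t_end = 1e-3, 20.0
t = np.arange(0.0, t_end, dt)
a_true, b_true = -2.0, 1.0
freqs = 2.0 * np.pi * np.array([0.1, 0.2, 0.3])   # rad/s, periodic over 10 s
u = np.sum([np.sin(w * t) for w in freqs], axis=0)

x = np.zeros_like(t)
for k in range(len(t) - 1):                        # explicit Euler integration
    x[k + 1] = x[k] + dt * (a_true * x[k] + b_true * u[k])

win = t >= 10.0                                    # discard the transient

def dft_at(sig, w):
    """Finite Fourier transform of the windowed signal at frequency w.
    (The real-time method updates this sum recursively as samples arrive.)"""
    return np.sum(sig[win] * np.exp(-1j * w * t[win])) * dt

X = np.array([dft_at(x, w) for w in freqs])
U = np.array([dft_at(u, w) for w in freqs])
Y = 1j * freqs * X                                 # transform of x'(t)
A = np.column_stack([X, U])
theta, *_ = np.linalg.lstsq(
    np.vstack([A.real, A.imag]),
    np.concatenate([Y.real, Y.imag]), rcond=None)
a_hat, b_hat = theta
```

Because the regression jwX = aX + bU is linear in the parameters, the estimate needs no a priori starting values; it converges as soon as the chosen frequencies are excited by the input.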
C-band radar pulse Doppler error: Its discovery, modeling, and elimination
NASA Technical Reports Server (NTRS)
Krabill, W. B.; Dempsey, D. J.
1978-01-01
The discovery of a C-band radar pulse Doppler error is discussed, and the use of the GEOS 3 satellite's coherent transponder to isolate the error source is described. An analysis of the pulse Doppler tracking loop is presented, and a mathematical model for the error is developed. Error correction techniques were developed and are described, including implementation details.
Human Error and the International Space Station: Challenges and Triumphs in Science Operations
NASA Technical Reports Server (NTRS)
Harris, Samantha S.; Simpson, Beau C.
2016-01-01
Any system with a human component is inherently risky. Studies in human factors and psychology have repeatedly shown that human operators will inevitably make errors, regardless of how well they are trained. Onboard the International Space Station (ISS) where crew time is arguably the most valuable resource, errors by the crew or ground operators can be costly to critical science objectives. Operations experts at the ISS Payload Operations Integration Center (POIC), located at NASA's Marshall Space Flight Center in Huntsville, Alabama, have learned that from payload concept development through execution, there are countless opportunities to introduce errors that can potentially result in costly losses of crew time and science. To effectively address this challenge, we must approach the design, testing, and operation processes with two specific goals in mind. First, a systematic approach to error and human centered design methodology should be implemented to minimize opportunities for user error. Second, we must assume that human errors will be made and enable rapid identification and recoverability when they occur. While a systematic approach and human centered development process can go a long way toward eliminating error, the complete exclusion of operator error is not a reasonable expectation. The ISS environment in particular poses challenging conditions, especially for flight controllers and astronauts. Operating a scientific laboratory 250 miles above the Earth is a complicated and dangerous task with high stakes and a steep learning curve. While human error is a reality that may never be fully eliminated, smart implementation of carefully chosen tools and techniques can go a long way toward minimizing risk and increasing the efficiency of NASA's space science operations.
Alterations in Neural Control of Constant Isometric Contraction with the Size of Error Feedback
Hwang, Ing-Shiou; Lin, Yen-Ting; Huang, Wei-Min; Yang, Zong-Ru; Hu, Chia-Ling; Chen, Yi-Ching
2017-01-01
Discharge patterns from a population of motor units (MUs) were estimated with multi-channel surface electromyogram and signal processing techniques to investigate parametric differences in low-frequency force fluctuations, MU discharges, and force-discharge relation during static force-tracking with varying sizes of execution error presented via visual feedback. Fourteen healthy adults produced isometric force at 10% of maximal voluntary contraction through index abduction under three visual conditions that scaled execution errors with different amplification factors. Error-augmentation feedback that used a high amplification factor (HAF) to potentiate visualized error size resulted in higher sample entropy, mean frequency, ratio of high-frequency components, and spectral dispersion of force fluctuations than those of error-reducing feedback using a low amplification factor (LAF). In the HAF condition, MUs with relatively high recruitment thresholds in the dorsal interosseous muscle exhibited a larger coefficient of variation for inter-spike intervals and a greater spectral peak of the pooled MU coherence at 13–35 Hz than did those in the LAF condition. Manipulation of the size of error feedback altered the force-discharge relation, which was characterized with non-linear approaches such as mutual information and cross sample entropy. The association of force fluctuations and global discharge trace decreased with increasing error amplification factor. Our findings provide direct neurophysiological evidence that favors motor training using error-augmentation feedback. Amplification of the visualized error size of visual feedback could enrich force gradation strategies during static force-tracking, pertaining to selective increases in the discharge variability of higher-threshold MUs that receive greater common oscillatory inputs in the β-band. PMID:28125658
Multiple Cognitive Control Effects of Error Likelihood and Conflict
Brown, Joshua W.
2010-01-01
Recent work on cognitive control has suggested a variety of performance monitoring functions of the anterior cingulate cortex, such as errors, conflict, error likelihood, and others. Given the variety of monitoring effects, a corresponding variety of control effects on behavior might be expected. This paper explores whether conflict and error likelihood produce distinct cognitive control effects on behavior, as measured by response time. A change signal task (Brown & Braver, 2005) was modified to include conditions of likely errors due to tardy as well as premature responses, in conditions with and without conflict. The results discriminate between competing hypotheses of independent vs. interacting conflict and error likelihood control effects. Specifically, the results suggest that the likelihood of premature vs. tardy response errors can lead to multiple distinct control effects, which are independent of cognitive control effects driven by response conflict. As a whole, the results point to the existence of multiple distinct cognitive control mechanisms and challenge existing models of cognitive control that incorporate only a single control signal. PMID:19030873
Functional Based Adaptive and Fuzzy Sliding Controller for Non-Autonomous Active Suspension System
NASA Astrophysics Data System (ADS)
Huang, Shiuh-Jer; Chen, Hung-Yi
In this paper, an adaptive sliding controller is developed for controlling a vehicle active suspension system. The functional approximation technique is employed to approximate the unknown non-autonomous functions of the suspension system and relax the model-based requirement of the sliding mode control algorithm. In order to improve the control performance and reduce implementation problems, a fuzzy strategy with online learning ability is added to compensate for the functional approximation error. The update laws of the functional approximation coefficients and the fuzzy tuning parameters are derived from the Lyapunov theorem to guarantee system stability. The proposed controller is implemented on a quarter-car hydraulic actuating active suspension system test rig. The experimental results show that the proposed controller suppresses the oscillation amplitude of the suspension system effectively.
Wang, Ding; Liu, Derong; Zhang, Yun; Li, Hongyi
2018-01-01
In this paper, we aim to tackle the neural robust tracking control problem for a class of nonlinear systems using the adaptive critic technique. The main contribution is that a neural-network-based robust tracking control scheme is established for nonlinear systems involving matched uncertainties. The augmented system considering the tracking error and the reference trajectory is formulated and then addressed under adaptive critic optimal control formulation, where the initial stabilizing controller is not needed. The approximate control law is derived via solving the Hamilton-Jacobi-Bellman equation related to the nominal augmented system, followed by closed-loop stability analysis. The robust tracking control performance is guaranteed theoretically via Lyapunov approach and also verified through simulation illustration. Copyright © 2017 Elsevier Ltd. All rights reserved.
SU-C-BRD-03: Analysis of Accelerator Generated Text Logs for Preemptive Maintenance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Able, CM; Baydush, AH; Nguyen, C
2014-06-15
Purpose: To develop a model to analyze medical accelerator generated parameter and performance data that will provide an early warning of performance degradation and impending component failure. Methods: A robust 6 MV VMAT quality assurance treatment delivery was used to test the constancy of accelerator performance. The generated text log files were decoded and analyzed using statistical process control (SPC) methodology. The text file data is a single snapshot of energy-specific and overall system parameters. A total of 36 system parameters were monitored, including RF generation, electron gun control, energy control, beam uniformity control, DC voltage generation, and cooling systems. The parameters were analyzed using Individual and Moving Range (I/MR) charts. The chart limits were calculated using a hybrid technique that included the use of the standard 3σ limits and the parameter/system specification. Synthetic errors/changes were introduced to determine the initial effectiveness of I/MR charts in detecting relevant changes in operating parameters. The magnitude of the synthetic errors/changes was based on the value of 1 standard deviation from the mean operating parameter of 483 TB systems, a small fraction (≤ 5%) of the operating range, or a fraction of the minor fault deviation. Results: There were 34 parameters in which synthetic errors were introduced. There were 2 parameters (radial position steering coil, and positive 24V DC) in which the errors did not exceed the limit of the I/MR chart. The I chart limit was exceeded for all of the remaining parameters (94.2%). The MR chart limit was exceeded in 29 of the 32 parameters (85.3%) in which the I chart limit was exceeded. Conclusion: Statistical process control I/MR evaluation of text log file parameters may be effective in providing an early warning of performance degradation or component failure for digital medical accelerator systems.
Research is supported by Varian Medical Systems, Inc.
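The I/MR charting applied to the log-file parameters above can be sketched as follows. The 2.66 and 3.267 factors are the standard individuals-chart constants (3/d2 with d2 = 1.128, and D4); the paper's hybrid specification-based limits are omitted here as an assumption-free simplification:

```python
import numpy as np

def imr_limits(baseline):
    """Individuals and moving-range chart limits from in-control baseline
    data.  Returns ((I-chart LCL, UCL), MR-chart UCL)."""
    mr = np.abs(np.diff(baseline))          # moving ranges of successive points
    mr_bar = mr.mean()
    center = baseline.mean()
    i_limits = (center - 2.66 * mr_bar, center + 2.66 * mr_bar)
    return i_limits, 3.267 * mr_bar

def flag_points(samples, i_limits, mr_ucl):
    """Flag individual values outside the I-chart limits and successive
    jumps exceeding the MR-chart upper limit."""
    lcl, ucl = i_limits
    i_flags = (samples < lcl) | (samples > ucl)
    mr_flags = np.abs(np.diff(samples)) > mr_ucl
    return i_flags, mr_flags
```

A sustained drift trips the I chart first, while a sudden step between consecutive deliveries also trips the MR chart, which is why the paper tracks both.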
A new methodology for vibration error compensation of optical encoders.
Lopez, Jesus; Artes, Mariano
2012-01-01
Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce errors in position accuracy as the measurement signals deviate from ideal conditions. When the encoder is working under vibration, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter that adds to graduation, system, and installation errors. Behavior can be improved with different techniques that try to compensate for the error through measurement signal processing. In this work a new ad hoc methodology is presented to compensate for the error of the encoder when it is working under the influence of vibration. The methodology is based on fitting techniques applied to the Lissajous figure of the deteriorated measurement signals and the use of a look-up table, resulting in a compensation procedure in which a higher accuracy of the sensor is obtained.
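A simplified version of quadrature-signal compensation can be sketched as follows. It estimates offsets, amplitude mismatch, and quadrature phase error directly from full-period statistics rather than from the paper's Lissajous fitting and look-up table, so it is an illustrative stand-in only:

```python
import numpy as np

def compensate_quadrature(u, v):
    """Correct offset, amplitude, and phase errors in two nominally
    quadrature encoder signals and return the unwrapped phase angle.
    Assumes the record covers whole signal periods (a simplification of
    the paper's Lissajous-fitting procedure)."""
    u0, v0 = (u.max() + u.min()) / 2, (v.max() + v.min()) / 2   # offsets
    au, av = (u.max() - u.min()) / 2, (v.max() - v.min()) / 2   # amplitudes
    un, vn = (u - u0) / au, (v - v0) / av       # unit-amplitude, zero-mean
    # For un = cos(t), vn = sin(t + phi): mean(un * vn) = sin(phi) / 2.
    phi = np.arcsin(np.clip(2.0 * np.mean(un * vn), -1.0, 1.0))
    vc = (vn - np.sin(phi) * un) / np.cos(phi)  # restore true quadrature
    return np.unwrap(np.arctan2(vc, un))
```

With ideal signals the corrected Lissajous figure is a unit circle, so the arctangent recovers the interpolated position without the periodic error the distortions would otherwise inject into the counter.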
Attitude control with realization of linear error dynamics
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Bach, Ralph E.
1993-01-01
An attitude control law is derived to realize linear unforced error dynamics with the attitude error defined in terms of rotation group algebra (rather than vector algebra). Euler parameters are used in the rotational dynamics model because they are globally nonsingular, but only the minimal three Euler parameters are used in the error dynamics model because they have no nonlinear mathematical constraints to prevent the realization of linear error dynamics. The control law is singular only when the attitude error angle is exactly pi rad about any eigenaxis, and a simple intuitive modification at the singularity allows the control law to be used globally. The forced error dynamics are nonlinear but stable. Numerical simulation tests show that the control law performs robustly for both initial attitude acquisition and attitude control.
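The minimal three-parameter attitude error on which the control law above operates can be sketched with Euler parameters (quaternions). The PD gains below are illustrative assumptions and the sketch omits the paper's full dynamics; it shows only the error definition via rotation-group composition and its use in a feedback law:

```python
import numpy as np

def quat_mult(p, q):
    """Hamilton product, scalar-first convention [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw * qw - px * qx - py * qy - pz * qz,
        pw * qx + px * qw + py * qz - pz * qy,
        pw * qy - px * qz + py * qw + pz * qx,
        pw * qz + px * qy - py * qx + pz * qw])

def attitude_error_vec(q_ref, q):
    """Minimal three-parameter attitude error: the vector part of
    q_ref^-1 * q.  It vanishes at zero error and is singular only at a
    pi-rad error, matching the abstract's caveat."""
    q_ref_conj = q_ref * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mult(q_ref_conj, q)[1:]

def control_torque(q_ref, q, omega, kp=4.0, kd=2.0):
    """PD law on the minimal error parameters (illustrative gains)."""
    return -kp * attitude_error_vec(q_ref, q) - kd * omega
```

Defining the error by quaternion composition rather than by subtracting attitude vectors is what keeps the error globally meaningful while leaving only three unconstrained parameters for the linear error dynamics.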
Hybrid dynamic radioactive particle tracking (RPT) calibration technique for multiphase flow systems
NASA Astrophysics Data System (ADS)
Khane, Vaibhav; Al-Dahhan, Muthanna H.
2017-04-01
The radioactive particle tracking (RPT) technique has been utilized to measure three-dimensional hydrodynamic parameters of multiphase flow systems. An analytical solution to the inverse problem of the RPT technique, i.e. finding the instantaneous tracer positions based upon the instantaneous counts received in the detectors, is not possible. Therefore, a calibration to obtain a counts-distance map is needed. There are major shortcomings in the conventional RPT calibration method that limit its applicability in practical applications. In this work, the design and development of a novel dynamic RPT calibration technique are carried out to overcome these shortcomings. The dynamic RPT calibration technique has been implemented around a test reactor 1 foot in diameter and 1 foot in height using Cobalt-60 as an isotopic tracer particle. Two sets of experiments have been carried out to test the capability of the novel dynamic RPT calibration. In the first set of experiments, a manual calibration apparatus was used to hold the tracer particle at known static locations. In the second set of experiments, the tracer particle was moved vertically downwards along a straight-line path in a controlled manner. The obtained reconstruction results for the tracer particle position were compared with the actual known position, and the reconstruction errors were estimated. The obtained results revealed that the dynamic RPT calibration technique is capable of identifying tracer particle positions with a reconstruction error between 1 and 5.9 mm for the conditions studied, which could be improved depending on various factors outlined here.
NASA Technical Reports Server (NTRS)
Stone, H. W.; Powell, R. W.
1985-01-01
A six-degree-of-freedom simulation analysis was performed for the space shuttle orbiter during entry from Mach 8 to Mach 1.5 with realistic off-nominal conditions, using the flight control systems defined by the shuttle contractor. The off-nominal conditions included aerodynamic uncertainties in extrapolating from wind-tunnel-derived characteristics to full-scale flight characteristics, uncertainties in the estimates of the reaction control system interaction with the orbiter aerodynamics, an error in deriving the angle of attack from onboard instrumentation, the failure of two of the four reaction control system thrusters on each side, and a lateral center-of-gravity offset coupled with vehicle and flow asymmetries. With combinations of these off-nominal conditions, the flight control system performed satisfactorily. At low hypersonic speeds, a few cases exhibited unacceptable performance when errors in deriving the angle of attack from the onboard instrumentation were modeled. The orbiter was unable to maintain lateral trim for some cases between Mach 5 and Mach 2 and exhibited limit cycle tendencies or residual roll oscillations between Mach 3 and Mach 1. Piloting techniques and changes in some gains and switching times in the flight control system are suggested to help alleviate these problems.
A study of pilot modeling in multi-controller tasks
NASA Technical Reports Server (NTRS)
Whitbeck, R. F.; Knight, J. R.
1972-01-01
A modeling approach, which utilizes a matrix of transfer functions to describe the human pilot in multiple input, multiple output control situations, is studied. The approach used was to extend a well established scalar Wiener-Hopf minimization technique to the matrix case and then study, via a series of experiments, the data requirements when only finite record lengths are available. One of these experiments was a two-controller roll tracking experiment designed to force the pilot to use rudder in order to coordinate and reduce the effects of aileron yaw. One model was computed for the case where the signals used to generate the spectral matrix are error and bank angle while another model was computed for the case where error and yaw angle are the inputs. Several anomalies were observed to be present in the experimental data. These are defined by the descriptive terms roll up, break up, and roll down. Due to these algorithm induced anomalies, the frequency band over which reliable estimates of power spectra can be achieved is considerably less than predicted by the sampling theorem.
Three-dimensional cinematography with control object of unknown shape.
Dapena, J; Harman, E A; Miller, J A
1982-01-01
A technique for reconstruction of three-dimensional (3D) motion which involves a simple filming procedure but allows the deduction of coordinates in large object volumes was developed. Internal camera parameters are calculated from measurements of the film images of two calibrated crosses, while external camera parameters are calculated from the film images of points in a control object of unknown shape but with at least one known length. The control object, which encloses the volume in which the activity is to take place, is formed by a series of poles placed at unknown locations, each carrying two targets. From the internal and external camera parameters, and from the locations of the images of a point in the films of the two cameras, the 3D coordinates of the point can be calculated. Root mean square errors of the three coordinates of points in a large object volume (5 m x 5 m x 1.5 m) were 15 mm, 13 mm, 13 mm and 6 mm, and relative errors in lengths averaged 0.5%, 0.7% and 0.5%, respectively.
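Once the internal and external camera parameters are known, computing the 3D coordinates of a point from its two film images is a linear triangulation. The sketch below assumes calibrated 3x4 projection matrices are already available (the abstract's calibration steps via the crosses and pole targets are not reproduced):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT-style) triangulation of one point from two views.
    P1, P2 are 3x4 camera projection matrices; x1, x2 are the image
    coordinates of the same point in each view."""
    # Each image coordinate gives one homogeneous linear constraint on X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                    # null vector = homogeneous 3D point
    return X[:3] / X[3]
```

With noisy film measurements the same least-squares machinery simply returns the point minimizing the algebraic residual, which is where the reported millimeter-level RMS errors come from.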
Assessment of the Derivative-Moment Transformation method for unsteady-load estimation
NASA Astrophysics Data System (ADS)
Mohebbian, Ali; Rival, David
2011-11-01
It is often difficult, if not impossible, to measure the aerodynamic or hydrodynamic forces on a moving body. For this reason, a classical control-volume technique is typically applied to extract the unsteady forces instead. However, measuring the acceleration term within the volume of interest using PIV can be limited by optical access, reflections, and shadows. Therefore, in this study an alternative approach, termed the Derivative-Moment Transformation (DMT) method, is introduced and tested on a synthetic data set produced using numerical simulations. The test case involves the unsteady loading of a flat plate in a two-dimensional, laminar periodic gust. The results suggest that the DMT method can accurately predict the acceleration term so long as appropriate spatial and temporal resolutions are maintained. The major deficiency was found to be the determination of pressure in the wake. The effect of control-volume size was investigated, suggesting that smaller domains work best by minimizing the error associated with the pressure field. When the control-volume size is increased, the number of calculations necessary for the pressure-gradient integration increases, in turn substantially increasing the error propagation.
NASA Astrophysics Data System (ADS)
Ben Regaya, Chiheb; Farhani, Fethi; Zaafouri, Abderrahmen; Chaari, Abdelkader
2018-02-01
This paper presents a new adaptive Backstepping technique to handle the induction motor (IM) rotor resistance tracking problem. The proposed solution improves the robustness of the control system. Given the presence of static error when estimating the rotor resistance with classical methods, and the sensitivity to load torque variation at low speed, a new Backstepping observer enhanced with an integral action on the tracking errors is presented, which is established in two steps. The first consists of estimating the rotor flux using a Backstepping observer. The second defines the adaptation mechanism of the rotor resistance based on the estimated rotor flux. The asymptotic stability of the observer is proven by Lyapunov theory. To validate the proposed solution, simulation and experimental benchmarking of a 3 kW induction motor are presented and analyzed. The obtained results show the effectiveness of the proposed solution compared to the model reference adaptive system (MRAS) rotor resistance observer presented in other recent works.
Mann, Michael P.; Rizzardo, Jule; Satkowski, Richard
2004-01-01
Accurate streamflow statistics are essential to water resource agencies involved in both science and decision-making. When long-term streamflow data are lacking at a site, estimation techniques are often employed to generate streamflow statistics. However, procedures for accurately estimating streamflow statistics often are lacking, and when estimation procedures are developed, they often are not evaluated properly before being applied. Use of unevaluated or underevaluated flow-statistic estimation techniques can result in improper water-resources decision-making. The California State Water Resources Control Board (SWRCB) uses two key techniques, a modified rational equation and drainage-basin area-ratio transfer, to estimate streamflow statistics at ungaged locations. These techniques have been implemented to varying degrees but have not been formally evaluated. For estimating peak flows at the 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals, the SWRCB uses the U.S. Geological Survey's (USGS) regional peak-flow equations. In this study, done cooperatively by the USGS and SWRCB, the SWRCB estimated several flow statistics at 40 USGS streamflow gaging stations in the north coast region of California. The SWRCB estimates were made without reference to USGS flow data. The USGS used the streamflow data provided by the 40 stations to generate flow statistics that could be compared with the SWRCB estimates for accuracy. While some SWRCB estimates compared favorably with USGS statistics, results were subject to varying degrees of error over the region. Flow-based estimation techniques generally performed better than rain-based methods, especially for estimation of December 15 to March 31 mean daily flows. The USGS peak-flow equations also performed well but tended to underestimate peak flows. The USGS equations performed within reported error bounds, but will require updating in the future as peak-flow data sets grow larger.
Little correlation was discovered between estimation errors and geographic locations or various basin characteristics. However, for 25-percentile year mean-daily-flow estimates for December 15 to March 31, the greatest estimation errors were at east San Francisco Bay area stations with mean annual precipitation less than or equal to 30 inches, and estimated 2-year/24-hour rainfall intensity less than 3 inches.
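The drainage-basin area-ratio transfer named above is a one-line estimator. The exponent value here is an assumption, since regional studies fit it to observed flow data:

```python
def area_ratio_transfer(q_gaged, area_gaged, area_ungaged, exponent=1.0):
    """Drainage-area-ratio estimate of streamflow at an ungaged site:
    Q_u = Q_g * (A_u / A_g)**b.  The default b = 1 assumes flow scales
    linearly with drainage area; regional studies often fit b slightly
    below 1 (illustrative assumption)."""
    return q_gaged * (area_ungaged / area_gaged) ** exponent
```

Its appeal and its weakness are the same: it transfers every error in the index-station record, which is why evaluations like this study matter before the estimates feed decisions.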
Basaki, Kinga; Alkumru, Hasan; De Souza, Grace; Finer, Yoav
To assess the three-dimensional (3D) accuracy and clinical acceptability of implant definitive casts fabricated using a digital impression approach and to compare the results with those of a conventional impression method in a partially edentulous condition. A mandibular reference model was fabricated with implants in the first premolar and molar positions to simulate a patient with bilateral posterior edentulism. Ten implant-level impressions per method were made using either an intraoral scanner with scanning abutments for the digital approach or an open-tray technique and polyvinylsiloxane material for the conventional approach. 3D analysis and comparison of implant location on the resultant definitive casts were performed using a laser scanner and quality-control software. The inter-implant distances and inter-implant angulations for each implant pair were measured for the reference model and for each definitive cast (n = 20 per group); these measurements were compared to calculate the magnitude of error in 3D for each definitive cast. The influence of implant angulation on definitive cast accuracy was evaluated for both digital and conventional approaches. Statistical analysis was performed using t test (α = .05) for implant position and angulation. Clinical qualitative assessment of accuracy was done via the assessment of the passivity of a master verification stent for each implant pair, and significance was analyzed using chi-square test (α = .05). A 3D error of implant positioning was observed for the two impression techniques vs the reference model, with mean ± standard deviation (SD) errors of 116 ± 94 μm and 56 ± 29 μm for the digital and conventional approaches, respectively (P = .01). In contrast, the inter-implant angulation errors were not significantly different between the two techniques (P = .83). Implant angulation did not have a significant influence on definitive cast accuracy within either technique (P = .64).
The verification stent demonstrated acceptable passive fit for 11 out of 20 casts and 18 out of 20 casts for the digital and conventional methods, respectively (P = .01). Definitive casts fabricated using the digital impression approach were less accurate than those fabricated from the conventional impression approach for this simulated clinical scenario. A significant number of definitive casts generated by the digital technique did not meet clinically acceptable accuracy for the fabrication of a multiple implant-supported restoration.
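The 3D comparison described above reduces to distances and angles between implant pairs. A minimal sketch of how such pairwise errors could be computed from scanned implant coordinates (the function name and input layout are illustrative assumptions, not taken from the study):

```python
import numpy as np

def interimplant_errors(ref_pos, ref_axis, cast_pos, cast_axis):
    """Distance error (same units as input) and angulation error (degrees)
    for one implant pair on a definitive cast vs. the reference model.
    *_pos: 2x3 arrays of platform-centre coordinates; *_axis: 2x3 unit axis vectors."""
    d_ref = np.linalg.norm(ref_pos[0] - ref_pos[1])
    d_cast = np.linalg.norm(cast_pos[0] - cast_pos[1])
    dist_err = abs(d_cast - d_ref)

    def pair_angle(ax):
        # angle between the two implant axes, clipped for numerical safety
        cosang = np.clip(np.dot(ax[0], ax[1]), -1.0, 1.0)
        return np.degrees(np.arccos(cosang))

    ang_err = abs(pair_angle(cast_axis) - pair_angle(ref_axis))
    return dist_err, ang_err
```

For a real cast, the positions and axes would come from the scanning software's registration of the scan bodies against the reference model.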
Impact of Standardized Communication Techniques on Errors during Simulated Neonatal Resuscitation.
Yamada, Nicole K; Fuerch, Janene H; Halamek, Louis P
2016-03-01
Current patterns of communication in high-risk clinical situations, such as resuscitation, are imprecise and prone to error. We hypothesized that the use of standardized communication techniques would decrease the errors committed by resuscitation teams during neonatal resuscitation. In a prospective, single-blinded, matched pairs design with block randomization, 13 subjects performed as a lead resuscitator in two simulated complex neonatal resuscitations. Two nurses assisted each subject during the simulated resuscitation scenarios. In one scenario, the nurses used nonstandard communication; in the other, they used standardized communication techniques. The performance of the subjects was scored to determine errors committed (defined relative to the Neonatal Resuscitation Program algorithm), time to initiation of positive pressure ventilation (PPV), and time to initiation of chest compressions (CC). In scenarios in which subjects were exposed to standardized communication techniques, there was a trend toward decreased error rate, time to initiation of PPV, and time to initiation of CC. While not statistically significant, there was a 1.7-second improvement in time to initiation of PPV and a 7.9-second improvement in time to initiation of CC. Should these improvements in human performance be replicated in the care of real newborn infants, they could improve patient outcomes and enhance patient safety.
Numerical modeling of the divided bar measurements
NASA Astrophysics Data System (ADS)
LEE, Y.; Keehm, Y.
2011-12-01
The divided-bar technique has been used to measure the thermal conductivity of rocks and rock fragments in heat flow studies. Though widely used, divided-bar measurements can have errors that have not yet been systematically quantified. We used finite element modeling (FEM) to perform a series of numerical studies that evaluate various errors in divided-bar measurements and suggest more reliable measurement techniques. A divided-bar measurement should be corrected for lateral heat loss on the sides of the rock sample and for the thermal resistance at the contacts between the sample and the bar. We first investigated, through numerical modeling, how the size of these corrections changes with the thickness and thermal conductivity of rock samples. When we fixed the sample thickness at 10 mm and varied thermal conductivity, errors in the measured thermal conductivity ranged from 2.02% at 1.0 W/m/K to 7.95% at 4.0 W/m/K. When we fixed thermal conductivity at 1.38 W/m/K and varied the sample thickness, the error ranged from 2.03% for a 30 mm-thick sample to 11.43% for a 5 mm-thick sample. After these corrections, a variety of error analyses for divided-bar measurements were conducted numerically. The thermal conductivity of the two thin standard disks (2 mm in thickness) located at the top and bottom of the rock sample slightly affects the accuracy of the measurement. When the thermal conductivity of a sample is 3.0 W/m/K and that of the two standard disks is 0.2 W/m/K, the relative error in measured thermal conductivity is very small (~0.01%). However, the relative error can reach -2.29% for the same sample when the thermal conductivity of the two disks is 0.5 W/m/K. The accuracy of thermal conductivity measurements also depends strongly on the thermal conductivity and thickness of the thermal compound applied to reduce the thermal resistance at the contacts between the rock sample and the bar. 
When the thermal compound (0.29 W/m/K) is 0.03 mm thick, the relative error in measured thermal conductivity is 4.01%, while the relative error becomes very significant (~12.2%) if the thickness increases to 0.1 mm. We then fixed the thickness (0.03 mm) and varied the thermal conductivity of the compound: the relative error with a 1.0 W/m/K compound is 1.28%, and with a 0.29 W/m/K compound it is 4.06%. Repeating this test with a 0.1 mm-thick compound, the relative error with a 1.0 W/m/K compound is 3.93%, and with a 0.29 W/m/K compound it is 12.2%. In addition, the cell technique of Sass et al. (1971), which is widely used to measure the thermal conductivity of rock fragments, was evaluated using FEM modeling. A total of 483 isotropic, homogeneous spherical rock fragments in the sample holder were used to numerically test the accuracy of the cell technique. The result shows a relative error of -9.61% for rock fragments with a thermal conductivity of 2.5 W/m/K. In conclusion, we report quantified errors in the divided-bar and cell techniques for measuring the thermal conductivity of rocks and fragments. FEM modeling can accurately mimic these measurement techniques and can help estimate measurement errors quantitatively.
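The contact-resistance effect described above can be approximated with a simple 1-D series-resistance model. This is only a sketch under stated assumptions (no lateral heat loss, uniform layers, perfect disks), so its numbers will differ from the paper's FEM results:

```python
def apparent_conductivity(k_sample, t_sample, k_compound, t_compound):
    """Apparent conductivity when the temperature drop across the two
    thermal-compound layers is wrongly attributed to the sample alone
    (1-D series thermal resistances; thicknesses in metres, k in W/m/K)."""
    r_total = t_sample / k_sample + 2 * t_compound / k_compound
    return t_sample / r_total

# illustrative case: 3.0 W/m/K, 10 mm sample with 0.03 mm of 0.29 W/m/K compound
k_app = apparent_conductivity(3.0, 0.010, 0.29, 0.03e-3)
rel_err = (k_app - 3.0) / 3.0   # negative: the contacts make the sample look less conductive
```

The sign and rough magnitude track the paper's finding that contact effects grow with sample conductivity and compound thickness, though the exact percentages require the full FEM treatment.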
A Sensitivity Analysis of Circular Error Probable Approximation Techniques
1992-03-01
SENSITIVITY ANALYSIS OF CIRCULAR ERROR PROBABLE APPROXIMATION TECHNIQUES. THESIS Presented to the Faculty of the School of Engineering of the Air Force...programming skills. Major Paul Auclair patiently advised me in this endeavor, and Major Andy Howell added numerous insightful contributions. I thank my...techniques. The two most accurate techniques require numerical integration and can take several hours to run on a personal computer [2:1-2,4-6]. Some
Feedback controlled optics with wavefront compensation
NASA Technical Reports Server (NTRS)
Breckenridge, William G. (Inventor); Redding, David C. (Inventor)
1993-01-01
The sensitivity model of a complex optical system obtained by linear ray tracing is used to compute a control gain matrix by imposing the mathematical condition for minimizing the total wavefront error at the optical system's exit pupil. The most recent deformations or error states of the controlled segments or optical surfaces of the system are then assembled as an error vector, and the error vector is transformed by the control gain matrix to produce the exact control variables which will minimize the total wavefront error at the exit pupil of the optical system. These exact control variables are then applied to the actuators controlling the various optical surfaces in the system causing the immediate reduction in total wavefront error observed at the exit pupil of the optical system.
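A hedged sketch of this idea: given a linear sensitivity matrix S from ray tracing, the gain that minimizes the exit-pupil wavefront error in the least-squares sense is the negated pseudoinverse of S. The matrix sizes and random data below are illustrative only, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((50, 6))   # sensitivity: d(wavefront)/d(actuator), 50 WFE samples x 6 actuators
G = -np.linalg.pinv(S)             # control gain minimizing ||w + S u||_2

e = rng.standard_normal(6)         # error state of the controlled surfaces
w = S @ e                          # resulting wavefront error vector at the exit pupil
u = G @ w                          # corrective actuator commands
residual = w + S @ u               # wavefront error remaining after correction
```

Because the disturbance here lies in the range of S, the least-squares correction cancels it essentially exactly; real systems retain a residual from unmodelled modes and actuator limits.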
Study of an automatic trajectory following control system
NASA Technical Reports Server (NTRS)
Vanlandingham, H. F.; Moose, R. L.; Zwicke, P. E.; Lucas, W. H.; Brinkley, J. D.
1983-01-01
It is shown that the estimator part of the Modified Partitioned Adaptive Controller (MPAC), developed for the nonlinear aircraft dynamics of a small jet transport, can adapt to sensor failures. In addition, an investigation is made into the potential usefulness of the configuration detection technique used in the MPAC, and a failure detection filter is developed that determines how a noisy plant output is associated with a line or plane characteristic of a failure. It is shown by computer simulation that the estimator and configuration detection parts of the MPAC can readily adapt to actuator and sensor failures, and that the failure detection filter technique cannot detect actuator or sensor failures accurately for this type of system because of plant modeling errors. In addition, it is shown that the decision technique developed for the failure detection filter can accurately determine that the plant output is related to the characteristic line or plane in the presence of sensor noise.
LOCSET Phase Locking: Operation, Diagnostics, and Applications
NASA Astrophysics Data System (ADS)
Pulford, Benjamin N.
The aim of this dissertation is to discuss the theoretical and experimental work recently done with the Locking of Optical Coherence via Single-detector Electronic-frequency Tagging (LOCSET) phase locking technique developed and employed at AFRL. The primary objectives of this effort are to detail the fundamental operation of the LOCSET phase locking technique, recognize the conditions in which the LOCSET control electronics optimally operate, demonstrate LOCSET phase locking with higher channel counts than ever before, and extend the LOCSET technique to correct for low-order, atmospherically induced phase aberrations introduced to the output of a tiled array of coherently combinable beams. The experimental work performed for this effort resulted in the coherent combination of 32 low-power optical beams operating with unprecedented LOCSET phase error performance of λ/71 RMS in a local-loop beam combination configuration. The LOCSET phase locking technique was also successfully extended, for the first time, into an Object In the Loop (OIL) configuration by utilizing light scattered off of a remote object as the optical return signal for the LOCSET phase control electronics. This LOCSET-OIL technique is capable of correcting for low-order phase aberrations caused by atmospheric turbulence applied across a tiled array output.
Context Specificity of Post-Error and Post-Conflict Cognitive Control Adjustments
Forster, Sarah E.; Cho, Raymond Y.
2014-01-01
There has been accumulating evidence that cognitive control can be adaptively regulated by monitoring for processing conflict as an index of online control demands. However, it is not yet known whether top-down control mechanisms respond to processing conflict in a manner specific to the operative task context or confer a more generalized benefit. While previous studies have examined the taskset-specificity of conflict adaptation effects, yielding inconsistent results, control-related performance adjustments following errors have been largely overlooked. This gap in the literature underscores recent debate as to whether post-error performance represents a strategic, control-mediated mechanism or a nonstrategic consequence of attentional orienting. In the present study, evidence of generalized control following both high conflict correct trials and errors was explored in a task-switching paradigm. Conflict adaptation effects were not found to generalize across tasksets, despite a shared response set. In contrast, post-error slowing effects were found to extend to the inactive taskset and were predictive of enhanced post-error accuracy. In addition, post-error performance adjustments were found to persist for several trials and across multiple task switches, a finding inconsistent with attentional orienting accounts of post-error slowing. These findings indicate that error-related control adjustments confer a generalized performance benefit and suggest dissociable mechanisms of post-conflict and post-error control. PMID:24603900
NASA Astrophysics Data System (ADS)
Wu, Cheng; Zhen Yu, Jian
2018-03-01
Linear regression techniques are widely used in atmospheric science, but they are often improperly applied due to lack of consideration or inappropriate handling of measurement uncertainty. In this work, numerical experiments are performed to evaluate the performance of five linear regression techniques, significantly extending previous works by Chu and Saylor. The five techniques are ordinary least squares (OLS), Deming regression (DR), orthogonal distance regression (ODR), weighted ODR (WODR), and York regression (YR). We first introduce a new data generation scheme that employs the Mersenne twister (MT) pseudorandom number generator. The numerical simulations are also improved by (a) refining the parameterization of nonlinear measurement uncertainties, (b) inclusion of a linear measurement uncertainty, and (c) inclusion of WODR for comparison. Results show that DR, WODR and YR produce an accurate slope, but the intercept by WODR and YR is overestimated, and the degree of bias is more pronounced with a low-R² XY dataset. The importance of a proper weighting parameter λ in DR is investigated by sensitivity tests, and it is found that an improper λ in DR can lead to bias in both the slope and intercept estimates. Because the λ calculation depends on the actual form of the measurement error, it is essential to determine the exact form of measurement error in the XY data during the measurement stage. If the a priori error in one of the variables is unknown, or the measurement error described cannot be trusted, DR, WODR and YR can provide the least biases in slope and intercept among all tested regression techniques. For these reasons, DR, WODR and YR are recommended for atmospheric studies when both X and Y data have measurement errors. An Igor Pro-based program (Scatter Plot) was developed to facilitate the implementation of error-in-variables regressions.
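As an illustration of one of the techniques compared, Deming regression has a closed-form solution once the error-variance ratio λ is specified. This is the textbook formula, not code from the authors' Igor Pro program:

```python
import numpy as np

def deming(x, y, lam=1.0):
    """Deming regression: slope and intercept when both x and y carry
    measurement error; lam = var(y-error) / var(x-error)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    sxx = np.mean((x - xm) ** 2)
    syy = np.mean((y - ym) ** 2)
    sxy = np.mean((x - xm) * (y - ym))
    # closed-form maximum-likelihood slope for the errors-in-variables model
    slope = (syy - lam * sxx
             + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, ym - slope * xm
```

With lam=1 this reduces to orthogonal regression; an improper lam biases both coefficients, which is exactly the sensitivity the paper investigates.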
On the connection between multigrid and cyclic reduction
NASA Technical Reports Server (NTRS)
Merriam, M. L.
1984-01-01
A technique is shown whereby it is possible to relate a particular multigrid process to cyclic reduction using purely mathematical arguments. This technique suggests methods for solving Poisson's equation in one, two, or three dimensions with Dirichlet or Neumann boundary conditions. In one dimension the method is exact and, in fact, reduces to cyclic reduction. This provides a valuable reference point for understanding multigrid techniques. The particular multigrid process analyzed is referred to here as Approximate Cyclic Reduction (ACR) and is one of a class known in the literature as Multigrid Reduction methods. It involves one approximation with a known error term. It is possible to relate the error term in this approximation to certain eigenvector components of the error, which are sharply reduced in amplitude by classical relaxation techniques. The approximation can thus be made a very good one.
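For reference, one-dimensional odd-even cyclic reduction for a tridiagonal system can be sketched as follows. This is a generic recursive version for illustration, not the specific ACR scheme analyzed in the paper:

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system (sub-diagonal a, diagonal b, super-diagonal c,
    right-hand side d; requires a[0] = c[-1] = 0) by odd-even cyclic reduction:
    fold the odd equations into the even ones, recurse, then back-substitute."""
    a, b, c, d = (np.asarray(v, dtype=float) for v in (a, b, c, d))
    n = len(b)
    if n == 1:
        return d / b
    if n == 2:
        det = b[0] * b[1] - c[0] * a[1]
        return np.array([(d[0] * b[1] - c[0] * d[1]) / det,
                         (b[0] * d[1] - a[1] * d[0]) / det])
    even = np.arange(0, n, 2)
    a2, c2 = np.zeros(len(even)), np.zeros(len(even))
    b2, d2 = b[even].copy(), d[even].copy()
    for k, i in enumerate(even):
        if i > 0:                     # eliminate x[i-1] using equation i-1
            m = a[i] / b[i - 1]
            a2[k] = -m * a[i - 1]     # new coupling to x[i-2]
            b2[k] -= m * c[i - 1]
            d2[k] -= m * d[i - 1]
        if i + 1 < n:                 # eliminate x[i+1] using equation i+1
            m = c[i] / b[i + 1]
            c2[k] = -m * c[i + 1]     # new coupling to x[i+2]
            b2[k] -= m * a[i + 1]
            d2[k] -= m * d[i + 1]
    x = np.zeros(n)
    x[even] = cyclic_reduction(a2, b2, c2, d2)
    for i in range(1, n, 2):          # back-substitute the odd unknowns
        x[i] = (d[i] - a[i] * x[i - 1]
                - (c[i] * x[i + 1] if i + 1 < n else 0.0)) / b[i]
    return x
```

For the 1-D Poisson stencil (a = c = -1, b = 2) each reduction level halves the unknowns exactly, which is the sense in which the paper's multigrid process reduces to cyclic reduction in one dimension.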
Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.
2018-05-01
Deep Learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been recently studied. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques: on the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in a total error of 5.22 %, a type I error of 4.10 %, and a type II error of 15.07 %. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while the type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in a total error of 4.02 %, a type I error of 2.15 %, and a type II error of 6.14 %.
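The error metrics quoted above follow the ISPRS filter-test convention: type I is rejecting true ground points, type II is accepting non-ground points as ground. A small sketch of how they could be computed from per-point labels (the 1 = ground / 0 = non-ground encoding is an assumption here):

```python
import numpy as np

def filter_errors(y_true, y_pred):
    """ISPRS filter-test error metrics; labels: 1 = ground, 0 = non-ground.
    Type I: fraction of ground points misclassified as non-ground.
    Type II: fraction of non-ground points misclassified as ground."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ground = y_true == 1
    type1 = np.mean(y_pred[ground] == 0)
    type2 = np.mean(y_pred[~ground] == 1)
    total = np.mean(y_pred != y_true)
    return total, type1, type2
```

Note that the total error is a prevalence-weighted mix of the two, which is why a method can lower type I while type II rises slightly, as reported above.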
Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage
Torralba, Marta; Yagüe-Fabra, José Antonio; Albajez, José Antonio; Aguilar, Juan José
2016-01-01
Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning is a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D long-range stage during its design phase. The nanopositioning platform NanoPla is presented here. Its specifications, e.g., an XY travel range of 50 mm × 50 mm and sub-micrometric accuracy, and some novel design solutions, e.g., a three-layer and two-stage architecture, are described. Once the prototype is defined, an error analysis is performed to propose design improvements. Then, the metrology loop of the system is mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid-body behavior, which is demonstrated by a finite element analysis verification. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors, and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in the X-, Y- and Z-axis, respectively. PMID:26761014
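The error-budget combination itself is typically a root-sum-square of independent contributions. The values below are illustrative placeholders, not the NanoPla figures:

```python
import numpy as np

# hypothetical per-axis error contributions for a metrology loop, in nm
contributions = {
    "environment (temperature, pressure, humidity)": 25.0,
    "plane-mirror flatness": 20.0,
    "Abbe offset / assembly geometry": 15.0,
    "electronics and interpolation": 5.0,
}

# root-sum-square: valid when the sources are independent and zero-mean
budget = np.sqrt(sum(v**2 for v in contributions.values()))
```

Systematic (correlated) terms would instead be summed linearly before being combined with the RSS of the random terms, which is one reason design-phase budgets separate the two classes of error.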
Reduction in Chemotherapy Mixing Errors Using Six Sigma: Illinois CancerCare Experience.
Heard, Bridgette; Miller, Laura; Kumar, Pankaj
2012-03-01
Chemotherapy mixing errors (CTMRs), although rare, have serious consequences. Illinois CancerCare is a large practice with multiple satellite offices. The goal of this study was to reduce the number of CTMRs using Six Sigma methods. A Six Sigma team consisting of five participants (registered nurses and pharmacy technicians [PTs]) was formed. The team had 10 hours of Six Sigma training in the DMAIC (ie, Define, Measure, Analyze, Improve, Control) process. Measurement of errors started from the time the CT order was verified by the PT to the time of CT administration by the nurse. Data collection included retrospective error tracking software, system audits, and staff surveys. Root causes of CTMRs included inadequate knowledge of CT mixing protocol, inconsistencies in checking methods, and frequent changes in staffing of clinics. Initial CTMRs (n = 33,259) constituted 0.050%, with 77% of these errors affecting patients. The action plan included checklists, education, and competency testing. The postimplementation error rate (n = 33,376, annualized) over a 3-month period was reduced to 0.019%, with only 15% of errors affecting patients. Initial Sigma was calculated at 4.2; this process resulted in the improvement of Sigma to 5.2, representing a 100-fold reduction. Financial analysis demonstrated a reduction in annualized loss of revenue (administration charges and drug wastage) from $11,537.95 (Medicare Average Sales Price) before the start of the project to $1,262.40. The Six Sigma process is a powerful technique in the reduction of CTMRs.
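For context, a process Sigma level is conventionally derived from the defect rate via the inverse normal CDF plus a 1.5-sigma long-term shift. This is the generic Six Sigma convention, sketched below; the published values of 4.2 and 5.2 may rest on a different opportunity count or convention:

```python
from scipy.stats import norm

def sigma_level(dpmo, shift=1.5):
    """Process Sigma from defects per million opportunities (DPMO),
    using the conventional 1.5-sigma long-term shift."""
    return norm.ppf(1 - dpmo / 1e6) + shift

before = sigma_level(0.00050 * 1e6)   # 0.050% error rate -> 500 DPMO
after = sigma_level(0.00019 * 1e6)    # 0.019% error rate -> 190 DPMO
```

The nonlinearity of the inverse CDF is why a modest absolute drop in error rate can translate into a full Sigma-level step.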
Dwell time algorithm based on the optimization theory for magnetorheological finishing
NASA Astrophysics Data System (ADS)
Zhang, Yunfei; Wang, Yang; Wang, Yajun; He, Jianguo; Ji, Fang; Huang, Wen
2010-10-01
Magnetorheological finishing (MRF) is an advanced polishing technique capable of rapidly converging to the required surface figure. This process can deterministically control the amount of material removed by varying the time spent dwelling at each particular position on the workpiece surface. The dwell time algorithm is one of the key techniques of MRF. A dwell time algorithm based on a matrix equation and optimization theory is presented in this paper. The conventional mathematical model of dwell time is transformed into a matrix equation containing the initial surface error, the removal function and the dwell time function. The dwell time to be calculated is simply the solution to this large, sparse matrix equation. A new mathematical model of dwell time based on optimization theory is established, which aims to minimize the 2-norm or ∞-norm of the residual surface error. The solution meets almost all the requirements of precise computer numerical control (CNC) without any need for extra data processing, because the optimization model takes certain polishing conditions as constraints. Practical approaches to finding a minimal least-squares solution and a minimal maximum solution are also discussed in this paper. Simulations have shown that the proposed algorithm is numerically robust and reliable. With this algorithm an experiment was performed on the MRF machine developed by the authors. After 4.7 minutes of polishing, the figure error of a flat workpiece with a 50 mm diameter was improved from 0.191λ (λ = 632.8 nm) to 0.087λ PV and from 0.041λ to 0.010λ RMS. The algorithm can be applied to workpieces of all shapes, including flats, spheres, aspheres, and prisms, and it is capable of improving polishing figures dramatically.
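A minimal sketch of the dwell-time idea: discretize removal as a matrix equation R t = e0 and solve for nonnegative dwell times in the least-squares sense. The 1-D Gaussian influence function below is an illustrative assumption; real MRF dwell maps are 2-D and add machine-dynamics constraints such as velocity limits:

```python
import numpy as np
from scipy.optimize import nnls

# toy removal matrix R: R[i, j] = material removed at surface point i
# per unit dwell time at tool position j (the influence function)
pts = np.linspace(0.0, 1.0, 40)
tool = np.linspace(0.0, 1.0, 25)
R = np.exp(-((pts[:, None] - tool[None, :]) / 0.08) ** 2)  # Gaussian removal spot

e0 = 0.5 + 0.3 * np.sin(2 * np.pi * pts)   # initial surface error (kept positive: only removal is possible)
t, resid = nnls(R, e0)                     # dwell times, constrained t >= 0
residual_error = e0 - R @ t                # surface error left after the run
```

The nonnegativity constraint is the physical requirement that the tool cannot add material; the paper's ∞-norm variant would instead minimize the worst-case residual.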
Usefulness of Guided Breathing for Dose Rate-Regulated Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han-Oh, Sarah; Department of Radiation Oncology, University of Maryland Medical System, Baltimore, MD; Yi, Byong Yong
2009-02-01
Purpose: To evaluate the usefulness of guided breathing for dose rate-regulated tracking (DRRT), a new technique to compensate for intrafraction tumor motion. Methods and Materials: DRRT uses a preprogrammed multileaf collimator sequence that tracks the tumor motion derived from four-dimensional computed tomography and the corresponding breathing signals measured before treatment. Because the multileaf collimator speed can be controlled by adjusting the dose rate, the multileaf collimator positions are adjusted in real time during treatment by dose rate regulation, thereby maintaining synchrony with the tumor motion. DRRT treatment was simulated with free, audio-guided, and audiovisual-guided breathing signals acquired from 23 lung cancer patients. The tracking error and duty cycle for each patient were determined as a function of the system time delay (range, 0-1.0 s). Results: The tracking error and duty cycle averaged over all 23 patients were 1.9 ± 0.8 mm and 92% ± 5%, 1.9 ± 1.0 mm and 93% ± 6%, and 1.8 ± 0.7 mm and 92% ± 6% for free, audio-guided, and audiovisual-guided breathing, respectively, for a time delay of 0.35 s. The small differences in both the tracking error and the duty cycle with guided breathing were not statistically significant. Conclusion: DRRT by its nature adapts well to variations in breathing frequency, which is also the motivation for guided-breathing techniques. Because of this redundancy, guided breathing does not result in significant improvements in either the tracking error or the duty cycle when DRRT is used for real-time tumor tracking.
NASA Astrophysics Data System (ADS)
Rao, Xiong; Tang, Yunwei
2014-11-01
Land surface deformation is evident along a newly built high-speed railway in southeast China. In this study, we utilize the Small BAseline Subsets (SBAS)-Differential Synthetic Aperture Radar Interferometry (DInSAR) technique to detect land surface deformation along the railway. In this work, 40 Cosmo-SkyMed satellite images were selected to analyze the spatial distribution and velocity of the deformation in the study area. 88 high-coherence image pairs were first chosen with an appropriate threshold. These images were used to derive the deformation velocity map and its variation in time series. This result provides information for orbit correction and ground control point (GCP) selection in the following steps. Then, more image pairs were selected to tighten the constraint in the time dimension and to improve the final result by decreasing the phase unwrapping error. 171 SAR pair combinations were ultimately selected. Reliable GCPs were re-selected according to the previously derived deformation velocity map. Orbital residual error was rectified using these GCPs, and nonlinear deformation components were estimated. A more accurate surface deformation velocity map was thereby produced. Precise geodetic leveling was carried out in the meantime. We compared the leveling result with the geocoded SBAS product using the nearest neighbour method. The mean error and standard deviation of the error were 0.82 mm and 4.17 mm, respectively. This result demonstrates the effectiveness of the DInSAR technique for monitoring land surface deformation, which can provide reliable decision support for high-speed railway design, construction, operation and maintenance.
Somarathna, P D S N; Minasny, Budiman; Malone, Brendan P; Stockmann, Uta; McBratney, Alex B
2018-08-01
Spatial modelling of environmental data commonly considers spatial variability as the single source of uncertainty. In reality, however, measurement errors should also be accounted for. In recent years, infrared spectroscopy has been shown to offer low-cost, yet invaluable information needed for digital soil mapping at meaningful spatial scales for land management. However, spectrally inferred soil carbon data are known to be less accurate than laboratory-analysed measurements. This study establishes a methodology to filter out the measurement error variability by incorporating the measurement error variance in the spatial covariance structure of the model. The study was carried out in the Lower Hunter Valley, New South Wales, Australia, where a combination of laboratory-measured and vis-NIR- and MIR-inferred topsoil and subsoil carbon data is available. We investigated the applicability of residual maximum likelihood (REML) and Markov chain Monte Carlo (MCMC) simulation methods to generate parameters of the Matérn covariance function directly from the data in the presence of measurement error. The results revealed that the measurement error can be effectively filtered out through the proposed technique. When the measurement error was filtered from the data, the prediction variance almost halved, which ultimately yielded greater certainty in spatial predictions of soil carbon. Further, the MCMC technique was successfully used to define the posterior distribution of the measurement error. This is an important outcome, as the MCMC technique can be used to estimate the measurement error when it is not explicitly quantified. Although this study dealt with soil carbon data, the method is amenable to filtering the measurement error of any kind of continuous spatial environmental data.
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Fujiwara, T.; Lin, S.
1986-01-01
In this paper, a concatenated coding scheme for error control in data communications is presented and analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error (or decoding error) of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is also analyzed. Three specific examples are presented. One of the examples is proposed for error control in the NASA Telecommand System.
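For the selective-repeat ARQ part, throughput efficiency under the ideal-SR assumption follows directly from the expected number of transmissions per accepted frame. A one-line sketch, ignoring the (designed-to-be-tiny) undetected-error probability:

```python
def sr_arq_throughput(code_rate, p_retransmit):
    """Throughput efficiency of ideal selective-repeat ARQ.
    Each frame is transmitted an expected 1 / (1 - Pr) times, where Pr is the
    probability that either decoder triggers a retransmission, so the fraction
    of channel time carrying new information bits is R * (1 - Pr)."""
    return code_rate * (1 - p_retransmit)
```

In the concatenated scheme above, p_retransmit bundles two events: the inner decoder failing to decode, and the outer code detecting residual errors after inner decoding.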
Technique for calibrating angular measurement devices when calibration standards are unavailable
NASA Technical Reports Server (NTRS)
Finley, Tom D.
1991-01-01
A calibration technique is proposed that allows the calibration of certain angular measurement devices without requiring the use of an absolute standard. The technique assumes that the device to be calibrated has deterministic bias errors. A comparison device that meets the same requirement must be available. The two devices are compared; one device is then rotated with respect to the other, and a second comparison is performed. If the data are reduced using the described technique, the individual errors of the two devices can be determined.
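The rotate-and-compare idea can be sketched as a small linear system: two comparisons plus a zero-mean constraint determine both devices' deterministic bias errors, provided the rotation shift and the number of positions are coprime. The discrete-position formulation below is an illustrative assumption, not the report's exact data-reduction procedure:

```python
import numpy as np

def separate_errors(d1, d2, shift):
    """Recover deterministic bias errors eA, eB of two angular devices from
    two comparisons over n positions:
        d1[i] = eA[i] - eB[i]                       (devices aligned)
        d2[i] = eA[i] - eB[(i + shift) % n]          (device B rotated)
    A zero-mean constraint on eB removes the common-offset ambiguity."""
    n = len(d1)
    A = np.zeros((2 * n + 1, 2 * n))   # columns 0..n-1 -> eA, n..2n-1 -> eB
    rhs = np.zeros(2 * n + 1)
    for i in range(n):
        A[i, i] = 1.0
        A[i, n + i] = -1.0
        rhs[i] = d1[i]
        A[n + i, i] = 1.0
        A[n + i, n + (i + shift) % n] = -1.0
        rhs[n + i] = d2[i]
    A[2 * n, n:] = 1.0                 # constraint: sum(eB) = 0
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return sol[:n], sol[n:]
```

Only the common constant is unobservable (adding the same offset to both devices leaves every comparison unchanged), which is exactly why no absolute standard is needed for the remaining error components.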
NASA Astrophysics Data System (ADS)
Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne
2014-01-01
Inverse modelling techniques can be used to estimate the amount of radionuclides and the temporal profile of the source term released into the atmosphere during the accident at the Fukushima Daiichi nuclear power plant in March 2011. In Winiarek et al. (2012b), the lower bounds of the caesium-137 and iodine-131 source terms were estimated with such techniques, using activity concentration measurements. The importance of an objective assessment of the prior errors (the observation errors and the background errors) was emphasised for a reliable inversion. In such a critical context, where the meteorological conditions can make the source term partly unobservable and where only a few observations are available, such prior estimation techniques are mandatory, since the retrieved source term is very sensitive to this estimation. We propose to extend the use of these techniques to the estimation of prior errors when assimilating observations from several data sets. The aim is to compute an estimate of the caesium-137 source term jointly using all available data about this radionuclide, such as activity concentrations in the air, but also daily fallout measurements and total cumulated fallout measurements. It is crucial to properly and simultaneously estimate the background errors and the prior errors relative to each data set. A proper estimation of prior errors is also a necessary condition to reliably estimate the a posteriori uncertainty of the estimated source term. Using such techniques, we retrieve a total released quantity of caesium-137 in the interval 11.6-19.3 PBq, with an estimated standard deviation range of 15-20% depending on the method and the data sets. The “blind” time intervals of the source term have also been strongly mitigated compared to the first estimations made with only activity concentration data.
PRESAGE: Protecting Structured Address Generation against Soft Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently to soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors for faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that flows an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
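The loop-exit detection idea can be illustrated in a few lines: generate each index relationally from the previous one so that a corrupted index keeps propagating, then recompute the final index in closed form at loop exit. This Python analogue is only a sketch of the concept, not the actual LLVM-level compiler transformation:

```python
def checked_array_sum(a, start, stride, count):
    """Sum a[start], a[start + stride], ... with a PRESAGE-style check.
    Indices are generated relationally (each from the previous one), so a
    bit-flip in the index propagates to every later index; an independent
    closed-form recomputation of the final index at loop exit exposes it."""
    total = 0.0
    idx = start
    for _ in range(count):
        total += a[idx]
        idx += stride                       # relational update: errors flow forward
    expected_final = start + count * stride  # independent recomputation
    if idx != expected_final:
        raise RuntimeError("soft error detected in address generation")
    return total
```

The point of the design is exactly the paper's insight: a scheme that lets an index error contaminate all later indices is easier to check cheaply at one place (loop exit) than a scheme where a single corrupted access leaves no trace.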
NASA Astrophysics Data System (ADS)
Pieper, Michael; Manolakis, Dimitris; Truslow, Eric; Cooley, Thomas; Brueggeman, Michael; Jacobson, John; Weisner, Andrew
2017-08-01
Accurate estimation or retrieval of surface emissivity from long-wave infrared or thermal infrared (TIR) hyperspectral imaging data acquired by airborne or spaceborne sensors is necessary for many scientific and defense applications. This process consists of two interwoven steps: atmospheric compensation and temperature-emissivity separation (TES). The most widely used TES algorithms for hyperspectral imaging data assume that the emissivity spectra for solids are smooth compared to the atmospheric transmission function. We develop a model to explain and evaluate the performance of TES algorithms using a smoothing approach. Based on this model, we identify three sources of error: the smoothing error of the emissivity spectrum, the emissivity error from using the incorrect temperature, and the errors caused by sensor noise. For each TES smoothing technique, we analyze the bias and variability of the temperature errors, which translate to emissivity errors. The performance model explains how the errors interact to generate temperature errors. Since we assume exact knowledge of the atmosphere, the presented results provide an upper bound on the performance of TES algorithms based on the smoothness assumption.
Uncertainty analysis technique for OMEGA Dante measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, M. J.; Widmann, K.; Sorce, C.
2010-10-15
The Dante is an 18-channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one-sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
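The Monte Carlo parameter-variation idea can be sketched as follows: perturb each channel reading by its one-sigma Gaussian error, re-run the unfold, and take statistics over the ensemble. The "unfold" below is a stand-in linear map and the channel errors are illustrative, not the actual Dante calibration:

```python
import numpy as np

def mc_flux_uncertainty(voltages, sigmas, unfold, n_trials=1000, seed=1):
    """Propagate per-channel Gaussian errors through an unfold algorithm."""
    rng = np.random.default_rng(seed)
    fluxes = np.empty(n_trials)
    for i in range(n_trials):
        trial = voltages + sigmas * rng.standard_normal(voltages.size)
        fluxes[i] = unfold(trial)
    return fluxes.mean(), fluxes.std()

# Toy unfold: flux proportional to the channel sum (illustrative only).
unfold = lambda v: 2.5 * v.sum()
v = np.ones(18)               # 18 channels, arbitrary units
s = 0.05 * v                  # hypothetical 5% one-sigma error per channel
mean_flux, std_flux = mc_flux_uncertainty(v, s, unfold)
```

The spread `std_flux` of the thousand trial fluxes is the error bar the abstract refers to; the same loop accommodates unfold-algorithm uncertainties by also perturbing the unfold's internal parameters per trial.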
Robust video super-resolution with registration efficiency adaptation
NASA Astrophysics Data System (ADS)
Zhang, Xinfeng; Xiong, Ruiqin; Ma, Siwei; Zhang, Li; Gao, Wen
2010-07-01
Super-Resolution (SR) is a technique to construct a high-resolution (HR) frame by fusing a group of low-resolution (LR) frames describing the same scene. The effectiveness of conventional super-resolution techniques, when applied to video sequences, strongly relies on the efficiency of motion alignment achieved by image registration. Unfortunately, such efficiency is limited by the motion complexity in the video and the capability of the adopted motion model. In image regions with severe registration errors, annoying artifacts usually appear in the produced super-resolution video. This paper proposes a robust video super-resolution technique that adapts itself to the spatially-varying registration efficiency. The reliability of each reference pixel is measured by the corresponding registration error and incorporated into the optimization objective function of SR reconstruction. This makes the SR reconstruction highly immune to the registration errors, as outliers with higher registration errors are assigned lower weights in the objective function. In particular, we carefully design a mechanism to assign weights according to registration errors. The proposed super-resolution scheme has been tested with various video sequences and experimental results clearly demonstrate the effectiveness of the proposed method.
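The registration-error-based weighting can be illustrated with a toy pixel-fusion sketch. The exponential weight function and the bandwidth `h` are assumed forms for illustration, not the paper's exact weighting mechanism:

```python
import numpy as np

def weighted_fuse(values, reg_errors, h=1.0):
    """Fuse reference pixel values; larger registration error -> lower weight."""
    w = np.exp(-(np.asarray(reg_errors) ** 2) / h)  # assumed weight function
    return float(np.sum(w * np.asarray(values)) / np.sum(w))

# Three references: two well-registered near 10, one badly mis-registered.
pixel = weighted_fuse(values=[10.0, 10.2, 50.0],
                      reg_errors=[0.1, 0.2, 3.0])
```

The mis-registered reference (value 50.0, error 3.0) receives a near-zero weight, so the fused pixel stays close to the reliable references, which is precisely the outlier immunity the abstract describes.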
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1993-01-01
The results included in the Ph.D. dissertation of Dr. Fu Quan Wang, who was supported by the grant as a Research Assistant from January 1989 through December 1992, are discussed. The sections contain a brief summary of the important aspects of this dissertation, which include: (1) erasure-free sequential decoding of trellis codes; (2) probabilistic construction of trellis codes; (3) construction of robustly good trellis codes; and (4) the separability of shaping and coding.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1989-01-01
The performance of bandwidth-efficient trellis codes on channels with phase jitter, or those disturbed by jamming and impulse noise, is analyzed. A heuristic algorithm for the construction of bandwidth-efficient trellis codes with any constraint length up to about 30, any signal constellation, and any code rate was developed. The construction of good distance-profile trellis codes for sequential decoding and a comparison of random coding bounds of trellis-coded modulation schemes are also discussed.
Temperature control in a solar collector field using Filtered Dynamic Matrix Control.
Lima, Daniel Martins; Normey-Rico, Julio Elias; Santos, Tito Luís Maia
2016-05-01
This paper presents the output temperature control of a solar collector field of a desalinization plant using the Filtered Dynamic Matrix Control (FDMC). The FDMC is a modified controller based on the Dynamic Matrix Control (DMC), a predictive control strategy widely used in industry. In the FDMC, a filter is used in the prediction error, which allows the modification of the robustness and disturbance rejection characteristics of the original algorithm. The implementation and tuning of the FDMC are simple and maintain the advantages of DMC. Several simulation results using a validated model of the solar plant are presented considering different scenarios. The results are also compared to nonlinear control techniques, showing that FDMC, if properly tuned, can yield similar results to more complex control algorithms. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
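The filtered prediction-error idea in FDMC can be sketched with a first-order low-pass filter on the DMC prediction error; the filter structure and the pole `alpha` below are illustrative tuning assumptions, not the paper's exact design:

```python
def filter_error(errors, alpha=0.8):
    """First-order filtered prediction error: f[k] = alpha*f[k-1] + (1-alpha)*e[k].

    e[k] is the raw DMC prediction error (measured output minus predicted
    output); the filtered value f[k] is what corrects the predictions,
    trading disturbance-rejection speed for robustness to model error.
    """
    f, out = 0.0, []
    for e in errors:
        f = alpha * f + (1.0 - alpha) * e
        out.append(f)
    return out

# A step disturbance in the prediction error is smoothed before it acts.
filtered = filter_error([1.0] * 5)
```

With `alpha = 0` the raw error passes through and the scheme reduces to standard DMC; raising `alpha` slows disturbance rejection but desensitizes the controller to model mismatch, which is the tuning trade-off the abstract describes.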
Price, Owen; Baker, John; Bee, Penny; Lovell, Karina
2018-01-01
De-escalation techniques are recommended to manage violence and aggression in mental health settings, yet restrictive practices continue to be frequently used. Barriers and enablers to the implementation and effectiveness of de-escalation techniques in practice are not well understood. To obtain staff descriptions of de-escalation techniques currently used in mental health settings and explore factors perceived to influence their implementation and effectiveness. Qualitative, semi-structured interviews and Framework Analysis. Five in-patient wards including three male psychiatric intensive care units, one female acute ward and one male acute ward in three UK Mental Health NHS Trusts. 20 ward-based clinical staff. Individual semi-structured interviews were digitally recorded, transcribed verbatim and analysed using a qualitative data analysis software package. Participants described 14 techniques used in response to escalated aggression applied on a continuum between support and control. Techniques along the support-control continuum could be classified in three groups: 'support' (e.g. problem-solving, distraction, reassurance), 'non-physical control' (e.g. reprimands, deterrents, instruction) and 'physical control' (e.g. physical restraint and seclusion). Charting the reasoning staff provided for technique selection against the described behavioural outcome enabled a preliminary understanding of staff, patient and environmental influences on de-escalation success or failure. Importantly, the more coercive 'non-physical control' techniques are currently conceptualised by staff as a feature of de-escalation techniques, yet there was evidence of a link between these and increased aggression/use of restrictive practices. Risk was not a consistent factor in decisions to adopt more controlling techniques.
Moral judgements regarding the function of the aggression; trial-and-error; ingrained local custom (especially around instruction to low stimulus areas); knowledge of the patient; time-efficiency and staff anxiety had a key role in escalating intervention. This paper provides a new model for understanding staff intervention in response to escalated aggression, a continuum between support and control. It further provides a preliminary explanatory framework for understanding the relationship between patient behaviour, staff response and environmental influences on de-escalation success and failure. This framework reveals potentially important behaviour change targets for interventions seeking to reduce violence and use of restrictive practices through enhanced de-escalation techniques. Copyright © 2017 Elsevier Ltd. All rights reserved.
Model predictive and reallocation problem for CubeSat fault recovery and attitude control
NASA Astrophysics Data System (ADS)
Franchi, Loris; Feruglio, Lorenzo; Mozzillo, Raffaele; Corpino, Sabrina
2018-01-01
In recent years, thanks to increased know-how in machine-learning techniques and advances in the computational capabilities of on-board processing, computationally expensive algorithms such as Model Predictive Control have begun to spread in space applications, even on small on-board processors. The paper presents an algorithm for optimal fault recovery of a 3U CubeSat, developed in the MathWorks Matlab & Simulink environment. This algorithm involves optimization techniques aimed at obtaining the optimal recovery solution, and a Model Predictive Control approach for attitude control. The simulated system is a CubeSat in Low Earth Orbit: the attitude control is performed with three magnetic torquers and a single reaction wheel. The simulation neglects errors in the attitude determination of the satellite and focuses on the recovery approach and control method. The optimal recovery approach takes advantage of the properties of magnetic actuation, which allows the redistribution of the control action when a fault occurs on a single magnetic torquer, even in the absence of redundant actuators. In addition, the paper presents the results of the implementation of the Model Predictive approach to control the attitude of the satellite.
Finite Energy and Bounded Attacks on Control System Sensor Signals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Djouadi, Seddik M; Melin, Alexander M; Ferragut, Erik M
Control system networks are increasingly being connected to enterprise-level networks. These connections leave critical industrial control systems vulnerable to cyber-attacks. Most of the effort in protecting these cyber-physical systems (CPS) has gone into securing the networks using information security techniques and into addressing protection and reliability concerns at the control system level against random hardware and software failures. However, besides these failures, the inability of information security techniques to protect against all intrusions means that the control system must be resilient to various signal attacks, for which new analysis and detection methods need to be developed. In this paper, sensor signal attacks are analyzed for observer-based controlled systems. The threat surface for sensor signal attacks is subdivided into denial of service, finite energy, and bounded attacks. In particular, the error signals between the states of attack-free systems and systems subject to these attacks are quantified. Optimal sensor and actuator signal attacks for the finite and infinite horizon linear quadratic (LQ) control, in terms of maximizing the corresponding cost functions, are computed. The closed-loop system under optimal signal attacks is provided. Illustrative numerical examples are provided together with an application to a power network with distributed LQ controllers.
Adaptive Wavelet Coding Applied in a Wireless Control System.
Gama, Felipe O S; Silveira, Luiz F Q; Salazar, Andrés O
2017-12-13
Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications because of its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose code rate and signal constellation parameters can vary according to the fading level, and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by a wireless link. These results enable the use of this technique in a wireless control loop.
Koch, Christian
2010-05-01
A technique for the calibration of photodiodes in ultrasonic measurement systems using standard and cost-effective optical and electronic components is presented. A heterodyne system was realized using two commercially available distributed feedback lasers, and the required frequency stability and resolution were ensured by a difference-frequency servo control scheme. The frequency-sensitive element generating the error signal for the servo loop comprised a delay-line discriminator constructed from electronic elements. Measurements were carried out at up to 450 MHz, and the uncertainties of about 5% (k = 2) can be further reduced by improved radio frequency power measurement without losing the feature of using only simple elements. The technique initially dedicated to the determination of the frequency response of photodetectors applied in ultrasonic applications can be transferred to other application fields of optical measurements.
Maricoto, Tiago; Madanelo, Sofia; Rodrigues, Luís; Teixeira, Gilberto; Valente, Carla; Andrade, Lília; Saraiva, Alcina
2016-01-01
To assess the impact that educational interventions to improve inhaler techniques have on the clinical and functional control of asthma and COPD, we evaluated 44 participants before and after such an intervention. There was a significant decrease in the number of errors, and 20 patients (46%) significantly improved their technique regarding prior exhalation and breath hold. In the asthma group, there were significant improvements in the mean FEV1, FVC, and PEF (of 6.4%, 8.6%, and 8.3%, respectively). Those improvements were accompanied by improvements in Control of Allergic Rhinitis and Asthma Test scores but not in Asthma Control Test scores. In the COPD group, there were no significant variations. In asthma patients, educational interventions appear to improve inhaler technique, clinical control, and functional control.
Linearizing feedforward/feedback attitude control
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Bach, Ralph E.
1991-01-01
An approach to attitude control theory is introduced in which a linear form is postulated for the closed-loop rotation error dynamics, then the exact control law required to realize it is derived. The nonminimal (four-component) quaternion form is used for attitude because it is globally nonsingular, but the minimal (three-component) quaternion form is used for the attitude error because it has no nonlinear constraints to prevent the rotational error dynamics from being linearized; the definition of the attitude error is based on quaternion algebra. This approach produces an attitude control law that linearizes the closed-loop rotational error dynamics exactly, without any attitude singularities, even if the control errors become large.
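The quaternion-algebra error definition can be sketched as follows: the four-component quaternion represents the attitude globally, while the three-component vector part of the error quaternion serves as the locally linear control error. The scalar-first convention and the double-cover sign resolution below are assumptions for illustration:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_conj(q):
    """Conjugate (= inverse for a unit quaternion)."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def attitude_error(q_ref, q):
    """Three-component error: vector part of q_ref^-1 * q."""
    q_err = quat_mul(quat_conj(q_ref), q)
    return q_err[1:] * np.sign(q_err[0])  # resolve quaternion double cover

# 10-degree rotation about z relative to the reference attitude.
ang = np.deg2rad(10.0)
q_ref = np.array([1.0, 0.0, 0.0, 0.0])
q = np.array([np.cos(ang / 2), 0.0, 0.0, np.sin(ang / 2)])
err = attitude_error(q_ref, q)
```

For small rotations the vector part is approximately half the rotation angle about each axis, so it behaves like a linear error signal, which is what makes the exact linearization of the closed-loop error dynamics possible.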
Real-time auto-adaptive margin generation for MLC-tracked radiotherapy
NASA Astrophysics Data System (ADS)
Glitzner, M.; Fast, M. F.; de Senneville, B. Denis; Nill, S.; Oelfke, U.; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.
2017-01-01
In radiotherapy, abdominal and thoracic sites are candidates for performing motion tracking. With real-time control it is possible to adjust the multileaf collimator (MLC) position to the target position. However, positions are not perfectly matched, and position errors arise from system delays and the complicated response of the electromechanical MLC system. Although it is possible to compensate parts of these errors by using predictors, residual errors remain and need to be compensated to retain target coverage. This work presents a method to statistically describe tracking errors and to automatically derive a patient-specific, per-segment margin to compensate the arising underdosage on-line, i.e. during plan delivery. The statistics of the geometric error between intended and actual machine position are derived using kernel density estimators. Subsequently a margin is calculated on-line according to a selected coverage parameter, which determines the amount of accepted underdosage. The margin is then applied to the actual segment to accommodate the positioning errors in the enlarged segment. The proof-of-concept was tested in an on-line tracking experiment and showed the ability to recover underdosages for two test cases, increasing V90% in the underdosed area by about 47% and 41%, respectively. The dose model used was able to predict the loss of dose due to tracking errors and could be used to infer the necessary margins. The implementation had a running time of 23 ms, which is compatible with the real-time requirements of MLC tracking systems. The auto-adaptivity to machine and patient characteristics makes the technique a generic yet intuitive candidate for avoiding underdosages due to MLC tracking errors.
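The kernel-density margin derivation can be sketched as follows: model the tracking-error distribution with a Gaussian kernel density estimate and pick the margin as the smallest value covering a chosen fraction of the error mass. The Gaussian kernel, the bandwidth and the coverage value are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np
from math import erf

verf = np.vectorize(erf)  # elementwise error function

def adaptive_margin(errors, coverage=0.95, bandwidth=0.3):
    """Smallest margin m with P(|error| <= m) >= coverage under a Gaussian KDE."""
    errors = np.abs(np.asarray(errors, dtype=float))
    grid = np.linspace(0.0, errors.max() + 4 * bandwidth, 1000)
    s = bandwidth * np.sqrt(2.0)
    # KDE-smoothed CDF of |error|: average of per-sample Gaussian CDFs.
    cdf = np.zeros_like(grid)
    for e in errors:
        cdf += 0.5 * (verf((grid - e) / s) - verf((-grid - e) / s))
    cdf /= errors.size
    return float(grid[np.searchsorted(cdf, coverage)])

# Hypothetical tracking errors (mm) between intended and actual MLC positions.
rng = np.random.default_rng(2)
m = adaptive_margin(rng.normal(0.0, 1.0, size=200), coverage=0.95)
```

For standard-normal errors the 95% margin lands near 2 mm; lowering `coverage` accepts more underdosage in exchange for a smaller segment enlargement, which is the trade-off the coverage parameter controls.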
NASA Technical Reports Server (NTRS)
Moes, Timothy R.; Iliff, Kenneth
2002-01-01
A maximum-likelihood output-error parameter estimation technique is used to obtain stability and control derivatives for the NASA Dryden Flight Research Center SR-71A airplane and for configurations that include experiments externally mounted to the top of the fuselage. This research is being done as part of the envelope clearance for the new experiment configurations. Flight data are obtained at speeds ranging from Mach 0.4 to Mach 3.0, with an extensive amount of test points at approximately Mach 1.0. Pilot-input pitch and yaw-roll doublets are used to obtain the data. This report defines the parameter estimation technique used, presents stability and control derivative results, and compares the derivatives for the three configurations tested. The experimental configurations studied generally show acceptable stability, control, trim, and handling qualities throughout the Mach regimes tested. The reduction of directional stability for the experimental configurations is the most significant aerodynamic effect measured and identified as a design constraint for future experimental configurations. This report also shows the significant effects of aircraft flexibility on the stability and control derivatives.
Lateral control system design for VTOL landing on a DD963 in high sea states. M.S. Thesis
NASA Technical Reports Server (NTRS)
Bodson, M.
1982-01-01
The problem of designing lateral control systems for the safe landing of VTOL aircraft on small ships is addressed. A ship model is derived. The issues of estimation and prediction of ship motions are discussed, using optimal linear estimation techniques. The roll motion is the most important of the lateral motions, and it is found that it can be predicted for up to 10 seconds in perfect conditions. The automatic landing of the VTOL aircraft is considered, and a lateral controller, defined as a ship motion tracker, is designed using optimal control techniques. The tradeoffs between the tracking errors and the control authority are obtained. The important couplings between the lateral motions and controls are demonstrated, and it is shown that the adverse couplings between the sway and the roll motion at the landing pad are significant constraints in the tracking of the lateral ship motions. The robustness of the control system, including the optimal estimator, is studied using singular value analysis. Through a robustification procedure, a robust control system is obtained, and the usefulness of the singular values for defining stability margins that take into account general types of unstructured modelling errors is demonstrated. The minimal destabilizing perturbations indicated by the singular value analysis are interpreted and related to the multivariable Nyquist diagrams.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1994-01-01
The unequal error protection capabilities of convolutional and trellis codes are studied. In certain environments, a discrepancy in the amount of error protection placed on different information bits is desirable. Examples of environments which have data of varying importance are a number of speech coding algorithms, packet-switched networks, multi-user systems, embedded coding systems, and high-definition television. Encoders which provide more than one level of error protection to information bits are called unequal error protection (UEP) codes. In this work, the effective free distance vector, d, is defined as an alternative to the free distance as a primary performance parameter for UEP convolutional and trellis encoders. For a given (n, k) convolutional encoder G, the effective free distance vector is defined as the k-dimensional vector d = (d_0, d_1, ..., d_(k-1)), where d_j, the jth effective free distance, is the lowest Hamming weight among all code sequences that are generated by input sequences with at least one '1' in the jth position. It is shown that, although the free distance for a code is unique to the code and independent of the encoder realization, the effective free distance vector depends on the encoder realization.
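The distance computation underlying the effective free distance vector can be sketched for the k = 1 case, where d_0 reduces to the ordinary free distance: enumerate short nonzero input sequences, encode each, and take the minimum codeword Hamming weight. The rate-1/2 (7, 5) encoder and the naive brute-force bound below are illustrative choices, not from the report:

```python
from itertools import product

G = [0b111, 0b101]  # generator polynomials 7 and 5 (octal), constraint length 3

def encode(bits):
    """Rate-1/2 convolutional encode with zero termination (2 flush bits)."""
    state, out = 0, []
    for b in list(bits) + [0, 0]:
        state = ((state << 1) | b) & 0b111          # shift in the new bit
        out += [bin(state & g).count('1') & 1 for g in G]  # tap parities
    return out

def free_distance(max_len=8):
    """Minimum Hamming weight over nonzero inputs up to max_len bits."""
    best = None
    for length in range(1, max_len + 1):
        for bits in product([0, 1], repeat=length):
            if bits[0] == 0:  # canonical form: first bit is a '1'
                continue
            w = sum(encode(bits))
            best = w if best is None else min(best, w)
    return best

d_free = free_distance()
```

For k > 1 the same search would restrict the enumeration to inputs with a '1' in stream j to obtain d_j; a real search would also bound the sequence length more carefully than this brute force does.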
Surgical Options for the Refractive Correction of Keratoconus: Myth or Reality
Zaldivar, R.; Aiello, F.; Madrid-Costa, D.
2017-01-01
Keratoconus decreases the quality of life of the patients who suffer from it. The treatment used, as well as the method to correct the refractive error of these patients, may influence the impact of the disease on their quality of life. The purpose of this review is to describe the evidence about conservative surgical treatment for keratoconus aimed at both therapeutic and refractive effects. The visual rehabilitation of keratoconic corneas requires addressing three concerns: halting the ectatic process, improving corneal shape, and minimizing the residual refractive error. Cross-linking can halt the disease progression, intrastromal corneal ring segments can improve the corneal shape and hence the visual quality and reduce the refractive error, PRK can correct mild to moderate refractive error, and intraocular lenses can correct from low to high refractive error associated with keratoconus. Any of these surgical options can be performed alone or combined with the other techniques, depending on what the case requires. Although it could be considered that the surgical option for the refracto-therapeutic treatment of keratoconus is a reality, controlled, randomized studies with larger cohorts and longer follow-up periods are needed to determine which refractive procedure and/or sequence are most suitable for each case. PMID:29403662
Optimization of the structural and control system for LSS with reduced-order model
NASA Technical Reports Server (NTRS)
Khot, N. S.
1989-01-01
The objective is the simultaneous design of the structural and control systems for space structures. The minimum weight of the structure is the objective function, and the constraints are placed on the closed-loop distribution of the frequencies and the damping parameters. The control approach used is a linear quadratic regulator with constant feedback. A reduced-order control system is used. The effect of uncontrolled modes is taken into consideration by the model error sensitivity suppression (MESS) technique, which modifies the weighting parameters for the control forces. For illustration, an ACOSS-FOUR structure is designed for different numbers of controlled modes with specified values for the closed-loop damping parameters and frequencies. The dynamic response of the optimum designs to an initial disturbance is compared.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogson, EM; Liverpool and Macarthur Cancer Therapy Centres, Liverpool, NSW; Ingham Institute for Applied Medical Research, Sydney, NSW
Purpose: To identify the robustness of different treatment techniques with respect to simulated linac errors in the dose distribution to the target volume and organs at risk for step-and-shoot IMRT (ssIMRT), VMAT and Autoplan-generated VMAT nasopharynx plans. Methods: A nasopharynx patient dataset was retrospectively replanned with three different techniques: 7-beam ssIMRT, one-arc manually generated VMAT and one-arc automatically generated VMAT. Simulated treatment uncertainties (gantry angle, collimator angle, MLC field size and MLC shifts) were introduced into these plans at increments of 5, 2, 1, −1, −2 and −5 (degrees or mm) and recalculated in Pinnacle. The mean and maximum doses were calculated for the high-dose PTV, parotids, brainstem, and spinal cord and then compared to the original baseline plan. Results: Simulated gantry angle errors have a <1% effect on the PTV; ssIMRT is most sensitive. The small collimator errors (±1 and ±2 degrees) impacted the mean PTV dose by <2% for all techniques; however, for the ±5 degree errors the mean target dose varied by up to 7% for the Autoplan VMAT, and the maximum dose to the spinal cord and brainstem varied by up to 10% for all techniques. The simulated MLC shifts introduced the largest errors for the Autoplan VMAT, with the larger MLC modulation presumably being the cause. The most critical error observed was the MLC field size error, where even small errors of 1 mm caused significant changes to both the PTV and the OARs. The ssIMRT is the least sensitive and the Autoplan the most sensitive, with target over- and under-dosages of up to 20% observed. Conclusion: For a nasopharynx patient, plan robustness is highest for the ssIMRT plan and lowest for the Autoplan-generated VMAT plan. This could be caused by the more complex MLC modulation seen for the VMAT plans. This project is supported by a grant from NSW Cancer Council.
Sigma Metrics Across the Total Testing Process.
Charuruks, Navapun
2017-03-01
Laboratory quality control has been developed over several decades to ensure patient safety, moving from a statistical quality control focus on the analytical phase to the total testing process. The sigma concept provides a convenient way to quantify the number of errors in the extra-analytical and analytical phases through the defects-per-million count and the sigma metric equation. Participation in a sigma verification program can be a convenient way to monitor analytical performance for continuous quality improvement. Improvement of sigma-scale performance has been shown from our data. New tools and techniques for integration are needed. Copyright © 2016 Elsevier Inc. All rights reserved.
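The sigma metric equation mentioned above is commonly written sigma = (TEa − |bias|) / CV, with all three terms in percent. The sketch below uses illustrative analyte numbers, not data from the paper:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma metric: allowable total error minus bias, in units of the assay's CV.

    tea_pct  - allowable total error (%)
    bias_pct - systematic error of the method (%)
    cv_pct   - imprecision, coefficient of variation (%)
    """
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical example: an assay with a 10% allowable total error,
# 1% bias and 1.5% CV performs at six sigma.
sigma = sigma_metric(tea_pct=10.0, bias_pct=1.0, cv_pct=1.5)
```

A sigma of 6 corresponds to roughly 3.4 defects per million on the standard short-term-shifted sigma scale, which is how the defects-per-million count and the sigma metric connect.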